I was discussing this subject with an old friend who has a degree in Audio and Video Technologies; although he doesn't program, he gave me an idea that so far holds up. What we have is a 24-bit RGB display (8 bits per channel), so we have only 8 bits (256 levels) of gray.
Any conversion of 16-bit gray to 8-bit gray destroys image information, so we need to preserve as much data as we can, while presenting it as well as we can without losing conversion speed. Since monitors interpret the signal through non-RGB systems, we should look through those systems (or only one of them; I need more information on this). What we actually need is to preserve as much data as we can in luminance or brightness (I am only guessing which component of such a system matters here).
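To make the loss concrete, here is a minimal sketch (truncation by bit shift is my assumption of the "naive" conversion; scaling by 255/65535 behaves similarly):

```python
def to8(v16):
    # Naive 16-bit -> 8-bit gray conversion by truncating the low byte.
    return v16 >> 8

# All 256 distinct 16-bit inputs in [256, 511] collapse onto the single
# 8-bit value 1, so 255 out of every 256 levels of information are lost.
collapsed = {to8(v) for v in range(256, 512)}
print(collapsed)  # a single surviving level
```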
Now, his idea is not only to use 24-bit RGB values for the primary gray levels, where all three components are equal, but also to offset individual RGB components by 1 (or 2 in the case of the blue component) to display secondary gray levels, exploiting the coefficients of the RGB->gray conversion algorithms.
RMY grayscale: Red: 0.5, Green: 0.419, Blue: 0.081. Blue has the lowest weight, so we would map the 1st secondary gray level to it, the 2nd to green, the 3rd to red, the 4th to blue+green, the 5th to blue+red, the 6th to green+red; together with the primary level this gives me 7 levels of gray (I could bump blue a second time to gain an 8th, which would give 8 sub-levels per primary level, i.e. 2048 distinct luma levels in total). Or a similar approach with the BT.709 grayscale coefficients: Red: 0.2125, Green: 0.7154, Blue: 0.0721.
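A quick sketch of the luma offsets these +1 bumps actually produce, using the BT.709 coefficients quoted above (the values are copied from the question, not looked up). Note that the ordering of the sub-levels described above follows the RMY weights; with BT.709 the ordering differs, because red weighs less than green, so the code sorts the combinations rather than assuming an order:

```python
# BT.709 luma coefficients as quoted in the question.
COEF = {"red": 0.2125, "green": 0.7154, "blue": 0.0721}

# All non-empty combinations of channels bumped by +1.
combos = [("blue",), ("red",), ("green",),
          ("blue", "red"), ("blue", "green"), ("red", "green"),
          ("red", "green", "blue")]

# Sort the sub-levels by their total luma contribution.
offsets = sorted((sum(COEF[c] for c in combo), "+".join(combo))
                 for combo in combos)

for luma, name in offsets:
    print(f"{name:16s} -> luma offset {luma:.4f}")
```

The bump of all three channels lands exactly on the next primary gray level (the coefficients sum to 1.0), so the usable secondary levels are the other six combinations.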
My friend suggests trying BT.709 before RMY. He also claims that the human eye is not good enough to notice these tiny differences between the color components, while we preserve much more of the luma of the 16-bit grayscale images.
Can anyone confirm or deny this?
Should I go down this road, given that I would need to build huge static LUTs for the conversion if I want speed?
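For reference, here is a hedged sketch of what such a static LUT might look like: 65536 entries mapping each 16-bit gray value to an (R, G, B) triple, choosing the channel bump whose luma offset best matches the fractional part being thrown away. The selection rule and the names are my assumptions, not part of the original idea:

```python
# BT.709 luma coefficients as quoted in the question.
R_C, G_C, B_C = 0.2125, 0.7154, 0.0721

# All 8 bump combinations (including none and all), paired with the
# luma offset each one contributes, sorted by that offset.
COMBOS = sorted(
    (dr * R_C + dg * G_C + db * B_C, (dr, dg, db))
    for dr in (0, 1) for dg in (0, 1) for db in (0, 1)
)

def build_lut():
    """Statically build the 16-bit-gray -> 24-bit-RGB lookup table."""
    lut = []
    for v in range(65536):
        base = v >> 8               # primary 8-bit gray level
        frac = (v & 0xFF) / 256.0   # fractional luma we would otherwise lose
        # Pick the bump whose luma offset is closest to that fraction.
        _, (dr, dg, db) = min(COMBOS, key=lambda c: abs(c[0] - frac))
        lut.append((min(base + dr, 255),
                    min(base + dg, 255),
                    min(base + db, 255)))
    return lut

lut = build_lut()
print(len(lut), lut[0], lut[65535])
```

At 3 bytes per entry the table is only ~192 KB, so "huge" is relative; per-pixel conversion then reduces to a single indexed load.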