Hi guys,

I am using Visual Studio 2010 and Shazzam Editor for a C#/WPF application.
I would like to map colors to an image made of floats using a pixel shader, but I am stuck.

For example, let's say I want to map a blue-red gradient to that image (the min/max will be set by the user).

In pixel shaders, processing RGB images is easy. However, my image is not made of RGB values: I must pass the original float values. The only way I found to pass floats is to use PixelFormats.Gray32Float. One problem with this format is that the values must fit within the range 0-1.

So the furthest I could get is:
In C# code:
- Normalize the image so that its values fit within the range 0-1
- Create a WriteableBitmap with PixelFormats.Gray32Float and store the normalized values in it.
- Pass this bitmap to the shader.
- Pass the original min/max values to the shader.
- Pass the user min/max to the shader.
//original image
int width = 2000;
int height = 2000;
float[] image = new float[width * height];
//just for testing (original values will be in the range 0..1999)
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++)
        image[y * width + x] = (float)x;
float min = image.Min();
float max = image.Max();
//build the normalized bitmap image
float[] normalizedImage = image.Select(x => (x - min) / (max - min)).ToArray();
WriteableBitmap bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Gray32Float, null);
bitmap.WritePixels(new Int32Rect(0, 0, width, height), normalizedImage, width * sizeof(float), 0);
//send parameters to the shader
shaderLut.InputMin = min;
shaderLut.InputMax = max;
//values below LutMin will be blue
shaderLut.LutMin = 200;
//values above LutMax will be red
shaderLut.LutMax = 1500;
shaderLut.Input = new ImageBrush(bitmap);

In the shader code:
- get the normalized value as a gray level
- convert it back to its original value
- interpolate between the min/max provided by the user
//normalized image source
sampler2D Input : register(s0);
//original min
float InputMin : register(C0);
//original max
float InputMax : register(C1);
//user provided min
float LutMin : register(C2);
//user provided max
float LutMax : register(C3);
float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
  //get the normalized value for this pixel (r, g and b hold the same value)
  float normalizedValue = tex2D(Input, locationInSource.xy).r;
  //gamma correction (Gray32Float is stored with gamma 1.0; without this the colors are not mapped properly)
  normalizedValue = pow(normalizedValue, 2.2);
  //recover the original value
  float originalValue = lerp(InputMin, InputMax, normalizedValue);
  //map the value to its color
  float scale = (originalValue - LutMin) / (LutMax - LutMin);
  //clamp so values outside the LUT window pin to the end colors
  scale = saturate(scale);
  float3 blue = { 0, 0, 1 };
  float3 red = { 1, 0, 0 };
  float3 color = lerp(blue, red, scale);
  return float4(color, 1);
}
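To check that the math above round-trips correctly at full float precision (i.e. that the problem is not in the formulas themselves), here is a small standalone sketch of the same pipeline. It is Python rather than C#/HLSL, the gamma step is skipped because no Gray32Float bitmap is involved, and the clamping mirrors the saturate in the shader:

```python
# Standalone sketch of the C# normalization + shader arithmetic,
# at full float precision (no Gray32Float bitmap, hence no gamma step).

def shade(value, input_min, input_max, lut_min, lut_max):
    # C# side: normalize the pixel into 0..1
    normalized = (value - input_min) / (input_max - input_min)
    # shader side: recover the original value (lerp)
    original = input_min + (input_max - input_min) * normalized
    # position inside the user's LUT window, clamped (saturate)
    scale = min(max((original - lut_min) / (lut_max - lut_min), 0.0), 1.0)
    blue, red = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
    # lerp between blue and red
    return tuple(b + (r - b) * scale for b, r in zip(blue, red))

# values at or below LutMin come out blue, at or above LutMax red
print(shade(100, 0, 1999, 1000, 1500))   # (0.0, 0.0, 1.0)
print(shade(1999, 0, 1999, 1000, 1500))  # (1.0, 0.0, 0.0)
```

With full precision the mapping behaves exactly as intended, which points at the bitmap transfer as the lossy step.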

All this works almost perfectly. The issue appears when LutMax - LutMin is very small compared to InputMax - InputMin: in that case the gradient quality degrades badly (in the example above, set LutMin to 1000 and LutMax to 1100 to see it).
I suppose this is because the float value read in the pixel shader is less precise than the float value written to the WriteableBitmap (I probably can't get more than 256 distinct values?).
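That 256-level hypothesis is easy to test numerically: if the sampler only delivers 8-bit values, a 100-wide LUT window out of a 0-1999 input range can only hit about 100 / 1999 * 256 ≈ 13 distinct gradient positions. A quick sketch (Python; the straight 8-bit quantization is my assumption, and gamma is ignored):

```python
# Count distinct gradient positions inside the LUT window when the
# normalized value is quantized to 8 bits (assumption: the sampler
# effectively delivers only 256 levels).

def distinct_levels(input_min, input_max, lut_min, lut_max, bits=8):
    steps = (1 << bits) - 1
    levels = set()
    for q in range(steps + 1):
        normalized = q / steps          # quantized normalized value
        original = input_min + (input_max - input_min) * normalized
        scale = (original - lut_min) / (lut_max - lut_min)
        if 0.0 <= scale <= 1.0:         # only values inside the window
            levels.add(round(scale, 6))
    return len(levels)

# wide window: smooth gradient; narrow window: visible banding
print(distinct_levels(0, 1999, 200, 1500))   # 166
print(distinct_levels(0, 1999, 1000, 1100))  # 13
```

Thirteen bands across the whole gradient matches the degradation I see, so the question below is really about getting more than 8 bits through to the shader.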

So does anyone know how to pass the original float values to the shader without losing precision?
Posted 17-Jan-13 23:26pm

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
