Fair enough. But if you do receive this "event", the application responds to it in some fashion. In the case of a smartphone, it might smoothly scroll the page. That's the level I'm working at right now. I'm curious how it's generally implemented... the smooth scrolling, or page animation if you prefer.
You're going to tell me what I want to know, or I'm going to beat you to death in your own house.
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Ah, I see what you're getting at. Well, it certainly depends on the development environment you are using. I use WPF, which provides built-in animation support - I would simply hook into that. If you're using Win32, you're in for a lot more work, because you'll be working directly with GDI+.
I was brought up to respect my elders. I don't respect many people nowadays.
I need to draw a thick line using only the SetPixel API. I have searched a lot but can't find a suitable open-source algorithm (Bresenham's algorithm draws a one-pixel line, Murphy's algorithm is for lines more than 3 pixels thick, ...).
Please help me if you can.
Note that I cannot use CPen or the other Windows facilities; I have to use SetPixel.
Excuse my bad English.
It's been a few years (20+) since I had to do this, but I seem to remember the best way to achieve this was to use the standard Bresenham algorithm, but to set more than one pixel at each point. Basically, for a three-pixel-wide line, set the 8 pixels around each point as well as the point itself... Some optimisation is possible.
I have an application that displays large (2500×3000) 12-bit grayscale images within an image control. I am using the Gray16 pixel format and I am scaling the data to 16 bits by simple bit-shifting. The images have low contrast, so I implemented functionality to adjust brightness and contrast. The functionality was based on setting the image's Effect to a pixel shader that accepts brightness and contrast parameters. Obviously, this implementation is very efficient and avoids moving large amounts of data around in memory.
When adjusting the contrast to a high setting, the displayed image exhibits unacceptable contouring, as though the bit-depth of the image has been reduced to 5 or 6. As a baseline test, I implemented the same functionality without a pixel shader - I applied the contrast and brightness settings to the image, reduced the bit-depth to 8 bits and displayed the image with the Gray8 pixel format (one that I have a great deal of familiarity with). The latter implementation showed little to no contouring.
Does anyone know why applying a pixel shader would reduce the apparent bit-depth of an image? Does the pixel shader's HLSL code operate on reduced bit-depth data (e.g., eight bits or fewer), perhaps as a function of the hardware/drivers or the OS?
All data in the pixel shader are floats scaled from 0 to 1, so you have no way of knowing the quantization level (at least to my knowledge).
Hmm. My shader knowledge is limited, but if anything I thought it'd be higher (I seem to recall some modern cards being able to do 64-bit floats).
You may have to turn it on (or off), since working with fixed point is inherently faster. For this application, I sense that speed is not your issue.
Thank you! I did read through that and I noted that it was a container in a container housing another container. I was trying to avoid adding one more container to the list if possible but if you think that's a good route, then I'm buying it!
I've got a working DataGridView housed in a ToolStrip through a host. It's resizable and rounded, and those pesky little rectangle corners show up if the background is changed... yuck!