It's been a few years (20+) since I had to do this, but I seem to remember the best way to achieve this was to use the standard Bresenham algorithm, but to set more than one pixel at each point. Basically, for a three-pixel-wide line, set the 8 pixels around each point as well as the point itself... Some optimisation is possible.
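A rough sketch of that idea (the names are mine, not from any library): run standard Bresenham and stamp the 3x3 neighbourhood around every point it visits. Collecting pixels in a set means the overlapping stamps cost nothing extra.

```cpp
#include <cstdlib>
#include <set>
#include <utility>

// Standard Bresenham, widened to three pixels by stamping the 3x3
// block around every point on the ideal one-pixel line.
void ThickLine(int x0, int y0, int x1, int y1,
               std::set<std::pair<int, int>>& pixels)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        for (int ox = -1; ox <= 1; ++ox)        // stamp the 3x3 block
            for (int oy = -1; oy <= 1; ++oy)
                pixels.insert({x0 + ox, y0 + oy});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

The obvious optimisation is to stamp only the pixels perpendicular to the line's dominant direction instead of the full 3x3 block, which avoids re-setting pixels the previous step already covered.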
I have an application that displays large (2500x3000) 12-bit grayscale images within an image control. I am using the Gray16 pixel format and scaling the data to 16 bits by simple bit-shifting. The images have low contrast, so I implemented functionality to adjust brightness and contrast, based on setting the image's Effect to a pixel shader that accepts brightness and contrast parameters. Obviously, this implementation is very efficient and avoids moving large amounts of data around in memory.
When adjusting the contrast to a high setting, the displayed image exhibits unacceptable contouring, as though the bit-depth of the image has been reduced to 5 or 6. As a baseline test, I implemented the same functionality without a pixel shader - I applied the contrast and brightness settings to the image data, reduced the bit-depth to 8 bits, and displayed the image with the Gray8 pixel format (one that I have a great deal of familiarity with). The latter implementation showed little to no contouring.
Does anyone know why applying a pixel shader would reduce the apparent bit-depth of an image? Does the pixel shader HLSL code work on reduced bit-depth data, e.g. eight bits or fewer, and is that a function of the hardware/drivers or the OS?
All data in the pixel shader are floats scaled from 0 to 1, so you have no way of knowing the quantization level (at least to my knowledge).
Hmm. My shader knowledge is limited, but if anything I thought it'd be higher (I seem to recall some modern cards being able to do 64-bit floats).
You possibly have to turn it on (or off), since working with fixed point is inherently faster. For this application I sense that speed is not your issue.
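The contouring described above is what you would expect if the shader's float output were being written to an 8-bit-per-channel surface: the HLSL arithmetic itself runs in float from 0 to 1, but the render target quantizes the result on write-out. A rough simulation of that hypothesis (plain C++, not WPF; the window values and function name are made up for illustration):

```cpp
#include <cmath>
#include <set>

// The shader math runs in float (0..1), but if the composition surface
// stores only N bits per channel, the result is quantized on write-out.
// Count how many distinct output levels survive for a 12-bit input
// window [lo, hi] after a contrast stretch to full range.
int DistinctLevels(int lo, int hi, int outputBits)
{
    const double gain = 4095.0 / (hi - lo);        // contrast stretch
    const int levels = (1 << outputBits) - 1;
    std::set<int> seen;
    for (int v = lo; v <= hi; ++v) {
        double f = ((v - lo) * gain) / 4095.0;      // shader-style float 0..1
        if (f < 0) f = 0;
        if (f > 1) f = 1;
        seen.insert(static_cast<int>(std::lround(f * levels)));
    }
    return static_cast<int>(seen.size());
}
```

With an 8-bit target, no contrast setting can ever show more than 256 steps, whatever the input bit-depth; a 16-bit target preserves every 12-bit level, which would match the shader-free baseline showing no contouring.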
Thank you! I did read through that, and I noted that it was a container in a container housing another container. I was trying to avoid adding one more container to the list if possible, but if you think that's a good route, then I'm buying it!
I've got a working DataGridView housed in a ToolStrip through a host. It's resizable and rounded, and those pesky little rectangle corners show up if the background is changed... yuk!
Is it possible to draw a hexagon with children and paint it with a LinearGradientBrush? If it is possible, can somebody help me with a sample? An example is the type used in the Microsoft Office 2007 color dialog box.
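Whatever the language, the hexagon outline itself is just six points on a circle, 60 degrees apart; in WinForms you would then hand those points to Graphics.FillPolygon with your LinearGradientBrush. A sketch of the vertex math (C++ here, but it translates directly to VB.NET or C#):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// A regular hexagon is six points on a circle of radius r centred at
// (cx, cy), spaced 60 degrees apart. These are the polygon vertices
// you would pass to a fill call along with a gradient brush.
std::vector<Pt> HexagonPoints(double cx, double cy, double r)
{
    const double kPi = 3.14159265358979323846;
    std::vector<Pt> pts;
    for (int i = 0; i < 6; ++i) {
        double a = i * kPi / 3.0;   // 0, 60, 120, ... degrees in radians
        pts.push_back({cx + r * std::cos(a), cy + r * std::sin(a)});
    }
    return pts;
}
```

Add 30 degrees to the angle if you want a flat-topped hexagon instead of a pointy-sided one.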
I googled it as you directed and found a good one, but it is written in C++ and I am not familiar with C++. This is the link (www.codeproject.com/Articles/24970/XColorDialog-MFC-color-picker-control-that-disp). Can you help with converting it to VB.NET or C#? Or can you build the DLL in the same language?
I am aware of that, but remember that two good heads are better than one. Well, your advice is very OK. You learn better when you do things yourself, but other people's ideas can pave your way. Your contribution on how to do it is also important.
Hi, I'm trying to render the contents of a window, with desktop composition disabled, to an off-screen buffer. I have no control over the window, and the only things I have are the window handle and the DC. I thought about allocating an off-screen DC and doing something like:
/* The window handle is stored in the
hWnd variable and the offscreen DC,
is in hDC */
PostMessage(hWnd, WM_PAINT, 0, 0);
But it doesn't seem to work. Please note that I don't want the application to render on the primary display DC. In other words, I want to redirect its graphical output and blit the result in my window.
I'm trying to mix device contexts and OpenGL, but not the usual way. I don't want to draw with GDI on top of an OpenGL scene. What I want to do is apply an HDC bitmap as a texture on an OpenGL polygon. I know that this is generally a bad idea, but if you know anything about how that is possible, you would be very helpful. I'm coding in C/C++ on Win32. A DirectX 8/9 solution would also be appreciated.
Finally found the solution myself. If anyone else is interested in that, the process is simple. You create a bitmap from the DC (there are many websites that describe this), and then you apply that BITMAP object as a texture, the way you'd apply any other bitmap texture.
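One gotcha worth flagging in that process: the pixel rows of a DIB returned by GetDIBits are padded to 4-byte (DWORD) boundaries, and that stride has to agree with the unpack alignment you tell OpenGL about (glPixelStorei with GL_UNPACK_ALIGNMENT), or you repack the rows yourself. A small helper for the padded stride, assuming an uncompressed DIB:

```cpp
// GetDIBits returns rows padded to 4-byte (DWORD) boundaries. When
// handing the buffer to glTexImage2D, either match that padding with
// glPixelStorei(GL_UNPACK_ALIGNMENT, 4) or repack rows tightly.
// This computes the padded bytes-per-row for a given width and depth.
int DibStride(int widthPixels, int bitsPerPixel)
{
    return ((widthPixels * bitsPerPixel + 31) / 32) * 4;
}
```

For example, a 3-pixel-wide 24-bpp bitmap occupies 9 bytes of pixel data per row but is stored with a 12-byte stride.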
Hello, my name is Emile. My best programming language is VB2010. I love VB and I know all the code by heart; I can program in C# just as well as VB. I know that C# is the better of the two languages, but I want to do it in VB. I have made a 3D cube using DirectX 10 and Visual Studio 2010, but I can't add two or more cubes. I need step-by-step help re-making the cube and adding more than one, to build a box to walk in (and to make a camera that moves like a first-person game). Help!
If you really want help, you will reduce this to the relevant snippet of code where your problem is, and then ask a clear, coherent question with a description of your problem. No one is going to go through this mess to try to figure out what it is that you want to do.
Why is common sense not common?
Never argue with an idiot. They will drag you down to their level where they are an expert.
Sometimes it takes a lot of work to be lazy
Please stand in front of my pistol, smile and wait for the flash - JSOP 2012
In school I hadn't heard anything about vectors and matrices yet, so I figured out the whole transformation and projection stuff with regular trigonometry. What a triumph when I had my first tetrahedron choppily rotating on the screen!
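The trig-only approach still works as a sketch: rotate each vertex with sin/cos and project it to 2D with a simple perspective divide. Roughly (hypothetical helper, C++):

```cpp
#include <cmath>

// Rotate a 3D point around the Y axis by 'angle' radians using plain
// sin/cos, then project to screen space with a perspective divide.
// This is all a spinning tetrahedron needs: no matrices involved.
void RotateYAndProject(double x, double y, double z, double angle,
                       double viewerDist, double& sx, double& sy)
{
    double rx =  x * std::cos(angle) + z * std::sin(angle);
    double rz = -x * std::sin(angle) + z * std::cos(angle);
    double scale = viewerDist / (viewerDist + rz);   // perspective divide
    sx = rx * scale;
    sy = y * scale;
}
```

Advance the angle a little each frame, redraw the edges between the projected vertices, and you have the choppy rotation.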
What I want to do is make an animation of a female character that appears on SCREEN and does everything it is supposed to do during the animation.
The MAIN problem is that I have no idea how to do this on the screen, without any borders of an audio/video player. The closest EXAMPLE is VHGD, aka the VirtuaGirl player: girls walk on your screen without any player border (my version is NOT porn; it is rather a gift).
The animation will be done by myself; all I ask is how to make it look like VHGD.
How can I do this using C# or any 3rd-party libraries, etc.?
The first thing is - thank you for the reply. I'll look into that right now (well, I'm working my 16th hour now).
If I get it right, I just need to use some sort of player to play a usual AVI/H.264 file with the animation... (of course, using the technique you've provided).
Another question... this animation is an "Advisor" - actually a copy of a real user that performs actions like finding a document, searching the database, editing tables in the database, and even dancing a bit. Is there any way to manipulate the AVI file to trigger events according to gestures?
For example, the advisor raises its hand in the air and Windows creates several buttons near the advisor's fingers.
You would need some sort of image recognition software or collision detection to trigger an event when part of the body hits a certain point on the screen. Sorry, but that is well beyond my knowledge and experience level.
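If the animation is scripted rather than live, full image recognition may be overkill: since you authored the animation, you know at which frames the hand is raised and where, so you can hard-code "hot" rectangles and hit-test a tracked point against them. A minimal sketch, assuming the coordinates are known in advance:

```cpp
// Define hot rectangles in the animation's coordinate space (e.g.
// where the hand ends up at a known frame) and fire an event when a
// tracked point enters one. No recognition needed for scripted motion.
struct Rect { int left, top, right, bottom; };

bool HitTest(const Rect& r, int x, int y)
{
    return x >= r.left && x < r.right &&
           y >= r.top && y < r.bottom;
}
```

In practice you would pair each rectangle with a frame range, and only test while the animation is inside that range.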
Thank you anyway; image recognition can be done using the AForge library and neural networks... For some weird reason I'm occupied with a program for image recognition (motion detection). Thanks a lot for the pointer to the obvious thing!