Basically, a hamburger button's flyout contains a SplitView, whose pane is a ListView menu. When a menu item is chosen, the SplitView's content frame navigates to the associated page. Simple enough.
On a menu page, if I have a button with a flyout that contains a ColorPicker, there seems to be some sort of light-dismiss area overlap problem. I've tried various combinations of LightDismissOverlayMode settings on the SplitView and the button flyouts, to no avail.
Here's the hamburger button within the StackPanel of the main page:
That's a good theory, and it led me to the solution.
If you refer back to the picture linked in the OP, the problem smells of two transparencies overlapping each other, but even after removing all transparent colors from my code, the problem persisted.
So, for a final test I went into Windows Settings > Personalization > Colors and noticed "Transparency effects" was enabled. Disabling it solved the problem, which I suppose makes sense since my app inherits its theme from the Windows settings on my PC. Still, it seems odd that this behavior of the base Windows theme shows up only in connection with flyouts.
I'm guessing there's a way to override the base theme on a child flyout, but for now I'm just going to leave Transparency Effects disabled.
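For anyone who wants to experiment with that override, one avenue might be the `RequestedTheme` property that UWP exposes on `FrameworkElement`: setting it on the flyout's root element re-roots the theme for just that subtree instead of inheriting from the OS. This is an untested sketch (the `StackPanel` wrapper and `Dark` value are just illustrative choices), and I haven't verified whether it also bypasses the transparency effect:

```xml
<Button Content="Pick a color">
    <Button.Flyout>
        <Flyout>
            <!-- RequestedTheme on the flyout's root element overrides the
                 Light/Dark theme inherited from the Windows settings for
                 this subtree only. Whether this also suppresses the
                 transparency effect is untested. -->
            <StackPanel RequestedTheme="Dark">
                <ColorPicker />
            </StackPanel>
        </Flyout>
    </Button.Flyout>
</Button>
```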
This is driving me crazy: the first VAO, built without std::vector, gets drawn, but the second one, built with a vector, does not:
Vertices[0] = Vector3f(-1.0f, -1.0f, 0.5773f);
Vertices[1] = Vector3f(0.0f, -1.0f, -1.15475f);
Vertices[2] = Vector3f(1.0f, -1.0f, 0.5773f);
Vertices[3] = Vector3f(0.0f, 1.0f, 0.0f);
Your vector contains pointers to vertices, not the actual vertices. In fact it contains the same pointer four times, so even if OpenGL could handle pointers to vertex data, you would only have the same vertex repeated. The index buffer suffers from the same problem.
The third parameter of glBufferData is supposed to be a pointer to the data you wish to copy, but you are passing a pointer to a pointer: the vector holds a list of pointers to vertex elements, and &Vertices2 is the address of the vector object itself, not of the vertex data. It should be:
So I'm aware of MonoGame, SharpDX, etc., but these are all significant overkill for what I need, and what's more, none of them work over the Remote Desktop Protocol.
I just want to do some very simple 2D graphics, and it doesn't even have to run at a particularly high frame rate (even 20 FPS is fine). Most importantly, though, it has to work over RDP, which means it has to be software rendering.
I have played with a WriteableBitmap with some degree of success, but was wondering if anyone had any other ideas?
Obviously, this isn't for any sort of "real" game or anything. Just basically an exercise to keep me from forgetting how to use C#.
I'm somewhat new to DirectX programming and need a bit of guidance. My requirement is currently limited to video and 2D drawing and so far I've managed to get SharpDX to play a video from a file without problems, and also have been able to do my 2D drawing. This was simple enough and the performance is good.
Ultimately, my video source will need to be video input devices (e.g., webcams, video capture cards, etc.).
My question is: Does SharpDX.MediaFoundation include DirectShow wrappers such that I can access and play video from local video input devices? If so, can someone point me in the right direction and link some examples of using SharpDX for this purpose?
I understand it's no longer supported, but it's battle-tested, and I don't think the risk at present is any greater than investing in the "latest and greatest" GUI framework only to have it killed off before it gets off the ground.
Besides, I feel that if I use SharpDX to get a better handle on the underlying concepts, porting to something newer when required would be easier.
As to my original question, I know there are other NuGet packages that make easy work of displaying camera input (e.g., AForge, Accord), but I'm not convinced they take full advantage of the GPU, which is what I'm looking for. Specifically, I need to route the capture byte stream directly to the MediaEngine's render surface, as I'll be dealing with larger frame sizes and frame rates.
I'm using WinForms (yeah, I know ... old school), though I don't think that makes much difference, since in the end it's simply the window/control handle that has to be specified as the swap chain's OutputHandle. I'm doing that already for playing a video file as a test, and performance is very good.
When looking through SharpDX.MediaFoundation with Reflector, I get the strong feeling it provides the means for capture input, but I can't find any complete examples in that specific area.
No problem. Given that it saw over 2.5 million downloads, I figured somebody might have used it for this purpose, and I don't think DirectX 11 is going anywhere any time soon.
As for newer platforms, I unfortunately don't have much experience with them, so I'm concerned about the stumbling blocks they would present and about not knowing their limitations. I could struggle my way through UWP, but using which accelerated graphics API? Win2D? WinUI 3? DirectN?
It's hard to know which platform to choose that will still be around by the time I learn it.
I don't know pygame, but I notice that you don't call pygame.display.update() until the end of the program. In most GUI applications you need to call update during normal running, inside your main loop, in order to keep the display current.
I have run into a dead end in another OpenGL forum.
I have a basic understanding how (OpenGL) stencilling works.
My stencil is in 2D, no depth.
I am extracting a (circular) part of OpenGL objects, no problem.
I am trying to further manipulate the stencil result.
The task is to enlarge the result and move it to the center of the screen.
I used glTranslatef but it moves the original stencil, not the result. I want to move the result only.
I think my issue is that the stencil was built on "top of the modelview matrix" (no push/pop), while the objects to be stencilled are in various matrix "stacks", using push/pop.
I understand the "result" is in the color buffer, but it sure looks as if I am manipulating the stencil buffer, which has already been disabled.
Am I correct in all this?
Do I have to rebuild everything at the SAME modelview stack level?
I am sorry if I am not using correct OpenGL terminology.
First, to avoid "confusion": I am using the term "object" to describe what is created in OpenGL between the "new" and "end" loop code.
I have a "text" object rendered in the OpenGL window, and I need to delete that text and replace it with another one.
I do not want to translate, rotate, etc. ... just delete and replace.
What I have tried so far ALWAYS writes OVER the existing text and makes a mess.
Is that possible in OpenGL?
I think I'll try (a hack): moving / translating the current (text) matrix off the screen...
As a general "rule", so as not to invite uncalled-for criticism of my under-construction code, I am very reluctant to post code.
However, every rule has an exception, so here is the latest implementation of "write text message" to the OpenGL window.
It writes the message, but in the wrong position - irrelevant for now.
(The disabled #ifdef / #endif code was added, and that is why the message is written in the wrong raster position. The raster position code must be in the wrong place.)
It is my understanding that OpenGL keeps the data - a bitmap in this case - in video card hardware. So to override / update the data I would have to access that particular hardware.
Or in other words - an OpenGL "variable / object" cannot simply be overwritten like a C variable.
So I believe I need to keep track of the "matrix" where the initial bitmap is written, and then clear that entire matrix.
Correct me if I am wrong, but glClear would not work SELECTIVELY - it would clear the entire buffer. (I neglected to add "selectively" to the post title.)
Yes, glClear does clear the whole viewport - you would then redraw all of the content for the next frame. Matrices only affect what you are going to render in the future, not what you have already sent to be rendered.
Since I just discovered that push / pop matrix is "deprecated", clearing everything may be the only decent option anyway. Really not that hard, since I keep the system state...
Bottom line: OpenGL was not the best choice for my application. Oh well...
I am going to use Graphviz to generate state flow charts (i.e., blobs representing states, arrows from one blob to another representing events/transitions) based on a transition table.
There are ways to give hints about the preferred placements, in rough terms, through the "rank" specification. That does not define the placements; it is just one input parameter to the placement algorithm. Ideally, I would like to give the user a mechanism to drag one or more blobs to a different position, and then redo the layout with this one blob pinned.
I do not have the time to decipher the placement algorithm from the source code, so now I hope that someone knows either Graphviz in great detail or the placement algorithms themselves: Is pinning of individual graphic elements at all compatible with the placement algorithm? If it is, can it be done in the dot input language to Graphviz? Or in some other way?
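For what it's worth, pinning is supported by the neato and fdp layout engines, though not by dot's rank-based algorithm: giving a node a `pos` attribute whose value ends in `!` (or setting `pin=true`) fixes that node's coordinates while the rest of the graph is laid out around it. A sketch with made-up state names:

```dot
digraph states {
    layout=neato;           // dot ignores pos; neato and fdp honor it
    overlap=false;
    Idle    [pos="0,0!"];   // the "!" pins this node at (0,0)
    Running;                // unpinned: placed by the algorithm
    Stopped;
    Idle -> Running [label="start"];
    Running -> Stopped [label="halt"];
    Stopped -> Idle [label="reset"];
}
```

So one workflow would be: after the user drags a blob, re-emit the graph with that node's dragged coordinates in `pos="x,y!"` and rerun the layout with neato or fdp instead of dot.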