
I am trying to simulate a video feedback loop using OpenGL buffers. Physically this is done by connecting a video camera to a TV, which shows what the camera is recording, and pointing the camera directly at the TV. In this way a loop is created, like an iterated system, and spectacular images can appear.

I already did this with the help of the accumulation buffer and rendering to textures with the command glCopyTexImage2D. The content of the accumulation buffer is returned to the screen, the screen is copied to a texture, the texture is applied and rendered, and finally the result is loaded back into the accumulation buffer. Then the loop starts again.
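For reference, the accumulation-buffer loop described above could look roughly like this in legacy (compatibility-profile) OpenGL. This is a minimal sketch, not the poster's actual code: `feedbackTex`, `width`, `height`, and the `drawTexturedQuad()` helper are assumed names.

```cpp
// One iteration of the accum-buffer feedback loop (legacy OpenGL).
// Assumes a current GL context with an accumulation buffer, and a
// hypothetical drawTexturedQuad() that renders a screen-filling quad.
void feedbackFrameAccum(GLuint feedbackTex, int width, int height)
{
    // 1. Return the previous frame from the accumulation buffer to the screen.
    glAccum(GL_RETURN, 1.0f);

    // 2. Copy the current framebuffer contents into the feedback texture.
    glBindTexture(GL_TEXTURE_2D, feedbackTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);

    // 3. Re-render the texture (scaled/rotated to produce the feedback effect).
    glClear(GL_COLOR_BUFFER_BIT);
    drawTexturedQuad(feedbackTex);

    // 4. Load the result back into the accumulation buffer for the next iteration.
    glAccum(GL_LOAD, 1.0f);
}
```

Note that glCopyTexImage2D reads back from the framebuffer every frame, which is one of the costs the FBO approach is meant to avoid.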

Wanting better performance, I learned a little about framebuffer objects and how textures can be attached to them for off-screen rendering. The new design is like this: two framebuffers and two textures are defined and attached to each other, so we have fbo[0] -> tex[0] and fbo[1] -> tex[1]. In the render loop, first we bind fbo[0] and draw into it the content of tex[1]; then we bind fbo[1] and draw into it the content of tex[0]. In that way a rendering loop is created.
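The two-FBO "ping-pong" design described above might be sketched like this, assuming an OpenGL 3.0+ context. Again, `fbo`, `tex`, and `drawTexturedQuad()` are assumed names, not taken from the original code:

```cpp
// Two framebuffers, each with a texture attached as its color target.
GLuint fbo[2], tex[2];

void initPingPong(int width, int height)
{
    glGenTextures(2, tex);
    glGenFramebuffers(2, fbo);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Attach tex[i] as the color attachment of fbo[i].
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[i], 0);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void feedbackFramePingPong(int frame)
{
    int dst = frame & 1;        // render into fbo[dst] ...
    int src = 1 - dst;          // ... sampling from tex[src]
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    drawTexturedQuad(tex[src]); // apply the feedback transform here

    // Show the result by drawing the freshly written texture to the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    drawTexturedQuad(tex[dst]);
}
```

The key point is that nothing is ever read back from the default framebuffer: each frame samples one texture and writes directly into the other.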

But I have not seen any performance improvement from using framebuffer objects compared with the first approach I described. Why? Surely there is a better way to do this. Can you help me with this issue? Any ideas?

Thanks a lot!

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
