I am trying to simulate a video feedback loop using OpenGL buffers. Physically this is done by connecting a video camera to a TV, so that the TV shows what the camera is recording, and then pointing the camera directly at the TV. In this way a loop is created, like an iterated system, and spectacular images can emerge.
I already achieved this with the help of the accumulation buffer and render-to-texture via glCopyTexImage2D: the content of the accum buffer is returned to the screen, the screen is copied to the texture, the texture is applied to geometry and rendered, and finally the result is loaded back into the accum buffer. Then the loop starts again.
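The iteration described above can be sketched roughly as follows (this is a reconstruction of the approach, not my exact code; `drawTexturedQuad` stands for whatever transformed, textured geometry is drawn each frame, and a current GL context with an allocated texture `tex` is assumed). It cannot run standalone, since every call needs a live rendering context:

```c
/* One iteration of the accum-buffer feedback loop (sketch). */
glAccum(GL_RETURN, 1.0f);                  /* accum buffer -> screen          */

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0,         /* screen -> texture               */
                 GL_RGB, 0, 0, width, height, 0);

drawTexturedQuad();                        /* texture -> screen, with the     */
                                           /* feedback transform applied      */
glAccum(GL_LOAD, 1.0f);                    /* screen -> accum buffer          */
```

Each of these steps reads back or copies a full frame through the framebuffer, which is the part I hoped FBOs would avoid.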
Hoping for better performance, I learned a little about framebuffer objects and how textures can be attached to them for off-screen render-to-texture. The new design is like this: two framebuffers and two textures are defined and attached to each other, so we have fbo1 -> tex1 and fbo2 -> tex2. In the render loop, first we bind fbo1 and draw into it the content of tex2; then we bind fbo2 and draw into it the content of tex1. In that way a rendering loop is created.
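A sketch of that ping-pong design, again assuming a current GL context (with FBO support), known `width`/`height`, and a hypothetical `drawTexturedQuad` helper; it is not runnable on its own:

```c
/* Setup: two FBOs, each with its own color texture attached. */
GLuint fbo[2], tex[2];
glGenFramebuffers(2, fbo);
glGenTextures(2, tex);
for (int i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex[i], 0);
}

/* Render loop: ping-pong, reading one texture while writing the other. */
int rd = 0;
for (;;) {
    int wr = 1 - rd;
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[wr]);   /* render off-screen      */
    glBindTexture(GL_TEXTURE_2D, tex[rd]);
    drawTexturedQuad();                           /* feedback transform     */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);         /* show latest result     */
    glBindTexture(GL_TEXTURE_2D, tex[wr]);
    drawTexturedQuad();

    rd = wr;                                      /* swap roles next frame  */
}
```

The point of the swap is that a texture is never read and written in the same pass, which GL forbids, so no copy is needed between iterations.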
But I have not seen any performance improvement from using FBOs compared with the first approach. Why? Surely there is a better way to do this. Can you help me with this issue? Any ideas?
Thanks a lot!