Hi guys,

I have two queries:

(i) How do I use double buffering for voice recording with the wave API? Please suggest the steps for setting up double buffering.

(ii) I want one buffer to record voice; once it gets full it should go for playing, and at the same time my second buffer should take over recording. Once the second buffer gets full it should go for playing, and the first buffer should go back to recording, and so on.

Please tell me the steps so that I can do the above. Thanks in advance for your valuable comments and answers; looking forward to your response.

1 solution

You do not mention it here, but I know from your previous questions that you actually want to play the audio in a client application, which receives the audio through a socket connection. I am only going to address the server side here.

It seems to me that you have focused too much on the API documentation's notes about using double buffering. Yes, you should do this to avoid time gaps with missing audio resulting in "pops" and audio dropouts, but do not let this dictate the architecture of the rest of your application.

To separate the functionality, I would suggest you set up classes for the following three areas in your server (your server will obviously have more classes and functionality, but these are the ones I will be referring to; rough skeletons follow the list):
- Audio Source handler: responsible for initializing the WaveIn device and reacting to new audio data being captured.
- File handler: responsible for creating audio files, adding new audio to the files, closing files, etc.
- Network handler: responsible for accepting incoming client connections and sending audio packets to the connected client(s).
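
Something along these lines, as a rough C++ sketch (the class and method names are just my suggestions, not a prescribed API):

    #include <vector>

    // Wraps the WaveIn device: opens it, queues the capture buffers,
    // and reacts to each filled buffer.
    class AudioSourceHandler
    {
    public:
        bool Start();   // open the device, prepare/queue buffers, start capture
        void Stop();    // stop capture, unprepare buffers, close the device
    };

    // Owns the audio file(s) on disk.
    class FileHandler
    {
    public:
        void OnAudioBlock(const std::vector<char>& block);  // append to the current file
    };

    // Owns the client socket(s).
    class NetworkHandler
    {
    public:
        void OnAudioBlock(const std::vector<char>& block);  // send to connected client(s)
    };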

When you create the two buffers for capturing audio from the WaveIn device, you have to decide how large they should be. Buffers holding 0.5-1 second of audio, or even less, are usually fine.
During initialization, you prepare each buffer with waveInPrepareHeader() and then hand it to the driver with waveInAddBuffer().
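
A minimal sketch of that initialization, assuming 16 kHz, 16-bit mono PCM and a callback function (the buffer size constant and the global names are my own):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    static const int    NUM_BUFFERS    = 2;
    static const double BUFFER_SECONDS = 0.5;   // half a second per buffer

    static HWAVEIN g_hWaveIn;
    static WAVEHDR g_headers[NUM_BUFFERS];
    static char*   g_buffers[NUM_BUFFERS];

    // Defined further down; receives WIM_DATA when a buffer is full.
    void CALLBACK WaveInProc(HWAVEIN, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR);

    bool StartCapture()
    {
        WAVEFORMATEX fmt = {};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 1;
        fmt.nSamplesPerSec  = 16000;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        if (waveInOpen(&g_hWaveIn, WAVE_MAPPER, &fmt,
                       (DWORD_PTR)WaveInProc, 0, CALLBACK_FUNCTION) != MMSYSERR_NOERROR)
            return false;

        DWORD bytesPerBuffer = (DWORD)(fmt.nAvgBytesPerSec * BUFFER_SECONDS);
        for (int i = 0; i < NUM_BUFFERS; ++i)
        {
            g_buffers[i] = new char[bytesPerBuffer];
            ZeroMemory(&g_headers[i], sizeof(WAVEHDR));
            g_headers[i].lpData         = g_buffers[i];
            g_headers[i].dwBufferLength = bytesPerBuffer;
            // Prepare first, then queue the buffer with the driver.
            waveInPrepareHeader(g_hWaveIn, &g_headers[i], sizeof(WAVEHDR));
            waveInAddBuffer(g_hWaveIn, &g_headers[i], sizeof(WAVEHDR));
        }
        return waveInStart(g_hWaveIn) == MMSYSERR_NOERROR;
    }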
Now, every time you get the signal (WIM_DATA) that the current buffer is full, you should perform the following steps (see the callback sketch after this list):
- Pass a copy of the buffer that has just been filled to the File handler.
- Pass another copy of the buffer to the Network handler.
- Call waveInAddBuffer() to return the emptied buffer to the driver's queue. Capturing has already switched to the other buffer automatically; re-queuing after copying ensures the driver never overwrites data you have not saved yet.
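
Continuing the initialization sketch above, the callback could look roughly like this (PassToFileHandler and PassToNetworkHandler are hypothetical stand-ins for FileHandler::OnAudioBlock and NetworkHandler::OnAudioBlock from the earlier skeletons):

    #include <vector>

    void PassToFileHandler(std::vector<char> block);     // hypothetical hand-off
    void PassToNetworkHandler(std::vector<char> block);  // hypothetical hand-off

    void CALLBACK WaveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR /*dwInstance*/,
                             DWORD_PTR dwParam1, DWORD_PTR /*dwParam2*/)
    {
        if (uMsg != WIM_DATA)
            return;

        WAVEHDR* hdr = (WAVEHDR*)dwParam1;   // the buffer that has just been filled

        // Copy out the recorded bytes; dwBytesRecorded can be less than
        // dwBufferLength (for example when capture is being stopped).
        std::vector<char> block(hdr->lpData, hdr->lpData + hdr->dwBytesRecorded);
        PassToFileHandler(block);                 // first copy goes to the file handler
        PassToNetworkHandler(std::move(block));   // the original goes to the network handler

        // Hand the emptied buffer back; the driver is already filling the other one.
        // Strictly speaking, MSDN advises against calling waveIn* functions from
        // inside this callback; a production version would post the header to a
        // worker thread (e.g. with PostThreadMessage) and re-queue it there.
        waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
    }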

I know this might seem like a lot of copying, but this way the capture buffers are always free to be handed straight back to the driver, file handling is not compromised by network issues, and vice versa.
You need a mechanism for passing a block of audio data to the File and Network handlers so the Audio Source handler can get rid of the data and not worry about what happens to it. There are several ways of doing this and I will not go into details here, except to recommend that you use a non-blocking method; one possibility is sketched below.
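
For example, a small thread-safe queue (just one of several possible mechanisms, and my own naming): the Audio Source handler pushes blocks, and each handler drains its own queue on a worker thread, so a slow disk or socket never stalls capture.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <vector>

    class AudioBlockQueue
    {
    public:
        // Called by the Audio Source handler; only holds the lock briefly,
        // so it effectively never blocks the capture path.
        void Push(std::vector<char> block)
        {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_queue.push(std::move(block));
            }
            m_cv.notify_one();
        }

        // Called on the File or Network handler's worker thread; blocks
        // until a block is available.
        std::vector<char> Pop()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return !m_queue.empty(); });
            std::vector<char> block = std::move(m_queue.front());
            m_queue.pop();
            return block;
        }

    private:
        std::mutex                    m_mutex;
        std::condition_variable       m_cv;
        std::queue<std::vector<char>> m_queue;
    };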


I hope this helps.

Soren Madsen
 