Hi guys,

I have two queries:
(i) How do I use double buffering when recording voice with the wave API? Please suggest the steps for doing double buffering.

(ii) I want one buffer to record voice; once it is full it should go for playing, and at the same time my second buffer should take over recording. Once the second buffer is full it should go for playing, and the first buffer should go back to recording.

Please tell me the steps so I can do the above. Thanks in advance for your valuable comments and answers. Looking forward to your response.
Posted 25-Jun-12 22:06pm
Updated 25-Jun-12 22:27pm

1 solution


Solution 1

You do not mention it here, but I know from your previous questions that you actually want to play the audio in a client application, which receives the audio through a socket connection. I am only going to address the server side here.

It seems to me that you have focused too much on the API documentation's notes about using double buffering. Yes, you should do this to avoid time gaps with missing audio resulting in "pops" and audio dropouts, but do not let this dictate the architecture of the rest of your application.

To separate the functionality, I would suggest you have classes set up for the following 3 areas in your server (your server will obviously have more classes and functionality, but these are the ones I will be referring to):
- Audio Source handler responsible for initializing the WaveIn device and reacting to new audio data being captured.
- File handler responsible for creating audio files, adding new audio to the files, closing files, etc.
- Network handler responsible for accepting incoming client connections and sending audio packets to the connected client(s).

When you create the two buffers for capturing the audio from the WaveIn device, you have to decide how large they should be. Buffers large enough to hold 0.5-1 second of audio, or even less, are usually fine.
During initialization, you prepare each of the two buffers with waveInPrepareHeader() and then queue it with waveInAddBuffer().
Now every time you get the signal (the WIM_DATA notification) that the current buffer is full, you should perform the following steps:
- Pass a copy of the buffer that has just been filled to the File handler.
- Pass another copy of the buffer to the Network handler.
- Call waveInAddBuffer() to return the buffer to the capture queue; the device has already switched over to the other queued buffer, so no audio is lost while you copy.

I know this might seem like a lot of copying, but this way the buffers used for capturing audio are always available for swapping, file handling is not compromised by network issues, and vice versa.
You need a mechanism for passing a block of audio data to the File and Network handlers so the Audio Source handler can hand off the data and not worry about what happens to it. There are several ways of doing this and I will not go into details here, except to recommend that you use a non-blocking method.

I hope this helps.

Soren Madsen

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Last Updated 28 Jun 2012