|
Hi,
Did you change the original code before you ran into the "m_BuffersDone" issue? The whole idea is: it is initialized to zero inside the recording thread; see the code:
-------------------------------------------------------------
DWORD WINAPI CWaveINSimple::waveInProc(LPVOID arg) {
    ...
    while (GetMessage(&msg, 0, 0, 0) == 1) {
        switch (msg.message) {
        ...
        // Main thread is opening the WAVE device.
        case MM_WIM_OPEN:
            _this->m_BuffersDone = 0; // !!!!
            break;
        ...
        }
    }
    ...
}
-------------------------------------------------------------
Furthermore, if the thread fails to be created or WaveIn fails to open:
-------------------------------------------------------------
waveInThread = CreateThread(NULL, 0, pStartRoutine, (PVOID) this, 0, &dwThreadID);
if (!waveInThread) {
    this->m_Receiver = NULL;
    throw "Can't create WAVE recording thread.";
}
CloseHandle(waveInThread);
// Open the WaveIN Device, specifying the Thread's ID as a callback.
err = waveInOpen(&this->m_WaveInHandle, this->m_nWaveDeviceID, &this->m_waveFormat, dwThreadID, 0, CALLBACK_THREAD);
// !!!!! MIND THE dwThreadID OF THE THREAD
if (err) {
    // Open failed; tell the Thread to stop.
    this->m_SIG = EXIT_SIG;
    this->Close(4);
    throw "Can't open WaveIN Device.";
}
this->m_SIG = CONTINUE_SIG;
-------------------------------------------------------------
then an exception is thrown and proper clean-up is performed. I have unit-tested this and it works fine.
Regarding:
>>can you explain me why have I to expect that "_this->m_BuffersDone" have to reach the value of 2
Because the original code uses two buffers for recording:
-------------------------------------------------------------
...
err = waveInAddBuffer(this->m_WaveInHandle, &this->m_WaveHeader[0], sizeof(WAVEHDR));
...
err = waveInAddBuffer(this->m_WaveInHandle, &this->m_WaveHeader[1], sizeof(WAVEHDR));
-------------------------------------------------------------
So, if you add more buffers or use just one, then the "while (this->m_BuffersDone < 2)" check should be adjusted. Additionally, I would recommend using "Stop()" (not "_Stop()"), which is thread-safe and "reacts" better to fast start/stop switches.
Regarding "equalizer", do you mean decomposition into frequencies using Fourier series?
Regards,
Ruslan
|
|
|
|
|
Thanks for your quick reply.
1) I think my brain is in a loop too. Yes, you are right: m_BuffersDone is initialized under MM_WIM_OPEN. Based on your reply, I figured out that your waveInProc() doesn't receive MM_WIM_OPEN because, in my application, I had already opened the same waveform-audio input device in another thread (this thread monitors the sound level to decide whether or not to start recording). So this is solved (in fact, I wrote "potential bug").
2) Thanks for the explanation about how many MM_WIM_DATA messages will be received after a waveInReset() call. Also, I am using "Stop"; I referred to "_Stop" just to be quicker in explaining the problem.
3) After understanding the problem in topic 1) above, I re-analyzed the situation, and there is no reason why your waveInProc() sometimes doesn't receive both MM_WIM_DATA messages with EXIT_SIG set after a waveInReset() call (so it remains in the while loop forever). It happens even without my "sound level monitoring" thread. Do you have any idea? (Don't go crazy over this.)
4) Parametric Equalizer: yes, I mean decomposition into frequencies using Fourier series, able to modify what I'm going to listen to (like Windows Media Player or WinAMP have). I'm looking at the link DJ_T-O sent me, but I don't know whether it will be useful for what I need. If you use it, do you use it to modify what you are listening to?
Thanks and best regards.
|
|
|
|
|
(3) First of all, make sure that:
- volatile int m_SIG;
- volatile unsigned char m_BuffersDone;
are still declared "volatile", in order to prevent the compiler from optimizing access to those attributes.
Additionally, try this version:
----------------------------------------------------
case MM_WIM_DATA:
    if ((((WAVEHDR *)msg.lParam)->dwBytesRecorded) && (_this->m_Receiver)) {
        _this->m_Receiver->ReceiveBuffer(((WAVEHDR *)msg.lParam)->lpData,
                                         ((WAVEHDR *)msg.lParam)->dwBytesRecorded);
    }
    ++_this->m_BuffersDone;
    if (_this->m_SIG != EXIT_SIG) {
        err = waveInAddBuffer(_this->m_WaveInHandle, (WAVEHDR *)msg.lParam, sizeof(WAVEHDR));
        if (!err) --_this->m_BuffersDone;
    }
    break;
----------------------------------------------------
If the problem still persists then ... might the driver be causing this? In any case, please let me know whether this code solved the problem.
(4) >>If you use it, are you using to modify what you are listening?
Nope, just to draw frequencies charts.
Regards,
Ruslan
|
|
|
|
|
Have a look at
http://www.codeproject.com/audio/waveInFFT.asp
Hope it helps.
CU
T-O
|
|
|
|
|
Yep, that is a good source ... I have used it as well
Thx DJ_T-O!
Regards,
Ruslan
|
|
|
|
|
First of all - thanks for your code.
I do have one problem...
I would like to acquire from a professional sound card (RME sound card, 24-bit 192kHz) using your code; how can I reuse what you have done?
Can you help me? Sorry for my English.
I really need help on this one.
thanks.
|
|
|
|
|
Hi,
That's a good question, actually. I don't believe that the application doesn't work on your sound card as it is. It is just a matter of the predefined configuration:
this->m_waveFormat.wFormatTag = WAVE_FORMAT_PCM;
this->m_waveFormat.nChannels = 2;
this->m_waveFormat.nSamplesPerSec = 44100;
this->m_waveFormat.wBitsPerSample = 16;
which your sound card should really support.
However, I understand that you want the whole 24-bit PCM from your sound card, not just 16-bit PCM. Well, in this case you need to redefine those values (mentioned above). The only problem is that I am using the Blade interface of the LAME API, which is simpler. The Blade interface provides the "beEncodeChunk" function to encode the passed PCM. The PCM input is expected as "PSHORT pSamples", a pointer to SHORT, which is 16 bits, so 16-bit PCM input is expected. Here you may have trouble. I think you will need to investigate the lower-level LAME API, or do some math to convert 24-bit PCM into 16-bit PCM, in order to resolve this problem.
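To illustrate the conversion idea, here is a minimal sketch (not from the article; the function name is mine, and the packed little-endian 3-bytes-per-sample layout is an assumption about what the driver delivers). It simply keeps the most significant 16 bits of each 24-bit sample:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: convert packed little-endian 24-bit PCM to 16-bit PCM by
// keeping the upper 16 bits of each sample. Assumes 3 bytes per
// sample, least significant byte first.
std::vector<int16_t> Pcm24To16(const uint8_t* src, size_t numSamples) {
    std::vector<int16_t> out;
    out.reserve(numSamples);
    for (size_t i = 0; i < numSamples; ++i) {
        const uint8_t* p = src + i * 3;
        // Reassemble the 24-bit sample into the top of a 32-bit int,
        // then shift back down so the sign is extended.
        int32_t s = (int32_t)((uint32_t)p[0] << 8 |
                              (uint32_t)p[1] << 16 |
                              (uint32_t)p[2] << 24) >> 8;
        // Drop the lowest 8 bits: 24-bit -> 16-bit.
        out.push_back((int16_t)(s >> 8));
    }
    return out;
}
```

This truncates rather than dithers, which is usually acceptable for a first pass; a production converter would add dither before truncation.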
Regards,
Ruslan Ciurca
|
|
|
|
|
Hi,
First, thanks to the developer of this project. It is very useful.
After modifying your code to record directly without giving any information, I sometimes get an exception caused by the last line of "CWaveINSimple::CleanUp()".
Can you please explain why?
Samiro
|
|
|
|
|
Hi Samiro,
Well, it depends on what you have modified. If you redefined the type of the m_arrWaveINDevices member of the class, then I presume the problem is here:
vector<CWaveINSimple*>::iterator itPos = m_arrWaveINDevices.begin();
for (; itPos < m_arrWaveINDevices.end(); itPos++) {
    delete *itPos; // !!! Here the problem may be !!!
}
And the reason is that the m_arrWaveINDevices vector was declared as a collection of pointers to objects, not actual copies of objects (in which case "delete *itPos" may fail).
Additionally, make sure that the following is declared outside the body of the class (in case you deleted it):
vector<CWaveINSimple*> CWaveINSimple::m_arrWaveINDevices;
QMutex CWaveINSimple::m_qGlobalMutex;
volatile bool CWaveINSimple::m_isDeviceListLoaded = false;
These members of the CWaveINSimple class are declared static (!!!), and this is the way (in C++) to define and initialize them.
Regards,
Ruslan
|
|
|
|
|
Hi,
First of all - thanks for your code, it's very good.
But I do have one problem...
In some cases, after I stop the recording, I see (or hear, actually) that the last second or so is missing.
Is it possible that, if I call device->stop() too soon, the last buffer doesn't reach the ReceiveBuffer function?
I really need help on this one.
thanks.
|
|
|
|
|
Hi,
Well, yes and no at the same time.
A. When you call "device->stop()", this:
- tells the recording thread of the device that recording is about to stop (no more buffers will be passed to the device to hold recorded PCM),
- tells the device to un-queue all the queued buffers, which are passed to ReceiveBuffer(...) anyway. Now it depends on the physical device whether it succeeded in writing any (or all remaining) PCM into those buffers (to be un-queued),
- waits until all the queued buffers are processed (while un-queuing),
- closes the device.
For more details see "CWaveINSimple::_Stop()" and "CWaveINSimple::waveInProc(LPVOID arg)". I am sorry for the comment in the code saying "(via MM_WIM_DONE)"; it surely should be "(via MM_WIM_DATA)".
So, no: all the remaining sound buffers are processed correctly, even the last ones, from a technical point of view (exactly what "device->stop()" should do).
B. And yes, it depends on when you stop the device, because the last sound buffer (which you hear while recording) may not be the last sound buffer passed to ReceiveBuffer(...) (considering what is written above in "A"). To resolve this, you probably need to wait a short while before calling "device->stop()" (how long depends on the latency of the physical device or sound driver). Alternatively, you can reduce the size of the buffers passed to the device (supposing that the physical device or sound driver is optimized for a smaller buffer size). See the following line in the constructor "CWaveINSimple::CWaveINSimple(...)":
this->m_WaveHeader[1].dwBufferLength = this->m_WaveHeader[0].dwBufferLength = this->m_waveFormat.nAvgBytesPerSec << 1;
Set something more appropriate instead of "this->m_waveFormat.nAvgBytesPerSec << 1", but make sure the value is divisible by "m_waveFormat.nBlockAlign", in this case 4. Or, even better, use "magical" numbers like 2^N, where N >= 9 (otherwise the buffer size is too small and CPU usage may be higher).
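As a small illustration of picking such a value (the helper name is mine, not from the article), a buffer size for a desired duration can be rounded down to a multiple of the block alignment:

```cpp
#include <cstdint>

// Sketch: compute a buffer size holding roughly 'ms' milliseconds of
// audio, rounded down to a multiple of nBlockAlign so the driver always
// gets whole sample frames (nBlockAlign == 4 for 16-bit stereo).
uint32_t BufferSizeForMs(uint32_t nAvgBytesPerSec,
                         uint32_t nBlockAlign,
                         uint32_t ms) {
    uint32_t bytes = (uint32_t)((uint64_t)nAvgBytesPerSec * ms / 1000);
    return bytes - (bytes % nBlockAlign); // align down to a whole frame
}
```

For 44.1kHz 16-bit stereo (nAvgBytesPerSec == 176400), a 250 ms buffer comes out to 44100 bytes, already frame-aligned.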
Regards,
Ruslan Ciurca
|
|
|
|
|
Hi,
I haven't looked at your code in detail, but it is possible to write code that will terminate a recording without losing any buffered data. What you do is this:
1. Call waveInStop()
2. Wait for the Receiver to process any remaining buffered data
3. Call waveInReset()
4. Call waveInClose()
The tricky part is knowing when step (2) is complete. The way I do this is (in effect) to post a special message to the receiver thread after calling waveInStop(). Then, when I see this message in the receiver thread, I know my job is done. Calling waveInReset before this point will lose some audio that you would otherwise have captured.
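The "wait for step (2)" idea can be sketched portably, independent of the waveIn API: count the buffers still in transit, and have the stopping thread block until the count drains to zero before it would go on to waveInReset()/waveInClose(). The class name and members below are mine, not from the article or the Win32 API:

```cpp
#include <condition_variable>
#include <mutex>

// Sketch of step (2): track how many buffers are still "in transit" and
// block the stopping thread until the receiver has processed them all.
// In real code this sits between waveInStop() and waveInReset().
class BufferDrain {
public:
    void BufferQueued() {   // call when a buffer is handed to the driver
        std::lock_guard<std::mutex> lk(m_mx);
        ++m_outstanding;
    }
    void BufferDone() {     // call from the receiver thread per returned buffer
        std::lock_guard<std::mutex> lk(m_mx);
        if (--m_outstanding == 0) m_cv.notify_all();
    }
    void WaitUntilDrained() { // call from the stopping thread after waveInStop()
        std::unique_lock<std::mutex> lk(m_mx);
        m_cv.wait(lk, [this] { return m_outstanding == 0; });
    }
private:
    std::mutex m_mx;
    std::condition_variable m_cv;
    int m_outstanding = 0;
};
```

The special-message approach described above achieves the same thing through the thread's message queue instead of a condition variable.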
|
|
|
|
|
Hi Paul,
The difference between waveInStop() and waveInReset() is:
- waveInStop() stops audio input; the buffer currently used for recording is marked as done and returned to the application, and any other queued buffers stay in the queue. So, technically, it costs nothing to call waveInStart() the next time; it's like pausing the recording.
- waveInReset() stops audio input; the buffer currently used for recording is marked as done and returned to the application, and any other queued buffers are also marked as done and returned to the application. And yes, the current position is reset to zero (waveInGetPosition(...); the current position is approximately the recording time).
So, technically, calling waveInStop() isn't mandatory.
Regards,
Ruslan
|
|
|
|
|
Hi,
Thanks for your response. The point I was trying to make is that calling waveInReset without first calling waveInStop discards any buffers 'in transit' in the audio system. That's why ran9 is losing the last second or so of his recording. I have tested my own code, which does things the way I outlined in my previous post, and I don't lose any audio at the end of the recording.
|
|
|
|
|
Paul,
But among the buffers 'in transit' in the audio system, there is always one that is filled (about to be fired out) or partially filled, while the others are empty. So, waveInStop just tells the driver to fire out the 'filled or partially filled' buffer (the one currently being processed) and to keep the remaining empty buffers in the queue; they will not be processed by the driver until the next waveInStart. That is what MSDN says:
"If there are any buffers in the queue, the current buffer (!!!) will be marked as done (the dwBytesRecorded member in the header will contain the length of data), but any empty buffers (!!!) in the queue will remain there."
waveInReset does almost the same, except that it also tells the driver to fire out the remaining buffers from the queue, but they are empty anyway.
Could you please elaborate on your position in more detail? I must admit I may be wrong (in the end, I also tested the code and haven't noticed any losses ... different sound drivers, manufacturers?), so it's purely technical curiosity.
Regards,
Ruslan.
|
|
|
|
|
Hi,
I think the difference is that waveInReset discards any buffer(s) in the process of being filled (i.e. it returns it/them with a length of zero), whereas waveInStop does not. Although this is not made clear in the documentation, I think that's how it works. Here is a post which implies as much:
http://www.codeguru.com/forum/archive/index.php/t-220538.html
You might be right about only one buffer being affected this way, though, so it's hard to see why ran9 is losing as much as 1 second of audio. Ran9, are you running on Windows Vista? The audio stack has been completely redesigned and, in my experience, uses noticeably more CPU time than XP, so this might be a factor.
|
|
|
|
|
Ah, I see. I have also heard about this problem, but ...
>>What's strange is that I swear it never did this before. I'm beginning to wonder if it has something to do with a version of the OS or something.
It may also be the sound driver implementation causing this problem; the OS just guarantees correct communication with drivers via the APIs. E.g., in some cases it is mandatory to align buffers to the 'm_waveFormat.nBlockAlign' factor, otherwise the application crashes; with other driver implementations you may not notice this. Some driver implementations may use internal buffers for caching, in which case PCM from the internal buffers is copied into the buffers passed from the queue, and this introduces latency. Other driver implementations may record directly into the passed buffers from the queue. And so on ...
From the other point of view, as per the link provided, a 3-second part of a buffer may contain the required sound, while a 7-second part of the same buffer may be undesirable garbage from (e.g.) the next track. So, waveInStop may still not be the final solution.
In any case, I think the best workaround is to reduce the size of the passed buffers in order to reduce the size of the possible losses. The technical support for this idea is the fact that many systems that display a sound spectrum (decomposition of the sound into per-channel frequencies using Fourier series) indeed use buffers of a smaller size (and, typically, more than two buffers) for better granularity and, as a result, fewer losses. Additionally, waveInStop will also behave better in such a case ... less time to wait for one buffer to complete. Generally, buffers of 1-2 seconds:
nChannels = 2;
nSamplesPerSec = 44100;
wBitsPerSample = 16;
nBlockAlign = nChannels * (wBitsPerSample/8); // == 4
nAvgBytesPerSec = nSamplesPerSec * nBlockAlign; // == 176400 bytes to hold 1 second of PCM
are quite huge.
Regards,
Ruslan
|
|
|
|
|
Hi,
Yes, I agree about using smallish buffers (although I still suspect that calling waveInReset is an invitation to all the links in the chain between the sound card and the application to 'drop everything'). A larger number of smaller buffers = less latency, traded off against higher CPU overhead. I did read somewhere that very small buffers can crash some sound card drivers, though, so I guess it pays to experiment.
For 16-bit 44.1kHz stereo audio, I actually use buffers of 1152 samples (i.e. just over 4K bytes), as this corresponds to one MP3 frame; this seems to work well in practice. I provide 1MB worth of buffers (just under 6 seconds' worth), which might be overkill, but I wanted to do my best not to miss any audio.
BTW, I don't think waveInStop will ever return a buffer partially filled with garbage. I believe that it either returns a smaller dwBytesRecorded value, or waits until the buffer is completely full before returning it. But I could be wrong, as I have not explicitly tested this.
So, to summarise, the formula for success seems to be:
- provide lots of smallish buffers
- call waveInStop and process any buffers returned before calling waveInReset
At least, that's what I do and it seems to work. One thing I have learned about waveIn is that it pays to adopt a belt-and-braces approach as, it seems, if something can go wrong, it will. And everything seems to have more CPU overhead on Vista.
Oh yes, I didn't know about the 'm_waveFormat.nBlockAlign' issue; thank you for raising it. Re-reading the docs, I see you are right. Fortunately, I seem to be getting away with it.
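As a quick sanity check on the figures in this post (a sketch; the constants come from the 16-bit 44.1kHz stereo format discussed above, and the helper name is mine):

```cpp
#include <cstdint>

// 16-bit stereo: 4 bytes per sample frame.
constexpr uint32_t kBytesPerFrame = 2 /*channels*/ * (16 / 8);
// One MP3 frame is 1152 samples -> just over 4 KB per buffer.
constexpr uint32_t kBufferBytes = 1152 * kBytesPerFrame;
// How many such buffers fit in 1 MB.
constexpr uint32_t kNumBuffers = (1024 * 1024) / kBufferBytes;

// Seconds of audio held by kNumBuffers buffers at 44100 frames/s.
inline double BufferedSeconds() {
    return (double)(kNumBuffers * kBufferBytes) / (44100.0 * kBytesPerFrame);
}
```

The arithmetic confirms the claims: 4608 bytes per buffer, and 1 MB of buffers holds just under 6 seconds of audio.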
|
|
|
|
|
I was losing the last couple of seconds as well, but all I did was reduce the buffer duration from 5 seconds to 1 second and increase the number of buffers, and it worked.
I tried using the Close and Reset functions, but it didn't matter much. I think the main problem is having bigger buffers.
|
|
|
|
|
hi,
I'm trying to find code that records sound.
I found 3 projects before this one, and they all have horrible documentation.
This one is maybe the easiest to understand and get working...
but I need a very, very simple example in console C++.
If you can help me: I just need to record and save.
A simple console example would be very helpful.
gratz,
a computer science student (Brazil, Santa Catarina)
|
|
|
|
|
Hi,
Do you need a very simple example to record sound in
- raw PCM or
- MP3
format?
Additionally, the API used in the very simple example should be
- WinSDK or
- using classes from this article
?
Regards,
Ruslan
|
|
|
|
|
I need a simple console example.
It can record in MP3; that would be a good choice.
I'll use this class in my app, with wxWidgets.
I just need the main example: starting, encoding, and stopping the recording.
An example using the classes,
if possible, in a console app.
Other stuff I'll learn after that.
Thanks for helping me.
-- modified at 19:38 Wednesday 15th August, 2007
|
|
|
|
|
Well, then it is really simple.
Have a look at the "Examples" section in the article, point 3. "mp3Writer" is half of the job, and the code below it (between "try {...} catch ()") is the second half.
A. Here:
CWaveINSimple& device = CWaveINSimple::GetDevice(strDeviceName);
you need to provide the name of the WaveIN device.
B. Here:
CMixerLine& mixerline = mixer.GetLine(strLineName);
you need to provide the name of the line to record from (a line of the device).
C. Here:
mixerline.SetVolume(0);
you need to provide the volume level of the line.
REMARK: All three (strDeviceName, strLineName and the volume level) can be hardcoded as constants or read from the command line (up to you).
D. This line:
device.Start((IReceiver *) mp3Wr);
starts the recording process.
E. This line:
device.Stop();
stops the recording.
You can find all of this in the sources (top of the article), which also contain the "main" function of the console application provided as an example. The complexity there is just in handling command-line parameters and printing the WaveIN devices available in the system and the Lines of the selected device. Everything else is as described above.
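Putting steps A through E together, a console "main" might look roughly like the outline below. This is a sketch, not compilable as-is: the construction of the mixer and the mp3Writer receiver is elided (see the article's example 3 for those parts), and strDeviceName/strLineName are placeholders you must fill in.

```cpp
// Outline of a minimal console recorder using the article's classes.
// NOTE: mixer and mp3Wr setup is elided; see "Examples", point 3.
int main() {
    try {
        // A. Get the WaveIN device by name (placeholder name).
        CWaveINSimple& device = CWaveINSimple::GetDevice(strDeviceName);

        // B. Pick the line to record from (placeholder name).
        CMixerLine& mixerline = mixer.GetLine(strLineName);

        // C. Set the line's recording volume.
        mixerline.SetVolume(0);

        // ... create the mp3Writer receiver (mp3Wr) as in the article ...

        // D. Start recording; PCM is delivered to the receiver.
        device.Start((IReceiver *) mp3Wr);

        // ... record for as long as needed (e.g. wait for a key press) ...

        // E. Stop recording and flush the remaining buffers.
        device.Stop();
    }
    catch (const char* err) {
        // The classes throw C-string messages on failure (as shown
        // earlier in this thread); print err and exit.
    }
    return 0;
}
```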
Please let me know whether or not you succeed with this; otherwise, I will provide a simple example during the weekend.
Regards,
Ruslan
|
|
|
|
|
I tried to compile your code,
but I get some errors.
I'm using the mingw32 compiler...
I don't have V6.0.
Look at these error messages:
[Linker Error] undefined reference to `mixerGetLineInfoA@12'
[Linker Error] undefined reference to `mixerGetLineControlsA@12'
[Linker Error] undefined reference to `mixerGetControlDetailsA@12'
[Linker Error] undefined reference to `mixerGetControlDetailsA@12'
and some other linker errors.
|
|
|
|
|
To resolve those linker errors you need to add 'winmm.lib' to your project. With mingw32 the name may differ (typically you link it with '-lwinmm'); I have never used mingw32, but the idea is that you need to provide the proper library to the linker.
Regards,
Ruslan
|
|
|
|
|