|
Hi,
Well, yes and no at the same time.
A. When you call "device->stop()", it
- tells the device's recording thread that recording is about to stop (no more buffers will be passed to the device to hold recorded PCM),
- tells the device to un-queue all the queued buffers, which are passed to ReceiveBuffer(...) anyway. It then depends on the physical device whether it managed to write any (or all of the remaining) PCM into those buffers before they were un-queued,
- waits until all the queued buffers are processed (while un-queuing),
- closes the device.
For more details see "CWaveINSimple::_Stop()" and "CWaveINSimple::waveInProc(LPVOID arg)". I am sorry for the comment in the code saying "(via MM_WIM_DONE)"; it surely should be "(via MM_WIM_DATA)".
So, no: all the remaining sound buffers are processed correctly, even the last ones, from a technical point of view (exactly what "device->stop()" should do).
B. And yes, it depends on when you stop the device, because the last sound buffer (the one you hear while recording) may not be the last sound buffer passed to ReceiveBuffer(...) (considering what is written above in "A"). To resolve this, you probably need to wait a short while before calling "device->stop()" (how long depends on the latency of the physical device or sound driver). Alternatively, you can reduce the size of the buffers passed to the device (assuming the physical device or sound driver is optimized for a smaller buffer size). See the following line in the constructor "CWaveINSimple::CWaveINSimple(...)":
this->m_WaveHeader[1].dwBufferLength = this->m_WaveHeader[0].dwBufferLength = this->m_waveFormat.nAvgBytesPerSec << 1;
Set something more appropriate instead of "this->m_waveFormat.nAvgBytesPerSec << 1", but make sure the value is divisible by "m_waveFormat.nBlockAlign" (in this case 4). Or, even better, use "magic" numbers of the form 2^N, where N >= 9 (otherwise the buffer size is too small and CPU usage may increase).
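As a concrete illustration, here is a tiny stand-alone sketch; the helper name is hypothetical and not part of the article's classes. It rounds a chosen size up to a multiple of nBlockAlign, so whatever value you pick stays frame-aligned:

```cpp
#include <cassert>

// Hypothetical helper (not part of CWaveINSimple): round a desired buffer
// size (ideally 2^N bytes with N >= 9) up to a multiple of nBlockAlign,
// so the driver always deals in whole sample frames.
static unsigned AlignedBufferSize(unsigned desiredBytes, unsigned nBlockAlign)
{
    return ((desiredBytes + nBlockAlign - 1) / nBlockAlign) * nBlockAlign;
}
```

For 16-bit stereo (nBlockAlign == 4) every power of two is already aligned; the rounding only matters for frame sizes that are not powers of two (e.g. 24-bit formats).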
Regards,
Ruslan Ciurca
|
|
|
|
|
Hi,
I haven't looked at your code in detail, but it is possible to write code that terminates a recording without losing any buffered data. What you do is this:
1. Call waveInStop ()
2. Wait for the receiver to process any remaining buffered data
3. Call waveInReset ()
4. Call waveInClose ()
The tricky part is knowing when step (2) is complete. The way I do this is (in effect) to post a special message to the receiver thread after calling waveInStop(). Then, when I see this message in the receiver thread, I know my job is done. Calling waveInReset() before this point will lose some audio that you would otherwise have captured.
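The sentinel trick from the steps above can be sketched in portable C++. This is a stand-in, not the real thing: a std::thread and a queue play the roles of the waveIn callback thread and its buffer queue, and the actual WinMM calls are shown only as comments:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Step 2 in isolation: a receiver thread drains a queue of recorded
// "buffers"; an empty vector acts as the sentinel posted right after
// waveInStop(), marking the point where all queued data is processed.
struct Receiver {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::vector<char>> q; // buffers handed back by the driver
    std::size_t processed = 0;

    void Post(std::vector<char> buf) {
        std::lock_guard<std::mutex> lk(m);
        q.push(std::move(buf));
        cv.notify_one();
    }

    // Runs on the receiver thread: drain until the sentinel arrives.
    void Run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !q.empty(); });
            std::vector<char> buf = std::move(q.front());
            q.pop();
            if (buf.empty()) return; // sentinel: nothing more is coming
            ++processed;             // real code would encode/write buf here
        }
    }
};
```

Once the receiver thread joins, all buffered audio has been handled, and calling waveInReset()/waveInClose() at that point can no longer discard anything you wanted.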
|
|
|
|
|
Hi Paul,
The difference between waveInStop() and waveInReset() is:
- waveInStop() stops audio input; the buffer currently being used for recording is marked as done and returned to the application, while any other queued buffers stay in the queue. So, technically, it costs nothing to call waveInStart() next time; it's like pausing the recording.
- waveInReset() stops audio input; the buffer currently being used for recording is marked as done and returned to the application, and any other queued buffers are also marked as done and returned to the application. And yes, the current position is reset to zero (waveInGetPosition(...); the current position is roughly the recording time).
So, technically, calling waveInStop() isn't mandatory.
Regards,
Ruslan
|
|
|
|
|
Hi,
Thanks for your response. The point I was trying to make is that calling waveInReset without first calling waveInStop discards any buffers 'in transit' in the audio system. That's why ran9 is losing the last second or so of his recording. I have tested my own code, which does things in the way I outlined in my previous post, and I don't lose any audio at the end of the recording.
|
|
|
|
|
Paul,
But of the buffers 'in transit' in the audio system, there is always one that is filled (about to be fired out) or partially filled, while the others are empty. So waveInStop just tells the driver to fire out the 'filled or partially filled' buffer (the one currently being processed) and to keep the rest of the buffers in the queue empty; they will not be processed by the driver until the next waveInStart. That is what MSDN says:
"If there are any buffers in the queue, the current buffer (!!!) will be marked as done (the dwBytesRecorded member in the header will contain the length of data), but any empty buffers (!!!) in the queue will remain there."
waveInReset does almost the same, except it also tells the driver to fire out the rest of the buffers from the queue, but those are empty anyway.
Could you please elaborate on your position in more detail? I must admit I may be wrong (in the end, I also tested the code and haven't noticed any losses ... different sound drivers, manufacturers?), so it's purely technical curiosity.
Regards,
Ruslan.
|
|
|
|
|
Hi,
I think the difference is that waveInReset discards any buffer(s) in the process of being filled (i.e. it returns it/them with a length of zero), whereas waveInStop does not. Although this is not made clear in the documentation, I think that's how it works. Here is a post which implies as much:
http://www.codeguru.com/forum/archive/index.php/t-220538.html
You might be right about only one buffer being affected this way, though, so it's hard to see why ran9 is losing as much as a second of audio. Ran9, are you running Windows Vista? The audio stack has been completely redesigned and, in my experience, uses noticeably more CPU time than XP, so this might be a factor.
|
|
|
|
|
Ah, I see. I have also heard about this problem, but ...
>>What's strange is that I swear it never did this before. I'm beginning to wonder if it has something to do with a version of the OS or something.
It may also be the sound driver implementation causing this problem; the OS just guarantees correct communication with drivers via the APIs. E.g., in some cases it is mandatory to align buffers to the 'm_waveFormat.nBlockAlign' factor, otherwise the application crashes, while with other driver implementations you may not notice this. Some driver implementations may use internal buffers for caching, in which case PCM from the internal buffers is copied to the buffers passed from the queue, and this introduces latency. Other driver implementations may record directly into the buffers passed from the queue. And so on ...
From another point of view, as per the link provided, a 3-second part of a buffer may contain the required sound while the 7-second part of the same buffer may be undesirable garbage from (e.g.) the next track. So waveInStop may still not be the final solution.
In any case, I think the best workaround is to reduce the size of the passed buffers in order to reduce the size of the possible losses. The technical support for this idea is the fact that many systems that display a sound spectrum (decomposition of the sound into per-channel frequencies using Fourier series) indeed use buffers of a smaller size (and, typically, more than two buffers) for better granularity and, as a result, fewer losses. Additionally, waveInStop will also behave better in that case ... less time to wait for one buffer to complete. Generally, buffers of 1-2 seconds:
nChannels = 2;
nSamplesPerSec = 44100;
wBitsPerSample = 16;
nBlockAlign = nChannels * (wBitsPerSample/8); // == 4
nAvgBytesPerSec = nSamplesPerSec * nBlockAlign; // == 176400 bytes to keep 1 second of PCM
are quite huge.
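To put a number on 'quite huge', here is a small sketch (the helper name is mine, not from the article) converting a buffer size in bytes into milliseconds of audio for the format above:

```cpp
#include <cassert>

// Milliseconds of audio held by one buffer of bufBytes bytes, for the
// 16-bit stereo 44100 Hz format in the post (176400 bytes per second).
static unsigned BufferMillis(unsigned long long bufBytes)
{
    const unsigned nAvgBytesPerSec = 44100 * 2 * (16 / 8); // == 176400
    return static_cast<unsigned>(bufBytes * 1000 / nAvgBytesPerSec);
}
```

The article's default of "nAvgBytesPerSec << 1" works out to 2000 ms per buffer, so a stop can cost up to two seconds of audio, while a 16 KiB buffer holds only about 92 ms.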
Regards,
Ruslan
|
|
|
|
|
Hi,
Yes, I agree about using smallish buffers (although I still suspect that calling waveInReset is an invitation to all of the links in the chain between the sound card and the application to 'drop everything'). A larger number of smaller buffers means less latency, traded off against higher CPU overhead. I did read somewhere that very small buffers can crash some sound card drivers, though, so I guess it pays to experiment.
For 16-bit 44.1 kHz stereo audio, I actually use buffers of 1152 samples (i.e. just over 4K bytes), as this corresponds to one MP3 frame; this seems to work well in practice. I provide 1 MB worth of buffers (just under 6 seconds' worth), which might be overkill, but I wanted to do my best not to miss any audio.
BTW, I don't think waveInStop will ever return a buffer partially filled with garbage. I believe it either returns a smaller dwBytesRecorded value or waits until the buffer is completely full before returning it. But I could be wrong, as I have not explicitly tested this.
So, to summarise, the formula for success seems to be:
- provide lots of smallish buffers
- call waveInStop and process any returned buffers before calling waveInReset
At least, that's what I do, and it seems to work. One thing I have learned about waveIn is that it pays to adopt a belt-and-braces approach as, it seems, if something can go wrong, it will. And everything seems to have more CPU overhead on Vista.
Oh yes, I didn't know about the 'm_waveFormat.nBlockAlign' issue; thank you for raising it. Re-reading the docs, I see you are right. Fortunately, I seem to be getting away with it.
|
|
|
|
|
I was losing the last couple of seconds as well, but all I did was reduce the buffer duration from 5 seconds to 1 second and increase the number of buffers, and it worked.
I tried using the Close and Reset functions, but it didn't matter much. I think the main problem is having bigger buffers.
|
|
|
|
|
hi,
I'm trying to find code that records sound.
I found 3 projects before this one, and all have horrible documentation.
This one is maybe the easiest to understand and get working,
but I need a very, very simple example in console C++.
If you can help me, I just need to record and save.
A simple example in a console app would be very helpful.
Thanks,
a computer science student (Brazil, Santa Catarina)
|
|
|
|
|
Hi,
Do you need a very simple example to record sound in
- raw PCM or
- MP3
format?
Additionally, the API used in the very simple example should be
- WinSDK or
- using classes from this article
?
Regards,
Ruslan
|
|
|
|
|
I need a simple console example.
Recording in MP3 would be a good choice.
I'll use this class in my app, which uses wxWidgets.
I just need the main example: starting, encoding, and stopping the recording.
An example using the classes,
if possible in a console app.
Other things I'll learn afterwards.
Thanks for helping me.
-- modified at 19:38 Wednesday 15th August, 2007
|
|
|
|
|
Well, then it is really simple.
Have a look at the "Examples" section in the article, point 3. "mp3Writer" is half of the job and the code below (between "try {...} catch ()") is the second half.
A. Here
CWaveINSimple& device = CWaveINSimple::GetDevice(strDeviceName);
you need to provide the name of the WaveIN device.
B. Here
CMixerLine& mixerline = mixer.GetLine(strLineName);
you need to provide the name of the line to record from (a line of the device).
C. Here
mixerline.SetVolume(0);
you need to provide the volume level of the line.
REMARK: You can hardcode all three (strDeviceName, strLineName and the volume level) as constants, or read them from the command line (up to you).
D. This line
device.Start((IReceiver *) mp3Wr);
starts the recording process.
E. This line
device.Stop();
stops the recording.
You can find all of this in the sources (top of the article), which also contain the "main" function of the console application provided as an example. The complexity there is just handling command line parameters and printing the WaveIN devices available in the system and the lines of the selected device. Everything else is as described above.
Please let me know whether or not you succeed with this; otherwise, I will provide a simple example during the weekend.
Regards,
Ruslan
|
|
|
|
|
I tried to compile your code
but got some errors.
I'm using the mingw32 compiler;
I don't have VC 6.0.
Look at these error messages:
[Linker Error] undefined reference to `mixerGetLineInfoA@12'
[Linker Error] undefined reference to `mixerGetLineControlsA@12'
[Linker Error] undefined reference to `mixerGetControlDetailsA@12'
[Linker Error] undefined reference to `mixerGetControlDetailsA@12'
and some other linker errors.
|
|
|
|
|
To resolve those linker errors you need to add 'winmm.lib' to your project. Possibly the name of the lib is different with mingw32 (I have never used mingw32), but the idea is that you need to provide the proper lib to the linker.
Regards,
Ruslan
|
|
|
|
|
I don't know what this winmm.lib is.
Beyond those linker errors, there was
a "sizeof type" without sizeof(type), easy to fix,
and [Warning] `packed' attribute ignored
on the line } BE_CONFIG, *PBE_CONFIG ATTRIBUTE_PACKED;
I really need a lib to record sound.
Since yesterday I've been trying to find one, but it's hard.
Most of them need Visual C,
and I don't have the VC SDK, just the binaries.
Please help me.
=/
|
|
|
|
|
Why don't you try using the Visual Studio Express edition? It's free to download from Microsoft and free of charge for home and academic usage.
Regarding
>>I don't know what this 'winmm.lib' is
It's not about coding this time. I guess you know what static linking means. So:
- in the first step, the development environment compiles the code and produces object files (.obj);
- in the second step, the development environment links the resulting object files with the provided static libraries (.lib) and produces the final executable.
As you see, I wrote 'provided', which means you have to provide this, if you know how to use 'mingw32'. You need to understand these details and be able to apply them in the development environment you are using. So even if I give you the simple code, you won't succeed in building the .exe without this knowledge of 'mingw32'.
So, in this case, you need to provide 'winmm.lib' for proper linking. The Windows Multimedia API (winmm.lib/WINMM.DLL) was introduced many years ago; it's a standard in Windows. I don't believe 'mingw32' doesn't have this static library (it should be somewhere in the 'LIB' folder where 'mingw32' is installed). Even LCC (the lightest C compiler) has it.
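For MinGW specifically, the library ships as 'libwinmm.a', and '-lwinmm' on the link line pulls it in. A hypothetical build command (the source file names are placeholders for however the article's sources are laid out) might look like:

```shell
# Compile the sources and link against the Windows multimedia library.
g++ -o recorder.exe main.cpp waveIN_simple.cpp mp3_simple.cpp -lwinmm
```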
Regards,
Ruslan
|
|
|
|
|
OK, I downloaded Visual Studio,
set lots of libs, and it's compiling,
but it's hard to create my own code.
Your example uses argc/argv; it's hard to understand.
So I tried this:
int main() {
const vector<CWaveINSimple*>& wInDevices = CWaveINSimple::GetDevices();
CWaveINSimple& WaveInDevice = CWaveINSimple::GetDevice(wInDevices[0]->GetName());
CHAR szName[MIXER_LONG_NAME_CHARS];
UINT j;
CMixer& mixer = WaveInDevice.OpenMixer();
const vector<CMixerLine*>& mLines = mixer.GetLines();
for (j = 0; j < mLines.size(); j++) {
::CharToOem(mLines[j]->GetName(), szName);
printf("%s\n", szName);
}
mixer.Close();
CWaveINSimple& device = CWaveINSimple::GetDevice(wInDevices[0]->GetName());
CMixer& _mixer = device.OpenMixer();
::CharToOem(mLines[6]->GetName(), szName); //get Microphone
CMixerLine& mixerline = _mixer.GetLine(szName);
mixerline.UnMute();
mixerline.SetVolume(0);
mixerline.Select();
_mixer.Close();
mp3Writer *mp3Wr = new mp3Writer(); //here crashes
device.Start((IReceiver *) mp3Wr);
while( !_kbhit() ) ::Sleep(100);
device.Stop();
delete mp3Wr;
CWaveINSimple::CleanUp();
}
and I get this error:
"This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information."
=/
And I get this fopen warning:
'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.'
|
|
|
|
|
You forgot the 'try {...} catch(...)'. Also, don't forget that 'lame_enc.dll' must be in the same folder as the final '.exe' file. It looks like the application is trying to load 'lame_enc.dll' and can't find it, so it throws an exception which is not caught, and you get the runtime error.
Try this code:
#include "stdafx.h"
#include "INCLUDE/mp3_simple.h"
#include "INCLUDE/waveIN_simple.h"
#include <conio.h>
class mp3Writer: public IReceiver {
private:
CMP3Simple m_mp3Enc;
FILE *f;
public:
mp3Writer(unsigned int bitrate = 128, unsigned int finalSimpleRate = 0):
m_mp3Enc(bitrate, 44100, finalSimpleRate) {
f = fopen("music.mp3", "wb");
if (f == NULL) throw "Can't create MP3 file.";
};
~mp3Writer() {
fclose(f);
};
virtual void ReceiveBuffer(LPSTR lpData, DWORD dwBytesRecorded) {
BYTE mp3Out[44100 * 4];
DWORD dwOut;
m_mp3Enc.Encode((PSHORT) lpData, dwBytesRecorded/2, mp3Out, &dwOut);
fwrite(mp3Out, dwOut, 1, f);
};
};
int main() {
char strLineName[] = "Microphone";
const UINT volume = 15;
try {
const vector<CWaveINSimple*>& wInDevices = CWaveINSimple::GetDevices();
CWaveINSimple& device = CWaveINSimple::GetDevice(wInDevices[0]->GetName());
CMixer& mixer = device.OpenMixer();
CMixerLine& mixerline = mixer.GetLine(strLineName);
mixerline.UnMute();
mixerline.SetVolume(volume);
mixerline.Select();
mixer.Close();
mp3Writer *mp3Wr = new mp3Writer();
device.Start((IReceiver *) mp3Wr);
while( !_kbhit() ) ::Sleep(100);
device.Stop();
delete mp3Wr;
}
catch (const char *err) { printf("%s\n",err); }
CWaveINSimple::CleanUp();
return(0);
}
|
|
|
|
|
Many thanks!!!
This sample works.
And I got the lame_enc.dll;
I had forgotten it.
Many thanks!
|
|
|
|
|
Hi,
I wish you would make a dialog-based app example that uses the mp3 class, as I have tried to make one several times without success.
If possible, please show me a dialog-based sample.
I'm a novice.
Thank you.
Arun
|
|
|
|
|
Hi Arun,
Unfortunately I can't. The dialog-based version is a commercial project:
http://mp3r.send2.me.uk/
Regards,
Ruslan Ciurca
|
|
|
|
|
Hi Ruslan,
OK, I understand, but I want to implement something like MP3 recording from a microphone or another source in my learning project.
If you can suggest a little guidance on how to do this, please do.
Thank you.
Arun
|
|
|
|
|
Hi Arun,
Well, it depends on what you are new to:
- programming in general
- C++
- MFC
- MP3 encoding
This also depends on what you plan to implement and how: the application's general approach/design. You can also have a look at what other people have implemented for similar things, and how.
The application's approach is important. Everything else is just technical solutions (well, sometimes difficult to implement, but at least you know what you want to achieve). And, due to lack of time, I can help you only by answering specific technical questions.
Regards,
Ruslan Ciurca
|
|
|
|
|
Hi Ruslan,
Thank you for your suggestion. I will surely be spending time on this but, at least, like you said, I know what I want to achieve.
Thank you.
Regards,
Arun
|
|
|
|
|