|
Hi, could you please convert this to C#?
Please help me.
|
|
|
|
|
Hi, and thank you for your excellent source code.
I want to write a similar program and I need some help from an expert.
My question is: why do you shift pSamples[idx] right by 6 in the following code?
rm = abs(pSamples[idx+1]) >> 6
And when I modify this code for mono (nChannels == 1) or for 8 bits per sample,
what should the 6 be changed to?
And how should I write my code so it works correctly for all different values of the
channel, samples-per-second, and bits-per-sample parameters?
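A sketch of the idea behind the shift (my code, not the article author's): the samples are 16-bit signed, so |sample| is at most 32768, and shifting right by 6 scales that down to a 0..512 meter range (32768 >> 6 == 512). For mono you would step the sample index by 1 instead of 2, and for 8-bit PCM (which is unsigned, centred on 128) you would scale the 0..128 magnitude up instead:

```cpp
#include <cassert>
#include <cstdlib>

// Map one sample's magnitude onto the same 0..512 meter scale the
// article uses for 16-bit samples. Function names are mine.
int MeterLevel16(short s)          // 16-bit signed PCM, -32768..32767
{
    return std::abs((int)s) >> 6;  // 32768 >> 6 == 512
}

int MeterLevel8(unsigned char s)   // 8-bit PCM is unsigned, centred on 128
{
    int centred = (int)s - 128;    // -128..127
    return std::abs(centred) << 2; // scale 0..128 up to roughly 0..512
}
```

With this framing, the "6" is not magic: it is whatever shift maps the sample format's full magnitude onto the meter range you want to display.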
hello c++
|
|
|
|
|
int nu = (int)(log(sampleCount)/log(2));
I get an error on the line above.
|
|
|
|
|
|
Replace it with this line (note the parentheses: the whole quotient is cast, and both arguments to log() are doubles):
int nu = (int)(log((double)sampleCount) / log(2.0));
|
|
|
|
|
Alternatively, one might change the declaration of sampleCount from int to double and replace 2 with 2.0, so the arguments passed to log() are already doubles.
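As a side note, since the value wanted here is the FFT order floor(log2(sampleCount)), an integer-only version (my sketch, not from the posted code) avoids the floating-point calls and their rounding issues entirely:

```cpp
#include <cassert>

// Count how many times the sample count can be halved,
// i.e. floor(log2(sampleCount)) for sampleCount >= 1.
int Log2Floor(int sampleCount)
{
    int nu = 0;
    while (sampleCount > 1)
    {
        sampleCount >>= 1;
        ++nu;
    }
    return nu;
}
```

For power-of-two buffer sizes (the usual case for an FFT) this gives exactly the same result as the log()/log(2) expression, with no cast or precision concerns.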
|
|
|
|
|
|
Hi, I got a program from CodeProject that plays wave files with DirectSound. I want to modify it and add spectrum support, but I have some questions about it; can you help me resolve them?
Question one:
The program opens a wave file and reads its format information into a WAVEFORMATEX struct, whose members are wFormatTag=1, nChannels=2, nSamplesPerSec=44100, nAvgBytesPerSec=176400, nBlockAlign=4, wBitsPerSample=16, cbSize=0. The program then initializes the DirectSoundBuffer with a size of 176400. Must the size be 176400, or can it be set to any number?
Question two:
I added code to log the DirectSoundBuffer's current play position and current write position to a listbox control; the maximum value displayed for each of them can be bigger than 176400. How can I get the right start position at which to sample the buffer?
code like:
DWORD dwCurPlayPos, dwCurWritePos;
m_lpDSB->GetCurrentPosition(&dwCurPlayPos, &dwCurWritePos);
Question three:
If I get the right start position in the buffer, what sample size would you suggest?
My modified code can be downloaded here.
regards,
jacky_zz
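On the buffer-position questions above: if I read the DirectSound documentation correctly, GetCurrentPosition returns byte offsets *within* the circular buffer (so they wrap at the buffer size rather than growing without bound), and finding where a sample window starts is just modulo arithmetic. A small sketch, with names of my own invention:

```cpp
#include <cassert>

// Given a play cursor in a circular buffer of bufSize bytes, find the
// offset where a window of sampleBytes ending at the cursor begins,
// wrapping around the start of the buffer when necessary.
unsigned WindowStart(unsigned playPos, unsigned sampleBytes, unsigned bufSize)
{
    playPos %= bufSize;  // cursors are offsets into the buffer
    return (playPos + bufSize - sampleBytes % bufSize) % bufSize;
}
```

For the sample size itself, a power-of-two block (e.g. 2048 or 4096 bytes) is the conventional choice, since the FFT in this article expects power-of-two sample counts.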
|
|
|
|
|
This is a useful piece of software:
http://www.zeitnitz.de/Christian/Scope/Scope_en.html
|
|
|
|
|
When I turn the sound player's volume (Windows Media Player 9 or later, or Foobar2000) down from 100% to zero, the spectrum is lower than it was before I changed the system volume. Why?
modified on Tuesday, August 12, 2008 5:58 AM
|
|
|
|
|
Hi,
This looks like a great project, but I cannot find the source or even the application. What's up?
Thanks,
Derek
|
|
|
|
|
Hi,
I want to play two sound files at the same time and listen to them separately
on the left and right speakers; that is, the sound of one file should play on the left
speaker and the second file on the right speaker.
Thank you in advance,
malik
|
|
|
|
|
Use waveOutWrite and put the data (the average of the left and right channels) of the first file in the left channel, and the data of the second file in the right channel. Refer to MSDN for details.
|
|
|
|
|
Hi,
I'm getting the error 'Error unable to set recording WAVEOUT device' when I click the Sample button. I've been able to follow what is happening but am unsure what it means.
In the SetInputDevice() function it has this if statement:
if (mixerGetLineControls((HMIXEROBJ)hMixer, &mxlc, MIXER_OBJECTF_HMIXER | MIXER_GETLINECONTROLSF_ONEBYTYPE) != MMSYSERR_NOERROR)
That is where it fails, returning a value of MIXERR_INVALCONTROL (1025), and I'm not sure what to do with that. My sound card is a really basic setup: an ESS AudioDrive.
Any help would be greatly appreciated.
|
|
|
|
|
Hi,
The sound card sampling detection isn't perfect. Some sound cards may not have the WAVEOUT device, may have it named differently, or, as in your case, may not have a mixer device.
That is why it fails for you; short of you getting some decent hardware, I can't offer you a software solution.
S.
|
|
|
|
|
The mixer operation is not necessary, so just remove the "break;" on line 349 of "scope.cpp".
|
|
|
|
|
23Dec2008
Greetings.
I have two questions.
1) I really have no idea how to recompile the source code
into an exe after editing out the "break;" line.
Might someone have a moment to tell me how to do that?
2) There is now a version 4 release of this audio scope.
(See the posting below about the new version at SourceForge.)
Might someone know how to resolve this same error message
with the version 4 release? The v4 release seems quite good.
I have three notebooks. One is 10 years old, and
I am trying to make it as multimedia capable as possible.
This 10 year old notebook gives me the
"Error unable to set recording WAVEOUT device" error message.
Any replies appreciated.
Thank you.
Regards,
AEN
Æ
|
|
|
|
|
It uses a high percentage of CPU; look at Winamp, whose CPU usage is 0.
|
|
|
|
|
Well, it's not hard to guess why, so I'll spell it out for you:
1) Winamp does not sample from the audio card; the sample information is
stored in the compressed audio stream, so there is no hardware API call to the audio hardware.
2) They have probably used some optimised C/assembly routine to crank out the FFT for the audio data.
3) Its little spectrum graphics are so tiny that the video hardware can update them quickly.
Now, looking at my example, it's clear I've written it to show how to do it,
and how to do it simply.
Most of the CPU/work is actually spent on the graphics output, and again it's done that way to keep things simple so you can understand it, which means using the slow Windows GDI.
Look here for a better example of turning this simple example into something more demanding:
http://sourceforge.net/project/downloading.php?group_id=61001&use_mirror=optusnet&filename=audioscope_v4.zip&95969022
And yes, it also consumes CPU, again mainly due to the graphics output and Windows GDI; of course, your video hardware may vary.
Having said that, the FFT and sampling routines are quite quick, even if they are not heavily optimised.
Steve.
|
|
|
|
|
SetInputDevice fails on the line
if (mixerGetLineInfo((HMIXEROBJ)hMixer, &mxl, MIXER_GETLINEINFOF_COMPONENTTYPE) != MMSYSERR_NOERROR)
under Windows Vista.
I'm not entirely sure what SetInputDevice is doing, but it looks like it's just making sure all the input settings are correct.
Luckily(?), the project seems to work fine with the calls in the IDC_RECORD case commented out.
Any comments or fixes?
|
|
|
|
|
Hello from Portugal
I have modified the code of the oscilloscope application to make an application that generates blobs on a monitor for a dance choreography with Aibos. I need to reference your work because I used parts of it. What kind of reference can I make about you? Have you ever published anything about the oscilloscope application, a scientific article for example? Can you give me more information than what is on the site? Please send me an answer as fast as you can, because I need to publish my article. Sorry for any trouble, and thank you for the help you gave me with the code of your application.
bite@portugalmail.pt
|
|
|
|
|
Hi guys.
Sorry for the barrage of emails lately, but I really do need help getting this done so that I can move on to other parts of my project. If you could spare a minute, I would really appreciate it, thank you.
For those of you who don't already know my problem: I am trying to incorporate this (mainly the input magnitude and FFT spectrum results) into an application I have already implemented (a VC++ Express Edition Windows Forms app). I have split Steven's code into three main sub-routines/functions: StartRec, StopRec, and a callback function WaveInProc() (associated with waveInOpen).
I don't know if this approach is "correct" or if it will work, but I am now stuck at the beginning: the StartRec function! It seems that I have an <undefined value=""> for hMixer in the SetInputDevice subroutine.
HMIXER hMixer = NULL;
int inDevIdx = -1;
if (mixerOpen(&hMixer, 0, 0, NULL, MIXER_OBJECTF_MIXER) != MMSYSERR_NOERROR)
{
return FALSE;
}
However, when I debug Steven's original code, hMixer first has a value of 0xcccccccc and then "0x00000000 {unused = ???}" (whereas I get an <undefined value="">). I really don't know what is happening here; the exact same code is producing these two different results. The answer might be really simple, but I really don't know it, and I couldn't find much on the internet that explains this difference.
Please help if you have the time. Thank you very very much.
Regards,
Jeff
|
|
|
|
|
This call:
HMIXER hMixer = NULL;
int inDevIdx = -1;
if (mixerOpen(&hMixer, 0, 0, NULL, MIXER_OBJECTF_MIXER) != MMSYSERR_NOERROR)
{
return FALSE;
}
This is correct; it opens a mixer device. If it fails, then you need to check the return value:
Return Values
Returns MMSYSERR_NOERROR if successful or an error otherwise. Possible error values include the following:
Value Description
MMSYSERR_ALLOCATED The specified resource is already allocated by the maximum number of clients possible.
MMSYSERR_BADDEVICEID The uMxId parameter specifies an invalid device identifier.
MMSYSERR_INVALFLAG One or more flags are invalid.
MMSYSERR_INVALHANDLE The uMxId parameter specifies an invalid handle.
MMSYSERR_INVALPARAM One or more parameters are invalid.
MMSYSERR_NODRIVER No mixer device is available for the object specified by uMxId. Note that the location referenced by uMxId will also contain the value -1.
MMSYSERR_NOMEM Unable to allocate resources.
The initial hMixer value of 0xcccccccc is the debugger filling uninitialised memory with a recognisable pattern. After the handle/variable is passed by address, it looks like mixerOpen() is setting the hMixer variable to NULL due to an error.
Review the mixerOpen() API function on www.msdn.com or
http://msdn2.microsoft.com/en-us/library/ms712134.aspx
|
|
|
|
|
Hi Steven.
Thank you very much for your quick reply, really appreciate it. I am also really sorry about my late response, been swamped with other work lately.
I think I just know too little about this. I did review MSDN and various sources in order to study your code, and I think I understand it. But, maybe because it is based on a dialog box, I just can't seem to incorporate it into what I already have.
"All" I need are the VALUES of the microphone level (MaxLeft and MaxRight in your code) and the spectrum (pMags[idx]). I have been tweaking and studying your code (and the relevant API calls) for weeks, and I still can't get past the HMIXER bit, not to mention obtaining the values. I really don't want to give this up, but I really don't have that much time to spare for just this component. If it is not too much to ask, could you give me some pointers on how I can approach this?
Thanks again.
Regards,
Jeff
-- modified at 6:42 Friday 15th June, 2007
These are the parts of Steven's code I'm using:
// Audio Scope Written By Steven De Toni
#include "stdafx.h"
#include "aud.h"
#include <stdio.h>
#include <stdarg.h>
#include <windows.h>
#include <commctrl.h>
#include <mmsystem.h>
#include "FFTransform.h"
//#include "resource.h"
// --- Global Variables ---
HINSTANCE HInst = NULL;
void CALLBACK waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance,
DWORD dwParam1, DWORD dwParam2);
void OutputDebugMsg (const char* pszFormat, ...)
{
char buf[1024];
va_list arglist;
va_start(arglist, pszFormat);
vsprintf(buf, pszFormat, arglist);
va_end(arglist);
strcat(buf, "\n");
OutputDebugString(buf);
}
void OutputMsgBox (const char* pszFormat, ...)
{
char buf[1024];
va_list arglist;
va_start(arglist, pszFormat);
vsprintf(buf, pszFormat, arglist);
va_end(arglist);
strcat(buf, "\n");
MessageBox (NULL, buf, "", MB_ICONSTOP);
}
/*********************** PrintWaveErrorMsg() **************************
* Retrieves and displays an error message for the passed Wave In error
* number. It does this using mciGetErrorString().
*************************************************************************/
void PrintWaveErrorMsg(DWORD err, TCHAR * str)
{
char buffer[128];
OutputMsgBox ("ERROR 0x%08X: %s\r\n", err, str);
if (mciGetErrorString(err, &buffer[0], sizeof(buffer)))
{
OutputMsgBox ("%s\r\n", &buffer[0]);
}
else
{
OutputMsgBox ("0x%08X returned!\r\n", err);
}
}
/*
What a long winded way to select recording input!
MIXERLINE_COMPONENTTYPE_SRC_ANALOG
MIXERLINE_COMPONENTTYPE_SRC_AUXILIARY
MIXERLINE_COMPONENTTYPE_SRC_COMPACTDISC
MIXERLINE_COMPONENTTYPE_SRC_DIGITAL
MIXERLINE_COMPONENTTYPE_SRC_LINE
MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE
MIXERLINE_COMPONENTTYPE_SRC_PCSPEAKER
MIXERLINE_COMPONENTTYPE_SRC_SYNTHESIZER
MIXERLINE_COMPONENTTYPE_SRC_TELEPHONE
MIXERLINE_COMPONENTTYPE_SRC_UNDEFINED
MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT
*/
BOOL SetInputDevice (unsigned int inDev)
{
HMIXER hMixer = NULL;
int inDevIdx = -1;
if ((mixerOpen(&hMixer, 0, 0, NULL, MIXER_OBJECTF_MIXER)) != MMSYSERR_NOERROR)
{
return FALSE;
}
// get dwLineID
MIXERLINE mxl;
mxl.cbStruct = sizeof(MIXERLINE);
mxl.dwComponentType = MIXERLINE_COMPONENTTYPE_DST_WAVEIN;
if (mixerGetLineInfo((HMIXEROBJ)hMixer, &mxl, MIXER_OBJECTF_HMIXER | MIXER_GETLINEINFOF_COMPONENTTYPE) != MMSYSERR_NOERROR)
{
mixerClose (hMixer);
return FALSE;
}
// get dwControlID
MIXERCONTROL mxc;
MIXERLINECONTROLS mxlc;
DWORD dwControlType = MIXERCONTROL_CONTROLTYPE_MIXER;
mxlc.cbStruct = sizeof(MIXERLINECONTROLS);
mxlc.dwLineID = mxl.dwLineID;
mxlc.dwControlType = dwControlType;
mxlc.cControls = 0;
mxlc.cbmxctrl = sizeof(MIXERCONTROL);
mxlc.pamxctrl = &mxc;
if (mixerGetLineControls((HMIXEROBJ)hMixer, &mxlc, MIXER_OBJECTF_HMIXER | MIXER_GETLINECONTROLSF_ONEBYTYPE) != MMSYSERR_NOERROR)
{
// no mixer, try MUX
dwControlType = MIXERCONTROL_CONTROLTYPE_MUX;
mxlc.cbStruct = sizeof(MIXERLINECONTROLS);
mxlc.dwLineID = mxl.dwLineID;
mxlc.dwControlType = dwControlType;
mxlc.cControls = 0;
mxlc.cbmxctrl = sizeof(MIXERCONTROL);
mxlc.pamxctrl = &mxc;
if (mixerGetLineControls((HMIXEROBJ)hMixer, &mxlc, MIXER_OBJECTF_HMIXER | MIXER_GETLINECONTROLSF_ONEBYTYPE) != MMSYSERR_NOERROR)
{
mixerClose (hMixer);
return FALSE;
}
}
if (mxc.cMultipleItems <= 0)
{
mixerClose (hMixer);
return FALSE;
}
// get the index of the inDevice from available controls
MIXERCONTROLDETAILS_LISTTEXT* pmxcdSelectText = new MIXERCONTROLDETAILS_LISTTEXT[mxc.cMultipleItems];
if (pmxcdSelectText != NULL)
{
MIXERCONTROLDETAILS mxcd;
mxcd.cbStruct = sizeof(MIXERCONTROLDETAILS);
mxcd.dwControlID = mxc.dwControlID;
mxcd.cChannels = 1;
mxcd.cMultipleItems = mxc.cMultipleItems;
mxcd.cbDetails = sizeof(MIXERCONTROLDETAILS_LISTTEXT);
mxcd.paDetails = pmxcdSelectText;
if (mixerGetControlDetails ((HMIXEROBJ)hMixer, &mxcd, MIXER_OBJECTF_HMIXER | MIXER_GETCONTROLDETAILSF_LISTTEXT) == MMSYSERR_NOERROR)
{
// determine which controls the inputDevice source line
DWORD dwi;
for (dwi = 0; dwi < mxc.cMultipleItems; dwi++)
{
// get the line information
MIXERLINE mxl;
mxl.cbStruct = sizeof(MIXERLINE);
mxl.dwLineID = pmxcdSelectText[dwi].dwParam1;
if (mixerGetLineInfo ((HMIXEROBJ)hMixer, &mxl, MIXER_OBJECTF_HMIXER | MIXER_GETLINEINFOF_LINEID) == MMSYSERR_NOERROR && mxl.dwComponentType == inDev)
{
// found, dwi is the index.
inDevIdx = dwi;
// break;
}
}
}
delete []pmxcdSelectText;
}
if (inDevIdx < 0)
{
mixerClose (hMixer);
return FALSE;
}
// get all the values first
MIXERCONTROLDETAILS_BOOLEAN* pmxcdSelectValue = new MIXERCONTROLDETAILS_BOOLEAN[mxc.cMultipleItems];
if (pmxcdSelectValue != NULL)
{
MIXERCONTROLDETAILS mxcd;
mxcd.cbStruct = sizeof(MIXERCONTROLDETAILS);
mxcd.dwControlID = mxc.dwControlID;
mxcd.cChannels = 1;
mxcd.cMultipleItems = mxc.cMultipleItems;
mxcd.cbDetails = sizeof(MIXERCONTROLDETAILS_BOOLEAN);
mxcd.paDetails = pmxcdSelectValue;
if (mixerGetControlDetails((HMIXEROBJ)hMixer, &mxcd, MIXER_OBJECTF_HMIXER | MIXER_GETCONTROLDETAILSF_VALUE) == MMSYSERR_NOERROR)
{
// ASSERT(m_dwControlType == MIXERCONTROL_CONTROLTYPE_MIXER || m_dwControlType == MIXERCONTROL_CONTROLTYPE_MUX);
// MUX restricts the line selection to one source line at a time.
if (dwControlType == MIXERCONTROL_CONTROLTYPE_MUX)
{
ZeroMemory(pmxcdSelectValue, mxc.cMultipleItems * sizeof(MIXERCONTROLDETAILS_BOOLEAN));
}
// Turn on this input device
pmxcdSelectValue[inDevIdx].fValue = 0x1;
mxcd.cbStruct = sizeof(MIXERCONTROLDETAILS);
mxcd.dwControlID = mxc.dwControlID;
mxcd.cChannels = 1;
mxcd.cMultipleItems = mxc.cMultipleItems;
mxcd.cbDetails = sizeof(MIXERCONTROLDETAILS_BOOLEAN);
mxcd.paDetails = pmxcdSelectValue;
if (mixerSetControlDetails ((HMIXEROBJ)hMixer, &mxcd, MIXER_OBJECTF_HMIXER | MIXER_SETCONTROLDETAILSF_VALUE) != MMSYSERR_NOERROR)
{
delete []pmxcdSelectValue;
mixerClose (hMixer);
return FALSE;
}
}
delete []pmxcdSelectValue;
}
mixerClose (hMixer);
return TRUE;
}
// ***********************************************************************************
#define SPECSCOPEWIDTH 10
static BOOL inRecord = FALSE;
static HWAVEIN waveInHandle = NULL;
static WAVEHDR waveHeader[2];
static WAVEFORMATEX waveFormat;
static FFTransform* pFFTrans = NULL;
static FFTransform* pFFTransStereo = NULL;
static SampleIter* pSampleIter = NULL;
static RECT drawArea;
static HDC HdblDC = NULL;
static HBITMAP HdblOldBitmap = NULL;
static int rps = 1;
///////////////////////////////////////////////////////////////////////////////
Caud::Caud(){}
void Caud::StopRec(){
// Stop recording and tell the driver to unqueue/return all of our WAVEHDRs.
// The driver will return any partially filled buffer that was currently
// recording. Because we use waveInReset() instead of waveInStop(),
// all of the other WAVEHDRs will also be returned via MM_WIM_DONE too
waveInReset(waveInHandle);
waveInUnprepareHeader (waveInHandle, &waveHeader[0], sizeof(WAVEHDR));
waveInUnprepareHeader (waveInHandle, &waveHeader[1], sizeof(WAVEHDR));
waveInClose(waveInHandle);
VirtualFree(waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
inRecord = FALSE;
SetInputDevice (MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE);
if (pFFTrans != NULL)
{
delete pFFTrans;
pFFTrans = NULL;
}
if (pFFTransStereo != NULL)
{
delete pFFTransStereo;
pFFTransStereo = NULL;
}
if (pSampleIter != NULL)
{
delete pSampleIter;
pSampleIter = NULL;
}
}
////////////////////////////////////////////////////////////////////////////////////////////////////
void Caud::StartRec() {
//InitCommonControls ();
while (1) {
// Set the recording input ...
//if (SetInputDevice (MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT) == FALSE)
//{
if (SetInputDevice (MIXERLINE_COMPONENTTYPE_SRC_ANALOG) == FALSE)
{
if (SetInputDevice (MIXERLINE_COMPONENTTYPE_SRC_LAST) == FALSE)
{
OutputMsgBox ("Error unable to set recording WAVEOUT device");
break;
}
}
//}
MMRESULT err;
// Clear out both of our WAVEHDRs. At the very least, waveInPrepareHeader() expects the dwFlags field to be cleared
ZeroMemory(&waveHeader[0], sizeof(WAVEHDR) * 2);
// Initialize the WAVEFORMATEX for 16-bit, 44KHz, stereo. That's what I want to record
waveFormat.wFormatTag = WAVE_FORMAT_PCM;
waveFormat.nChannels = 2;
waveFormat.nSamplesPerSec = 44100;
waveFormat.wBitsPerSample = 16;
waveFormat.nBlockAlign = waveFormat.nChannels * (waveFormat.wBitsPerSample/8);
waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign;
waveFormat.cbSize = 0;
// Open the default WAVE In Device, specifying my callback. Note that if this device doesn't
// support 16-bit, 44KHz, stereo recording, then Windows will attempt to open another device
// that does. So don't make any assumptions about the name of the device that opens. After
// waveInOpen returns the handle, use waveInGetID to fetch its ID, and then waveInGetDevCaps
// to retrieve the actual name
if ((err = waveInOpen(&waveInHandle, WAVE_MAPPER, &waveFormat, (DWORD)waveInProc, 0, CALLBACK_FUNCTION )) != 0)
{
PrintWaveErrorMsg (err, "Can't open WAVE In Device!");
break;
}
// Allocate, prepare, and queue two buffers that the driver can use to record blocks of audio data.
// (ie, We're using double-buffering. You can use more buffers if you'd like, and you should do that
// if you suspect that you may lag the driver when you're writing a buffer to disk and are too slow
// to requeue it with waveInAddBuffer. With more buffers, you can take your time requeueing
// each).
//
// I'll allocate 2 buffers large enough to hold 2 seconds worth of waveform data at 44Khz. NOTE:
// Just to make it easy, I'll use 1 call to VirtualAlloc to allocate both buffers, but you can
// use 2 separate calls since the buffers do NOT need to be contiguous. You should do the latter if
// using many, large buffers
waveHeader[1].dwBufferLength = waveHeader[0].dwBufferLength = 512;
if (!(waveHeader[0].lpData = (char*)VirtualAlloc(0, (waveHeader[0].dwBufferLength << 1), MEM_COMMIT, PAGE_READWRITE)))
{
OutputMsgBox ("ERROR: Can't allocate memory for WAVE buffer!\n");
waveInClose(waveInHandle);
break;
}
// Fill in WAVEHDR fields for buffer starting address. We've already filled in the size fields above */
waveHeader[1].lpData = waveHeader[0].lpData + waveHeader[0].dwBufferLength;
// Leave other WAVEHDR fields at 0
// Prepare the 2 WAVEHDR's
if ((err = waveInPrepareHeader(waveInHandle, &waveHeader[0], sizeof(WAVEHDR))))
{
waveInClose(waveInHandle);
OutputMsgBox ("Error preparing WAVEHDR 1! -- %08X\n", err);
VirtualFree (waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
break;
}
if ((err = waveInPrepareHeader(waveInHandle, &waveHeader[1], sizeof(WAVEHDR))))
{
waveInClose(waveInHandle);
OutputMsgBox ("Error preparing WAVEHDR 2! -- %08X\n", err);
VirtualFree (waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
break;
}
// Queue first WAVEHDR (recording hasn't started yet)
if ((err = waveInAddBuffer(waveInHandle, &waveHeader[0], sizeof(WAVEHDR))))
{
waveInClose(waveInHandle);
OutputMsgBox ("Error queueing WAVEHDR 1! -- %08X\n", err);
VirtualFree (waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
break;
}
// Queue second WAVEHDR
if ((err = waveInAddBuffer(waveInHandle, &waveHeader[1], sizeof(WAVEHDR))))
{
waveInClose(waveInHandle);
OutputMsgBox ("Error queueing WAVEHDR 2! -- %08X\n", err);
VirtualFree (waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
break;
}
// Start recording
if ((err = waveInStart(waveInHandle)))
{
OutputMsgBox ("Error starting record! -- %08X\n", err);
waveInClose(waveInHandle);
VirtualFree (waveHeader[0].lpData, (waveHeader[0].dwBufferLength << 1), MEM_RELEASE);
break;
}
// prepare the DSP processing objects
pFFTrans = new FFTransform (waveFormat.nSamplesPerSec, waveHeader[0].dwBufferLength/(waveFormat.nChannels * (waveFormat.wBitsPerSample/8)));
pFFTransStereo = new FFTransform (waveFormat.nSamplesPerSec, waveHeader[0].dwBufferLength/(waveFormat.wBitsPerSample/8));
pSampleIter = new SampleIter();
inRecord = TRUE;
break;
}
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
void CALLBACK waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance,
DWORD dwParam1, DWORD dwParam2) {
if (uMsg == WIM_DATA){
WAVEHDR* pHeader = (WAVEHDR*) dwParam1;
static int maxLeftSave = 0;
static int maxRightSave = 0;
static int refresh = 0;
refresh++;
if (pHeader->dwBytesRecorded > 0)
{
// OutputDebugMsg ("Recorded %d bytes\n", pHeader->dwBytesRecorded);
int idx = 0;
short int* pSamples = (short int*)pHeader->lpData;
int sampleCnt = pHeader->dwBytesRecorded/(sizeof(short int));
// ------------------------------------------------------------------
// ******* Calculate Power Meter *******
int lm = 0, rm = 0, maxLeft = 0, maxRight = 0;
for (idx = 0; idx < sampleCnt; idx += 2)
{
if ((lm = abs(pSamples[idx]) >> 6) > maxLeft)
maxLeft = lm;
if ((rm = abs(pSamples[idx+1]) >> 6) > maxRight)
maxRight = rm;
}
if (maxLeft < maxLeftSave)
{
if ((maxLeft = maxLeftSave - 4) < 0)
maxLeft = 0;
}
if (maxRight < maxRightSave)
{
if ((maxRight = maxRightSave - 4) < 0)
maxRight = 0;
}
maxLeftSave = maxLeft;
maxRightSave = maxRight;
}
if (inRecord) // Are we still recording?
{
// Yes. Now we need to requeue this buffer so the driver can use it for another block of audio
// data. NOTE: We shouldn't need to waveInPrepareHeader() a WAVEHDR that has already been prepared once
waveInAddBuffer(waveInHandle, pHeader, sizeof(WAVEHDR));
}
}
}
////////////////////////////////////////////////////////////////
I am only aiming to get the input values (MaxLeft and MaxRight) for now. First I call StartRec() at the beginning and StopRec() at the end of my own program. I am probably doing something stupidly wrong here. Anyway, any help would be MUCH appreciated. Thank you lots.
Jeff
|
|
|
|
|
Hi Steven!
Why can't I download the source file from http://www.codeproject.com/audio/oscilloscope/oscilloscope_src.zip ?
Could you send it to my e-mail, "david_hsieh_173@hotmail.com"?
Thanks!
|
|
|
|
|