I want to send 4 bytes of data by calling DeviceIoControl() from my DirectShow application to my streaming minidriver. First, I call CreateFile() to get a device handle; the minidriver receives an IRP_MJ_CREATE IRP, and CreateFile() returns a valid handle. But when the app reaches DeviceIoControl(), the minidriver does not receive any SRB. DeviceIoControl() returns zero, and GetLastError() returns 50 (ERROR_NOT_SUPPORTED: "The request is not supported.").
I defined my own IOCTL command in both my driver and my app.
What causes error 50? Or is there another way to send data to a streaming minidriver from a DirectShow application?
Thanks for any help!
Viewtier Systems would like to invite Code Project community
to join the early access program (EAP) for Parabuild 2.0.
Parabuild is a software build management server that provides
continuous integration builds and stable nightly and daily
builds. Parabuild supports any project that can be built
from the command line, including MSBuild, NAnt and devenv,
and integrates with Visual SourceSafe and other version control systems.
Parabuild home page:
Viewtier offers free Parabuild licenses for every new bug found
in EAP builds of Parabuild.
I want to do performance test using Rational Test Manager.
My application under test uses CORBA for remote calls.
I can not record any script.
Should I install some specific DLLs to record CORBA calls?
If yes where can I find these DLLs?
I have a quick question. I'm trying to create a test in Application Center Test (ACT) for my ASP.NET application by using the "Record a new test" option from the New Test Wizard, but for no apparent reason ACT doesn't record anything.
- I'm Running Windows XP
- ACT is unblocked in the Windows XP firewall
- I tried to record a test case from the ACT UI and also by creating an ACT project from the Visual Studio IDE, neither of them worked.
- I uninstalled/reinstalled ACT and it still wouldn't record a test.
- ACT works in the same environment on another machine
Any suggestions or comments will be greatly appreciated.
Thanks in advance.
It turns out that ACT doesn't like it when a VPN connection is running in the background. The problem was that I had a VPN connection; once I terminated it, ACT was able to record fine.
In WinRunner, it is possible to define 'GUI checkpoints' in (almost?) any application. A GUI checkpoint is a graphical object whose properties WinRunner can retrieve. I'm very curious how WinRunner does that. Does anyone know more about this?
If you have a little spare time, and would like to spend it translating one of our online services[^] into your native language, let us know. We'd be glad to offer you a complimentary subscription in return for your efforts (I can provide detailed terms on request).
Generally, any left-to-right language (Latin, Cyrillic, Asian...) will help, except for the following: English, Slovak, Serbian, Croatian. There are also some other languages for which we already have a translation, but not for all products, so feel free to ask about yours. The UI usually consists of only a few words & phrases, so it will not take you more than a few minutes.
If you're interested, or have a question, just reply to this thread, or use the [Email] link at the bottom of this message to send a direct e-mail to me.
I am a tester on a development team. We are working on a project written mainly in Delphi, with parts in C and C++ (with MFC), that uses MS SQL (SQL Server 2000 or MSDE).
I was instructed to search for test automation options: packages, tools, applications, etc. Our most critical issue is regression testing.
What do you recommend?
kfaday wrote: what do you mean with: What are your requirements ?
Well, do you need complete, automatic testing with no human intervention? This requires that the testing tool capture screen shots, output file/database interactions, etc., and compare them to a known "good" run.
OTOH, if you just want something that will generate some mouse movements, clicks, and keyboard input, then you will have to manually inspect each screen and each db write, to make sure they are correct.
Most test environments use some kind of "scripted" technique to run thru regression tests for previous bug fixes, etc. However, sometimes human interaction is required. For example, suppose that a bug was fixed related to excessive flickering when a dialog was resized. It would be difficult to check the fix in a completely automatic way. Even describing it takes longer than just grabbing the dialog and resizing it!
Finally, you have to look at how many screens there are - are they dense, packed with controls? Is another application invoked? Are there external dependencies, such as waiting for a file to arrive on a server before some action is performed? Is it client/server, so that both the client and the server must be set up to reproduce the bug?
I would also suggest looking thru your bug log and asking, "If I had a test tool, what would it have to do to recreate this bug?" The types of bugs encountered are heavily dependent on the type of application: whether it has a GUI, is client/server, etc.
Whatever you do, don't just buy something. Insist on a 30 or even better 60 day evaluation. Take your bug log and try some of the nasty ones. You might find that an inexpensive tool is better for your environment than an expensive one.
If you do go with something like WinRunner, you need to get some training. Don't let anybody talk you out of this.
I'm new to unit testing and I need some clarification. I want to test a class. When I create an instance of this class with the default constructor, I need to check 3 values and make sure they are initialised to the correct default values (for instance, a name property).
Do I have to code three tests (one for each value to test), or can I create one test for the default construction of the instance and check all 3 values in that same test?
There is no good reason to test each value separately. Your "unit" consists of the class itself, I assume, and your unit test should verify the class as a whole. If there is something unique about the way values are accessed, then the test should take that into account. But if they're accessible as a group, go ahead.
Your test also should include verification of the methods within the class, and should be structured so that all private methods are utilized during the test. If the class has any dependencies on other classes, make sure that you exercise them thoroughly.
Ideally, you should also try testing error conditions; try things you never intended the class to do and check how gracefully it fails. Use out-of-range values, incorrect variable types, and edge-of-limit values if there are any limit tests required. As a general guide, try to imagine what the dumbest consumer of your class might attempt to do to it, then duplicate that in your test. We used to call that "dummy-proofing" before there was such a thing as formal unit testing, and it worked remarkably well.
"...putting all your eggs in one basket along with your bowling ball and gym clothes only gets you scrambled eggs and an extra laundry day... " - Jeffry J. Brickley