We want to run benchmark tests on our testing PC.
We were surprised that the benchmark sometimes shows strong deviations (sometimes only 20% of the best benchmark value, across several Windows versions).
I cannot explain what happens there.
We even made an image of the PC's hard drive that we reinstall every time we run the (two) benchmarks.
Still we have those deviations.
The next time I do some testing, I will check that the following services are not running:
- the index service
- the defragmentation service
- the antivirus service
Do you have any ideas what has to be stopped, changed, or run in addition to the above to make the benchmark more consistent?
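For what it's worth, the first two of those can be scripted before each run. This is only a sketch: the service and scheduled-task names below vary between Windows versions, and the antivirus service name depends entirely on which product is installed, so check yours first.

```
rem Stop and disable the Windows Search indexer (sketch; run elevated)
sc stop "WSearch"
sc config "WSearch" start= disabled
rem Disable the scheduled defrag task (task path varies by Windows version)
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable
```

Running something like this from the restored image before each benchmark pass at least removes those two variables consistently.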
We did the benchmark tests with two different benchmark tools and both showed those strong deviations, so I do not think
it is a problem with the benchmark tools.
The way the benchmark is run is defined in a Word document.
Even if we do not understand exactly what is being measured: if the input is the same, the output should also be the same.
My job now is to find out what went wrong, because if the (benchmark) PC does not always produce (more or less) the same result,
how can our software be measured?
We did the benchmark tests with two different benchmark tools
Maybe you should talk to the people who provided the tools. Benchmarking computers is a difficult activity at the best of times because there are so many variables to be taken into account, particularly with multi-tasking operating systems.
One of these days I'm going to think of a really clever signature.
Hi, I'd like to know the best practice for this situation:
Once upon a time, there was a project (A) whose output is a mixture of EXE and DLL files. One needs to create an installation package using InstallShield and also to have the installation package in version control. The build should run on the build server.
As far as I know, there are two usable ways to set this up:
1. One solution S with project A and a second InstallShield project. The input for the InstallShield project would be the primary output of project A.
The build solution BS will include this solution S.
The DLL files are not in source control; however, it is possible to build them.
2. One solution S1 with project A, which after the build process copies its output to some destination. A second solution S2 with the setup project takes its input from that destination and, as a pre-build action, checks in the DLL files. The build solution BS will include this solution S2. The DLL files are in source control (preferred).
I don't know whether it is possible to set a pre-build action for InstallShield (LT version).
If you have ideas on how to solve things like this, please share them. Maybe there is a really simple solution that I don't know about. Thanks
there was a project (A), whose output is mixture of exe and dll files.
The dll files are not in source control, however there is a possibility to build
Either those two statements are mutually exclusive, or you think that the DLLs need to be in source control after the build completes. On the second point, the DLLs no more need to be in source control than the install executable does.
Ignoring that and looking at a high level view then...
A configuration management (CM) person is a person whose job, or at least a principal role, is to do builds. And only do builds (the role).
Given the following two possibilities:
A - Two solutions: one used only by developers, and a second that includes the projects of the first solution in addition to the InstallShield project.
B - One solution that includes the common developer projects and InstallShield.
With a CM person/role, use A; if not, B. Keep in mind that InstallShield license costs might by themselves justify at least a CM role (versus every developer managing InstallShield).
I have just started using Mercurial, for a website project in this case.
It is all done on a local repo (backed up), not using BitBucket (or similar host).
This is my first step into version control beyond keeping separate folders: one for production (essentially a static folder representing the current live state) and one for development (both feature updates and content). The prior approach was beginning to give me all kinds of headaches when I had an incomplete feature but needed to update the content; keeping the two folders in sync (for content only) was horrid.
I started by doing the features in a clone and content updates in the original, then pulling the feature clone into the original (managing conflicts was a pain, but not really that bad). I have now switched to using named branches and it's working absolutely fine. I haven't tried bookmarks yet; I simply haven't found a reason to understand what they do differently from named branches.
The one thing I can't work out is how to move single changesets from one branch to another. Let's say I have a feature branch that has 5 changesets but is incomplete, and then I need to make a bug fix to the live site. I make the bug fix in the default branch and want to push it into the feature branch, but without merging - otherwise the merge would bring in an incomplete (broken) feature!
For clarity: I need to get the bug fix out to the live server, and whilst I might do the work in a new branch, once done it would get merged back into default, so once again I'd have two branches: default and the feature branch.
I think, from descriptions I have read (but haven't tried), that this kind of thing is pretty easy when you are using BitBucket (or a similar host): you just pull the specific changeset (and its history) into your local clone. So I imagine it's also easy between local clones (again, I haven't tried it), but for love nor money I can't work out how to do it within a single repo, i.e. between local branches.
I haven't used Mercurial as much as I've used SVN... but in SVN you can merge specific revision numbers instead of doing everything at once. That would get you exactly what you want, but I'm not sure how it's done in Mercurial. In SVN, all you would have to do is figure out which changes you want by looking at the log of your branch, write down the revision numbers, then go to the trunk and do a manual merge; on that merge you can specify only the revision numbers you want.
I know this only directly applies to SVN, but you may be able to work out a way to do it in Mercurial based on how it's done in SVN.
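For what it's worth, Mercurial does have a built-in equivalent for this within a single repo: `hg graft` (available since Mercurial 2.0) copies an individual changeset onto the current branch without a merge. A sketch, with `<rev>` standing in for the bug fix's revision number from the log:

```
hg update feature      # switch the working copy to the feature branch
hg graft -r <rev>      # copy the bug-fix changeset over from default
```

The grafted changeset gets a new hash on the feature branch, and Mercurial is generally smart enough not to re-apply it when the branches are eventually merged.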
I'm currently looking for a codename for an operating system. I started with Pole OS, but I'm not sure if it's nice and easy to remember. I also came up with another codename, Prodane, but I'm not sure whether I should use it, as Prodane is not an actual word in any language. So, do you have any ideas? Please let me know. Also, if you can think of a logo, you'd be very helpful.
It's probably a better idea to write and test it first. Names often suggest themselves during the development of projects based on experience and feedback from beta testers. Alternatively you could name it after your cat.
Programming is work, it isn't finger painting. Luc Pattyn
I have already developed the biggest part of the project, and it is named Pole OS, but I don't understand how the beta testers and their feedback could help me reach a codename. And anyway, I've heard that some projects are initially developed without a name (like iOS). Any idea how that works? I mean, what do they show the user at start-up?
The project site is:
Hi everyone. This is my first post on "The Code Project" forums.
I'm a bit confused regarding font licensing issues. Let's imagine I've built a WinForms control and wish to sell it commercially. Technically the component doesn't package or distribute any fonts with it but would use the system fonts (or whatever the user has installed and specifies in the "Font" property).
Logically I would imagine that there would be no font licensing issues since I'm not using a set font or distributing any fonts... but I'm just not sure. Can anyone here advise or let me know where to get advice?
I have a product for which I build an MSI with a VS setup project.
Every time I want to deploy my changes to the customer, I have to remote desktop to 4 machines, upload my new installer, uninstall the previous version, and install the new version (run the MSI).
It's a bit tedious (particularly on 4 machines).
Is there a way I could automate the process (remote login / upload / uninstall / install) in some way (with PowerShell, for example)?
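One possible sketch using PowerShell remoting, assuming WinRM is enabled on the target machines. The machine names, paths, and product code below are all placeholders; also note that `Copy-Item -ToSession` needs a reasonably recent PowerShell on your side (otherwise copy via a network share instead):

```powershell
$machines = "server1", "server2", "server3", "server4"   # placeholder names
$msi = "C:\deploy\MyProduct.msi"                         # placeholder path

foreach ($m in $machines) {
    $s = New-PSSession -ComputerName $m
    Copy-Item $msi -Destination "C:\temp\MyProduct.msi" -ToSession $s
    Invoke-Command -Session $s -ScriptBlock {
        # Uninstall the old version by its product code, then install the new MSI.
        # -Wait matters: msiexec returns immediately otherwise.
        Start-Process msiexec -ArgumentList '/x', '{OLD-PRODUCT-CODE}', '/qn' -Wait
        Start-Process msiexec -ArgumentList '/i', 'C:\temp\MyProduct.msi', '/qn' -Wait
    }
    Remove-PSSession $s
}
```

If you keep the MSI's UpgradeCode stable and bump the version, a major upgrade can often replace the uninstall/install pair with a single `/i`, but that depends on how the setup project is configured.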
A train station is where the train stops. A bus station is where the bus stops. On my desk, I have a work station....
My programs never have bugs, they just develop random features.
I'm hoping I have the correct forum here for a start.
I'm upgrading the Continuous Integration server and build process for our C# codebase.
The code is component based, with over 150 components, with lots of interdependencies.
We're currently building under CruiseControl, calling some NAnt scripts to build "everything" several times a day.
The NAnt scripts end up calling MSBuild on the solution files for each component.
I have written some NAnt tasks that iterate through the components, parsing solution and C# project files to get the output assemblies and their references, so that I can order the component builds based on dependencies. This code has also been 'reused' to produce a component catalog, which details the versions of the assemblies/components used by each of our products, so this needs to be maintained.
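The build-ordering step you describe is essentially a topological sort of the component dependency graph. As a minimal sketch (in Python rather than NAnt/C#, and with invented component names), once the parsing stage has produced a component-to-dependencies map, the scheduling order falls out directly:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical component -> dependencies map, as might be produced by
# parsing solution/.csproj files for assembly references.
deps = {
    "App":           {"BusinessLogic", "UI"},
    "UI":            {"Core"},
    "BusinessLogic": {"Core", "Data"},
    "Data":          {"Core"},
    "Core":          set(),
}

# static_order() yields each component only after all of its
# dependencies, i.e. a bottom-up build schedule.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The same graph also answers your point 3: plugin dependencies that aren't expressed as assembly references can simply be added as extra edges to `deps` from whatever side channel declares them.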
I'm considering several CI servers for the upgrade:
UrbanBuild and OpenMake Meister both claim to have dependency management for component-based systems, but I need to actually talk to someone from each of these companies to get a price.
TeamCity appears to have plugins/extensions for NAnt and AccuRev (I'm not really considering changing SCM if I can avoid it), but I haven't seen any mention of dependency management.
Team Foundation Server doesn't appear to support AccuRev or NAnt from what I can find, so I've kinda ruled it out at this stage.
If I do go for TeamCity, then I've had a brief look at NuGet and OpenWrap for dependency management; however, OpenWrap appears to bypass MSBuild, which probably rules out most of my NAnt scripts as well, and both appear to focus on third-party dependency management.
So is anyone aware of a tool which will analyse inter-component dependencies in my code, and output some form of data that I could use to:
1. schedule the component builds from the bottom up, the middle up, or the top down, and
2. from which I could produce my component catalog, and
3. which could support plugin dependencies that don't use assembly references in the project files?
Does anyone know if either UrbanBuild or OpenMake Meister support this?
The sites for both of these products talk a good talk but don't flesh out the details to the extent I would like.
I've been googling this stuff on and off for a couple of weeks and I'm sure there has to be something out there that does what I'm looking for, but it doesn't seem to be making itself known to me.
I am sure this must have gone round before, but how many environments do you have / think are necessary?
We have quite a collection of inter-related systems here, in-house developed, off the shelf, heavily customised, and using a number of different languages and databases.
Some only exist in live, some have dev/test and live, and most of our stuff has dev, test, and live. The new daddy system that is yet to go live, and needs to interface with most of the others, has been set up with dev, test, train, and live.
I've just had the bloke in charge of this project come to see me quite panicky, because he has realised that he doesn't know how to use these environments properly or what they are going to talk to with respect to the other systems. Training starts at the end of this month, whilst I am on holiday.
I think it would be best if we had at least a nod at the same number of environments for each system as they all have to work together.
So, bearing in mind that all the systems we write or support are for our own group of companies, what environments would you recommend?
I think 3 is a minimum, I cannot decide if the 4th is necessary or not.
Previously I worked with dev, a test/train/whatever, and live. That worked well, but we only had the one system to support then. I have also worked with 5 environments, although I can't now remember what they all were: dev, test, QA, prod, and I am sure there was one other.
Every man can tell how many goats or sheep he possesses, but not how many friends.
how many environments do you have / think is necessary?
As many as needed to solve the business needs of the company.
I think 3 is a minimum, I cannot decide if the 4th is necessary or not.
For me "dev" would mean what is on my box and nowhere else. This of course would be necessary.
"prod" is where the business makes money. So it is necessary.
Anything else is optional.
A "test" one is only reasonable if it is in fact used. And used consistently. This can often require dedicated QA.
You didn't mention "build" which is something I prefer. It is doable in "dev" but I prefer that everything is blown away first and that is problematic on "dev".
As for "train" that presumably would be a mirror of "prod" with less control and need for stability. Some businesses would require it. It can be available in house and/or externally. It would require a business driver rather than a development process driver.
You can also have "integration test", "system test" and "user acceptance test".
In a previous life, we had one system (as in "app", but it was mainframes not PCs). Three copies - dev, test, prod. (Oh, and test was hot standby for prod. Testers knew that their system could disappear at the drop of a hat if prod hardware chucked a wobbly.)
All worked fine until a second system was brought in (which of course had to interface with the first one). After a while, it became obvious that we needed another test copy of each system. Basically, to test a change for system A (which is going in as an isolated change, not a big-bang upgrade), you need a clone of system B prod. This is different from B test, where the B testers are playing with new stuff which will go into B prod down the track. We wound up combining that integration testing function with user training, which wasn't ideal but did save the odd megabuck's worth of big iron.
I'm not even sure if I can ask this question properly (shows how unclear I am about it myself).
So I developed a small part of a rather complex software. I am to run a system test on this software (the whole thing) to test its error handling procedures.
But it's not going so smoothly.
It's alright for those errors that I can actually cause (e.g. run the software without some essential hardware).
But of course I can't generate errors such as "Hardware broken" or "Hardware is connected but not responding" as there is a chance of actually doing harm to the hardware.
So I need a cheat.
Putting break points in the software and overwriting values is not an option because this is a system test.
So I duplicated a small part of my software and put some error testing mechanism in the second copy (let's call it the tester version). The software launches with one or the other of these versions depending on the registry value I set.
The problem is that I've limited the duplicated portion of the code, which I thought and still think is a good idea, but this means that the dummy errors can only be emitted from one place.
In reality there are many paths errors can take. Depending on the path, the final output to the user can be different.
I want to map the various error paths to help decide how best to generate the dummy errors.
It appears I need to do something like: "Generate error A as if function B caused it" and "Generate error A as if function C caused it" etc
So I need a mapping from error A to functions B and C, and so on.
Is there a clever way of doing this?
Or maybe a better question would be, what is the best way to run a system test on error handling functions?
There is no general purpose easy way to do everything.
It is possible, but difficult, to use code insertion techniques to simulate any error. At best, and at the most complicated, you actually modify the code at run time to force an error.
Some people suggest using interfaces - ones outside those required by the design, put in place solely to support testing. The problem is that not everything can be solved that way, and it might require a lot of interfaces.
One can also be creative about testing. For example, if I need to test connectivity problems I can use a system call, in the test code, to drop my IP (of course I had better restore it as well). Or I can stop SQL Server programmatically when doing database error tests (again, making sure to restart it).
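The interface idea can be made fairly cheap with a single fault-injection seam. As a minimal sketch (shown in Python; the function names and the registry-driven switch are invented stand-ins for your real hardware calls), each wrapped call consults a fault table first, which gives you exactly the "raise error A as if function B caused it" mapping without duplicating code paths:

```python
import functools

# Hypothetical fault table: function name -> exception to raise.
# A system test arms entries here (e.g. driven by the registry flag
# mentioned above) instead of touching real hardware.
FAULTS = {}

def injectable(func):
    """Wrap a hardware-facing call so a registered fault pre-empts it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        fault = FAULTS.get(func.__name__)
        if fault is not None:
            raise fault
        return func(*args, **kwargs)
    return wrapper

@injectable
def read_sensor():
    return 42  # stands in for a real hardware access

@injectable
def open_valve():
    return "ok"  # another hypothetical hardware call

# "Generate error A as if read_sensor caused it":
FAULTS["read_sensor"] = RuntimeError("hardware connected but not responding")

try:
    read_sensor()
except RuntimeError as exc:
    print("caught:", exc)

print(open_valve())  # un-faulted paths still behave normally
```

Because the fault is keyed by the originating function, the same error can be raised from B or from C just by changing the key, and the error then travels whatever handling path the real one would.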
A human who knows how to operate your application and knows what behavior to expect.
I'd recommend AutoHotkey if you want to create automation scripts to test your application; it's small, free, and uses almost no resources. A script will merely click where you tell it to; it will not verify anything else.
Hmmm. That's not always the best way. How, for instance, do you tell if a button is meant to be enabled only under certain conditions? In depth knowledge of what the application is supposed to do is invaluable, and should not be ignored.
This question refers to testing using MS Visual Studio 2010 Ultimate Edition.
I think this is a simple scenario that can happen during coded UI testing, and it would be useful if someone could point me in the right direction.
Let's say we have an ASP.NET application with several controls on one web form. We wrote a coded UI test and everything was OK. After some time, the development team decided to change the content of the form according to new requirements, and one button was removed. But the test remained the same as when the button was on the form. What to do in this case? Of course, we can change the test as well. But how do we handle this efficiently if we have, for example, 1000+ coded UI tests and the changes affect a lot of forms? How can we find the changes on many forms programmatically, in order to learn about significant changes earlier and avoid executing the tests where these significant changes occurred?
I'm interested in whether we can use the .uitest file, or anything else, as a central repository of the elements on all the forms we're testing. Is this possible to achieve?
But, what to do in efficient manner if we have, for example, 1000+ coded UI tests and if changes affected a lot of forms? How can we find the changes on many forms programmatically in order to get the information of significant changes earlier and not to execute the tests where these significant changes occurred?
Using reflection. Load the assembly, enumerate all forms, and enumerate all components on those forms. You can then write the names of all the controls into a database table, along with a date. You could add other properties too; it might be easier to decorate them with a custom attribute.
When validating changes, loop over the controls again and compare them to the values in the database. Count how many things have changed, and drop all tests scoring more than a previously defined threshold.
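The comparison step itself is just a set difference per form. A minimal sketch (shown in Python rather than C#, with invented form and control names; `baseline` stands in for what was recorded in the database, `current` for what reflection reports now):

```python
# Baseline control inventory, as previously recorded per form.
baseline = {
    "OrderForm": {"btnSave", "btnCancel", "txtName"},
    "LoginForm": {"txtUser", "txtPassword", "btnLogin"},
}

# Current inventory: btnCancel removed, txtEmail added on OrderForm.
current = {
    "OrderForm": {"btnSave", "txtName", "txtEmail"},
    "LoginForm": {"txtUser", "txtPassword", "btnLogin"},
}

THRESHOLD = 1  # max tolerated added + removed controls per form

def changed_forms(baseline, current, threshold):
    """Return the forms whose control churn exceeds the threshold."""
    flagged = []
    for form, old in baseline.items():
        new = current.get(form, set())
        churn = len(old ^ new)  # symmetric difference = added + removed
        if churn > threshold:
            flagged.append(form)
    return flagged

print(changed_forms(baseline, current, THRESHOLD))  # ['OrderForm']
```

Tests tied to the flagged forms can then be skipped (and queued for maintenance) before the run, rather than failing one by one.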
Or, ask the developers to mark the forms that they've been modifying extensively.
I are Troll