|
Most ECUs are programmed in C with some Assembler (usually relegated to the Boot Manager).
In the only truly OO project I worked on, the unit tests were four years old while the specs and the design (let alone the implementation) had changed completely, to the point that 90% of the unit tests did not even compile and another 5% failed because the tests themselves were wrong. Only the tests for some helper classes still worked.
If they are written from scratch and the software is owned by the development team, I believe they should be maintained and run at least before every push (not every local commit) to the repo, or before the integration step, to ensure that any build on the repo is functional (except maybe in the very first phase of development, where stubs and bugs roam freely).
Sadly, most software is now built by a plethora of consulting companies which are hired for the task, for a limited time, and per invested person-hour, not for the project. This means that once the contract is closed, the consulting company has no maintenance to do nor any lingering responsibility. Besides, the employee turnover in consulting makes ownership of a product virtually impossible: oftentimes the team that concludes a project consists of entirely different people than the team that started it.
At a big three-letter car manufacturer I ended up working on projects alongside a dozen or more different companies, with only figureheads from the manufacturer itself. That brought a lot of clashes and conflicts of interest, to the absolute detriment of any semblance of quality in the final product.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Unit testing is only for object-oriented code? News to me!
|
|
|
|
|
Leaf classes can be unit tested. But most of my career was spent developing frameworks, which are very hard to unit test. The best tests are the existing applications, but then you have to know how to configure them and which ones will exercise what you changed. Good luck.
You can develop some toy applications, but they'll never test boundary conditions and accidental dependencies the way that real applications do. So after some cursory tests (or not), you just submit the code and hope your phone doesn't ring. Or you just turn your phone off, buy Depends for the bed-wetters, and fix it the next day.
|
|
|
|
|
None of the options is one I can answer easily.
I do unit testing. (Not the rigid kind that tests implementation details, every private function in a class, etc. I only test the public interface of the class or module.)
But that's only me.
At my work (the "we" in the answers), there is a notion called "We Should Do Unit Testing", but what it means is "let's make an integration test and call it unit tests". To tell the full story, there are no real tests*, as far as I know.
*At least not in the way 'Uncle' Bob, Kent Beck, and others would call a (good) test. The main "architect" of this notion even says it is not good to use any testing framework and that we should write our own implementation. Because: what if the framework is no longer supported, or has an error?
I do unit testing partially affected by J.B. Rainsberger and his "Integrated Tests Are a Scam".
Unfortunately, at work I cannot write automated integrated or integration tests, because of the state of the legacy code I work with. (And the oldest part of the newest project is not even 18 months old, yet it is already such a monolithic, tightly-coupled mess that I had not seen its like even when I entered the industry and inherited code many years old.)
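For what it's worth, the public-interface style described above can be sketched in a few lines. This is a minimal illustration with a hypothetical Stack class (Python here rather than the C# used elsewhere in the thread); the tests exercise only push and pop, never the private list behind them:

```python
class Stack:
    """Toy class under test; _items is an implementation detail."""
    def __init__(self):
        self._items = []

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# Tests go through push() and pop() only and never peek at _items,
# so replacing the list with, say, a linked list would not break them.
s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
assert s.pop() == 1

# The empty case is part of the public contract too.
try:
    Stack().pop()
    assert False, "expected IndexError"
except IndexError:
    pass
```

A test pinned to `_items` directly would break on every internal refactor; these survive any change that keeps the push/pop contract.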
|
|
|
|
|
Same here.
With new code I try to make each layer somehow testable, but not with the kind of unit testing everyone harps on about.
I've never worked for a "software development" company; it's always been for companies with internal needs, or on software control layer products. Both of these can be tricky to apply testing to.
When dealing with the hardware randomness of I/O, it's nearly impossible to model all the interactions of inputs in all the different configurations.
Whereas the internal applications are hard to nail down to what the end user needs; they don't always know what they want until they use it. I've written complete apps before and had the end users change their minds on direction, forcing me to scrap most of the code. Kind of pointless to write unit tests for products with an uncertain future.
As for legacy code, I try to be as hands-off as possible. The original dev got it working at some point; the only thing that should break it is changing environment variables, and that can be dealt with as time goes on. To write unit tests for these would be madness.
|
|
|
|
|
I'm sure my tests would not be viewed as real unit tests by many people, but that does not bother me at all. I write them for my own learning and practice. I've noticed that when I use code analysers and other such practices, I make fewer errors in code even when the analysers are not active (for some reason).
(e.g. Only this week I found out, purely by coincidence, that after an update the code analysers no longer work the way I was used to. But there was not a single error when I found and reactivated most of the analyser rules I use.)
You are right. Everything from the ugly external world (each I/O) is tricky to test. Therefore I try to find ways to test code logic without I/O, and then triple-check the I/O functions.
The changing needs of users are the only constant in development. We need to live with that.
Unfortunately, I do not have the freedom to leave the legacy code untouched. It was an external development project; it was late, it was delivered incomplete, and it works only for the happy scenarios (and not even all the expected happy scenarios). It must be fixed, and that is the real problem.
|
|
|
|
|
For me the answer is: I unit test from the bottom up and get done what I can over time. Making sure that the core stuff (which may not be the most complicated, just the most used and fundamental) is right is the most important thing, IMO, if you can't just magically do it all at once. And who can?
If I were doing some really targeted application I might obviously look at it differently, since there would probably be some algorithm or subsystem that mattered most. But I tend to write general-purpose code, and it's all layers upon layers. If the bottom layers aren't solid, there's not much point testing the higher ones, because you can't trust the results.
Explorans limites defectum
|
|
|
|
|
I find unit testing, in its pure meaning, to be rather useless. However, I do a lot of integration testing -- basically higher level function calls where I verify that the subsystem, if you will, behaves as expected. I find it a lot more efficient to write tests that way.
Example:
Unit test: when I call the Twilio API to generate a JWT, it returns the token as a string. Well, of course it does. Stupid, boring, pointless test.
Integration test: when the endpoint is called, it returns JSON with either the JWT, or an error key/message. The endpoint asserts that the username and password are non-null/non-empty, so I can test that as well.
See the difference? A pure unit test would never call the endpoint; it's too high-level. But as an integration test, I can verify that the endpoint is working exactly as it should.
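As a rough sketch of that contrast (Python rather than the poster's actual stack, with the Twilio call stubbed out and all names hypothetical), the integration-level assertions might look like:

```python
import json

def fake_twilio_jwt(username):
    # Stand-in for the real Twilio call; the integration test
    # exercises the endpoint logic around it, not Twilio itself.
    return "jwt-for-" + username

def token_endpoint(body):
    """Hypothetical endpoint: returns a JSON string carrying either
    a 'token' key or an 'error' key, as described in the post."""
    username = body.get("username")
    password = body.get("password")
    if not username or not password:
        return json.dumps({"error": "username and password are required"})
    return json.dumps({"token": fake_twilio_jwt(username)})

# Integration-style checks: call the endpoint, inspect the JSON.
ok = json.loads(token_endpoint({"username": "alice", "password": "s3cret"}))
assert "token" in ok

bad = json.loads(token_endpoint({"username": "", "password": "s3cret"}))
assert "error" in bad
```

A pure unit test would mock everything around the JWT call and assert a string came back; these checks instead pin the behaviour the endpoint's callers actually see.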
|
|
|
|
|
I've always found unit testing at the core to be useless; unless someone really screws up, you won't find anything new.
Integration is much better as you described. Once you start assembling parts, then you will know what's broken.
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd all be running around in darkened rooms, munching magic pills and listening to repetitive electronic music."
-- Marcus Brigstocke, British Comedian
|
|
|
|
|
Integration testing is slow, it's more difficult to write edge-case tests for (from things as simple as null checks to more complex business rules), and it's also more difficult to create a base template for repeatable, obvious error-testing scenarios.
These factors get especially amplified in an environment of continuous delivery, where multiple daily deployments to production are a reality.
Unit tests are a lot faster to write and a lot faster to execute; they act as the first gate in a continuous integration pipeline. They are not a replacement for integration testing, but they happen earlier and a lot more often. They help speed up deliveries with quality, are easily repeatable, and don't require the complexity of a full environment, which makes them a lot easier to execute locally before something is submitted.
Once the unit tests have passed and the CI moves to the CD phase, the chance that integration testing will fail is greatly reduced, and this saves a lot of time on deliveries.
To alcohol! The cause of, and solution to, all of life's problems - Homer Simpson
Our heads are round so our thoughts can change direction - Francis Picabia
|
|
|
|
|
Well - when it comes down to it, it's just another way of saying "testing your software to see if it works".
And it would astound me that anyone would unleash anything without making sure it works. Not just the individual pieces but the unit as a whole.
A bit like the idea of "refactoring", which, before someone needed to give it a neat new name, was called "code cleanup" or "consolidation" or something to that effect.
Renaming something does not mean something new has been unleashed - just that the jargon's been expanded.
Hurumph.
Ravings en masse^ |
---|
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
W∴ Balboos, GHB wrote: And it would astound me that anyone would unleash anything without making sure it works. This happens more often than you'd think.
I worked with some programmers (sr. and architect) who thought testing was beneath them or something.
They wrote unit tests and handed the code over to the tester.
The first button the tester clicked crashed the application.
The tester ultimately went as far as to say he didn't like testing their specific user stories because it never worked on the first or even second try.
|
|
|
|
|
Perhaps it was not your intent, but your post implies that "this is OK", justifying an apparent lack of faith in, or even disdain for, testing.
It is not!
The programmer, depending upon how overt these crashes are, should be asked to seek employment elsewhere, hopefully in a different field.
The tester should follow him/her out the door.
|
|
|
|
|
It isn't, I just wanted to astound you
And also, that testing alone won't solve anything if your software is of bad quality.
And that it isn't a magic bullet.
Again, I voted some pieces, sometimes, not never.
|
|
|
|
|
1. Write some stupidly simple code that never changes, and which you'd need a single-digit IQ to mess up.
2. Commit.
3. Have co-workers bitch at you because it's not unit tested.
4. Write unit tests.
5. ???
6. Never ever profit.
I can't stand those religious unit testers who unit test just because some book or blog told them to.
That said, some code really benefits from unit tests.
Unfortunately, my customers don't want to pay for it and I'm not working for nothing.
So I unit test some code, sometimes.
And get this, contrary to popular belief even untested code can run fine in production
|
|
|
|
|
Sander Rossel wrote: And get this, contrary to popular belief even untested code can run fine in production And then again, untested code may not work fine in production.
I would always rather spend the time finding the error myself than have someone else find it for me.
And just imagine the fun if it sort-of-works and the data it enters is garbage, or the data it should find in a search is missed. Who pays for that entertainment?
|
|
|
|
|
W∴ Balboos, GHB wrote: And then again, untested code may not work fine in production. And your tested code will break in production too.
And your tests will be bugged too.
And your correct tests will break because specs change.
And some tests will only ever break because specs change and never because a programmer messed up.
Some tests will add nothing but work without ever providing any guarantees.
And then there are those programmers who think unit tests are a substitute for manual testing.
Remember, software testing proves the existence of bugs not their absence.
|
|
|
|
|
Sander Rossel wrote: And your tested code will break in production too. ... testing.
And your eyeballs might fall out because you read this reply.
I suppose, having given up completely, there's a lot less stress for you.
|
|
|
|
|
W∴ Balboos, GHB wrote: having given up completely
Sander Rossel wrote: That said, some code really benefits from unit tests. I haven't given up completely; I'm just saying testing won't solve all your problems and won't produce perfect software.
Also, religiously testing everything all the time is unnecessary, time-consuming and ultimately defeats the purpose.
|
|
|
|
|
Sander Rossel wrote: and ultimately defeats the purpose.
Best, Ah ha! You got me!
To what purpose do you refer? The only one I can come up with from the previous and current context would be software . . . that . . . works.
|
|
|
|
|
W∴ Balboos, GHB wrote: software . . . that . . . works Within allotted time and costs
Also, unit tests should support the software, not the other way around.
If your unit tests fail and need to be "fixed" after every change you make to the software, you're probably testing the wrong thing.
I've been on projects where every change was twofold: the actual change, and rewriting all the unit tests.
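A minimal sketch of that failure mode (hypothetical names, Python rather than the thread's C#): a test pinned to incidental details has to be rewritten on every change, while one pinned to the contract does not:

```python
def make_invoice(amount):
    """Hypothetical function; 'generated_by' is an incidental detail
    that no caller actually depends on."""
    return {"amount": amount, "currency": "EUR", "generated_by": "invoicer-v2"}

# Brittle: compares the whole dict, so bumping "invoicer-v2" to
# "invoicer-v3" breaks this test even though no caller cares.
def test_brittle():
    assert make_invoice(10) == {"amount": 10, "currency": "EUR",
                                "generated_by": "invoicer-v2"}

# Behaviour-focused: pins only what callers rely on, and
# survives any refactor that keeps that contract intact.
def test_behaviour():
    invoice = make_invoice(10)
    assert invoice["amount"] == 10
    assert invoice["currency"] == "EUR"

test_brittle()
test_behaviour()
```

When every change breaks tests like `test_brittle`, the suite is testing the implementation, not the software.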
|
|
|
|
|
First, the test-driven-development paradigm, an abortion of logic brought on by Agile programming, is not the point. Testing is the point.
But as for your smile-festooned comment:
Sander Rossel wrote: Within allotted time and costs
I might point out that, in my very simple-minded view of the world and the things in it, if the software or other product doesn't work, or worse, is unreliable, then it's overpriced even if it's free.
|
|
|
|
|
W∴ Balboos, GHB wrote: I might point out that in my very simple-minded view of the world and thing in it that if the software or other product doesn't work, or worse, is unreliable, then it's overpriced even if it's free. Then let me point out what I've already said a couple of times, software can work without unit testing.
For example, by manual testing, integration testing, smoke testing, acceptance testing...
I'm really at a loss at how you twist my words from "I unit test some code, sometimes" into "I never test and release software that doesn't work."
|
|
|
|
|
Sander Rossel wrote: I'm really at a loss at how you twist my words Context.
In particular, take my OP[^] and see how I describe the situation. Now, this is your thread, but from my google search on Unit Testing, first item on the left:
Quote: UNIT TESTING is a level of software testing where individual units/ components of a software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output.
Which simply means Testing. No?
Perhaps we disagree, at least in some elements of this thread, in the meaning of the subject terms. A classic cause of argument of all sorts for eons.
|
|
|
|
|
W∴ Balboos, GHB wrote: Which simply means Testing. No? No, it's a very specific form of testing.
A unit test is an automated test that tests a single method.
For example:
public int Add(int a, int b) { return a + b; }
That code is never going to change and it's very simple.
Does it need a unit test? It couldn't hurt, but it probably has little added value.
Valid unit tests would be:
Assert.Equal(Add(1, 1), 2);
Assert.Equal(Add(1, 2), 3);
Assert.Equal(Add(-1, -2), -3);
Assert.Equal(Add(int.MaxValue, 1), heck I don't know, int.MinValue or something);
Now my point is, that last test may or may not be something you'll think about.
If you don't, your software may still overflow despite having the previous two unit tests.
If you do, this is probably a real issue that you've already thought about when creating your Add function and you'll probably have used a long instead.
However, whether or not this code works all depends on whether it's called with such high values.
If you use this function for some invoicing, you really won't hit that overflow, but you can't test that with a unit test.
Another issue with these tests: the first test is prone to errors itself.
If you mess up the Add function and return a + a or b + b (or even 1 + a or 1 + b), this test will still pass even though the function broke.
Ultimately, you may replace your Add function with Math.Add and you'll be left with a couple of tests that you need to remove (or rewrite) as well.
So having these unit tests in place is only going to help you on the off chance that someone somehow messes up the Add function.
A smoke, acceptance or regression test will soon find that issue if it's not already caught during code review.
True, you'd much rather find it with an automated unit test, but writing so many unit tests comes at a price.
Don't forget, your tests are just more code and they'll have bugs just like the code you're testing.
It's also more code that you need to maintain, which costs time and time is money.
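The two failure modes described above, the a + a slip sneaking past Add(1, 1) == 2 and the int.MaxValue overflow, are easy to reproduce. Python integers don't overflow, so this sketch simulates C#'s 32-bit two's-complement wraparound explicitly (all names here are hypothetical):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add32(a, b):
    """Add with C#-style 32-bit two's-complement wraparound."""
    return (a + b + 2**31) % 2**32 - 2**31

def add32_buggy(a, b):
    return add32(a, a)  # the "a + a" slip from the post

# Add(1, 1) == 2 passes for BOTH versions, so it proves little:
assert add32(1, 1) == 2
assert add32_buggy(1, 1) == 2

# Add(1, 2) is the test that actually catches the bug:
assert add32(1, 2) == 3
assert add32_buggy(1, 2) != 3

# And the overflow case wraps exactly as the post guesses:
assert add32(INT_MAX, 1) == INT_MIN
```

Which is the point: whether those tests were worth writing depends entirely on which inputs the real code will ever see.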
|
|
|
|
|