|
In my case I use C++, so I use gcov as my coverage testing tool. I don't have a coverage target - for example, I think it is pointless to require testing of trivial code (e.g. getters) - but typically my code runs at about 90% coverage. It is useful, though, to examine an annotated listing of the entire codebase to see what is being covered by the regression suite, and to write tests for the nontrivial bits that aren't. It's proved a great way to catch bugs.
Of course, my codebases are typically tens of Kloc. If you have a Mloc project - which I have worked on - you might instead drill down on files that have relatively low coverage as a way to find the bits of code that need improved testing.
It's no panacea, but it's certainly a useful tool to have in the war for code quality.
|
|
|
|
|
We do unit testing on libraries, and integration tests via deployment projects from the build server that deploy to testing servers and to automated simulators for Android and iOS.
We have coding and coverage guidelines (with a minimum coverage to be reached).
Then we even automatically start the deployed server modules, let them connect in verbose log mode, let the clients connect and do automated tasks with an expected outcome.
It's a lot of work, but we found hundreds of test cases we previously ran manually that can now be run automatically, saving us DAYS of testing time for each release.
We have log parsers attached at the end to find errors, and we get a generated test report from the various testing frameworks (Android/Espresso, iOS, Windows/MSBuild and the log parsers).
|
|
|
|
|
I just don't know what that means.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
- "The code was covered 100%, it's not my fault!"
Is THAT what you want to hear from a developer when the service fails for hours?
Testing suits only a VERY LIMITED set of tasks, where the input is determined and the output always matches it. Say, a math library - there you can cover every line, because the human factor is not involved.
|
|
|
|
|
I see several problems with what you said:
First,
"It's not my fault" is absolutely meaningless. There are still plenty of companies that put finger-pointing above solving the problem - that is a sad truth - but it is **meaningless**.
I don't give a dime whose fault it is when a problem arises.
Point 1: FIX IT.
Point 2: Analyse deeply how it happened - and, if it's one of those companies that feels better for it, find someone to blame.
Second,
the "very limited set of tasks":
The same universal rule applies to tests too:
They are only as good as the person who writes them.
It's quite easy to write absolutely useless tests that only push up the coverage. But that's not what testing is for. That's how unmotivated and bad developers USE test frameworks.
Two different things
So, sorry, no, I do not agree.
|
|
|
|
|
That sucks... Well... to be honest, he forbids changing the architecture to make the application automatically testable, therefore blocking any effort toward code quality (that's one of the reasons I'm changing jobs...)
|
|
|
|
|
Did you check if it would be allowed in your next job?
|
|
|
|
|
Yup. The first things I now ask are:
* Describe what a typical sprint would look like if I came here
* Describe what my "average day" would be like
* What automated test coverage do you have on your project, on average?
Any inconsistency or "I don't know"/"I suppose" answer will put a red flag on the company, at least for me
|
|
|
|
|
I'm in a similar situation, but I knew that coming into the job, and I stated in the job interview that I will fight for this until I'm fired or it's fixed. 
|
|
|
|
|
Currently firing myself for greener fields... or a new contract stating explicitly that I'll have total freedom of action and the power to force this approach on the rest of the teams... I can guess what the answer will be, but it's worth a try :P
|
|
|
|
|
Some interviews on the way too 
|
|
|
|
|
|
Feel free to suggest next week's
cheers
Chris Maunder
|
|
|
|
|
|
No, I'm serious: ideas are always welcome.
We get suggestions reasonably often, and some of them are a little esoteric or difficult to codify into a survey. The more suggestions we get, the more likely we are to hit on a more amenable topic.
cheers
Chris Maunder
|
|
|
|
|
The software is procedural, with objects used as isles of logic in a sea of randomness and user-input management, but my non-UNIT tests are based on 100% code coverage and 100% input coverage.
* CALL APOGEE, SAY AARDWOLF
* GCS d--- s-/++ a- C++++ U+++ P- L- E-- W++ N++ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t++ 5? X R++ tv-- b+ DI+++ D++ G e++>+++ h--- ++>+++ y+++* Weapons extension: ma- k++ F+2 X
* Never pay more than 20 bucks for a computer game.
* I'm a puny punmaker.
|
|
|
|
|
Code coverage is a pretty useful metric, provided you write tests that are actually worth their bytes.
Unfortunately, on the job we have a low coverage and the tests that we do have are not always of the best quality.
Still, some tests (and some coverage) are better than none, and we're getting better.
For a personal (JavaScript) project I'm at a 99.99% coverage with meaningful tests.
The missing 0.01% is a browser thing: do A when the browser supports it, else do B.
No way a single browser is going to do both A and B, so I'll never get to 100%.
Very annoying!
|
|
|
|
|
That sounds familiar: all my personal projects are fully unit-tested, with greater confidence of correctness than the work I do professionally, where corners are often cut (failing to realise it always pays to measure twice, cut once).
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
|
|
|
|
|
Sounds like you need a report-generation tool that can aggregate results from multiple runs into a single report. Run on browser A. Run on browser B. Get 2x coverage on all the common code, and 1x on each of the two browsers' differing code segments. (Unfortunately I don't have a recommendation on how to do that in JS.)
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
We have only human testing (management thinks automated testing is a waste of time/useless).
Skipper: We'll fix it.
Alex: Fix it? How you gonna fix this?
Skipper: Grit, spit and a whole lotta duct tape.
|
|
|
|
|
Wellllllll . . . I kind of see their point.
No machine can do the absurd things that real users do.
Furthermore, development in A-S (Artificial Stupidity) seems to keep losing ground to the real thing.
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
A while ago, on my first day in a new job, coming into the office I saw a guy with a touch panel in front of him, looking at the ceiling and typing on the panel with both hands completely at random...
me: What are you doing?
he: Playing the dumb user
In that moment I was like
two months later I started doing it too
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Automated testing has the advantage of being very cheap, so I'm curious why your management doesn't like it.
My management is "hot" - better to say "red hot" - for tests. I think that's because they don't understand that successful tests won't produce bug-free software.
Press F1 for help or google it.
Greetings from Germany
|
|
|
|
|
Exactly my case, I selected "No - we don't have the time" as that is the "official" reason why we don't do many things...
If you think you can do a thing or think you can't do a thing, you're right - Henry Ford
Emmanuel Medina Lopez
|
|
|
|
|
I can see the point in unit testing - it's a tool that's useful for testing specific things. However, I see far too often tests written to check that a call to an MVC controller returns a specific view. I mean, it's a bleedin' MVC view controller, it's going to return a view and it's going to be bleedin' obvious if it's not the right one!! But no, developers still want to write 50+ unit tests on every action on every controller.
Then there's DI: great for testing, but does anyone ever stop and think "how many classes are getting instantiated on every request to my site?" - I doubt very many do. Loose coupling makes unit testing possible, but 99.9999% of your application's lifetime isn't going to be spent under test, is it? It's more CPU usage, more memory usage, slower applications out there in production. The worst thing is that so many developers assume that because all the unit tests pass, their application is working! It isn't, at least until it's been manually confirmed to be working.
My take on it is: unit test what absolutely needs to be unit tested and no more.
Ah, I see you have the machine that goes ping. This is my favorite. You see we lease it back from the company we sold it to and that way it comes under the monthly current budget and not the capital account.
|
|
|
|