|
Ravi Bhavnani wrote: Note: IMHO best practices like these require the buy-in of management. Thankfully all our dev managers are ex-developers.
Upvoted for this.
Over the decades, I have tried many times to get better practices to be adopted in my places of employment. My attempts have failed, usually when the managers realized that it isn't a magic bullet, and that there is a learning curve for adoption.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
And I upvoted your upvote... because why not.
Jeremy Falcon
|
Ravi Bhavnani wrote: Yay for unit tests, because I like to sleep easy at night. Preach brother.
Ravi Bhavnani wrote: Our DOD requires the creation/modification of unit tests when new functionality is implemented and existing functionality modified. What's DOD mean? I think Dept of Defense when I hear that. Just curious.
Ravi Bhavnani wrote: We don't yet do TDD but are in the process of implementing integration test projects that would make it easy for devs to write the test before writing the code. I'd be curious to know how it goes. I've never done full-blown TDD (I'm stubborn), but would love to hear a use case about it.
Ravi Bhavnani wrote: Thankfully all our dev managers are ex-developers. The best ones are, buddy.
Jeremy Falcon
|
DOD = "definition of done" as applied to a work item. Before a work item can be marked complete, we require that it be unit tested and documented (this applies more to APIs).
Jeremy Falcon wrote: The best ones are, buddy. Agreed. I've found this to be the case more at early stage companies, which are the only places I've worked at since 2000.
/ravi
|
Ravi Bhavnani wrote: DOD = "definition of done" as applied to a work item. Oh crap. I should've figured that out. I need coffee. Thanks tho.
Ravi Bhavnani wrote: I've found this to be the case more at early stage companies, which are the only places I've worked at since 2000. I've been in the enterprise world for a while, but I'm starting to think you're onto something. Need a change, might have to give that a go.
Jeremy Falcon
|
Jeremy Falcon wrote: What's DOD mean? I think Dept of Defense when I hear that. Just curious.
Design or Death?
(The Software Engineer's equivalent of Publish or Perish... )
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
Sometimes I am lazy and skip them - typically when I am not quite sure I have the main "flow" worked out. It gives a short-term benefit not spending time on them, but of course that has to be paid for later - so I do at least make sure to write decoupled code to which I can easily add the tests. If I am reasonably certain of the flow, I write the tests along with the code (sometimes even before, as TDD, but that is rare). It is often much faster to iterate over a code block in a test than by running the application.
And of course, when I do go back and write the tests I skipped I find a bug or two....
In general it works as an investment: lose an hour writing a test now, or waste a day at a later time due to the lack of tests... Sometimes the hour now is worth more than the day in the future. It only becomes a problem if the cost of that future day isn't even considered when skipping the test.
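A minimal sketch of that "iterate over a code block in a test" habit. Everything here (the function, its names, the version format) is a hypothetical example, not code from the thread:

```python
# Iterating on a small, decoupled function through its test is often much
# faster than re-running the whole application after every change.

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected 3 components, got {len(parts)}")
    return tuple(int(p) for p in parts)

def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version(" 10.0.1 ") == (10, 0, 1)

test_parse_version()
```

The test runs in milliseconds, so the edit/run loop stays tight while the "flow" is still being worked out.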
|
Same man. Not every piece of code is tested, but for the code I know that has to work correctly or else... it is.
Jeremy Falcon
|
I always write tests for the small components in the code (aka unit tests), for two reasons:
1. One day of writing unit tests saves me a week of looking for bugs in the small crevices of a larger project.
2. Unit tests describe the behaviour of the component, so they double as documentation.
Also, since I have mostly worked at small companies, there is usually nobody to double-check my code. So testing is fundamental to avoid big mistakes.
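A small illustration of the "tests double as documentation" point. The component and its rules are invented for the example; the idea is that the test names read as a behaviour spec without opening the implementation:

```python
# Hypothetical component: each test name documents one rule of its behaviour.

def normalise_username(name):
    """Lower-case, strip whitespace, and collapse internal runs of spaces."""
    return " ".join(name.strip().lower().split())

def test_leading_and_trailing_whitespace_is_removed():
    assert normalise_username("  alice  ") == "alice"

def test_case_is_folded_to_lowercase():
    assert normalise_username("Bob") == "bob"

def test_internal_runs_of_spaces_collapse_to_one():
    assert normalise_username("carol   smith") == "carol smith"

test_leading_and_trailing_whitespace_is_removed()
test_case_is_folded_to_lowercase()
test_internal_runs_of_spaces_collapse_to_one()
```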
|
Nelson Goncalves Oct2022 wrote: Also, since I have mostly worked at small companies there is usually nobody to double check my code. That's a good point. I've found some of my own silly bugs that way too.
Jeremy Falcon
|
It's a "yay" from me! However I'm a bigger fan of integration testing, whereby one can test the full functionality of a system or part of it. Not a believer in TDD.
|
Fo sho, both integration testing and unit testing should happen. Usually integration testing is done by QA though.
Jeremy Falcon
|
The best use of unit testing I've seen (i.e. admired, admittedly from a distance thus far) is to create a test that breaks in a meaningful way: when fixing a bug, it tickles the bug and fails; when adding a feature, it tries to perform the actions that are not yet implemented. Then 'fixing the bug' or 'implementing the feature' is 'done' when your test passes. The test lingers on... because it continues to pass, you know that your latest changes didn't take other parts of your code backward. A great example of this discipline in action is the main dev of jOOQ... he pretty much doesn't start a bit of new code without an issue and a failing test.
Unit testing should absolutely not be used for things like double-checking that code does what the compiler pretty much says it will. Less is more.
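A sketch of that "failing test first, then it lingers as a guard" discipline. The bug and the function are hypothetical; the comment shows the buggy version the test was written against:

```python
# Regression-test sketch: the test was written to FAIL against the buggy
# version, and it stays in the suite once the fix makes it pass.

def days_in_month(month, year):
    # Buggy original returned 28 for every February:
    #     if month == 2: return 28
    if month == 2:
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        return 29 if leap else 28
    return 30 if month in (4, 6, 9, 11) else 31

def test_february_in_a_leap_year_has_29_days():
    # This assertion tickled the bug; now it guards against regressions.
    assert days_in_month(2, 2024) == 29
    assert days_in_month(2, 1900) == 28  # century non-leap year

test_february_in_a_leap_year_has_29_days()
```

Once green, the test costs nothing to keep, and any future change that reintroduces the bug breaks the build immediately.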
|
That's just TDD, isn't it? 😉
|
Yeah, kinda. I feel it's less tedious/rigorous/exhaustive than TDD as I've seen it explained. I've seen TDD promoted as an iterative design aid: you don't know exactly what you're doing, so you write a test which uses an imaginary API, then try to get the test working. Then you reflect a little more, adjust the test, and write some more primary code. There are some benefits to this, such as only ever being a very short departure from code that runs. However, the test *driven* nature of it doesn't sit well with me. I like to do as much up-front design as I can: in my head, on paper, as formal requirements, whatever.
In the unit testing I admire, it's more of a "there, I deliberately broke something, and when I'm done it won't be broken anymore". You're not so much testing for correctness or using it as a design process as you're throwing spanners in your own gears and making your code cope. It now 'covers more ground' than it did previously.
|
DT Bullock wrote: Unit testing should absolutely not be used for things like double-checking that code does what the complier pretty much says it will. Less is more. Compilers can't check logic errors. Not sure if that's what you meant or not.
Jeremy Falcon
modified 22-Apr-24 10:23am.
|
OK, I was a little vague about that. I've seen people write tests that exercise getters/setters, behaviour from missing arguments, etc. In Java at least, a few good annotations take care of all that rigmarole, and you don't need to write tests for that stuff.
But let's talk about tests which 'confirm expected behaviour'. I feel like this kind of test is a waste of time until we've encountered a non-expected behaviour that we want to squash and know that it stays squashed. Because 'the expected behaviour' is already a path we have trodden while developing/debugging, and obviously we wouldn't think we're done until it's behaving as expected already. But our oversights are the things we need to come back for and scaffold with some tests, because we're prone to overlooking some aspects of the state-space and need that support.
It's about benefit vs bother in the end. You have to cherry-pick your testing opportunities and get on with making the code. IMHO.
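The Java annotation point translates roughly to other languages. As a hedged Python analogy (the class and tests are invented for illustration), a `@dataclass` generates the field plumbing for you, so a test of it re-checks what the language already guarantees, while a test of real logic earns its keep:

```python
from dataclasses import dataclass

@dataclass
class Order:
    quantity: int
    unit_price: float

    def total(self):
        # Behaviour worth testing: actual logic, easy to get wrong.
        return round(self.quantity * self.unit_price, 2)

# Low-value test: re-checks what @dataclass already guarantees.
def test_fields_round_trip():
    assert Order(2, 9.99).quantity == 2

# Higher-value test: exercises logic (including float rounding).
def test_total_is_quantity_times_price_rounded():
    assert Order(3, 0.1).total() == 0.3

test_fields_round_trip()
test_total_is_quantity_times_price_rounded()
```

Cherry-picking, in this framing, means spending the test budget on the second kind of test rather than the first.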
|
DT Bullock wrote: I've seen people write tests that exercise getters/setters, behaviour from missing arguments, etc. In Java at least, a few good annotations takes care of all that rigmarole and you don't need to write tests for that stuff. Well, as with anything in life, it's hard to become good at something one never learns to do or never learns to do well. The vast, vast majority of peeps in programming fall into that category. They can tell you what a byte is, but they can't tell you what a nibble is, for instance.
Anyway, imagine trying to learn to ride a motorcycle from a book written by a crackhead deprived of sleep. That's what's being done here. Just don't use the mediocrity of one situation as the sole means of analysis, as you're limiting yourself to the lack of know-how from another. Test writing is the same as development. It's an art. So it's just as useless or as useful as you make it. Fo realz.
DT Bullock wrote: It's about benefit vs bother in the end. You have to cherry-pick your testing opportunities and get on with making the code. IMHO. Nah man. I promise it's more than that. I used to be turned off of testing for that reason too. But if I'm being honest, I also didn't know anything about it at the time. If all the examples you've seen are crap, then it gives that impression.
Side note, as far as confirming expectations only... if we're being real, even if that were the case, it's still not a bad thing. You can handle the unexpected as well, but still. Also, making sure, in an automated fashion, that code which used to work isn't messed up is a pretty nice secondary benefit of confirming expectations.
Jeremy Falcon
|
Sure, I expect I will expand my use of unit testing in the future, do a good job of it, and reap the benefits.
|
I Test, but I don't "TDD Unit Test".
While I develop a piece of functionality, I repeatedly exercise the code I'm working on, as I write it.
If at any point it fails to compile, or shows signs of not "processing" some inputs correctly, I'll stop and fully debug everything until it is working correctly once again.
My testing can take many forms, but often, if it's a runnable app, then I'll just make sure that "the app" is runnable at all times. If it's a stand alone library, or isolated bit of functionality, then I'll often build a small command line program along side of it, that I can use to "test run" the code, allowing me to do things in my regular debug loop way.
Once I'm happy the code is good, I then move on to building some test code that integrates the system with the larger project (should that be required), or set up some kind of testing harness (if it's a stand-alone system) that exercises it using real test inputs and data.
I do not mock out things like databases, external APIs, and all that jazz. If I have to connect to an external API, then I connect to an external API, and if that API is not yet available, then that bit of work simply does not get started until it is. I simply will not write test code that "pretends" to be something it is not.
My final step is usually one of setting up large-scale integration testing if required, or some smaller integration-style unit tests if the code has to be independently testable. The key here is that I will create these unit tests only AFTER I'm satisfied I have done everything possible in other ways to produce good code that does the job required of it. I'll then use the integration testing to A) ensure that the code stays working as it should with its dependents, and B) ensure that data & input changes don't screw anything up.
|
Peter Shaw wrote: I Test, but I don't "TDD Unit Test". Same.
Peter Shaw wrote: I do not, mock out things like databases, external API's and all that jazz. If I have to connect to an external API, then I connect to an external API, and if that API is not yet available, then that bit of work simply does not get started until it is. I simply will not write test code that "pretends" to be something it is not. Technically, if you needed fake DB data, that would be a fixture. But a unit test shouldn't call a live resource. You can't do gated check-ins that way, as it would take too long to run thousands of tests.
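To make the fixture point concrete, here's a hedged sketch (all names hypothetical): the unit under test depends on an abstract lookup, and the test injects canned in-memory data instead of hitting a real database, so thousands of such tests can run in seconds at check-in time:

```python
class FakeUserRepo:
    """In-memory stand-in for a database table (a test fixture)."""
    def __init__(self, rows):
        self._rows = rows

    def find(self, user_id):
        return self._rows.get(user_id)

def greeting_for(user_id, repo):
    """Unit under test: depends only on the repo's find() behaviour."""
    user = repo.find(user_id)
    return f"Hello, {user}!" if user else "Hello, guest!"

def test_greeting_uses_fixture_not_a_live_db():
    repo = FakeUserRepo({1: "ravi"})
    assert greeting_for(1, repo) == "Hello, ravi!"
    assert greeting_for(99, repo) == "Hello, guest!"

test_greeting_uses_fixture_not_a_live_db()
```

The integration test against a real database still exists; it just lives in a slower suite that doesn't gate every check-in.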
Peter Shaw wrote: My final step is usually one of setting up, large scale integration testing if required, or some smaller integration style unit test if code has to be independently testable. Fo sho man. It's a very important step. QA usually does that though unless it's a small team. For unit testing in particular that's all dev though.
Jeremy Falcon
|
Quote: Technically, if you needed fake DB data that would be a fixture. But, a unit test shouldn't call a live resource. You can't do gated check-ins that way as it would take too long to run thousands of tests.
This is why I always, always, always advocate a dev/stage/prod setup, esp for web applications.
Dev has the "same server software", but may have data quality issues, maybe the odd broken dependency here and there, but usually nothing that the development team in general can't fix. It irritates the hell out of me when corp/internal I.T. and the business mandate that the same "I.T. security policies" regarding admin access be applied to developer-only instances as if they were prod.
Staging should always be a "clean" dev copy. Software should be as close to prod as possible, and deployments should ONLY go to staging after seniors on the dev teams have verified that the code is sound, working, and potentially ready for prod.
Prod, well, I don't need to state anything about this one.
My point here is that, it should be perfectly acceptable to use "Live" resources, if you have a proper dev/stage/prod set-up.
If data quality is a necessity, then there are ways to easily mirror a live DB to the dev & stage environments while maintaining PII security, such as redacting information with stars as it's copied across; that way the data "format" is preserved well enough to work in testing.
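A tiny sketch of that "redact with stars while preserving the format" idea (my own illustration, not a tool from the thread): letters and digits are masked, while punctuation and spacing survive, so the copied data still has the right shape for the code that consumes it:

```python
import re

def redact(value):
    """Mask alphanumerics with '*', preserving punctuation and layout."""
    return re.sub(r"[A-Za-z0-9]", "*", value)

# An email keeps its shape: "alice@example.com" -> "*****@*******.***"
# A phone number too:      "555-1234"          -> "***-****"
```

Real mirroring pipelines are more involved (consistent pseudonyms, referential integrity across tables), but the format-preserving principle is the same.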
In many of the projects I work, I go in, and build the dev team myself, usually a very tight knit bunch who've all worked together before, and who bounce off each other very well. If it's not a large project, or a simple desktop app that one dev can handle, I'll run the entire project myself, so I don't often find myself in a situation where I have a very large team to co-ordinate with.
The last time I found myself in that environment was back when I worked FT for a single corp, and as a corp employee I had to follow corp policies; if they mandated TDD down to the bone, then it was TDD down to the bone.
These days I much prefer the consultancy lifestyle, where I go in, advise, build, test after it's built, then move on to the next exciting project.
|
Peter Shaw wrote: This is why I always, always, always advocate a dev/stage/prod setup, esp for web applications. Fo sho man. Totally agree on the 4 environments that should be set up. You can get away with 3 if you're a solo dev in the company, but otherwise 4. My point was more about calling a live resource for a unit test makes them no longer pure or deterministic and very slow to run. By live that could be a dev environment as well, as in an actual API call.
Peter Shaw wrote: In many of the projects I work, I go in, and build the dev team myself, usually a very tight knit bunch who've all worked together before, and who bounce off each other very well. It's so hard to find that too. Real hard. But when you have that camaraderie it's gold. Usually it seems everyone is unhappy and hates life and has an agenda rather than the love of tech.
Peter Shaw wrote: These days I much prefer the consultancy life style, where I go in, advise, build, test after it's built then move on to the next exciting project IMO a lot can be learned from that. Like, if you have a team that refuses to modernize, you're stuck in one spot. Also a lot can be learned from sticking with a project for years, usually about supporting it, but a lot can be learned. I choose the former too though, if given a choice. I wouldn't want to be beholden to people who stopped learning and are content with that.
Jeremy Falcon
|
I started writing some unit testing years ago and found that, with the services I develop, unit testing is futile. That said, I have my own extremely broad testing infrastructure that is constantly running JScript- and PowerShell-generated client requests against my servers; some of those requests intentionally contain client request errors that we've seen come from specific client types. If you are dealing with library functions and methods that have fairly simple input parameters, unit testing can be useful. When dealing with a client-server model that takes a wide variety of complicated XML HTTP posts from various vendors, all of whom implement specifications differently, not so much. Most of the problems would be caught somewhere else farther down the stack. That said, I have a variety of iOS clients to test with, since Apple's developers excel at not following specifications, especially when it comes to return codes.
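That style of testing, firing a mix of valid and intentionally malformed posts at an endpoint and checking that the error path degrades gracefully, can be sketched as follows. The handler, element names, and status codes here are hypothetical stand-ins for a real server:

```python
import xml.etree.ElementTree as ET

def handle_post(body):
    """Hypothetical endpoint: classify an incoming XML post."""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return 400  # malformed XML from a misbehaving client
    if root.find("id") is None:
        return 422  # well-formed, but violates the spec (missing <id>)
    return 200

# A mix of good requests and deliberately broken ones, modelled on
# errors seen from specific client types.
cases = [
    ("<request><id>7</id></request>", 200),
    ("<request><id>7</id>", 400),   # truncated post
    ("<request></request>", 422),   # spec violation
]
for body, expected in cases:
    assert handle_post(body) == expected
```

The value is less in any single assertion than in keeping the corpus of known-bad client inputs running continuously against the server.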
|