I am seriously considering putting together a "Software Engineering for Small Teams" course or set of articles. With a little bit of expertise, you can inject testing into most projects, use the minimum of Agile that'll help, and generally massively raise your game - and by that I mean code faster, better, and more reliably, with considerably less stress.
(edited: turns out I forgot which year we're in :-P)
I used to always write proper full-fledged tests. Then I started my startup, building a product in the few hours left after a demanding high-stress job and a tumultuous private life.
Within a few weeks, I stopped writing tests. Within a few more weeks, I turned off the test suite.
I wrote the product, got it working, received market feedback, realized my model was all wrong, and rewrote the entire domain model and UI multiple times, only to finally realize that my component boundaries were all wrong - and to understand, intuitively, where they should have been.
Now I feel confident about an architecture that will stay stable for 12+ months and each new component I write is properly tested.
In the meantime my lack of tests is starting to bite me, slowly, but I find that I'm just gradually replacing all the 'bad parts' with properly tested components with clearly defined boundaries, rather than changing existing code.
And in the end I'm really happy that I decided not to test as much. Testing has its place, but when your time is really precious and you're trying to mold your software to fit the market's needs, it just isn't worth it.
I don't know how many others are in a similar situation but, for me, sometimes it just ain't f*ing worth it.
I'd been working for years in a workplace that tested virtually everything up front until I joined a startup, and I agree with you.
Experimental features may be very short-lived, or require extensive tweaks, and the technical debt that accumulates from not testing may never arise over their lifetime. Once you're sure it's going to stick around forever, do it right and cover it with tests.
I'm doing a startup as well, and we do a fair bit of testing.
One of the keys to making that work for us is a short feedback loop. We automatically release on every commit, which means every couple of hours. Speculative features get minimally implemented; if they look good then we beef them up more. Our goal is to avoid not just the unneeded tests, but the unneeded feature code too.
I'm personally pretty happy with the testing in that we don't have to spend much time on debugging or manual testing. It's very nice to make a major change, poke at it a little bit, and then ship it with a fair bit of confidence that it will work.
I'm always amazed by how well the whole 'technical debt' analogy holds up. Yes, leveraged development at the beginning is fast, and sometimes a good idea for getting to MVP. But the cost is still there, and will become apparent, and needs dealing with.
I'm a lot less surprised. Not everyone gets to work on a shiny new codebase created after regular testing became the norm. A lot of us maintain code that's 5/10/20 years old, and have to prioritize maintaining and adding functionality over refactoring the entire codebase to support unit tests.
When you're in this position, you're going to get more value out of creating a smaller set of functional integration tests that cover the critical functions of the project. Sure, adding new tests as you add functionality is a good idea, but it's not going to result in total coverage for a very long time.
My area of expertise is adding tests to legacy codebases. Obviously you're not going to hit the whole thing overnight. But that's no excuse for not having /any/.
The very first thing I do when I take over a codebase is to write tests. Without tests, it's impossible to do maintenance work or add functionality in any sort of rigorous fashion--how can you know that your assumptions about how the code works are correct? How can you know that your trivial change didn't break something?
Of course, tests don't actually tell you these things. But they can tell you that your assumptions were wrong, or that your trivial change broke feature xxx, and that's crucial information to have.
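To make that concrete, here's a minimal sketch of the kind of "characterization test" I mean - a test that records what the code actually does today, not what it should do. The `format_invoice_id` function is entirely hypothetical, a stand-in for whatever inherited code you're pinning down:

```python
# Hypothetical legacy function standing in for inherited code whose
# behavior we want to pin down before changing anything.
def format_invoice_id(customer_code, sequence):
    # Legacy quirk: codes are upper-cased and sequences zero-padded.
    return "%s-%04d" % (customer_code.upper(), sequence)

# Characterization tests: they record what the code DOES today,
# not what it "should" do.
def test_pads_sequence_to_four_digits():
    assert format_invoice_id("acme", 7) == "ACME-0007"

def test_preserves_long_sequences():
    # Surprising? Maybe - but now the behavior is documented either way.
    assert format_invoice_id("acme", 123456) == "ACME-123456"

test_pads_sequence_to_four_digits()
test_preserves_long_sequences()
```

If one of these assumptions turns out wrong when you run it, that's the test doing its job: your mental model of the code was off, and you found out before shipping a change.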
Do you always have the time/bandwidth to write these tests? I'm curious what you might do if an old codebase lands in your lap and someone says "here, fix these bugs by the impossible_length_of_time."
I appreciate the idea here, and I've done the same in certain circumstances, but typically that means writing tests for bits of functionality that I need to touch.
If it's impossible, then the diligent engineer says so. Projects are often doomed by people with "can-do" attitudes attempting to achieve the impossible.
Does an ER surgeon always have the time/bandwidth to scrub hands before surgery?
Does it potentially take the surgeon several days to scrub before an emergency surgery?
Edited to add: I appreciate the analogy, but it's flawed. If someone comes to me and says "here, developer A is on holiday, and we have this bug that is causing massive disruption in the field," is it appropriate for me to say "well, I can do that, but it will likely take me five days so I can understand the codebase and write the appropriate unit test suite"?
This is the circumstance I'm thinking about, not necessarily inheriting a codebase and having to add features to it. In that case, certainly, I'm going to take my time, read the code, and write tests.
In this scenario, there should be tests already present covering developer A's portion of the codebase, together with documentation on how to run them (though tests should be as self-explanatory as possible).
In fairness, I recognize that this isn't always the case in the real world. Sometimes you really do need to just blindly attempt to fix something, and there's nothing to be done about it. But it should never become a regular occurrence, and you should never get comfortable doing it. First thing I would do is tell my manager exactly why I'm uncomfortable, and what a conservative assessment of the risk is. If we decide to go ahead with the change anyway, I would create two new entries in the bug tracking system, which should be developer A's top priorities as soon as she returns: thoroughly vet my changes, and DEVELOP A SET OF TESTS.
I see exactly where you're coming from, and I'm there all the time.
It just troubles me that people are so often willing (and eager!) to waste a lot of time doing half-assed manual testing when they claim not to have any time to write tests. Especially when the state of the art in test automation is better than it has ever been.
This has me thinking that the importance of test automation is related to the proposed frequency of changes. If someone wants a one-off change for something this very second I'll just change it. If someone wants me to inhabit a codebase for any length of time, I'll always set up tests for it. The problem is where you can't tell the difference between those two scenarios until it's too late.
Let's say you don't have the time to give it the complete understand-and-write-unit-test-suite approach though.
How are you verifying you fixed the bug otherwise? By changing some code, building the app, and running it to verify the bad behavior doesn't happen anymore? I don't really see how not writing a unit test (assuming the code is unit-testable in the first place) saves you any time. You are doing testing anyhow.
And if it was a critical bug, personally I'd want to feel as confident as possible that I fixed all permutations of it.
frobozz nailed it, really. If something can't be done, it is the engineer's responsibility to make that known to his manager, who is responsible for communicating that to whoever is asking for the work.
Of course working on a code base without a majority test coverage is dodgy (and intellectually frustrating), but it's a necessary skill.
I feel that it is unreasonable to expect that you will be able to pick up any code base and immediately write sufficient tests to get coverage on a majority of the code base. Speaking from my experience picking up old code bases, just being able to write isolated unit tests would require refactoring most of the code base, which is typically not something you will have time to do before you're expected to do other work.
I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."
> I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."
I can't think of a single developer I've worked with who would try that approach.
When a bug is identified in a project with few-or-no tests, the approach that I usually see taken is to write some sort of large, slow integration test that exercises the bug, then fix it. That allows you to prove that the bug exists and prove that the fix fixes it, at least for the documented case(s).
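As a sketch of that prove-then-fix loop (shrunk to unit scale for brevity - in practice it might be a slow end-to-end test), with a hypothetical off-by-one bug in a `page_count` helper standing in for the reported bug:

```python
def page_count(total_items, page_size):
    """Number of pages needed to show total_items at page_size per page."""
    # Buggy original was `total_items // page_size`, which dropped the
    # partial last page. The fix rounds up instead.
    return (total_items + page_size - 1) // page_size

# Written FIRST, against the buggy version, to prove the bug exists:
def test_partial_last_page_is_counted():
    # The reported case: 21 items across pages of 10 showed only 2 pages.
    assert page_count(21, 10) == 3

# And a guard that the fix didn't break the exact-multiple case:
def test_exact_multiple_is_unchanged():
    assert page_count(20, 10) == 2

test_partial_last_page_is_counted()
test_exact_multiple_is_unchanged()
```

The test that reproduced the bug stays in the suite afterwards, so the same regression can't sneak back in unnoticed.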
There's no reason to cover an entire legacy code base with tests if you're only changing a small portion of it.
It depends what you're working on. If you've got a project that has to grow fast, adding more features than fixing bugs, without even knowing what you're going to keep in a few months, then I think the time spent fixing regression bugs due to the lack of tests is relatively small.
You may say "I'll thank myself later", but in this sort of business there won't be a later if we're not fast enough. It's a lesser of evils thing.
It would be nice if testing were a faster thing to do. The faster you can do it, the lower the threshold would be for this sort of a judgement call.
I think you'd find most significant software has some kind of testing, but if you inherit a quarter-million lines of code and need to make one focused change, a proposal to spend six months writing full coverage just does not get funded.
This is a fantastic idea - a lot of 'small teams' think that they are too small for extensive tests, or don't know how to organize themselves effectively.
When making a business, you have to continually make tradeoffs. Do I work on new customer features, do I work on customer acquisition features, do I fix bugs in old features, etc.? Testing has value, but it often doesn't have the highest value. I totally agree about raising your game, but I can see how young startups especially race ahead without tests (often only to have it all crash down on them 3 months later).