If all your "test it" steps are being done manually, you're being very inefficient. A good unit test can actually make development go by faster with the added bonus of defending your code from changes down the road that might screw things up.
That's all fine unless you're exploring a solution space. Then the overhead of writing tests which are thrown away _in entirety_ is outrageous. AFAIK TDD really only works if you're either a) prepared to waste a huge amount of time writing tests which will later be completely redundant or b) working to a very clear set of requirements with tools and frameworks you already know intimately. </Rant>
I find I spend significantly more time refactoring/maintaining code than I spend writing exploratory code. It's silly to write tests for prototype work, but once you're actually close to having a working prototype, tests help. Having decent test coverage saves so much more time when refactoring/maintaining.
TDD isn't "THE" way, but test coverage helps. It's not fun (at least not for me), but it's less aggravating than breaking something 6 months down the road in some non-obvious way. I'm human, so I assume I'll screw something up eventually. Having test coverage helps keep me from shooting myself in the foot later.
Different levels of tests work here. I usually start with a very high-level test and then as I implement I do unit tests once I have a reasonably high confidence that the units are a good design.
You should often be able to at least create an automated acceptance test for what you're doing (e.g., "as a user I want to click this button and see XYZ"). This is usually extremely decoupled from the implementation so it should survive refactoring. So then do your exploratory code, get the test passing, and then refactor, introducing lower-level tests.
If that doesn't seem doable you might be taking on a task that doesn't have a good set of requirements. Writing code without any concrete use case in mind is fun and all but that kind of code should usually be limited to a prototyping sandbox.
"If that doesn't seem doable you might be taking on a task that doesn't have a good set of requirements." Or it might have perfectly good requirements which are very hard to write automated tests for.
Consider (for instance) a program to translate ABC music format to proper sheet music. It's easy to state the basic requirement: "The program has to accurately translate ABC format to legible and easy to read sheet music." But even a start at automating a test for that would require converting a graphical representation of sheet music back to the basic notes, and that problem is at least an order of magnitude harder than writing the original program, without factoring in the "easy to read" bit at all. (PS This is a real issue for me; if someone knows of a decent open source sheet music to MIDI/ABC converter, I'd love to hear about it.)
The mistake here is that what you're describing is a functional test, not a unit test.
A unit test for a piece of code like this might be "Given that the time signature for the music is 3:4, the software puts 3:4 in the appropriate place on each line of output".
You then might write a variety of cases testing that it deals correctly with (say) a changing time signature at some point in the piece.
The upside of this is that when you try to fix another bug which has a knock-on effect on this bit of code, lots of your tests are going to fail, immediately identifying where the problem is (or at least letting you know there is one!).
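One way to make that placement claim testable without any image recognition is to split rendering into a pure layout step and a drawing step, and unit-test the layout data. A minimal sketch, assuming a hypothetical `layout` function and `Line` record (invented for illustration, not from any real ABC library):

```python
from dataclasses import dataclass, field

@dataclass
class Line:
    # Each line of output carries the time signature it should display.
    time_signature: str
    measures: list = field(default_factory=list)

def layout(measures, time_signature="3/4", measures_per_line=4):
    """Hypothetical layout step: break a flat list of measures into
    lines, stamping each line with the current time signature."""
    lines = []
    for i in range(0, len(measures), measures_per_line):
        lines.append(Line(time_signature, measures[i:i + measures_per_line]))
    return lines

# The unit test: every output line carries 3/4 -- no pixels, no OCR.
score = layout([["C", "D", "E"]] * 10, time_signature="3/4")
assert all(line.time_signature == "3/4" for line in score)
assert len(score) == 3  # 10 measures, 4 per line -> 3 lines
```

The drawing step that turns a `Line` into graphics stays untested by this, but the "puts 3:4 in the appropriate place on each line" logic is now plain data you can assert against.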
> A unit test for a piece of code like this might be "Given that the time signature for the music is 3:4, the software puts 3:4 in the appropriate place on each line of output".
Don't you still have the same problem colomon described here, though? Testing your stated condition "the software puts 3:4 in the appropriate place on each line of output" still implies some form of image recognition on the graphical output.
But okay, is it really so much easier to write a test which just tests changing the time signature? It's still going to require doing OCR. And if I'm really testing it, I've got to make sure the time signature change comes at the correct point, which requires OCR on the notes around it. Also the bar lines (to make sure the time signature change is reflected in the notes per bar and not just by writing a new time signature out and otherwise ignoring it).
Now, it's perfectly reasonable to have unit tests that the code correctly recognizes time signature changes in ABC format. (And it looks like I forgot to write them: I see inline key changes in the tests but not inline time signature changes.) But that's only testing half of the problem; and it's the easier half by far.
Actually, I've written tests for half of that problem: parsing ABC text into an internal format. That is a very clear problem, and one just needs a representative set of input files (I now have around 500 of them -- the tests still run in less than a minute). It's true that I haven't been able to figure out a useful way to have an automated test for the drawing part. Here's my project: http://code.google.com/p/abcjs/
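As a sketch of what such a parsing test can look like (Python here for illustration; abcjs itself is JavaScript, and `parse_header` is an invented minimal parser, not its real API):

```python
def parse_header(abc_text):
    """Hypothetical minimal parser: collect ABC header fields like
    'T:Title' or 'M:4/4' into a dict keyed by the field letter."""
    fields = {}
    for line in abc_text.splitlines():
        if len(line) > 1 and line[1] == ":":
            fields[line[0]] = line[2:].strip()
    return fields

# Pure text in, plain data out -- trivially assertable, no rendering needed.
header = parse_header("X:1\nT:Cooley's\nM:4/4\nK:Edor")
assert header["T"] == "Cooley's"
assert header["M"] == "4/4"
assert header["K"] == "Edor"
```

Every input file in the corpus becomes one such case: feed it in, compare the resulting structure against the expected one.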
In TDD, what you are talking about is called a "spike": just write a bunch of code to try out assumptions and find a direction to go that you are reasonably sure is a good one.
Prototype all you want without tests, but once you've settled on a design & are ready to make it production-ready you should spend time at least writing unit tests for your work or (ideally) take what you've learned & re-apply to a clean design written in a test-first manner.
You might think this is a waste of time, but putting code into production without tests is going to give you more trouble in the long run.
Of course you have to know your tools. I usually start by writing test cases, then writing the real features while spork and watchr evaluate my new code every time I save. I rarely even open a browser; it's the final thing I do when my tests are green.
It doesn't slow me down, and I can be sure that my feature is there and works even when somebody refactors our software.
That's mostly a beginner's problem. I know several people for whom it's mostly:
1. Write a lot of code
2. Test it
3. It works!
4. Test it
5. It works!
6. Test it
7. It works!
...
15. Test it
16. It works!
17. Are you done?
TDD in no way makes 17 any clearer, because every test they thought of before writing the code passes more or less the first time. And that's the core problem with testing: for a solid developer, what fails has nothing to do with the code itself; it's always a question of edge cases they did not think of. (Wait, some salespeople are their own managers and outside consultants at the same time? Well, that's just Bob.) You can force these people to write tests, but it really does just slow them down.
Not grokking TDD, I literally worked my way through the book, doing each and every step, just to get the gist of the experience.
TDD works fucking great. If you know what you're doing.
Alas, that's a big IF. Most of the stuff I do, I'm just figuring shit out.
Mostly, like when designing a new library, I work outside-in. I imagine how I'd want to do something, writing the client pseudocodeish stuff first, and then trying to make the implementation support my idealized programming mental model.
I end up throwing away A LOT of code. Getting something short and sweet takes a lot of experimentation, most of which are duds.
Though my personal approach of outside-in is superficially like TDD, it's not nearly as rigorous. Were I to be as thorough as TDD, I'd be spending all my time writing tests. Which seems pointless for code I'm just going to throw away.
Anyway. Much respect for the guy who wrote that first TDD book. It's one of the few methodological strategies that works as advertised.
I do strict TDD when I can, and I consider spiking out things part of the process. If I need to approach a problem that I don't know how I'd solve just yet, I create some sample code I later trash and do a lot of work in the Ruby console.
Then, once I've gotten an idea of the problem, I can start writing out some pending tests that help me figure out structure, and then I'll start into the strict TDD loop of write a bit of test, watch it fail, make it pass, write more test, etc.
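That loop can be made concrete with a toy example. Everything here is invented for illustration (a hypothetical `slugify` function), sketched with Python's unittest rather than the Ruby tooling mentioned above:

```python
import unittest

# Step 1: write a small failing test for the next behavior you want.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

# Step 2: write just enough code to make it pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run, watch it go green, then pick the next behavior.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each trip around the loop is one small behavior: a new failing assertion, the minimal code to satisfy it, then a refactor while everything stays green.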
It's not about telling you when you're done with a feature. It's about leaving step 17 with a set of tests so that the next person in the code can tell when he's done without doing steps 1-17. And you'd think it slows you down, but it really doesn't. Some advantages of TDD are less context switching (you can test your code without even leaving the code itself) and a high degree of focus (every atomic subtask has a very clear completion criterion: fix the failing test). Those are things solid developers love.
I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not. You are right that it will work more or less, but they work out the "less" part of that statement sooner rather than later. In my experience, that is a place you get to over time; only people fresh out of college write code for multiple hours straight, then debug everything afterwards.
That is actually the primary goal of tdd, to free you from the more mundane aspects of the code/run/debug loop. The secondary goal is to give you a good base for changing the code later and finding out what broke, again without a ton of manual actions. But as useful as that is (and it is extremely useful), it doesn't hold a candle to the first benefit.
> I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not
I do this all the time. Two reasons:
1. I can keep in "the flow" for an extended period of time. This is more important if the code is especially complicated. If I have to stop every few minutes to fix trivial errors, it's easy to forget important details of how everything is supposed to work.
2. Not having any feedback forces you to reason about the code before writing it. It's very easy to fall into the trap of writing code, then waiting until it is tested to find the errors. Thinking before writing is the fundamental skill that TDD encourages, but you don't need TDD in order to do it.
> only people fresh out of college write code for multiple hours straight, then debug everything afterwards.
Knuth wrote TeX in a notebook and did not test it for a good six months afterwards, though I am not aware if he was out of college at the time.
> I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not
Reading that again, I'm sorry if it came off as sort of attacky, but I really meant that as a "from my personal experience with the people I have worked with over my career" type qualification :)
I can buy #1, but only when it is something you've done a bajillion times before. When you are getting feedback every few minutes, you know exactly what introduced the problem, and don't waste time tracking things down. If you do miss something fundamental and have several hours work behind you, you tend to be more inclined to hack out something to make it work, where if you catch it a few minutes in, you can adjust your design to take it into account. I also find I can keep in the flow pretty easily with constant feedback, and I use simple todo lists to make sure I don't lose track of things.
As for 2, at least for me, I don't think there is any comparison between thinking about how things should work and knowing whether things do work before writing. TDD is definitely not a replacement for deep thought and planning, but I think that is a different beast than working out the details as you are writing them, which is where it comes into play.
> only people fresh out of college write code for multiple hours straight, then debug everything afterwards.
I sort of did it again there, I should have qualified it more :) In my experience, the better programmers I have worked with, paired with, and watched code in videos will get feedback as quickly and often as makes sense, be it with tests or without them. I know if I wrote TeX in a notebook, it would be a guaranteed unmitigated disaster :)
I think most of you guys missed the point. Writing tests is very important WHILE writing code, to write it better.
We MUST write tests not only to catch regressions and to be sure that certain invariants will be maintained; we also write tests to check whether we are writing good code.
Say I need to write a class to do some stuff. The test is the first user of this class. If I cannot write the test very fast, and I see that I'm spending a lot of time doing it, that means my class is poorly designed: not flexible, not very reusable. Maybe I'm doing something wrong with my app design. If I'm writing good, reusable, clean code, testing is easy and fast.
Testing helps me check immediately what's going wrong with the code, not only in terms of bugs.
That's the point I was trying to make. The main benefit comes from eliminating the "run/debug" part of the "code/run/debug" loop. It then just becomes "code/test" where "test" takes all of a couple seconds each time.
A couple of seconds is too long. I run a small test suite in a few milliseconds every time I save a file, and I save my file at every change. When a task is done I run all the tests.
But surely, if you have written a unit of code, you should at least know a) what valid input the code should accept, b) what output the code should return, and c) what you want the code to do! If you know these things, then wouldn't it be easy enough to write tests for at least these conditions?
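For instance, with a hypothetical `apply_discount` unit (invented here for illustration), those three things translate directly into assertions:

```python
def apply_discount(price, percent):
    """Hypothetical unit: known input domain, known output."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (1 - percent / 100), 2)

# a) valid input is enforced
try:
    apply_discount(-1, 10)
    assert False, "should have rejected negative price"
except ValueError:
    pass

# b) output is the discounted price
assert apply_discount(100.0, 25) == 75.0

# c) intended behavior holds at the boundaries
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 100) == 0.0
```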
It's not that. I wrote a fairly complex piece of code in, of all things, T-SQL, and as the logic was unfortunately in the stored procedures and functions, I actually found that the unit tests I wrote for the more granular functions saved me a lot of time. This was because I would make a change to the logic of a function that other functions/procs relied on, and then all of a sudden a whole bunch of tests on functions that worked before would start failing. I'd never have known this without the tests that DID work previously. Saved me a lot of time, I can tell you :-)
1. Write some code
2. Test it
3. Debug your code
4. Test it
5. Debug your code
6. Test it
...
39. Debug your code
40. Test it; now it finally seems to work