Your test suite is good if it runs fast and tells you quickly what you did wrong.
Founder of Neo, Pivotal Labs, Agile & Lean
Unit testing is testing at the smallest level, a single module of code.
Do my tests tell me quickly where the defect is?
You should be able to read and understand the test without ever seeing the code base.
Lesson: Agile Development with Ian McFarland
Step #2 Testing: Your test suite is good if it runs fast and tells you quickly what you did wrong
When people talk about unit testing, they mean a bunch of different things by it. Traditionally, unit testing is testing at the smallest level: a single module of code. Does it do the thing you expect? Then you want to do some integration testing that tests larger units of code and how they interact. Then you want to do end-to-end or front-end full-stack testing using something like Selenium and Cucumber. You also want to do external testing, scenario testing, but that's fairly expensive to run and harder to maintain, so you don't do as much of it. You want the right amount of test coverage. This is an area where people get into trouble because there's an art to it. It's very easy to do it wrong while telling yourself that you're doing it right. There definitely is a balance in getting the right amount of testing.
There are a couple of litmus tests. One is: do my tests tell me quickly where the defect is? If I have a unit test, an integration test, and a front-end test, and the unit test passes but the integration test fails, then I know that my units are probably fine but the problem is in the way they're connected. Whereas if my unit test fails, I know there's a defect at the very smallest level and I need to figure out what's going on. So having good layering of test coverage means, one, you don't have to run as many tests, because each test looks discretely at a very simple interaction. Two, it tells you really rapidly what went wrong: your test suite is good if it runs fast and tells you quickly what you did wrong.
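A minimal sketch of that layering in Python, using invented functions for illustration:

```python
# Hypothetical example: two small units and the code that connects them.

def parse_price(text):
    """Unit: convert a string like '$4.50' to cents."""
    return int(round(float(text.lstrip("$")) * 100))

def apply_discount(cents, percent):
    """Unit: apply a percentage discount."""
    return cents - (cents * percent) // 100

def discounted_price(text, percent):
    """Integration point: the two units wired together."""
    return apply_discount(parse_price(text), percent)

# Unit tests: if one of these fails, the defect is inside a single module.
assert parse_price("$4.50") == 450
assert apply_discount(450, 10) == 405

# Integration test: if the unit tests pass but this fails,
# the defect is in how the pieces are connected.
assert discounted_price("$4.50", 10) == 405
```

When the integration assertion fails while the unit assertions pass, the failure itself points at the wiring, which is exactly the fast defect localization described above.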
One anti-pattern that you see is people holding on to the tests like they're some special thing. The tests are very much part of the code base. They're very much a living part of the code base. If you have tests that you wrote 20 years ago, tests that don't really matter to the system anymore, that's just another waste. That's another kind of excess inventory of source code that you really need to excise. You should be refactoring your tests as much as you're refactoring your code.
Another thing to think about in terms of tests is readability. It's important when you think about code in general to think about how easy it is to understand this piece of code. I've seen overelaborate test frameworks where people parameterize everything, but you look at the test code and it's really complex. What you want is a very explicit, "I come and I do this thing. This is what I expect. I have these preconditions. Given these preconditions, I do this operation. This is what should come out." You should be able to read the tests without ever having seen the code base and have a sense of what it means.
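That explicit shape, preconditions, operation, expectation, might look like this in Python. The cart API here is hypothetical:

```python
# A test written so a reader who has never seen the code base
# can still follow exactly what is being checked.

class Cart:
    """Hypothetical shopping cart, just enough for the test to run."""
    def __init__(self):
        self.items = []
    def add(self, name, price_cents):
        self.items.append((name, price_cents))
    def total(self):
        return sum(price for _, price in self.items)

def test_total_sums_item_prices():
    # Given: a cart with two items
    cart = Cart()
    cart.add("apple", 100)
    cart.add("bread", 250)
    # When: we ask for the total
    total = cart.total()
    # Then: it is the sum of the item prices
    assert total == 350

test_total_sums_item_prices()
```

Nothing is parameterized away: the preconditions, the operation, and the expected result are all visible in one place.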
Cucumber testing, again, is a framework. There's a whole series of test frameworks. Fit was the first one. The idea is that you want to create a domain-specific language for testing that lets an end-user, a product manager, a domain expert write the test and be able to write a bunch of scenarios in a way that's human-readable that you can run against code. Sometimes this works, sometimes this doesn't. It really depends on the team and it really depends on the person that you want to have write these scenarios. I've seen it be very effective where there's a very complex business domain, for example, in an HR application or an accounting application where you have somebody who is really a domain expert in what the rules are. "I know as the domain expert in HR systems that if the employee is Irish, I need to make sure that I have recorded their religion, but in a U.S. context I'm not allowed to know that information."
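That HR rule might read like this in Cucumber's Gherkin syntax. The scenario reflects the speaker's example; the step wording and feature name are invented for illustration:

```gherkin
Feature: Recording employee religion

  Scenario: Irish employment context requires religion to be recorded
    Given an employee whose employment country is "Ireland"
    When I save the employee record without a religion
    Then I am shown a validation error

  Scenario: US employment context forbids recording religion
    Given an employee whose employment country is "United States"
    When I attempt to record the employee's religion
    Then the system rejects the field as not permitted
```

The domain expert writes scenarios like these; developers then bind each step to code with step definitions so the scenarios run against the real system.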
Building a bunch of rules and being able to run those rules against a code base are the kinds of things that I think stuff like Cucumber is actually good at. People use it for a lot of different stuff. It really is a question of does the team enjoy working with it? Do they get value out of it? I've seen almost the same team structure with almost the same kinds of developers who really love Cucumber and really hate Cucumber.
The thing that you'll notice if your tests are too rigid is that you want to make a small change and there's all this test stuff you have to change. If that's true, your testing is wrong. You need to really think through how to refactor your testing. One, is it testing meaningful things? The set phrase is "Test anything that could possibly fail." When I say possibly, I don't mean if a meteor hits the earth, will something happen? I mean something that could reasonably be expected to fail, something that could have unexpected behavior.
If you do an assignment to a variable, that's not going to fail. Like I said, if I write n = 4, I don't have to test that n now contains four. I don't have to test the language. I don't have to test the libraries. I should test my interactions with the libraries. I may have to test behavior that I create. But there are a lot of things that don't need test coverage.
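One way to read that advice in Python terms. The settings function is illustrative, not from the lesson:

```python
import json

# Not worth testing: the language itself.
#   n = 4; assert n == 4   # this tests Python, not your code.

# Not worth testing: the library itself.
#   assert json.loads('{"a": 1}') == {"a": 1}   # this tests the json module.

# Worth testing: behavior you created on top of the library,
# i.e. your interaction with it.
def load_settings(text):
    """Parse settings JSON, falling back to defaults on bad input."""
    try:
        data = json.loads(text)
    except (ValueError, TypeError):
        return {"debug": False}
    return {"debug": bool(data.get("debug", False))}

assert load_settings('{"debug": true}') == {"debug": True}
assert load_settings("not json") == {"debug": False}  # the fallback could fail
```

The fallback path is exactly the kind of thing that "could possibly fail," so it earns a test; the assignment inside it does not.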
I tend to see really healthy test coverage on the best-functioning teams: usually somewhere between 85 and 95%, but not 100% test coverage. There are things that aren't going to fail, or will fail so rarely that they're not worth checking. The odds that you totally got some variable assignment wrong are very, very low, so it's not really worth putting in all the extra work to make sure you didn't do that wrong.