I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.
I'm a senior engineer at a small startup. We need to move fast and keep shipping. We've got CI/CD running mocked unit tests, integration tests, and end to end tests, with patterns and tooling for each.
I have the CTO's support for getting more testing in, I've been able to use tests to cover bugs and regressions, and there's solid coverage on a few critical user-path features. However, I get resistance from the team when it comes to adding enough testing to prevent regressions going forward.
The resistance is usually along lines like:
- You shouldn't have to refactor to test something
- We shouldn't use mocks; only integration testing works
- The same objections, repeated for test types N and M
- We can't test yet, we're going to make changes soon
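On the "you shouldn't have to refactor to test" objection: the refactor involved is often tiny. A minimal sketch (the `charge_customer` function and the payment-gateway setup are made up for illustration) showing that passing a dependency in, instead of reaching for it internally, is usually all it takes to make a function unit-testable with a mock:

```python
import unittest.mock as mock

def charge_customer(customer_id, amount, gateway):
    """The gateway is passed in rather than constructed inside the
    function, so a test can inject a fake. `gateway` only needs a
    `charge(customer_id, amount)` method: the real client in
    production, a mock in tests."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(customer_id, amount)

# The unit test is a few lines: no network, no credentials, no
# real infrastructure to stand up.
fake_gateway = mock.Mock()
fake_gateway.charge.return_value = {"status": "ok"}

result = charge_customer("cust_42", 100, fake_gateway)
assert result == {"status": "ok"}
fake_gateway.charge.assert_called_once_with("cust_42", 100)
```

The "refactor" here is one parameter. Whether that trade is worth it is exactly the discussion worth having with the team.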
How can I convince the team that the tools available to them will help, improving their productivity and cutting down the time they spend firefighting?
I tried to introduce tests at one of the teams I worked on. I was somewhat successful in the end, but it took some time and effort.
Basically, I made sure to work with the people interested in testing their code first. It's good to have other people selling testing instead of being the only voice claiming testing will solve ~~all~~ some problems.
Then I made examples: I tried to show that testing code believed to be untestable was actually not that hard.
I was also very clear that testing everything was not the end goal, but that new projects especially should try to leverage testing, both to allow for regression testing later on and to improve the design. After all, a test is often the first user of a feature. (This was for internal libraries; I expect it would be a harder sell for a GUI, where the end design might come from a non-programmer such as a UX designer.)
At this point, it was seen as a good measure to add a regression test for most bugs found and fixed.
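A regression test per bugfix can be very small. A hypothetical sketch (the `parse_quantity` function and the bug number are invented for illustration): the test name references the bug, and the assertion encodes the exact input that broke, so the failure can't quietly come back.

```python
def parse_quantity(raw):
    """Parse a user-entered quantity. The original code crashed on
    surrounding whitespace; the .strip() is the fix being pinned."""
    return int(raw.strip())

def test_parse_quantity_handles_whitespace_bug_1234():
    # Hypothetical bug #1234: " 3 " raised ValueError before the fix.
    assert parse_quantity(" 3 ") == 3

test_parse_quantity_handles_whitespace_bug_1234()
```

The test costs a few minutes to write while the failing input is still fresh, which is the cheapest moment to capture it.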
Also, starting from the high level, while harder (reliable integration and end to end tests are difficult to introduce), usually yields benefits that are more obvious to most people. People are much less nervous reworking a piece of code that has a testing harness, even if they are not in the habit of testing their own code.
I did point at bugs that could have been easily prevented by a little bit of testing, without blaming anyone. Once the framework is in place and testing has already caught a couple of mistakes, it's much harder to defend the argument that time spent testing could be better spent elsewhere. And that's where we started to get discussions on the balance to strike between feature work and testing. It felt like a win.
It took two years to get to a point where most people would agree that testing has its uses, and most new projects were making use of unit tests.
"It takes years" seems like the most reasonable alternative to forcing my coworkers to TDD or not merge code.
I think that getting people to see the benefits of testing is really worthwhile. Building out some of these test suites, especially the end to end tests, was a really eye-opening experience that gave me a lot of insight into the product. "Submit a test with your bugfix" is another good practice: capturing the error the first time prevents the regression from creeping back in.