- Focus on "automated testing", don't get obsessed with philosophising about "the true nature of a 'unit'", or other such dogma.
- Be empirical: base your rules on what works; don't base your work on "the rules".
- The goal of testing is to expose problems in our program: "test failure" is a success, because we've found a problem (even if that problem is with the test!). Anything else is secondary (e.g. isolating the location of failures, documenting our API, etc.). Avoiding this goal defeats the point (e.g. choosing to ignore edge cases).
- Focus on functionality rather than implementation details, e.g. 'changing a user's email address' rather than 'the setEmail method of the User class'. This improves reliability and makes failures more useful/meaningful (i.e. "this feature broke" vs "this calling convention has changed").
- Mocking is a crutch: it works-around problems that can usually be avoided entirely during design; it can still be very useful when a design can't be changed (e.g. adding tests to a legacy system).
- Testing a real thing is objectively better than testing a fake thing; we should only mock if testing the real thing is unacceptable.
- If two components always exist together, pretending that they're independent is a waste of time and complexity.
- Having some poor tests is better than having no tests. Tests can be added, removed and improved over time, just like anything else.
- "Property checking" is a quick way to find edge-cases and scenarios we wouldn't have thought of.
- Fast feedback loops are important. Reducing I/O and favouring pure calculation usually speeds up testing more than reducing the number or size of tests (e.g. "unit" vs "end-to-end"). Incidentially, this is also how we avoid having to mock.
tl;dr:
- Focus on "automated testing", don't get obsessed with philosophising about "the true nature of a 'unit'", or other such dogma.
- Be empirical: base your rules on what works; don't base your work on "the rules".
- The goal of testing is to expose problems in our program: a "test failure" is a success, because we've found a problem (even if that problem is with the test!). Anything else (e.g. isolating the location of failures, documenting our API) is secondary. Undermining this goal, e.g. by choosing to ignore edge cases, defeats the point.
- Focus on functionality rather than implementation details, e.g. 'changing a user's email address' rather than 'the setEmail method of the User class'. This improves reliability and makes failures more useful/meaningful (i.e. "this feature broke" vs "this calling convention has changed"); the first sketch after this list shows the difference.
- Mocking is a crutch: it works around problems that can usually be avoided entirely during design; it can still be very useful when a design can't be changed (e.g. adding tests to a legacy system; see the second sketch after this list).
- Testing a real thing is objectively better than testing a fake thing; we should only mock if testing the real thing is unacceptable.
- If two components always exist together, pretending that they're independent is a waste of time and complexity.
- Having some poor tests is better than having no tests. Tests can be added, removed and improved over time, just like anything else.
- "Property checking" is a quick way to find edge-cases and scenarios we wouldn't have thought of.
- Fast feedback loops are important. Reducing I/O and favouring pure calculation usually speeds up testing more than reducing the number or size of tests (e.g. "unit" vs "end-to-end"). Incidentally, this is also how we avoid having to mock; the last sketch after this list illustrates the split.
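A minimal sketch of the "functionality over implementation" point, assuming a hypothetical `User` class (the names and validation rule are made up): we assert on the observable outcome of changing an email address, not on which methods got called.

```python
# Hypothetical User class, purely for illustration.
class User:
    def __init__(self, email: str):
        self.email = email

    def set_email(self, new_email: str) -> None:
        if "@" not in new_email:
            raise ValueError("not an email address")
        self.email = new_email


def test_changing_a_users_email_address():
    user = User("old@example.com")
    user.set_email("new@example.com")
    # Assert on the functionality (the address changed), not on which
    # methods were called, in what order, or with what arguments.
    assert user.email == "new@example.com"
```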
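A sketch of the "legacy system" case for mocking, under invented names: a hypothetical `Checkout` insists on charging a real card mid-calculation and its design can't be changed, so we stub out just that one call and run everything else for real.

```python
from unittest.mock import patch


class PaymentGateway:
    """Stands in for a legacy dependency that talks to a real provider."""
    def charge(self, card: str, amount: float) -> str:
        raise RuntimeError("this would hit a real payment provider")


class Checkout:
    """Legacy design we can't change: charging is tangled into the calculation."""
    def __init__(self) -> None:
        self.gateway = PaymentGateway()

    def total_with_shipping(self, subtotal: float) -> float:
        self.gateway.charge("4111-....", subtotal)  # unavoidable side effect
        return subtotal if subtotal >= 100 else subtotal + 5.0


def test_orders_of_100_or_more_get_free_shipping():
    checkout = Checkout()
    # Mock only the part that's unacceptable to run; the rest is real.
    with patch.object(checkout.gateway, "charge", return_value="txn-123"):
        assert checkout.total_with_shipping(120.0) == 120.0
```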
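A quick property-checking example using the Hypothesis library (any QuickCheck-style tool works the same way): instead of hand-picking inputs, we state properties that should hold for all inputs and let the tool hunt for counter-examples such as empty lists, duplicates or huge numbers.

```python
from hypothesis import given, strategies as st


@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # Sorting an already-sorted list should change nothing.
    assert sorted(sorted(xs)) == sorted(xs)


@given(st.lists(st.integers()))
def test_sorting_preserves_length(xs):
    # Sorting must neither add nor drop elements.
    assert len(sorted(xs)) == len(xs)
```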
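A sketch of the "reduce I/O, favour pure calculation" point, with made-up names: the discount logic is a pure function that can be tested instantly without a database, network or mocks, while the I/O lives in a thin wrapper with little logic left to test.

```python
def apply_discount(total: float, loyalty_years: int) -> float:
    """Pure calculation: fast to test, no setup, no mocks."""
    rate = min(0.05 * loyalty_years, 0.25)
    return round(total * (1 - rate), 2)


def discount_order(order_id: str, db) -> None:
    """Thin I/O wrapper around a hypothetical `db`; barely any logic to test."""
    order = db.load_order(order_id)
    order.total = apply_discount(order.total, order.customer.loyalty_years)
    db.save_order(order)


def test_discount_is_capped_at_25_percent():
    assert apply_discount(100.0, 10) == 75.0


def test_new_customers_pay_full_price():
    assert apply_discount(80.0, 0) == 80.0
```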