eXtreme Testing
Advanced Automated Testing
adi.roiban@gmail.com
License Creative Commons Attribution 4.0
About me
- Generalist - Jack of all trades, master of none
- Developer - free software contributor
- Twisted Matrix Core Dev
- Tiny-size entrepreneur
About you
Do you need eXtreme?
- Have tests running on more than one platform
- Have more than 1000 tests for a project
- Have more than 50% code coverage
- All tests are now passing on master/trunk
- A bug in production will cost more than $1000
Testing is always needed, but not to the eXtreme
100% hard coverage on all testing code
- Make sure skipped tests are executed on some systems
- Move any TODO code into a ticket … keeps the code clean, and tests which are always skipped will decay
- Include any testing helpers in the coverage report
- 100% coverage on all assertions; write tests for custom assertions to make sure that they indeed work
- Enable and enforce branch coverage (a sketch follows this list).
- Coverage will also help manage the size of the code by identifying unused code/branches.
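As a sketch of what "enforce" can look like in practice, here is a minimal runner using the coverage.py API; the `mypackage` name, the discovery path, and the hard 100% threshold are assumptions for the example:

```python
import sys
import unittest

import coverage

# Measure branch coverage, not just line coverage.
cov = coverage.Coverage(branch=True, source=['mypackage'])
cov.start()

# Run the whole suite under measurement.
suite = unittest.TestLoader().discover('mypackage/tests')
result = unittest.TextTestRunner().run(suite)

cov.stop()
# report() prints the per-file table and returns the total percentage.
total = cov.report(show_missing=True)

# Enforce: a red suite or incomplete coverage fails the build.
if not result.wasSuccessful() or total < 100:
    sys.exit(1)
```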
Enforce the tests (part 1)
- Don’t merge until all tests are green (slow or fast)
- Nightly runs on master / post commit on master to double check that master is safe
- Don’t merge until all tests have been executed, with 100% test coverage. Run all tests before the merge.
- Tests which are always skipped have no use, and like code without usage, they should be removed.
- It is less painful to fix a test before the merge than to fix it post-commit, a week later, with tests failing due to multiple merges.
Enforce green tests (part 2)
- Failures on master will show up as failures in all branches, and developers will have to put in extra work to check whether the failures are due to their own changes.
- A builder which always fails will just be ignored, and if you ignore a test, why waste resources running it?
- Tests which are always failing are viral :)
- Fix tests before the merge. It is less painful than fixing post-commit, a week later, with tests failing due to multiple merges.
- Better to skip on conditions, or remove a test which is always failing... easy with good tech-debt practices
Failing functionality
- Don't just fix the code and go
- Even if the bug looks minor, take it seriously
- Write tests for all bugs; they might reveal big issues
- For well-written code, even small bugs signal a serious design issue
- Start by writing an automated test to reproduce the bug (sketched below)
- Write automated tests at a lower level
- The same is valid for manual functional tests
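A minimal sketch of "reproduce first"; `parse_header` and the failure are hypothetical stand-ins for whatever the bug report describes:

```python
import unittest

from mypackage.headers import parse_header  # Hypothetical code under test.


class ParseHeaderRegressionTest(unittest.TestCase):
    """Regression test written before touching the production code."""

    def test_empty_value(self):
        # The exact input from the bug report; it used to raise an
        # exception instead of returning an empty value.
        name, value = parse_header('X-Custom:')

        self.assertEqual('X-Custom', name)
        self.assertEqual('', value)
```

Make sure it fails for the same reason as the report, then fix the code and keep the test forever.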
Versioned testing data
- Keep testing data in memory, as a variable, as close as possible to the code (RSA keys or other things that take a long time to compute) - see the sketch below
- Keep any test data files in the same repo. Load them before the test starts
- Keep external test data forever, and versioned
- Use Docker for managing external dependencies, with versioned images
- Have the external DB work with a prefix, with data generated at the start of the test
- You can always go back to an older revision and know that the test data is in sync
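One possible shape for the in-memory and in-repo data, with placeholder content instead of real key material; `testdata` and `loadTestData` are hypothetical names:

```python
import os

# Pre-computed once (generating an RSA key is slow), pasted here and
# versioned together with the tests. Placeholder content only.
RSA_PRIVATE_KEY_PEM = (
    b'-----BEGIN RSA PRIVATE KEY-----\n'
    b'...pre-generated key material pasted verbatim...\n'
    b'-----END RSA PRIVATE KEY-----\n'
)

# Bigger blobs live as files in the same repository, next to the tests.
DATA_DIR = os.path.join(os.path.dirname(__file__), 'testdata')


def loadTestData(name):
    """Return the content of a versioned test data file."""
    with open(os.path.join(DATA_DIR, name), 'rb') as stream:
        return stream.read()
```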
Manage the size of the code
- You will soon get to 25% production code and 75% testing code
- Write tests only for the functionality that you need, and implement only that functionality
- Extract common testing code and write parametric tests based on the extracted code (see the sketch after this list)
- Avoid writing tests for private things.
- Write tests at as high a level as possible, as long as they are fast and coverage is complete.
- Write at least one integration test for one happy path and one error path.
- Get rid of setUp / tearDown, use helpers and cleanups
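A sketch of the extracted-common-code idea; `slugify` and the expected values are hypothetical:

```python
import unittest

from mypackage.text import slugify  # Hypothetical function under test.


class SlugifyTest(unittest.TestCase):

    def assertSlug(self, expected, text):
        """The extracted, parametric check shared by all the cases."""
        self.assertEqual(expected, slugify(text))

    def test_spaces(self):
        self.assertSlug('some-title', 'Some Title')

    def test_punctuation(self):
        self.assertSlug('some-title', 'Some, Title!')
```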
Manage the time tests are executed
- You will spend at most 40% of your time writing production code, 40% writing tests, and 20% waiting for tests to execute
- Get familiar with parallel tasks
- Run tests as fast as you can and as parallel as you can / afford
- At an average of 15 tests per second, with 5000 tests you get a 5 to 6 minute run
- Fast tests run at 50 tests per second; with 3500 of them you get about 1 minute
Run tests the smart way
- Have a quick run, with tests that are fast or fail often. It should run in less than 1 minute
- Be able to run a single test (or a subset) on the CI system, without a commit… read this as: get rid of the easy-to-use Travis CI and switch to Buildbot and buildbot-try
- Get each builder to have a separate test run report. You can trigger a rebuild just for that builder.
- Run tests in stages: fast tests and those most likely to fail first.
Don't setUp / tearDown
- Use helper methods to create fixtures for each test
- Use `addCleanup` to call code at the end of a test (see the sketch below)
- This results in tests which are less dependent on each other
- It makes each test self-contained (no need to read other parts to see how it is arranged)
- With code coverage reporting, it is much easier to detect leftover code after test refactoring
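A minimal sketch of the pattern with the standard `unittest` API; `makeTempFile` is a suite-local helper, not a stdlib method:

```python
import os
import tempfile
import unittest


class ConfigTest(unittest.TestCase):

    def makeTempFile(self, content=b''):
        """Helper creating a fixture only for the tests that need it."""
        fd, path = tempfile.mkstemp()
        os.write(fd, content)
        os.close(fd)
        # Runs at the end of the test, even when it fails; no tearDown.
        self.addCleanup(os.remove, path)
        return path

    def test_read(self):
        # The whole arrangement is visible inside the test itself.
        path = self.makeTempFile(b'level = 5\n')

        with open(path, 'rb') as stream:
            self.assertEqual(b'level = 5\n', stream.read())
```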
Check for side effects and leaked resources
- Tear-down which checks that the system is clean (filesystem, DB, Docker, network sockets) - a sketch follows this list
- Set-up which hooks into the notification system and makes sure any notification is handled by the test
- (preaching) Snapshot objects before and after the test to make sure none are left behind... not doing this, since it is slow
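A sketch of the clean-system check, for the filesystem only; the scratch-directory convention is an assumption of the example:

```python
import os
import tempfile
import unittest


class CleanFilesystemTestCase(unittest.TestCase):
    """Base class which fails any test that leaves files behind."""

    def setUp(self):
        self.scratch = tempfile.mkdtemp()
        # Cleanups run in reverse order, so this check runs last.
        self.addCleanup(self._checkClean)

    def _checkClean(self):
        leftovers = os.listdir(self.scratch)
        if not leftovers:
            os.rmdir(self.scratch)
        self.assertEqual(
            [], leftovers, 'Test leaked files: %s' % (leftovers,))
```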
Don’t use mock
- Write your own simple implementation… more work, but you get strict code
- Look for a fully functional in-memory implementation: SQLite, ldaptor
- Write your production code to support in-memory / fast operations
- Prefer composition, so that you can inject a simple implementation (see the sketch below)
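A sketch of composition plus a simple in-memory implementation; all the names are hypothetical:

```python
class MemoryMailer(object):
    """Fully functional in-memory replacement for the real mailer."""

    def __init__(self):
        self.sent = []

    def send(self, recipient, body):
        self.sent.append((recipient, body))


class SignupService(object):
    """Production code receives its collaborator via composition."""

    def __init__(self, mailer):
        self._mailer = mailer

    def signup(self, email):
        self._mailer.send(email, 'Welcome!')


# In a test, inject the simple implementation and assert on real
# state, not on recorded mock calls.
mailer = MemoryMailer()
SignupService(mailer).signup('dev@example.com')
assert mailer.sent == [('dev@example.com', 'Welcome!')]
```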
Monitor the tests
- Have an augmented test runner which measures both inner and outer running time (only the test, or test + setup + teardown) - sketched below
- Augment the test definition to mark tests which are expected to be slow
- 100% coverage to make sure no test is always skipped. Tests which are not executed will decay
- (preaching) Monitor the test failure rate. Split out tests which are slow and seldom fail, and run them only after the fast, often-failing tests are executed.
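A cheap sketch which measures only the outer time (setup + test + teardown), without a full custom runner; the 0.5 second budget is an assumption:

```python
import time
import unittest


class TimedTestCase(unittest.TestCase):
    """Reports tests which exceed the agreed time budget."""

    MAX_SECONDS = 0.5  # Assumed budget for a 'fast' test.

    def run(self, result=None):
        start = time.monotonic()
        try:
            return super(TimedTestCase, self).run(result)
        finally:
            duration = time.monotonic() - start
            if duration > self.MAX_SECONDS:
                print('SLOW %s: %.3fs' % (self.id(), duration))
```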
That was it
Questions / Comments
Thanks for your time!
adiroiban@gmail.com
Moderate TDD (2016)
https://slides.com/adiroiban/moderate-tdd
More than Unit Testing (2013)
http://slides.com/adiroiban/mai-mult-decat-unit-test