Testing At Its Best
Objectives
- Discuss the pros and cons of automated testing explicitly and honestly.
- Discuss the differences between (and importance of) common test types:
  - Unit
  - Integration
  - Functional
  - Acceptance
Automated Testing
Automated testing is the use of code to prove that other code is correct.
[Diagram: our tests run against our software and report PASS or FAIL]
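For example, here is a tiny sketch of that idea using Node's built-in assert module (the add function is just an illustration):

var assert = require('assert');

// "Our Software": the code under test
function add(a, b) {
  return a + b;
}

// "Our Tests": code that checks the software and reports PASS or FAIL
assert.strictEqual(add(2, 2), 4);
console.log('PASS');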
Automated Testing: Why?
All Together Now: micro-research problem
Use Google to find out:
What are some pros and cons of test-driven development? Everything is a tradeoff, so expect to find some of both.
Automated Testing: Why?
There are two main ways that tests help us:
- Operationalize development
- Write maintainable code
Automated Testing: Why?
There are two main ways that tests can hinder us:
- Development Speed
- Creative restriction
Operationalization
(is that even a word?)
In this case, tests help us move code from "in development" to "in production" faster.
Operationalization
Quickly verify that new code didn't break old features; bugs in old features caused by new code are called "regression" bugs.
[Diagram: the existing tests run against our new code and report PASS or FAIL]
Operationalization
We can use tools to gate new code entering the master branch based on a test suite
[Diagram: "git push origin master" triggers a GitHub hook that runs the existing tests against our new code; PASS means the push is accepted, FAIL means the push is rejected]
Benefits to Code
Operational benefits are mostly for the business, and there are lots of other ways to use tests operationally.
Tests also help developers write better code.
Portability
In order to test our software, the tests need to know about the software. This is good, because it mandates that our code is "portable".
Portability
Our Software:
module.exports = {
  funcOne: funcOne,
  funcTwo: funcTwo
  // ...
};
Our Tests:
var ourModule = require('../ourModule');
One benefit of testing is that we know we can use our modules from at least one other context.
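Put together, a minimal sketch of how that might look (file names and function bodies are illustrative):

// ourModule.js
function funcOne() { return 1; }
function funcTwo() { return 2; }

module.exports = {
  funcOne: funcOne,
  funcTwo: funcTwo
};

// test/ourModule.test.js -- the test file is "another context"
var ourModule = require('../ourModule');
console.log(ourModule.funcOne()); // 1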
Portability
This means good tests should FORCE our code to be portable, i.e. testable in a vacuum.
Expectations
describe('Our function', function() {
  it('Should have describable behavior', function() {
    expect(add(2, 2)).to.equal(4);
  });
});
Tests force us to describe our expectations.
Expectations
describe('Our function', function() {
  it('Should have describable behavior', function() {
    expect(add(2, 2)).to.equal(4);
  });
});
- It's impossible to write tests without codifying our expectations in the software.
- Sometimes these expectations will seem trivial: "Do I really need to test that 2 + 2 = 4?"
- Yes, because the consequences for 2+2 NOT being equal to 4 are catastrophic.
Expectations
This means good tests make our expectations about the code explicit
Testing Forces us to Think About our Code
Simply writing descriptions of what the code MUST do helps us write better code.
Codifying the meaning of our description with code allows us to make those descriptions more explicit.
Example: Implicit
describe("parensChecker", function() {
it('Should return(true) for valid nested parens', function() {
// ...
});
it('Should return(false) for invalid nested parens', function() {
// ...
});
it('Should gracefully return(false) for all invalid inputs', function(){
// ...
});
Sure, but what do "valid" and "invalid" really mean?
Example: Explicit
describe("parensChecker", function() {
it('Should return true if input is a string representing properly' +
' nested and aligned parenthesis, brackets or curly braces: (){}[]', function() {
expect(parensChecker('{}')).to.equal(true);
// ...
});
it('Should return false if there are an odd number of parens', function() {
expect(parensChecker('())').to.equal(false);
// ...
});
it('Should return false if the opening and closing braces do not match in type', function(){
expect(parensChecker('(]')).to.equal(false);
// ...
});
it('Should return false if the input is not a string'), function() {
expect(parensChecker(123)).to.equal(false);
// ...
});
it('Should return false if the input is a string, but contains anything other than ' +
'the allowed character set of {}[]()', function() {
expect(parensChecker('123').to.equal(false);
// ...
});
});
it('Should return true for valid nested parens', function() {
  // Simple cases - if these break nothing will work.
  expect(parensChecker("[]")).to.equal(true);
  expect(parensChecker("()")).to.equal(true);
  expect(parensChecker("{}")).to.equal(true);
  // Mixing cases - test clean combinations which
  // will be easier to debug when they break.
  expect(parensChecker("()[]{}")).to.equal(true);
  expect(parensChecker("({[]})")).to.equal(true);
  expect(parensChecker("{[()]}")).to.equal(true);
  expect(parensChecker("[({})]")).to.equal(true);
  // More complex cases - sometimes even randomly
  // generated cases - the goal of which is to ensure
  // any edge case is found.
  expect(parensChecker("[][][]{}(){[]}({})")).to.equal(true);
  expect(parensChecker("([([[{(){}[()]}]])])")).to.equal(true);
});
Look a little deeper
Testing Forces us to Use Our Own Code
It's easy to take this for granted, but look at that last example:
it('Should return true for valid nested parens', function() {
  // Simple cases - if these break nothing will work.
  expect(parensChecker("[]")).to.equal(true);
  expect(parensChecker("()")).to.equal(true);
  expect(parensChecker("{}")).to.equal(true);
  // Mixing cases - test clean combinations which
  // will be easier to debug when they break.
  expect(parensChecker("()[]{}")).to.equal(true);
  expect(parensChecker("({[]})")).to.equal(true);
  expect(parensChecker("{[()]}")).to.equal(true);
  expect(parensChecker("[({})]")).to.equal(true);
  // More complex cases - sometimes even randomly
  // generated cases - the goal of which is to ensure
  // any edge case is found.
  expect(parensChecker("[][][]{}(){[]}({})")).to.equal(true);
  expect(parensChecker("([([[{(){}[()]}]])])")).to.equal(true);
});
Do you think you would really have bothered to use your code with:
"([([[{(){}[()]}]])])"
it('Should return false if the input is a string, but contains anything ' +
   'other than the allowed character set of {}[]()', function() {
  expect(parensChecker('123')).to.equal(false);
  expect(parensChecker('{}()[]fail')).to.equal(false);
  expect(parensChecker('')).to.equal(false);
  expect(parensChecker('\n')).to.equal(false);
  expect(parensChecker('\r')).to.equal(false);
});
What about these failure cases?
This means good tests use the code in all the ways it might be used.
It means a good test hits all of the "code paths".
Testing Can Help Us Write The Code
Red Green Refactor is a great way to make incremental progress.
RGR
Let's use the previous example. Step 1:
it('Should return true for valid nested parens', function() {
  // Simple cases - if these break nothing will work.
  expect(parensChecker("[]")).to.equal(true);
  expect(parensChecker("()")).to.equal(true);
  expect(parensChecker("{}")).to.equal(true);
});
Before we write our function, let's define some very BASIC success criteria.
RGR
Step 2:
Write a function which can pass the first 3 tests!
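One possible way to go green here is a deliberately naive sketch that does just enough to pass those three tests:

function parensChecker(input) {
  // Deliberately naive: just enough to pass the three simple cases.
  // Later tests will force a real algorithm.
  return input === '()' || input === '[]' || input === '{}';
}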
RGR
Step 3:
it('Should return true for valid nested parens', function() {
  // Mixing cases - test clean combinations which
  // will be easier to debug when they break.
  expect(parensChecker("()[]{}")).to.equal(true);
  expect(parensChecker("({[]})")).to.equal(true);
  expect(parensChecker("{[()]}")).to.equal(true);
  expect(parensChecker("[({})]")).to.equal(true);
});
Add some more tests; these may pass, or they may fail.
RGR
Step 4:
Update the code until it passes both the old tests and the new ones.
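After a few red/green cycles, the code might land on the classic stack-based approach. Here is a sketch that passes every case shown in these slides:

function parensChecker(input) {
  // Only non-empty strings can be valid.
  if (typeof input !== 'string' || input.length === 0) {
    return false;
  }

  var closers = { ')': '(', ']': '[', '}': '{' };
  var stack = [];

  for (var i = 0; i < input.length; i++) {
    var ch = input[i];
    if (ch === '(' || ch === '[' || ch === '{') {
      stack.push(ch); // remember each opener
    } else if (closers[ch]) {
      // A closer must match the most recent opener.
      if (stack.pop() !== closers[ch]) {
        return false;
      }
    } else {
      // Anything outside the allowed character set fails gracefully.
      return false;
    }
  }

  // Leftover openers mean the input was unbalanced.
  return stack.length === 0;
}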
RGR(R)
- Red: write a test which fails
- Green: fix the code so it passes
- Refactor: keep the tests passing while making stylistic changes
- Repeat
Good With Bad
- Tests make the API more rigid
  - Good because it improves predictability
  - Bad because if the API really must change, it's harder
  - Good because API changes are a BFD
  - Writing tests for a prototype, for example, may be overkill
- Tests are more work up front
  - This is good for all the previous reasons
  - This is bad because it's more work per feature
  - Typically, though, tests *save* time in the long run
  - If the tests become slow, everyone is slowed down significantly
Test Types
- Unit
- Test a single 'unit' of code -- typically a function
- No network calls, no database
- Most common & fastest
[Diagram: where each test type sits relative to our new feature, our codebase, the database, internet services, and the environment; this diagram recurs for each test type below]
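For example, a unit test might look like the sketch below; slugify is a hypothetical pure function with no network or database access:

var expect = require('chai').expect;
var slugify = require('../lib/slugify'); // hypothetical unit under test

describe('slugify', function() {
  it('lowercases a title and replaces spaces with hyphens', function() {
    expect(slugify('Testing At Its Best')).to.equal('testing-at-its-best');
  });
});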
Test Types
- Integration
- Tests how two units work together OR
- How a unit works when we add database/network
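An integration test might exercise a unit together with a real (or test) database. A sketch, where createUser and the db module are hypothetical:

var expect = require('chai').expect;
var db = require('../lib/db');                 // hypothetical database module
var createUser = require('../lib/createUser'); // hypothetical unit under test

describe('createUser', function() {
  it('writes the new user to the database', function() {
    // Returning the promise lets Mocha wait for the async work.
    return createUser(db, { name: 'Ada' })
      .then(function(user) {
        return db.findUserById(user.id);
      })
      .then(function(found) {
        expect(found.name).to.equal('Ada');
      });
  });
});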
Test Types
- Functional
- Bigger than an integration test, concerned with the outcome of something like a route
- Typically not concerned with "side effects"
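A functional test might drive an entire route and assert only on the outcome the user sees. A sketch using the supertest library against a hypothetical Express app:

var request = require('supertest');
var app = require('../app'); // hypothetical Express app

describe('GET /users/1', function() {
  it('responds with the user as JSON', function() {
    // We only check the outcome, not what happened internally.
    return request(app)
      .get('/users/1')
      .expect(200)
      .expect('Content-Type', /json/);
  });
});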
Test Types
- Acceptance
- Like a functional test, but concerned with the internal behavior as well as the outcome
Acceptance VS Functional
- Functional tests ensure that the USER sees some expected behavior.
- Acceptance tests ensure that the MACHINE remains in an expected state.
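Concretely, the two can share a setup but assert different things. A sketch (the route and db module are hypothetical):

var request = require('supertest');
var expect = require('chai').expect;
var app = require('../app');   // hypothetical Express app
var db = require('../lib/db'); // hypothetical database module

describe('POST /users', function() {
  it('functional: the USER sees a 201 with the new user', function() {
    return request(app)
      .post('/users')
      .send({ name: 'Ada' })
      .expect(201);
  });

  it('acceptance: the MACHINE actually stored the user', function() {
    return request(app)
      .post('/users')
      .send({ name: 'Grace' })
      .expect(201)
      .then(function(res) {
        return db.findUserById(res.body.id);
      })
      .then(function(found) {
        expect(found.name).to.equal('Grace');
      });
  });
});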
Questions?
After a 10-minute break, we're going to set up a test environment together.
Testing At Its Best
By Tyler Bettilyon