Great talk on the topic: https://www.destroyallsoftware.com/talks/ideology
to_hsl :: RGB -> HSL
to_hsl(Date)
// Error
to_hsl(null)
// Error
to_hsl( RGB(-1,-2,-3) )
// OK!
to_hsl( RGB(-1,-2,-3) )
// Error
to_hsl(null)
// Error (at runtime!)
to_hsl( RGB(255,0,0) )
// HSL(0,100,50)
to_hsl :: RGB -> HSL
Types do not ensure correctness
Tests are faster than clicking through the interface
Ensure correctness of use cases
Bonus: improves (software) design
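For example (an illustrative sketch, not from the slides, assuming to_hsl validates its input at runtime as the error above implies), a test can pin down the use cases that the type checker waves through:

it('rejects out-of-range channels', () => {
  // RGB(-1,-2,-3) type-checks, but the use case is still wrong
  expect( () => to_hsl( RGB(-1,-2,-3) ) ).toThrow();
});
it('converts pure red', () => {
  expect( to_hsl( RGB(255,0,0) ) ).toEqual( HSL(0,100,50) );
});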
Unit testing is having tests for the code
TDD is writing a test, then writing the code to pass it
Is it slower?
Yes! If you only take into account writing code.
No. If you take into account verification.
Tests are automated verification.
Also: It is a skill. You are slow at first.
beforeEach(() => jasmine.clock().install());
afterEach(() => jasmine.clock().uninstall());
it('resolves promise after timeout', async () => {
const spy = jasmine.createSpy();
const promise = RequestManager.timeoutPromise(spy, 1000);
jasmine.clock().tick(999);
expect(spy).not.toHaveBeenCalled();
jasmine.clock().tick(1);
expect(spy).toHaveBeenCalled();
});
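For context, here is a minimal sketch of the kind of helper this test drives (the real RequestManager is not shown in the slides; the shape below is an assumption):

const RequestManager = {
  // assumed behavior: run `callback` after `ms` milliseconds, resolve with its result
  timeoutPromise(callback, ms) {
    return new Promise((resolve) => {
      setTimeout(() => resolve(callback()), ms);
    });
  },
};

Because jasmine.clock() replaces setTimeout, tick(999) and tick(1) control exactly when the callback fires.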
it('gives random quotes', () => {
// https://www.xkcd.com/221/
spyOn(Math, 'random').and.returnValue(0.8);
expect( getRecommendation([ 'a', 'b', 'c', 'd' ]) ).toEqual('d');
});
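The expected 'd' suggests an implementation along these lines (a sketch under that assumption, not the actual getRecommendation):

function getRecommendation(quotes) {
  // Math.random() is stubbed to 0.8, so Math.floor(0.8 * 4) === 3 -> 'd'
  return quotes[Math.floor(Math.random() * quotes.length)];
}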
Sources of randomness:
it('loads data from the server', () => {
const requestSpy = spyOn(Axios, 'get')
.and.callFake(
() => Promise.resolve({
data: [...]
})
);
const page = mount(MyPage);
expect(requestSpy.calls.count()).toEqual(1);
expect(requestSpy.calls.argsFor(0)[0]).toEqual('foo');
});
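For reference, a hypothetical MyPage that would satisfy this test (the real component is not in the slides; this assumes a Vue component that fetches once on creation, with Axios being the axios client):

const MyPage = {
  template: '<div />',
  data: () => ({ items: [] }),
  async created() {
    // exactly one request on mount, to the 'foo' endpoint
    const response = await Axios.get('foo');
    this.items = response.data;
  },
};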
- Knock, knock
- An async test.
- Who's there?
it('works', () => {
// 10 assertions
});
// vs
it('works in case A', () => {
// assertion A
});
it('works in case B', () => {
// assertion B
});
Failed: it works
=> start debugging
Failed: it works in case B
=> go fix case B
it('works in case A', () => {
// arrange
const page = mount(MyPage);
// act
page.find(Button).trigger("click");
// assert
expect(page.emitted().save).toBeTruthy();
});
Arrange, Act, Assert, Act, Assert, Act, Assert...
= split into 3 tests, extract the shared "Arrange"
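One way that extraction can look (a sketch reusing the MyPage example above; the remaining cases are placeholders):

describe('MyPage', () => {
  let page;
  beforeEach(() => {
    page = mount(MyPage); // shared Arrange
  });

  it('emits save on click', () => {
    page.find(Button).trigger('click');        // Act
    expect(page.emitted().save).toBeTruthy();  // Assert
  });

  // ...two more it() blocks, each with its own Act + Assert
});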
it('works in case A', () => {
const page = mount(MyPage);
page.vm._save(); // locks implementation in place
expect(page.emitted().save).toBeTruthy();
});
Testing only the public API makes it easier to change the implementation.
AKA refactoring.
// FilterController.prototype.render
self.startDate = self.report.navigator.options.actualStartDate || new Date();
self.endDate = self.report.navigator.options.actualEndDate || new Date();
var canHideConfidence = !self.report.navigator.isDatasetAbTest;
view.html(Leanplum.templates.reportFilters({
// ...
confidence: self.report.navigator.options.confidence,
hasGroupBy: self.report.navigator.hasGroupBy(),
viewCohorts: self.report.navigator.options.viewCohorts,
viewConfidence: self.report.navigator.options.viewConfidence || !canHideConfidence,
showViewStudyButton: self.report.navigator.isCampaignOrStudy(),
viewStudyUrl: self.report.navigator.getSetupPageUrl(),
viewStudyButtonLabel: self.report.navigator.getPageKind(),
holdbackToggleOptions: self.report.navigator.getHoldbackToggleOptions(),
report: self.report.navigator.report
}));
// implementation
function to_hsl() {
return HSL(0,100,50);
}
// this test brings coverage to 100%
expect(
to_hsl( RGB(255,0,0) )
).toEqual(
HSL(0,100,50)
);
// nothing else works, though
Coverage is only useful when combined with the other approaches
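For instance (an added illustration, not from the slides), one more input is enough to expose the hard-coded result despite the 100% coverage:

to_hsl( RGB(0,255,0) )
// expected HSL(120,100,50), actual HSL(0,100,50)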
mvn clean install
-Denvironment=STAGING # environment
-Dapplication=SsoApp # app settings (appId, URL)
-Dtest='RunSSOTests' # runs tests by tag (@SSO)
-Dtype=local # not CI
And here's to no-stress deployments 🍹