Gleb Bahmutov
JavaScript ninja, image processing expert, software quality fanatic
- Uninterrupted
- Increased availability due to no commuting
- Relaxed and Happy
- Interrupted by spouses, roommates, children
- Unpredictable availability due to conflicting needs
- Not relaxed, not happy
Photo of Titanic
Are you going to be ready?
My dear, here we must run as fast as we can, just to stay in place. And if you wish to go anywhere you must run twice as fast as that.
Lewis Carroll, "Through the Looking-Glass"
Talking about the need to upgrade dependencies to keep up with security and performance updates
(a long, long time ago, in a galaxy far, far away)
I am not doing this
can be overwhelming
New feature!
Bug fixes!
No flood
No or little manual work
Your web framework
Your backend framework
Your main production libraries
Build tools
Minor production utilities
prod
dev
Disable upgrades for everything except a few production dependencies
{
  "extends": ["config:base"],
  "enabledManagers": ["npm"],
  "packageRules": [
    {
      "packagePatterns": ["*"],
      "excludePackagePatterns": ["react", "react-dom"],
      "enabled": false
    }
  ]
}
My users care about these libraries
I have automated tests for these dependencies; if the tests pass, I can merge the update with confidence
renovate.json
Not everyone follows semantic versioning 😞
patch: a bug was fixed 🐞
minor: a new feature was added 🎉
major: breaking API change 👀
Semantic release tools inspect commits since last release to update version
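A toy sketch (not the code of any real tool) of how Conventional Commits-style messages map to the next version bump; tools such as semantic-release do this by parsing the commit history since the last tag. The nextBump helper below is hypothetical, only to illustrate the idea:

// toy example: decide the next semver bump from commit messages
function nextBump(commitMessages) {
  const isBreaking = (m) =>
    m.includes('BREAKING CHANGE') || /^\w+(\(.*\))?!:/.test(m)
  if (commitMessages.some(isBreaking)) return 'major'
  if (commitMessages.some((m) => m.startsWith('feat'))) return 'minor'
  return 'patch'
}

console.log(nextBump(['fix: handle empty todo text']))        // patch
console.log(nextBump(['feat: add dark mode', 'fix: a typo'])) // minor
console.log(nextBump(['feat!: drop Node 8 support']))         // major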
upgrade requires effort (maybe)
new feature!
prod
dev
trusted
No or little manual work
{
  "extends": ["config:base"],
  "automerge": true,
  "major": {
    "automerge": false
  }
}
renovate.json
Do you trust your tests enough to automerge dependency upgrades?
Do you trust your tests to find possible mistakes:
Static code analysis tools (linters)
Cover your code in meaningful tests
$ npm i -D cypress
describe('Todo App', () => {
  it('completes an item', () => {
    cy.visit('http://localhost:8080')
    // there are several existing todos
    cy.get('.todo').should('have.length', 3)
  })
})
describe('Todo App', () => {
  it('completes an item', () => {
    // base url is stored in "cypress.json" file
    cy.visit('/')
    // there are several existing todos
    cy.get('.todo').should('have.length', 3)

    cy.log('**adding a todo**')
    cy.get('.input').type('write tests{enter}')
    cy.get('.todo').should('have.length', 4)

    cy.log('**completing a todo**')
    cy.contains('.todo', 'write tests').contains('button', 'Complete').click()
    cy.contains('.todo', 'write tests')
      .should('have.css', 'text-decoration', 'line-through solid rgb(74, 74, 74)')

    cy.log('**removing a todo**')
    // due to quarantine, we have to delete an item
    // without completing it
    cy.contains('.todo', 'Meet friend for lunch').contains('button', 'x').click()
    cy.contains('.todo', 'Meet friend for lunch').should('not.exist')
  })
})
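The relative cy.visit('/') works because the base URL lives in the Cypress configuration. A minimal cypress.json sketch, assuming the app from the earlier example is served on port 8080:

{
  "baseUrl": "http://localhost:8080"
}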
Do you trust your tests to find possible mistakes:
Cover your code in meaningful tests
Are we testing all features?
- Feature A: User can add todo items
- Feature B: User can complete todo items
- Feature C: User can delete todo items
(Diagram: the application source code drawn as rows of bars, colored by test coverage)
green: lines executed during the tests
red: lines NOT executed during the tests
Code coverage from the tests indirectly measures which of the implemented features are tested
it('adds todos', () => { ... })
it('completes todos', () => { ... })
it('deletes todos', () => { ... })
Unrealistic tests: only a subset of inputs is exercised
The covered code might still not implement the feature correctly
it('adds todos', () => {
  cy.visit('/')
  cy.get('.new-todo')
    .type('write code{enter}')
    .type('write tests{enter}')
    .type('deploy{enter}')
  cy.get('.todo').should('have.length', 3)
})
Nice job, @cypress/code-coverage plugin
Cypress end-to-end, component, and unit tests produce a combined code coverage report
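A minimal setup sketch for the plugin, following its README (the application code also has to be instrumented, for example with babel-plugin-istanbul):

$ npm i -D @cypress/code-coverage

// cypress/support/index.js
import '@cypress/code-coverage/support'

// cypress/plugins/index.js
module.exports = (on, config) => {
  require('@cypress/code-coverage/task')(on, config)
  // IMPORTANT: return the config object
  return config
}

The resulting report can then be checked in CI, for example with "npx nyc check-coverage --lines 80".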
Do you trust your tests to find possible mistakes:
Use image diffing tests
it('looks the same', () => {
  // visit the page
  // interact like a user
  // now we can do visual diffing
  cy.get('.react-calendar-heatmap').happoScreenshot({
    component: 'CalendarHeatmap',
  })
})
Visual review against baseline image
Do you trust your tests to find possible mistakes:
Use accessibility testing plugin
it('catches missing aria-* label', () => {
  // https://github.com/avanslaars/cypress-axe
  cy.visit('/')
  cy.injectAxe()
  cy.checkA11y('input', {
    runOnly: {
      type: 'tag',
      values: ['wcag2a'],
    },
  })
})
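For reference, the injectAxe and checkA11y commands come from the cypress-axe plugin; a minimal setup sketch per its README:

$ npm i -D cypress-axe axe-core

// cypress/support/index.js
import 'cypress-axe'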
Do you trust your tests to find possible mistakes:
End-to-end tests
I feel safe automatically updating dependencies if all commit checks pass
By Gleb Bahmutov
The Covid-19 pandemic led to a lot of tech companies converting to remote teams almost overnight, and for some this may even become the norm. So as we adjust to new ways of working, how do you ensure that your appsec procedures are designed to withstand any changes in your team dynamics? Video at https://resources.whitesourcesoftware.com/wistia-webinars/what-going-all-remote-taught-us-about-appsec-and-testing-shortfalls