Practical Testing Tips

Kazuho Yamaguchi


  • GitHub: kyamaguchi
  • Freelance Software Developer
    • Ruby, Rails
  • Like / Interest
    • English
    • Sublime Text
    • Data analysis (R, Python)
    • TDD, Pair programming, Browser testing


  • TDD
  • Test first
  • Refactoring
  • Code coverage
  • Code reviews
  • CI


Best practice varies by application, team size etc.

Assumptions of this talk are

  • A Rails application
    • using RSpec and system tests (Capybara)
  • A team of more than 3 developers
    • including senior and junior members
  • The project has CI

Is TDD for everyone?

What is TDD?

TDD (Test Driven Development)

Red/Green/Refactor cycle

Run all tests and see if the new test fails

Do you run all tests on every change?

Run all tests

You or members in your team

are supposed to run ‘all’ tests on every change

won’t run tests if it takes more than a minute

are supposed to run ‘all’ tests on every commit

won’t run tests if it takes more than a few minutes

are supposed to run ‘all’ tests before pushing the branch

won’t run tests if it takes more than 10 minutes
Running ‘all’ tests isn’t practical

Solutions to run tests


Manual tests won’t find regressions.
You can waste a lot of time repeating them.

guard-rspec (Naming convention rule)

autotest, zeus, spork (early days)

It’s hard to keep the rule if running all tests takes
more than 5 minutes

Practical Tips to run adequate tests

Tips to run ‘adequate’ tests with git & rspec

  • Run a example/spec file on editor
  • Run a spec file based on naming convention
  • rspec --only-failures
  • Run spec files which have changes
  • Run spec files which include the keyword

rspec --only-failures

When you have failures in previous runs

Failed examples:

rspec ./spec/models/failed1_spec.rb:7
rspec ./spec/models/failed1_spec.rb:22
rspec ./spec/models/failed1_spec.rb:28
rspec ./spec/features/failed2_spec.rb:330

You can re-run failed examples only

$ rspec --only-failures

I have an alias

alias rspecf='rspec --only-failures'

Run spec files which have changes

Before commit

$ git status

  modified:   spec/models/changed1_spec.rb
  modified:   spec/features/changed2_spec.rb

The script selects modified spec files with git

$ cat ~/bin/rspecm

rspec $(git status -s | awk '{if ($1 == "M") print $2}' | grep '_spec.rb')

Run spec files which include the keyword

$ cat ~/bin/rspecs

grep -R --include="*_spec.rb" "$1" .
rspec --format documentation $(grep -R --include="*_spec.rb" -l "$1" .)
$ rspecs users-table
./spec/features/some_user1_spec.rb:      within('#users-table') { expect(page).to have_content( }
./spec/features/some_user2_spec.rb:      first('#users-table tbody tr').click

Keyword can be ‘resource name’, ‘dom’, ‘label’, ‘method name’ etc.
There are many ways to filter files using git/grep
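
Besides grep-based file filtering, rspec itself can filter by example description with its built-in --example option (the keyword here is hypothetical):

```shell
# Run only examples whose full description contains the string
rspec -e 'users table'

# Long form
rspec --example 'users table'
```

This matches against the describe/it strings instead of the file contents, so it complements the grep approach.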

Is TDD for everyone?

Every time I try to tell someone the value of TDD,
I find it hard to convey.

With the rhythm of the red/green/refactor cycle,
your brain works more efficiently.

You’ll be impressed with the final design (business logic) built with TDD,
and realize how fast it gets done.

Is TDD for everyone?

❌ NO

TDD is just one style of development.
Not every (skilled) programmer likes it.
It requires effort and enthusiasm to learn.

What is the alternative to testing?

I know some profitable services which don’t respect testing.

You don’t need tests if you have (or can do) the following:

Customer support 💁

Quick fix on weekends 🏖️📲, at midnight 😪

Apologize 🙇

Money 💴💴💴

for advertising (commercials) 📺 and compensation 🎟

What is for everyone?

Is there an easier practice that can be shared in the team
and that any junior developer can follow❔

CI ✅ (automatic)
You shouldn’t deploy when CI is failing

RuboCop 🤖👮
Never use RuboCop’s defaults as-is

I recommend
testing first for bug fixes.

Test first for bug fixes

If you can reproduce the problem manually,
you can write the test first.

The problem with test first is that
it cannot be confirmed in code review.

If the bug is urgent,
the test for the hotfix tends to be skipped. (Time pressure)

In that case,
Test after is OK.

Test after

Test after also has problems.

You said you would add the test (or refactor it) later,
but sometimes you never do.

How can you solve this problem as a team?

Test first / Test after ➡️ Toggle last

What is ‘Toggle’?

Examples of toggle

  • toggle lines with commenting out
  • toggle expect( ).to <-> expect( ).not_to
  • toggle text/value
    • change to wrong(unexpected) text and revert it
  • toggle conditions
    • if true # … / if false # …
  • revert changes by file (<-> reset them)
  • many more …

Anything that breaks the app (makes assertions fail ⇄ succeed)
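
A minimal plain-Ruby sketch of the idea (the method and strings here are hypothetical): flip one condition, watch the assertion go red, flip it back, and watch it go green.

```ruby
# Hypothetical example: toggling a condition flips the assertion outcome.
def banner(logged_in)
  logged_in ? 'Welcome back' : 'Please sign in'
end

# Green: the assertion holds with the real condition
puts banner(true) == 'Welcome back' ? 'green' : 'red'

# Toggle: force the condition off (an `if false`-style toggle); the same
# assertion must now fail, proving it actually exercises this code path
puts banner(true && false) == 'Welcome back' ? 'green' : 'red'
```

The same flip works on any assertion, text, or conditional: the point is to see both red and green at least once.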

Toggle Last

This is what I recommend most.

In each topic branch, you have to have a test
which fails without your change

and passes with your change.
That will be the minimal valuable test case.

This rule is easier to follow than TDD or test first,
regardless of the developer’s level.
Toggle last can be confirmed by reviewers.

Toggle/Red/Toggle/Green cycle
You can do this on every change, every commit, or every single assertion

Toggle as reviewer

Revert the change for the fix temporarily.

git checkout topic_branch

git checkout develop -- app/
# Or the revision the topic branch started

# Reverting with single commit
git revert -n commit_for_the_fix

Find the spec files which have changes

git diff --name-only develop | grep spec

Run them: they must have failures
(otherwise the new test is useless)

Tear down with ‘git reset + git checkout’ after the review

Bad assertion

With toggle last you never write this kind of useless test

click_on 'Update'
expect(page).not_to have_content('')

The example above passes even on a 500 error page.
You need something like

expect(page).to have_content('updated')
expect(page).to have_content('')

Having only negative assertions is a really bad practice.

This doesn’t happen if you confirm every assertion by toggling.
(Make sure red/green for every assertion)
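
To see why, here is a plain-Ruby sketch (the page text is hypothetical): a negative check against a 500 error body still "passes", because the expected text is absent for the wrong reason.

```ruby
# Roughly what the assertions reduce to, on a failed response:
error_page_body = '500 Internal Server Error'  # hypothetical error page
success_text    = 'User was updated'            # hypothetical flash message

# The negative assertion "passes" on the error page...
puts error_page_body.include?(success_text) ? 'negative assertion fails' : 'negative assertion passes'

# ...while a positive assertion on the success message correctly goes red:
puts error_page_body.include?(success_text) ? 'positive assertion passes' : 'positive assertion fails'
```

Toggling each assertion would have caught this: the negative one never goes red.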

Exploratory(Scratch) Refactoring

There are many techniques for getting to know a codebase.

  • put ‘raise’
  • Comment out lines
  • Change text/labels
  • Change the value of fixed number
  • Give empty value as an argument
  • Reverse boolean value

Anything that makes assertions fail ⇄ succeed

Refs: [Book] Working Effectively with Legacy Code

Flickering test failures

Teams often have flickering (random) test failures.

They are especially problematic on CI.
Developers waste time checking the failures locally,
then need to rebuild on CI and wait more than 10 minutes for the result.

They often happen with browser testing.
(Hard to maintain)

Solutions for random test failures

Move the spec to another level (or refactor the logic)
(controller, model, JS component)

Retry (rspec-retry, custom retry logic etc.)

Test manually based on scenarios on spreadsheet

Hack(monkeypatch) js, css only in browser testing

Remove (give up on) the spec
Stop using that animation

Categorize with tags (Skip in common CI builds)
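
For the retry option above, a minimal rspec-retry setup looks roughly like this (the retry count and the :unstable tag are assumptions; retries did not always work well for me, as noted later):

```ruby
# spec/spec_helper.rb — a sketch of rspec-retry configuration
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true              # log when an example is retried
  config.around(:each, :unstable) do |example|
    example.run_with_retry retry: 3        # retry tagged examples up to 3 times
  end
end
```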

Categorize with tags

The developer who loves testing most, or the release manager,
is responsible for taking care of random test failures.

Examples marked unstable won’t run in the normal flow.
(“$ rspec” and common CI builds)

They can be run manually on an event basis (before a release).
They can also run nightly on CI (cron) in a different job/workflow.

The responsible person takes care of the result.

Other tag patterns include “depends on external services”, “takes a long time”, “different browsers”, etc.

Config for tags (1)

RSpec config for tags

RSpec.configure do |config|
  config.filter_run_excluding unstable: true
end

Examples with the unstable tag will be skipped by default

  it '...', unstable: true do

  describe '...', unstable: true do

Run only unstable specs

$ rspec --tag unstable

Config for tags (2)

We’d like to run normal tests + unstable tests
instead of running the unstable tests only.

Control with env var

RSpec.configure do |config|
  unless ENV['RUN_ALL']
    config.filter_run_excluding unstable: true
  end
end

Run normal spec + unstable spec

$ RUN_ALL=1 rspec


With CI you can also run

  • RuboCop
  • lint (haml-lint, slim-lint etc.)
  • tests for rake tasks/runners

If you don’t mind the trade-off in execution time

  • Coverage (SimpleCov)
  • security (brakeman etc.)
  • bundle update
  • upgrading ruby

They can be run in different job/workflow.

Fail fast on CI

If running all tests takes more than 10 minutes,
fail fast is recommended.
(Assuming no random failures)

Builds on CI should stop immediately on failure.
Commands which don’t take much time should come earlier.

Browser testing is normally the most time-consuming,
so it should run last.

  - |
    bundle exec rspec --fail-fast -f d --exclude-pattern "spec/features/**/*_spec.rb" spec && \
    bundle exec rspec --fail-fast -f d spec/features/

The commands could be different depending on the
CI service or the shell mode on CI.

Retry on CI

If running all tests doesn’t take long, OR the project isn’t active,
covering unstable tests with retries on CI is still an option.

I failed many times with rspec-retry.
(Problems with resetting sessions, timing of DB interactions, etc.)

Here is the most stable way of retry

  - |
    bundle exec rspec spec -f d || \
    bundle exec rspec spec -f d --only-failures || \
    bundle exec rspec spec -f d --only-failures

How the operators (||, &&) behave could differ by
CI service or the shell mode on CI.

Line breaks may work in place of ‘||’ in some shell modes.
Check the shell options (-e, -x)

The most important thing again

Toggle last is for everyone

Other Topics

  • Exploratory(Scratch) Refactoring
    • [Book] Working Effectively with Legacy Code
  • YAGNI spoils the prioritization of Agile practice
  • New code can be tech debt, legacy code from the start
  • After yak shaving, you may accidentally commit extra (unnecessary) changes along with the actual fix
  • Extract refactorings as separate PRs from your big branch
  • My private CI (an idling iMac (16GB memory) at home)
    • Jenkins + git repo on Dropbox
  • Broken Windows Theory
  • RuboCop workflow
  • Forcing full coverage makes things worse
  • Removing outdated tests (especially ones someone else wrote) is hard
  • My tool to help refactoring views (HTML)