Transfer learning with InceptionV3
A combination of a custom architecture, a bit more data, data augmentation, and 1D temporal convolutions
With 3x the data, a lot of custom image pre-processing, and help from a professor and a PhD student at a well-known US university
One day, we tried Google AutoML Vision (it was still in closed beta). The CEO was helpful and did the data augmentation beforehand.
We ran a full AutoML training cycle.
And got 98% accuracy.
The CEO was ready to inform the stakeholders that we cracked it.
But the best result before that had been 69.2%.
When we probed the model with occluded images, it turned out that the network was predicting garbage: occluding the object barely changed its predictions.
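Occlusion probing can be sketched roughly like this: slide a gray patch over the image and record how much the predicted-class score drops at each position. If the score barely drops anywhere, the model is not actually looking at the object. This is a minimal NumPy sketch; `predict` here is a hypothetical stand-in for a real model's scoring function, not the actual model from the story.

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=8, stride=8):
    """Slide a gray patch over the image and record how much the
    model's score drops at each position. Near-zero drops everywhere
    suggest the model is ignoring the image content."""
    h, w = image.shape[:2]
    base = predict(image)  # score for the unoccluded image
    heatmap = np.zeros(((h - patch) // stride + 1,
                        (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # gray square
            heatmap[i, j] = base - predict(occluded)  # score drop
    return heatmap

# Toy "model" that only reads the top-left corner of the image:
predict = lambda img: float(img[:8, :8].mean())
img = np.ones((32, 32))
hm = occlusion_sensitivity(img, predict)
# The drop is concentrated exactly where the toy model looks (top-left).
```

A flat heatmap on a real classifier is the red flag described above: high accuracy with no sensitivity to the object usually means the model has found a shortcut.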
Indeed something was wrong.
Because the CEO had helpfully done the data augmentation beforehand but hadn't specified the train/test/validation splits, augmented copies of the same originals ended up on both sides of the split: a data leak.
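The fix is to order the pipeline correctly: split the original images first, then augment only the training split, so no variant of a test image can leak into training. A minimal sketch with hypothetical names (`split_then_augment`, a toy string-based `augment`), not the actual pipeline from the story:

```python
import random

def split_then_augment(samples, augment, test_frac=0.2, seed=0):
    """Split the ORIGINALS first, then augment only the train split.
    Augmenting before splitting puts near-duplicates of the same
    original on both sides of the split -- the leak described above."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, train = shuffled[:n_test], shuffled[n_test:]
    train_aug = [aug for s in train for aug in augment(s)]
    return train_aug, test

# Toy augmentation: each "image" yields itself plus two variants.
augment = lambda s: [s, s + "_flip", s + "_rot"]
samples = [f"img{i:02d}" for i in range(10)]
train, test = split_then_augment(samples, augment)
# No augmented variant of any test image appears in train:
assert {a.split("_")[0] for a in train}.isdisjoint(test)
```

With the augmented-first ordering, each of those near-identical `_flip`/`_rot` copies could land in the test set while its sibling sat in training, which is how a 69% problem suddenly scores 98%.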