A collection of the toughest bugs I encountered
Very small chunks + TLS = inflated documents
Guess what? It took 3 days to figure out.
Sigma: "The /result request times out in pre-production. Can you look into it?"
Me: "Sure, just a sec."
New feature released on the pre-prod server.
Disabling timeout:
Transferred = 12x body size. WTF?!?!
(Ignore the weird time/latency value, I GIMPed the image because the real screenshot was lost)
We don't know what sits in the middle of that network path, but we know it sometimes causes problems.
curl from a machine on the same network, tcpdump to measure the bytes on the wire. Still 12x.
No timeouts, but still 12x! The link is fast enough to transfer that data before the timeout.
Only 2x! No TLS, no gzip, no AJP here.
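A side note on how this kind of traffic can be inspected on the plain-HTTP path: curl hides the chunk framing, so seeing the raw bytes takes tcpdump/Wireshark or a raw socket. Below is a minimal raw-socket sketch of the idea; host, port and class name are placeholders, not the real environment.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Talk HTTP/1.1 over a bare socket and dump the response untouched,
// so the chunk-size lines stay visible and the bytes on the wire can be counted.
public class RawChunkDump {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("preprod.example.com", 8080)) {
            OutputStream out = socket.getOutputStream();
            out.write(("GET /result HTTP/1.1\r\n"
                    + "Host: preprod.example.com\r\n"
                    + "Accept: application/json\r\n"
                    + "Connection: close\r\n"
                    + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n); // raw response, chunk framing included
                total += n;
            }
            System.out.flush();
            System.err.println("bytes on the wire: " + total);
        }
    }
}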
Chunking at fault?
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
(.. snip head...)
c
]],"687":[[0
2
,2
4
,234
4
],[1
2
,3
4
,234
(.. snip tail...)
Those bare hex lines are the chunk sizes: most chunks carry only a few bytes of JSON. An application-level error?
Something is flushing the response stream every few bytes, and Tomcat writes a chunk with every flush.
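To make that concrete, here is a small, hypothetical servlet (names invented, javax.servlet-era API to match the Apache-Coyote/1.1 header above): with no Content-Length set, Tomcat answers with Transfer-Encoding: chunked, and every flush() ships whatever is buffered as its own chunk, however tiny.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Writes a JSON-ish body in tiny pieces, flushing after each one.
// Each flush() commits whatever is buffered, so the chunked response
// ends up as a long series of 1-2 byte chunks, as in the capture above.
public class TinyChunksServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("application/json;charset=UTF-8"); // no Content-Length -> chunked
        PrintWriter out = resp.getWriter();
        out.print("[");
        out.flush();               // chunk: "["
        for (int i = 0; i < 1000; i++) {
            out.print(i % 10);
            out.flush();           // chunk: one digit
            out.print(",");
            out.flush();           // chunk: ","
        }
        out.print("0]");
        out.flush();               // last data chunk, then Tomcat sends the 0-length terminator
    }
}

In the real service the flushes did not come from hand-written flush() calls but from the JSON serialization path, which is what the fix below points at.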
Each tiny chunk then pays the framing overhead of every layer: chunked encoding, AJP, gzip, TLS.
Some chunks do contain strings that httpd's gzip can compress, so the ratio is sometimes better... which is how it ends up at ~12x.
Each chunk started as 2 bytes of payload; chunk framing alone turns that into 7 bytes on the wire ("2" + CRLF + data + CRLF), and with AJP, gzip and TLS framing on top it ended up at 39 bytes (worst case).
The fix: disabled FLUSH_PASSED_TO_STREAM, the Jackson JsonGenerator feature that passes every flush() straight down to the underlying stream.
With full-sized chunks, gzip can finally compress!
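For reference, a minimal sketch of that switch, assuming the endpoint serializes with a plain Jackson ObjectMapper; the class name and wiring are hypothetical (in a Spring MVC app the configured mapper would still have to be handed to the JSON message converter).

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonMapperConfig {
    public static ObjectMapper buildMapper() {
        ObjectMapper mapper = new ObjectMapper();
        // FLUSH_PASSED_TO_STREAM is on by default: every JsonGenerator.flush()
        // is passed down to the underlying OutputStream -- here Tomcat's response
        // stream, which turns each flush into its own tiny chunk.
        // Disabling it lets the container buffer and emit full-sized chunks.
        mapper.getFactory().disable(JsonGenerator.Feature.FLUSH_PASSED_TO_STREAM);
        return mapper;
    }
}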