Thierry Nilles
streamroot.io
@thierrynilles
Our experience with growth
Classic CDN
with Streamroot
demo.streamroot.io
We need to keep track of viewers
So that we can connect them together
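What the tracker boils down to can be sketched roughly like this (a minimal in-memory sketch; the `Tracker`, `join`, and `leave` names are assumptions, not Streamroot's actual code):

```javascript
// Remember which viewers watch each stream, and hand a newcomer
// a few existing peers to connect to.
function Tracker() {
  this.viewers = {}; // streamId -> array of viewer ids
}

Tracker.prototype.join = function (streamId, viewerId) {
  var peers = this.viewers[streamId] || (this.viewers[streamId] = []);
  var candidates = peers.slice(0, 5); // peers the newcomer can talk to
  peers.push(viewerId);
  return candidates;
};

Tracker.prototype.leave = function (streamId, viewerId) {
  var peers = this.viewers[streamId] || [];
  var i = peers.indexOf(viewerId);
  if (i !== -1) peers.splice(i, 1);
};
```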
The big picture
[Diagram: one Tracker (socket.io) backed by Redis, handling ~1000 viewers]
[Diagram: HAProxy load-balances viewers across several Trackers, all sharing Redis]
[Diagram: signaling flow: "This guy wants to talk" is relayed to the right viewer, each side answering "got it"]
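The relay in the diagram might look roughly like this (a hedged sketch; `SignalRouter` and the injected `pubsub` bus are hypothetical names standing in for the Redis pub/sub layer between trackers):

```javascript
// When viewer A and viewer B sit on different trackers, the
// signaling message is published on a shared channel so the
// tracker holding B can deliver it.
function SignalRouter(pubsub, localSockets) {
  var self = this;
  this.pubsub = pubsub;        // shared bus, e.g. Redis pub/sub
  this.sockets = localSockets; // viewerId -> local socket
  pubsub.subscribe('signal', function (msg) {
    var socket = self.sockets[msg.to];
    if (socket) socket.send(msg); // "this guy wants to talk"
  });
}

SignalRouter.prototype.route = function (msg) {
  var socket = this.sockets[msg.to];
  if (socket) socket.send(msg);            // same tracker: deliver directly
  else this.pubsub.publish('signal', msg); // else: fan out via the bus
};
```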
— Default autoscaling scales only on simple metrics (CPU, memory)
+ Our custom autoscaling is based on # of connections per sec + # of messages per sec
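Such a per-second metric can be sketched as a rolling-window counter (a minimal illustration; `RateCounter` is a hypothetical name, not the actual implementation):

```javascript
// Counts events (connections or messages) within a sliding window,
// so rate() returns the current events-per-window figure.
function RateCounter(windowMs) {
  this.windowMs = windowMs || 1000;
  this.events = [];
}

RateCounter.prototype.hit = function (now) {
  this.events.push(now !== undefined ? now : Date.now());
};

RateCounter.prototype.rate = function (now) {
  if (now === undefined) now = Date.now();
  var cutoff = now - this.windowMs;
  // Drop timestamps that fell out of the window
  while (this.events.length && this.events[0] < cutoff) this.events.shift();
  return this.events.length;
};
```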
[Diagram: a monitoring agent watching the Trackers]
Monitoring agent
— Periodically checks the trackers' health
— Runs on Node too
— Launches new instances
— ... or terminates them if needed
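The launch/terminate decision could be sketched like this (the `decideScaling` helper and its thresholds are made up for illustration, not Streamroot's code):

```javascript
// Given the current load of each tracker (0..1), decide whether the
// monitoring agent should launch a new instance, terminate one, or
// leave the pool alone.
function decideScaling(trackerLoads, maxLoad, minLoad) {
  var avg = trackerLoads.reduce(function (a, b) { return a + b; }, 0) /
            trackerLoads.length;
  if (avg > maxLoad) return 'launch';
  if (avg < minLoad && trackerLoads.length > 1) return 'terminate';
  return 'keep';
}
```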
$$$
Fine-grained optimizations
Saving money + time
var agent = require('strong-agent');
var fs = require('fs');

agent.metrics.startCpuProfiling();

// Run some code here

setTimeout(function () {
  // After 30 s, stop profiling and write the result to disk
  var data = agent.metrics.stopCpuProfiling();
  fs.writeFileSync('./CPU-' + Date.now() + '.cpuprofile', data);
}, 1000 * 30);
A JSON file (.cpuprofile) that you can load into Chrome DevTools' CPU profiler
Wrote a client simulator
1. Configured it with the following parameters:
Number of clients to emulate: 500
Connections per sec: 3
2. Ran the test with strong-agent enabled
3. Ran another test, increasing the # of connections per sec
4. Repeated until the tracker started to slow down
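The simulator's ramp-up loop can be sketched like this (assumed design; `Simulator` and `connectFn` are hypothetical names, with `connectFn` standing in for opening a real socket.io connection to the tracker):

```javascript
// Opens `connectionsPerSec` new fake clients per tick (one tick per
// second, e.g. driven by setInterval) until `totalClients` are up.
function Simulator(totalClients, connectionsPerSec, connectFn) {
  this.total = totalClients;
  this.perSec = connectionsPerSec;
  this.connectFn = connectFn;
  this.opened = 0;
}

// Returns false once all clients have been opened.
Simulator.prototype.tick = function () {
  for (var i = 0; i < this.perSec && this.opened < this.total; i++) {
    this.connectFn(this.opened++);
  }
  return this.opened < this.total;
};
```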
v=0
o=- 8762946660211567753 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio data
a=msid-semantic: WMS
m=audio 1 RTP/SAVPF 111 103 104 0 8 106 105 13 126
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:PMG42memfIKiWttR
a=ice-pwd:GhKvhH7GxwXCISLVpcFQL4/Z
a=ice-options:google-ice
a=fingerprint:sha-256 57:6D:FB:99:A0:20:74:50:AB:56:00:90:75:0B:07:53:4E:47:C7:A5:72:6A:7B:8B:2B:32:87:E9:6D:14:F4:06
a=setup:actpass
a=mid:audio
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=recvonly
a=rtcp-mux
a=rtpmap:111 opus/48000/2
...
*for each viewer, several times
Group the messages together & send less data
Chrome does ICE trickling — around 12 messages
These can be grouped into one bigger message
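Grouping the trickled candidates can be sketched as a small batching helper (an assumption about the approach, not Streamroot's code; `CandidateBatcher` is a hypothetical name):

```javascript
// Buffers trickled ICE candidates for a short window, then sends
// them as one message instead of ~12 small ones.
function CandidateBatcher(sendFn, delayMs) {
  this.sendFn = sendFn;
  this.delayMs = delayMs || 100;
  this.pending = [];
  this.timer = null;
}

CandidateBatcher.prototype.add = function (candidate) {
  this.pending.push(candidate);
  if (!this.timer) {
    var self = this;
    this.timer = setTimeout(function () { self.flush(); }, this.delayMs);
  }
};

CandidateBatcher.prototype.flush = function () {
  if (this.timer) { clearTimeout(this.timer); this.timer = null; }
  if (this.pending.length) {
    this.sendFn(this.pending); // one message carrying all candidates
    this.pending = [];
  }
};
```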
Without optimization: 31.87% Idle + Native, 84
With optimization: 63.55% Idle + Native, 168
Nice!
*On my computer
We optimized message length
Without optimization: 31.87% Idle + Native, 84 / 84
With optimization: 49.96% Idle + Native, 112 / 196
*On my computer
{
  meta: 'foo',
  data: 89,
  heavy: myBigObject
}
→ 3,308 ms

VS

{
  meta: 'foo',
  data: 89,
  heavy: [ArrayBuffer]
}
→ 241 ms
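The fix can be sketched as keeping the small metadata fields as-is while shipping the heavy part as a binary Buffer, which socket.io can transmit without string-encoding it (a hedged sketch; `toBinaryPayload` is a hypothetical helper, not the actual code):

```javascript
// Serialize only the heavy field to a binary Buffer; the metadata
// stays a plain object. The Buffer is sent as binary, skipping the
// costly string encode/decode of the big payload.
function toBinaryPayload(message) {
  var heavy = Buffer.from(JSON.stringify(message.heavy));
  return {
    meta: message.meta,
    data: message.data,
    heavy: heavy
  };
}
```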
CPU time spent on ucs2encode() + ucs2decode() (used by socket.io)
Without optimization: 31.87% Idle + Native, 84
With "optimization": 1.99% Idle + Native, 84
huh?
*On my computer