Paginated query -> slow
Streamed query -> memory overload
Streamed query with an explicit fetch size -> 15 times faster
Additionally, the stream implementer can take advantage of backpressure.
Understanding backpressure, explained with pipes
We have only scratched the surface. To dive deeper, you can read the stream handbook (quite old, though).
process.stdin
  .pipe(process.stdout);
Pipe stdin to stdout
const stream = require('stream');

process.stdin
  .pipe(new stream.PassThrough())
  .pipe(process.stdout);
Same with an identity transform
const fs = require('fs');

fs.createReadStream('./package.json')
  .pipe(process.stdout);
package.json content written to stdout
// write what you are typing to a file
const fs = require('fs');

process.stdin
  .pipe(fs.createWriteStream('./test.txt'));
stdin written to a file
// write the line counts of all js files to a file
const child_process = require('child_process');
const fs = require('fs');

child_process
  .exec("wc -l `find . -name '*.js'`")
  .stdout.pipe(fs.createWriteStream('./line-count.txt'));
Shell command output written to a file
// client.js
const net = require('net');

const client = new net.Socket();
client.on('connect', () => {
  console.log('client has connected');
});
process.stdin.pipe(client);
client.on('end', () => {
  console.log('client has disconnected');
});
client.connect(3000, 'localhost');
// server.js
const net = require('net');

const server = net.createServer(socket => {
  console.log('client has connected');
  socket.on('data', chunk => {
    console.log('received chunk', chunk);
  });
  socket.on('end', () => {
    console.log('client has disconnected');
  });
});
server.listen(3000, 'localhost');

Whole libraries
Collections
Core streams mirror