Kirk Haines
wyhaines@gmail.com
kirk-haines@cookpad.com
Paris.rb 2018
It's Rubies All The Way Down
The evolution of the Ruby (web) stack and an exploration of just how much of the complex modern stack can be done (effectively) with Ruby software, right now.
Kirk Haines
• Rubyist since 2001
• First professional Ruby web app in 2002
• Dozens of sites and apps, and several web servers since
kirk-haines@cookpad.com
wyhaines@gmail.com
@wyhaines
What is a "Stack"?
The "Stack" is the software components that live around our application, enabling it to be accessed by end users, and enabling it's own function.
- Front end proxy/load balancing
- Asset caching layer
- Web server for static assets
- App container + app itself
- Database
- Other data stores (key/value stores)
- Deployment Architecture
- Configuration Management
- Log management
The Ruby Bits
Traditionally, only some of this is done with Ruby
- Front end proxy/load balancing
- Asset caching layer
- Web server for static assets
- App container + app itself
- Database
- Other data stores (key/value stores)
- Deployment Architecture (w/ Capistrano)
- Log management (w/ Fluentd)
- Configuration Management (Chef, Itamae)
I've done other parts in Ruby, for production apps, for a long time. So let's look at how we got to the modern Ruby stack, and then see how far we can push doing those other parts with Ruby.
Ruby For The Long Haul
https://www.ruby-lang.org/en/downloads/releases/
I started here....
Ruby 1.6.6
Gig to Write a Web App
Job Search Application for Interns
Ruby is Joy. Ruby is Love.
Let's do it with Ruby!
...
Stack? Frameworks?
Web Development With Ruby Was Different Then
There were no real "frameworks".
There were a few tools.
Stack options were limited.
Amrita
CGI
mod_ruby
Apache based stack
Just Templating & Embedding & CGI
Amrita: XHTML/HTML Templating
CGI: Webserver <-> Executable protocol
mod_ruby: Embed Ruby inside Apache

Before Rails. Before ERB:

Author: seki <seki@b2dd03c8-39d4-4d8f-98ff-823fe69b080e>
Date: Sun Nov 17 16:11:40 2002 +0000
* add ERB
A "Ruby" Stack in 2002?
It was a hodgepodge:
Apache + CGI -> Ruby script
Apache + mod_ruby -> your app
IOWA?
Avi Bryant proof-of-concept framework
- Written in 2001
- Proof of concept web development framework
- Internet Objects for Web Applications
- Component/Object based
- Separation of concerns
- Magic server side state management
- Fast, easy, delightful
- IOWA apps ran as separate, persistent processes, and handled requests via a socket...
- Communication channels supported:
- iowa.cgi -- any web server that supported CGI
- mod_iowa -- Apache only
- IOWAServlet -- WEBrick servlet to attach IOWA app to a web server
- Impractical for real world apps in the state that I found it in
- Avi took the concepts and implemented them in Smalltalk as the Seaside Framework
IOWA!
Architecture was much nicer than other Ruby options
Premonitions
Listening on sockets
Attaching to a web server directly
Took it over in 2002, hacked on it, and released a commercial application running on Ruby that same year.
My Ruby "stack" was evolving
Evolving Ruby Stack
Apache w/ mod_iowa -> IOWA App (on socket), IOWA App (on socket)
Apache w/ proxypass -> IOWA App + WEBrick, IOWA App + WEBrick
That looks kind of modernish...
Rails!
Rails stimulated the ecosystem
Early Rails stacks were still Apache-centric (sometimes lighttpd)
Rails apps did run as discrete processes
FCGI
MySQL
FCGI Bitterness
Deployment, and server management headaches
FCGI and friends (SCGI, for example) were dominant for a while, but there was a lot of unhappiness with them
Surely there must be a better way?
Mongrel
The New Hotness
https://groups.google.com/forum/#!msg/comp.lang.ruby/-cI5PtqDc1o/FXRdN9UKPE0J
Announced Jan 19, 2006
Web/Application server written in Ruby
Used a C/Ragel derived HTTP parser
Promised Speed, and delivered
Quickly became the new paradigm
Mongrel
The Ruby/Rails Stack Evolved
Rails + Mongrel
Apache w/ proxypass -> Rails + Mongrel
NGINX + proxy -> Rails + Mongrel
Briefly Back in IOWAland
IOWA supported multiple deployment schemes
- CGI
- mod_iowa
- Mongrel
- WEBrick
- FCGI
- HTTPMachine (EventMachine based HTTP server)
- Built in HTTP server leveraging Mongrel's parser + EventMachine
IOWA implemented a common request specification
Not Rack; predated Rack, but similar goals
Some IOWA code ended up in Rack
Rack
March 3rd 2007: First Public Release 0.1
Agnostic interface between Ruby apps and the outside world.
It didn't change the Ruby deployment landscape immediately, but it was a key enabler
Mongrel supported Rack
Other app servers (like Thin) started being developed
Ruby devs were getting more options for their stack
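To make that agnostic interface concrete, here is the smallest possible Rack application; a minimal sketch, saved as config.ru and runnable with rackup (or, later, puma):

# config.ru -- the whole Rack contract: a call-able that takes the request
# environment hash and returns [status, headers, body].
run lambda { |env|
  [200, { 'Content-Type' => 'text/plain' }, ["Hello from Rack\n"]]
}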
More Ruby In My Stack!
Or
Why Server Side State Isn't All Puppies and Rainbows
IOWA has server side state management
REALLY nice to eliminate programmer work
However, state was isolated to a single process
All requests for a given session need to go to the same backend process
This requires sticky sessions
There was no good solution
So, the answer? Write it myself! A sticky-session-aware, load-balancing reverse proxy, in Ruby!
More Ruby In My Stack!
Basic Webserver & Caching Reverse Proxy w/ IOWA Session Support, in Ruby?
You can't do that with Ruby
Ruby is too slow
You need C/C++/Erlang/other-fast-thing
More Ruby In My Stack!
Swiftiply!
More Ruby In My Stack!
Swiftiply!
Won't It Be Too Slow?
10k 1020 byte files, concurrency of 25; with KeepAlive
Requests per second: 26245.09 [#/sec] (mean)
10k 78000 byte files, concurrency of 25; with KeepAlive
Requests per second: 8273.98 [#/sec] (mean)
10k 78000 byte files with etag, concurrency of 25; with KeepAlive
Requests per second: 25681.19 [#/sec] (mean)
Disclosure: This was timed on a $10/month Digital Ocean instance, running
Ruby 2.5.1. The original benchmarks using Ruby 1.8.7 on cheap dedicated 2008 hardware topped out at about 15000/second.
More Ruby In My Stack!
And It Isn't Too Slow
A pure Ruby stack can do everything I need
Rack got support for Swiftiply around version 0.9
One simple, consistent stack, whether IOWA or Rails
Epiphany!!!
Pure Ruby Stack
Swiftiply -> IOWA App, IOWA App, IOWA App, Rails App, Rails App
Analogger
Circa 2009
Analogger?
Swiftiply -> IOWA App, IOWA App, IOWA App, Rails App, Rails App
Analogger
Asynchronous Logger
- Many sites; many processes; logs everywhere!
- Not fun when trying to find a problem or analyze traffic
- No other good solution in 2007
- Write it in Ruby?
Asynchronous Logger
- Asynchronous; performance trumps the risk of losing a few messages on a catastrophic process/server failure
- API compatible with Ruby Logger
- Can it be fast if it's Ruby?
Asynchronous Logger
Ruby Is Fast Enough
Analogger Speedtest -- larger messages
Testing 100000 messages of 100 bytes each.
Message rate: 130665/second (0.765315254)
Analogger Speedtest -- Analogger down: continue logging locally, monitor for Analogger's return, then drain the queue of local logs
Testing 100000 messages of 100 bytes each.
Message rate: 61992/second (1.613111685)
Ruby Logger Speedtest (local file logging only) -- larger messages
Testing 100000 messages of 100 bytes each.
Message rate: 72811/second (1.37341377)
Asynchronous Logger
Ruby Is Fast Enough
Performance tested on $10/month Digital Ocean instance
Inexpensive dedicated 2007 era hardware, with Ruby 1.8.6/7,
delivered about 60% of those numbers. i.e. even a decade ago it was fast enough.
Still Running Today...
Swiftiply -> IOWA App, IOWA App, IOWA App, Rails App, Rails App
Analogger
Dozens of sites and apps, large and small, running Ruby-dominated production stacks for the last decade.
Room For More Ruby?
- Front end proxy/load balancing
- Asset caching layer
- Web server for static assets
- App container + app itself
- Database
- Other data stores (key/value stores)
- Deployment
- Configuration Management
- Log management
You CAN do most of this with Ruby in 2018
Front End Proxy/LB
em-proxy
- EM-Proxy is a DSL for writing proxies
- It's been around for a long time
- It's fast
With comments, and an example/test, it's less than 200 lines to write a multi-strategy load balancing proxy with Ruby.
Front End Proxy/LB
em-proxy
# Wrapping the proxy server -- an excerpt from an em-proxy based balancer.
# Backend (the upstream pool and selection strategy) and Callbacks are
# defined elsewhere in the example.
require 'em-proxy'

module Server
  def run(host = '0.0.0.0', port = 9999)
    puts ANSI::Code.bold { "Launching proxy at #{host}:#{port}...\n" }
    Proxy.start(:host => host, :port => port, :debug => false) do |conn|
      Backend.select do |backend|
        conn.server backend, :host => backend.host, :port => backend.port
        conn.on_connect &Callbacks.on_connect
        conn.on_data &Callbacks.on_data
        conn.on_response &Callbacks.on_response
        conn.on_finish &Callbacks.on_finish
      end
    end
  end
  module_function :run
end
Front End Proxy/LB
WEBrick
WEBrick is powerful, but often ignored
require 'webrick'
require 'webrick/httpproxy'
It's capable, and it's been a part of Ruby since 2002.
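Those two requires are most of what a minimal forwarding proxy needs. A sketch, assuming port 8080 is free:

require 'webrick'
require 'webrick/httpproxy'

# A plain forwarding HTTP proxy; WEBrick::HTTPProxyServer does all the work.
proxy = WEBrick::HTTPProxyServer.new(Port: 8080)
trap('INT') { proxy.shutdown }
proxy.start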
Asset Caching
a special class of proxy
- Really a special purpose proxy
- Store HTTP responses, and deliver stored response instead of proxying, for matching requests
- Just a proxy, with extra intelligence to deal with cache control and expiration of cached items
Asset Caching
Puma/Rack?
- Puma is a capable web server
- Rack lets us write most anything and have it work with Puma
- Reasonable static asset caching on the quick and dirty with Puma and Rack?
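A minimal sketch of that quick-and-dirty idea: a Rack app, run under Puma (puma config.ru), that proxies to an assumed upstream on port 3000 and memoizes responses in a process-local hash. A real cache would honor Cache-Control and expire entries; this only shows the shape.

# config.ru -- naive caching reverse proxy; the upstream address is an assumption.
require 'net/http'

UPSTREAM = URI('http://127.0.0.1:3000')
CACHE    = {}

run lambda { |env|
  key = "#{env['PATH_INFO']}?#{env['QUERY_STRING']}"
  CACHE[key] ||= begin
    response = Net::HTTP.get_response(UPSTREAM.host, key, UPSTREAM.port)
    [response.code.to_i,
     { 'Content-Type' => response['Content-Type'].to_s },
     [response.body]]
  end
}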
Asset Caching
Shellac
- Puma + Rack makes it easy
- More lines dedicated to the CLI than to the proxy itself
- Simple to customize and expand it
- Puma is fast
- Rack isn't quite so fast
- So, no awe-inspiring speed, but maybe fast enough?
Asset Caching w/ Shellac
$ gem install shellacrb
$ shellac -s hash -b 0.0.0.0:8080 \
-r 'parisrb2018.demos4.us::\?(.*)$::https://github.com/#{$1}' \
-t 2:20 -w 1
Requests per second: 1861.69 [#/sec] (mean)
Time per request: 53.715 [ms] (mean)
Time per request: 0.537 [ms] (mean, across all concurrent requests)
Transfer rate: 175182.14 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.5 0 9
Processing: 2 53 3.4 53 65
Waiting: 1 53 3.8 53 65
Total: 10 53 3.0 53 65
It's probably fast enough! (Ruby 2.5.1; $5 Digital Ocean instance)
$ ab -n 10000 -c 100 http://parisrb2018.demos4.us:8080/?wyhaines
Static Asset Caching
Shellac
* Disclaimer
(i.e. the fine print)
Shellac was knocked together as a proof of concept demo for RubyKaigi and for this talk, and ABSOLUTELY needs more work and more features, and more than a few bug fixes before I'd ever use it for any real task.
Webserver For Static Assets
Ruby has a bunch of options
#! /bin/sh
read get docid junk
cat `echo "$docid" | \
ruby -p -e '$_=$_.gsub(/^\//,"").gsub(/[\r\n]/,"").chomp'`
while :; do netcat -l -p 5000 -e ./super_simple.sh; done
Just kidding! Don't do this. Please! (It is, more or less, HTTP 0.9 compliant, though...)
Webserver For Static Assets
WEBrick
$ ruby -run -e httpd -- -p 8080 .
This actually runs a WEBrick server, and it is reasonably quick.
Requests per second: 2732.33 [#/sec] (mean)
Time per request: 3.660 [ms] (mean)
Time per request: 0.366 [ms] (mean, across all concurrent requests)
Transfer rate: 717.77 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 1 4 0.5 3 17
Waiting: 0 2 0.6 2 16
Total: 1 4 0.6 4 18
Webserver For Static Assets
Puma + Rack
# config.ru -- serve files from the current directory via Rack::Static; the
# fallback app returns index.html with a one-day cache header.
use Rack::Static,
    :urls => ["/"],
    :root => "."

run lambda { |env|
  [
    200,
    {
      'Content-Type'  => 'text/html',
      'Cache-Control' => 'public, max-age=86400'
    },
    (File.open('index.html', File::RDONLY) rescue nil)
  ]
}
Requests per second: 3075.24 [#/sec] (mean)
Time per request: 3.252 [ms] (mean)
Time per request: 0.325 [ms] (mean, across all concurrent requests)
Transfer rate: 12237.88 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 1 3 0.4 3 23
Waiting: 0 3 0.3 3 12
Total: 1 3 0.4 3 23
puma -b tcp://127.0.0.1:5000 -w 1 -t 2:16 ../config.ru
Webserver For Static Assets
Puma + Rack vs NGINX
Requests per second: 3075.24 [#/sec] (mean)
Time per request: 3.252 [ms] (mean)
Time per request: 0.325 [ms] (mean, across all concurrent requests)
Transfer rate: 12237.88 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 2
Processing: 1 3 0.4 3 23
Waiting: 0 3 0.3 3 12
Total: 1 3 0.4 3 23
Puma
Requests per second: 4545.14 [#/sec] (mean)
Time per request: 2.200 [ms] (mean)
Time per request: 0.220 [ms] (mean, across all concurrent requests)
Transfer rate: 18730.94 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 4
Processing: 0 2 0.4 2 7
Waiting: 0 2 0.4 2 7
Total: 1 2 0.4 2 7
NGINX
Database
This section will be short....
To my knowledge, nobody has used Ruby to implement a database (yet?)
I have a crazy idea (that I have actually started hacking on)
Ruby + gossip-type protocol + SQLite == Distributed "Ruby" SQL DB!
Key / Value Store
ROMA
ROMA is a distributed key value store, written in Ruby
- Fault tolerant
- Pluggable back end storage modules
- Easy to understand code -- it's not a lot of code
- Memcached API compatible
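Because ROMA speaks the memcached protocol, any memcached client can talk to it. A sketch using the dalli gem and one of the node addresses from the benchmark below (both are assumptions about your setup):

require 'dalli'

roma = Dalli::Client.new('10.136.73.13:11211')
roma.set('greeting', 'bonjour Paris.rb')
puts roma.get('greeting')   # => "bonjour Paris.rb"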
Key / Value Store
ROMA
ROMA-1, ROMA-2, ROMA-3 (three nodes)
$ simple_bench -t 8 -r -n 1000 10.136.73.13:11211
qps=556 max=0.151159 min=0.000541 ave=0.001797
qps=637 max=0.196910 min=0.000487 ave=0.001569
qps=605 max=0.119492 min=0.000540 ave=0.001651
qps=446 max=0.527537 min=0.000553 ave=0.002238
qps=535 max=1.563985 min=0.000483 ave=0.001866
qps=531 max=0.879976 min=0.000606 ave=0.001883
qps=599 max=0.231225 min=0.000534 ave=0.001668
Each node is a $5 Digital Ocean instance.
Writing persistent data (survives restart/reboot) across at least 2 nodes.
Key / Value Store
ROMA
Rakuten claims performance > 10000 queries per second on in-memory data stores, using AWS instances for nodes (2015 slides)
Distributed nature and redundancy supports rolling restarts
Key / Value Store
ROMA
Ships with support for multiple storage engines:
- In Memory Hash
- DBM
- Tokyo Cabinet
- SQLite
- Groonga
Extending it is as simple as writing a Ruby module to provide the implementation
Deployment
Of Course, there is Capistrano....
Capistrano is a framework for building automated deployment scripts.
- Ubiquitous
- Battle Tested
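For flavor, a minimal Capistrano 3 deploy file sketch; the application name, repository URL, paths, and server addresses are placeholders:

# config/deploy.rb (per-stage servers conventionally live in config/deploy/<stage>.rb)
set :application, 'demo-website'
set :repo_url,    'git@github.com:wyhaines/demo-website.git'
set :deploy_to,   '/var/www/demo-website'

server 'app-1.example.com', user: 'deploy', roles: %w[app web]
server 'app-2.example.com', user: 'deploy', roles: %w[app web]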
Deployment
Easy Git integrated deploys?
kirkhaines@MacBook-Pro ~/.ghq/github.com/wyhaines/demo-website (staging) $ git push
Git Repo -> Production Server, Production Server, Staging Server
Deployment
Easy Git integrated deploys?
kirkhaines@MacBook-Pro ~/.ghq/github.com/wyhaines/demo-website (production) $ git push
Git Repo -> Production Server, Production Server, Staging Server
Deployment
Easy Git integrated deploys?
- On commit, deploy to N servers
- Commits to different branches may be deployed to different servers
- Easy to maintain implementation
Deployment
Easy Git integrated deploys?
Requires explicit specification of push target:
git remote add production demo@server_domain_or_IP:proj
Depends on your work environment knowing where to push.
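For reference, the conventional way to make that push target deploy itself is a post-receive hook in the bare repository on the server; a sketch in Ruby (paths and branch name are placeholders, and this is not the Serf-based approach described next):

#!/usr/bin/env ruby
# hooks/post-receive inside the bare repo (e.g. ~/proj.git) on the server.
WORK_TREE = '/var/www/demo-website'
GIT_DIR   = File.expand_path('~/proj.git')

# Check the pushed branch out into the live working tree.
system({ 'GIT_WORK_TREE' => WORK_TREE, 'GIT_DIR' => GIT_DIR },
       'git', 'checkout', '-f', 'production')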
Deployment
Easy Git integrated deploys?
Let's build it in Ruby!
Deployment
Serf is a nice tool
*but it's not Ruby
By Hashicorp
- Lightweight
- Distributed, gossip protocol daemon
- Transmit events and data, trigger commands, get results
- Nice tool for server orchestration
- Not Ruby, but it is easy to interface with it using Ruby.
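One easy way to interface with it from Ruby: shell out to the serf CLI and parse its JSON output. A sketch, assuming serf is on the PATH and the RPC auth token matches the agent configuration shown later:

require 'json'

SERF_AUTH = 'blahblahblah'

# Ask the local agent for cluster membership and print a one-line summary per node.
members = JSON.parse(`serf members -format=json -rpc-auth="#{SERF_AUTH}"`)['members']
members.each do |member|
  puts "#{member['name']} #{member['addr']} #{member['status']} role=#{member['tags']['role']}"
end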
Deployment
Easy Membership Queries With Serf
$ serf members -rpc-auth="blahblahblah"
db-1-nyc1 10.136.13.216:7946 alive role=database
app-demo-1-nyc1 10.81.218.50:7946 alive role=app
app-demo-2-nyc1 10.236.115.112:7946 alive role=app
http-1-nyc1 10.205.169.211:7946 alive role=nginx
Deployment
Easy Command/Query Invocation With Serf
$ serf query -rpc-auth="blahblahblah" load-average
Query 'load-average' dispatched
Ack from 'db-1-nyc1'
Ack from 'app-1-nyc1'
Ack from 'http-1-nyc1'
Ack from 'app-2-nyc1'
Response from 'app-2-nyc1': 16:54:16,up,105,days,,20:48,,0,users,,load,average:,0.00,,0.00,,0.00
Response from 'app-1-nyc1': 16:54:16,up,108,days,,1:37,,1,user,,load,average:,0.00,,0.00,,0.00
Response from 'http-1-nyc1': 16:54:16,up,111,days,,1:39,,0,users,,load,average:,0.00,,0.00,,0.00
Response from 'db-1-nyc1': 16:54:16,up,105,days,,23:39,,1,user,,load,average:,0.00,,0.00,,0.00
Total Acks: 4
Total Responses: 4
Deployment
Easy Command/Query Invocation With Serf
$ serf query -rpc-auth="blahblahblah" list-handlers
Query 'list-handlers' dispatched
Ack from 'deploy-1-nyc1'
Ack from 'app-1-nyc1'
Ack from 'app-2-nyc1'
Ack from 'http-1-nyc1'
Response from 'app-1-nyc1': query: list-handlers
query: describe-handler
query: load-average
query: df
query: mpstat
query: ping
event: git-index-deploy
Response from 'db-1-nyc1': query: list-handlers
query: describe-handler
query: load-average
query: df
query: mpstat
query: ping
Response from 'app-2-nyc1': query: list-handlers
query: describe-handler
query: load-average
query: df
query: mpstat
query: ping
event: git-index-deploy
Response from 'http-1-nyc1': query: list-handlers
query: describe-handler
query: load-average
query: df
query: mpstat
query: ping
Total Acks: 4
Total Responses: 4
Deployment
Easy Command/Query Invocation With Serf
$ serf query -rpc-auth="blahblahblah" describe-handler git-index-deploy
Query 'describe-handler' dispatched
Ack from 'deploy-1-nyc1'
Ack from 'app-1-nyc1'
Ack from 'http-1-nyc1'
Ack from 'app-2-nyc1'
Response from 'app-1-nyc1': Expects a hash code in the payload which will
be queried using git-index. A 'git fetch --all && git pull' will be
executed on all matching repositories.
Response from 'app-2-nyc1': Expects a hash code in the payload which will
be queried using git-index. A 'git fetch --all && git pull' will be
executed on all matching repositories.
Total Acks: 4
Total Responses: 2
Deployment
Serf
Configuration
Is Simple
{
  "protocol": 5,
  "bind": "0.0.0.0",
  "advertise": "10.136.13.216",
  "rpc_addr": "0.0.0.0:7373",
  "rpc_auth": "blahblahblah",
  "enable_syslog": true,
  "log_level": "info",
  "replay_on_join": true,
  "snapshot_path": "/etc/serf/snapshot",
  "tags": {
    "role": "app"
  },
  "retry_join": [
    "db-1-nyc1",
    "app-2-nyc1",
    "http-1-nyc1"
  ],
  "event_handlers": [
    "/usr/share/rvm/wrappers/ruby-2.5.1/serf-handler"
  ]
}
This is where we hook Ruby into it
Deployment
Serf-Handler is a Framework and DSL for defining handlers for Serf queries and events
require 'serf/handler/events/load_average'
require 'serf/handler/events/df'
require 'serf/handler/events/mpstat'
require 'serf/handler/events/ping'
require 'serf/handler/events/git-index-deploy'
Wrote it on the airplane coming home from the Cookpad Bristol Office
Simple configuration - Just require something that uses the DSL
Several bundled handlers come with the gem
Deployment
Serf-Handler DSL is simple
require 'serf/handler' unless Object.const_defined?(:Serf) &&
                              Serf.const_defined?(:Handler)
include Serf::Handler

describe "Return the 1-minute, 5-minute, and 15-minute load averages as a",
         "comma separated list of values."

on :query, 'load-average' do |event|
  # 'averages?' matches both the Linux ("load average:") and BSD ("load averages:") spellings.
  `/usr/bin/uptime`.gsub(/^.*load\s+averages?:\s+/, '').split.join(',').strip
end
load_average.rb
Deployment
git-index-deploy.rb
require 'serf/handler' unless Object.const_defined?(:Serf) && Serf.const_defined?(:Handler)
include Serf::Handler

describe "Expects a hash code in the payload which will be queried using",
         "git-index. A 'git fetch --all && git pull' will be executed on all",
         "matching repositories. Deployment hooks are available in order to",
         "execute arbitrary code during the deploy process. If any of the",
         "following files are found in the REPO root directory, they will be",
         "executed in the order described by their name.\n",
         ".serf-before-deploy\n",
         ".serf-after-deploy\n",
         ".serf-on-deploy-failure\n",
         ".serf-on-deploy-success\n"

on :event, 'git-index-deploy' do |event|
  # Serf's executable environments are stripped of even basic information like HOME.
  user = `whoami`.strip
  dir  = `eval echo "~#{user}"`.strip
  `git-index -d #{dir}/.git-index.db -q #{event.payload}`.split(/\n/).each do |match|
    hash, data = match.split(/:\s+/, 2)
    path, url  = data.split(/\|/, 2)
    ENV['SERF_DEPLOY_PAYLOAD'] = event.payload
    ENV['SERF_DEPLOY_HASH'] = hash
    ENV['SERF_DEPLOY_PATH'] = path
    ENV['SERF_DEPLOY_URL'] = url
    success = Dir.chdir(path)
    break unless success
    if FileTest.exist?(File.join(path, ".serf-before-deploy"))
      system(File.join(path, ".serf-before-deploy"))
    end
    success = system("git fetch --all && git pull")
    if success && FileTest.exist?(File.join(path, ".serf-on-deploy-success"))
      system(File.join(path, ".serf-on-deploy-success"))
    elsif !success && FileTest.exist?(File.join(path, ".serf-on-deploy-failure"))
      system(File.join(path, ".serf-on-deploy-failure"))
    end
    if FileTest.exist?(File.join(path, ".serf-after-deploy"))
      system(File.join(path, ".serf-after-deploy"))
    end
  end
end
Deployment
Oh yeah! git-index
Simple tool to make/maintain a database of git repositories
Calculates a "fingerprint" using the combination of the hashes of the first two commits in the repo -- it's enough to be reliably unique.
Deployment
Combine these with a git hook or a GitHub/GitLab webhook, et voilà! Ruby-powered, git-based deploys.
kirkhaines@MacBook-Pro ~/.ghq/github.com/wyhaines/demo-website (production) $ git push
Git Repo -> Production Server, Production Server, Staging Server
Configuration Management
This doesn't need much comment. Chef and Itamae are both Ruby-based and capable.
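For flavor, a minimal Itamae recipe sketch; the package and service names are placeholders, and it would be run with something like `itamae ssh --host app-1 recipe.rb`:

# recipe.rb
package 'git' do
  action :install
end

service 'cron' do
  action [:enable, :start]
end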
Log Management
I still think Analogger is pretty sweet if you just need something fast and reliable to get all of your logs into one place.
Log Management
Fluentd is Ruby, and it's a powerful, all-encompassing log management solution for your whole stack.
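Getting events into Fluentd from Ruby is a few lines with the fluent-logger gem; a sketch, assuming an agent listening on the default forward port 24224:

require 'fluent-logger'

log = Fluent::Logger::FluentLogger.new('myapp', host: 'localhost', port: 24224)
log.post('access', path: '/', status: 200, duration_ms: 12)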
Other Stuff?
A Good Example of Ruby Power to Fill Niches
A tool to keep a folder or folders synchronized between multiple discrete systems, in a few hundred lines of Ruby.
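A hedged sketch of that idea using the listen gem plus rsync; the directory and remote targets are placeholders, and the real tool surely handles more (two-way sync, conflicts, deletions):

require 'listen'

SOURCE  = '/srv/shared/'
TARGETS = %w[app-1:/srv/shared app-2:/srv/shared]

# Push the whole tree to every target whenever anything changes locally.
listener = Listen.to(SOURCE) do |_modified, _added, _removed|
  TARGETS.each { |target| system('rsync', '-az', '--delete', SOURCE, target) }
end
listener.start
sleep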
Bastion Host/Logged SSH
Bolwerk (mostly vaporware right now)
At Engine Yard, I wrote what was essentially an SSH proxy with full-stream logging and two-factor authentication, implemented in Ruby
- Applying lessons learned to reimplement a much improved version
- Fully searchable full stream logging of SSH sessions
- Transparent and flexible
- All Ruby, pluggable, and easily extensible
Still a private project, but hopefully it'll land on GitHub soon
Wrapping Up
This chain of thought was inspired by a couple of real-world, mostly-Ruby stacks that have been in production with few changes for 7+ years.
One stack:
- Swiftiply
- IOWA
- Analogger
- ROMA
- MySQL
- Chef (solo)
- Serf-Handler + Git-Index (+ Serf)

The other:
- em-proxy
- originally Mongrel, and now Puma, with Rails
- Analogger
- MySQL
- Chef (solo)
- Serf-Handler + Git-Index (+ Serf)
Wrapping Up
With some work, one could conceivably fill all the niches with Ruby:
- em-proxy based load balancer
- Shellac based static asset caching
- Your favorite framework + app container
- Fluentd or Analogger logging aggregation
- ROMA key/value store
- Kirk's Crazy Gossiping Distributed SQLite Server
- Chef or Itamae
- Serf-Handler + Git-Index (+ Serf)
- Bolwerk for your bastion server/access control
Would You Want To?
- Performance of the Ruby implementations can be completely reasonable
- Ruby solutions are often less feature-rich than the non-Ruby options, but they are also more targeted at the key use case, and are often very extensible for your exact needs.
- Using them can make the architecture simpler to maintain (I never switched off Swiftiply and Analogger because nothing with comparable capability was nearly so easy to set up or configure for my needs at the time).
Why Might This Be Crazy?
- You need every nanosecond of performance
- You have very complex requirements that are already handled by an off-the-shelf component
- Security risks -- more eyes on code mean finding vulnerabilities is more likely, but getting them fixed is also more likely
- There should be a valid reason for not using a perfectly serviceable tool like NGINX; do it in Ruby if you are also improving some aspect of the use case for you.
Questions?
kirk-haines@cookpad.com / wyhaines@gmail.com / @wyhaines everywhere