I remember this era like it was yesterday. Ruby on Rails in 2008 was on 2.1 or 2.2 (before the Merb merge) and was 70 or 80% of the way to maturity. I think Rails 3.0 is when the framework matured and stopped changing so much.
There were a lot of decisions back then that were made because they fit the Rails vibe, like the fast adoption of CoffeeScript. In some ways it was a mistake, but in other ways it was putting a foot down on the whole philosophy of Rails, which was to optimize heavily for finding the ideal language of the framework, and worry about the implementation details later.
In 2008, Rails still had trouble running on Windows, shipped with SQLite as the default database, did not support many other databases, had one or two server options, was not modular (you couldn't swap out ActiveRecord), did not have Arel (so all SQL queries were heavily unoptimized), had a hard dependency on Prototype (jQuery came later)... the list goes on. But because it was unrelenting about the values of the framework, we have the beauty that is Rails today, which still captures the magic of Rails in 2008, just with the implementation details filled in.
In some ways, Rails had a very strong startup mentality, in that it was always under-engineered instead of over-engineered throughout its life, until Rails 4-ish.
SQLite was used as the default for local dev/testing; in production you'd set it to MySQL. A fantastic setup with ActiveRecord that made tests very fast.
People were buying MacBooks just to use TextMate + Rails because it was so good compared to the alternatives at the time. In Europe at least, TextMate + Rails were responsible for more new Apple devs than anything else; before that it was an unheard-of, ultra-niche platform nobody was using.
This really should be titled "How Ruby on Rails Could Be Much Better for Shared Hosting", as that's where the issue really was for Dreamhost at the time. In a shared hosting environment back then:
* You didn't get root shell access. Dreamhost were (and still are) one of the good platforms in that they allowed shell access at all. Many shared hosting platforms didn't even do that, requiring you to upload files using FTP.
* The services provided were generally ones where a single server process (or processes) could serve a large number of shared hosting tenants: a web server with many virtual hosts, PHP applications running within that shared web server, Jabber services, mail services, etc.
* All web server configuration was either through the control panel or .htaccess files, because you were sharing the web server instance with everyone else on your shared server.
* The economics of shared hosting meant you couldn't keep processes running for each individual hosting user for an extended period of time: memory was just too expensive to dedicate that way.
Rails didn't really fit this model: a production Rails app loads all the code into memory and then expects to run for a number of hours serving requests. Shared hosting required these processes to shut down when there was no traffic to the site, and then start up on demand when a new request for the app came in. So the fundamental problem was not "Ruby on Rails needs to be a helluva lot faster", it was "Rails is too slow to start up to serve a single HTTP request and then be shut down again".
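The contrast is easy to see in Rack terms, since a Rails app is ultimately just a long-lived object whose boot cost is paid once. Here's a plain-Ruby sketch (no real framework loaded; the names are illustrative):

```ruby
# Persistent-process model (Rack-style): an app is just an object that
# responds to #call. The expensive setup happens once at boot; every
# request afterwards only pays for #call.
class App
  BOOT_TIME = Time.now # stands in for loading the whole framework into memory

  def call(env)
    [200, { "Content-Type" => "text/plain" },
     ["booted at #{BOOT_TIME}, request served at #{Time.now}\n"]]
  end
end

# Shared hosting inverted this: the process (and its boot cost) was torn
# down after each response and paid again on the next request.
status, _headers, body = App.new.call({})
```

Under the shared hosting model, everything above `App.new` would re-run per request, which is exactly the cost Rails was never designed to pay.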
I did run a production Rails app on Dreamhost shared hosting for a while: it seemed to receive just enough traffic not to get killed, but not so much that it became an issue. But as soon as virtual server hosting became cheap enough to make the jump, we moved.
They should rebrand shared hosting as a function-as-a-service, AWS Lambda-alike. It's basically the same thing: start up and run this code, then return a response and exit.
Rails doesn't have fast boot times even now. Exiting after each request would result in much slower apps. Also, you couldn't reuse database connections, so you'd get additional latency from opening a new connection to your DB on every request.
This isn't just a Rails thing either.
FaaS is a good thing. I'm a big fan. It's essentially the same thing as CGI, which was awesome back in the day, BUT there are multiple good reasons to choose a framework that isn't constantly exiting.
To me, the only point on this list that hasn't improved much is "Ruby on Rails needs to more or less work in ANY environment".
Ruby's install/user story for Windows is still Not Great, and Ruby version wrangling on any operating system is a question that you still get 100 answers to ("use asdf/rvm/rbenv/chruby blah blah blah").
For the rest:
1) It's certainly Fast Enough now. More important webperf issues are happening on the frontend side nowadays. Who cares if your backend takes 500ms to respond when it takes 5 _seconds_ to compile the JS on the client.
3) Solved since Rails 4.0 IMO.
4) Discourse has more or less proven this can be done, with some hard work. Also, Heroku more or less standardized the 512MB VPS size, and nowadays 1GB is pretty cheap and becoming more common, so Rails doesn't struggle with memory limits anymore.
The Ruby language has also sped up a lot since 2008, which has helped Rails. I take your point about the slowness being on the client these days, but 500ms would be an exceptionally slow response from the Rails application in any of the production Rails apps I've run.
I switched from rbenv to chruby [1] years ago. It seems simpler and doesn't have the rehash command that is (was?) required with rbenv. All it does is adjust your PATH.
Personally I swapped from rbenv to asdf a while back. I don't think there's any real justification for one over another these days, just use the one you're comfortable with :D
Happy asdf user here. If you're only using Ruby, then asdf isn't much better or worse - but asdf is great for supporting other toolchains, like Python, Crystal, Rust, Go... AFAIK the asdf Ruby plugin uses ruby-build (the same installer rbenv uses) under the hood.
rbenv handles only ruby versions, whereas rvm also handles gem installations. I think it's simpler when you let rbenv handle ruby versions and bundler handle the gems, because you have to deal with bundler anyway.
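In practice that division of labour is just two files: a `.ruby-version` for rbenv and a `Gemfile` for Bundler. A minimal, hypothetical example (version numbers are illustrative):

```ruby
# Gemfile -- Bundler owns the gem dependencies...
source "https://rubygems.org"

# ...and can also assert the Ruby version it expects rbenv to provide
# (rbenv itself reads the separate .ruby-version file in the project root).
ruby "2.7.1"

gem "rails", "~> 6.0"
gem "pg"
```

With this setup, `rbenv` picks the interpreter when you `cd` into the project, and `bundle install` / `bundle exec` handle everything gem-related.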
> Who cares if your backend takes 500ms to respond when it takes 5 _seconds_ to compile the JS on the client.
Frontend should never need 5 seconds, and a 500ms delay on the backend seriously impacts usability. Backend performance still matters: it saves cost (fewer servers), makes scalability easier, and has a massive impact on usability.
I just pulled these numbers off of CNN.com using webpagestest.org, which is showing about a 500ms TTFB and a 5 second time to first contentful paint. I'm a web perf consultant and a 10:1 relationship between frontend and backend time is pretty typical IME.
For years I've been running everything on Docker Compose. Even with Ruby improving on Windows, many dependencies didn't run well or easily on Windows (like Redis). Things like WSL of course improve that situation, but I don't think you can beat replacing a long-winded README full of installation instructions with a simple "docker-compose up" (the minimal loss of performance is worth it to me).
Yeah, he was amusing too, with his rants and general disposition. Came up on my Instagram feed; he's into art and oil painting. Don't know about his tech lately. Jumped on that DHH hype train back in the day. Actually still have a Rails app running Mongrel from back then.
Worth noting about the shared hosting argument: Dallas (author) is a co-founder of DreamHost. DreamHost, at the time, was very invested in selling "unlimited everything" shared hosting plans.
Think I ran a basic Rails app via Passenger on it around 2010, but it was a hassle to set up and I don't think it got any easier. And running Node is completely off-limits on DH shared hosting.
I think the next main challenges for Rails are improving the view layer (see the upcoming ViewComponent, for example) and integrating better with modern JavaScript (Webpacker feels half-baked for gem writers). To me these issues (which both have more to do with the view layer) are where Rails is very much lacking.
I agree that the view layer has not been Rails' strength. It's largely Handlebars-era inspired and has never really changed.
In some ways I think rails never really needed to change because the view layer worked without issues. I always felt the solutions in the modern html/javascript world were messy and half baked, with many unforeseen tradeoffs and complexity for the benefits you get (this summarizes my experience with the whole SPA era).
We are only now starting to have best practices and reasonable design around how the html/js view layer should be done. But I think many people in Rails will continue to prefer the old way of views just because of the simplicity of it.
You'd be surprised how a library as small as Turbolinks has held off the entire wave of webpack / React / SPAs.
I'm fine with rendering html on the server without a SPA. But Rails view helpers need more love I think, watch Github's ViewComponent talk to see what I mean.
StimulusReflex is an exciting approach. It's a pattern used in other places as well, and I think it will provide a great option for those who'd rather not deal with modern JavaScript complexity.
Is there a framework out there that faces as much consistent scrutiny as Rails?
It seems odd that for a decade the web dev community has singled out this framework. It's always relevant to suggest updates and things to fix - any framework has those. But the continuing conversation of "is Ruby on Rails good enough" (which seems to be the underlying driver of the conversation) is still present after hundreds of companies have built solid products with it.
I don't think it's as scrutinized as it used to be, though it may feel that way if it's your framework of choice (it is mine).
I feel that SPAs get a lot of scrutiny, but it's spread out among the specific libraries/frameworks. There's also a lot of vocal advocates (as Rails used to have) that makes it feel more balanced.
Over the course of my career, it seems that the "easy" languages (Visual Basic, ColdFusion, Ruby, etc) always get a lot of criticism. I feel like Rails is in a great, mature spot right now, and some exciting things are happening to make it the best positioned for the coming push-back against "Javascript all the things".
I remember setting up a Rails 2.x app on Dreamhost in 2009. At that time it was indeed more complicated than PHP (although mod_passenger made it manageable), but Rails was so productive and fun that it was well worth the extra setup effort.
I think Ruby on Rails should be more asynchronous. Doing an HTTP request or a SQL query blocks everything until it finishes. The solution is to spawn many instances, but that uses a lot of memory, and one slow HTTP service may block all Rails instances very quickly.
So, threads and instances are two very different things. They both use memory, though threads use far, far less memory - just like in other languages where you would use threads for concurrency. When Ruby's GIL is waiting on I/O, it allows other threads to do their work. Per the first link, from the author of Puma:
"The reason for that is waiting on IO (talking to a DB, etc) will allow another thread to run"
The threads don't run in parallel, but they do run concurrently. While this isn't a performance panacea, it works well with many web apps, which tend to be I/O bound (waiting for a database or remote service). So, to your original point that "doing an HTTP request or a SQL query blocks everything until it finishes; the solution is to spawn many instances" - an HTTP request or SQL query will, in fact, allow other threads to execute while waiting for a response, meaning you can rely on threads, rather than instances, and maintain a very low memory footprint.
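This behaviour is easy to demonstrate in plain Ruby. In this sketch `sleep` stands in for a database query or HTTP call (both release the GVL while waiting, the same way `sleep` does); the durations are illustrative:

```ruby
require "benchmark"

SIMULATED_IO = 0.2 # seconds; stands in for a DB query or HTTP request

# One at a time: four waits cost roughly 4 x 0.2s.
sequential = Benchmark.realtime { 4.times { sleep SIMULATED_IO } }

# In threads: the GVL is released while each thread waits on (simulated)
# I/O, so the four waits overlap and the total is roughly one wait.
threads = Benchmark.realtime do
  4.times.map { Thread.new { sleep SIMULATED_IO } }.each(&:join)
end

puts format("sequential: %.2fs, threaded: %.2fs", sequential, threads)
```

The threaded version finishes in a fraction of the sequential time despite the GVL, which is exactly why a threaded server like Puma can serve I/O-bound Rails apps with a low memory footprint.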
If you like Ruby but want true multi-threading and parallelism, I'd highly recommend taking a look at JRuby (https://www.jruby.org). Those guys have done an incredible job with it.
The company I work for (mostly standard web dev) uses almost exclusively Elixir for any backend work. Only the fanboys of Elixir and functional-everything are happy with it, just to be different from everyone else.
The experience is terrible: Tooling (editor plugins, tests runners, IDEs (oh, there are no IDEs...), debugging) is like going back 20 years.
There are no libraries for the most basic stuff you get almost by default on the Ruby, Python, Node or Java ecosystems. So we end up reinventing half assed solutions to anything we need to do.
Some days I think we would be sooo much better by just using Rails or Django.
Of course, concurrency and the Erlang VM are awesome and a perfect fit for the web... if your problem is performance, it will solve that problem for you. Other than that, it's all wasted time, in my opinion, from my experience after years of using it.
I’ve found the ecosystem to be quite high quality but the editor tooling does fall short sometimes. Particularly the transition from Alchemist to an editor powered by the new language server protocol.
I can’t say it’s anything other than the fault of not enough spare time. The community is a bit smaller so there are not as many volunteers to build out the latest and greatest tooling for editor support.
Despite this, I've had a wonderful experience working within the Elixir ecosystem and using it as a gateway to Erlang. One of the things I had to acknowledge was my bias toward recent updates as a measure of quality. The ecosystem is so good that you'll find packages years old and never updated. They simply do what they do and do it well.
I found overall great support in VS Code at the end of the day for Elixir. I wanted it to work well with emacs but it wasn’t consistently enjoyable.
VS Code works well enough for now between the satisfaction of shipping code and the satisfaction of a flick of the wrist for editor commands.
Really? I've been able to find everything I need in the Elixir ecosystem, from auth to background jobs, and one of (if not the) best GraphQL server implementations out there. Do you mind sharing which libraries you found missing or incomplete?
Yes, in fact we have ourselves published a few open source packages out of the need we had to build them.
The latest incarnation of this problem was when we had to validate RUT codes (a tax code with a checksum that we use here in Chile). Options for Python [1], options for Node [2], options for Elixir [3] (don't bother clicking; it's a list with nothing to do with RUTs or modulo-11 validations). So we had to implement it ourselves. Looking for information, we found example implementations [4], where as you can see there are example implementations even for Asterisk dialplans (!!) but no mention of Elixir.
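For what it's worth, the check itself is small. Here's a sketch in Ruby (the thread is about Elixir, but the algorithm is language-agnostic) of the modulo-11 scheme as it's commonly described for RUTs: digits weighted 2..7 cycling from the right, with 11 mapping to 0 and 10 mapping to K:

```ruby
# Modulo-11 check digit as commonly described for Chilean RUTs:
# weight the digits 2,3,4,5,6,7 (cycling) from the right, sum the
# products, and derive the check digit from 11 - (sum mod 11).
def rut_check_digit(number)
  weights = [2, 3, 4, 5, 6, 7].cycle
  sum = number.to_s.chars.map(&:to_i).reverse.sum { |d| d * weights.next }
  case check = 11 - sum % 11
  when 11 then "0"
  when 10 then "K"
  else check.to_s
  end
end

def valid_rut?(number, check_digit)
  rut_check_digit(number) == check_digit.to_s.upcase
end
```

Trivial to write once, but that's the point: every team ends up rewriting it (and the parsing/formatting around it) when the ecosystem has no shared library.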
There are many more examples. Of course you get libraries for the "standard" stuff; it is when you get into the details, and that happens when you are already too deep into your project, that you realise all the missing pieces.
Also, what code editor do you use? We've tried everything, and it seems the best option is to grow a beard and go emacs/vim. The VSCode plugins hog your laptop and are really inconsistent, incomplete, and sluggish [1] [2].
I've been experimenting with it, and it definitely has the feeling of working in rails / ruby. My only gripe is that the tooling is not as mature as rails.
I use rubymine and the debugger has truly spoiled me.
I tried to get the debugger going with a third-party plugin for IntelliJ, but it would crash when inspecting variables.
There is not much I can comment on editor support as I’ve had the experience of both successfully and unsuccessfully using various debuggers in editors. If you jump into IEX and use the built in introspection it’s some of the most advanced tooling I’ve ever seen in any language. Especially when doing OTP-level topology.
I don't think that's true - the Ruby VM switches threads when blocked on IO like http or db. This happens regardless of what Rails is doing.
Maybe you're talking about suboptimal usage of cores...afaik it's indeed being worked on in the Ruby community and it's not something Rails can solve.