Daily

Rails to Phoenix

I obviously should and hopefully will write at length about this at some point, but one of the most dramatic changes I've noticed in shifting some of our apps at work from Rails to Phoenix is in how open PG connections get utilized.

If I tell a Phoenix app to open 100 connections and then check the corresponding PG database, that database has 100 connections open, period. Granted, since Phoenix appears to hold every connection open, I can't directly verify that all of them get used when a high volume of requests comes in (a connection being open all the time doesn't mean it's being utilized), but it seems like a pretty safe assumption that if it can open them, it can utilize them.
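
For concreteness, here's roughly what that looks like: a minimal sketch with placeholder names (`:my_app`, `MyApp.Repo`, `my_app_prod`), not anything specific to our apps:

```elixir
# config/runtime.exs: a minimal sketch with placeholder names
import Config

config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  # Ecto's DBConnection pool starts all of these up front and holds them open
  pool_size: 100

# Then, from psql, count what Postgres actually sees:
#
#   SELECT count(*) FROM pg_stat_activity WHERE datname = 'my_app_prod';
#   -- => 100, matching pool_size exactly
```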

Rails, by contrast, doesn't have this same kind of linear relationship with the utilization of its database pool. In fact... I can't really figure out how the math works out in actual usage.

I'm told that the math is supposed to work out quite neatly: the number of Puma workers you're running in cluster mode times the number of threads each one runs (workers x threads) is your peak db pool usage. And this is of course circumscribed by the pool allocation in your database config, with the caveat that the pool setting is per process, so the app-wide hard max you've actually set is workers x pool.
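
In config terms, that math falls out of the stock Puma/Rails setup, sketched below with the conventional env var names (`WEB_CONCURRENCY`, `RAILS_MAX_THREADS`); the numbers are illustrative, not ours:

```ruby
# config/puma.rb: the conventional setup, numbers illustrative
workers ENV.fetch("WEB_CONCURRENCY") { 4 }             # forked worker processes
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count                   # min, max threads per worker

# config/database.yml sets the *per-process* pool:
#   pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
#
# Expected peak checkouts: workers * threads = 4 * 5 = 20
# App-wide ceiling:        workers * pool    = 4 * 5 = 20
```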

But... I've never once, ever, in nearly two years of running some pretty highly trafficked apps, seen that pool max hit or even APPROACHED. Rails appears to check connections out of its pool on the fly, which makes it easier to tell exactly how much it's using at any moment, but it's also wholly apparent that it's much, MUCH worse at actually **using** what's available within any given allocation. While I know some of this just spins off from concurrency issues, I'd love to know the actual mechanics of what's happening and why, because... it's pretty dramatic how much worse it is out of the box.
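
Part of the mechanics is at least visible at the API level: ActiveRecord creates connections lazily on checkout and, in modern Rails, reaps ones that sit idle, which would explain why the open-connection count tracks live demand instead of the configured max. A sketch of the observable behavior, not of Rails internals:

```ruby
# A thread only touches PG when it actually checks a connection out:
ActiveRecord::Base.connection_pool.with_connection do |conn|
  conn.execute("SELECT 1")  # a real PG connection is created here only if
end                         # no idle one is available, up to `pool`

# Idle connections are later pruned by the pool's reaper
# (database.yml's idle_timeout, 300 seconds by default), so the count
# Postgres sees floats with demand rather than sitting at the max.
```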