Yes, they do handle significant concurrency better, I won’t deny that. But let’s be honest: most of us are not building the next Quora. Our apps have few users. And with the inherent overhead in event-driven IO loops, they’re more likely to perform worse than using a normal, blocking framework. We probably don’t need them.
Until we really do need them.
The Problem: IM IN UR LOOP BLOCKIN IT
Imagine that we decided to use Tornado because it’s so cool and we have a request handler that does something like:
```python
import requests

from myapp import BaseHandler


class SomeHandler(BaseHandler):
    def get(self):
        some_data = requests.get(self.user_data_url)
        some_values = self.process_data(some_data.json())
        self.render('some-template.html', **some_values)
```
So, when the user requests this page, we fetch some data from somewhere (his facebook profile? his blog? whatever), process it and then render the view including that data. Nothing weird. Should be OK.
Unless… that service is a bit slow to respond. So, probably not facebook or twitter. Maybe one of our services, and we know it’s slow sometimes. What do we do then?
The user knows it will be slow, he will wait, no problem.
Sure (not really), but about the other user that’s also connected right now, trying to log in? She won’t be served because the application will be busy with this request, waiting for this external service to respond. And she won’t know why it’s slow because she’s not viewing this page. Any ideas?
We can cache the results!
That will work in the long term, not today and not always. Whenever a new user views this page all other users will be annoyed by the server not responding.
Well, we just use more processes!
And there we go…
Yes, we can always add more machines, more cores, more memory, more processes, more money, more programmers. But that’s not really a solution, is it? It’s not a solution because we’re not actually addressing the problem; we’re only mitigating it, and in a very inefficient manner.
To solve it properly we first need to understand where the problem lies, and the key words to understand that are busy and waiting. Nothing should ever be busy waiting. Ever.
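The cost of busy waiting is easy to see with a toy, single-threaded "server" that handles requests strictly one after another. This is only an illustrative sketch, not real server code; the handler names and the 0.2 s delay are made up.

```python
import time

def slow_external_call():
    # Stand-in for the slow external service.
    time.sleep(0.2)
    return {'profile': 'some data'}

def handle_profile(_request):
    return slow_external_call()

def handle_login(_request):
    return 'logged in'   # nothing slow here at all

# One blocking worker: requests are served strictly in arrival order.
start = time.monotonic()
served_at = {}
for name, handler in [('profile', handle_profile), ('login', handle_login)]:
    handler(name)
    served_at[name] = time.monotonic() - start

# The login user waited ~0.2 s for a request that was not hers.
print(served_at)
```

The login handler does no slow work of its own, yet it cannot even start until the external call ahead of it has finished; that is the whole problem in miniature.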
The Solution: CAN I HAZ coroutine?
We can async that method up a bit using some of tornado’s goodness:
```python
import json

from tornado import gen
from tornado.httpclient import AsyncHTTPClient

...

@gen.coroutine
def get(self):
    cl = AsyncHTTPClient()
    some_data = yield cl.fetch(self.user_data_url)
    some_values = self.process_data(json.loads(some_data.body))
    self.render('some-template.html', **some_values)
```
We turned the method into a Tornado coroutine (gen.coroutine), used an async client to make the call (AsyncHTTPClient) and yielded the response of the call. The effect is that as soon as we make the call to that external (and potentially slow) service, the method yields a future and the application continues doing something else (e.g., serving another request). Then, when it gets the result from that external service, it returns to the method and continues executing from that point onwards (assigning the value to some_data and so on).
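The mechanics can be sketched without Tornado at all: a plain generator plays the coroutine, and a few lines of driver code play the IOLoop. Every name here is made up for illustration; this is a toy, not how Tornado is actually implemented.

```python
log = []

def handler():
    # 'yield' hands control back to the loop until the response arrives.
    some_data = yield 'http://example.com/user-data'
    log.append('processed %s' % some_data)

loop_task = handler()
pending_url = next(loop_task)     # run until the yield; get what to fetch

# The loop is free here: it can serve other requests while the
# (pretend) HTTP request for pending_url is in flight.
log.append('served another request')

# The response "arrives"; resume the coroutine with the result.
try:
    loop_task.send('user-json')
except StopIteration:
    pass

print(log)   # ['served another request', 'processed user-json']
```

The point is the interleaving: the other request is served before the coroutine finishes, even though everything runs in one thread.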
Wait, did I hear coroutine? Does that mean they execute in parallel?
No, they don’t execute in parallel. What happens is that they’re kept alive in parallel until they finish executing: the application can pause and leave when they yield, and come back to them when the future resolves. In fact, this is only a cool trick to avoid the callback syntax; we could have just done this:
```python
def get(self):
    cl = AsyncHTTPClient()
    cl.fetch(self.user_data_url, self.process_and_render)

def process_and_render(self, some_data):
    some_values = self.process_data(json.loads(some_data.body))
    self.render('some-template.html', **some_values)
```
The Catch: IM IN UR LOOP YIELDIN STUFF
Imagine that we now need to make several calls to that external service, and so we decided to use a loop:
```python
@gen.coroutine
def do_something(self, some_people):
    res = []
    for p in some_people:
        r = yield self.get_person_data(p)
        res.append(r)
    stats = self.calculate_stats(res)
    return res, stats
```
Makes sense, right? Not really. That construct will not yield all the futures at once; each pass through the loop yields a single future and waits for it to resolve before the next call to get_person_data even starts. Why? Our beloved for loop blocks the coroutine at every yield, so the calls run one after another instead of concurrently.
Instead we need to construct the group of calls and yield them all at once, which sounds really complicated but is rather simple, thanks to list comprehensions:
```python
@gen.coroutine
def do_something(self, some_people):
    res = yield [self.get_person_data(p) for p in some_people]
    stats = self.calculate_stats(res)
    return res, stats
```
What do you know? That’s even more readable than the explicit loop.
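The difference between the two versions is easy to measure. The post uses Tornado, but the same sequential-vs-batched behavior shows up with only the standard library's asyncio; this is a sketch with made-up names (get_person_data) and a made-up 0.1 s delay per call, not the post's actual code.

```python
import asyncio
import time

async def get_person_data(person):
    # Stand-in for the external call; each one takes ~0.1 s.
    await asyncio.sleep(0.1)
    return {'name': person}

async def sequential(people):
    # Like yielding inside the for loop: one call at a time.
    res = []
    for p in people:
        res.append(await get_person_data(p))
    return res

async def batched(people):
    # Like yielding the whole list at once: the calls overlap.
    return await asyncio.gather(*[get_person_data(p) for p in people])

people = ['ann', 'bob', 'cal']

start = time.monotonic()
asyncio.run(sequential(people))
seq_time = time.monotonic() - start      # roughly 0.3 s: calls back to back

start = time.monotonic()
res = asyncio.run(batched(people))
batch_time = time.monotonic() - start    # roughly 0.1 s: calls overlap
```

With three calls the sequential version pays the delay three times over, while the batched version pays it roughly once, and gather still returns the results in input order.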
To Async or Not to Async
Of course, not every application can gain from this async-ness, and there’s a lot to lose as well: debugging becomes significantly more challenging than it already is. I’d say that there are two pre-conditions that must be met for you to even consider entering this realm:
- Your application has high concurrency
- Your request handler is busy waiting rather often
If your handler’s job is mostly CPU- or database-bound, you probably shouldn’t. And if your database is slow, you really need to fix that, ASAP.
Of course, tornado has both sync and async capabilities, so you can use it only when you need it. And it is indeed a simple, sensible and solid framework, so you might as well try it anyway.