The misconception: await gives up control (in every language… right?)
The key truth: await on a coroutine does NOT yield to the event loop
Concrete Example 1: Awaiting a coroutine is synchronous
Concrete Example 2: Tasks actually introduce concurrency
Suspension points define concurrency, not async or await
Putting it all together: a mental model that actually works
Every engineer has had that moment during a review where a comment sticks in their head longer than it should.
The code in question touched a shared cache, and on the surface the comment made sense. Multiple asyncio tasks were hitting the same structure, and the function modifying it was async. Shouldn’t that mean I need more locks?
But Python isn’t JavaScript, Java, or C#. And misunderstanding this fundamental difference leads to unnecessary locking, accidental complexity, and subtle bugs.
In Java’s virtual-thread world (Project Loom), the principle is very similar: when you submit work to run asynchronously, typically via an ExecutorService backed by virtual threads, you’re creating tasks. And when you call Future.get(), the virtual thread suspends until the result is ready. The suspension is inexpensive, but it still constitutes a full scheduling boundary.
Coroutines: defined with async def, but not scheduled. A coroutine object is just a state machine with potential suspension points.
Awaiting a coroutine: Python immediately steps into it and executes it inside the current task, synchronously, until it either finishes or hits a suspension point (await something_not_ready).
Tasks: created with asyncio.create_task(coro). Tasks are the unit of concurrency in Python. The event loop interleaves tasks, not coroutines.
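A minimal sketch of the difference (the fetch_value name and the prints are mine, not from the original post):

```python
import asyncio

async def fetch_value():
    # No awaits in the body: it runs straight through once something awaits it.
    return 42

async def main():
    coro = fetch_value()        # just a coroutine object; nothing has run yet
    print(type(coro))           # <class 'coroutine'>
    print(await coro)           # runs fetch_value() inline, synchronously: 42

    task = asyncio.create_task(fetch_value())   # scheduled on the event loop
    print(type(task))           # an asyncio.Task
    print(await task)           # 42, produced once the loop has run the task

asyncio.run(main())
```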
This distinction is not cosmetic: it’s the reason many developers misunderstand Python’s async semantics.
A coroutine is more like a nested function call that can pause, but it doesn’t pause by default. It only yields if and when it reaches an awaitable that isn’t ready.
JavaScript, Java, and C# do not expose this difference. In those languages, an “async function” is always a task. You never await a “bare coroutine.” Every await is a potential context switch.
This is why the code review suggestion I received, “add more locks, it’s async!”, was based on the wrong mental model.
My mutation block contained no awaits. The only awaits happened before acquiring the lock, so the mutation itself was atomic with respect to the event loop.
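The shape of the code under review was roughly this; the names, the fetch_from_backend stub, and the normalization step are illustrative assumptions, not the real cache:

```python
import asyncio

_cache: dict[str, str] = {}
_lock = asyncio.Lock()

async def fetch_from_backend(key: str) -> str:
    # Stand-in for real I/O; this is where suspension actually happens.
    await asyncio.sleep(0.01)
    return f"  Value-for-{key}  "

async def refresh_entry(key: str) -> None:
    # All awaits happen here, before the lock is acquired.
    raw = await fetch_from_backend(key)
    normalized = raw.strip().lower()        # pure computation, no await

    async with _lock:
        # No awaits inside this block: relative to the event loop it is
        # atomic, so no other task can interleave with the mutation.
        _cache[key] = normalized

asyncio.run(refresh_entry("user:42"))
```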
Python’s async model evolved from generators (yield, yield from) rather than from green threads or promises; coroutines are a direct descendant of those primitives.
It also leads to confusion among developers coming from JavaScript, Java, or C#, languages where async automatically means “this is a task.”
That difference is why I didn’t add more locks to my cache code. And it’s why I now review Python async code by asking a much better question: not “is this code async?” but “where can it actually suspend?”
await Is Not a Context Switch: Understanding Python’s Coroutines vs Tasks
Python’s async model is misunderstood, especially by engineers coming from JS or C#. In Python, awaiting a coroutine doesn’t yield to the event loop. Only tasks create concurrency. This post explains why that distinction matters and how it affects locking, design, and correctness.
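Here is a sketch of the first concrete example, awaiting a coroutine directly. The original listing isn’t reproduced here, so child(), parent(), the other() task, and the exact prints are my reconstruction:

```python
import asyncio

async def child():
    print("child start")
    # Plain synchronous work: no suspension point yet.
    total = sum(range(1_000))
    print("child end")
    await asyncio.sleep(0)          # the first (and only) real suspension point
    return total

async def other():
    print("other task ran")

async def parent():
    background = asyncio.create_task(other())  # a competing task, ready to run
    print("parent: before await")
    result = await child()          # steps into child() synchronously
    print("parent: after await", result)
    await background

asyncio.run(parent())

# Output:
#   parent: before await
#   child start
#   child end
#   other task ran
#   parent: after await 499500
```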
await child() did not give the event loop a chance to schedule anything else until child() itself awaited asyncio.sleep.
Only tasks introduce concurrency: if you never call asyncio.create_task, you may not have any concurrency at all.
Concurrency occurs only at suspension points: no await inside a block → no interleaving → no need for locks there.
Locks should protect data across tasks, not coroutines: lock where suspension is possible, not where the keyword async appears.
Scan critical sections for suspension points: if there’s no await inside the lock, the block is atomic relative to the event loop.
Prefer “compute outside, mutate inside”: compute values before acquiring the lock, then mutate quickly inside it.
Teach the difference explicitly: a surprising number of experienced engineers haven’t internalized coroutine vs task separation.
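To make the “scan critical sections for suspension points” rule concrete, here is a hedged sketch of the opposite case: an await inside the lock is a real suspension point, and there the lock is exactly what stops other tasks from interleaving mid-update (the withdraw example and its names are invented for illustration):

```python
import asyncio

_balance = {"amount": 100}
_lock = asyncio.Lock()

async def withdraw(amount: int) -> bool:
    async with _lock:
        if _balance["amount"] < amount:
            return False
        # This await is a genuine suspension point: other tasks run while we wait.
        # Without the lock, two withdrawals could both pass the check above
        # before either subtracts, overdrawing the balance.
        await asyncio.sleep(0.01)       # stand-in for an audit-log write
        _balance["amount"] -= amount
        return True

async def main():
    results = await asyncio.gather(withdraw(80), withdraw(80))
    print(results, _balance)            # with the lock: [True, False] {'amount': 20}

asyncio.run(main())
```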
This post is the explanation I wish more engineers had.
If you’re coming from JavaScript, the rule is simple: every await is a point where the runtime can schedule something else. Python’s await on a bare coroutine does not follow that rule. This is not how JavaScript behaves. This is not how C# behaves. This is not how Java behaves.
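For contrast, here is a sketch of the second concrete example, where child() is wrapped in a task before being awaited; as before, the helper names and prints are my reconstruction rather than the post’s original listing:

```python
import asyncio

async def child():
    print("child start")
    await asyncio.sleep(0)
    print("child end")

async def other():
    print("other task ran")

async def parent():
    background = asyncio.create_task(other())   # same competing task as before
    print("parent: before await")
    await asyncio.create_task(child())          # awaiting a *task* yields to the loop
    print("parent: after await")
    await background

asyncio.run(parent())

# Typical output: the competing task runs before child even starts
#   parent: before await
#   other task ran
#   child start
#   child end
#   parent: after await
```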
Now the output interleaves, depending on how the scheduler orders the tasks.
Because now we have a task, and awaiting a task does yield to the event loop.
Tasks are where concurrency comes from, not coroutines.
This single difference is where most incorrect locking recommendations arise.
The cache wasn’t the story. My reviewer’s misconception was.
Here is the model I now advocate whenever reviewing asyncio code:
Python’s await isn’t a context switch. It’s a structured control-flow construct that might suspend.
“You should add more locks here: this code is async, so anything might interleave.”
Every async function always returns a task (a Promise).
The moment you write await, the runtime can schedule something else.
Awaiting a coroutine does not give control back to the event loop. Awaiting a task does.
No other task ran between “child start” and “child end”.
An async def function is not automatically concurrent.
await is not a scheduling point unless the inner awaitable suspends.
Concurrency exists only across tasks and only at actual suspension points.
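One way to convince yourself of the second point, that await only becomes a scheduling point when the inner awaitable actually suspends, is to await something that is already finished. This small sketch (names and setup are mine) awaits an already-completed Future and then a real suspension point:

```python
import asyncio

async def bystander():
    print("bystander ran")

async def main():
    watcher = asyncio.create_task(bystander())  # runs whenever the loop gets control

    fut = asyncio.get_running_loop().create_future()
    fut.set_result("already done")

    print(await fut)            # fut is already done: no suspension, control never returns to the loop
    await asyncio.sleep(0)      # a real suspension point: now the loop runs the other task
    print("after a real suspension point")
    await watcher

asyncio.run(main())

# Output:
#   already done
#   bystander ran
#   after a real suspension point
```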
The critical section was atomic relative to the event loop.
No other task could interleave inside the mutation.
A more explicit boundary between structured control flow and scheduled concurrency.
The ability to write async code that behaves synchronously until a real suspension occurs.
Fine-grained control over when interleaving can happen.
Coroutines are awaitables with potential suspension points: they do not run concurrently.
Audit where tasks are created: every asyncio.create_task() is a concurrency boundary.
Python: async def → only a coroutine; task creation is explicit
Blog / Engineering · Mehdi Abaakouk · Nov 25, 2025 · 7 min read


