Preface xiii
Part 1: Asynchronous Programming Fundamentals
Chapter 1: Concurrency and Asynchronous Programming: a Detailed Overview 3
Technical requirements 4
An evolutionary journey of multitasking 4
Non-preemptive multitasking 4
Preemptive multitasking 5
Hyper-threading 5
Multicore processors 6
Do you really write synchronous code? 6
Concurrency versus parallelism 7
The mental model I use 8
Let’s draw some parallels to process economics 9
Concurrency and its relation to I/O 11
What about threads provided by the operating system? 12
Choosing the right reference frame 12
Asynchronous versus concurrent 12
The role of the operating system 13
Concurrency from the operating system’s perspective 13
Teaming up with the operating system 14
Communicating with the operating system 14
The CPU and the operating system 15
Down the rabbit hole 16
How does the CPU prevent us from accessing memory we’re not supposed to access? 17
But can’t we just change the page table in the CPU? 18
Interrupts, firmware, and I/O 19
A simplified overview 19
Interrupts 22
Firmware 22
Summary 23
Chapter 2: How Programming Languages Model Asynchronous Program Flow 25
Definitions 26
Threads 27
Threads provided by the operating system 29
Creating new threads takes time 29
Each thread has its own stack 29
Context switching 30
Scheduling 30
The advantage of decoupling asynchronous operations from OS threads 31
Example 31
Fibers and green threads 33
Each stack has a fixed space 34
Context switching 35
Scheduling 35
FFI 36
Callback-based approaches 37
Coroutines: promises and futures 38
Coroutines and async/await 39
Summary 41
Chapter 3: Understanding OS-Backed Event Queues, System Calls, and Cross-Platform Abstractions 43
Technical requirements 44
Running the Linux examples 45
Why use an OS-backed event queue? 45
Blocking I/O 46
Non-blocking I/O 46
Event queuing via epoll/kqueue and IOCP 47
Readiness-based event queues 47
Completion-based event queues 48
epoll, kqueue, and IOCP 49
Cross-platform event queues 50
System calls, FFI, and cross-platform abstractions 51
The lowest level of abstraction 51
The next level of abstraction 55
The highest level of abstraction 61
Summary 61
Part 2: Event Queues and Green Threads
Chapter 4: Create Your Own Event Queue 65
Technical requirements 65
Design and introduction to epoll 66
Is all I/O blocking? 72
The ffi module 73
Bitflags and bitmasks 76
Level-triggered versus edge-triggered events 78
The Poll module 81
The main program 84
Summary 93
Chapter 5: Creating Our Own Fibers 95
Technical requirements 96
How to use the repository alongside the book 96
Background information 97
Instruction sets, hardware architectures, and ABIs 97
The System V ABI for x86-64 99
A quick introduction to Assembly language 102
An example we can build upon 103
Setting up our project 103
An introduction to the Rust inline assembly macro 105
Running our example 107
The stack 109
What does the stack look like? 109
Stack sizes 111
Implementing our own fibers 112
Implementing the runtime 115
Guard, skip, and switch functions 121
Finishing thoughts 125
Summary 126
Part 3: Futures and async/await in Rust
Chapter 6: Futures in Rust 129
What is a future? 130
Leaf futures 130
Non-leaf futures 130
A mental model of an async runtime 131
What the Rust language and standard library take care of 133
I/O vs CPU-intensive tasks 134
Summary 135
Chapter 7: Coroutines and async/await 137
Technical requirements 137
Introduction to stackless coroutines 138
An example of hand-written coroutines 139
Futures module 141
HTTP module 142
Do all futures have to be lazy? 146
Creating coroutines 147
async/await 154
coroutine/wait 155
corofy—the coroutine preprocessor 155
b-async-await—an example of a coroutine/wait transformation 156
c-async-await—concurrent futures 160
Final thoughts 165
Summary 166
Chapter 8: Runtimes, Wakers, and the Reactor-Executor Pattern 167
Technical requirements 168
Introduction to runtimes and why we need them 169
Reactors and executors 170
Improving our base example 171
Design 173
Changing the current implementation 177
Creating a proper runtime 184
Step 1 – Improving our runtime design by adding a Reactor and a Waker 187
Creating a Waker 188
Changing the Future definition 191
Step 2 – Implementing a proper Executor 192
Step 3 – Implementing a proper Reactor 199
Experimenting with our new runtime 208
An example using concurrency 208
Running multiple futures concurrently and in parallel 209
Summary 211
Chapter 9: Coroutines, Self-Referential Structs, and Pinning 213
Technical requirements 214
Improving our example 1 – variables 214
Setting up the base example 215
Improving our base example 217
Improving our example 2 – references 222
Improving our example 3 – this is… not… good… 227
Discovering self-referential structs 229
What is a move? 231
Pinning in Rust 233
Pinning in theory 234
Definitions 234
Pinning to the heap 235
Pinning to the stack 237
Pin projections and structural pinning 240
Improving our example 4 – pinning to the rescue 241
future.rs 242
http.rs 242
main.rs 244
executor.rs 246
Summary 248
Chapter 10: Creating Your Own Runtime 251
Technical requirements 251
Setting up our example 253
main.rs 253
future.rs 254
http.rs 254
executor.rs 256
reactor.rs 259
Experimenting with our runtime 261
Challenges with asynchronous Rust 265
Explicit versus implicit reactor instantiation 265
Ergonomics versus efficiency and flexibility 266
Common traits that everyone agrees about 267
Async drop 268
The future of asynchronous Rust 269
Summary 269
Epilogue 272
Index 275
Other Books You May Enjoy 282
Step into the world of asynchronous programming with confidence by conquering its most unclear concepts with this hands-on guide. Using working examples, this book simplifies the trickiest ideas, exploring coroutines, fibers, futures, and callbacks to help you navigate the vast Rust async ecosystem with ease.
You'll start by building a solid foundation in asynchronous programming and exploring diverse strategies for modeling program flow. The book then guides you through concepts such as epoll, coroutines, green threads, and callbacks using practical examples. The final section focuses on Rust, examining futures, generators, and the reactor-executor pattern. You'll apply your knowledge to create your own runtime, solidifying your expertise in this dynamic domain. Throughout the book, you'll not only gain proficiency in Rust's async features but also see how Rust models asynchronous program flow.
By the end of the book, you'll possess the knowledge and practical skills needed to actively contribute to the Rust async ecosystem.
This book is for programmers who want to enhance their understanding of asynchronous programming, especially those experienced in VM-based or interpreted languages such as C#, Java, Python, JavaScript, and Go. If you work with C or C++ but have had limited exposure to asynchronous programming, this book will broaden your knowledge in this area.
The examples are predominantly in Rust, and the intricacies of Rust's futures are covered in detail, so anyone with a keen interest in learning Rust, or with a working knowledge of it, will be able to get the most out of this book.