A Rust Async Primer, Part 3

Part 2 of this series used a trivial example to explain the basic principles and components of asynchronous Rust. In this one, we’ll demonstrate how to include a separate runtime crate and use it to run concurrent tasks, as well as give a brief comparison of the current options.


When we last left off, we were using the primitive executor available in the futures crate, sometimes called futures-rs. The futures crate provides the common abstraction layer for async behavior in the Rust language and is intended to be the basis for building more complex runtimes. As discussed in the previous articles, this is by design. Unless you intend to create a custom runtime, executors, etc., from scratch, you probably won’t be using futures directly.

I’ve chosen to use the async-std crate for this example simply because it’s easily read by anyone already familiar with the Rust std library. I’ll also be using timers to demonstrate that we’re actually running in a concurrent fashion as expected.

First, we’ll set up our Cargo.toml file as follows.
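Something along these lines should work; the crate version numbers here are illustrative, so pin whatever is current when you try this:

```toml
[package]
name = "async-primer"
version = "0.1.0"
edition = "2018"

[dependencies]
futures = "0.3"
async-std = "1"
rand = "0.8"
```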

I simply reused the original Cargo.toml file and added the async-std and rand crates as dependencies. The rand crate lets us generate random sleep times for our functions.

Next, we’ll alter the main.rs file to look like this:
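A sketch of what that main.rs might look like, continuing the breakfast theme from the earlier parts. The item names, the 1–3 second sleep range, and the cook() helper are my placeholders, not necessarily the original code:

```rust
use std::time::{Duration, Instant};

use async_std::task;
use rand::Rng;

// Simulate cooking one breakfast item for a random number of seconds.
async fn cook(item: &'static str) {
    let secs: u64 = rand::thread_rng().gen_range(1..=3);
    println!("Started {}", item);
    // task::sleep yields to the executor instead of blocking the thread,
    // so other tasks can make progress while this one "cooks".
    task::sleep(Duration::from_secs(secs)).await;
    println!("Finished {} after {}s", item, secs);
}

async fn async_main() {
    // Spawn each item as its own task so they run concurrently...
    let eggs = task::spawn(cook("eggs"));
    let toast = task::spawn(cook("toast"));
    let coffee = task::spawn(cook("coffee"));
    // ...then await all three handles before returning.
    eggs.await;
    toast.await;
    coffee.await;
}

fn main() {
    let start = Instant::now();
    // block_on drives async_main to completion on the current thread.
    task::block_on(async_main());
    println!("Total time: {:?}", start.elapsed());
}
```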

This looks very similar to the final version of the code from the end of the previous article. The changes start with the use statements. We import Instant and Duration from the Rust std library; we use those to time the execution of the code so that we can prove the functions are running concurrently. Instead of importing the futures crate, we import the async-std crate. We also import rand::Rng so that we can generate some random timings.

The only change to main, other than creating a timer and printing out the total execution time, is that instead of using the block_on() executor from the futures crate, we use the task::block_on() interface from the async-std crate. It behaves similarly to the one we saw in the futures crate, but the executor is more sophisticated: while the task blocks the current thread, it runs all of the async code concurrently. If one async function can’t make progress, e.g., it’s asleep, it yields control to a task that can. As an aside, if it seems odd that we use a blocking interface to run concurrent tasks, the reason is that we don’t want the program to exit before all of the tasks have been driven to some form of completion.

If you compile this program and run it, you will see results similar to the ones shown here:

We can see from the results that while the breakfast items are always started in the same order, they are clearly being run concurrently because the order in which they are finished varies based on their random sleep times. We can also see that the total runtime of the code is always the length of time taken by the longest sleep plus one or two milliseconds.

Lastly, let’s take things one step further and create something that looks more like a real-world application. We’ll run multiple concurrent tasks to grab some data from the web and return a Result<T, E>, an extremely common return value for I/O.

First, let's add some needed crates to our Cargo.toml file:
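A plausible dependency section for this step; again, the version numbers are illustrative:

```toml
[dependencies]
async-std = "1"
futures = "0.3"
serde = { version = "1", features = ["derive"] }
surf = "2"
```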

We’re already familiar with the async-std and futures crates from the previous article. The data we’ll be pulling for our example is stock ticker data in JSON format. Serde is a framework for serializing and deserializing Rust data structures; more details can be found in the link, but we’ll be using it to deserialize the JSON data. We’ll be pulling our ticker data from Alpha Vantage, mostly because signup is quick, easy, and free, and it gives us live data to work with. A sample of the JSON data looks like this:
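This is the shape of Alpha Vantage’s GLOBAL_QUOTE response; the field names come from that API, but the values below are made up for illustration:

```json
{
    "Global Quote": {
        "01. symbol": "IBM",
        "02. open": "135.8300",
        "03. high": "136.4800",
        "04. low": "134.9100",
        "05. price": "135.3000",
        "06. volume": "4130055",
        "07. latest trading day": "2021-06-18",
        "08. previous close": "136.1500",
        "09. change": "-0.8500",
        "10. change percent": "-0.6243%"
    }
}
```

Notice that every value, including the price, is a JSON string; that detail will matter when we deserialize it.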

Surf is an HTTP client framework that works well with async-std, and has a convenient integration with serde for handling JSON documents.

Now let’s take a look at our simple stock ticker example in main.rs:
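A sketch of that main.rs, reconstructed from the description that follows. The three symbols, the closure for building the URI, and the exact fields kept in GlobalQuote are my assumptions; substitute your own Alpha Vantage API key for the placeholder:

```rust
use async_std::task;
use serde::Deserialize;

#[derive(Deserialize)]
struct GlobalQuote {
    #[serde(rename = "01. symbol")]
    symbol: String,
    #[serde(rename = "05. price")]
    price: String,
}

#[derive(Deserialize)]
struct TickerData {
    // Maps the top-level "Global Quote" JSON object onto this field.
    #[serde(rename = "Global Quote")]
    full_ticker: GlobalQuote,
}

// Fetch one quote; surf's recv_json deserializes the body via serde.
async fn get_ticker(uri: String) -> surf::Result<GlobalQuote> {
    let TickerData { full_ticker } = surf::get(uri).recv_json().await?;
    Ok(full_ticker)
}

async fn async_main() {
    // Set up the URI for the stock data API.
    let api_key = "YOUR_API_KEY"; // placeholder: substitute your own key
    let base = "https://www.alphavantage.co/query?function=GLOBAL_QUOTE";
    let uri = |symbol: &str| format!("{}&symbol={}&apikey={}", base, symbol, api_key);

    // One Future per symbol; nothing runs until they are driven.
    let first_symbol = get_ticker(uri("IBM"));
    let second_symbol = get_ticker(uri("AAPL"));
    let third_symbol = get_ticker(uri("MSFT"));

    // join! drives all three futures concurrently to completion.
    let (first, second, third) = futures::join!(first_symbol, second_symbol, third_symbol);

    for quote in vec![first, second, third] {
        match quote {
            Ok(q) => println!("{}: {}", q.symbol, q.price),
            Err(e) => eprintln!("request failed: {}", e),
        }
    }
}

fn main() {
    task::block_on(async_main());
}
```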

The structures GlobalQuote and TickerData will hold the deserialized data. The #[derive(Deserialize)] annotation handles implementing serde's Deserialize trait for each struct. The #[serde(rename = "")] attribute does what you probably expect it to do: it maps the quoted JSON field name to the field name in your struct.

The main function hasn’t changed at all. The first four lines in async_main just set up the URI for the stock data API. We then create a Future for each of the three stock symbols we want information on, creatively named first_symbol, second_symbol, and third_symbol. We pass those Futures to the join! macro, which we also saw in the previous article. Finally, we print out the resulting stock prices.

The real meat, if you can call it that, lies in the trivial get_ticker() function. The Result<> that's returned is from the surf framework and wraps the standard library Result. We return GlobalQuote because we want to hand the actual stock data from the request back to the calling function. The let TickerData { full_ticker } syntax tells serde to map the JSON onto the specified structure. It’s not especially useful in our case since the values are all strings, but if the returned data had been typed as something other than strings, e.g., if the JSON price were an actual number, we could have defined the price field as an f64 and serde would have converted it to the appropriate type.
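As a hypothetical illustration of that last point: if the API returned the price as a JSON number rather than a string, the struct field could be typed directly and serde would do the conversion:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct GlobalQuote {
    // Works only if the JSON were `"05. price": 135.30` (a number);
    // the real Alpha Vantage API sends it as a string.
    #[serde(rename = "05. price")]
    price: f64,
}
```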

If you create an account and substitute your generated API key for the placeholder in the code, this should run out of the box and return a result similar to what you see below.

Again, you can see by the times that the requests are running concurrently and the total time for completion is within a few milliseconds of the longest-running task.

I had not done any significant asynchronous programming in other languages before attempting to understand it in Rust, and I struggled to understand how it worked. My biggest struggle was a misconception that there was a clear standard for how async/await works in Rust based on the futures crate, and that all of the runtimes were simply more sophisticated versions of futures. This turns out not to be quite the case.

There are three primary ecosystems for asynchronous Rust: Tokio, async-std, and smol. I’ll take a brief look at each of them and point out a couple of interesting things. The links provided throughout this article will give you much more detailed information.

Tokio is the oldest, released in 2016, and is the most widely used runtime at the time of writing. It uses custom I/O traits built on top of mio, which means it needs a compatibility layer to integrate with futures. It’s very configurable, with many utilities, but comes as one large crate, although you can use feature flags to make the crate lighter.

Async-std was released in 2019 with the goal of being a full runtime async version of the Rust standard library and is built on top of futures. It also uses the async-executor crate, the executor provided by smol.

Smol is the newest, created in early 2020 by Stjepan Glavina, a co-creator of async-std, and is also built on top of futures. It’s intended to be as small as possible, so its functionality is split across many different crates. It also provides a crate for compatibility with Tokio.

I wanted to list the references that I used in the research for this article, and I’m grateful to the authors. Some of the material in them is outdated, but this is a fast-moving target and they got me close enough.


I hope that this helps rather than making things more confusing. I appreciate your time in making it this far. If you find that I got something wrong, or have something to add, please let me know in the comments. And finally, have a great day!



Clay Ratliff
