Actix and Rust unsafe

Here I go again writing about Unsafe Rust. It turns out that today I received a pingback to my article Actix-web is dead (about unsafe Rust) from the blog post Rust is a hard way to make a web API. It was also featured in the r/rust Reddit community! (Strangely, the Reddit post links to the original author’s blog: Rust is a hard way to make a web API.)

There, Tom wrote the following, referring to my post:

Heck, if you ask some people, Rust is less secure than a GC’ed language for web apps if you use any crates that have unsafe code – which includes Actix, the most popular web framework, because unsafe code allows things like dereferencing raw pointers.

It seems that there was a bit of a misunderstanding, because I don’t agree with this wording at all. If I were to fix it to better match what I intended to say, I would write instead: “Rust is less secure than a GC’ed language for web apps if you use any crates that abuse unsafe code”.

But even this is overly simplistic; it’s hard to put it in a few words.

Let me try to summarize. About unsafe:

  • All Rust programs depend at some level on unsafe code. It’s nearly impossible to get rid of it, as it is one of the basic building blocks of Rust. The standard library uses a lot of unsafe code (in small quantities, but in lots of places).
  • There are algorithms that require unsafe code to work efficiently, or to be practical (or both). For example, implementing a linked list well is such a nightmare that you need unsafe to do it properly. (See Learn Rust with Entirely Too Many Linked Lists.)
  • The point is that the unsafe portions of a crate should be as small as possible and easy to prove correct. If unsafe can be avoided, it should be avoided, unless there’s a strong reason not to.
  • Unsafe code blocks are not really unsafe. Most of the compiler’s guarantees still apply. Unsafe Rust is close to regular C or C++ code in terms of safety, and most of us feel quite safe coding and running C++ programs.

Regarding Actix and why it was a problem: first we need to understand that Actix is a web server, and it is intended to be exposed to the internet.

The HTTP protocol is really hard to implement correctly, completely, and error-free. It’s not a simple protocol. An attacker could find a vulnerability leading to anything from DoS to remote code execution.

In general we shouldn’t trust ANY web server. Anything you put facing the internet has to be proven to work properly. And the issue here is not distrust of unsafe code (or C++ code); the issue is that a web server that is not widely used and scrutinized is potentially vulnerable to unknown attacks.

So for example, if you run NodeJS web servers, Python web servers, or anything similar, please consider removing them if they are reachable from the internet directly or indirectly (that is, proxying through Apache/Nginx/etc. will still expose a handful of vulnerabilities). Instead, think about using another protocol for this, like FastCGI, WSGI, or similar. Those protocols are simpler, and it is harder to exploit vulnerabilities through them.

Actix, in this case, turns out to be so fast that it even outperforms Nginx! So there’s no point in proxying through anything; you would lose so much performance along the way that it would make no sense. So Actix works best as a web server directly exposed to the internet.

Also, Actix is new. There hasn’t been much time yet to carefully search for vulnerabilities, so we can expect that something might still be hiding in there. It is also still changing rapidly, so new bugs might appear at any point.

The problem with Actix was that the original author loved to use unsafe a lot. They quite enjoyed playing with Rust and unsafe. I’m happy for them, but this is a recipe for disaster: having more unsafe code than the bare minimum needed opens the door to unforeseen consequences.

The teams developing browsers like Firefox or Chromium are quite seasoned in C++, they really know what they are doing, and they take every possible measure to reduce memory-related bugs. Even so, it seems that around 70% of the bugs in those C++ applications are memory-related. (Microsoft found the same.)

I think this shows clearly why unsafe code should be minimized. But, does this mean that a Rust program with a tiny bit of unsafe code is less secure than a Python or NodeJS one? Nope.

Rust places a lot of restrictions on the code, in such a way that the program is almost proven to be correct, much in the style of Haskell and other functional languages.

Having Actix fixed now, with unsafe code blocks reduced to the minimum, makes me more confident running it exposed to the internet than any Python/NodeJS server.

Rust has a lot of guarantees that just don’t exist in Python or Node. Also, the threading and async models impose proper restrictions to avoid programmers shooting themselves in the foot.

In case it wasn’t clear from the previous paragraphs: I wouldn’t hold every part of a web application to such high standards. Unsafe still needs to be minimized, but if a component doesn’t receive user input, its bugs will be much harder to exploit.

Hope this explains my point of view on Actix and unsafe. Also, I’m still learning Rust, and this is just my humble opinion on the matter.

Thanks a lot to Tom MacWright, who referenced my article; it really helps to see that my opinion is being read and taken into account.

Released new ping tool in Rust!

A lot of time has passed since my last post. To be honest, these quarantines have ostracized me and I haven’t kept up with almost anything, as if hibernating, waiting for this thing to go away. After almost a year, it seems I’ve got some energy back to start writing and doing other stuff.

I have been playing with Rust a lot, working through several exercises and different things to get comfortable with it. Now I’m reaching a point where I see that Rust can actually be almost as fast to code in as Python (though there are still a lot of rough edges).

In the meantime, during this WFH period, I noticed that my home network is kind of strange. I get disconnections or weird behavior in anything that requires a real-time connection over the internet. For example, video calls tend to break up often, and online games show random lag spikes.

Because of this, I have been looking to ping my router to diagnose the problem. The thing is, regular ping tools show more or less normal behavior, and to catch any packet loss I need a really aggressive ping rate, whose output is really hard to read.

I searched for other ping tools better suited to this purpose, but what I found was basically paid stuff. It was hard to believe that there wasn’t any open source tool for this, so I thought it would be a good idea for a new Rust project.

And this is how zzping was born. It’s a tool that features a daemon pinger that pings a configured set of hosts at roughly 50 pings per second, stores the data on disk, and also sends it via UDP to a GUI.

After 1-2 weeks of waiting for approval from my employer to release it, I pushed the changes to my GitHub:

(Just note that even though Google appears in the license, this is just a result of the review process. The only relationship between this project and Google is the fact that I was employed by Google while working on it.)

I thought that Rust would not have mature enough GUI libraries, so I played a bit with Python+Qt5. My idea was that Python could handle the data size well enough and Qt would be better than any Rust GUI. But after some trial and error, I realized that Qt’s charting libraries were mostly meant for office use, like 100 points or static viewing.

As I wanted something able to display more than 1,000 points changing in real time, Qt was out of the question, and with it, Python as well. So I went to the Rust Discord servers to ask for advice on a Rust GUI library for this.

It turns out that, obviously, there’s no GUI library capable of graphing aside from FFI bindings to GTK. But, as they quickly pointed out, Iced can paint onto a Canvas quite well, and that should do.

So I coded zzping-gui in Rust; receiving the UDP events from the daemon, I could paint the ping timings and packet loss in real time, up to 10,000 lines on screen. Still, it takes “too much” time to draw, to the point that I found it disappointing; I thought it would be faster. But after profiling, this seems to come from my own NVidia drivers’ drawing, therefore on the Vulkan side of things.

It’s possible that Iced is not optimized for this kind of workload, or maybe (surely) I’m missing optimizations and caching. But it was fast enough, so I moved on.

This is what it looks like when displaying real-time data:


It can only show one host at a time, and if restarted it loses the history.

Up to this point, that’s what I released as 0.1 in the main branch. I’ve continued working on 0.2 in a beta branch.

A bit of trivia

Most people I talked to about zzping said that surely I had used threads for the pinger. Wrong! In fact, the first library I found was internally creating threads all over the place, so I looked at the sources and coded something similar myself, but single-threaded. I purposefully removed the threading and used a non-blocking approach instead.

Why? Because it uses less CPU and is more efficient. “But threads are more performant!” Yes, but no. A threading model would allow me to push more pings per second, sure. But this misses the fact that a single thread on a 10-year-old CPU can send over 1,000 pings per second or more; I haven’t tested the limits.

And at those rates, one has to wonder whether the objective is to test the network or to mount a DoS attack and freeze whatever networking gear we’re trying to ping. There is near-zero value in sending pings hundreds of microseconds apart.

In contrast, threading has a cost. Yes, it does. Programs using threads use more CPU per unit of work done. Threading means that the CPU and OS scheduler have to do more task switches over time, and those switches aren’t exactly free. OS threads also have memory requirements, plus some CPU cost to initialize.

Going all in on threads misses a big point here: zzping-daemon is a utility meant to run in the background all the time, as a service. The computer running it might not have a lot of CPU, or it might be a gaming machine. Every tiny bit of CPU consumed may mean fewer FPS while gaming, and might be a motivation to shut the tool down.

Therefore, removing threads is a better strategy to keep the CPU as free as possible and to do as much work as possible with the absolute minimum CPU required. Rust also helps there, by optimizing the binary heavily.

On another topic, I went with UDP for communication with the GUI because I wanted real-time behavior, and I preferred to drop packets if the connection between zzping-gui and zzping-daemon was flaky. But now I see this as a problem: since UDP is connection-less and unreliable, preparing for a next step where a GUI can subscribe and fetch the last hour of data as a prefill is quite complicated. Therefore, I’m thinking of moving to TCP instead.

TCP has other problems: it might buffer, and it doesn’t surface connection problems. But maybe I’m overthinking it, as this tool is intended for local networks, which should be more or less stable. In any case, if there’s a problem, it should be solved when it appears, not before.

I had quite a hard time designing how to store the data on disk. Even after settling on storing statistics every 100 ms instead of every single ping, it turns out this can still amount to 50 messages per second, depending on the config. And over a year, that quite easily adds up to gigabytes.

MessagePack has been quite helpful. It is one of my favourite formats: really compatible with JSON, flexible, really fast, and small. Here I realized that using this specification reduced the messages to a really small size (maybe by half, just by not storing u32 values directly but letting MessagePack choose the smallest size).

I played a lot with compression techniques, but nothing really helped. I settled on a log quantization that brings files from 20 MB/hour down to 12 MB/hour with an acceptable precision loss. Other techniques, like Huffman coding, delta encoding, or FFT quantization, yielded negligible results while over-complicating the file format. I might go back to them at some point, as I probably overlooked a lot of things that could be done.

This produced a new data format. I named the old one FrameData and the new one FrameDataQ (quite original, hah). zzping-daemon still saves the old format, and I wrote several utilities to read it and transform it into the new one, which in turn is the one the GUI can read.

Oh, I forgot: zzping-gui in the beta branch can read a file if one is passed via command-line options. This opens up a completely new mode and a refurbished graph:

Three Windows Synced

In the image above, there are three zzping-gui instances, each opening a different file for a different host.

This allows zooming and panning. There is also another way of zooming into the Y axis; I named it “scale factor” (sf), and it changes the axis into a semi-logarithmic one, depending on how you move the slider.

The tool also does some pre-aggregation at different zoom levels and performs a seamless zoom transition. It’s quite interesting that it’s able to navigate millions of points in real time.

And that’s it, for now. I have plans to make this better. But it’s taking time as the design is not quite clear yet.