Original post written by Soatok
One of the first rules you learn about technical writing is, “Know your audience.” But often, this sort of advice is given without sufficient weight or practical examples. Instead, you’re ushered quickly onto the actual tactile aspects of writing, with the hope that some seed was planted that will sprout later in your education.
Science communication is famously a hard problem.
The formal scientific literature is often written with an intended audience of other scientists. In the past few decades, an industry of popular science publications sought to bridge the gap between the cool things scientists were doing and us mere mortals who completely lack the intuition necessary to wrap our heads around the intricacies and nuances of what the eggheads figured out in their pursuit of higher-quality ignorance. The few science communicators who excelled in not only informing, but also exciting, a young audience would be celebrated for decades to come.
We all remember Mythbusters, right?
One of the things that exacerbates the difficulty of effective science communication is when you cannot know, let alone choose, who your audience is. Deprived of the ability to work backwards from what the intended recipient already knows and believes, you have to make tough choices about how to articulate your ideas.
When the Internet Engineering Task Force publishes an RFC, their bar is community rough consensus. RFCs are, inherently, the result of a design-by-committee writing process; usually intended for engineers to read. Especially with cryptography, they err on the side of technical specification rather than introductory blog post.
When someone misinterprets an IETF RFC, it can have devastating security implications. So, in recent years, RFC authors have demonstrated a tendency to err on the side of overcommunicating security risks. However, this is a delicate balance to strike: If you go too far, you risk confusing or scaring the reader. This is especially risky since RFCs are often read by non-engineers, making the specter of science communication difficulty continue to haunt us.
With all that in mind, several people have asked me in recent weeks for my thoughts on a blog post published earlier this month titled, “MLS: The Naked King of End-to-End Encryption.”
So let’s get into that.
What is MLS, anyway?
MLS (Messaging Layer Security, RFC 9420) is a protocol for establishing a shared key for a group of users and maintaining it as users join, leave, or rotate their own private credentials. The relevant concept here is called Continuous Group Key Agreement.
MLS is not a complete end-to-end encryption protocol, in the way that TLS (Transport Layer Security) owns virtually the entire stack for secure communication over networks. (You’re using TLS right now, and any time you access a website over HTTPS.)
MLS is a building block; a well-designed tool whose designers made specific trade-offs in order to satisfy their understanding of the threat model for group private messaging.
In the hands of practical engineers and architects, MLS is an excellent starting point for building a complete end-to-end encryption system.
MLS is currently being considered by the W3C for end-to-end encryption for the Fediverse. My own initiative, which was not tethered to any standards organization, was also likely to settle on MLS.
Contextualizing MLS
MLS is very limited in its scope.
It’s chiefly a group key agreement protocol built atop a binary tree of key encapsulation mechanisms, with some other features thoughtfully attached to make it easy to securely integrate into messaging software.
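To build a little intuition for why a binary tree is the right shape here, consider how an update propagates. The sketch below is a toy stand-in, using plain SHA-256 where MLS actually uses an HKDF-based key schedule and HPKE encapsulation; the function names and labels are mine, not RFC 9420’s:

```python
import hashlib

def kdf(secret: bytes, label: bytes) -> bytes:
    # Toy stand-in for MLS's HKDF-based DeriveSecret; NOT the real construction.
    return hashlib.sha256(label + secret).digest()

def path_secrets(leaf_secret: bytes, depth: int) -> list:
    """Derive one secret per node on the path from a leaf to the root.

    In TreeKEM (the tree construction underlying MLS), a member updating
    their leaf derives O(log n) path secrets; each is encapsulated (via
    HPKE) to the sibling subtree, so the whole group converges on a fresh
    root secret without O(n) pairwise messages.
    """
    secrets = [kdf(leaf_secret, b"path")]
    for _ in range(depth - 1):
        secrets.append(kdf(secrets[-1], b"path"))
    return secrets

# A 16-member group has a tree of depth 4, so a key update costs
# roughly 4 encapsulations instead of 15 pairwise messages.
update = path_secrets(b"fresh leaf entropy", 4)
root_secret = update[-1]
```

That logarithmic cost is the whole selling point of the tree: group key agreement that stays cheap as the group grows.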
MLS doesn’t boil the ocean on what you actually use those keys for. Why would it? That’s not MLS’s job to figure out.
MLS excludes a lot of other things from its scope, because it really wouldn’t make architectural sense for MLS to cover them.
You can deploy MLS in a closed-source, centralized, enshittified, corporate messaging app, without needing to change any of the cryptographic properties.
You can also deploy MLS in an open source, federated protocol with multiple independent clients. Once again, the cryptography used by MLS doesn’t need to change.
But these are two very different types of applications, with different concepts of what a user’s “identity” even is, vastly different trust and threat models, and vastly different networking topologies.
Okay, what gives?
In order to integrate MLS in your software, you need to decide on two “Services” (in a loose architectural sense).
- The Delivery Service is responsible for yeeting ciphertext from one party to another. It isn’t responsible for much else.
It’s important that the Delivery Service provides availability for the entire network.
- The Authentication Service is responsible for vending identity keys and proofs of possession to the clients.
MLS doesn’t delve into the Authentication Service in great detail, because this is very specific to where MLS is being deployed.
Some deployments may want a private Certificate Authority with some attestations bound to a FIDO key or even government-issued ID. More open and anonymous systems may benefit from Key Transparency. The MLS RFC even calls out CONIKS.
Ultimately, what the Authentication Service actually means or looks like is an architectural decision that you have to make when you deploy MLS in your application. MLS doesn’t make the decision for you.
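In code terms, that architectural split might look something like the following sketch. The interface names and method signatures here are my own illustration of the separation of concerns, not anything defined by RFC 9420:

```python
from abc import ABC, abstractmethod

class DeliveryService(ABC):
    """Moves opaque MLS ciphertext between clients; sees no plaintext.

    Its main job is availability for the entire network.
    """

    @abstractmethod
    def deliver(self, group_id: bytes, message: bytes) -> None: ...

class AuthenticationService(ABC):
    """Binds user identities to signature keys; HOW is up to the application.

    A deployment might back this with a private Certificate Authority,
    FIDO-bound attestations, or a key transparency log. The RFC
    deliberately leaves this choice to the implementor.
    """

    @abstractmethod
    def validate_credential(self, identity: str, credential: bytes) -> bool: ...
```

The point of modeling both as abstract interfaces is that the cryptography in the middle doesn’t change when you swap one implementation for another; only your trust model does.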
Why are you telling me all this?
This is important context to know up front before we dive into the claims made by Poberezkin’s blog post.

Claims Made By Evgeny Poberezkin
Poberezkin’s blog post starts off strong, but then makes a sharp turn shortly after introducing the subject matter.
Does MLS Work for End-to-End Encryption?
TL;DR: Yes, if you accept “Trust Me Bro” security model.
How flippant. But let’s read on.
They describe the Double Ratchet used by Signal and then go on to say:
Let’s apply the same reasoning to MLS. Does MLS protect message content from the untrusted provider? One of the components of MLS is the “authentication service” — a component that is supplied by a communication provider. The MLS specification states in part 16.10:
A compromised Authentication Service (AS) can assert a binding for a signature key and identity pair of its choice, thus allowing impersonation of a given user. This ability is sufficient to allow the AS to join new groups as if it were that user. Depending on the application architecture, it may also be sufficient to allow the compromised AS to join the group as an existing user, for instance, as if it were a new device associated with the same user.
The MLS specification explicitly requires trust in the communication provider as a condition for MLS securing the message content from the untrusted provider, which is self-contradictory — we are required to trust an untrusted party, which contradicts the purpose of E2E encryption.
This is a quite naked misunderstanding of what MLS is actually saying in that section of the RFC.
MLS, as a protocol, outsources authentication to some black box it calls an Authentication Service. It’s a separate part of the service architecture.
Authentication is a separate trust domain from what MLS specifies, which is focused on confidentiality and integrity.
MLS as a protocol isn’t requiring you to trust a third party, it’s warning implementors not to be careless about how they implement a common business requirement.
The author linked to and quoted part 16.10 yet conveniently neglected to mention that all of part 16 is the Security Considerations section of the RFC. Security Considerations, by the by, are meant to inform readers of relevant issues.
The author continues:
Does the MLS specification offer a practical mechanism for the end users to mitigate the effect of a compromised Authentication Service? No, it does not; it only refers to the approaches based on key transparency, but they are not practical in real-world applications, and, to the best of my knowledge, are not implemented in any of the communication platforms that deployed MLS.
None of the examples I’m about to list use MLS (as of this writing), but it’s important to note that there are already real-world key transparency deployments by WhatsApp and Apple iMessage. Signal is also working on adding key transparency to its design.
With sufficient early adopters (which together already add up to at least hundreds of millions of users), it’s a proven technology that scales and offers multiple reference implementations for future programmers.
It’s very likely that an MLS deployment that relies on key transparency for its authentication service will materialize in the near future. (Especially if I have any say in the matter.)
Poberezkin continues:
Who Benefits From MLS?
This is the most controversial statement, and the reason to make it is not to upset people, but to be proven wrong.
To me, it increasingly looks like the only people who benefit from MLS are its designers and implementers, and not the end users, particularly if MLS is being compared to other, simpler and more secure alternatives.
MLS might have provided some security via its modular design and separation of components — e.g., if providers explained to end users the difference between Delivery Service component, which may be untrusted, and Authentication Service, which must be trusted, and offered a separate choice of providers for these components.
But without the separation of services on the business level, this separation in design is purely academic rather than practical. Promoting MLS as an effective solution for E2E encryption, without disclosing the requirement for provider trust, is at best misleading, at worst — fraudulent.
And therein lies the rub.
The author is not blowing the whistle on a weakness, oversight, or backdoor in MLS. Nor is promoting MLS “fraudulent”.
This is a science communication problem.
The MLS authors wanted to be cautious when they wrote their Security Considerations and spelled out the importance of getting the Authentication Service right.
Why didn’t they just specify an Authentication Service that does things the way they want to be done?
There are a lot of messaging apps in the history of computing. Many businesses end up writing their own. Many homemade messaging apps end up becoming businesses. It’s a hot mess.
The proliferation of different messaging apps is partly driven by dissatisfaction with existing software. If it doesn’t scratch your itch, why not write your own?
MLS is meant to be easy for messaging apps to fit into their architecture. All it’s concerned with is vending keys for the AEAD mode (which the MLS RFC also doesn’t specify in great detail).
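To make that division of labor concrete: once MLS hands your application a shared epoch secret, deriving the actual AEAD material is a small key-derivation step. This is a minimal sketch using a single-block HKDF-Expand (per RFC 5869) built from the standard library; the labels and structure are simplified illustrations, not MLS’s real key schedule:

```python
import hmac
import hashlib

def hkdf_expand(secret: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal single-block HKDF-Expand (RFC 5869), valid for length <= 32."""
    assert length <= hashlib.sha256().digest_size
    # T(1) = HMAC-SHA256(secret, info || 0x01)
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

# Pretend MLS's key schedule handed us this epoch's shared group secret...
epoch_secret = b"\x00" * 32

# ...the application then derives whatever AEAD material it needs.
# (These labels are illustrative; MLS defines its own labeled derivations.)
aead_key = hkdf_expand(epoch_secret, b"app key", 32)
aead_nonce = hkdf_expand(epoch_secret, b"app nonce", 12)
```

Distinct labels yield independent-looking outputs from the same secret, which is exactly the property a messaging app needs when feeding keys and nonces into its chosen AEAD.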
If they overindexed on specifying the Authentication Service in sufficient detail in RFC 9420, they likely never would have published anything.
Poberezkin continues the previous quote with a footnote:
IETF’s ongoing work to find solutions for mitigating a compromised Authentication Service confirms the acceptance of this problem: “The AS is invested with a large amount of trust and the compromise of the AS could allow an adversary to, among other things, impersonate group members.” But this work has not found any robust solutions yet, its recommendations are not mandatory for MLS implementations, and some of them involve substantial complications of already complex specification, without fully removing AS trust requirement.
MLS: The Naked King of End-to-End Encryption, footnote 5
Here, Poberezkin pointed to an architecture document that explains what they overlooked from the previous RFC.
What Does Any Of This Actually Mean?
Honestly, the most salient criticism of MLS is that its name is misleading:
Rather than security for an entire protocol layer, as “Messaging Layer Security” implies, they shipped… *drumroll*… a group key agreement protocol.
This ain’t nothing, and it is a well-designed protocol for its intended use case, but it’s very much not what the name leads you to believe.
Charitably, I believe that Poberezkin’s confusion stems from the poor name choice for MLS.
You wouldn’t expect a group key agreement protocol to magically solve identity management and decentralized trust for you!
After all, most people punt on key management or phone it in with certificate authorities (in one bucket of threat models) or safety numbers / key fingerprints (in the other bucket).
The other salient criticism is that, in their effort to avoid repeating JWT’s mistake of misleading implementors into deploying security footguns, the authors of the MLS RFC wrote a Security Considerations section that accidentally undermined some people’s confidence in MLS’s design.
That’s a shame, but hopefully this is a lesson the cryptography community can learn from.
Poberezkin may be wrong about MLS, but it’s the community’s job to learn from these misunderstandings.
Closing Thoughts
If someone links to the Naked King blog post in an attempt to score a slam dunk in a message board debate, I hope I’ve made it clear that their blog post is a misunderstanding of the MLS RFC.
MLS isn’t royalty, and it’s certainly not a naked king.