Checklists Are The Thief Of Joy

I have never seen security and privacy checklists used for any other purpose but deception.

After pondering this observation, I’m left seriously doubting if comparison checklists have any valid use case except to manipulate the unsuspecting.

But before we get into that, I’d like to share why we’re talking about this today.

Recently, another person beat me to the punch of implementing MLS (RFC 9420) in TypeScript. When I shared a link to their release announcement, one Fediverse user replied, “How does this compare to Signal’s protocol?”

Great! A fair question from a curious mind. Love to see it.

But when I started drafting a response, I realized that any attempt to write any sort of structured comparison would be misleading. They’re different protocols with different security goals, and there’s no way to encapsulate this nuance in a grid of green, yellow, and red squares to indicate trustworthiness.

But that doesn’t stop bullshit like this (alternate archive) from existing.

This is a wonderful case study in how to deceive someone with facts.

When you first load the page, the first thing you’re shown is some “summary” fields, including a general “Is this app recommended?” field with “Yes”/“No”. This short-circuits the decision-making for people too lazy or clueless to read on.

And then, immediately after that, the first substantive detail you’re given is jurisdiction information.

An excerpt from the website linked above, where they emphasize jurisdiction.

This is a website that bills itself as a comparison for “secure messaging apps”.

Users shouldn’t have to care about jurisdiction if the servers cannot ever read their messages in the first place. Any app that fails to meet this requirement should be disqualified wholesale.

The most important questions that actually matter to security:

  1. Is end-to-end encryption turned on by default?
  2. Can you (accidentally, maliciously) turn it off?

If the answers aren’t “yes” and “no”, respectively, your app belongs in the garbage. Do not pass Go.

But this checklist wasn’t written by a cryptography expert. If it were, there would be more information about the protocols in use, rather than a collection of under-the-hood primitives with arbitrary coloring.

Why does “X25519 / XSalsa20 256 / Poly1305” get a green box but “Curve25519 256 / XSalsa20 256 / Poly1305-AES 128” get a yellow box? Actually, why does it refer to the same algorithm as X25519 and Curve25519 in different cells? Hell if I know. I’d wager the author doesn’t, either.
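
For the curious: both of those cells appear to describe NaCl’s crypto_box construction, and “Curve25519” and “X25519” are two names for the same key exchange. Here’s a minimal TypeScript sketch using the tweetnacl package (my choice of library for illustration, not a claim about what either app actually ships) showing those primitives working together as one construction:

```typescript
// Both cells appear to describe NaCl's crypto_box: Curve25519 (a.k.a. X25519)
// key agreement plus XSalsa20-Poly1305 authenticated encryption.
// tweetnacl is used here purely for illustration.
import nacl from "tweetnacl";

const alice = nacl.box.keyPair(); // Curve25519/X25519 key pair
const bob = nacl.box.keyPair();

const nonce = nacl.randomBytes(nacl.box.nonceLength); // 24-byte XSalsa20 nonce
const message = new TextEncoder().encode("same primitives, different labels");

// X25519 key agreement, then XSalsa20 encryption with a Poly1305 tag
const ciphertext = nacl.box(message, nonce, bob.publicKey, alice.secretKey);

// Bob decrypts with the mirror-image keys; returns null if the tag is bad
const plaintext = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
```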

Now, I don’t want to belabor the point and pick on this checklist in particular. It’s not that this specific checklist is the problem. It’s that all checklists are.

The entire idea of using checklists to compare apps like this is fundamentally flawed. It’s like trying to mentally picture a 1729-dimensional object on a 2-dimensional screen.

Not only will you inevitably be wrong, but your audience will think you’re somehow being objective while you do it.

How Do You Compare Signal to MLS?

Since I brought it up above, I might as well talk about this here.

The Signal Protocol was designed to provide state-of-the-art encryption for text messages between mobile phone users. It has since slowly expanded its scope to include desktop users and people who don’t want to give their phone numbers to strangers. Signal does a lot of cool stuff, and I’ve spent a weekend reviewing how its cryptography is implemented. Signal didn’t give a hoot about interop, and probably won’t for the foreseeable future, either.

The MLS protocol is an IETF RFC intended to standardize a reasonable protocol for encrypted messaging apps. It was meant to eventually be interoperable across apps/devices.

Signal uses a deniable handshake protocol. MLS does not.

Signal tries to hide the social graph from the delivery service. MLS does not.

Signal’s approach to group messaging is an abstraction over 1:1 messaging, with zero-knowledge proofs to hide group memberships from the Signal server. Because this is an abstraction, it’s trivial to send a different message to each member of a group, and consistent histories are not guaranteed.

MLS proposes an efficient scheme for continuously agreeing on a group secret key. This kind of setup makes “invisible salamanders”-style attacks on a group conversation untenable.
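
To make that structural difference concrete, here’s a deliberately simplified sketch. This is not Signal’s or MLS’s actual API; the function names and callbacks are stand-ins I made up. The point is only why pairwise fan-out permits inconsistent group views while a shared group secret does not:

```typescript
// Deliberately simplified sketch; this is not Signal's or MLS's actual API.
// "encryptTo" stands in for a pairwise session, "encryptUnderGroupKey" for
// encryption under the current MLS group secret.

type Member = string;

// Group messaging as an abstraction over 1:1 sessions (Signal-style):
// the sender encrypts once per member, so nothing in the construction
// stops a malicious sender from handing each member a different plaintext.
function pairwiseFanOut(
  members: Member[],
  plaintextFor: (member: Member) => string,
  encryptTo: (member: Member, plaintext: string) => Uint8Array,
): Map<Member, Uint8Array> {
  const envelopes = new Map<Member, Uint8Array>();
  for (const member of members) {
    // May differ per recipient, so consistent histories are not guaranteed.
    envelopes.set(member, encryptTo(member, plaintextFor(member)));
  }
  return envelopes;
}

// Shared group secret (MLS-style): one ciphertext for everyone, so members
// either all decrypt the same message or nothing at all.
function groupBroadcast(
  plaintext: string,
  encryptUnderGroupKey: (plaintext: string) => Uint8Array,
): Uint8Array {
  return encryptUnderGroupKey(plaintext); // identical bytes delivered to all members
}
```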

There are a lot of additional things that libsignal offers out-of-the-box that you won’t get with MLS. Soon, key transparency may be on the list of things Signal offers but MLS doesn’t.

Ultimately, both protocols are good. They’re certainly way better choices than OpenPGP, OMEMO, Olm, MTProto, etc.

When I began drafting ideas for end-to-end encryption for the Fediverse, my starting point for this idea was MLS, not the Signal Protocol. Your social graph is already visible to ActivityPub, so there’s little value in trying to hide it with deniable handshakes. Furthermore, efficient group key agreement makes conversations involving dozens or even hundreds of participants scale better.

(You may also be interested in knowing that the author of the ActivityPub E2EE draft specification also settled on the MLS protocol.)

Your mileage may vary. Talk to your cryptographer. If you do not have a cryptographer, hire one before you design your own protocol.

If you want me to give your design a once-over, see this page for more information.

How Do Experts Make Secure Messaging App Recommendations?

During my review of the cryptography used by Signal, I explained my personal approach to cryptography audits. We’re doing the same sort of thing here, but for messaging app recommendations.

First, you need to let go of “lists” and “tables” entirely.

You’re going to be working with graphs. A flow-chart (where sections can be added as-needed) might be a suitable deliverable, but only if your audience can follow one.

Above, I mentioned that the first two questions you ask are:

  1. Is end-to-end encryption turned on by default?
  2. Can you (accidentally, maliciously) turn it off?

If you stop there, you can sort of call it a list, but the immediate next question I ask is, “What is the use-case and threat model for the app?”

There is no yes/no wiring here (except to fail any app that doesn’t have a coherent threat model to begin with). It’s open-ended and always requires a deeper analysis.

If you want to see what a rudimentary threat model looks like, see the one I wrote for my public key directory project.

Depending on the intended use and threat model of the app in question, a lot of different follow-up questions will naturally arise. It wouldn’t make sense to ask about elliptic curve choice if an app is fully committed to non-hybrid ML-KEM, after all.
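
If it helps to see the shape of that process, here’s a toy TypeScript sketch of just the triage step. The interface, field names, and the triage function are all my own invention for illustration, not anyone’s real tooling. Note that everything past the hard gates is intentionally open-ended, because that’s where the actual analysis lives:

```typescript
// Toy sketch of the triage step only; the interface and field names are
// invented for illustration. A real review doesn't reduce to booleans.

interface MessagingApp {
  name: string;
  e2eeOnByDefault: boolean;
  e2eeCanBeDisabled: boolean;
  publishedThreatModel?: string; // URL to a coherent, written-down threat model
}

type Verdict =
  | { status: "rejected"; reason: string }
  | { status: "needs-analysis"; nextSteps: string[] };

function triage(app: MessagingApp): Verdict {
  // Hard gates: fail any of these and the rest of the analysis never happens.
  if (!app.e2eeOnByDefault) {
    return { status: "rejected", reason: "E2EE is not on by default" };
  }
  if (app.e2eeCanBeDisabled) {
    return { status: "rejected", reason: "E2EE can be turned off" };
  }
  if (!app.publishedThreatModel) {
    return { status: "rejected", reason: "no coherent threat model to evaluate against" };
  }
  // Past the gates, the questions depend on the app's stated use case and
  // threat model, and they are open-ended rather than yes/no.
  return {
    status: "needs-analysis",
    nextSteps: [
      "Evaluate the protocol against the stated threat model",
      "Look at group messaging, multi-device, and key management",
      "Ask follow-ups that fit the design (e.g. curve choice only if curves are used at all)",
    ],
  };
}
```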

Takeaways

If you see a dumb checklist trying to convince you to use a specific app or product, assume some marketing asshole is trying to manipulate you. Don’t trust it.

If you’re confronted with a checklist in the wild and want an alternative to share instead, Privacy Guides is a reasonable pick: they don’t attempt to create comparison tables for all of their recommendations within a given category of tool.


Header art: AJ.

The title is a reference to the quote, “Comparison is the thief of joy.”

Also, I’m specifically talking about comparison checklists, not every list of any shape or size that has a space for a checkbox in any or every industry. Please don’t @ me with your confusion if you didn’t pick up on this.
