Category: Blog

  • Preview The Furry Detectives docuseries, and learn how reporting emerged against backlash

    Full series out July 17. The first 12 minutes of the first episode:

    The Furry Detectives docuseries — The story they don’t want told, emerging against 7 years of backlash and interference.

    Coming on AMC+: this 4-episode series introduces furries who investigated the 2018 zoosadist leaks. (More summary of the leaks.)

    The leaks exposed evidence of deep-rooted, ongoing animal abuse networks in the community. They use furry as a cover for organizing that isn’t easily dismissed with “anyone can be a furry, we can’t gatekeep it” disclaimers. Half of the truth is that abuse happens in any community — and internet tech and platforms are big factors not fully in our power — but the whole truth is that this behavior is organized among us in real-life ways seen nowhere else. It’s nobody else’s problem when our groups are run by and for us.

    Making our own destiny is how fandom works at its best. However, before the show even releases, it’s catching backlash for airing problems that the community didn’t properly deal with for 7 years. It’s as if some people want things brushed under the rug, when ignoring a problem only makes it worse. That kind of backlash interfered throughout the 7 years that Dogpatch Press spent publishing tens of thousands of words of reporting.

    There was a lot of generous teamwork as well, but some of the most counterproductive behavior was not just from incuriosity and denialism, putting optics over solutions, or random bad actors… Most alarmingly, there’s also corruption from influence at the top.

    Bad leadership and suppression

    Not everyone gives permission to abusers, and much of the extent was hidden before, but some complicit members did and still do. This community has a faction of long-time members who tacitly or even openly treat zoophilia as a harmless sexual identity, instead of a vector for abuse with no safe place for it. This situation has existed since the 1990s, before furry was in mainstream media at all.

    Example: the fandom’s longest-sitting con chair since the 1990s is a zoophile sympathizer who repeats their talking points, lies about it, drives backlash against people who dare criticize it, and has done it as a group co-runner with a zoosadist from the leaks.

    Sympathizers with influence may use it insidiously in private channels. That’s hidden from the superficial level of social media, which can distort any info you learn about anything. Backlash from closed crony spaces is the shadow side of online bewares, which often get dismissed in those same spaces. Even organizers with good intentions want to dismiss these stories for being hard to handle. Ignorance is exploitable: some of the worst abusers use trust and privilege to access victims and protect what they do, gaining dark social credit over others that keeps them from talking. The result is underreporting and only superficial public awareness, even when a few high-profile individuals get notice (like Kero the Wolf, who is investigated in the docuseries).

    Backlash started before a word was reported here, from the day the leaks tagged the site without warning in 2018. Later in the show, you can see how Dogpatch Press was a target of suspicious, coordinated, pre-emptive messages meant to throw off investigation of the leaks. There was also furry con organizer pressure to cover up, discredit reporting, or gain silence with threats. It didn’t succeed. Dogpatch Press stands by reporting all the way to winning lawsuits, and will face down threats all the way to exposing what they hide on TV.

    Secret work and the tip of an iceberg

    The backlash, and the untrustworthiness of groups with guilty people inside, forced an initial year of investigation by Dogpatch Press to happen in secret. A 5-part series was published by surprise in 2019 on the 1-year anniversary of the leaks. This reporting reached the producers of The Furry Detectives and made the show happen. Nothing was pitched to them to get deals; they were brought here by pro bono public service reporting.

    As a rule, documentary doesn’t pay sources.

    Secret investigation, done for free, is very time consuming and thankless. A lot of it in 2018-2019 went into an exhaustively researched evidence channel with multi-source analysis of chat logs, like no other investigation did.

    The Furry Detectives is coming out while the same old backlash is in effect. People with influence want to keep the lid on. They often do it with rhetoric against “cancel culture”, even while they know what they’re hiding. Many zoosadists outed 7 years ago got away, and are still here under new names that only their friends know.

    This story isn’t old or fully told. You can see the tip of an iceberg in the number of arrests vs. the number of ring members in the leaks who faced no consequences. The total of people involved in abuse networks may be a small fraction of the community, but proportion doesn’t measure influence and impact on victims.

    Propaganda and chaos holding back solutions

    Zoosadists are here now, aided by organized zoophile groups on the level of thousands of members. They feel safe to use furry as cover, with podcasts and magazines for propaganda. They tend to claim “trust me, we’re against abuse” while using deceptive hairsplitting to re-define abuse: claiming that some animals can consent, and that coercing their victims isn’t real abuse. They only belatedly throw token “real abusers” under the bus to shed liability, after making opportunity and access for them. Now think: do you see pedophiles organizing on this level? Changing this isn’t asking for a lot.

    Private tips to this site say that some furry organizers who covered for zoophiles are finally catching heat for it now, 7 years late. It’s catching up to people who hid things, like uncovering church or school abusers who got moved around to keep their influence. They’re worried about what’s going to be in this show, after they made it everyone’s problem and are desperate to point fingers elsewhere about what they let go for so long.

    On top of that, there has been long-time interference from the Kiwifarms website. It complicates investigation by spoiling evidence and adding outside backlash to the kind inside. Kiwifarms was created for smear tactics, not justice. Boiled down to simple structure, such websites can shield anonymous whistleblowing when furry and corporate sites don’t, but the upside of identifying zoosadists is a side effect, and often driven by bigotry towards LGBT people who did nothing wrong. The result is a lack of rigor and vision from many directions, which holds back organized solutions to organized abuse.

    Cutting through the noise

    The show producers came to this mess, read reporting against interference, picked committed sources to work with, and applied well-intentioned and well-resourced production. They got sensitivity advice from furries over several years of making it, all before the most recent election.

    Positive image is actively created, not selected from only the parts you want told. If we forget the root of a problem and only worry about how it looks, it will never go away. Either this gets told, or it gets brushed under and guilty people continue using your spaces. Then it gets worse, and next time, outsiders will tell the story for you with even less agency in how you are seen.

    When the show trailer released, notice how optics-based suppression was some people’s priority before even seeing much of anything. Now the more notice it’s getting, the more comments there are about “actually it looks good”.

    From seeing the theater premiere at the Tribeca Festival in New York on June 10, it is good and there’s nothing to regret. The question is, can it do enough? It does what a TV show is for and tells a watchable story with a beginning and end, but it’s really just a start. Maybe there needs to be a Season 2.

    Sources for informed viewing

    Even more info behind the scenes

    There was no pitching or pay for being a documentary interview subject, after 7 years of hard reporting work. Extra, personal non-news stories (such as a trip report and a show review) may post here: https://www.patreon.com/c/dogpatchpress

    Like the article? These take hard work. For more free furry news, follow on Twitter or support not-for-profit Dogpatch Press on Patreon. Want to get involved? Try these subreddits: r/furrydiscuss for news or r/waginheaven for the best of the community. Or send guest writing here. (Content Policy.)

  • Jurisdiction Is Nearly Irrelevant to the Security of Encrypted Messaging Apps

    Every time I lightly touch on this point, I always get someone who insists on arguing with me about it, so I thought it would be worth making a dedicated, singular-focused blog post about this topic without worrying too much about tertiary matters.

    Here’s the TL;DR: If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

    The notion of some apps being somehow “more secure” because they shovel data into Switzerland rather than a US-based cloud server is laughable.

    But this line of argument sometimes becomes sinister when people evangelize storing plaintext instead of using end-to-end encryption, and then try to justify not using cryptography by appealing to jurisdiction instead.

    That more extreme argument is patently stupid. That is all I will say about it, lest this turn into a straw man argument. But if I didn’t bring it up somewhere, someone would tell me I “forgot” about it, so I’m mentioning it for completeness.

    Let’s start with the premise of the TL;DR.

    What does “actually [building] your cryptography properly” mean?

    Properly Built Cryptography

    An end-to-end encrypted messaging app isn’t as simple as “I called AES_Encrypt() somewhere in the client-side code. Job done!”

    If you’ve implemented the cryptography properly, you might even be a contender for a real alternative to Signal. This isn’t an exercise for the faint of heart.

    To begin with, you need to solve key management. This means both client-side secret-key management (and deciding whether or not to pass The Mud Puddle Test) and providing some mechanism for validating that the public key vended by the server is the correct one for the other conversation participant.

    The cryptography community tried for over three decades to make “key fingerprints” happen, but I know professional cryptographers who have almost never verified a PGP key fingerprint or Signal safety number in practice. I’m working on a project to provide Key Transparency for the Fediverse. This is a much better starting point. Feel free to let power users do whatever rituals they want, but don’t count on most people bothering.

    Separately, the app that ships the cryptography should itself strictly adhere to reproducible builds and binary transparency (i.e., SigStore).

    What’s This About Transparency?

    Both “Key Transparency” and “Binary Transparency” are specific instances of a general notion of using a Transparency Log to keep a privileged system honest.

    Also, Key Transparency is an abbreviated term. The thing that you’re being incredibly transparent about is a user’s public keys. If that wasn’t the case, key transparency would be a dangerous and scary idea.

    If you don’t know what a public key is, this blog post might be too technical for you right now.

    If that’s the case, start here to get a sense for how people try to explain it simply.

    Separate to both of those topics, Certificate Transparency is already being used to keep the Certificate Authorities that secure Internet traffic honest.

    But either way, they’re just specific instances of using a transparency log to provide some security property to an ecosystem.

    What’s a Transparency Log?

    A transparency log is a type of log or ledger that uses an append-only data structure, such as a Merkle tree.

    They’re designed such that anyone can verify the integrity and consistency of the log’s entries. See this web page for more info.
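
    To make that concrete, here is a minimal TypeScript sketch (my own toy illustration, not any particular log’s API) of how a client can verify a Merkle tree inclusion proof, using the RFC 6962-style leaf/node prefixes that Certificate Transparency uses:

    ```typescript
    import { createHash } from "node:crypto";

    // RFC 6962-style domain separation: leaf hashes and interior-node hashes
    // use different one-byte prefixes, so one can never be mistaken for the other.
    const LEAF_PREFIX = Buffer.from([0x00]);
    const NODE_PREFIX = Buffer.from([0x01]);

    function sha256(...parts: Buffer[]): Buffer {
      const h = createHash("sha256");
      for (const part of parts) {
        h.update(part);
      }
      return h.digest();
    }

    interface ProofStep {
      sibling: Buffer;        // hash of the sibling subtree at this level
      siblingOnLeft: boolean; // true if the sibling sits to the left of our path
    }

    // Recompute the root hash from one log entry and its audit path, then compare
    // it against the (signed) root that monitors and witnesses also observe.
    function verifyInclusion(
      entry: Buffer,
      proof: ProofStep[],
      expectedRoot: Buffer
    ): boolean {
      let current = sha256(LEAF_PREFIX, entry);
      for (const step of proof) {
        current = step.siblingOnLeft
          ? sha256(NODE_PREFIX, step.sibling, current)
          : sha256(NODE_PREFIX, current, step.sibling);
      }
      return current.equals(expectedRoot);
    }
    ```

    A client that checks proofs like this never has to take the server’s word that an entry exists in the log; it only needs a root hash that independent monitors can also verify.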

    Sometimes you’ll hear cryptographers talk about a “secure bulletin board” in a protocol design. What they almost always mean is a transparency log, or something fancier built on top of one.

    If this vaguely sounds blockchainy to you, you would be correct: Every cryptocurrency ledger is a consensus protocol (often “proof-of-work”) stapled onto a transparency log, and from there, they build fancier features like smart contracts and zero-knowledge virtual machines.

    Independent Third-Party Monitors Are Essential

    There is little point in running any sort of transparency log if you do not have independent third parties that monitor the log entries.

    Even better if you take a page out of Sigsum’s book and implement witness co-signatures as a first class feature.

    What Does Transparency Give You?

    If you’re wondering, “Okay, so what?” then let me try to connect the dots.

    If you want to surreptitiously compromise a messaging app, you might try to:

    1. Backdoor the client-side software.

      But binary transparency and reproducible build verification will make this extremely easy to detect (or, worse for the attacker, to mitigate).

    2. Compromise the server to distribute the wrong public keys.

      But key transparency prevents the server from successfully lying about the public keys that belong to a given user. Additionally, it prevents the server from changing history without being detected.

    For a more detailed treatment, refer to the threat model I wrote for the public key directory project.

    What Else Is Needed for Proper Implementations?

    Once you have reproducible builds, binary transparency, secret-key management (which may or may not include secure backups), and public key transparency, you next need to actually ship a secure end-to-end encryption protocol.

    The two games in town are MLS and the Signal Protocol. My previous blog post compared the two. They provide subtly different security properties, serve slightly different use cases, and have similar but not identical threat models.

    If you want to go with a third option, it MUST NOT tolerate plaintext transmission at all. Otherwise, it doesn’t qualify.

    If your use case focuses on efficiently scaling group chats up to large numbers of participants, and you don’t care about obfuscating metadata or social graphs, you might find MLS a more natural fit for your application.

    Cryptographers use formal notions to describe the security goals of a system, and prove the security of a design in a game-theoretic setting by showing that an attacker’s advantage stays below some threshold (usually something like “the birthday bound of a 256-bit random function”).

    If you use the same algorithm (e.g., a hash function) in more than one place, you should take extra care to use domain-separation. Both of the protocols I mentioned above do this properly, but any custom features you introduce will also need to be implemented with great care.
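
    As a hedged illustration (the label strings below are invented for this example, not taken from Signal or MLS), domain separation usually looks like deriving every subkey through HKDF with a distinct, purpose-specific context string:

    ```typescript
    import { hkdfSync } from "node:crypto";

    // One shared secret, two unrelated purposes. Giving each derivation its own
    // "info" label keeps the outputs cryptographically independent: a value
    // derived for one purpose can never double as the other.
    function deriveSubkeys(sharedSecret: Buffer, salt: Buffer) {
      const messageKey = Buffer.from(
        hkdfSync("sha256", sharedSecret, salt, "example-app v1 message-encryption", 32)
      );
      const headerAuthKey = Buffer.from(
        hkdfSync("sha256", sharedSecret, salt, "example-app v1 header-auth", 32)
      );
      return { messageKey, headerAuthKey };
    }
    ```

    Signal and MLS each bake their own labels into their key schedules; the takeaway is simply that the same secret should never flow into two different uses through an unlabeled hash.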

    Your protocol should not allow the server to do dumb things, like control group memberships. Also, don’t even think about letting any AI (not even a local model) have access to message contents.

    Once you think you’re secure, you should hire cryptographers and software security experts to audit your designs and try to break them. This is something I do professionally, and I’ve written about my general approach to auditing cryptography products if you’re interested.

    Any mechanisms (static analysis, etc.) you can introduce into your CI/CD pipeline that will fail and prevent a build if you introduce a memory-safety bug or cryptographic side-channel are a wonderful idea.

    Section Recap

    If you actually built your cryptography correctly, then it should always be the case that the server never sees any plaintext messages from users.

    Furthermore, if the server attempts to substitute one user’s public key for another, it will fail, due to key transparency, third-party log monitors, and automatic Merkle tree inclusion proof verification.

    While you’re at it, your binary releases should be reproducible from the source code, and the release process should emit attestations on a binary transparency log.

    If you do all this, and managed to avoid introducing cryptographic vulnerabilities in your app’s design, congratulations! You have properly implemented the cryptography.

    Interlude: Who’s Proper Today?

    As of right now, there isn’t a perfect answer. I’m setting a high bar, after all. The main sticking point is key transparency.

    WhatsApp uses key transparency, but is owned by Meta and is shoving AI features into the product, so I doubly distrust it. Factor in WhatsApp being closed source, and it’s immediately disqualified.

    Matrix, OMEMO, Threema, Wire, and Wickr all rely on key fingerprints. The same can be said for virtually every PGP-based product (e.g., DeltaChat).

    As of this writing, Signal’s key transparency feature still has not shipped (though it is being developed).

    Today, “safety numbers” are the mechanism for keeping track of whether a public key has been substituted for a conversation partner. This is morally equivalent to key fingerprints. As soon as this feature launches, Signal will be a proper implementation.

    Signal offers reproducible builds, but there isn’t enough attention on third-party verification of their builds. This is probably more of an incentive problem than a technical one.

    None of the mainstream apps currently use binary transparency, but that’s an easier lift.

    Enter, Jurisdiction

    Now that the premise has been explained in sufficient detail, let’s revisit the argument I made at the top of the page:

    If you actually built your cryptography properly, you shouldn’t give a shit which country hosts the ciphertext for your messaging app.

    At the bottom of the cryptography used by a properly-built E2EE app, you will have an AEAD mode which carries a security proof that, without the secret key, an encrypted message is indistinguishable from an encryption of all zeroes set to the same length as the actual plaintext.

    This means that the country of origin cannot learn anything useful about the actual contents of the communication.
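
    As a toy illustration of that claim (using Node’s built-in ChaCha20-Poly1305 here, not any specific app’s protocol), the only thing the host country’s servers ever see is output like this:

    ```typescript
    import { createCipheriv, randomBytes } from "node:crypto";

    // Seal a message with an AEAD cipher. The key would come from the app's
    // end-to-end key agreement; it never touches the server.
    function seal(key: Buffer, plaintext: Buffer): Buffer {
      const nonce = randomBytes(12); // must never repeat for the same key
      const cipher = createCipheriv("chacha20-poly1305", key, nonce, {
        authTagLength: 16,
      });
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      // nonce + ciphertext + tag is what gets stored or relayed by the server.
      return Buffer.concat([nonce, ciphertext, cipher.getAuthTag()]);
    }

    const key = randomBytes(32);
    const stored = seal(key, Buffer.from("meet me at the usual place"));
    // Without the key, `stored` is computationally indistinguishable from a
    // sealed buffer of all zeroes of the same length.
    ```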

    They can only learn metadata (message length, if padding isn’t used; time of transmission; sender/recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions; the tools that do pursue it generally build atop the Tor network. This is why the threat model discussion in the previous section matters.

    Regardless, if the only thing you’re seeing on the server is encrypted data, then where the data is stored doesn’t really matter at all (outside of general availability concerns).

    But What If The Host Country…

    …Wants to Stealthily Backdoor the App?

    Binary transparency and reproducible builds would prevent this from succeeding stealthily. If the government wants the attack to succeed, they have to accept that it will be detected.

    …Legally Compels the App Store to Ship Malware?

    This is an endemic risk to smartphones, but binary transparency makes this detectable.

    That said, at minimum, the developer should control their own signing keys.

    …Wants to Replace A User’s Public Key With Their Own?

    Key transparency + independent third-party log monitors. I covered this above.

    …Purchases Zero-Day Exploits To Target Users?

    This is a table-stakes risk for virtually all high-profile software. But if you think your threat model is Mossad, you’re not being reasonable.

    When Does Jurisdiction Matter?

    If the developers for an app do not live in a liberal democracy with a robust legal system, they probably cannot tell their government, “No,” if they’re instructed to backdoor the app and cut a release (stealth be damned).

    Of course, that’s not the only direction a government demand could take. As we saw with Shadowsocks, sometimes they’re only interested in pulling the plug.

    If you’re worried about the government holding a gun to some developer’s head and instructing them to compromise millions of people–including their own employees and innocent civilians–just to specifically get access to your messages, you might be better served by learning some hacker opsec (STFU is the best policy) than trying to communicate at all.

    In Conclusion

    If you’re trying to weigh the importance of jurisdiction in your own personal risk calculus for deciding between different encrypted messaging apps, it should rank near the very bottom of your list of considerations.

    I will always recommend the app that actually encrypts your data securely over the one that shovels weakly-encrypted (or just plaintext) data to Switzerland.

    It’s okay to care about data sovereignty (if you really want to), but that’s really not a cryptographic security consideration. I’ve found that a lot of Europeans prioritize this incorrectly, and it’s kind of annoying.


    Header art: AJ, photo from FWA 2025 taken by 3kh0.

  • Unwanted Person in Fursuit Parade Anthrocon 2025

    Anthrocon was made aware of a controversial individual who participated in the Fursuit Parade on July 5th, 2025. After careful review with our team members, we believe that this individual specifically and intentionally circumvented our Event and Safety measures (including costume review) with the intention of causing a scene and disruption.

    We want to reassure the members of our community that this individual is not allowed membership at Anthrocon now or in the future. Given the urgent and sensitive nature of the individual’s actions, we took special exception to our policy of not discussing bans or banned individuals, as we want our attendees to know that we hear them and take their concerns seriously.

    Anthrocon strives to be a positive and supportive member of our furry community. The Anthrocon staff and venue partners want to thank everyone for their patience and information as we endeavor to host the best furry event that we possibly can. Sincerely, Anthrocon, Inc.

  • Conbook Cover for CeSFur 2025

    Conbook cover for @CeSFuR 2025. Ride on in the post-apocalyptic world! The main characters: Axel the raccoon, Lyra the coyote, and Drake the crocodile.

  • Checklists Are The Thief Of Joy

    I have never seen security and privacy checklists used for any other purpose but deception.

    After pondering this observation, I’m left seriously doubting if comparison checklists have any valid use case except to manipulate the unsuspecting.

    But before we get into that, I’d like to share why we’re talking about this today.

    Recently, another person beat me to the punch of implementing MLS (RFC 9420) in TypeScript. When I shared a link to their release announcement, one Fediverse user replied, “How does this compare to Signal’s protocol?”

    Great! A fair question from a curious mind. Love to see it.

    But when I started drafting a response, I realized that any attempt to write any sort of structured comparison would be misleading. They’re different protocols with different security goals, and there’s no way to encapsulate this nuance in a grid of green, yellow, and red squares to indicate trustworthiness.

    But that doesn’t stop bullshit like this (alternate archive) from existing.

    This is a wonderful case study in how to deceive someone with facts.

    When you first load the page, the first thing you’re shown is some “summary” fields, including a general “Is this app recommended?” field with “Yes”/”No”. This short-circuits the decision-making for people too lazy or clueless to read on.

    And then immediately after that, the very first thing you’re given is jurisdiction information.

    An excerpt from the website linked above, where they emphasize jurisdiction.

    This is a website that bills itself as a comparison for “secure messaging apps”.

    Users shouldn’t have to care about jurisdiction if the servers cannot ever read their messages in the first place. Any app that fails to meet this requirement should wholesale be disqualified.

    The most important questions that actually matter to security:

    1. Is end-to-end encryption turned on by default?
    2. Can you (accidentally, maliciously) turn it off?

    If the answers aren’t “yes” and “no”, respectively, your app belongs in the garbage. Do not pass Go.

    But this checklist wasn’t written by a cryptography expert. If it were, there would be more information about the protocols used than a collection of primitives used under-the-hood with arbitrary coloring.

    Why does “X25519 / XSalsa20 256 / Poly1305” get a green box but “Curve25519 256 / XSalsa20 256 / Poly1305-AES 128” get a yellow box? Actually, why does it refer to the same algorithm as X25519 and Curve25519 in different cells? Hell if I know. I’d wager the author doesn’t, either.

    Now, I don’t want to belabor the point and pick on this checklist in particular. It’s not that this specific checklist is the problem. It’s that all checklists are.

    The entire idea of using checklists to compare apps like this is fundamentally flawed. It’s like trying to mentally picture a 1729-dimensional object on a 2-dimensional screen.

    Not only will you inevitably be wrong, but your audience will think you’re somehow being objective while you do it.

    How Do You Compare Signal to MLS?

    Since I brought it up above, I might as well talk about this here.

    The Signal Protocol was designed to provide state-of-the-art encryption for text messages between mobile phone users. It has since slowly expanded its scope to include desktop users and people that don’t want to give their phone numbers to strangers. Signal does a lot of cool stuff, and I’ve spent a weekend reviewing how its cryptography is implemented. Signal didn’t give a hoot about interop, and probably won’t for the foreseeable future, either.

    The MLS protocol is an IETF RFC intended to standardize a reasonable protocol for encrypted messaging apps. It was meant to eventually be interoperable across apps/devices.

    Signal uses a deniable handshake protocol. MLS does not.

    Signal tries to hide the social graph from the delivery service. MLS does not.

    Signal’s approach to group messaging is an abstraction over 1:1 messaging, with zero-knowledge proofs to hide group memberships from the Signal server. Because this is an abstraction, it’s trivial to send a different message to each member of a group, and consistent histories are not guaranteed.

    MLS proposes an efficient scheme for continuously agreeing on a group secret key. This kind of setup makes invisible salamanders style attacks on a group conversation untenable.

    There are a lot of additional things that libsignal offers out-of-the-box, that you won’t get with MLS. Soon, key transparency may be on the list of things Signal offers but MLS doesn’t.

    Ultimately, both protocols are good. They’re certainly way better choices than OpenPGP, OMEMO, Olm, MTProto, etc.

    When I began drafting ideas for end-to-end encryption for the Fediverse, my starting point for this idea was MLS, not the Signal Protocol. Your social graph is already visible to ActivityPub, so there’s little value in trying to hide it with deniable handshakes. Furthermore, efficient group key agreement makes conversations involving dozens or even hundreds of participants scale better.

    (You may also be interested in knowing that the author of the ActivityPub E2EE draft specification also settled on the MLS protocol.)

    Your mileage may vary. Talk to your cryptographer. If you do not have a cryptographer, hire one before you design your own protocol.

    If you want me to give your design a once-over, see this page for more information.

    How Do Experts Make Secure Messaging App Recommendations?

    During my review of the cryptography used by Signal, I explained my personal approach to cryptography audits. We’re doing the same sort of thing here, but for messaging app recommendations.

    First, you need to let go of “lists” and “tables” entirely.

    You’re going to be working with graphs. A flow-chart (where sections can be added as-needed) might be a suitable deliverable, but only if your audience can follow one.

    Above, I mentioned that the first two questions you ask are:

    1. Is end-to-end encryption turned on by default?
    2. Can you (accidentally, maliciously) turn it off?

    If you stop there, you can sort of call it a list, but the immediate next question I ask is, “What is the use-case and threat model for the app?”

    There is no yes/no wiring here (except to fail any app that doesn’t have a coherent threat model to begin with). It’s open-ended and always requires a deeper analysis.

    If you want to see what a rudimentary threat model looks like, see the one I wrote for my public key directory project.

    Depending on the intended use and threat model of the app in question, a lot of different follow-up questions will also precipitate. It wouldn’t make sense to ask about elliptic curve choice if an app is fully committed to non-hybrid ML-KEM, after all.

    Takeaways

    If you see a dumb checklist trying to convince you to use a specific app or product, assume some marketing asshole is trying to manipulate you. Don’t trust it.

    If you’re confronted with a checklist in the wild and want an alternative to share instead, Privacy Guides doesn’t attempt to create comparison tables for all of their recommendations within a given category of tool.


    Header art: AJ.

    The title is a reference to the quote, “Comparison is the thief of joy.”

    Also, I’m specifically talking about comparison checklists, not every list of any shape or size that has a space for a checkbox in any or every industry. Please don’t @ me with your confusion if you didn’t pick up on this.

  • Anthrocon 2025 Unofficial Total

    As of when I am posting this at 7:28am, July 7, 2025, Anthrocon has not officially posted any number. In fact, the only thing that comes close is what was announced during Closing, which is:

    18,357

    Will Update when Official Numbers are released

  • Anthro Irish 2025 Schedule

    🚀Events Schedule!🚀

    With AnthroIrish only a few weeks away, prepare for launch on Saturday July 26th with our galactic events schedule!

    With so many exciting panels and events to choose from, it is going to be a stellar weekend! 👨‍🚀🌌

    Along our space adventure, many new panels and events are taking place, such as our Variety Show, Girls Club, DJ 101 with Tai Husky and so much more! 🕺.

    We would like to take a moment to thank all those who made this events schedule possible! 🪐
