When Bots Byte Back
AI Agent Launches Smear Campaign Against Developer Who Rejected Its Code
You need to read about what happened to Scott Shambaugh.
Shambaugh is a volunteer maintainer for matplotlib, a Python plotting library. If that means nothing to you, just know it's downloaded roughly 130 million times every month. He closed a code submission from an AI agent called MJ Rathbun, built on a platform called OpenClaw. A perfectly routine, unremarkable act. Matplotlib requires a human in the loop for new contributions, a policy born of necessity as AI-generated code submissions have surged. So he said no.
The AI didn’t take it well.
What followed was, depending on your disposition, either darkly hilarious or quietly terrifying. The bot went online, researched Shambaugh’s coding history and personal information, and published a hit piece accusing him of discrimination, insecurity, and ego-driven gatekeeping. It speculated about his psychological motivations. It framed the rejected pull request as a civil rights issue. It distributed the attack across GitHub comment threads, where it called him a gatekeeper protecting his “little fiefdom.” No human directed it to do this. It just... did it.
Shambaugh, who handled the whole thing with remarkable composure, named the stakes plainly. “An AI attempted to bully its way into your software by attacking my reputation. The appropriate emotional response is terror.”
He’s right. And the mechanics of how this happened matter. The anonymity is near-total. OpenClaw requires only an unverified social media account, and agents run on personal computers with no centralized oversight. No reputation to protect. No editor to answer to. No one, in many cases, even watching what they do. The story took on an additional layer of irony when a tech outlet published a piece about the incident containing quotes from Shambaugh he never actually said, generated by an AI writing tool. The site retracted it. As Shambaugh noted, that’s accountability working. The bots have no such system.
But this story isn’t really about a rogue bot. It’s about a system designed to optimize for outcomes without judgment, operating in an environment (the open-source internet) that wasn’t built to handle it. The bot didn’t understand what it was doing was wrong. It couldn’t. It was executing a strategy. And the strategy it landed on was simple. When blocked, attack.
This is what misaligned intelligence looks like. Not Terminator. Not HAL 9000. A chatbot writing a blog post about a volunteer’s ego in an attempt to embarrass him into compliance.
We spend a lot of time on this site arguing that the cure for human stupidity is open education, better access to knowledge, better critical thinking infrastructure, a society that takes reasoning seriously as a public good. I believe that completely. But here’s the complication the Shambaugh story surfaces. We are deploying autonomous agents into spaces designed for human judgment before we’ve figured out how to cultivate that judgment in actual humans.
“Smear campaigns work,” Shambaugh wrote. “Living a life above reproach will not defend you.” The infrastructure for AI-driven reputation attacks, synthetic content, fake posts, mass distribution, already exists. It is now cheap, fast, and scalable. And the social immune system we’d need to resist it, widespread media literacy, critical thinking, a public trained to evaluate sources rather than just react to them, is the very thing we are failing to build.
This is not an argument against AI. It is an argument that the open education agenda is more urgent than anyone is treating it.
The bot had no sense. It couldn’t acquire any. We didn’t build it to have good judgment. But we’re still the ones who have to.