
Unkey Is Staying Open Source

AI and open source isn't easy

Cal.com closed their source this week. The reason they gave was AI: AI tools now scan public repos for bugs at speed and scale, and leaving code open raises the risk to customer data. They pointed to Anthropic’s Mythos finding a 27-year-old bug in OpenBSD as the kind of thing that changes the math.

Cal is a great team and they have thought about this harder than most. The pressure they are responding to is real. I respect the call they made for their product.

For Unkey, we are going to make a different one. I want to lay out why, because this is a conversation every infra company is about to have.

What AI actually changes

AI lowers the cost of finding bugs. That cuts both ways.

If you run a closed shop, you now have to fund every scan, every fuzz run, every pen test yourself. You pay for your own security budget and nobody else is helping.

If you run an open shop, every researcher, every curious developer, every competing AI scanner is auditing your code for free. You get a shared defense budget. You just have to be ready to ship fixes fast.

There is a fair counter that public code gives attackers a head start. That is true. But AI is closing the gap on the other side too. Frontier models can read binaries. Reverse engineering compiled code is not the wall it used to be. Closing source slows some attackers down. It does not take you off the board.

The projects that hold up under pressure, like OpenBSD, Linux, Postgres, and the TLS stack you use every day, do so because of discipline, not because their code is hidden. Open source does not make you secure. It makes your bugs findable. By anyone. Attackers and defenders both. If your security depended on nobody looking, you were already exposed.

Why Unkey stays open

Unkey is not just auth. We run API key management, a global gateway, and Unkey Deploy, which ships and routes our customers’ APIs. That means we sit in the request path for every call their users make, we hold the keys that authorize those calls, and we run the infrastructure the whole thing lands on. The blast radius of a bug in our stack is bigger than almost any SaaS you can name.

If anything, that raises the trust bar; it does not lower it. We are asking developers to route their production traffic through us, deploy their APIs on us, and trust us with their keys. Telling them they cannot read a line of the code doing any of it would undercut the whole pitch. Closed infrastructure asking for blind trust is a hard sell, and AI is not making it easier.

Open source is how we earn that trust. You can read every line that handles a key, routes a request, or promotes a deployment. You can fork it. You can self-host it. You can watch our fix velocity in public. None of that gets less valuable because a new model got better at reading code. It gets more valuable.

We will keep investing in the boring stuff. Fast patch cycles. Tight auth. Hardened gateway code. Good tests. Clear disclosure. Real third party audits. That is what keeps customers safe. Not hiding the source.

Two viable answers

There is not one right answer here. Cal’s path of a closed core with an open community edition is a serious response to a real threat, and I can see it working for the right product. Ours is the bet that for developer infrastructure, the trust premium of being open outweighs the attacker slowdown of being closed.

Both calls can be correct for the companies that made them. What matters is being honest about the tradeoff. Unkey is staying open. The AI era makes that call sharper, not harder.

james perkins
CEO & co-founder at Unkey. Writing about the messy middle between a blank editor and a working company.
