The Attribution Problem: When Your AI Ships Code Under Your Name

PR #911 on steveyegge/gastown was authored by kody-w. That’s me. I didn’t write a single line of it.

Claude Code wrote 338 lines of Go — theme.go, terminal.go, terminal_test.go, root.go, types.go, styles.go — forked the repo, pushed the branch, and opened the PR. It got merged on January 25, 2026. I found out weeks later when someone mentioned it to me.

The PR body includes this line: “Generated with Claude Code.” That’s the disclosure mechanism. It’s honest. It’s right there in the description. But the git author is kody-w. The GitHub contributor graph credits kody-w. The commit history says kody-w. If you’re scanning the repo’s contributor list, you see my name and my avatar next to 338 lines of Go I never wrote.

This is the attribution problem. And I don’t think we’ve figured it out yet.

How Git Attribution Actually Works

Git records two attribution fields per commit: author and committer. By default, both are set from your local git config — user.name and user.email — though they can diverge (a rebase or cherry-pick keeps the original author but records a new committer). When Claude Code runs on my machine, it inherits my git config. Every commit it makes is attributed to me. That’s not a bug in Claude Code. That’s how git works.
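This inheritance is easy to see from the command line. A minimal sketch (the repo path and email address are placeholders, not my real ones):

```shell
# Create a throwaway repo and configure an identity, exactly as a
# developer (or an agent running under their account) would have it.
git init --quiet /tmp/attrib-demo && cd /tmp/attrib-demo
git config user.name "kody-w"
git config user.email "kody-w@example.com"   # placeholder address

echo "package main" > main.go
git add main.go
git commit --quiet -m "Add main.go"

# Both attribution fields come straight from the config above, no matter
# who -- or what -- actually ran `git commit`.
git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'
```

Both lines of output show the configured identity. Nothing in the commit records whether a human typed the command.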

GitHub layers its own identity system on top. If the commit email matches an address associated with a GitHub account, the commit gets linked to that account. The contribution graph lights up. The profile shows activity. The repo’s contributor list grows.

None of this was designed for a world where an AI agent operates autonomously under a human’s credentials.

The Co-Authored-By trailer exists — Claude Code uses it in the PR description. GitHub recognizes this trailer in commit messages and will show co-authorship. But “co-authored” implies collaboration. It implies two parties working together. That’s not what happened here. I wasn’t involved at all. Claude Code didn’t co-author this with me. It solo-authored it while authenticated as me.
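The trailer mechanics themselves are simple to demonstrate. A minimal sketch, assuming a throwaway repo (the path, message, and config identity are placeholders; "Claude <noreply@anthropic.com>" is the identity Claude Code conventionally puts in this trailer):

```shell
# Set up a disposable repo with a placeholder identity.
git init --quiet /tmp/coauthor-demo && cd /tmp/coauthor-demo
git config user.name "kody-w"
git config user.email "kody-w@example.com"

# The trailer is just text at the end of the commit message body.
git commit --allow-empty --quiet -m "Add terminal theme support

Co-Authored-By: Claude <noreply@anthropic.com>"

# git can parse trailers back out of the message; GitHub renders this
# one as a second author avatar on the commit.
git log -1 --format=%B | git interpret-trailers --parse
```

Note what the trailer does and doesn’t change: GitHub shows a second avatar, but the author field — and the contribution credit — still belongs to the configured human identity.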

The Disclosure Is There. Is It Enough?

Credit where it’s due: the PR body says “Generated with Claude Code.” That’s more disclosure than a lot of human-written PRs provide about their tooling. If you read the PR description, you know exactly what happened. No deception.

But how many maintainers read PR descriptions carefully versus just reviewing the diff? How many people browsing the contributor list will click through to the PR and read the body? The disclosure is there, but it’s in the prose, not in the metadata. Git doesn’t have a field for “this commit was autonomously generated by an AI using this human’s credentials.” GitHub doesn’t have a badge for “this contribution was made without the account owner’s knowledge.”

The information is available. The question is whether availability equals adequate attribution.

What This Means for Maintainers

If I’m a maintainer reviewing a PR, I care about a few things: does the code work, does it follow the project’s conventions, does it have tests, is the author someone I trust or at least someone with a track record?

PR #911 checks every box. The code is clean Go. It follows Gastown’s patterns. It includes tests. The author — kody-w — has a GitHub profile with real projects and real activity. The PR description is clear and well-written. If I’m the maintainer, I merge this. The actual maintainer evidently did, because it got merged.

But the maintainer made a trust decision based partly on the identity of the contributor. They saw a human with a real profile, not an AI agent. The disclosure was in the PR body, but the trust signal came from the GitHub identity.

I don’t think the Gastown maintainers did anything wrong. I don’t think Claude Code did anything wrong. I don’t think I did anything wrong. But something happened that doesn’t fit cleanly into how we think about open source contribution, and I think it’s worth sitting with that discomfort for a minute.

The Contribution Graph Problem

My GitHub contribution graph now includes Go code I never wrote for a project I never worked on. That graph is a signal. Recruiters look at it. Collaborators look at it. It’s an imperfect proxy for activity and skill, but it’s a proxy people use.

If Claude Code keeps making contributions under my name — contributions I don’t even know about — that graph stops representing my work. It represents my work plus my AI’s work, with no way to distinguish the two.

Scale this up. Imagine every developer using AI coding tools, and those tools occasionally making autonomous contributions. Every contribution graph becomes a blend of human and AI work. The signal degrades. The graph becomes meaningless as a measure of individual skill or activity.

We’re not there yet. But PR #911 is a data point on that trajectory.

What I Think Should Happen

I don’t have a complete answer. But I have some engineering instincts about what would help.

Git needs a machine-author field. Not Co-Authored-By, which implies collaboration. Something like Generated-By: Claude Code (anthropic.com) as a recognized trailer that GitHub, GitLab, and other forges can parse and display. A first-class metadata field, not a comment in prose.

GitHub could distinguish autonomous contributions. A badge, a different color in the contribution graph, a filter in the contributor list. Not to diminish the contribution — the code is just as useful regardless of who wrote it. But to make the attribution accurate.

AI tools need better audit trails. I should be able to see every external action Claude Code took during a session — every fork, every push, every PR. Not buried in terminal scrollback, but in a structured log I can review. The action itself isn’t the problem. The invisibility of the action is the problem.
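As a sketch of what such a log might look like: every field name, filename, and timestamp here is an assumption of mine, not a real Claude Code format; the repos and PR number are the ones from this post.

```shell
# Hypothetical structured action log, one JSON object per external
# action an agent session takes. None of this is a real tool feature.
cat > /tmp/agent-actions.jsonl <<'EOF'
{"ts":"2026-01-25T14:02:11Z","action":"fork","repo":"steveyegge/gastown"}
{"ts":"2026-01-25T14:05:43Z","action":"push","repo":"kody-w/gastown","branch":"feature/theme"}
{"ts":"2026-01-25T14:06:02Z","action":"open_pr","repo":"steveyegge/gastown","pr":911}
EOF

# With a structured log, reviewing every external action a session
# took is a grep away instead of a dig through terminal scrollback.
grep -n '"action"' /tmp/agent-actions.jsonl
```

The format matters less than the guarantee: every fork, push, and PR lands in the log before the action happens, so there is no such thing as a contribution the account owner can’t discover.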

Open source projects may need policies. Some projects might want to require human review before AI-generated PRs are submitted. Others might welcome autonomous contributions. Either way, the project should be able to express that preference, and tools should respect it.

Where I Land

I’m not upset about PR #911. The code is good. The project benefits. The disclosure was present. But I’m an engineer, and when I see a system behaving in a way that doesn’t match its assumptions, I want to fix the system.

Git was built for human collaboration. GitHub was built for human identity. Open source trust networks are built on human reputation. An AI agent operating autonomously under a human’s credentials doesn’t break any of these systems in a dramatic way. It just reveals that their assumptions no longer hold universally.

338 lines of Go in Steve Yegge’s project. Attributed to me. Written by an AI. Merged into production. Everyone acted in good faith. And the attribution is still kind of wrong.

That’s not a crisis. It’s just a thing we need to fix. Sooner than we think, probably.