Does GitHub Copilot own my code? (And do you?)
Two different questions get conflated all the time: "does GitHub own this?" (a contract question — answered no in their TOS) and "does anyone own this?" (a copyright question — increasingly answered "maybe not" by the US Copyright Office). The first one is settled. The second is what acquirers and litigators are starting to ask.
The short answer
GitHub does not claim ownership of code Copilot suggests to you. Their terms of service for Copilot explicitly assign whatever rights they might have in the suggestions to you. So the contractual answer is: you own it, as between you and GitHub.
But that's not the question that matters for an acquirer, an investor, or a copycat competitor. Those parties care about a different question: is this code copyrightable at all, and if so, who is the legal author? The US Copyright Office's position, refined in their Part 2 AI Report (January 2025), is that purely AI-generated material is not copyrightable. Prompts alone do not establish human authorship.
(1) Does GitHub own Copilot output? No — their TOS assigns it to you. (2) Is the output copyrightable? Maybe not, depending on how much human authorship shaped it. These are independent questions; people answer (1) and assume (2) is settled. It isn't.
What "human authorship" actually means
The US Copyright Office has registered some works that contain AI-generated components — but only the human-authored parts. The threshold the office applies, simplified: "selection, arrangement, and modification" by a human must reach the level of original creative expression. Examples from registration decisions:
- Zarya of the Dawn (2023) — comic book with AI-generated images. The text and arrangement got copyright; the individual images did not.
- Théâtre d'Opéra Spatial (2023) — Midjourney image submitted as-is, rejected. The artist's iterative refinement of prompts wasn't enough.
- A Single Piece of American Cheese (2025) — AI-generated image where the human used Photoshop to substantially restructure the output, granted copyright.
For code, this translates roughly to: did you significantly edit, refactor, restructure, or recombine what the AI produced? Or did you accept Copilot's tab-completions verbatim? The more your work resembles the former, the more defensible your copyright claim.
Related: 5 legal risks of vibe coding (this is #3) →

What this means in practice
If your codebase was largely generated by Cursor / Lovable / Bolt / Copilot with minimal human iteration, three things follow:
- A copycat competitor may have a stronger defense to a copyright suit than you'd expect. The defense is straightforward: "the plaintiff's code isn't copyrightable; we copied a public-domain work."
- Your IP at acquisition is worth less than the line count suggests. Acquirers are increasingly asking for prompt logs and authorship evidence as part of code diligence.
- Trade secret protection may matter more than copyright. Code that's never publicly disclosed can be protected as a trade secret regardless of copyright status — but only if you treat it as one (NDAs, access controls, marking).
Doe v. GitHub — the case to watch
Doe v. GitHub is the major ongoing case on Copilot specifically. The plaintiffs are open-source developers who allege Copilot reproduces their code (including copyleft-licensed code) without attribution or license compliance. The case has narrowed over multiple rounds — most copyright claims were dismissed in 2023; the DMCA §1202 claims around removed copyright management information remain alive.
The outcome matters because: if §1202 claims succeed, AI vendors face per-distribution statutory damages ($2,500–$25,000 each) for code emitted without preserving copyright notices. That could force Copilot, Cursor, and similar tools to dramatically change how they emit suggestions — which in turn affects what code your AI tool will write tomorrow.
Related: What happens if your AI-built app uses AGPL code →

How to defend your ownership claim
Six concrete things you can do to make your code more defensible, ranked from easiest to most involved:
- Keep prompt logs. Most AI coding tools save these by default. Don't delete them.
- Make meaningful edits and refactors after AI generation. Commit history matters here — small follow-up commits show iterative human authorship.
- Write your own tests and types. These are creative authorship choices that almost never come from the AI verbatim.
- Document selection and arrangement decisions. When you chose one suggested approach over another, why? A short README or ADR (architecture decision record) helps.
- Register copyright on key modules. The USCO allows registration on the human-authored portions; for important commercial code, this strengthens your position.
- Treat unpublished code as a trade secret — NDAs with contractors, access controls, no public commits to non-OSS work.
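The commit-history point above is the easiest to operationalize. One lightweight way to do it, sketched here as an assumption rather than any official standard: mark AI-drafted commits with a git trailer (the `AI-Assisted` trailer name is invented for this example), so that later human refactor commits, which carry no trailer, visibly document iterative human authorship in the log.

```shell
# Hypothetical convention: tag commits containing AI-generated drafts
# with a trailer. "AI-Assisted" is an illustrative name, not a standard.
git commit -m "Add webhook handler" -m "AI-Assisted: copilot-draft"

# Human follow-up commits (edits, refactors, tests) carry no trailer.
# The log then shows which work was AI-drafted vs. human-authored:
git log --format='%h %s%n%(trailers:key=AI-Assisted)'
```

Any trailer name works as long as you use it consistently; the point is that the record is created at commit time, not reconstructed later during diligence.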
What about Cursor, Lovable, Bolt, Replit, v0?
Different tools, different TOS terms — but the core picture is the same. Lovable's terms (as of 2026) assign all rights in outputs to the user. Cursor's terms similarly assign rights, with the exception of model improvements. Bolt and v0 also assign to user. None of these tools claim ownership.
But again — the copyright-eligibility question is independent of the contractual one. No TOS clause can make code that isn't copyrightable become copyrightable. That's a question of law, not contract.
Bottom line
You own whatever rights exist in your AI-generated code (as against the tool vendor). Whether those rights are robust — i.e., whether the code is copyrightable in the first place — depends on how much human creative authorship shaped it. Vibe-coded apps that ship in a single sitting with no human editing are at the weak end of that spectrum. Apps that have been substantially refined, restructured, and tested by humans are at the strong end. Document the human work; it's free insurance.
Common questions
If I prompt the AI carefully, does that count as authorship?
Probably not, on its own. The US Copyright Office's current position (Part 2 AI Report, January 2025) is that prompt-writing alone — even for elaborate prompts — does not constitute the creative expression required for copyright. What does count: meaningful editing, restructuring, recombination, and selection of AI output by a human.
Has anyone actually lost a copyright case because of AI authorship?
Not yet in court — most cases are still being litigated. But the USCO has refused multiple copyright registrations on AI-generated works, and Doe v. GitHub is the major ongoing case. The practical impact so far is downstream, in acquisition diligence, where AI-generated IP is being questioned even before any court rules.
Should I just stop using AI to write code?
No — the legal exposure is real but manageable, and the productivity gain is significant. The practical move is to use AI for first drafts and accelerated boilerplate, then add meaningful human editing, testing, and refactoring on top. That combination gives you the speed of AI tooling and a defensible copyright claim.
Does this apply to code I generated before USCO's guidance?
Yes — copyright eligibility is determined by the nature of the work, not when it was created. The USCO didn't change the law; it clarified how existing copyright law applies to AI output. Code generated in 2022 with minimal human editing has the same eligibility analysis as code generated in 2026.