Introduction
If you ask people how open-source AI is "governed," a lot of them will tell you it isn't. "No one is in charge." "Code is law." "Anyone can fork, that's the governance."
That sounds romantic. It's also wrong. Open-weight labs and open-source AI projects absolutely have governance. It just lives in places people don't label as such:
- Who owns the GitHub org
- Who controls the domain and Discord
- Who pays for the infra
- Who gets merge rights
When things are calm, this remains invisible. When money, safety, or reputation are on the line, it snaps into focus. People discover very quickly that "anyone can fork" is not a substitute for real decision-making. If you want to understand where open AI ecosystems are going, you cannot just look at licenses and benchmarks. You have to look at how these communities govern themselves under pressure: how they handle forks, schisms, and alliances. The details differ per project, but the patterns repeat.
The myth of "no governance"
The story many people like to tell about open AI is:
- Anyone can contribute.
- Decisions are purely merit-based.
- If you do not like the direction, you fork and move on.
It's a comforting narrative because it erases power. In reality, every serious open AI effort has at least three layers of governance, whether they admit it or not.
Stewardship
Someone owns:
- The name
- The repo and org permissions
- The model release keys and signing
- The main website and docs
If one person or one company controls all of those, they are the de facto steward, no matter how flat the community feels.
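If you want to see how concrete the "release keys and signing" lever is, look at downstream verification. Here is a minimal sketch in Python, assuming a hypothetical manifest.json the steward publishes next to the weights, mapping filenames to SHA-256 digests; the filenames and format are illustrative, not any specific project's release process:
```python
# Minimal sketch: verify downloaded model artifacts against a
# steward-published checksum manifest. The manifest format here
# (filename -> SHA-256 hex digest) is hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte weight shards fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_release(release_dir: str, manifest_name: str = "manifest.json") -> bool:
    release = Path(release_dir)
    manifest = json.loads((release / manifest_name).read_text())
    ok = True
    for name, expected in manifest.items():
        actual = sha256_of(release / name)
        if actual != expected:
            print(f"MISMATCH {name}: expected {expected}, got {actual}")
            ok = False
    return ok
```
The governance point: checksums only move trust up one level, to whoever produces and signs the manifest. That key is the stewardship lever.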
Contribution process
There is always a boundary between "random PRs" and "changes that actually land." That boundary is enforced by:
- Maintainers with review and merge rights
- Contribution guidelines and coding standards
- Informal norms about who gets listened to
The more complex the project, the more power shifts from "any contributor" to "the small group that can navigate the full stack."
Resource control
AI is not just code. It is:
- Training runs
- Evaluation infra
- Hosted demos and endpoints
- Event budgets
Whoever funds and controls those resources gets quiet leverage. They can prioritize some directions, starve others, and decide what gets showcased.
When people say "governance," they often think of formal votes and constitutions. In open AI, governance is mostly made up of these three: stewardship, contributions, and resources. Forks, schisms, and alliances are what happen when these layers drift out of sync with community expectations.
Forks: the safety valve that isn't free
Forks are the canonical story in open-source: don't like the direction, fork it. In AI, they take on a few flavors.
Soft forks
Most common. Someone takes:
- A base model
- A library
- A training script
…adds patches or fine-tunes, and ships under a new name, while still tracking upstream changes.
These are more like branches than revolutions. They let people experiment without arguing with maintainers. They also diffuse innovation: upstream can borrow good ideas later.
Hard forks
Less common, more political. A hard fork says: "We are no longer aligned with upstream's goals, governance, or license. We're going our own way."
This usually comes with:
- A new name and branding
- New governance docs
- New distribution channels
Hard forks happen around:
- License changes that feel like betrayal
- Governance decisions seen as illegitimate
- Values conflicts (safety vs openness, commercialization vs neutrality)
Private or semi-private forks
Companies do this quietly. They take an open model or library, wrap it in:
- Internal patches
- Proprietary data and fine-tunes
- Closed eval and safety layers
…and never push anything back.
This is governance by exit: instead of arguing about direction, they internalize the stack and treat upstream as a base dependency.
Forks are important. They are also costly. A fork has to rebuild:
- Trust
- Documentation
- Contributor momentum
- Often infra and evaluation pipelines
If you cannot attract enough talent and resources to sustain that, a fork is just a protest repo. That's why most serious forks come with some combination of:
- A different funding base
- A different sponsor company
- A different institution behind it
"Just fork it" works as rhetoric. As strategy, it only works if you can sustain the new line for years.
Schisms: when values clash in public
Forks are the technical expression of a split. Schisms are the social rupture. You see them in open AI around a few recurring fault lines.
License tightening
A project starts "very open." Over time, the core stewards:
- Worry about misuse
- Worry about free-riders
- Worry about monetization
They tighten the license:
- Adding "no commercial use" clauses
- Adding "no use for X, Y, Z domains"
- Moving from open-source to "source-available"
Part of the community sees this as necessary. Part sees it as bait-and-switch. Trust breaks. The schism is not just legal. It's emotional: "We helped build this, and now the terms changed."
What happens next:
- Some people stay, accepting the new contract.
- Some leave, either to other projects or to start a fork under the older terms.
- Some linger in the issue tracker, fighting a rear-guard action.
Governance centralization
Another pattern: a project starts with a loose group of contributors. As it grows, coordination becomes painful. The stewards respond by:
- Formalizing roles
- Creating a core team
- Adding veto mechanisms
On paper, this is maturity. In practice, it can feel like a power grab, especially if:
- The selection of the "core" is opaque
- Corporate sponsors are overrepresented
- Community members feel decisions now flow top-down
The schism here is between:
- "We need structure to function at scale."
- "We joined because it wasn't another top-down organization."
Safety and openness
AI adds a new axis: how open to be about models and capabilities that could cause harm. Conflicts emerge around:
- Whether to release full weights or gated access
- How much to document known failure modes
- Whether to ship safety mitigations as optional or mandatory
One side frames tighter control as "responsible." The other frames open access as "democratizing" and "audit-enabling." Once that argument goes public, alliances form quickly. People pick sides not only based on technical beliefs, but on deeper political instincts: fear of centralized control versus fear of unregulated power.
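The "optional versus mandatory" fight often comes down to a single API decision. A toy sketch, with invented moderate() and raw_generate() placeholders rather than any real project's interface:
```python
# Toy sketch of the "optional vs mandatory mitigation" decision.
# moderate() and raw_generate() are invented placeholders.
BLOCKLIST = ("how to build a bomb",)  # stand-in for a real safety classifier

def moderate(prompt: str) -> bool:
    """Pretend safety filter: True means the prompt is allowed."""
    return not any(term in prompt.lower() for term in BLOCKLIST)

def raw_generate(prompt: str) -> str:
    return f"<model output for: {prompt!r}>"

# Variant A: mitigation is mandatory. No bypass parameter exists,
# and openness critics call this gatekeeping.
def generate_mandatory(prompt: str) -> str:
    if not moderate(prompt):
        raise ValueError("prompt refused by safety filter")
    return raw_generate(prompt)

# Variant B: mitigation is optional. One keyword argument later,
# safety critics call this a loaded footgun.
def generate_optional(prompt: str, safety: bool = True) -> str:
    if safety and not moderate(prompt):
        raise ValueError("prompt refused by safety filter")
    return raw_generate(prompt)
```
Entire schisms have formed over whether that safety parameter should exist at all, and what it defaults to.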
Schisms do damage. They also clarify. They force projects to say out loud what they stand for and who they serve.
Alliances: coalitions in the gaps
Between forks and schisms, you have alliances: temporary or durable coalitions between projects, companies, and communities.
Why they form:
Pooling compute and data
Training a serious model is expensive. Few groups can do it alone. Alliances form to:
- Share datasets and cleaning pipelines
- Co-fund training runs
- Co-develop eval suites
These are sometimes formal consortia, sometimes loose agreements between labs and community orgs.
Credibility and legitimacy
A scrappy community project might be strong technically but weak in perceived legitimacy for enterprise or policy work. A big company or foundation might have the opposite problem. They form alliances to lend each other:
- "Serious" branding on one side
- Authentic community trust on the other
You see this when:
- A company sponsors an open model and lets a foundation or community "own" the stewardship.
- A community-driven benchmark gets adopted by a corporate-backed org as a standard.
Strategic positioning
Alliances also form as counterweights.
- Several smaller actors team up to avoid being crushed by the gravity of one giant platform.
- Different ideological camps create joint statements and coalitions around openness, safety, or rights.
None of this is new in politics. It's new only to people who thought software governance was somehow above politics.
Alliances matter because they shape:
- Where contributors move
- Where grants and sponsorship flow
- Which models and tools live long enough to become critical infrastructure
Who actually holds power in an open AI project
Strip away the language and you find a handful of concrete levers.
Control of the brand
Names, domains, logo, and social accounts are power.
- If you own the name, you can decide which fork is "official."
- If you run the main site, you choose which models and tools are highlighted.
- If you run the mailing lists and Discord, you own the main communication channels.
Legal ownership
Someone signs:
- Contributor License Agreements
- Corporate deals
- Grants and contracts
That entity carries legal risk and gets legal control. If the legal entity is tightly coupled to one company, it is naïve to pretend the project is fully community-governed.
Admin rights
Admin access to:
- The GitHub org
- CI/CD pipelines
- Model registries
- Hosting infra
is the practical definition of sovereignty. In a crisis (abuse, legal threat, security incident), those people can:
- Lock down repos
- Change permissions
- Pull artifacts
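In practice, "lock down repos" is often just a couple of admin API calls. A minimal sketch against GitHub's REST API, assuming a token with admin rights on the repo, which is exactly the sovereignty in question:
```python
# Minimal sketch: emergency lockdown of a GitHub repo during an
# incident. Requires a token with admin rights on the repo.
import requests

API = "https://api.github.com"

def lockdown(owner: str, repo: str, token: str) -> None:
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    # Limit issue/PR interactions to existing contributors for a week.
    requests.put(
        f"{API}/repos/{owner}/{repo}/interaction-limits",
        headers=headers,
        json={"limit": "contributors_only", "expiry": "one_week"},
        timeout=30,
    ).raise_for_status()
    # Archive the repo: everything becomes read-only.
    requests.patch(
        f"{API}/repos/{owner}/{repo}",
        headers=headers,
        json={"archived": True},
        timeout=30,
    ).raise_for_status()
```
Anyone holding that token can do this unilaterally. Anyone without it can only watch.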
Funding
Follow the money.
- Who pays maintainer salaries.
- Who pays for infra and events.
- Who funds major training and eval runs.
Funding ties directly into agenda-setting. People rarely bite the hand that pays for their GPUs.
You do not need a constitution to see governance. You just have to map these levers.
Informal norms: the invisible constitution
Formal structures are only half the story. Communities also govern themselves through unwritten rules. A few recurring ones in open AI:
"Don't surprise people with license changes."
Projects that shift licenses without transparent process burn trust fast. The norm that emerges:
- Signal early that license changes are on the table.
- Consult with core contributors.
- Give people time to exit or adapt.
"Document the scary parts."
For models with real risk potential, communities expect:
- Honest docs about known failure modes
- Clear limits and disclaimers
- Recommendations for mitigations and safe deployment
Projects that bury or ignore harms get reputationally marked.
"Respect downstream."
If you maintain a library or model that others depend on, you are expected to:
- Avoid breaking changes without migration paths
- Communicate deprecations
- Not aggressively push political or commercial agendas through APIs or defaults
Break this too many times and downstream users quietly move to forks or alternatives.
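"Communicate deprecations" has a standard technical shape. A small sketch of the polite version, with invented function names for illustration:
```python
# Sketch of a downstream-respecting deprecation: the old name keeps
# working, warns loudly, and points at the migration path.
# load_weights/load_checkpoint are invented names.
import warnings

def load_checkpoint(path: str) -> dict:
    """New API."""
    return {"path": path}

def load_weights(path: str) -> dict:
    """Old API, kept as a shim for at least one release cycle."""
    warnings.warn(
        "load_weights() is deprecated and will be removed in v3.0; "
        "use load_checkpoint() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return load_checkpoint(path)
```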
These norms are enforced socially:
- PR reviews
- Public criticism
- Quiet refusals to collaborate
They are fragile but real. Over time, they harden into expectations.
How conflict actually plays out
When something breaks – a controversial merge, an abuse incident, a funding dispute – you get a governance stress test. There is a familiar choreography.
First, the issue leaks into public channels
- Heated GitHub threads, Discord fights, angry tweets
- People invoking ideals: "open," "safe," "democratic," "responsible"
Second, stewards respond
Responses vary:
- Silence or slow, lawyered statements
- Quick bans or rollbacks
- Promises of future processes ("we will form a committee…")
How fast and how transparent this response is tells everyone whether the governance is performative or real.
Third, factions harden
Different groups coalesce:
- "Core team made the right call, trust them."
- "This proves the project has been captured."
- "We need a fork."
This is where alliances matter. External orgs may weigh in, lending legitimacy to one reading.
Fourth, outcomes
There are only a few stable endpoints.
- The project adjusts (policy changes, governance tweaks) and most people stay.
- A group leaves to a fork or a different project, taking some talent and legitimacy with it.
- The controversy burns out, leaving scars and some quiet disengagement.
What people remember is not the details of the incident, but the pattern: "Who listened, who stonewalled, who threw whom under the bus." Over years, repeated patterns become the project's reputation. That reputation affects future contributors, users, and funders more than any single incident.
Patterns of governance emerging in open AI
Despite the apparent chaos, a few recognizable models are taking shape.
Benevolent dictatorship plus community
Classic pattern:
- One or a few founders have ultimate say.
- They delegate more day-to-day authority as the project grows.
- They remain the ultimate backstop during crises.
This can work surprisingly well, especially early, because:
- Decisions are fast.
- Vision is coherent.
- Conflict is resolved decisively.
It fails when:
- The "benevolent" part decays.
- Scale exceeds any one person's judgment.
- Founders and community values diverge.
Foundation model
Not the ML sense. The legal/organizational one.
- A foundation or non-profit owns the brand and core artifacts.
- Boards and advisory councils share formal power.
- Projects under the foundation share some governance templates.
Pros:
- More durable than a single founder or company.
- Easier to argue for independence and public-interest framing.
Cons:
- Risk of bureaucracy and slow decisions.
- Boards can still be captured by specific interests.
Corporate-led "open"
Here, a company:
- Funds most of the work
- Releases models or tools under open licenses
- Maintains tight control over governance
The community gets:
- Strong tooling
- Serious resources
The company gets:
- Ecosystem adoption
- Reputation benefits
- A recruitment pipeline
The governance story is simple: it's a product, even if the code or weights are open. Decisions flow from corporate strategy.
Coalition governance
A newer pattern:
- Several institutions pool resources to create and steward models and tools.
- Governance is shared through councils, working groups, and voting members.
Pros:
- No single corporate or academic actor dominates.
- Better geographic and sector representation is possible.
Cons:
- Coordination overhead is real.
- Risk of consensus paralysis.
Open AI communities are still experimenting. None of these models is obviously "solved." But if you look closely, you can see projects gradually shifting between them as they grow.
Why this matters more as models get capable
As long as open models are seen as toys or pure research, governance feels optional. Once they are:
- Embedded in products
- Used by governments
- Capable of non-trivial harm
governance becomes central. Questions that used to be abstract become practical:
- Who decides when to release a new model or withhold it.
- Who sets and enforces safety constraints.
- Who can be held accountable if things go wrong.
If all of those answers reduce to "one company's leadership" or "one founder," you are not dealing with a community project. You are dealing with a vendor, even if the repo looks friendly.
On the other hand, if you spread authority too thin without clear lines of responsibility, you get unaccountable anarchy, which is not actually better when stakes are high.
The point is not to idealize any specific model of governance. It is to stop pretending that "open" equals "solved."
What to watch if you care where this goes
If you are trying to decide whether to bet on an open AI project – as a user, contributor, or partner – the questions worth asking are blunt.
- Who can change the license.
- Who can delete or rename the repo and models.
- Who pays for the infra and how long that money will be there.
- Who has admin access and how those people are chosen and replaced.
- What happens when there is a serious disagreement about safety, commercialization, or alignment.
If the answers are:
- Unclear
- All pointing to a single private actor
- Or "we'll figure it out when it happens"
you know what you're dealing with.
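Some of those questions can be partially answered from public metadata before you ever talk to a maintainer. A rough sketch against GitHub's public REST API, unauthenticated and rate-limited; the bus-factor heuristic here is crude and illustrative, not a standard metric:
```python
# Rough due-diligence sketch: pull public governance signals for a
# repo. Unauthenticated (low rate limits); heuristics are crude.
import requests

API = "https://api.github.com"

def governance_signals(owner: str, repo: str) -> dict:
    meta = requests.get(f"{API}/repos/{owner}/{repo}", timeout=30)
    meta.raise_for_status()
    meta = meta.json()
    contribs = requests.get(
        f"{API}/repos/{owner}/{repo}/contributors",
        params={"per_page": 100},
        timeout=30,
    )
    contribs.raise_for_status()
    counts = [c["contributions"] for c in contribs.json()]
    total = sum(counts) or 1
    return {
        "license": (meta.get("license") or {}).get("spdx_id"),
        "owner_type": meta["owner"]["type"],  # "User" or "Organization"
        "archived": meta["archived"],
        "top_contributor_share": round(max(counts) / total, 2) if counts else None,
        "contributors_sampled": len(counts),
    }

if __name__ == "__main__":
    print(governance_signals("huggingface", "transformers"))
```
This won't tell you who can change the license, but owner_type and top_contributor_share are fast proxies for "all pointing to a single private actor."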
Forks, schisms, and alliances are not side dramas. They are the mechanisms by which open AI communities negotiate power in the absence of a single boss. You can treat them as noise, or you can read them as what they are: the visible pulses of governance in a field that still insists, against all evidence, that no one is really in charge.