The Shape of Composite Intelligence
Working notes on how intelligence emerges — and sometimes collapses — in teams, organizations, biological systems, and human-AI partnerships
A group of smart people can produce an outcome dumber than any of them. Or smarter than all of them put together. The same dynamic shows up in research teams and boardrooms, in nervous systems and insect colonies, in human-AI collaborations and multi-agent AI systems.
Founders and operators experience this as trapped intelligence — the knowledge is there, the capability exists, but the organization can’t make use of it.
The question underneath is the same: how do groups of minds think, and what changes when the minds are of different kinds?
I’m trying to understand both directions. Why collectives of minds (teams, organizations, human-AI groups, biological systems) sometimes contract and sometimes expand the intelligence of their members. What determines which way it goes. Whether a single underlying model covers all of these cases.
Understanding this could help us run organizations better, and help us find our place in a world of machine intelligence.
Who this is for
- Leaders working with AI in the loop. What happens to a team when an LLM is part of it? If you’re trying to figure out where humans and AIs each add value, when they substitute for each other, and when the pairing produces something worse than either alone — that’s this.
- CEOs and operators who’ve felt the gap between what their organization knows and what it does, and suspect the problem isn’t just “we need better data” or “we need better people.”
- Researchers in cognitive science, organizational theory, philosophy of mind, or AI safety who are curious how these ideas land when you start from practice and connect back to theory. The human-AI case makes the older composition puzzle concrete and urgent in a way it hasn’t been before.
- Anyone who’s noticed that sometimes the whole is more than the sum of the parts, and sometimes much less, and wants a better vocabulary for both.
What this is, and isn’t
What it is: a framework I’m developing openly. Articles published here as I work through specific ideas. Longer pieces when something clicks. Eventually, something more structured — but the structure will come from the work, not the other way around.
What it isn’t: consulting, advice, or sales. I’m not trying to convince you of anything. I’m figuring it out as I go, and sharing the process in case it’s useful to people thinking along similar lines.
Live questions I’m working through
A few threads that are open as of April 2026:
- How much of what we call “collective intelligence” is about the individuals, and how much is about how they’re connected and how they talk to each other? (See the toy sketch after this list.)
- Why do some organizations act on what they know while others don’t, and is this a different failure from “not having enough information”?
- What happens to the dynamics of a group when the members aren’t all human — when AI is part of the team, or when work is split between humans and LLMs?
- How accurately does any collective — a team, a company, a human-AI partnership — model its own capabilities? What does the gap between “what we think we can do” and “what we can actually do” cost when it’s wide?
- Can the same underlying model explain cognition at multiple scales — cells within an organism, people within a team, teams within a company, companies within an industry?
Reach out
If any of this resonates — or if you think I’m wrong — I’d like to hear from you.
- Subscribe for new pieces: rajjha.com/newsletter
- DM directly: @rajjha on Twitter/X
I’m especially interested in hearing from people who’ve seen these dynamics up close — in organizations, in biology, in AI systems, anywhere minds combine.
About me, and why I’m coming at this the way I am
Bumping this to the end because who I am matters less than whether the work lands for you. In case it’s useful context:
I’ve run six companies over twenty-plus years, and I was an intellectual property lawyer before that. The companies gave me the puzzles I’m trying to solve — groups of smart people producing outcomes sometimes much dumber, sometimes much smarter, than any individual member.
The lawyering left me with a contract-drafter’s attachment to precise language; if I can’t define what I mean in a sentence that would survive legal scrutiny, I don’t trust that I actually understand it yet.
Most of what’s written about how groups think comes from academia — cognitive science, organizational theory, philosophy of mind, more recently AI safety. The literature is rich, but it tends to work from theory toward practice. I’m working the other way: from two decades of running things, connecting back to the theory where it hones what I noticed.
The work sits in the overlap of those fields, and in the unexplored space between them.