Making sub-agents more diverse with RandomMCP

Josh Lawman

A lot of people are talking about agent orchestration and how to coordinate agents (and sub-agents) to accomplish tasks.

One counter-intuitive wish that arises when exploring how to work with agents and sub-agents is that they were less like a "clone army" and had more of the natural diversity of thought and approach that people bring to problems. Whenever we solve problems with groups of people, we get a lot of that diversity for free - people have different context, backstories, and approaches to problems.

If you ask twenty designers to suggest new landing page designs, you'll likely get twenty very different results. If you ask twenty agents, you might get three meaningfully different results. (Likewise with code review, product direction decisions, etc.)

RandomMCP Based Redesign Demonstration - https://www.randommcp.com/showcase

When working on an identical problem, the agents, being "clones" of each other, struggle to depart from a common centre of gravity and identical wellspring of ideas that they share with their agent siblings.

If you asked me and 100 clones of me, at the same moment, to come up with a "random" object, I'm not sure how many unique objects we'd produce, but I suspect it wouldn't be close to 100 - it might be as few as one. Likewise with models.

One way this is being tackled is by deploying different models themselves - a code review by Opus 4.6, Gemini 3.1 Pro, and GPT 5.3 Codex will produce three very different code reviews / UX redesigns / etc. But this is somewhat limited and doesn't scale beyond three or maybe five really high-quality diverse opinions.

Another way we currently handle this is simply not having more than one agent solve the same problem: we diversify our sub-agents by giving them non-overlapping tasks via the prompt (e.g. "You are a UX reviewer..", "You are a security expert..") and accept whatever answer each proposes, knowing that there isn't only one optimal output for a UX review, code review, etc., but that each will probably produce something useful.
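The role-prompt approach above can be sketched in a few lines. The roles and task here are illustrative stand-ins, not anything from a real system:

```python
# A minimal sketch of role-based sub-agent diversification.
# The role prompts below are hypothetical examples; swap in
# whatever personas fit your domain.
ROLE_PROMPTS = [
    "You are a UX reviewer. Focus on layout, flow, and accessibility.",
    "You are a security expert. Focus on injection, auth, and data handling.",
    "You are a performance engineer. Focus on hot paths and allocations.",
]

def build_prompts(task: str) -> list[str]:
    """Give each sub-agent the same task but a non-overlapping role."""
    return [f"{role}\n\nTask: {task}" for role in ROLE_PROMPTS]

prompts = build_prompts("Review the checkout page changes.")
```

Each prompt then goes to a separate sub-agent; the diversity comes entirely from the role prefix, since the task text is identical.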

And so it seems we need a way to make our clone army less clone-like, by introducing some outside influence / entropy / randomness into the process.

Ideally, if we deploy multiple agents against the same problem, we'd like each agent to come at it from a slightly different background and set of experiences. We want them to read different groups of files, or explore unexpected directions. We want some to take a "sensible" approach and others to offer non-consensus opinions. In short, we want our group of agents to benefit from the diversity of thought and problem-solving approaches we see when multiple humans tackle the same problem.

The RandomMCP Playground

We've already tried things like this before in traditional machine learning, with algorithms like Random Forest that introduce randomness into training so that hundreds or thousands of weak learners can, in aggregate, produce a better result than one "optimally" fit tree. (In that case, each tree was trained on a bootstrap sample of the data, and each split of the decision tree only considered a random subset of the features.)
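As a rough illustration of those two randomness mechanisms - bootstrap sampling of rows and random feature subsets per split - here is a toy sketch with placeholder data:

```python
import random

random.seed(0)

data = list(range(100))                    # stand-in for training rows
features = ["f1", "f2", "f3", "f4", "f5"]  # stand-in for feature names

def bootstrap_sample(rows):
    """Sample rows with replacement, as bagging does for each tree."""
    return [random.choice(rows) for _ in rows]

def feature_subset(feats, k=2):
    """Each split only gets to consider a random subset of features."""
    return random.sample(feats, k)

# Two "trees" trained under this randomness see different worlds,
# which is exactly where the ensemble's diversity comes from.
tree_a = (bootstrap_sample(data), feature_subset(features))
tree_b = (bootstrap_sample(data), feature_subset(features))
```

The same principle carries over to agents: each worker gets a deliberately perturbed view of the shared problem, and the aggregate benefits.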

As a first stab at exploring this idea, I put together RandomMCP - https://www.randommcp.com/ - which is meant to provide outside context to your agents. I think this will be most useful for code review, UX redesign, and finding bugs.

Returning to the analogy of asking 100 clones to think of a random object, it's somewhat like first having each clone do some random activity - playing jazz, talking about politics - or simply suggesting to each a specific type of object. Each clone gets a different suggestion, and as a result you increase the diversity of outputs from the clone group while staying within the bounds of the original question.
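A minimal sketch of that seeding idea, using a local stand-in for an outside entropy source like RandomMCP (the real server's tool names and outputs may differ; everything here is hypothetical):

```python
import random

# Hypothetical stand-in for an MCP tool that returns a random nudge.
INSPIRATIONS = [
    "Approach this as a brutalist designer would.",
    "Optimise for first-time visitors on slow connections.",
    "Assume the user is colour-blind.",
    "Borrow a pattern from print magazine layouts.",
]

def random_inspiration(rng: random.Random) -> str:
    return rng.choice(INSPIRATIONS)

def seeded_prompts(task: str, n_agents: int, seed: int = 42) -> list[str]:
    """Prefix the shared task with a different random nudge per agent."""
    rng = random.Random(seed)
    return [f"{random_inspiration(rng)}\n\n{task}" for _ in range(n_agents)]
```

Every agent still answers the same question; only the preamble varies, which keeps the outputs on-topic while pulling them away from the shared centre of gravity.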

I think the random generators need some refinement to get the kind of diversity you want for your specific domain. And there are other ways of introducing randomness that aren't explored here. For example, as with the random forest training above, agents that each receive only a subset of the overall plan may outperform a group of agents who are all given a full copy of it and end up following a similar "greedy" decision-making process - like the decision trees - that creates unexpected problems later down the road.
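That plan-subset idea could be sketched like this; the plan items and subset fraction are hypothetical placeholders:

```python
import random

# Hypothetical overall plan shared by the agent group.
plan = [
    "Audit current navigation",
    "Draft three hero variants",
    "Restructure pricing table",
    "Rewrite onboarding copy",
    "Add empty-state illustrations",
]

def plan_subset(steps, fraction=0.6, rng=None):
    """Give an agent only a random slice of the plan, random-forest style."""
    rng = rng or random.Random()
    k = max(1, round(len(steps) * fraction))
    return rng.sample(steps, k)

# Each agent works from a different partial view of the overall plan.
agent_views = [plan_subset(plan, rng=random.Random(i)) for i in range(3)]
```

Whether partial views actually beat full copies is an open question, but the sketch shows how cheaply the experiment could be set up.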

Regardless, I hope anyone who reads this enjoys RandomMCP, and I'd love to hear of other ways people are approaching this problem.


Josh Lawman


AI Engineer
