I've been doing this with Claude Code and agent teams.
I have a /red-team skill that uses an agent team to criticize its own work, grade and rank the feedback, incorporate the relevant pieces, and then start over. It has noticeably increased the quality of the output.
I don't know - it looks like an interesting idea - but... I'm struggling to put this politely. When I go into the repo and find that it does things like lip-syncing talking avatars, I start to wonder what percentage of the development effort goes into marketing.
Self-organizing systems are an area of research to which I think LLMs will contribute immensely.
But as of now, even newer AI models are not particularly insightful. I'm always surprised by how suboptimal near-frontier LLMs are at collaborating in some of the easier cooperative environments on my benchmarking and RL platform. For example, check out a replay of consensus grid here: https://gertlabs.com/spectate
While interesting, it's not clear to me just from looking at consensus grid how they are prompted.
Do you tell them to think about and coordinate the next step through some kind of sync/talking mechanism, or is it turn by turn?
I suspect turn by turn, since it's similar to other experiments, and in that case it wouldn't work well because they wouldn't have any time to think about the next step together.
All of our environments are tick based (with ticks of varying speeds), and this is explained in the prompt given to the models, along with the latest observation and a history of recent events/conversations/actions.
So that does make the game more challenging, versus some other simulations we have where multiple conversation turns happen before action. But the inefficiencies I'm describing are different; for example, an agent reaches part of the destination area but is clearly blocking another player who needs to pass, and most models will just stay put instead of moving along to another target spot.
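The per-tick prompting described above (rules, latest observation, recent history) can be sketched roughly like this. All names, the rules text, and the observation format are illustrative assumptions, not the platform's actual schema:

```python
# Hedged sketch of tick-based prompting: each tick, every agent receives the
# game rules, its latest observation, and a short history of recent
# events/conversations, and must return one action for that tick.
import json

RULES = "Tick-based grid game. Coordinate with teammates; reach your targets."

def build_prompt(rules, observation, history):
    """Assemble the per-tick prompt: rules + recent events + latest observation."""
    return "\n".join([
        rules,
        "Recent events:\n" + "\n".join(history[-5:]),  # keep only the last few
        "Current observation:\n" + json.dumps(observation),
        "Reply with a single action.",
    ])

history = ["agent2 said: heading to (3,4)"]
obs = {"you": [2, 4], "teammates": {"agent2": [3, 3]}, "targets": [[3, 4], [2, 5]]}
prompt = build_prompt(RULES, obs, history)
```

Because everything must fit in one prompt per tick, an agent can only "coordinate" through whatever messages made it into the recent-events window before its turn.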
So is "Game Overview" the prompt? Because I can't seem to see any indication/hint given to the models that it's a game they should work together on, communicate in, etc.
No, the full prompt is not available in the UI, sorry.
Sounds like a less efficient version of the mixture of experts approach.
How does mixture of experts architecture work? Are they debating, or merely delegating?
From what I've read, for each token or input patch, the gate computes a set of probabilities (or scores) over the experts, then selects a small subset (often the top-k) and routes that input only to those.
I.e. each expert computes its own transformation of the same original input (or a shared intermediate representation), and their outputs are then combined at the next layer via the gate's weights.
That’s post hoc combination, not B reasoning over A’s reasoning.
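The gating-and-combining step described above can be sketched in a few lines of NumPy. This is a minimal illustration of top-k routing, not any particular model's implementation; the linear "experts" and the gate matrix are stand-ins:

```python
import numpy as np

def top_k_gate(x, W_gate, k=2):
    """Score all experts, keep only the top-k, softmax over just those."""
    logits = x @ W_gate                       # one scalar score per expert
    topk = np.argsort(logits)[-k:]            # indices of the k highest-scoring experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                              # normalized weights over selected experts
    return topk, w

def moe_layer(x, W_gate, experts, k=2):
    """Route x to the top-k experts; combine their outputs with the gate weights."""
    topk, w = top_k_gate(x, W_gate, k)
    # Each selected expert transforms the SAME input x; outputs are summed,
    # weighted by the gate -- a post hoc combination, not experts reasoning
    # over each other's outputs.
    return sum(wi * experts[ei](x) for ei, wi in zip(topk, w))

rng = np.random.default_rng(0)
d, n_experts = 4, 8
W_gate = rng.normal(size=(d, n_experts))
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in mats]  # toy "experts": linear maps

x = rng.normal(size=d)
y = moe_layer(x, W_gate, experts, k=2)
```

Note that the unselected experts contribute nothing at all for this input, which is where the compute savings come from.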
A MoE model is one model with expert parts, each of which is activated for fewer tokens. That makes it easier for an expert to converge to a better optimum: it's easier to only need to know medicine, instead of everything, and to keep everything else separated from medicine even when certain names, concepts, etc. are the same.
AI agents discussing things with each other would be more like one thinking model working through the problem with different personas.
With different underlying models, you can leverage the best model for each persona. Like people said before (6 months ago, no clue if this is still valid) that they preferred GPT for planning and Claude for executing/coding.
I had good results combining Claude Code with Codex, letting them have back-and-forth sessions. Their prompts were orders of magnitude better than mine, as were their evaluations and criticisms of the other LLM.
What I haven't taken time for is figuring out how I'd automate their back-and-forth and stop manually copy/pasting their responses.
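One way to automate that relay is to drive both CLIs non-interactively from a small script. This is a hedged sketch: it assumes each tool has a "prompt in, answer out" mode (e.g. `claude -p` and `codex exec`, whose exact flags may differ across versions), and the commands below are placeholders you'd adapt to your installs:

```python
import subprocess

def ask(cmd, prompt):
    """Run a CLI agent non-interactively; treat its stdout as the reply."""
    res = subprocess.run(cmd + [prompt], capture_output=True, text=True)
    return res.stdout.strip()

def relay(cmd_a, cmd_b, task, rounds=3):
    """Alternate the task plus the latest reply between two agents."""
    msg = task
    for _ in range(rounds):
        msg = ask(cmd_a, f"Task: {task}\nPartner said: {msg}")
        msg = ask(cmd_b, f"Task: {task}\nPartner said: {msg}")
    return msg

# Hypothetical usage -- verify the non-interactive flags for your versions:
# relay(["claude", "-p"], ["codex", "exec"], "Review this design doc...")
```

A fixed round count is the simplest stopping rule; a fancier version could stop when one agent replies with an agreed sentinel like "DONE".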