How it actually works: Building a team of AI collaborators
With the growing role of Generative Artificial Intelligence (GenAI) in our personal and professional lives, there’s no shortage of opinions or predictions about what AI can accomplish. But what are we learning at ICF as we do the work to apply GenAI in the real world? In this series, we examine AI applications and use cases with a subject-matter expert to get a true take on them. Is that how AI actually works? Let’s find out.
The Situation: Clients with complex, multi-step challenges that require different databases, disciplines, and/or AI models. Can they bring it all together in one system?
The Subject: Multi-agent AI
The Expert: Nick Auen, AI Implementation Specialist
First things first. An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use that data to perform self-determined tasks in service of human-defined goals. So, as the name suggests, multi-agent AI systems use multiple AI agents working collaboratively to achieve complex goals, solve challenges, or make decisions, often distributing discrete tasks across multiple agents. With this approach, AI agents can effectively "daisy-chain" their efforts on large challenges, working cooperatively to break them down or come at a problem from different angles.
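The agent and daisy-chain concepts above can be sketched in a few lines of code. This is a hypothetical, minimal illustration (the `Agent` class and `daisy_chain` function are names we invented for this sketch, not an actual framework): each agent wraps one specialized capability, and a chain passes each agent's output to the next.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal, hypothetical sketch of an AI "agent": it takes an
# observation of its environment and applies its own specialized
# task logic toward a human-defined goal.
@dataclass
class Agent:
    name: str
    task: Callable[[str], str]  # the agent's specialized capability

    def run(self, observation: str) -> str:
        return self.task(observation)

def daisy_chain(agents: list[Agent], problem: str) -> str:
    """Pass a large problem through a chain of specialized agents,
    each one working on the previous agent's output."""
    result = problem
    for agent in agents:
        result = agent.run(result)
    return result

# Usage: each toy agent handles one slice of the overall task.
pipeline = [
    Agent("extractor", lambda text: text.upper()),
    Agent("summarizer", lambda text: text[:20]),
]
print(daisy_chain(pipeline, "raw problem statement to decompose"))
```

In a real system each `task` would be a call to a trained model with its own knowledge sources, but the structure — specialized units composed into a larger workflow — is the same.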
Multi-agent AI systems can also work competitively to simulate real-world scenarios in domains like economics, robotics, and game theory. Today, the most successful applications of this approach include autonomous vehicle fleets, smart grids, and manufacturing use cases such as collaborative assembly-line robotics. But we're seeing more growth, and more use cases, every day.
For those with concerns about the growth of GenAI, multi-agent systems have an added benefit in that they can be designed to be more transparent by distributing decision-making across multiple agents. This decentralization helps prevent single points of failure and ensures that each agent's role and behavior can be independently monitored and evaluated. That improves auditability, making it easier to trace the data that feeds into how a decision was made, and accountability, by reducing the opacity of decisions and making it easier to detect undesirable biases in the AI output.
In short, by using multiple, specifically trained agents we’re taking a team approach, putting the best players in their best positions and maximizing their overall potential.
We’re currently piloting the use of AI agents to help clients with complex, multi-step processes get results faster. Here’s how:
- The process is broken down and assigned to discrete agents, each of which has access to unique knowledge repositories, tools, and instructions for how to solve its task.
- Agents review one another’s work against multiple sets of evaluation metrics.
- Outputs are iterated on (when needed) based on feedback provided by each agent.
- At every step, ICF subject-matter experts can review each AI agent’s actions and the data used to inform those actions to ensure quality and consistency in performance (Figure 1).
Figure 1: Networking AI and human experts together to tackle complex tasks
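The review-and-iterate loop in the steps above can be sketched as follows. This is an illustrative mock-up, not ICF's actual pilot system: the `drafter` and `reviewers` callables, the score threshold, and the round limit are all assumptions we chose for the example. The key ideas it demonstrates are peer review against multiple evaluation metrics, iteration driven by that feedback, and an audit log that human experts can inspect at every step.

```python
# Hypothetical sketch of the review-and-iterate loop: one agent drafts
# an output, peer agents score it against evaluation metrics, and the
# draft is revised until all scores clear a threshold (or rounds run out).
def solve_with_peer_review(drafter, reviewers, task, max_rounds=3, threshold=0.8):
    draft = drafter(task)
    audit_log = []  # every step is recorded so human experts can review it
    for round_num in range(max_rounds):
        scores = {name: review(draft) for name, review in reviewers.items()}
        audit_log.append({"round": round_num, "draft": draft, "scores": scores})
        if min(scores.values()) >= threshold:
            break  # all evaluation metrics satisfied
        # Iterate: fold the weakest score back into a revision request.
        draft = drafter(task + f" (revise; weakest score {min(scores.values()):.2f})")
    return draft, audit_log

# Usage with toy callables standing in for real model-backed agents.
drafter = lambda task: f"answer to: {task}"
reviewers = {
    "accuracy": lambda draft: 0.9,   # simulated metric scores
    "clarity": lambda draft: 0.85,
}
result, log = solve_with_peer_review(drafter, reviewers, "classify records")
```

Because the audit log captures each draft and every reviewer's score, a subject-matter expert can trace exactly which feedback shaped the final output — the auditability benefit described earlier.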
We’ve made a concerted effort to take the results of each successful AI engagement and think about what’s next. What we’ve learned is that with multi-agent AI, you can increase the effectiveness of a single AI agent exponentially while simultaneously improving the transparency of the system. Specifically, we’re looking at improved efficiency through distributed problem solving, simple scalability for small or large tasks, and more robust solutions in fields like disaster response and logistics. Like drones (see how far they’ve come in the past decade), some multi-agent systems can even be configured for greater resiliency, so the failure of one agent doesn’t cripple the whole system. And overall, multiple agents bring diverse perspectives and approaches to solving problems that a single system may not see.
With our hands-on experience, we’ve also drawn numerous lessons and key principles for developing multi-agent AI systems, including:
- Understanding that trust is contextual. Agents are only valuable when you have trust in them—at ICF, process transparency and human expertise are foundational and compulsory for all AI-agent-powered use cases. Context varies both across and within use cases. So, solutions that work with agents need to be robustly fault-tolerant, use existing agents that have previously been 'proven' in identical or highly similar contexts, or both. Without this understanding, you're actually just multiplying your QA/QC work when you add more agents into the mix.
- Designing agents to focus on specific tasks or roles, making each one modular and specialized. This enhances system efficiency and simplifies problem-solving by allowing agents to work independently while contributing to a larger goal.
- Ensuring that agents can effectively communicate and share information in real-time to achieve coordinated decision-making. Strong collaboration between agents allows for dynamic adjustments and enhances the system's overall performance.
- Building your multi-agent AI system to scale easily by allowing the addition of more agents without significant reconfiguration. Agents should also be flexible, adapting to changing environments or new tasks as needed.
- Designing for resilience (as mentioned above) by allowing the system to continue functioning even if individual agents fail. Redundancy and error-checking across agents help maintain system integrity and prevent single points of failure.
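The resilience principle above can be made concrete with a small failover sketch. This is a hypothetical illustration (the function names and simulated agent behaviors are our own, not a real API): redundant agents back each other up for the same task type, so one agent's failure does not take down the system.

```python
# Hypothetical sketch of resilience through redundancy: try each
# redundant agent in turn, logging failures, until one succeeds.
def run_with_failover(agents, task):
    """agents: ordered list of (name, callable) pairs for the same role."""
    errors = []
    for name, agent in agents:
        try:
            return name, agent(task)
        except Exception as exc:  # one agent failing is not fatal
            errors.append((name, str(exc)))
    # Only raise if every redundant agent has failed.
    raise RuntimeError(f"all agents failed: {errors}")

# Simulated agents: the primary is down, the backup works.
def flaky_agent(task):
    raise TimeoutError("model endpoint unavailable")

def backup_agent(task):
    return f"handled: {task}"

name, output = run_with_failover(
    [("primary", flaky_agent), ("backup", backup_agent)], "route shipment"
)
```

The system degrades gracefully: the backup agent completes the task, and the error list gives auditors a record of which agents failed and why.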
So: IS THAT HOW IT ACTUALLY WORKS? With careful planning, purpose, and oversight, multi-agent AI takes the strengths of GenAI to another level, one we're only beginning to explore. But at ICF, we're right in the middle of it, and learning every day: we already see in multiple real-world use cases that the multi-agent approach can offer more computing power, flexibility, scalability, and perspectives to better tackle complicated tasks, all by building on what we already know about GenAI.
AI is evolving at a breakneck pace, and ICF is helping organizations define the rules for how to integrate it efficiently and ethically. We’re already working with multiple government and energy clients to develop strategies and tactics for harnessing this powerful technology to deliver outcomes — all while minimizing risks and ensuring accuracy. Learn more about our Responsible AI principles.