
Building Multi-Agent LLMs – Traditional Approach vs LangGraph

LLM agents are robust AI systems built around large language models to generate coherent, well-structured text. They can plan ahead, recall prior dialogs and events, and use external tools to adapt their answers to the context and the required response style.

A Multi-Agent System (MAS) is a group of agents that communicate and work together to solve problems.  Agents in a MAS can share information, coordinate activities and divide tasks. 

Multi-agent LLMs are rapidly gaining attention, and the chart below vividly illustrates this trend. It tracks the number of research papers published across different categories every quarter, with the paper counts highlighted at each leaf node.  These impressive numbers, accumulated over just a few months, underscore the accelerating interest and growing research momentum in the field of multi-agent LLMs. 

[Figure: number of multi-agent LLM research papers per quarter, by category. Source: https://arxiv.org/pdf/2402.01680]

Today, the leading multi-agent LLM frameworks include LangGraph, AutoGen, and CrewAI. In this blog, we compare the traditional approach with LangGraph. In the next blog, we will compare AutoGen and CrewAI.

Traditional Multi-Agent LLMs: 

Traditional multi-agent systems typically involve multiple instances of LLMs interacting in a hierarchical or parallel fashion.  Each agent is responsible for a specific task, and they often communicate through prompts or API calls. 

The traditional system works like: 

  • Independent Agents: Each agent is independent and assigned a specific responsibility, such as answering a query, performing a calculation or retrieving information. 
  • Supervisor Agent: A central agent, often called a "supervisor," determines which agent gets activated at any point based on the user's request or intermediate results. 
  • Task-Oriented Agents: Agents focus on a single subtask (retrieval, summarization, action, etc.), and the output of one agent can be fed into another. 
  • Memory and State: Session-based state management, where each agent's response is contextualized by previous interactions, but the system may lose context across longer workflows unless explicitly preserved. 

In this traditional setup, the supervisor agent controls the flow of communication between the search and summarization agents. The process is linear, meaning each agent is called sequentially and there is no flexible state management or workflow control. 

In the traditional multi-agent approach, agents are designed as separate entities with distinct roles, often operating independently within a structured environment. Here is how such an implementation can be summarized in terms of traditional agent design.

Each agent is assigned a specific role: 

  • Search Agent: Acts as an information retrieval agent, designed to respond to search requests by processing queries and retrieving relevant information. 
  • Summarization Agent: Serves as a text summarizer, taking in data (in this case, search results) and condensing it into a summary.

Traditional agents often use message-passing methods for interaction, enabling them to communicate asynchronously. 

Here, a queue.Queue object acts as a shared message-passing mechanism, allowing the agents to pass information to each other without calling one another directly. The queue lets each agent retrieve only the data relevant to it, in the order it arrives.
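As a tiny illustration of this hand-off (the queue names here are illustrative, not from the original sample):

```python
import queue

# Shared channels: the supervisor drops queries into search_queue, and the
# search agent drops its results into summary_queue.
search_queue = queue.Queue()
summary_queue = queue.Queue()

search_queue.put("Recent advancements in AI")  # dispatch a task
query = search_queue.get()                     # the search agent picks it up
summary_queue.put(f"results for: {query}")     # and forwards its output
```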

A supervisor function simulates the role of a central controller, managing the lifecycle of each agent. It initiates agents as threads, dispatches queries to the Search Agent, and monitors the workflow. 

 

Once the task is complete, it stops the agents by sending a specific "STOP" command to the queue, a method traditional multi-agent systems use to manage agent activity. 

Each agent operates in parallel using separate threads, simulating independent and concurrent agent behavior, another aspect of traditional multi-agent design. 

Agents independently perform their tasks (e.g., processing queries and summarizing results) while synchronizing only via the message queue.
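Putting these pieces together, here is a minimal runnable sketch of the traditional design described above. The llm_call helper is a hypothetical stand-in for a real GPT-3.5 request, and all names are illustrative:

```python
import queue
import threading

def llm_call(prompt: str) -> str:
    # Hypothetical stand-in for a real GPT-3.5 API request.
    return f"LLM output for: {prompt}"

search_queue = queue.Queue()   # supervisor -> search agent
summary_queue = queue.Queue()  # search agent -> summarization agent

def search_agent():
    # Consume queries until the supervisor sends the STOP sentinel.
    while True:
        query = search_queue.get()
        if query == "STOP":
            summary_queue.put("STOP")  # propagate shutdown downstream
            break
        summary_queue.put(llm_call(f"Search for: {query}"))

def summarization_agent():
    # Condense whatever the search agent produces.
    while True:
        text = summary_queue.get()
        if text == "STOP":
            break
        print("Summary:", llm_call(f"Summarize: {text}"))

def supervisor(query: str):
    # Central controller: starts the agents, dispatches work, shuts them down.
    threads = [threading.Thread(target=search_agent),
               threading.Thread(target=summarization_agent)]
    for t in threads:
        t.start()
    search_queue.put(query)   # dispatch the user query
    search_queue.put("STOP")  # signal the pipeline to wind down
    for t in threads:
        t.join()

supervisor("Recent advancements in AI")
```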

 

This approach exemplifies a basic traditional multi-agent design with specialized, parallel, and interacting agents, each handling distinct steps of a single workflow in an isolated manner. This modular, message-driven design, typical of traditional systems, contrasts with newer, dynamic orchestrations like LangGraph, which offer more complex workflows and inter-agent dependencies.

Using LangGraph, we can design sophisticated workflows involving multiple specialized agents working together in a coordinated fashion to solve complex problems. Here's a detailed explanation of how the sample below leverages LangGraph to demonstrate this multi-agent architecture:

Key Concepts in Multi-Agent LLM with LangGraph 

  1. LangGraph Environment: 

The LangGraph environment serves as a container that orchestrates the agents and workflows. It helps structure tasks in such a way that multiple agents can interact and collaborate seamlessly. 

  2. Agents in LangGraph:

In this example, we define two specialized agents:

  1. SearchAgent: This agent is responsible for retrieving information based on a user query. It uses OpenAI’s GPT-3.5 to perform a search-like task, simulating the behavior of fetching data or knowledge. 
  2. SummarizationAgent: Once the information is retrieved by the SearchAgent, the SummarizationAgent processes this data to generate a concise summary, also using GPT-3.5. 
  3. Workflow Definition:

The @graph.workflow decorator defines a sequence of tasks where the output of one agent feeds into the next. In this case: 

  1. The SearchAgent takes a query, searches for relevant information and passes the result to the SummarizationAgent. 
  2. The SummarizationAgent then summarizes the result and returns the final summary. 

Let us see how we can implement the multi-agent system using LangGraph in the sample below.

Let us import the Agent and LangGraph classes from langgraph, initialize the OpenAI API with a key for GPT-3.5 communication, and create an instance of LangGraph.
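A sketch of this setup is shown below. Note that the Agent and LangGraph classes follow the interface this post narrates and may not match the current langgraph package, whose public API centers on StateGraph; the OpenAI call uses the openai>=1.0 client:

```python
import os
from openai import OpenAI
# Interface as narrated in this post; may differ from current langgraph
# releases, whose public API centers on StateGraph.
from langgraph import Agent, LangGraph

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # GPT-3.5 access

graph = LangGraph()  # environment that orchestrates agents and workflows
```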

Let us define the agents – the SearchAgent and the SummarizationAgent.

  1. SearchAgent:

  • Function: Receives a query and uses GPT-3.5 to simulate a search operation, returning the content as the search result. 
  • Output: The search result is printed and returned for further processing. 

  2. SummarizationAgent:

  • Function: Takes text as input and uses GPT-3.5 to generate a concise summary. 
  • Output: The summary is printed and returned. 
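Continuing the sketch above, the two agents might be defined as follows. The Agent base class and its run interface follow the post's narration (a hypothetical API), and client comes from the previous snippet:

```python
class SearchAgent(Agent):
    def run(self, query: str) -> str:
        # Ask GPT-3.5 to play the role of a search engine.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Find information about: {query}"}],
        )
        result = response.choices[0].message.content
        print("Search result:", result)
        return result

class SummarizationAgent(Agent):
    def run(self, text: str) -> str:
        # Condense the retrieved text into a short summary.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Summarize concisely: {text}"}],
        )
        summary = response.choices[0].message.content
        print("Summary:", summary)
        return summary

search_agent = SearchAgent()
summarization_agent = SummarizationAgent()
```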

  3. Workflow Definition:

Workflow: The query_workflow function defines the task flow where the SearchAgent and SummarizationAgent collaborate.  The search result is summarized, and the final output is returned. 
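A sketch of the workflow, continuing from the snippets above. The @graph.workflow decorator is the interface this post narrates, not necessarily that of current langgraph releases:

```python
@graph.workflow
def query_workflow(query: str) -> str:
    result = search_agent.run(query)            # step 1: retrieve information
    summary = summarization_agent.run(result)   # step 2: condense it
    return summary
```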

 

Input Query: The system starts with a query like "Recent advancements in AI". 

Execution: graph.execute() runs the defined workflow, and the final summary is printed. 
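Finally, the workflow can be run end to end. The exact graph.execute signature is assumed from the post's description:

```python
query = "Recent advancements in AI"
final_summary = graph.execute(query_workflow, query)  # assumed call shape
print("Final summary:", final_summary)
```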

 

Advantages of Using LangGraph for Multi-Agent Systems 

Modularity: Each agent has a specialized task, making the system easier to manage and extend. 

Parallel Processing: LangGraph can support more complex workflows, potentially running agents in parallel to improve efficiency. 

Scalability: New agents can be added to the workflow to handle additional tasks like filtering, data enrichment or more nuanced analyses. 

Key Takeaways 

The LangGraph approach represents a significant advancement over traditional methods in building multi-agent LLM systems. It provides a higher-level abstraction that reduces complexity, improves maintainability, and enhances integration capabilities. As a result, LangGraph enables faster prototyping and more scalable solutions, making it a compelling choice for modern multi-agent applications. 

In our next blog, we will compare building multi-agent systems with CrewAI and AutoGen.

 

Reference: https://arxiv.org/pdf/2402.01680 

 

 
