EvoAgentX: Building a Self-Evolving Ecosystem of AI Agents

EvoAgentX Homepage Docs Discord Twitter Wechat GitHub star chart GitHub fork License

An automated framework for evaluating and evolving agentic workflows.

🔥 Latest News

  • [July 2025] 🎉 EvoAgentX is on arXiv!
  • [July 2025] 🎉 EvoAgentX has achieved 1,000 stars!
  • [May 2025] 🎉 EvoAgentX has been officially released!

⚡ Get Started

Installation

We recommend installing EvoAgentX using pip:

pip install git+https://github.com/EvoAgentX/EvoAgentX.git

For local development or detailed setup (e.g., using conda), refer to the Installation Guide for EvoAgentX.

Example (optional, for local development):
git clone https://github.com/EvoAgentX/EvoAgentX.git
cd EvoAgentX
# Create a new conda environment
conda create -n evoagentx python=3.11

# Activate the environment
conda activate evoagentx

# Install the package
pip install -r requirements.txt
# OR install in development mode
pip install -e .
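
To quickly verify the installation, you can try importing the package (a minimal sanity check; it only confirms that EvoAgentX is importable in the current environment):

import evoagentx  # should complete without errors after a successful install
print("EvoAgentX imported successfully")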

LLM Configuration

API Key Configuration

To use LLMs with EvoAgentX (e.g., OpenAI), you must set up your API key.

Option 1: Set API Key via Environment Variable
  • linux/macOS:
export OPENAI_API_KEY=<your-openai-api-key>
  • Windows Command Prompt:
set OPENAI_API_KEY=<your-openai-api-key>
  • Windows PowerShell:
$env:OPENAI_API_KEY="<your-openai-api-key>" # the quotation marks are required in PowerShell

Once set, you can access the key in your Python code with:

import os
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
Option 2: Use .env File
  • Create a .env file in your project root and add the following:
OPENAI_API_KEY=<your-openai-api-key>

Then load it in Python:

from dotenv import load_dotenv 
import os 

load_dotenv() # Loads environment variables from .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

Configure and Use the LLM

Once the API key is set, initialise the LLM with:

import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define LLM configuration
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",       # Specify the model name
    openai_key=OPENAI_API_KEY, # Pass the key directly
    stream=True,               # Enable streaming response
    output_response=True       # Print response to stdout
)

# Initialize the language model
llm = OpenAILLM(config=openai_config)

# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")
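
If you prefer to collect the full response before printing it, the same configuration fields can be reused with streaming turned off. This is a minimal sketch based only on the classes and parameters shown above; the assumption that the value returned by generate() prints as readable text is not confirmed by this README:

# Sketch: non-streaming configuration (assumption: the object returned by
# generate() has a readable string representation)
plain_config = OpenAILLMConfig(
    model="gpt-4o-mini",
    openai_key=OPENAI_API_KEY,
    stream=False,            # wait for the complete response
    output_response=False    # do not print the response automatically
)
plain_llm = OpenAILLM(config=plain_config)
answer = plain_llm.generate(prompt="Summarize what an agentic workflow is in one sentence.")
print(answer)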

📖 More details on supported models and config options: LLM module guide.

Automatic Workflow Generation

Once your API key and language model are configured, you can automatically generate and execute multi-agent workflows in EvoAgentX.

🧩 Core Steps:

  1. Define a natural language goal
  2. Generate the workflow with WorkFlowGenerator
  3. Instantiate agents using AgentManager
  4. Execute the workflow via WorkFlow

💡 Minimal Example:

from evoagentx.workflow import WorkFlowGenerator, WorkFlowGraph, WorkFlow
from evoagentx.agents import AgentManager

# 1. Define a natural language goal
goal = "Generate html code for the Tetris game"

# 2. Generate the workflow graph from the goal
workflow_graph = WorkFlowGenerator(llm=llm).generate_workflow(goal)

# 3. Instantiate the agents required by the workflow
agent_manager = AgentManager()
agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)

# 4. Execute the workflow
workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)
output = workflow.execute()
print(output)

You can also:

  • 📊 Visualise the workflow: workflow_graph.display()
  • 💾 Save/load workflows: save_module() / from_file(), as sketched below
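
A minimal sketch of persisting and reloading a generated workflow follows. The save_module() and from_file() helpers are named above; the file path and the assumption that from_file() is called on WorkFlowGraph are illustrative:

# Save the generated workflow graph to disk (illustrative path)
workflow_graph.save_module("tetris_workflow.json")

# Later, reload it without regenerating (assumes from_file() is a WorkFlowGraph constructor)
workflow_graph = WorkFlowGraph.from_file("tetris_workflow.json")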

📂 For a complete working example, check out workflow_demo.py

Tool-Enabled Workflow Generation:

In more advanced scenarios, your workflow agents may need to use external tools. EvoAgentX supports automatic tool integration: provide a list of toolkits to WorkFlowGenerator, and the generator will consider them and equip the relevant agents with them when appropriate.

For instance, to enable an Arxiv toolkit:

from evoagentx.tools import ArxivToolkit

# Initialize the Arxiv toolkit for searching and retrieving papers
arxiv_toolkit = ArxivToolkit()

# Generate a workflow with the toolkit available to agents
wf_generator = WorkFlowGenerator(llm=llm, tools=[arxiv_toolkit])
workflow_graph = wf_generator.generate_workflow(goal="Find and summarize the latest research on AI in the field of finance on arXiv")

# Instantiate agents with access to the toolkit
agent_manager = AgentManager(tools=[arxiv_toolkit])
agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)

workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)
output = workflow.execute()
print(output)

In this setup, the workflow generator may assign the ArxivToolkit to relevant agents, enabling them to search arXiv and retrieve papers as part of the workflow (e.g. finding and summarizing recent publications).

Human-in-the-Loop (HITL) Support:

In advanced scenarios, EvoAgentX supports integrating human-in-the-loop interactions within your agent workflows. This means you can pause an agent’s execution for manual approval or inject user-provided input at key steps, ensuring critical decisions are vetted by a human when needed.

All human interactions are managed through a central HITLManager instance. The HITL module includes specialized agents like HITLInterceptorAgent for approval gating and HITLUserInputCollectorAgent for collecting user data.

For instance, to require human approval before an email-sending agent executes its action:

from evoagentx.hitl import HITLManager, HITLInterceptorAgent, HITLInteractionType, HITLMode

hitl_manager = HITLManager()
hitl_manager.activate()  # Enable HITL (disabled by default)

# Interceptor agent to approve/reject the DummyEmailSendAction of DataSendingAgent
interceptor = HITLInterceptorAgent(
    target_agent_name="DataSendingAgent",
    target_action_name="DummyEmailSendAction",
    interaction_type=HITLInteractionType.APPROVE_REJECT,
    mode=HITLMode.PRE_EXECUTION    # ask before action runs
)
# Map the interceptor’s output field back to the workflow’s input field for continuity
hitl_manager.hitl_input_output_mapping = {"human_verified_data": "extracted_data"}

# Add the interceptor to the AgentManager and include HITL in the workflow execution
agent_manager.add_agent(interceptor)
workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm, hitl_manager=hitl_manager)

When this interceptor triggers, the workflow will pause and prompt in the console for [a]pprove or [r]eject before continuing. If approved, the flow proceeds using the human-verified data; if rejected, the action is skipped or handled accordingly.

📂 For a complete working example, check out the HITL tutorial (hitl.md).

Demo Video

Watch on YouTube | Watch on Bilibili


In this demo, we showcase the workflow generation and execution capabilities of EvoAgentX through two examples:

  • Application 1: Intelligent Job Recommendation from Resume
  • Application 2: Visual Analysis of A-Share Stocks

✨ Final Results


  • Application 1: Job Recommendation
  • Application 2: Stock Visual Analysis

Evolution Algorithms

We have integrated some existing agent/workflow evolution algorithms into EvoAgentX, including TextGrad, MIPRO and AFlow.

To evaluate the performance, we use them to optimize the same agent system on three different tasks: multi-hop QA (HotPotQA), code generation (MBPP) and reasoning (MATH). We randomly sample 50 examples for validation and another 100 examples for testing.

Tip: We have integrated these benchmarks and the evaluation code into EvoAgentX. Please refer to the benchmark and evaluation tutorial for more details.

📊 Results

Method      HotPotQA (F1 %)    MBPP (Pass@1 %)    MATH (Solve Rate %)
Original         63.58              69.00               66.00
TextGrad         71.02              71.00               76.00
AFlow            65.09              79.00               71.00
MIPRO            69.16              68.00               72.30

Please refer to the examples/optimization folder for more details.

Applications

We use our framework to optimize existing multi-agent systems on the GAIA benchmark. We select Open Deep Research and OWL, two representative multi-agent frameworks from the GAIA leaderboard that are open-source and runnable.

We apply EvoAgentX to optimize their prompts. The performance of the optimized agents on the GAIA benchmark validation set is shown in the figure below.

Figures: Open Deep Research optimization and OWL agent optimization on the GAIA validation set.

Full Optimization Reports: Open Deep Research and OWL.

Tutorial and Use Cases

💡 New to EvoAgentX? Start with the Quickstart Guide for a step-by-step introduction.

Explore how to effectively use EvoAgentX with the following resources:

Cookbooks and companion Colab notebooks:

  • Build Your First Agent: Quickly create and manage agents with multi-action capabilities.
  • Build Your First Workflow: Learn to build collaborative workflows with multiple agents.
  • Working with Tools: Master EvoAgentX's powerful tool ecosystem for agent interactions.
  • Automatic Workflow Generation: Automatically generate workflows from natural language goals.
  • Benchmark and Evaluation Tutorial: Evaluate agent performance using benchmark datasets.
  • TextGrad Optimizer Tutorial: Automatically optimise the prompts within a multi-agent workflow with TextGrad.
  • AFlow Optimizer Tutorial: Automatically optimise both the prompts and the structure of a multi-agent workflow with AFlow.
  • Human-in-the-Loop Support: Enable HITL functionalities in your workflow.

🛠️ Follow the tutorials to build and optimize your EvoAgentX workflows.

🚀 We're actively working on expanding our library of use cases and optimization strategies. More coming soon — stay tuned!

🎯 Roadmap

  • Modularize Evolution Algorithms: Abstract optimization algorithms into plug-and-play modules that can be easily integrated into custom workflows.
  • Develop Task Templates and Agent Modules: Build reusable templates for typical tasks and standardized agent components to streamline application development.
  • Integrate Self-Evolving Agent Algorithms: Incorporate more recent and advanced agent self-evolution across multiple dimensions, including prompt tuning, workflow structures, and memory modules.
  • Enable Visual Workflow Editing Interface: Provide a visual interface for workflow structure display and editing to improve usability and debugging.

🙋 Support

Join the Community

📢 Stay connected and be part of the EvoAgentX journey!
🚩 Join our community to get the latest updates, share your ideas, and collaborate with AI enthusiasts worldwide.

  • Discord — Chat, discuss, and collaborate in real-time.
  • X (formerly Twitter) — Follow us for news, updates, and insights.
  • WeChat — Connect with our Chinese community.

Add the meeting to your calendar

📅 Click the link below to add the EvoAgentX Weekly Meeting (Sundays, 16:30–17:30 GMT+8) to your calendar:

👉 Add to your Google Calendar

👉 Add to your Tencent Meeting

👉 Download the EvoAgentX_Weekly_Meeting.ics file

Contact Information

If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!

We will respond to all questions within 2-3 business days.

Community Call

🙌 Contributing to EvoAgentX

Thanks go to these awesome contributors

We appreciate your interest in contributing to our open-source initiative. We provide a contributing guidelines document that outlines the steps for contributing to EvoAgentX; please refer to it to ensure smooth collaboration and successful contributions. 🤝🚀

Star History Chart

📖 Citation

Please consider citing our work if you find EvoAgentX helpful:

📄 EvoAgentX 📄 Survey Paper

@article{wang2025evoagentx,
  title={EvoAgentX: An Automated Framework for Evolving Agentic Workflows},
  author={Wang, Yingxu and Liu, Siwei and Fang, Jinyuan and Meng, Zaiqiao},
  journal={arXiv preprint arXiv:2507.03616},
  year={2025}
}

@article{fang2025survey,
  title={A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems},
  author={Jinyuan Fang and Yanwen Peng and Xi Zhang and Yingxu Wang and Xinhao Yi and Guibin Zhang and Yi Xu and Bin Wu and Siwei Liu and Zihao Li and Zhaochun Ren and Nikos Aletras and Xi Wang and Han Zhou and Zaiqiao Meng},
  journal={arXiv preprint arXiv:2508.07407},
  year={2025},
  url={https://arxiv.org/abs/2508.07407},
}

📚 Acknowledgements

This project builds upon several outstanding open-source projects: AFlow, TextGrad, DSPy, LiveCodeBench, and more. We would like to thank the developers and maintainers of these frameworks for their valuable contributions to the open-source community.

📄 License

Source code in this repository is made available under the MIT License.
