BeeAI
Discover, run, and compose AI agents from any framework and language
Alternative To
- AutoGen
- LangGraph
- CrewAI
Difficulty Level
Requires some technical experience. Moderate setup complexity.
Overview
BeeAI is an open-source platform that enables discovering, running, and composing AI agents across different frameworks and programming languages. As part of the Linux Foundation AI & Data program, BeeAI aims to create a unified ecosystem for AI agent development and deployment.
Key Features
- Framework Agnostic: Seamlessly integrate agents built with different frameworks and languages
- Agent Composition: Create complex multi-agent workflows and systems (a conceptual sketch follows this list)
- Agent Discoverability: Search and discover agents through an integrated catalog
- First-Class Ecosystem Support: Built with Python and TypeScript as primary languages
- Open Standards: Promotes interoperability across the AI agent ecosystem
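To make the idea of agent composition concrete, here is a minimal conceptual sketch in Python. The two agents and the plain-function interface are illustrative assumptions, not the BeeAI API; in BeeAI itself, agents can be written in different frameworks and languages and are composed through the platform rather than in one process.
# Conceptual sketch only: agents modeled as plain callables so the
# composition idea is visible without depending on any framework.
def research_agent(topic: str) -> str:
    # Hypothetical agent: a real one would call an LLM or external tool.
    return f"Collected notes on {topic}"

def writing_agent(notes: str) -> str:
    # Hypothetical agent: turns the researcher's notes into prose.
    return f"Draft article based on: {notes}"

def compose(*agents):
    """Chain agents so each one's output becomes the next one's input."""
    def pipeline(payload):
        for agent in agents:
            payload = agent(payload)
        return payload
    return pipeline

if __name__ == "__main__":
    workflow = compose(research_agent, writing_agent)
    print(workflow("quantum computing advances"))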
System Requirements
- CPU: 4+ cores recommended
- RAM: 8GB+ recommended
- GPU: Optional, but beneficial for running local models
- Storage: 2GB+ for the platform
Installation Guide
Prerequisites
- Basic knowledge of command line interfaces
- Git installed on your system
- Docker and Docker Compose (recommended for easy setup)
Option 1: Homebrew Installation (macOS/Linux)
Install BeeAI using Homebrew:
brew install beeai
Initialize and start BeeAI:
beeai init
beeai start
Access the application:
Open your browser and navigate to http://localhost:8080
Option 2: Docker Installation
Clone the repository:
git clone https://github.com/i-am-bee/beeai-platform.git
Navigate to the project directory:
cd beeai-platform
Start the Docker containers:
docker-compose up -d
Access the application:
Open your browser and navigate to http://localhost:8080
Option 3: Manual Installation
Clone the repository:
git clone https://github.com/i-am-bee/beeai-platform.git
Navigate to the project directory:
cd beeai-platform
Install dependencies:
# For Python components
pip install -e .
# For TypeScript components
npm install
Build and run the application:
npm run build
npm start
Access the application:
Open your browser and navigate to http://localhost:8080
Note: For detailed installation instructions specific to your operating system and environment, please refer to the official documentation at docs.beeai.dev.
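Whichever option you choose, you can confirm the platform is reachable before moving on. The snippet below is a minimal sketch that assumes the default address used in the steps above (http://localhost:8080) and relies only on the Python standard library.
# Minimal reachability check for a local BeeAI installation.
# Assumes the default address from the installation steps; adjust if you changed it.
from urllib.request import urlopen
from urllib.error import URLError

URL = "http://localhost:8080"

try:
    with urlopen(URL, timeout=5) as response:
        print(f"BeeAI responded at {URL} with HTTP {response.status}")
except URLError as exc:
    print(f"Could not reach {URL}: {exc}")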
Practical Exercise: Creating a Multi-Agent Workflow
Now that you have BeeAI installed, let’s create a simple multi-agent workflow to demonstrate its capabilities.
Step 1: Initialize Your Project
Create a new project directory and initialize it:
mkdir my-beeai-project
cd my-beeai-project
beeai init project
Step 2: Define Your Agents
Create a configuration file that defines multiple agents and their interactions:
# agents.yaml
agents:
  - name: researcher
    type: llm
    model: gpt-4
    description: "Research agent that finds information"
  - name: writer
    type: llm
    model: claude-3-sonnet
    description: "Writing agent that creates content"
  - name: reviewer
    type: llm
    model: llama3
    description: "Reviewer agent that checks for accuracy"
workflow:
  - from: researcher
    to: writer
    condition: "research_complete"
  - from: writer
    to: reviewer
    condition: "draft_complete"
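Before running the workflow, it can help to sanity-check the configuration programmatically. The sketch below validates only the structure of the example file above; it uses PyYAML (pip install pyyaml), and the field names come from this exercise rather than a formal BeeAI schema.
# Lightweight structural check of agents.yaml (illustrative, not an official schema).
import yaml  # pip install pyyaml

with open("agents.yaml") as fh:
    config = yaml.safe_load(fh)

agent_names = {agent["name"] for agent in config["agents"]}
print(f"Defined agents: {', '.join(sorted(agent_names))}")

# Every workflow edge should reference agents that are actually defined.
for edge in config.get("workflow", []):
    for endpoint in ("from", "to"):
        if edge[endpoint] not in agent_names:
            raise ValueError(f"Workflow references unknown agent: {edge[endpoint]}")

print("Workflow references look consistent.")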
Step 3: Run Your Workflow
Start your multi-agent workflow:
beeai run workflow --config agents.yaml --input "Create a summary of quantum computing advances in 2023"
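If you would rather trigger the same run from a script, for example as part of a larger pipeline, you can wrap the CLI call shown above. This is a minimal sketch; the subcommand and flags are taken from this exercise, so check beeai --help on your installation for the exact options.
# Wraps the CLI call from this step in a Python script (sketch only).
import subprocess

result = subprocess.run(
    [
        "beeai", "run", "workflow",
        "--config", "agents.yaml",
        "--input", "Create a summary of quantum computing advances in 2023",
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print(f"Workflow failed (exit code {result.returncode}):\n{result.stderr}")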
Step 4: Explore Advanced Features
Once you’re comfortable with the basics, explore some of BeeAI’s more advanced features:
- Create custom agent types (a rough sketch follows this list)
- Integrate external tools and APIs
- Develop cross-language agent communication
- Use the search catalog to discover community-built agents
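As a starting point for custom agents, the sketch below exposes an agent as a small HTTP service using only the Python standard library. The /invoke route, the JSON payload shape, and the idea of registering such a service with the platform are all assumptions made for illustration; consult the official documentation for the actual agent contract.
# Hypothetical custom agent exposed over HTTP (standard library only).
# The /invoke route and JSON payload shape are illustrative assumptions,
# not the BeeAI agent contract; see the official docs for the real one.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SummarizerAgent(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/invoke":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        text = request.get("input", "")
        # Placeholder logic: a real agent would call a model or tool here.
        reply = {"output": f"Summary ({len(text.split())} words in): {text[:80]}"}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listens locally; the platform could then reach the agent at this address.
    HTTPServer(("127.0.0.1", 8765), SummarizerAgent).serve_forever()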
Resources
Official Documentation
The official documentation at docs.beeai.dev is the best place to find detailed information about BeeAI.
Community Support
Join the community to get help, share your experiences, and contribute to the project.
Contributing
BeeAI is an open-source project and welcomes contributions from the community through the GitHub repository linked in the installation steps above (https://github.com/i-am-bee/beeai-platform).
Suggested Projects
You might also be interested in these similar projects:
- A powerful low-code tool for building and deploying AI-powered agents and workflows
- A natural language interface that lets LLMs run code on your computer
- A framework for developing context-aware applications powered by large language models (LLMs)