100 AI Agents & Tool Use resources for developers
Building production-ready AI agents requires moving beyond simple prompt-response loops toward stateful orchestration, robust tool-calling schemas, and sandboxed execution environments. This resource guide focuses on the tools and patterns necessary to implement autonomous workflows using frameworks like LangGraph and standards like the Model Context Protocol (MCP).
Agent Frameworks & Orchestration
- 1. LangGraph (advanced, high): A library for building stateful, multi-actor applications with LLMs. Use this when you need cycles, persistence, and fine-grained control over agent state transitions.
- 2. CrewAI (intermediate, high): Framework for orchestrating role-playing autonomous agents. Ideal for collaborative tasks where specific personas need to hand off work to one another.
- 3. PydanticAI (intermediate, medium): A model-agnostic framework from the Pydantic team. It uses Python type hints for tool definitions and structured output validation, ensuring type-safe agent responses.
- 4. AutoGen (advanced, standard): Microsoft's framework for multi-agent conversations. Best suited for complex problem solving where agents require different configurations and can converse autonomously.
- 5. Inngest Agent Runtime (intermediate, high): Provides durable execution for agentic workflows. Use this to handle long-running agent tasks that require retries, state recovery, and human-in-the-loop pauses.
- 6. Bee Agent Framework (intermediate, standard): An open-source framework from IBM designed for scaling agents. It emphasizes governance and structured workflows for enterprise environments.
- 7. OpenAI Swarm (beginner, medium): An educational framework for lightweight multi-agent orchestration. Use this as a reference for implementing hand-offs and routine patterns without heavy dependencies.
- 8. Vercel AI SDK Core (beginner, high): Provides a unified `generateText` and `streamText` interface with built-in tool-calling support for OpenAI, Anthropic, and Google Gemini.
- 9. AgentStack (beginner, medium): A CLI tool for scaffolding agent projects. It automates the setup of frameworks like CrewAI or LangChain with pre-configured observability and environment variables.
- 10. Camel-AI (intermediate, standard): A communicative agent framework focused on role-playing to solve tasks. Useful for generating synthetic data or simulating user-agent interactions.
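The frameworks above differ in ergonomics, but most of them reduce to the same control loop: an agent processes shared state, replies, and either finishes or hands control to another agent. A minimal, dependency-free sketch of that hand-off pattern (all names and the `handle` signature are illustrative, not any framework's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    handle: callable  # (state) -> (reply, name of next agent or None)

@dataclass
class State:
    messages: list = field(default_factory=list)

def run(agents: dict, start: str, user_input: str, max_turns: int = 5):
    """Run agents until one finishes (returns no hand-off) or max_turns."""
    state = State(messages=[("user", user_input)])
    current = start
    for _ in range(max_turns):
        reply, handoff = agents[current].handle(state)
        state.messages.append((current, reply))
        if handoff is None:
            return state
        current = handoff  # transfer control to the next agent
    return state

# Two toy agents: a triage agent that routes, and a billing specialist.
triage = Agent("triage", lambda s: ("routing to billing", "billing"))
billing = Agent("billing", lambda s: ("refund issued", None))

final = run({"triage": triage, "billing": billing}, "triage", "I want a refund")
for speaker, text in final.messages:
    print(f"{speaker}: {text}")
```

LangGraph models this same idea as a graph with explicit edges and checkpointed state; Swarm and CrewAI express it as hand-offs between personas.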
Tool Execution & Connectivity
- 1. Model Context Protocol (MCP) (intermediate, high): An open standard from Anthropic that allows agents to connect to external data sources and tools (like Slack, GitHub, or Postgres) using a unified interface.
- 2. E2B Code Interpreter (intermediate, high): A cloud-based, sandboxed environment for agents to execute code. Essential for data analysis tasks where the agent needs to run Python or JS securely.
- 3. Composio (beginner, medium): A platform providing over 100 pre-built tool integrations (GitHub, Jira, Salesforce) formatted specifically for LLM function calling.
- 4. Tavily Search API (beginner, high): A search engine optimized for LLMs. It returns clean, concise content snippets instead of raw HTML, reducing token usage and hallucination.
- 5. Firecrawl (beginner, medium): Converts websites into clean markdown for LLM consumption. Use this as a tool for agents that need to crawl and reason over live web documentation.
- 6. Toolhouse (intermediate, standard): A serverless tool marketplace that injects tools directly into your LLM prompt, handling authentication and execution logic on the backend.
- 7. MultiOn (intermediate, medium): An agentic web browser API. Allows agents to perform actions on the web (e.g., booking flights, filling forms) using a high-level API.
- 8. Browserbase (advanced, high): A headless browser platform specifically for AI agents. Includes built-in stealth mode and session recording for debugging agent actions.
- 9. Pydantic Tool Schemas (beginner, high): The standard pattern for defining tool parameters. Use Pydantic's `model_json_schema()` to generate the JSON spec required by OpenAI and Anthropic.
- 10. EXA (formerly Metaphor) (intermediate, standard): A neural search engine that allows agents to find links based on content similarity rather than keyword matching.
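All of these integrations ultimately hand the model a JSON Schema describing each tool's parameters. Pydantic's `model_json_schema()` generates that spec for you; to show the shape providers expect, here is a stdlib-only sketch that derives the same kind of spec from an ordinary function's type hints (the `tool_schema` helper and `get_weather` example are illustrative, not a real library API):

```python
import inspect

# Map Python annotations to JSON Schema types. A real implementation
# (e.g. Pydantic) also handles nesting, defaults, enums, and validation.
JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Build an OpenAI-style function-calling spec from type hints."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> the model must supply it
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def get_weather(city: str, units: str = "celsius") -> str:
    """Look up the current weather for a city."""
    ...

schema = tool_schema(get_weather)
# schema["parameters"]["required"] == ["city"]
```

With Pydantic, defining a `BaseModel` for the parameters and calling `Model.model_json_schema()` produces the same `type`/`properties`/`required` structure, which is what OpenAI and Anthropic tool definitions consume.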
Observability & Reliability
- 1. LangSmith (intermediate, high): A platform for tracing and evaluating LLM applications. Critical for debugging agent loops where you need to see exactly which tool was called and why.
- 2. AgentOps (beginner, medium): SDK specifically for monitoring autonomous agents. Tracks tool usage, success rates, and session costs in real time.
- 3. Arize Phoenix (intermediate, standard): Open-source observability for LLMs. Provides OpenInference-compatible tracing to visualize agentic workflows and detect performance bottlenecks.
- 4. Promptfoo (beginner, high): A CLI tool to run test cases against your agent. Use it to verify that agents call the correct tools given specific user input scenarios.
- 5. Helicone (beginner, medium): An LLM observability proxy. Use it to cache agent requests, monitor latency, and set up custom properties for tracking specific agent IDs.
- 6. Literal AI (intermediate, standard): A platform for agent evaluation and monitoring. It focuses on chain-of-thought and multi-step reasoning visualization for complex agents.
- 7. Braintrust (advanced, medium): An enterprise-grade platform for evaluating and tracking LLM workflows. It includes a high-speed logging SDK and a web UI for side-by-side comparisons.
- 8. Portkey Gateway (intermediate, standard): An AI gateway that provides load balancing and failover between LLM providers, ensuring your agent doesn't crash if a specific API is down.
- 9. Agent Protocol (advanced, standard): A standardized API specification for interacting with AI agents. Implementing this allows your agent to work with various third-party monitoring tools.
- 10. Weights & Biases Weave (intermediate, medium): A lightweight toolkit for tracking and versioning LLM inputs and outputs, integrated into the popular ML project management suite.
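Under the hood, most of these observability SDKs wrap each tool invocation in a span that records the tool name, arguments, latency, and outcome. A minimal sketch of that pattern, assuming a local in-memory trace instead of a hosted backend (the `traced_tool` decorator and `TRACE` list are illustrative, not any vendor's API):

```python
import functools
import time

# In a real SDK (LangSmith, AgentOps, Phoenix) spans are exported to a
# collector; here they accumulate in a local list for inspection.
TRACE: list[dict] = []

def traced_tool(fn):
    """Record name, args, latency, and status for every call to fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"tool": fn.__name__, "args": args, "kwargs": kwargs}
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            span["status"] = "ok"
            return result
        except Exception as exc:
            span["status"] = f"error: {exc}"
            raise
        finally:
            span["latency_ms"] = (time.perf_counter() - start) * 1000
            TRACE.append(span)
    return wrapper

@traced_tool
def search(query: str) -> str:
    return f"results for {query!r}"

search("LLM observability")
# TRACE[0]["tool"] == "search", TRACE[0]["status"] == "ok"
```

Capturing failures in the same span (rather than only successes) is what makes these traces useful for debugging agent loops: you can see which tool call broke a run, not just which ones completed.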