
Agent Infrastructure

The plumbing layer for AI agents: memory systems, tool integrations, observability, sandboxing, and identity management. Essential building blocks for production agent deployments.

19 tools in this category

Decagon is an enterprise customer support AI platform with voice support and guardrails. It helps teams deploy agents across support workflows with authentication, encryption, audit logs, and resiliency controls.

Enterprise
API
Chrome Extension
+4

Relevance AI is a low/no-code platform for creating AI agents that can complete tasks on your behalf. It’s aimed at teams that want to automate support, sales, and internal workflows without building everything from scratch.

Enterprise
iOS
API
+3

Anyscale is a fully managed Ray platform that removes the infrastructure work from building and deploying AI applications. It helps teams run Ray jobs, services, and workflows with autoscaling, monitoring, and API-driven cluster management.

Paid
iOS
API
+4

Kiro is an autonomous agent that stores task context, chat history, and code changes so it can carry out multi-step work. It is hosted on AWS and designed for developers who want an agent that retains context while it works.

Enterprise
API
Chrome Extension
+3

Factory provides Droid, an AI agent you can use in coding workflows and IDE integrations such as Zed. It connects through login or API-key setup and bills usage in Standard Tokens across multiple models.

MCP Support
Paid
iOS
+5

Open WebUI is a self-hosted AI platform for running and organizing chat across local and cloud models. It also lets you extend workflows with Python and share prompts, tools, and functions through its community.

iOS
Voice
B2B
+4

Open Interpreter is a desktop AI agent that helps you work with code, documents, and files instead of just chatting about them. It can also be run in sandboxed environments like Docker or E2B for safer execution.

Open Source
Android
Code Execution
+4

Together AI is a cloud platform for running, fine-tuning, and deploying open-source AI models. It is aimed at developers and teams that need model inference, GPU compute, storage, and training infrastructure in one place.

iOS
API
B2B
+4

Fay can send real-time webhook events when a research run changes status, completes, or fails. It is useful if you want to trigger downstream workflows from research results instead of polling for updates.

Web
API
Integrations
+4
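A webhook consumer for this kind of event stream can be a small dispatcher keyed on the event type. A minimal sketch follows; the payload shape and event names (`research.completed`, the `id` field) are assumptions for illustration, not Fay's documented schema.

```python
import json

# Hypothetical payload -- Fay's actual webhook schema may differ.
SAMPLE_EVENT = json.dumps({"event": "research.completed", "id": "r_123"})

def handle_event(raw: str) -> str:
    """Route a webhook payload to a downstream action by event type."""
    event = json.loads(raw)
    kind = event.get("event", "")
    if kind.endswith(".completed"):
        return f"fetch results for {event['id']}"
    if kind.endswith(".failed"):
        return f"alert on {event['id']}"
    return "ignore"

print(handle_event(SAMPLE_EVENT))  # fetch results for r_123
```

In practice this function would sit behind an HTTP endpoint that verifies the webhook's signature before dispatching.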

LocalAI’s Realtime API lets you build voice and text experiences over WebSocket or WebRTC using an OpenAI-compatible protocol. It is aimed at developers who want a self-hosted, configurable realtime layer with their own VAD, STT, LLM, and TTS components.

iOS
API
Voice
+4

Ollama helps you download, run, and manage open models from the terminal, with integrations for coding, chat, RAG, and automation tools. It emphasizes keeping data on your own machine while still offering cloud hardware for larger models.

iOS
API
Integrations
+4
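Once a model is running, Ollama also exposes a local HTTP API (by default on port 11434) that tools integrate against. A minimal non-streaming call might look like the sketch below; the model name `llama3.2` is an example and must already be pulled.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for Ollama's HTTP API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires `ollama serve` and a pulled model):
# print(generate("llama3.2", "Why is the sky blue?"))
```

With `stream` left at its default, the server instead returns a sequence of partial responses, which is what chat UIs typically consume.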

Adds long-term memory to conversational apps built with the Vercel AI SDK. Use it to store and retrieve user context across chats and keep responses consistent over time.

iOS
API
Integrations
+4

Connect a Lovable app to Twitch so it can read live stream data, track channel activity, and send chat messages. It’s useful for overlays, dashboards, and creator tools that need Twitch API access.

Web
API
Integrations
+4

Fireworks AI is a model hosting and inference platform for teams building with open and proprietary models. It covers serverless inference, fine-tuning, embeddings, speech-to-text, and on-demand GPU deployments.

Paid
Enterprise
iOS
+4

GroqCloud is an AI inference platform for developers that focuses on low latency and predictable spend. It provides API access to text, audio, vision, and image-to-text models, with free, developer, and enterprise plans.

iOS
API
For Developers
+4

Replicate lets you run and fine-tune models, and deploy custom models through an API. It’s aimed at developers who want to add image, speech, music, video, or LLM capabilities without managing model hosting themselves.

iOS
API
Vision
+4

LM Link lets you access models running on other devices as if they were local. It is built for LM Studio users who want to load remote models through the app, local server, API, or SDKs without exposing devices to the public internet.

iOS
API
B2B
+4

Composio’s schema modifiers let you rewrite a tool’s schema before it reaches an agent. Use them to adjust descriptions, add or hide parameters, and set defaults when the raw tool schema needs guardrails.

API
Chrome Extension
Integrations
+3
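The idea behind schema modifiers can be shown with a small transform over a JSON-Schema-style tool definition. This is an illustrative sketch only, not Composio's actual SDK: the function name, the `send_email` tool, and its parameters are all hypothetical.

```python
import copy

def modify_schema(schema: dict, *, description=None, hide=(), defaults=None) -> dict:
    """Return a copy of a tool schema with guardrails applied:
    a rewritten description, hidden parameters, and injected defaults."""
    out = copy.deepcopy(schema)
    if description:
        out["description"] = description
    props = out.setdefault("parameters", {}).setdefault("properties", {})
    for name in hide:
        props.pop(name, None)          # parameter never reaches the agent
    for name, value in (defaults or {}).items():
        props.setdefault(name, {})["default"] = value
    return out

raw = {
    "name": "send_email",
    "description": "Send an email.",
    "parameters": {"properties": {"to": {}, "bcc": {}, "priority": {}}},
}
safe = modify_schema(raw,
                     description="Send an email to an approved recipient.",
                     hide=("bcc",),
                     defaults={"priority": "normal"})
```

Returning a copy rather than mutating in place matters here: the raw schema stays intact for other consumers while the agent only ever sees the guarded version.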

llamafile packages an LLM and runtime into one file you can download and run locally. It is aimed at developers and end users who want offline, no-install model execution across common operating systems.

Desktop
B2B
CLI
+4