Agent Infrastructure
The plumbing layer for AI agents: memory systems, tool integrations, observability, sandboxing, and identity management. Essential building blocks for production agent deployments.
19 tools in this category
Relevance AI is a low/no-code platform for creating AI agents that can complete tasks on your behalf. It’s aimed at teams that want to automate support, sales, and internal workflows without building everything from scratch.
Anyscale is a fully managed Ray platform that removes the infrastructure work of building and deploying AI applications. It helps teams run Ray jobs, services, and workflows with autoscaling, monitoring, and API-driven cluster management.
Factory provides Droid, an AI agent you can use in coding workflows and IDE integrations like Zed. It connects through login or API key setup and bills usage in Standard Tokens across multiple models.
Open WebUI is a self-hosted AI platform for running and organizing chat across local and cloud models. It also lets you extend workflows with Python and share prompts, tools, and functions through its community.
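A minimal sketch of what extending Open WebUI with Python can look like: tools are conventionally written as a Python file defining a `Tools` class, whose type-annotated, docstringed methods the platform exposes to the model. The `word_count` tool below is a hypothetical example, not a built-in.

```python
# Hypothetical Open WebUI-style tool file: a `Tools` class whose typed,
# documented methods become callable tools for the model.
class Tools:
    def word_count(self, text: str) -> int:
        """Count the number of whitespace-separated words in `text`."""
        return len(text.split())
```

In practice such a file is pasted or imported into Open WebUI's tools workspace, or shared through its community.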
Open Interpreter is a desktop AI agent that helps you work with code, documents, and files instead of just chatting about them. It can also be run in sandboxed environments like Docker or E2B for safer execution.
Together AI is a cloud platform for running, fine-tuning, and deploying open-source AI models. It is aimed at developers and teams that need model inference, GPU compute, storage, and training infrastructure in one place.
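Together AI's inference service exposes an OpenAI-compatible chat completions endpoint, so calling a hosted model is mostly a matter of building the familiar JSON request. A sketch of assembling that payload (the model name is illustrative, and a real request would add an `Authorization: Bearer <API_KEY>` header):

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for an OpenAI-style chat completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# POSTing this body to the chat completions endpoint returns a completion.
body = build_chat_request("meta-llama/Llama-3-8b-chat-hf", "Hello!")
```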
LocalAI’s Realtime API lets you build voice and text experiences over WebSocket or WebRTC using an OpenAI-compatible protocol. It is aimed at developers who want a self-hosted, configurable realtime layer with their own VAD, STT, LLM, and TTS components.
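Because the protocol is OpenAI-compatible, a client configures a realtime session by sending JSON events over the WebSocket connection. A sketch of a `session.update` event (the event shape follows the OpenAI realtime protocol; the specific instruction text is illustrative):

```python
import json

# A session.update event asks the server to reconfigure the current
# realtime session, e.g. which modalities to use and how to behave.
event = {
    "type": "session.update",
    "session": {
        "modalities": ["audio", "text"],
        "instructions": "You are a helpful voice assistant.",
    },
}

# Sent as a text frame over the WebSocket connection.
message = json.dumps(event)
```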
This integration connects a Lovable app to Twitch so it can read live stream data, track channel activity, and send chat messages. It’s useful for overlays, dashboards, and creator tools that need Twitch API access.
Fireworks AI is a model hosting and inference platform for teams building with open and proprietary models. It covers serverless inference, fine-tuning, embeddings, speech-to-text, and on-demand GPU deployments.
Replicate lets you run and fine-tune existing models, and deploy your own custom models, through an API. It’s aimed at developers who want to add image, speech, music, video, or LLM capabilities without managing model hosting themselves.
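Replicate's HTTP API creates a prediction by POSTing a model version and an input dict to its predictions endpoint. A sketch of building that request body (the version hash and input field are placeholders, and a real call also needs an `Authorization: Bearer <token>` header):

```python
import json

def build_prediction_request(version: str, **inputs) -> str:
    """Build the JSON body for creating a Replicate prediction."""
    return json.dumps({"version": version, "input": inputs})

# POSTing this body to the predictions endpoint starts an async prediction;
# the response includes a URL to poll for the result.
body = build_prediction_request("abc123", prompt="a photo of an astronaut")
```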
LM Link lets you access models running on other devices as if they were local. It is built for LM Studio users who want to load remote models through the app, local server, API, or SDKs without exposing devices to the public internet.