Effortlessly build AI apps that can answer questions and help users get things done. Arch is the AI-native proxy that handles the pesky heavy lifting so that you can move faster in building agentic apps, prevent harmful outcomes, and rapidly incorporate the latest models.
Prompts are nuanced and opaque user requests that require the same capabilities as traditional HTTP requests: secure handling, intelligent routing, robust observability, and integration with backends (APIs and tools), all of which sit outside your core business logic.
Arch integrates purpose-built LLMs to handle the critical but pesky heavy lifting of building agentic apps: fast request clarification, query routing, and data extraction from user requests. This lets you move faster in building enterprise-worthy agentic apps without the taxing prompt engineering and systems development work.
Arch centralizes guardrails to prevent jailbreak attempts and ensure safe user interactions without requiring you to write a single line of code. You can also define and configure custom guardrails to keep interactions on the topics and in the tone relevant to your application.
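Guardrails are declared in Arch's configuration file rather than in application code. A minimal sketch of what a jailbreak guard might look like (the field names and rejection message here are illustrative and may not match the current schema exactly):

```yaml
# Illustrative prompt-guard configuration for Arch.
prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        # Message returned to the user when a jailbreak attempt is detected.
        message: "I can only help with requests related to this application."
```

Because the guard lives in the proxy's configuration, every prompt is screened before it ever reaches your application code or the upstream LLM.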
Build and experiment with multiple LLMs or model versions through a single, consistent interface. Arch centralizes access controls and offers high throughput and resiliency for traffic to 100+ LLMs, all without you having to write a single line of code.
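Model access is also configuration-driven: you list providers once and Arch routes traffic to them behind a single interface. A hedged sketch of such a provider list (provider names, keys, and the `default` flag are illustrative and may differ from the current schema):

```yaml
# Illustrative LLM provider configuration for Arch.
llm_providers:
  - name: gpt-4o
    provider: openai
    access_key: $OPENAI_API_KEY   # resolved from the environment
    model: gpt-4o
    default: true                 # used when no explicit model is requested
  - name: mistral-local
    provider: ollama              # hypothetical local provider entry
    model: mistral
```

Swapping or adding a model then becomes a config change rather than an application change.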
Arch emits monitoring metrics related to prompts and LLMs. These metrics are compatible with OpenTelemetry destinations such as Signoz, Honeycomb.io, and Jaeger, helping you understand all critical aspects of your AI application.
Arch is designed to help you move beyond basic LLM interactions into sophisticated scenarios like multi-turn conversations and faster, more accurate RAG.
Arch is an intelligent (edge and LLM) proxy server designed for agents, helping you focus on core business objectives. Arch handles the critical but pesky tasks related to handling and processing prompts, including detecting and rejecting jailbreak attempts, intelligent task routing for improved accuracy, mapping user requests into "backend" functions, and managing the observability of prompts and LLMs in a centralized way.
No lock-in. No black boxes. Just an open, intelligent (edge and LLM) proxy for building smarter, agentic AI applications. Created by contributors to Envoy Proxy, Arch brings enterprise-grade reliability to prompt orchestration, while giving you the flexibility to shape, extend, and integrate it into your AI workflows.
Arch takes a dependency on Envoy and is a self-contained process designed to run alongside your application servers. Arch extends Envoy's HTTP connection management subsystem, filtering, and telemetry capabilities exclusively for prompts and LLMs.
Engineered with purpose-built LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function calling and parameter extraction, so you can build more task-accurate agentic applications.
Function Calling is a powerful feature in Arch that allows your application to dynamically execute backend functions or services based on user prompts. This enables seamless integration between natural language interactions and backend operations, turning user inputs into actionable results.
Prompt Targets are a core concept in Arch, enabling developers to define how different types of user prompts are processed and routed.
By defining prompt targets, you can separate business logic from the complexities of prompt processing and handling, so that you can focus on improving the quality of your application and keep a cleaner separation of concerns in your code base.
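A prompt target pairs a natural-language description with the backend endpoint that should serve it, along with the parameters Arch should extract from the prompt. A hedged sketch of one such target (the `get_weather` name, parameter list, and endpoint path are hypothetical):

```yaml
# Illustrative prompt-target configuration for Arch.
prompt_targets:
  - name: get_weather
    description: Get the current weather for a location
    parameters:
      - name: location
        type: str
        description: City or region to look up
        required: true
    endpoint:
      name: api_server   # a backend cluster defined elsewhere in the config
      path: /weather     # the route Arch calls with the extracted parameters
```

When a user asks "what's the weather in Seattle?", Arch matches the prompt to this target, extracts `location`, and calls the endpoint on your behalf.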
Arch offers a delightful developer experience with a simple configuration file that describes the types of prompts your agentic app supports, the APIs to plug in for agentic scenarios (including retrieval queries), and your choice of LLMs.
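Because Arch maps prompts to backend functions, your application code receives structured parameters instead of raw prompts. A minimal sketch of what such a backend handler might look like, assuming a hypothetical `get_weather` prompt target; the payload shape and data are illustrative, not Arch's actual wire format:

```python
import json

# Hypothetical backend handler for a "get_weather" prompt target.
# A proxy like Arch extracts structured parameters from the user's
# prompt and POSTs them to your API; your code only sees clean
# arguments, never the raw prompt. All names here are illustrative.

FAKE_FORECASTS = {"seattle": "rainy, 11C", "dubai": "sunny, 34C"}

def get_weather(params: dict) -> dict:
    """Handle extracted parameters for the weather prompt target."""
    location = params.get("location", "").strip().lower()
    forecast = FAKE_FORECASTS.get(location)
    if forecast is None:
        return {"error": f"no forecast for {location!r}"}
    return {"location": location, "forecast": forecast}

# Simulate the JSON body a proxy might send after parameter extraction.
payload = json.loads('{"location": "Seattle"}')
print(get_weather(payload))
```

The handler stays free of prompt-parsing logic, which is the separation of concerns the configuration file is meant to buy you.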
Protect, observe, and build agentic tasks in minutes
Configure LLM providers, guardrails, and the prompt scenarios you would like to build
A Docker image that lets you deploy Arch in any environment: AWS, on-premises, or locally
Ship enterprise-grade agentic apps that work at ANY scale