Why Arch?
Arch is built on (and by the core contributors of) Envoy proxy with the belief that:
Prompts are nuanced and opaque user requests that require the same capabilities as traditional HTTP requests: secure handling, intelligent routing, robust observability, and seamless integration with backend (API) systems for personalization, all handled outside the business logic.
Out-of-process architecture, built on Envoy
Arch builds on Envoy and runs as a self-contained process alongside your application servers. It extends Envoy's HTTP connection management subsystem, filtering, and telemetry capabilities exclusively for prompts and LLMs.
- Proven success with companies like Airbnb, Dropbox, and Google, via Envoy's production track record.
- Works with any application language, such as Python, Java, C++, Go, and PHP.
- Quick deployment and transparent upgrades.
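To make the sidecar model concrete, here is a minimal sketch of an application talking to a local Arch process over plain HTTP. The listener address, port, and model name are illustrative assumptions rather than confirmed defaults; because the interface is just HTTP, the same call works from any language.

```python
# Sketch: any language that speaks HTTP can talk to the Arch sidecar.
# The listener port and request shape below are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:12000/v1/chat/completions",  # local Arch process, not the provider
    json={
        "model": "gpt-4o-mini",  # placeholder; Arch resolves this to a configured upstream
        "messages": [{"role": "user", "content": "Hello, Arch."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Provider credentials and routing rules live in Arch's configuration, so the application itself never holds LLM API keys.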
Engineered with (fast) LLMs
Arch is engineered with specialized (sub-billion-parameter) LLMs designed for fast, cost-effective, and accurate handling of prompts.
These LLMs are best-in-class for critical prompt-related tasks like:
- Function Calling: Maps user prompts to your API operations so you can personalize GenAI applications (see the sketch after this list).
- Prompt Guards: Centrally manages safety features to prevent toxic or jailbreak prompts.
- Intent-drift detection: Identifies shifts in user intent to improve retrieval accuracy and response efficiency.
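To illustrate the function-calling flow, below is a hypothetical prompt target: Arch's function-calling model extracts structured parameters from the user's prompt and forwards them to an API you own, where your business logic runs unchanged. The route, parameter names, and stub helper are illustrative assumptions, not a required interface.

```python
# Hypothetical prompt target for Arch's function calling.
# Arch extracts parameters (here: device_id, days) from the user's prompt
# and POSTs them to this endpoint; the route and fields are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EnergyQuery(BaseModel):
    device_id: str
    days: int = 7  # default window if the prompt doesn't specify one

@app.post("/agent/energy_usage")
def energy_usage(query: EnergyQuery) -> dict:
    # Business logic stays here; prompt parsing happened upstream in Arch.
    usage = lookup_usage(query.device_id, query.days)
    return {"device_id": query.device_id, "days": query.days, "kwh": usage}

def lookup_usage(device_id: str, days: int) -> float:
    return 42.0  # stub data layer so the sketch is runnable
```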
Traffic Management
Arch offers several capabilities for LLM calls originating from your applications: a vendor-agnostic SDK for making LLM calls, smart retries on errors from upstream LLMs, and automatic cutover to other LLMs configured in Arch for continuous availability and disaster recovery.
Arch extends Envoy’s cluster subsystem to manage upstream connections to LLMs so that you can build resilient AI applications.
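From the application's point of view, this resilience is mostly invisible: the same call shape works whether Arch retries a failed request or cuts over to a backup LLM. The sketch below assumes Arch exposes an OpenAI-compatible endpoint and is configured with more than one upstream provider; the port and model names are placeholders.

```python
# Sketch: one client, multiple upstream LLMs behind Arch (names are assumptions).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12000/v1",  # Arch, not a provider endpoint
    api_key="unused",  # provider credentials live in Arch's config
)

def ask(model: str, prompt: str) -> str:
    # Retries and cutover to backup LLMs happen inside Arch; the app
    # only sees an error if every configured upstream has failed.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The call shape is identical regardless of vendor; Arch maps each model
# name to whichever provider is configured for it.
print(ask("gpt-4o-mini", "Draft a status update."))       # e.g. an OpenAI upstream
print(ask("claude-3-5-haiku", "Draft a status update."))  # e.g. an Anthropic upstream
```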
Front/Edge Gateway
There is substantial benefit in using the same software at the edge as for outbound LLM inference: observability, traffic shaping algorithms, and guardrails apply consistently in both directions. Arch is exceptionally well suited as an edge gateway for AI applications, handling TLS termination, rate limiting, and prompt-based routing.
Best-in-Class Monitoring
Arch offers several monitoring metrics that help you understand three critical aspects of your application: latency, token usage, and error rates by upstream LLM provider.
Latency measures how quickly your application responds to users, and includes metrics like time to first token (TTFT), time per output token (TPOT), and the total latency perceived by users.
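As a rough illustration of what these latency metrics mean, the sketch below measures time to first token and time per output token from the client side of a streaming call. Arch records its metrics at the gateway; the endpoint and model name here are placeholders.

```python
# Sketch: measuring TTFT and TPOT from the client side of a streaming call.
# Arch records these at the gateway; this only illustrates the definitions.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12000/v1", api_key="unused")

start = time.perf_counter()
first_token_at = None
tokens = 0

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": "Explain TTFT in one sentence."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first token
        tokens += 1  # chunks approximate tokens for illustration

end = time.perf_counter()
if first_token_at is None:
    first_token_at = end  # nothing streamed; avoid a crash in the sketch
ttft = first_token_at - start
tpot = (end - first_token_at) / max(tokens - 1, 1)  # time per output token
print(f"TTFT: {ttft:.3f}s  TPOT: {tpot:.4f}s  total: {end - start:.3f}s")
```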
End-to-End Tracing
Arch propagates trace context using the W3C Trace Context standard, specifically the traceparent header, which is compatible with OpenTelemetry.
This allows each component in the system to record its part of the request flow, enabling end-to-end tracing across the entire application. Arch ensures that developers can capture this trace data consistently and in a format compatible with various observability tools.
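The traceparent header carries a version, a 16-byte trace ID, an 8-byte parent span ID, and trace flags, all hex-encoded. Here is a minimal sketch of generating one and passing it on a request through Arch; the endpoint is a placeholder.

```python
# Sketch: attach a W3C traceparent header so Arch can join the trace.
# Format: version-traceid-parentid-flags (https://www.w3.org/TR/trace-context/)
import os
import requests

trace_id = os.urandom(16).hex()   # 16-byte trace ID
parent_id = os.urandom(8).hex()   # 8-byte span ID for this hop
traceparent = f"00-{trace_id}-{parent_id}-01"  # 01 = sampled

resp = requests.post(
    "http://localhost:12000/v1/chat/completions",  # placeholder endpoint
    headers={"traceparent": traceparent},
    json={"model": "gpt-4o-mini",
          "messages": [{"role": "user", "content": "Hi"}]},
    timeout=30,
)
# Arch records its span under this trace and forwards the context upstream,
# so the LLM call appears in the same end-to-end trace as the client request.
print(resp.status_code, traceparent)
```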