UsageGuard API Documentation
UsageGuard API enables you to access open-source, third-party (e.g., OpenAI, Meta, Mistral, or Anthropic), or proprietary models with built-in safeguards, advanced moderation, cost control, end-user tracking, and usage reporting. You can either use our SaaS API or self-host your own instance.
UsageGuard Data Flow
This diagram illustrates the data flow between your application, the UsageGuard API, the UsageGuard Dashboard, and the underlying LLMs.
- Your Application: Sends an inference request to UsageGuard.
- UsageGuard:
  - Receives and processes the request.
  - Applies moderation and compliance policies (content filtering, PII redaction, wordlist blocking, end-user tracking, etc.). For more details, see our Moderation & Compliance docs.
  - Forwards your inference request to your chosen large language model.
  - Sends the response back to your application (a minimal request sketch in code follows this list).
- Dashboard / Management APIs:
  - Configure and deploy policies and connection settings to UsageGuard.
  - Provide observability (logs, metrics, usage).
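In practice, this flow is a single HTTP round trip from your application. The sketch below illustrates it in Python; the base URL, endpoint path, header, and payload fields (including `end_user_id`) are placeholder assumptions for this example rather than the documented schema, so consult the API reference for the actual contract.

```python
import requests

# Note: the base URL, endpoint path, and payload fields below are illustrative
# placeholders for this sketch, not the documented UsageGuard schema.
UG_BASE_URL = "https://api.usageguard.com"  # hypothetical base URL
UG_API_KEY = "ug-..."                       # your UsageGuard API key

def run_inference(prompt: str, model: str, end_user_id: str) -> dict:
    """Send one inference request through UsageGuard and return the parsed JSON."""
    response = requests.post(
        f"{UG_BASE_URL}/v1/inference/chat",  # assumed endpoint path
        headers={"Authorization": f"Bearer {UG_API_KEY}"},
        json={
            "model": model,                                     # chosen LLM
            "messages": [{"role": "user", "content": prompt}],  # consistent request shape
            "end_user_id": end_user_id,                         # enables end-user tracking
        },
        timeout=60,
    )
    response.raise_for_status()  # policy blocks and quota errors surface here
    return response.json()

if __name__ == "__main__":
    print(run_inference("Summarize our refund policy.", "gpt-4o", "user-123"))
```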
Key Features
Integration
- Unified Inference API: Access every supported model through a single API endpoint with a consistent request/response format (see the sketch below).
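Because the request and response shape stays the same across providers, switching models is a one-parameter change. A minimal sketch, reusing the hypothetical `run_inference` helper from the data-flow section (the model identifiers are examples):

```python
# Reuses the hypothetical run_inference() helper from the data-flow sketch.
# Only the model identifier changes; the request shape stays identical.
for model in ["gpt-4o", "claude-3-5-sonnet", "mistral-large"]:  # example identifiers
    reply = run_inference("Say hello in one sentence.", model, "user-123")
    print(model, "->", reply)
```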
Moderation and Compliance
- Content Moderation: Built-in filters for prohibited content, NSFW material, and more.
- PII Detection: Identify and manage personally identifiable information.
- Wordlist Blocking: Block requests that contain words from a custom blocklist (see the sketch after this list).
- End-User Tracking: Monitor and analyze user interactions.
- Audit Logging: Comprehensive logging for compliance and debugging.
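When a request trips one of these policies (for example, a blocklisted word or disallowed content), your application should handle the rejection gracefully. The sketch below assumes the block surfaces as an HTTP 403 with a JSON `error` field; the status code and error shape are assumptions for this example, and it reuses the hypothetical `run_inference` helper from the data-flow section.

```python
import requests

def safe_inference(prompt: str, model: str, end_user_id: str) -> str:
    """Run an inference request and turn policy blocks into a readable message."""
    try:
        result = run_inference(prompt, model, end_user_id)  # hypothetical helper defined earlier
        return result.get("content", "")                    # assumed response field
    except requests.HTTPError as exc:
        # Assumption for this sketch: moderation policies (content filters,
        # wordlist blocking, PII rules) reject the request with HTTP 403.
        if exc.response is not None and exc.response.status_code == 403:
            detail = exc.response.json().get("error", "blocked by policy")
            return f"Request blocked before reaching the model: {detail}"
        raise
```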
Cost and Usage Management
- Cost Management: Track the costs of your requests to the nearest cent and set limits on token usage and request frequency.
- Usage Reporting and Analytics: Get detailed reports and insights on your API usage, including request volume, token usage, cost analysis, request patterns, error rates, and user behavior.
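As a rough illustration of how per-request cost tracking works, the sketch below derives a cost estimate from token counts in the response. The `usage` field names and the per-1K-token prices are assumptions for the example, not UsageGuard's schema or actual provider pricing; UsageGuard's own usage reporting remains the authoritative source.

```python
# Illustrative cost arithmetic only: field names and prices are placeholders.
PRICES_PER_1K_TOKENS = {"gpt-4o": {"input": 0.0025, "output": 0.01}}  # example USD rates

def estimate_cost(response: dict, model: str) -> float:
    """Estimate the cost of one request from its token usage, rounded to the nearest cent."""
    usage = response.get("usage", {})            # assumed response field
    input_tokens = usage.get("input_tokens", 0)
    output_tokens = usage.get("output_tokens", 0)
    rates = PRICES_PER_1K_TOKENS[model]
    cost = input_tokens / 1000 * rates["input"] + output_tokens / 1000 * rates["output"]
    return round(cost, 2)
```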
Get Support
Need help? Our support team is ready to assist you.