
Inspect
The open-source framework for rigorous large language model evaluation and safety testing.
Has API
Pricing: Free
LLM Benchmarking
Safety Red Teaming
Agentic Workflow Testing
