
Langfuse is an open-source LLM engineering platform that helps developers debug, evaluate, and continuously improve their large language model applications. Acquired by ClickHouse, Langfuse provides observability by capturing complete traces of LLM applications and agents natively through OpenTelemetry. With dedicated SDKs for Python and JS/TS, plus a public API, developers can integrate Langfuse with minimal effort, often using just a drop-in wrapper or the @observe decorator for popular integrations such as OpenAI.

Core capabilities include trace capture, evaluation dataset building, centralized prompt management, and metric tracking. By letting teams inspect execution failures, annotate outputs, and iterate safely in a built-in playground, Langfuse shortens the time needed to optimize complex agentic behaviors. Because it is open source, teams can use the managed cloud platform or self-host the infrastructure entirely, keeping data compliance and security under their own control for enterprise workloads.
Captures hierarchical telemetry data automatically using the @observe decorator, allowing developers to trace nested LLM and agentic function calls.
Centralized repository for creating, versioning, and deploying prompts outside of the core application codebase.
Enables manual annotations and automated scoring of LLM outputs to systematically build and assess evaluation datasets.
1. Sign up for Langfuse Cloud or deploy via self-hosting
2. Install the Python or JS/TS SDK
3. Configure your API keys
4. Add the @observe decorator or use the drop-in OpenAI wrapper to capture telemetry
5. Monitor nested calls, inputs, outputs, and latencies directly in the Langfuse dashboard
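The configuration step above can be sketched as a shell snippet. The environment variable names follow Langfuse's documented convention; the key values are placeholders, and the install command is shown as a comment.

```shell
# Install the SDK first (requires network access):
#   pip install langfuse openai
# Then configure API keys from your Langfuse project settings.
# The values below are illustrative placeholders.
export LANGFUSE_PUBLIC_KEY="pk-lf-your-public-key"
export LANGFUSE_SECRET_KEY="sk-lf-your-secret-key"
export LANGFUSE_HOST="https://cloud.langfuse.com"
```

For a self-hosted deployment, LANGFUSE_HOST would instead point at your own instance's URL.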
Verified feedback from other users.
"Highly praised for its open-source nature, comprehensive tracing capabilities, and ease of integration into existing stacks."
Post questions, share tips, and help other users.