#2 Product of the week

traceAI

Open-source LLM tracing that speaks GenAI, not HTTP.

traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, and agent decisions
✓ Follows GenAI semantic conventions correctly
✓ Routes to any OTel backend: Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).
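The "follows GenAI semantic conventions" point refers to OpenTelemetry's `gen_ai.*` attribute namespace. As a rough illustration (the attribute names below come from the OTel GenAI semantic conventions; the helper function and the values are invented for this sketch, not taken from traceAI's code), a single LLM-call span might carry attributes like:

```python
# Illustrative only: attribute *names* follow OpenTelemetry's GenAI semantic
# conventions; this helper and its values are made-up example data.
def llm_span_attributes(model, prompt_tokens, completion_tokens):
    """Build the gen_ai.* attributes a tracer would attach to one LLM-call span."""
    return {
        "gen_ai.operation.name": "chat",        # kind of GenAI operation
        "gen_ai.system": "openai",              # which provider served the call
        "gen_ai.request.model": model,          # model the application requested
        "gen_ai.usage.input_tokens": prompt_tokens,
        "gen_ai.usage.output_tokens": completion_tokens,
    }

attrs = llm_span_attributes("gpt-4o", 512, 128)
# Any OTel backend can aggregate these without custom parsing:
print(attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"])  # -> 640
```

Because these names are standardized, a backend like Grafana or Datadog can query and chart them without product-specific dashboards, which is the point of the "no new dashboard" claim.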

Comments, support and feedback

  • 10 days ago

    GenAI observability has been broken for too long. traceAI gets it right; this is the kind of observability layer every AI team needs but rarely has. Smart to make this open source and build trust first. Congrats team! 🚀

  • 10 days ago

    The lack of GenAI-native semantic conventions in OpenTelemetry is a real bottleneck right now. This will be super useful!

  • 10 days ago

    With this, we believe the problem of observability and conventional standards is solved for a wider range of frameworks.

  • vrinda damani
    10 days ago

    This is amazing, great launch, and the best part: it's open source!

  • Nikhil Pareek
    10 days ago

    Hey DevHunt! 👋 I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.

    The Problem We're Solving

    If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer, and you have no idea why. Existing LLM tracing tools force you into a new vendor dashboard, but most teams already have observability infrastructure: Datadog, Grafana, Jaeger. Why add another? OpenTelemetry is the industry standard for application observability, but it was designed before generative AI existed. It understands HTTP latency; it has no concept of prompts, tokens, or reasoning chains.

    What traceAI Does

    traceAI is a proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
    - Full prompts and completions
    - Token usage per call
    - Model parameters and settings
    - RAG retrieval steps and sources
    - Agent decisions and tool executions
    - Errors with full context
    - Latency at every layer

    And it sends all of that to whatever observability backend you already use. Two lines of code:

      from traceai import trace_ai
      trace_ai.init()

    Your entire GenAI app is now traced automatically.

    Works with everything:
    - Languages: Python, TypeScript, Java, C# (with full parity)
    - Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and 35+ more
    - Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool

    Why it's different:
    - Actually follows GenAI semantic conventions. Not approximately; correctly. Your traces are readable in any OTel backend without custom dashboards or parsing.
    - Zero lock-in. Your data goes where you want it, and you can switch backends anytime. We don't even collect your traces.
    - Open source, forever. MIT licensed and community-owned. We're not building a walled garden.

    Who Should Use This

    - AI engineers debugging complex LLM pipelines
    - Platform teams who refuse to adopt another vendor
    - Anyone already running OTel who wants AI traces alongside application telemetry
    - Teams building agentic systems that need production-grade observability

    What's Next

    We're actively working on:
    - Go language support
    - Expanded framework coverage

    Try It Now

    ⭐ GitHub: https://shorturl.at/GT9KZ
    📖 Docs: https://shorturl.at/Yz8zv
    💬 Discord: https://shorturl.at/zHp8Y
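The "routes to any OTel backend" behavior described above rests on standard OpenTelemetry plumbing: OTLP exporters read the spec-defined `OTEL_EXPORTER_OTLP_*` environment variables, so retargeting traces from Jaeger to Datadog is a configuration change, not a code change. A minimal sketch of that resolution logic (the `otlp_exporter_config` helper, the default port, and the endpoint values are illustrative assumptions, not traceAI's actual implementation; only the env-var names come from the OTel specification):

```python
import os

# OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are standard
# environment variables defined by the OpenTelemetry specification; any
# OTLP-speaking backend can be targeted by changing them alone.
def otlp_exporter_config(default_endpoint="http://localhost:4318"):
    """Resolve where spans get shipped, the way an OTLP exporter would."""
    return {
        "endpoint": os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", default_endpoint),
        "headers": os.environ.get("OTEL_EXPORTER_OTLP_HEADERS", ""),
    }

# Point traces at (for example) a Jaeger collector without touching app code:
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://jaeger:4318"
print(otlp_exporter_config()["endpoint"])  # -> http://jaeger:4318
```

This is why the maker comment can claim "zero lock-in": the application never names its backend, the environment does.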

About this launch

traceAI was launched by Nikhil Pareek on April 7, 2026.

  • 25
    Upvotes
  • 4416
    Impressions
  • #2
    Week rank
