Everything you need to know about TENTROPY — how challenges work, what you'll learn, and the technology behind it.
Each challenge presents you with broken production code — the kind of bugs that crash real systems at 3 AM. Your job is to identify the flaw and implement the fix.
After solving a challenge, the Debrief unlocks with a deep-dive explanation of why the bug happened and how production systems prevent it.
Every challenge is built from 5 core components that work together to create a realistic debugging experience:
The scenario that sets up the problem. Explains what system is broken, what incident occurred, and what you need to fix. Written in Markdown with production-realistic context.
Python code with an intentional bug. This is what you see in the editor — your job is to fix it. The bugs are based on real production issues.
A hidden pytest file that validates your fix. You never see this code; you only see the test output. The tests use assertions, mocks, and timeouts to verify correctness. A sketch of what such a file might look like follows the component descriptions below.
The correct implementation. Available if you choose to reveal it (counts as "giving up"), or automatically shown after you solve the challenge.
Educational content that unlocks after completion. Explains the mechanics behind the bug, real-world impact, and production-grade best practices.
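To make the hidden tests concrete, here is a minimal sketch of what such a file might look like, assuming an invented "retry with backoff" challenge. The scenario, the retry_with_backoff function, the max_attempts parameter, and the pytest-timeout plugin are illustrative assumptions; only the pattern (import from solution, assert, mock, time-bound the test) reflects how the hidden tests are described above.

```python
# test_solution.py -- a hypothetical hidden test file for an imagined
# "retry with backoff" challenge. The scenario, function name, and
# max_attempts parameter are illustrative assumptions, not a real challenge.
from unittest.mock import Mock

import pytest

from solution import retry_with_backoff  # your submission is saved as solution.py


def test_recovers_from_transient_failures():
    # A flaky dependency that fails twice, then succeeds.
    flaky = Mock(side_effect=[TimeoutError, TimeoutError, "ok"])
    assert retry_with_backoff(flaky, max_attempts=3) == "ok"
    assert flaky.call_count == 3


@pytest.mark.timeout(5)  # assumes the pytest-timeout plugin; guards against infinite retry loops
def test_gives_up_after_max_attempts():
    always_down = Mock(side_effect=TimeoutError)
    with pytest.raises(TimeoutError):
        retry_with_backoff(always_down, max_attempts=3)
    assert always_down.call_count == 3
```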
When you click DEPLOY PATCH, your code is saved as solution.py in the sandbox. The test file imports your functions and runs assertions. If all tests pass (pytest exits with code 0), you succeed. If any assertion fails or the code times out, you'll see the error in the console.
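The pass/fail decision reduces to that pytest exit code. Below is a hedged sketch of the grading step, assuming the runner simply shells out to pytest inside the sandbox; the function name, paths, and return values are hypothetical, not the platform's actual runner.

```python
# Hypothetical grading step: run the hidden tests against your solution.py and
# map the outcome to pass/fail. Function name, paths, and return values are
# illustrative only.
import subprocess


def grade_submission(sandbox_dir: str, timeout_s: int = 60) -> str:
    try:
        result = subprocess.run(
            ["pytest", "test_solution.py", "-q"],
            cwd=sandbox_dir,        # contains solution.py and the hidden tests
            capture_output=True,
            text=True,
            timeout=timeout_s,      # mirrors the sandbox's 60-second limit
        )
    except subprocess.TimeoutExpired:
        return "FAIL: timed out"
    # pytest exits with code 0 only when every test passed.
    return "PASS" if result.returncode == 0 else "FAIL:\n" + result.stdout
```

Either way, what appears in the console is the pytest output, so assertion failures and timeouts surface to you in the same place.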
Master the failure modes of high-throughput distributed systems. Debug the logic that crashes production.
Build the robust infrastructure that wraps LLMs. Master Semantic Caching, Context Windows, and Streaming Stability.
Learn the essential tools to debug distributed AI systems. Practice Tracing, Metrics, and Structured Logging.
Complete all challenges in a track to earn a digital certificate with a unique verification ID.
Open Source: The platform core is available on GitHub. You can run your own instance with custom challenges.
You can submit up to 5 solutions every 20 minutes. This applies to both anonymous and authenticated users. The limit is enforced over a sliding window: each submission frees up its slot 20 minutes after it was made, rather than all slots resetting at once.
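To illustrate the sliding-window behavior, here is a conceptual sketch only; the class, method, and parameter names are invented and this is not the platform's actual implementation.

```python
# Conceptual sketch of a sliding-window limit: at most `limit` submissions in
# any rolling `window_s`-second span. Illustration only, not the platform's code.
import time
from collections import deque


class SlidingWindowLimiter:
    def __init__(self, limit: int = 5, window_s: int = 20 * 60):
        self.limit = limit
        self.window_s = window_s
        self.timestamps = deque()  # monotonic timestamps of recent submissions

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop submissions that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

The practical effect is that attempts come back one at a time: 20 minutes after your first submission you regain one slot, 20 minutes after your second you regain another, and so on.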
Yes. Your code runs in an isolated Firecracker micro-VM that is destroyed after execution. Each sandbox has a 60-second timeout and limited resources. We never store your submitted code beyond the session.
Yes! You can solve challenges as a guest. However, your progress is stored locally and may be lost if you clear your browser data. Sign in to sync your progress across devices and earn certificates.
All challenges are currently in Python. We chose Python because it's the dominant language for AI/ML infrastructure.
We welcome contributions! Check out our Contributing Guide for details on adding new challenges or improving the platform.