But Rohan can’t. He keeps asking why. Why does the algorithm always choose the solution that benefits the largest demographic but crushes the smallest? Why does it never allow for creative failure? One night, while trying to download a practice Crucible scenario, Rohan’s cracked smartwatch accidentally syncs with the CSC’s quantum core. A cascade of data flows into the watch—not study material, but something forbidden: the original source code of the CSC evaluation system.
But as they are about to wipe his records, Rohan holds up his father’s watch. “Before you do, run Project Phoenix.”
But Meera, who had followed the guards, steps forward. She points to the screen. “Sir, look at the secondary data.”
Hidden within are the “Stratification Algorithms”—the secret logic that doesn’t just test students but shapes them. Rohan discovers the truth: the CSC’s 12th Standard isn’t designed to unlock potential. It’s designed to sort students into pre-determined socio-economic layers: Blue for governance, Green for tech, Red for manual services. The Crucible isn’t a test of problem-solving; it’s a loyalty check. The system rewards students who make predictable, risk-free choices.
His best friend, Meera, is a “Blue-Stream Strud”—destined for AI ethics and governance. She tries to help Rohan practice for The Crucible, a simulation where students must solve a complex, unpredictable civic crisis. “Just trust the algorithm, Rohan,” she pleads. “It’s trained on a million past crises. Input the variables, pick the highest-probability solution.”