In the wake of the "NoScope" AI controversy, TryHackMe (THM) has officially broken its silence. In a blog post titled "Our Approach to Data at TryHackMe" released yesterday, March 25, 2026, the platform’s leadership attempted to de-escalate what many are calling a fundamental breach of student trust. However, for many in the Hacklido community, the defense has only raised more questions.
1. The "Zero Training" Pledge
The core of THM’s defense is a categorical denial of the most serious allegations:
- The Statement: "No TryHackMe user data has ever been used to train any AI models," writes Carah Els. The platform asserts that NoScope and other AI initiatives are built on separate, non-user datasets.
- The "Explicit Consent" Future: THM claims that any future pivot toward using user data for AI refinement will require an explicit opt-in framework, distancing themselves from the "silent harvesting" models used by other big-tech firms.
2. The Controversy: "Delete to Disagree"
The friction point for the Indian cybersecurity community, particularly on r/cybersecurityindia, is the current mechanism for opting out of data collection for "service improvements."
- The Loophole: While THM denies training AI on user data, they admit to reviewing anonymized user behavior at scale to "improve the experience."
- The Only Way Out: When pressed by users on Reddit and Discord, the platform's stance has been interpreted as a binary choice: if you do not agree with the current data handling terms, your only recourse is to delete your account entirely.
3. "Calculated Risk": The Acquisition Theory
Industry analysts, including Motasem Hamdan, suggest that THM’s move to push into AI (even at the cost of community goodwill) is a "cold, rational calculation."
- Acquisition Bait: By pivoting from a training site to an AI-driven security firm, THM significantly raises its valuation for potential acquisition by giants like Google or Microsoft.
- Unmatched Data: Seven years of "user journeys" (the specific ways humans fail, succeed, and pivot during a hack) is a dataset that competitors like Hack The Box simply don't have.
Hacklido Technical Takeaway: The Data Sovereignty Audit
The THM controversy is a "canary in the coal mine" for how we handle our professional tradecraft in 2026.
- Deletion is Not "Un-Training": As noted by researchers on Reddit, even if you delete your account today, if your data was used in a previous training run, it is already "baked" into the model weights. There is currently no reliable way to "un-train" an LLM on a specific user's contributions.
- Monitor Your "Service Improvements": Always look for the phrase "service improvement" in a TOS. In 2026, this is often legal shorthand for "feeding your inputs into an internal model."
- Diversify Your Lab: Don't put all your "learning telemetry" in one basket. Use local labs, open-source projects, and multiple platforms to ensure your unique problem-solving style isn't being commoditized by a single entity.
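The "monitor your service improvements" advice above can be partly automated. As a minimal sketch (not an endorsed Hacklido tool), assuming you have saved a platform's TOS as plain text, a short script can flag the data-collection phrasing discussed in this article; the phrase list and the `SAMPLE_TOS` excerpt are illustrative assumptions, not real TryHackMe terms:

```python
import re

# Phrases that, per the takeaway above, often signal broad data-collection
# rights in a TOS. Illustrative, not exhaustive.
FLAG_PHRASES = [
    r"service improvement[s]?",
    r"improve (?:the|our) (?:service|experience)",
    r"anonymi[sz]ed\b",
    r"train(?:ing)? (?:our )?models?",
]

def audit_tos(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that contain a flagged phrase."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in FLAG_PHRASES):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical excerpt standing in for a real TOS document.
SAMPLE_TOS = (
    "We may review anonymized user behavior at scale\n"
    "to support service improvements across the platform.\n"
    "Billing data is handled by a third-party processor.\n"
)

hits = audit_tos(SAMPLE_TOS)
for lineno, line in hits:
    print(f"line {lineno}: {line}")
```

Running this against the sample flags the first two lines and ignores the billing clause. A keyword scan is no substitute for reading the terms, but it makes quarterly re-audits of a platform's evolving TOS cheap enough to actually do.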