"The world is in peril," Anthropic's Safeguards Research Team lead wrote in his resignation letter
A leading artificial intelligence safety researcher, Mrinank Sharma, has resigned from Anthropic with an enigmatic warning about global "interconnected crises," announcing his plans to become "invisible for a period of time."
Sharma, an Oxford graduate who led the Claude chatbot maker's Safeguards Research Team, posted his resignation letter on X Monday, describing a growing personal reckoning with "our situation."
"The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," Sharma wrote to colleagues.
The departure comes amid mounting tensions surrounding the San Francisco-based AI lab, which is simultaneously racing to develop ever more powerful systems while its own executives warn that those same technologies could harm humanity.
It also follows reports of a widening rift between Anthropic and the Pentagon over the military's desire to deploy AI for autonomous weapons targeting without the safeguards the company has sought to impose.
Sharma's resignation, which lands days after Anthropic released Opus 4.6, a more powerful iteration of its flagship Claude tool, hinted at internal friction over safety priorities.
"Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions," he wrote. "I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too."
The researcher's team was established just over a year ago with a mandate to tackle AI security threats, including "model misuse and misalignment," bioterrorism prevention, and "catastrophe prevention."
Sharma noted with pride his work developing defenses against AI-assisted bioweapons and his "final project on understanding how AI assistants could make us less human or distort our humanity." Now he intends to move back to the UK to "explore a poetry degree" and "become invisible for a period of time."
Anthropic's chief executive, Dario Amodei, has repeatedly warned of the dangers posed by the very technology his company is commercializing. In a nearly 20,000-word essay last month, he cautioned that AI systems of "almost unimaginable power" are "imminent" and will "test who we are as a species."
Amodei warned of "autonomy risks" in which AI could "go rogue and overpower humanity," and suggested the technology could enable "a global totalitarian dictatorship" through AI-powered surveillance and autonomous weapons.