Based on recent reports up to April 2026, numerous AI models and platforms have been compromised through methods such as prompt injection, unauthorized access, and supply chain attacks. A March 2026 study found that nearly 100% of 22 state-of-the-art language models from major AI labs were successfully hacked in simulated environments. AI models like ContextAI, Ultralytics (Yolo Model), ChatGPT (OpenAI) among others.
Thank you for reading this post, don't forget to subscribe!Vercel a hosting Platform was hacked through ContextAI model as a result of their partnership with the AI Platform.
Ultralytics (YOLO Model) 2026 Hackers hijacked an AI model from Ultralytics, a popular computer vision company, by inserting malicious payloads. The compromised model was distributed to users, resulting in thousands of systems being infected with cryptomining malware.

ChatGPT (OpenAI)‘s early release experience data leak in 2023, a bug in ChatGPT allowed users to see the titles of other users’ chat history and, in some cases, personal information. ChatGPT has been frequently targeted by “jailbreaks,” such as “DAN” (Do Anything Now), which bypass safety filters to produce forbidden content.
Anthropic’s Mythos which is the newest and most advanced model has also been accessed by hackers. Just a few days ago in April 2026 Anthropic’s most powerful AI model, Claude Mythos, was reportedly accessed by a group of unauthorized users, according to a Bloomberg report.
Anthropic has been considering it’s Mythos model too capable for public release. According to Anthropic, Mythos can autonomously find zero-day vulnerabilities and build working exploits, which is why access has been restricted through a controlled initiative called Project Glasswing, limited to partners like AWS, Apple, Google, Microsoft, and NVIDIA for defensive security work.

However according to the Bloomberg’s report, a group of private online forums gained access to Mythos the same day Anthropic announced the limited release.
The group have been using it regularly since then, though not for cybersecurity purposes, and provided screenshots and a live demo as evidence. The access came through one of Anthropic’s third-party vendor environments rather than Anthropic’s own systems. Anthropic said it is investigating and has found no evidence that its systems were affected.
A model built specifically because its abilities were considered too dangerous for open release is now reportedly being used outside the controlled group it was designed for.