Researchers showed how prompt injection hidden in a calendar invite can bypass privacy controls and turn an AI assistant into a data-leaking accomplice.
Researchers uncovered a way to steal data from Microsoft Copilot users with a single malicious link.
Grok’s apology is unlikely to be the end of the story after the AI tool was used to generate content that may constitute illegal child sexual abuse material.
Linking your medical records to ChatGPT Health may give you personalized wellness answers, but it also comes with serious privacy implications.
By generating content that may violate US child sexual abuse material laws, Grok has highlighted once again how ineffective AI guardrails can be.
A list of topics we covered in the week of December 29, 2025 to January 4, 2026.
Several AI-related stories in 2025 highlighted how quickly AI systems can move beyond meaningful human control.
We explore how the rapid rise of Artificial Intelligence (AI) is putting users at risk.
Criminals make malicious ChatGPT and Grok conversations appear at the top of common Google searches—leading users straight to the Atomic macOS Stealer.