Putting AI-assisted ‘vibe hacking’ to the test
Valuable tools for experienced attackers and researchers, LLMs are not yet capable of creating exploits from a prompt, researchers found in a test of 50 AI models — though some are improving quickly. Attackers are increasingly leveraging large language models (LLMs) to enhance attack workflows, but for all their advances in helping to write malicious scripts, these tools are not yet ready to turn run-of-the-mill cybercriminals into exploit developers. According to tests performed by research...