From data poisoning to prompt injection, threats against enterprise AI applications and foundations are beginning to move ...
ChatGPT and other AI chatbots are tested to be ethical when it comes to prompt responses. While they fail to evoke human emotions, they are not trained to promote hate speech and other sensitive ...
Large language models are supposed to shut down when users ask for dangerous help, from building weapons to writing malware. A new wave of research suggests those guardrails can be sidestepped not ...
Researchers have developed a computer worm that targets generative AI (GenAI) applications to potentially spread malware and steal personal data. The new paper details the worm dubbed “Morris II,” ...
When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to ...
For a brief moment, hiding prompt injections in HTML, CSS, or metadata felt like a throwback to the clever tricks of early black hat SEO. Invisible keywords, stealth links, and JavaScript cloaking ...
Members of a Microsoft Corp. team tasked with using hacker tactics to find cybersecurity issues have open-sourced an internal tool, PyRIT, that can help developers find risks in their artificial ...
A single prompt can now unlock dangerous outputs from every major AI model—exposing a universal flaw in the foundations of LLM safety. For years, generative AI vendors have reassured the public and ...