#ai-chatbot-security

[ follow ]
Information security
fromThe Hacker News
1 week ago

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Reprompt enables single-click exfiltration of sensitive data from AI chatbots like Microsoft Copilot by injecting prompts via URL and bypassing guardrails.
Information security
fromZDNET
1 month ago

Scammers are poisoning AI search results to steer you straight into their traps - here's how

Cybercriminals manipulate public web content to insert scam phone numbers so AI chatbots and LLM systems recommend fraudulent contact numbers to users.
[ Load more ]