Security researchers successfully demonstrated that AI systems, specifically Google's Gemini, can be hacked to execute unexpected commands on smart home devices. By using a poisoned Google Calendar invitation, the researchers were able to trigger actions such as turning off lights and opening smart shutters without any input from the apartment's residents. This marks the first documented instance where a generative AI hack resulted in physical-world consequences, raising concerns about the security of large language models as they become increasingly integrated into autonomous machines and other physical systems.
They are, in fact, under attack. Each unexpected action is orchestrated by three security researchers demonstrating a sophisticated hijack of Gemini, Google's flagship artificial intelligence bot.
The attacks start with a poisoned Google Calendar invitation, which includes instructions to turn on the smart home products at a later time.
When the researchers subsequently ask Gemini to summarize their upcoming calendar events for the week, those dormant instructions are triggered, and the products come to life.
The controlled demonstrations mark what the researchers believe is the first time a hack against a generative AI system has caused consequences in the physical world.
Collection
[
|
...
]