Catch up quick: Researchers reported last month that bondu, the maker of an AI-powered conversational toy, inadvertently exposed children's chat transcripts and personal data through a publicly accessible portal. Bondu, whose portal allows parents to review their children's conversations, said it took the exposed portal down and relaunched it the next day with authentication measures, according to Wired. Driving the news: New Hampshire Senator Maggie Hassan, the ranking member of Congress's Joint Economic Committee, is now asking bondu to explain how the exposure occurred.
Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.
Two people allegedly linked to China's infamous Salt Typhoon espionage hacking group appear to have previously received training through Cisco's prominent, long-running networking academy. Meanwhile, US lawmakers are increasingly warning that safeguards on expanded US wiretap powers have been failing, allowing intelligence agencies to access more of Americans' data without adequate constraints. If you've been having trouble keeping track of all of the news and data coming out about infamous sex offender Jeffrey Epstein,
Last week, OpenAI said it had cut off the toymaker FoloToy's access to its AI models after the AI-powered teddy bear "Kumma," which ran on GPT-4o, was found giving responses wildly inappropriate for children, including discussing sexual fetishes and giving instructions on how to find knives and light matches. The move signaled that the ChatGPT maker was concerned about how its business customers, especially those selling products for children, were using its technology, or at least about how those efforts looked.
Picture the scene: it's Christmas morning, and your child is happily chatting with the AI-enabled teddy bear you got them, when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario. As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some stray beyond their limited guardrails more readily than others, none of them is particularly safe for impressionable young minds.