
"The voicemail from your son is alarming. He has just been in a car accident and is highly stressed. He needs money urgently, although it is not clear why, and he gives you some bank details for a transfer. You consider yourself wise to other scams, and have ignored texts claiming to be from him and asking for cash. But you can hear his voice and he is clearly in trouble."
"By taking a tiny snippet of real audio just three seconds is enough from a person, they can clone the individual's voice using freely available AI technology. From there, they can make an recording of the synthesised voice saying exactly what they want. The criminals can record a voice from videos on social media or by calling someone and saying nothing. A victim just needs to respond with words such as hello, who is there? to give them material for their hoax."
Criminals are using AI voice cloning to produce convincing calls or voicemails that imitate family members. A tiny snippet of real audio, as little as three seconds, can suffice to synthesise a matching voice. Scammers can obtain voice samples from social media videos or by calling victims, prompting them to speak, and recording their brief responses. The forged messages typically portray emergencies such as accidents, injuries or robberies, and create urgency to pressure victims into transferring money to bank details the attackers supply. The technique is a form of highly targeted spear phishing that exploits emotional reactions to defraud victims.
Read at www.theguardian.com