Now, I am sure I am not the only one thinking about the potential issues here. The ability of a phone or device to communicate for people with speech challenges is a great idea. Allowing it to sound exactly like someone is a security nightmare just waiting for you to go to sleep. AI voice technologies have already been used in some very disturbing fraud and phishing attacks, and this is sure to become just another path to the same.
Since there is not much detail on how the AI learns the voice, what it takes to create the voice profile, and so on, I can imagine attackers using this feature to create a voice print of someone for malicious purposes. We know that threat actors are already taking advantage of other AI technologies to impersonate people; having this one so readily available on a phone just makes things easier. And we are not just talking about cybercrime here. Imagine someone who simply does not like you making a print of your voice and then, as a malicious joke, having “you” call someone and say terrible things.
When I talk about a failure of imagination, this is exactly the type of shit I mean. There seems to have been no thought given to how quickly this is likely to be abused, at a time when AI is already being seriously abused. Good lord, will someone please send these people some recent news, along with links to descriptions of the terrible fraud calls that have already been made using voice cloning technology? And these concerns do not even touch on the fact that your voice print could potentially become the property of Apple if you use this feature.