People ‘are sometimes listening in’ when you talk to Siri on Apple iPhone

Rob Waugh
Contributor

You might have thought that when you say the words ‘Hey Siri’, you’re having a private conversation between you and your Apple gadget.

But (as users of Google and Amazon also recently discovered), you’d be wrong.

A small proportion of requests sent to Siri via the internet are actually listened to by human beings, the tech giant has admitted.

It’s all to do with quality control, Apple claims.

According to a report by The Guardian, contractors employed by Apple to analyse the interactions said they regularly hear personal information and parts of private conversations.

Siri - which is built into Apple devices such as the iPhone and Apple Watch - can be activated when it hears the phrase 'Hey Siri'.


However, the assistant can be accidentally prompted when it mistakenly thinks it hears the wake phrase.

It is also possible to accidentally trigger Siri when an Apple Watch detects it has been raised and then hears speech, even when a user is not planning to speak to the assistant.

Apple said only a random subset - less than 1% of daily Siri activations - were part of the grading scheme, and each of these was just a few seconds long.

'A small portion of Siri requests are analysed to improve Siri and dictation. User requests are not associated with the user's Apple ID,' the company said.

'Siri responses are analysed in secure facilities and all reviewers are under the obligation to adhere to Apple's strict confidentiality requirements.'

Fellow tech giants Amazon and Google have both recently confirmed they also use small samples of user recordings from their own voice assistants to train and develop their language recognition software.

The two firms each have a virtual assistant built into smart speakers and some smartphones, and both confirmed they use human auditors to analyse a small selection of recordings from users.