Security experts warn that Google’s new feature is ‘incredibly dangerous’

A new Google feature intended to alert people to scams has drawn concern from privacy advocates.

The tool uses artificial intelligence to listen in on people’s phone calls and try to detect whether they sound like a scam. If they do, a pop-up appears warning of a “potential scam.”

The feature was announced at Google I/O this week, where the company unveiled a host of new AI tools. As with many of these features, Google hasn’t said when it will actually arrive.

Google also provided little information about how the feature actually works, such as what kinds of conversation might prompt the AI to flag a call as a possible scam. But it said the feature relies on Gemini Nano, a much smaller version of its AI model that was released recently and is designed to run on phones.

Google stressed that all listening and analysis of phone calls will take place on the phone itself, so that private conversations are not sent to its servers. “All of this security happens on the device, so your conversation stays private to you,” the company said in its announcement.
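Google has not published how the on-device detection actually works, so the following is only an illustrative sketch: a hypothetical `on_device_classify` function stands in for the Gemini Nano model, and a keyword heuristic stands in for real inference. The point it illustrates is the architecture Google described: the transcript is scored locally, and only a pop-up decision is produced, with nothing sent to a server.

```python
# Hypothetical sketch of on-device scam flagging. The phrase list and the
# functions below are invented for illustration; Google has not disclosed
# how Gemini Nano scores calls.

SCAM_PHRASES = ["wire the money", "gift cards", "your account is suspended"]

def on_device_classify(transcript: str) -> float:
    """Stand-in for the on-device model: return a scam-likelihood score in [0, 1]."""
    hits = sum(phrase in transcript.lower() for phrase in SCAM_PHRASES)
    return min(1.0, hits / 2)

def maybe_alert(transcript: str, threshold: float = 0.5):
    """Return the pop-up text if the local score crosses the threshold, else None.

    The transcript never leaves this function -- mirroring the claim that
    analysis stays on the device.
    """
    score = on_device_classify(transcript)
    return "Potential scam" if score >= threshold else None

print(maybe_alert("Please buy gift cards and wire the money today."))  # Potential scam
print(maybe_alert("Hi, are we still on for dinner at 7?"))  # None
```

Even in this toy form, the design choice the critics focus on is visible: the scanning logic runs against the user’s own speech on the user’s own device, and changing what it flags is just a matter of changing the patterns it looks for.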

However, security experts warned that listening to phone calls in this way is “extremely dangerous” and “terrifying.” They noted that even if the calls remain on the device, allowing AI to eavesdrop on them could lead to other problems.


“The phone calls we make on our devices can be one of the most private things we do,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told NBC News. “It’s very easy for advertisers to extract every search we do, every URL we click, but what we actually say on our devices, into the microphone, has historically not been monitored.”

“This is incredibly dangerous,” said Meredith Whittaker, president of the messaging app Signal. “It paves the way for centralized client-side scanning at the device level.”

Ms. Whittaker, who worked at Google for 13 years and helped organize internal protests against its policies, said such uses of the technology could expand rapidly.

From detecting “scams,” it is a short step to detecting patterns “commonly associated with seeking reproductive care,” or “commonly associated with providing LGBTQ resources,” or “commonly associated with tech worker whistleblowing,” she said.
