Security Experts Express Concern Over ‘Highly Risky’ New Google Feature

A recently unveiled Google feature designed to detect phone scams has sparked concern among privacy advocates. The tool uses artificial intelligence (AI) to monitor phone conversations for patterns common in scam calls; when it detects one, the user receives a pop-up alert warning of a “likely scam.”

Google announced the feature at its I/O event, where it unveiled a range of new AI tools, but it did not specify a release date. Details about how the feature operates, including the criteria used to flag a call as a potential scam, were also scarce. Google did reveal that the tool relies on Gemini Nano, a downsized version of its AI model tailored for mobile devices.

Google emphasized that all call monitoring and analysis would occur locally on users’ devices, assuring users that private conversations would remain confidential. The company stated, “This protection all happens on-device so your conversation stays private to you.”
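
Google has not published technical details, but its description suggests a simple on-device pipeline: transcribe the call locally, score the rolling transcript with a small model, and show an alert when the score crosses a threshold. The Kotlin sketch below is purely illustrative under those assumptions; ScamClassifier, KeywordClassifier, CallMonitor, and every other name in it are hypothetical stand-ins, not a real Gemini Nano or Android API, and the keyword heuristic merely takes the place of the undisclosed model.

```kotlin
// Purely illustrative: Google has not published this feature's API or model.
// Every name here (ScamClassifier, KeywordClassifier, CallMonitor) is a
// hypothetical stand-in; nothing below is a real Gemini Nano or Android API.

// Hypothetical on-device model interface: returns a scam likelihood in [0, 1].
interface ScamClassifier {
    fun score(transcriptWindow: String): Double
}

// Toy stand-in for the undisclosed model: counts phrases common in
// reported phone scams. A real system would run a learned classifier.
class KeywordClassifier : ScamClassifier {
    private val redFlags = listOf(
        "gift card", "wire transfer", "act now",
        "warrant for your arrest", "verify your account"
    )

    override fun score(transcriptWindow: String): Double {
        val text = transcriptWindow.lowercase()
        val hits = redFlags.count { it in text }
        return minOf(1.0, hits / 3.0) // crude likelihood estimate
    }
}

// Feeds a rolling window of locally produced transcript segments to the
// classifier and fires the alert callback once, when the score crosses
// the threshold. No audio or text ever leaves the device.
class CallMonitor(
    private val classifier: ScamClassifier,
    private val threshold: Double = 0.6,
    private val onLikelyScam: () -> Unit,
) {
    private val window = ArrayDeque<String>()
    private var alerted = false

    fun onTranscriptSegment(segment: String) {
        window.addLast(segment)
        if (window.size > 10) window.removeFirst() // keep a short rolling window
        if (!alerted && classifier.score(window.joinToString(" ")) >= threshold) {
            alerted = true
            onLikelyScam() // e.g. surface the "likely scam" pop-up
        }
    }
}

fun main() {
    val monitor = CallMonitor(KeywordClassifier()) {
        println("Likely scam detected: showing on-device warning.")
    }
    // Simulated output of on-device speech recognition.
    listOf(
        "there is a warrant for your arrest",
        "you must act now and pay the fine with a gift card",
    ).forEach(monitor::onTranscriptSegment)
}
```

Whatever the real model looks like, the relevant property of this structure is that nothing in the pipeline requires a network call, which is the claim Google's privacy assurance rests on.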

Nevertheless, security experts have expressed apprehension about the feature’s implications, arguing that letting AI listen in on phone calls poses significant risks even when processing stays on the device. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, cautioned that phone conversations are among the most private interactions people have and have historically been untouched by this kind of monitoring. Meredith Whittaker, president of the messaging app Signal and a former Google employee, echoed these concerns, describing the technology as “incredibly dangerous” and warning that it could pave the way for broader, more intrusive surveillance.

Whittaker also pointed to how the technology’s use could expand beyond scam detection to other sensitive content. She suggested scenarios in which the AI could be deployed to flag patterns associated with seeking reproductive care, accessing LGBTQ resources, or tech-worker whistleblowing, infringing on users’ privacy and autonomy.

While Google’s new feature aims to make users safer by catching scams in progress, it has reignited the debate over the balance between security and privacy in the digital age. As on-device AI capabilities like this one evolve, policymakers and stakeholders must navigate these trade-offs to safeguard individuals’ rights to privacy and data protection.
