ChatGPT fever is spreading to the US workplace, sounding alarm bells for some

LONDON/WASHINGTON, Aug. 11 (Reuters) – Many workers across the United States are turning to ChatGPT for help with essential tasks, a Reuters/Ipsos survey finds, despite concerns that have pushed employers like Microsoft and Google to limit its use.

Companies around the world are studying how to make the best use of ChatGPT, a chatbot that uses generative AI to conduct conversations with users and answer countless prompts. Security firms and companies have raised concerns, however, that it could lead to leaks of intellectual property and strategy.

Anecdotal examples of people using ChatGPT to help with their daily work include drafting emails, summarizing documents, and doing preliminary research.

About 28% of respondents to an online survey on artificial intelligence (AI), conducted between July 11 and 17, said they regularly use ChatGPT at work, while only 22% said their employers explicitly allow such third-party tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of accuracy, of about two percentage points.

About 10% of those surveyed said their bosses explicitly prohibited third-party AI tools, while about 25% did not know whether or not their company allowed the technology to be used.

ChatGPT became the fastest-growing app in history following its launch in November. It has caused both excitement and alarm, bringing developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.

Human reviewers at other companies may read any of the generated chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a potential risk to proprietary information.


“People don’t understand how data is used when they use generative AI services,” said Ben King, vice president of customer trust at corporate security firm Okta (OKTA.O).

“For companies, this is critical, because users don’t have a contract with many AI systems (it’s a free service), so companies won’t have put the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post that assured corporate partners their data would not be used to train the chatbot further unless they gave explicit permission.

When people use Google’s Bard, it collects data such as text, location, and other usage information. The company allows users to delete past activity from their accounts and request removal of content that is fed into the AI. Alphabet-owned Google (GOOGL.O) declined to comment when asked for more details.

Microsoft (MSFT.O) did not immediately respond to a request for comment.

Harmless tasks

A Tinder employee in the US, who declined to be named because they were not authorized to speak with reporters, said workers at the dating app used ChatGPT for “harmless tasks” like writing emails even though the company doesn’t officially allow it.

The employee said Tinder has a “no ChatGPT” rule, but employees still use it “in a general way that doesn’t reveal anything about our presence on Tinder”.

Reuters has not been able to independently confirm how employees at Tinder use ChatGPT. Tinder said it provided “regular guidance to employees on security and data best practices.”


In May, Samsung Electronics banned employees globally from using ChatGPT and similar AI tools after it was discovered that an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a secure environment for generative AI use that enhances employee productivity and efficiency,” Samsung said in a statement on August 3.

“However, until these measures are in place, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots, including Google’s own Bard, even as it marketed the software globally.

Google said that while Bard can make unwanted code suggestions, it still helps programmers. It also said it aims to be transparent about the limitations of its technology.

Blanket bans

Some companies told Reuters that they are embracing ChatGPT and similar platforms with security in mind.

“We have begun testing and learning about how AI can enhance operational effectiveness,” said a Coca-Cola spokesperson in Atlanta, Georgia, adding that the data remains inside its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity,” said a company spokesperson, adding that Coca-Cola plans to use artificial intelligence to improve the effectiveness and productivity of its teams.

Meanwhile, Tate & Lyle (TATE.L) CFO Dawn Allen told Reuters that the global ingredients maker was trialling ChatGPT after it “found a way to use it in a secure way.”

“We have different teams deciding how they want to use it through a series of experiments,” she said. “Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?”


Some employees say they can’t access the platform on company computers at all.

“It’s completely blocked on the office network; it just doesn’t work,” said a Procter & Gamble (PG.N) employee, who wished to remain anonymous because they were not authorized to speak to the press.

Procter & Gamble declined to comment. Reuters has not been able to independently confirm whether employees at P&G are unable to use ChatGPT.

Paul Lewis, chief information security officer at cybersecurity firm Nominet, said companies were right to be cautious.

“Everyone benefits from this increased capability, but the information is not completely secure and it can be engineered out,” he said, citing “malicious prompts” that can be used to make AI chatbots disclose information.

“A blanket ban is not yet justified, but we need to tread carefully,” Lewis said.

Additional reporting by Richa Naidu, Martin Coulter, and Jason Lange; Editing by Alexander Smith

Our standards: Thomson Reuters Trust Principles.

London-based reporter covering retail and consumer goods, analyzing trends including supply chains, advertising strategies, corporate governance, sustainability, policy and regulation. Previously covered US retailers and major financial institutions, and reported on the Tokyo 2020 Olympic Games.
