Apple is limiting its employees' use of ChatGPT for fear of leaks

Apple Inc. is reportedly restricting some employees' use of ChatGPT and other artificial intelligence software.
The Wall Street Journal reported on Friday that the iPhone maker is concerned that employees could release confidential information by using OpenAI LLC's ChatGPT. The Journal also reported that the Cupertino firm told employees they weren't allowed to use Copilot, an artificial intelligence-powered programming tool from Microsoft Corp.-owned GitHub that helps developers write code faster.
Apple representatives did not respond to a request for comment.
ChatGPT, a product of San Francisco-based OpenAI, took the world by storm after its launch in late 2022 and has since attracted tens of millions of users. The software uses a chat interface to generate text responses that mimic what a person might write. The Journal reported that Apple is developing similar technology of its own.
Apple's concerns about ChatGPT have some basis. The Economist reported that Samsung Electronics engineers had leaked sensitive information about their chip factories, including how many good chips were produced, by typing the data into ChatGPT. Samsung responded by blocking employees' access to ChatGPT.
Other large companies, including banks, defense contractors and Amazon.com Inc., have placed similar restrictions on their employees' use of the emerging technology.
Separately, OpenAI announced on Thursday that it had released a ChatGPT app for Apple's iOS, the software that runs the iPhone. ChatGPT had previously been available only through a web-based interface. The app will initially be available in the U.S. before rolling out to other countries.
Tim Cook, Apple's CEO, discussed safety concerns around AI during a recent earnings call. Other tech executives have echoed those concerns, including Alphabet Inc. CEO Sundar Pichai.
Sam Altman, CEO of OpenAI, called on lawmakers to regulate AI at a congressional hearing earlier this week. Christina Montgomery, IBM's chief privacy officer, and Gary Marcus, professor emeritus at New York University, also appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
"If this technology fails, it could go very wrong." Altman stated at the hearing that he wanted to make this clear. "We want the government to help us prevent this from happening."