Apple has banned the use of the AI tool ChatGPT among its employees in a bid to prevent data breaches and leaks. This was reported by the Wall Street Journal, which cited an internal memo from the company.
Apple and Other Companies Are Concerned about Data Safety
The restriction makes Apple one of several businesses limiting employee use of such software. According to the memo, the iPhone maker is concerned that workers may inadvertently upload confidential files to the AI platform. The restriction reportedly extends beyond ChatGPT to other AI tools that could expose internal data, including Copilot, a coding assistant from GitHub, which is owned by Microsoft.
Apple has yet to respond publicly to inquiries since the news broke. However, AI analysts believe the move stems from the fact that platforms such as ChatGPT explicitly warn users against uploading sensitive data. Such warnings make sense, as data submitted to these services can be used to train and improve the underlying AI models.
Samsung Placed Temporary Ban on ChatGPT
Several companies, including JPMorgan, Bank of America, and Amazon, moved against the use of ChatGPT as early as January over data privacy concerns. Samsung is another entity weighing a ban on the tool: last month, one of its engineers uploaded sensitive internal data to ChatGPT while trying to fix a faulty database. In the meantime, Samsung's management has placed a temporary restriction on the use of ChatGPT, and the tech giant has already begun developing its own in-house AI platform.
Apart from businesses, governments are also concerned about the threat of data leaks. The Italian government briefly banned ChatGPT, citing concerns about the safety of personal data on the platform. It has since reversed its position, allowing OpenAI, the company behind ChatGPT, to continue operating in the country after its demands were met.
Private Versions of AI Tools
With increasing concern over data leaks, AI tool makers are working to assure users that their data is safe. OpenAI recently released a private mode for ChatGPT, often described as an incognito mode. According to OpenAI, data uploaded in this mode is not kept permanently in its database, and prompts are not saved longer than necessary for abuse-prevention checks.
Microsoft has also announced that it is working on a private version of its software aimed at companies, and has assured customers that the AI will not use company data for training.
IBM, too, announced this month that it is working on a private AI tool. Its Watsonx platform takes a privacy-centered approach intended to ease users' worries about data leaks.
“Clients can quickly train and deploy custom AI capabilities across their entire business, all while retaining full control of their data,” IBM CEO Arvind Krishna said.
However, these private versions are expected to be pricey.