- Ozempic and ChatGPT have blown up in the year of the cure-all.
- CEOs are already cutting jobs or pausing hiring, betting that AI can replace workers.
- But both come with unwelcome side effects.
What does a controversial weight-loss drug have in common with artificial intelligence?
Both have boomed in 2023, the year of the apparent quick fix.
Elon Musk, Hollywood celebrities, and influencers have rushed in droves for an injection of Ozempic, usually used to treat diabetes, to curb cravings and help combat weight gain. But Ozempic and other so-called miracle weight-loss drugs can come with debilitating side effects.
There may be lessons here for the business world when it comes to generative AI tools like ChatGPT, which seemingly promise to make humans faster and more productive but also threaten future chaos.
As Justine Moore, investment partner at Silicon Valley VC firm a16z, tweeted this month: “By 2025, America is going to run on Ozempic and ChatGPT.”
If something seems too good to be true, it probably is
ChatGPT and competing tools like Google Bard have unquestionably forced people to sit up.
Seemingly out of nowhere, these chatbots can write better than a trainee copywriter or journalist, reason like an economist or mathematician, pass advanced medical exams, recommend books, build sites and apps, and complete many other tasks. Yes, they hallucinate, but there is no technology that has felt quite so human-like in the expansiveness of its capabilities.
It isn’t surprising, then, that JPMorgan is exploring ChatGPT-like tools to aid investing. CEOs running businesses that have nothing to do with AI are talking about ChatGPT during their quarterly earnings calls. Workers are finding ways of using it to develop side hustles, or in their day jobs to make more money.
ChatGPT creator Sam Altman made the striking claim that AI could surpass humanity in most domains within the next 10 years. This kind of superintelligence could "carry out as much productive activity as any of today's largest corporations," as he put it.
CEOs are listening.
UK telecoms giant BT, which announced 55,000 job cuts last week, reckons 10,000 of the axed roles could be replaced by AI by 2030. IBM CEO Arvind Krishna said the firm would pause hiring for roles that might in future be done by AI.
But like Ozempic, AI comes with side effects.
“Fundamentally, these new systems are going to be destabilizing,” Sam Altman, CEO of ChatGPT-maker OpenAI and AI proponent-in-chief, told lawmakers during a recent appearance before Congress.
If even the man who stands to profit hugely from the widespread adoption of tools like ChatGPT thinks they'll be “destabilizing,” it should give the average CEO pause.
Destabilizing scenarios might include:
- Employees feeding sensitive or proprietary information into a black-box AI tool owned by a third party
- Copyright or other issues stemming from companies not vetting the AI tools they use, and the data they’re trained on
- CEOs and other company execs being unable to query decisions made by a black-box AI
These problems are already here. Samsung banned its employees from using generative AI tools after some of its engineers accidentally leaked internal source code by uploading it to ChatGPT in April.
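The Samsung leak illustrates why some companies now screen prompts before they ever reach an external service. Below is a minimal, purely illustrative sketch of that kind of guardrail; the patterns and names are hypothetical and do not reflect any company's actual tooling:

```python
import re

# Patterns for data that should never leave the company boundary.
# Illustrative only; real deployments need far more robust detection.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                              # card-like number runs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),                        # hard-coded API keys
]

def redact(text: str) -> str:
    """Replace sensitive substrings with a placeholder before the
    prompt is sent to any third-party AI tool."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Debug this: api_key = sk-12345, contact dev@example.com"
print(redact(prompt))  # both the key and the email address are replaced
```

A filter like this reduces accidental leaks, but it cannot catch everything — which is part of why some firms have opted for outright bans instead.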
Citigroup, Goldman Sachs, and JPMorgan have heavily restricted employee use of ChatGPT, with reports suggesting JPMorgan's decision was driven by concerns that sensitive financial data could be shared with AI tools.
Writing for Insider in March, copyright law expert John Eden noted “whenever an AI platform creates products that successfully compete with copyrighted works, leading to diminished sales of those protected works,” lawsuits will ensue because “copyright law abhors free riding.”
For now, it’s still a wild west: there will be few guardrails until legislators get to grips with the risks of AI and impose regulation.
Many experts claim that AI should augment human workers, not replace them.
Chess Grandmaster Garry Kasparov and Professor David De Cremer of the National University of Singapore previously made the case for this, arguing that replacing human workers with AI only makes sense under the assumption that “AI and humans have the same qualities and abilities.”
That doesn’t mean CEOs and companies should avoid AI entirely, but it may mean wielding the job-cutting axe with a little less enthusiasm, just in case ChatGPT turns out to be a bomb and not a miracle drug.