A year after generative AI tools were unleashed, the world is adapting to, but also grappling with, a technology that humanity still thinks it can control.

Actors are suing app developers for using their likeness and voice without permission, while AI chatbots are being onboarded into workforces. Unemployed workers are using AI tools to send out a barrage of job applications, then turning to AI again for prompts when they land an interview. It’s time to walk through a disrupted landscape in the wake of the generative AI storm and pick through the hallucinations for clues of what’s to come.

When ChatGPT was launched in November 2022, little did we know what a rollercoaster ride we were embarking on. AI scientists were predicting then that the far more powerful GPT-4 (rumoured to have one hundred trillion parameters – roughly the number of synapses in the human brain) would be released in 18 months. It was released three months later.

The speed at which machine learning was taking place was both fascinating and terrifying.

If the digitalisation of business was perplexing for business owners, then generative AI has just upped the game and made the velocity of change exponential. 

One of the initial concerns about AI was how schoolchildren were using it: letting it do their homework, cheating in exams, and so on. But soon teachers were singing AI’s praises as it helped them make their classes more interesting and engaging. Even doctors started using AI to assist with patients’ diagnoses.

Now there are AI apps like LazyApply, which can generate 5 000 job applications with a single click. One unemployed user managed to land 20 job interviews using this CV-spamming (let’s call it what it is) app. But since recruiters themselves use algorithms to sift through mountains of job applications, we’ve reached a point where recruiters are using AI tools to screen applications that were submitted by applicants using other AI tools.

If the twilight zone can be defined as, “a state of mind between reality and fantasy: a dreamlike or hallucinatory state”, then welcome to the twilight zone.

When HR becomes AIR (AI resources) 

In August 2022 the China-based mobile game company NetDragon Websoft appointed an artificial intelligence-supported virtual general manager named Tang Yu.

Tang Yu’s function was decision-making in daily operations and improving the company’s risk management system. “She” also served as a real-time data centre and analytics tool for the board. After a six-month appraisal, the company found that it had outperformed Hong Kong’s stock market.

Last year, British boarding school Cottesmore appointed an AI chatbot – named Abigail Bailey – as its new “principal headteacher”. The school also appointed another AI chatbot – Jamie Trainer – as its head of artificial intelligence. As for KPIs, the school’s headmaster, Tom Rogerson, said, “It's nice to think that someone who is unbelievably well trained is there to help you make decisions. You don’t have to wait around for an answer.”

Forget “cobots” (robot co-workers), you’re now competing with AI chatbots for jobs or promotions.

The legal loophole

The more unsettling ripple effect of generative AI is not so much what it can do, but the legal intellectual property (IP) issues it has spawned.

Hot on the heels of text-generating AI came text-to-image platforms such as Midjourney and DALL·E 2, which are trained on imagery scraped from the internet rather than text. This sparked a legal battle between Getty Images and Stability AI. Getty accused the company behind the image generator Stable Diffusion of illegally scraping its content to generate AI imagery, and in so doing having “unlawfully copied and processed millions of images protected by copyright to train its software”. None of the artists and photographers consented to, or were remunerated for, this usage.

This legal dispute has wider repercussions and was pivotal to the months-long Screen Actors Guild strike, in which performers objected to “AI performance synthetisation”: film studios using a performer’s likeness to create a digital replica, which can then be used (a) as a clone, (b) to create a younger version of the actor, or (c) to bring a deceased actor “back to life”.

But why stop at someone’s image? 

The European Broadcasting Union announced that it would be using the cloned voice of Hannah England to provide commentary for the European Athletics Championships. There’s no problem if Ms England has given her consent, but there’s an ominous loophole at play.

According to the Financial Times’ Ludo Hunter-Tilney, you can’t copyright the sound of your own voice. He explains that, “we can copyright recordings (of your voice), but we can’t copyright the sound to them – the sonic frequency at which we speak. We don’t own those. We can’t own a sonic frequency.” How’s that for a legal loophole?

If Microsoft’s VALL-E software can mimic a person’s sonic frequency (or voice) from just three seconds of audio, then the prospect of someone using your voice without permission – and without even legally needing it – is a very slippery slope.

Celebrities are now being paid vast sums of money for their likeness to be turned into chatbots, which dispense advice or have a conversation with you. AI chatbots now offer life or business coaching and there is an increase in cases of people falling in love with chatbots created by Replika.

There is now an app – Covers AI – that supplies “presidential voices”, including those of Donald Trump, Joe Biden, Narendra Modi, and Jair Bolsonaro, which can be turned into memes. The future of politics and electioneering is going to be surreal.

Data colonialism

A crucial part of machine learning is data “scraping”: hoovering up huge amounts of data from the internet. Before this information can be used in generative AI, it must go through a mind-numbing process called “data labelling” so that algorithms can make sense of it.

We’ve all been subjected to online captcha tests, proving that we’re not robots by identifying things like motorbikes in a picture grid.

The algorithms powering these new chatbots can bypass these tests, remove blocking mechanisms for harmful content or create their own visual hallucinations.

Data labelling by humans ensures that the data is as clean as possible, and this work is farmed out to the world’s cheapest labour markets: Latin America (notably Venezuela), India, the Philippines, and East Africa, including refugee camps in Kenya.

Workers earn a few cents for microtasks – such as data labelling – on platforms like Appen, Clickworker, and Scale AI, or sign short-term contracts in physical data centres.

The global data collection and labelling market was valued at $2.22 billion in 2022 and is expected to grow to $17.1 billion by 2030.

KEY TAKEAWAY

The avalanche of synthetic data – information that is artificially manufactured rather than generated by real-world events – blurs the line between reality and fantasy, and that is what we are grappling with now. A leading AI architect has already admitted that, in private tests, experts can no longer tell whether AI-generated imagery is real or fake – something nobody expected to happen this soon.

But interactive AI is looming. This is the next level of the Internet of Things (IoT), where AI merges with robotics and bots carry out tasks determined by machine learning. The software behind AI chatbots – the ones being onboarded as virtual employees – is already being uploaded into humanoid robots, giving them a physical presence.

Generative AI has sparked an existential debate about the dangers of artificial intelligence surpassing human intelligence, and whether there is a tipping point at which AI becomes sentient.

The “rise of the machines” is no longer a sci-fi concept. We’ve let the genie out of the bottle. Your wish is now literally its command. Be careful what you ask for. 

Dion Chang is the founder of Flux Trends. For more trends as business strategy, visit: www.fluxtrends.com
