When Karl Marx wrote about boom and bust in the 19th century, it was to explain economic expansions and contractions. Today, the phrase neatly captures the dilemma faced by emerging technologies like artificial intelligence.

Professor Manoj Chiba is a senior lecturer and faculty member at GIBS, whose areas of expertise range from statistics and predictive analytics to digitisation, artificial intelligence, innovation and design. A management professional, Chiba has held senior positions across different sectors, and he draws on this experience to exploit opportunities and align strategies. His training, qualifications and passion for the intersection of data and technology underpin his philosophy of evidence-based decision-making.

You don’t have to look further than the recent past for some reminders of the boom-or-bust phenomenon. Remember the dot-com bubble of the late 1990s, when speculation in internet-based businesses saw US equities spike rapidly before the overvalued stock came crashing down? Or the origins of the 2008 global financial crisis, when the failing US housing market ultimately dragged down the global economy?

Right now, the future of artificial intelligence (AI) is poised on a boom-or-bust knife edge. The result of a lineage of technological innovations and machine-learning advances, AI’s credentials should point squarely towards a boom. Yet, for all the hype, tangible applications are still in short supply. Despite the money being pumped into their development, the big generative AI tools like ChatGPT, Bard and Bing AI are largely being kept busy writing reports, distilling technical information and fielding questions about religion, retail choices, how to stop (or start) a war, and marital concerns (all real topics, by the way).

This highlights that while people, businesses and society have embraced the promise of AI, we still aren’t quite sure how to really leverage it.

The AI family tree

This might have something to do with the origins of AI, as part of a theoretical, mathematical and computer science universe. People’s eyes glaze over when you start talking about data the way you would about atoms and amoebas, but data is the basic building block in the AI family tree. While humans have been generating digital data since the 1950s, we’ve only fairly recently developed the computational power to analyse this information at scale, sparking the rise of data analytics and further machine-learning advancements.

Therefore, even though it stands on the shoulders of impressive technological advances, AI is still in its infancy. It will, of course, continue to develop as access to data grows and as machines keep learning. This process should not be rushed – particularly as the process of teaching machines moves towards what we call deep learning, or teaching machines to complete human tasks like driving a car or recognising objects.

I like to think of deep learning as the postgraduate-type thinking we expect from advanced learners who are able to draw on experiences from parallel processes to arrive at meaningful conclusions. Until OpenAI was formed in 2015 by Sam Altman and Elon Musk, with early backing from Peter Thiel, this deep-learning leap was largely the domain of big tech companies with deep pockets, like Amazon, Alibaba and Meta. When OpenAI’s customer-centric ChatGPT launched on 30 November 2022, with its easy-to-use interface, it proved a game changer. No longer were humans instructing technology; they were conversing with machines and finding new and innovative ways to use the emerging AI tool.

A gleeful uptake

Such has been the level of uptake in the year since ChatGPT burst onto the scene that Gartner already believes more than 70% of businesses are currently exploring the use of generative AI. ChatGPT alone had nearly 39 million monthly active users and 23 million downloads in September 2023.

To further embed the business case for using these new tools, MIT Sloan researchers recently noted that using generative AI could “improve a highly skilled worker’s performance by as much as 40% compared with workers who don’t use it”. According to McKinsey & Company, the current crop of AI tools alone could potentially automate jobs to such an extent that they free up 60-70% of an employee’s time.

This is where things get interesting. Not only do generative AI tools have the potential to improve business productivity and efficiency, but they can also give back that most precious of commodities: time.

From a utility-value perspective, OpenAI’s ChatGPT has certainly achieved the “stickiness” needed to continue winning over new customers and retaining existing users. This puts AI squarely in boom territory, but current frontrunners in the industry are already scrutinising the business models of the world’s top companies by market capitalisation and realising that the big money doesn’t come from selling services or time. It comes from selling product.

Herein lies the first potential bust.

The ‘businessification’ of AI

While many businesses will be faced with a need to tweak or enhance their current business models by incorporating AI into systems and services, AI developers have a far more fundamental choice to make. Do they go the AI hardware route being explored by OpenAI or opt for a user fee-based model? Either choice is likely to shed customers: those who don’t want to pay and those who are put off by the idea of another device.

Since the consumer base will inevitably narrow, technology companies must ensure that they continue to deliver both company and human value. If AI can make the case that it improves human quality of life at the highest level, and through its sustained utility value continues to improve productivity and efficiency in our personal lives, then we will probably see a boom.

Already there are interesting case studies emerging of how AI can be incorporated into the systems that underpin any business. Mega customer relationship management (CRM) tool Salesforce is using AI to generate sales leads for organisations and to determine the right approaches to pitch. This ticks all the right boxes in terms of helping businesses grow and streamlining efficiencies and productivity, but it will take some time for the numbers to back this up, which raises the risk of organisations growing impatient and abandoning the experiment.

While big players like IBM, Microsoft and Salesforce have the resources to wait until an inflection point is reached, and with technology giants like Apple and Google circling with their own AI solutions, it will be harder for start-ups to hang on until the boom days, unless they can conceptualise a totally different business strategy. Ultimately, we might end up with just a handful of big winners, with the likes of OpenAI falling by the wayside.

The inevitable rise of global monopolies brings up another host of issues, such as control and dominance, as well as the far-reaching implications if one of the mega tech companies fails. This, in turn, has implications for how AI is rolled out to improve the quality of life of all human beings, and not just the select few.

The verdict

Right now, the dice are loaded towards an AI boom, particularly since the world has learned its lesson from rushing blindly into emerging technologies like blockchain and cryptocurrencies. We are seeing a more measured approach to AI, and already some investors and venture capitalists are pulling back slightly as AI becomes increasingly overvalued. However, the hype is still outpacing reality – and hype without substance remains a concern.

As much as the likes of ChatGPT have won hearts and minds, the plethora of AI start-ups needs some big, visible wins that move beyond basic predictive analytics and machine learning and start to really tap into deep learning. When we eventually see an AI algorithm capable of spanning industries and applications – able to fly an airplane and then use the same intelligence to drive a car – that’s when we’ll know we’ve hit the boom time of AI superintelligence.

Until then, watch out for bubbles.

All eyes on…

While some big risk-takers have set the tone for AI up to this point, when it comes to rolling out tangible applications, the Middle East is definitely a region to watch.

The United Arab Emirates (UAE) has already established a dedicated ministry for AI and emerging technologies, and launched a national AI strategy in 2017. More recently, the UAE and its fellow Gulf nation Saudi Arabia have been engaged in a face-off to buy up the high-performance Nvidia chips needed to run AI software. According to the Financial Times, Chinese tech groups like Tencent and Alibaba are also looking to acquire Nvidia’s sought-after chips, joining the UAE and Saudi Arabia in heavily backing an AI-driven future.

Will regulation make or break AI?

AI’s future lies not only in the application of the technology, but also in how regulations shift the goalposts. Emerging technologies are no stranger to regulatory impacts, which might explain the proactive approach being taken by the likes of OpenAI’s CEO Sam Altman, Alphabet’s Sundar Pichai and Microsoft’s Brad Smith, all of whom are calling for regulation.

Having a hand in setting or influencing regulation is a wise move by the AI industry, which must now concern itself with ways to capitalise on its investments. However, there is always the possibility that AI regulation might spell problems, particularly in countries where concerns around job losses are growing and issues of monopolies and control are simmering. While South Africa has been fairly progressive in its approach to regulating, for instance, cryptocurrencies, other countries have opted for all-out bans, which means traversing a patchwork of regulations is inevitable.

Don’t neglect ethics

While regulators remain 10 steps behind the development of AI technology, the industry should certainly be focusing its attention on building a sound and responsible ethical approach. This means not only voicing concerns around Google’s Autocomplete algorithm or the use of drones in conflict zones, but acting on these concerns in order to rein in the irresponsible use of emerging technologies. To this end, boards of directors need a company-level AI ethics charter and should absolutely enlist an external expert to sit on the board. Finally, since AI is not a decision-making tool, the buck stops with human beings: leaders must ultimately take accountability.

KEY TAKEAWAYS 

The crux of the matter? 

  • The hype around artificial intelligence (AI) belies the lack of tangible applications.
  • This makes AI vulnerable to becoming a bubble instead of a boom.
  • Much is being asked of a nascent technology, which has a great lineage but is still developing.
  • Businesses, people and even governments have been quick to adopt new tools like generative AI, but the key will be whether AI companies can continue to offer utility value to consumers.
  • Key players in the AI industry are calling for regulation. This could be a game changer.
