Since its release in late 2022, ChatGPT has caused a fair bit of panic, inspired countless LinkedIn think pieces and quickly made its mark in industries ranging from scriptwriting to customer support and education. How will AI affect business decision-making? It depends who you ask…

I thought I’d start by asking ChatGPT itself how AI will affect business decision-making. It was quick to tell me that AI is already having an effect and will continue to do so, listing several ideas of how. These included:

  • Data-driven insights that leverage AI-powered analytics
  • Improved forecasting and predictive analytics
  • Enhanced customer experience, such as personalised marketing campaigns
  • Automation and process optimisation
  • Risk management and fraud detection
  • Strategic decision support

ChatGPT also flagged that while AI can be very helpful, it also presents various challenges, including ethical considerations and data privacy concerns, and still needs human oversight.

I then chatted to Professor Louise Whittaker, deputy dean at GIBS, who is an expert on strategy, governance and ethics, particularly in relation to organisations and information systems, and we had a far more nuanced discussion about AI and business decision-making.

“I think we have to be really careful to distinguish between different kinds of tools that are being produced under the umbrella of AI,” she says. “ChatGPT is a large language model (LLM) and is generative AI, so it's different from, for example, big data analytics or even what we used to refer to as expert information systems, which tried to model human decision-making, although they weren't always very good at it. Currently, generative AI is largely misunderstood – people are labouring under the illusion that ChatGPT is ‘thinking’ and it really isn't.”

After reading a few articles that try to explain how LLMs work and scratching my head, I asked ChatGPT to explain the concept to me like I’m five (a very useful prompt, I’ve often found). The salient part of its response was as follows (emphasis mine):

“When you ask a question to the language model, it looks at the words you wrote and tries to figure out what you're asking. Then it uses its training and all the information it has learned to give you an answer. It's like having a very knowledgeable friend who can quickly find information and explain things to you. 

But here's the important thing to remember: the language model doesn't actually understand the information it provides or have thoughts and feelings like humans do. It's just really good at finding patterns in the text it was trained on and using those patterns to generate responses. It doesn't have real understanding or consciousness.

So while the language model can give you helpful information and seem like it's thinking, it's actually just following patterns and rules based on what it has learned. It's like a very advanced calculator or a super-fast search engine. It's a powerful tool, but it's not the same as a human thinking and understanding the world around them.”

As Whittaker says, an LLM gives you an answer based on probabilities and statistics drawn from the vast amounts of data it has been exposed to. But because LLMs lack real-world understanding and often context, they tend to over-rely on statistical patterns, and there may be “noise” in the training data (after all, they’ve been trained by scraping the internet, where information is not always to be trusted). As a result, LLMs may “hallucinate” – a polite term for spitting out nonsense.
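To make the “patterns and probabilities” idea concrete, here is a deliberately tiny sketch in Python. It is purely illustrative – real systems like ChatGPT use neural networks trained on billions of examples, and the toy “corpus” and function names below are invented for this article – but it shows the basic trick: count which words tend to follow which, then generate text by repeatedly picking a statistically likely next word.

    import random
    from collections import Counter, defaultdict

    # A toy "training corpus" – a stand-in for the internet-scale text a real
    # LLM is trained on. Entirely made up for illustration.
    corpus = (
        "ai can improve business decisions . "
        "ai can improve customer experience . "
        "business decisions need human oversight . "
        "human oversight can improve ai ."
    ).split()

    # Count how often each word follows each other word (a simple bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def generate(start, length=8):
        """Produce text by repeatedly sampling a statistically likely next word.

        There is no understanding here – only the frequencies counted above.
        """
        word, output = start, [start]
        for _ in range(length):
            counts = next_word_counts.get(word)
            if not counts:
                break
            words, weights = zip(*counts.items())
            word = random.choices(words, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    print(generate("ai"))
    # Possible output: "ai can improve business decisions . human oversight can"
    # Fluent-sounding, pattern-driven text – and just as capable of stringing
    # together nonsense when the patterns mislead it.

A real LLM does this with vastly more data, context and sophistication, but the principle both ChatGPT and Whittaker describe is the same: it predicts what plausibly comes next, rather than knowing what is true.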

Whittaker says asking ChatGPT about herself has resulted in a string of incorrect information. “I know about myself, so I know these ‘facts’ are untrue, but the problem is when people blindly trust in what an LLM churns out,” she says. “It’s like trusting what a spreadsheet says, but your formula in that sheet is wrong.”

Risking a generation’s entry-level, on-the-job education

Whittaker says that some of the commentary around AI suggests that it has the potential to replace entry-level employees. “Let’s say it replaces the intern who you’d normally get to do a bit of research and you now use AI to do that research and have someone who just checks that the information is correct. But then how does your intern ever become the person who knows how to check the AI’s work? The point of having interns is not for them to be skivvies, but to learn. If we deprive people of learning opportunities, we’re potentially disabling a generation of people. We need to think really carefully about what it is that we want to use AI for. And if organisations rush into short-term adoption to cut costs, without thinking of the longer-term consequences, they’re potentially undermining their own organisational development goals in terms of human capital development.”

She emphasises that there’s a new skillset that needs to be developed around using AI effectively, and it’s important that people realise that critical thinking is outside the realm of what AI can do. 

Another factor that Whittaker believes will come into play is that certain accepted indicators of competence will need to be re-evaluated. “For example, my daughter's been applying for graduate programmes and quite a few of them are saying please don't send a cover letter, because the expectation is that applicants will just get ChatGPT to write those for them,” she says. 

Putting boundaries in place

Academic institutions, including GIBS, are grappling with how to engage with AI and how to stop students from outsourcing assignments to ChatGPT.

“We’ve pulled our exams back onto campus and we’ve got invigilators checking that students aren’t going off the learning management system to use ChatGPT, for example. We’re running everything through Turnitin looking for transgressions. And we’ve got a policy that says if we think you've used AI where we haven’t authorised it, we can pull you in for an oral exam. We are doing our best to guard against copying or plagiarism from ChatGPT and similar programmes, but we can't control for it completely. Ultimately, it’s up to each individual to realise that they are in an educational environment to learn.”

Organisations are grappling with similar issues – how to stop managers from outsourcing decisions to AI and how to prevent employees from loading proprietary data into tools like ChatGPT.

“I suppose it’s going to have to come down to people being held accountable for decisions that are made, irrespective of how they made them,” says Whittaker. “Presumably, it’s not going to be a defence in a court of law to say, ‘Well, I did this because ChatGPT told me to.’ Managers or directors have a fiduciary duty to their company. If they make poor decisions, they should be called on to explain themselves. You can’t hold a computer liable.”

She points out that this debate has been going on in the academic environment and many reputable journals have said that generative AI cannot be cited as a co-author, because it can't be held accountable. 

While someone might be able to use AI to provide various options and the pros and cons of each, ultimately, the human remains responsible for interrogating the information supplied and for making the decision.

What do the AI creators say?

While outsourcing all decisions to AI may sound like the premise of a dystopian sci-fi film, it’s not the scariest thing that’s been suggested.

In May, The New York Times reported that leaders from OpenAI, Google DeepMind, Anthropic and other AI labs were warning that “the artificial intelligence technology they were building might one day pose an existential threat to humanity.”

The Center for AI Safety, a nonprofit organisation, released a single-sentence statement that reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by more than 350 executives, researchers and engineers working in AI.

Also in May, OpenAI, the company behind ChatGPT and DALL-E, published a blog post detailing some ideas for responsibly and safely managing AI. The suggestions included increasing cooperation among leading AI creators, more technical research into large language models, forming an international body to oversee AI safety (similar to the International Atomic Energy Agency, which oversees the safe and peaceful use of nuclear technology), and requiring AI models to be registered for government-issued licences.

Another issue that Whittaker flags is how accessible generative technologies will be, given the extremely high costs of developing and maintaining these systems, and of running the AI models that power various software.

Will the technology become ubiquitous, or will it be available only to those individuals or organisations that can afford it, creating a new form of inequality? It remains too early to tell.

Practical implications

While the rapid pace of AI development makes it difficult to take firm positions on anything, organisations – whether educational institutions or businesses – cannot afford to ignore AI and need to develop policies and frameworks for using the technology as a matter of urgency.

ChatGPT has some ideas for how to guard against managers outsourcing decision-making to AI, including:

  1. Clearly define the role of AI in decision-making processes and establish guidelines for its use. Make it clear that AI is meant to augment human decision-making, not replace it entirely.
  2. Develop decision-making frameworks that outline the criteria, considerations, and thresholds for using AI in specific scenarios. Determine the level of autonomy that AI systems can have in decision-making and clearly communicate these guidelines to managers.
  3. Provide managers with training and education on AI technologies, their limitations, and their potential benefits, emphasising the importance of human judgment and critical thinking in conjunction with AI.
  4. Foster a culture of human oversight, encouraging managers to actively engage with AI outputs, question assumptions, and validate the results before making decisions.
  5. Implement mechanisms to hold managers accountable for their decision-making processes (see the sketch after this list).
  6. Continuously evaluate and improve AI systems. Monitor for biases and unintended consequences that may arise from AI algorithms.

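ChatGPT’s fifth suggestion – accountability mechanisms – is the easiest one to leave vague. Purely as a hypothetical sketch (the field names and checks below are invented for illustration and do not come from any real governance tool), this is what recording that accountability could look like in practice: the AI’s recommendation is logged, but the decision cannot be finalised without a named human owner and their own rationale.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical decision record – one possible way to keep a named human
    # accountable for a decision informed by AI. Invented for illustration.
    @dataclass
    class DecisionRecord:
        description: str        # what is being decided
        ai_recommendation: str  # what the AI tool suggested, captured verbatim
        decided_by: str         # the named manager who owns the outcome
        rationale: str          # the human's own reasoning, not the AI's
        approved: bool = False
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def sign_off(record: DecisionRecord) -> DecisionRecord:
        """Refuse to finalise a decision without a human owner and rationale."""
        if not record.decided_by.strip():
            raise ValueError("A named human must own this decision.")
        if not record.rationale.strip():
            raise ValueError("A human rationale must be recorded before sign-off.")
        record.approved = True
        return record

    # Usage: the AI's suggestion is on record, but accountability stays human.
    record = sign_off(DecisionRecord(
        description="Enter the new regional market next quarter",
        ai_recommendation="Model suggests expansion based on recent demand patterns",
        decided_by="J. Mokoena, Commercial Director",
        rationale="Demand data looks credible, but we will phase entry to limit risk.",
    ))
    print(record.approved, record.timestamp)

However an organisation chooses to implement it, the design principle is the one Whittaker describes: the computer’s output is evidence, but the person’s signature is the decision.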
Of course, by the time this article is published, the AI landscape may have already changed. These frameworks and policies will need to be updated frequently as the technology evolves.
