As cities become testbeds for AI-powered systems, the unintended consequences and ethical implications of these experiments are coming under scrutiny.

As artificial intelligence (AI) and machine learning (ML) become increasingly woven into the fabric of society, a new form of large-scale experimentation is quietly reshaping our world. From the development of smart cities such as Songdo in South Korea to the pervasive influence of algorithms on social interactions, we’re witnessing an unprecedented merging of technology, urban planning, and human life.

“One of the things that really struck me as I started doing work on large scale infrastructure systems was the ongoing discourse around testing or experimentation, or the idea that urban space or things at scale could be testbeds for the future of human life itself,” says Orit Halpern, chair of digital cultures at Dresden University of Technology. “And it got me asking certain questions about how we understand this logic.”

The concept of smart cities promises sustainability, efficiency, and improved quality of life through advanced technologies. However, these urban labs often serve as testing grounds for unproven technologies on unwitting populations. Songdo, for instance, was built with the vision of becoming an “experimental prototype community of tomorrow”, yet it now houses around 167 000 residents living amid untested systems that impact every aspect of their lives.

“The city’s failure or success is viewed not through the lens of human experience but as data points for the next iteration — a concerning trend in the commodification of human habitats,” Halpern explains. “When people were asked what if the whole thing fails, what if the system doesn’t work, and what if nobody wants to live in this city, what we heard from Cisco [the American multinational digital communications technology conglomerate] and most of the investors was that it was a testing ground. But this is a vast experiment.”

The ‘demo or die’ ethos in urban development

Indeed, case studies like this raise ethical questions about conducting such vast experiments without clear accountability, and show how the mantra of “demo or die” — popularised by computer scientist Nicholas Negroponte of the MIT Media Lab — has evolved into a global phenomenon with far-reaching consequences.

“The fantasy Negroponte touted for the lab was the integration of all media — print, publishing, movies, radio — into a kind of digital technology,” Halpern notes. “And this would be its core mandate. So, one of the things we’re trying to understand is how did that precedent come in to imagine the world as dependent on and surviving through computation.”

This speaks to the ethical implications of deploying ML algorithms that continually learn from and influence large populations, often without informed consent or adequate oversight. The implications extend to the geopolitical and socioeconomic dimensions of these developments, highlighting how smart cities are frequently tied to free trade zones and serve the interests of the affluent while exacerbating social inequalities.

“Should people be used as experimental subjects for the future of human life?” Halpern questions. “What are our responsibilities in relationship to these technologies? And how do we create new forms of oversight, accountability, and responsibility?”

The ethical implications of algorithmic governance

As we march toward an algorithmically governed society, Halpern believes that we need to keep asking questions about who holds responsibility when these grand experiments fail and what safeguards are or should be in place to protect the social fabric from unintended consequences. Mercè Crosas, head of the Computational Social Sciences programme at the Barcelona Supercomputing Centre, also emphasises the importance of well-designed experiments and interventions in pursuing truth and solutions to societal challenges, particularly in the context of computational social science.

“Computational social sciences is a science in itself that goes beyond the ethics of AI,” Crosas explains. “It helps with the study of these data societies. It’s also helping us for scientific inference. And while I encourage the use of experiments, they should be well-designed experiments with randomised controlled trials to confirm causality.”

Crosas highlights the transformative power of ML and AI in social sciences, particularly in the context of using text as data. Indeed, with the increasing availability of interviews, books, and other forms of human communication, there’s immense potential to convert this qualitative information into quantitative data for analysis and inference.

“A lot of the work that’s been done on AI in social sciences has been using text as data,” she notes. “In fact, that’s why there’s a potential for large language models — it’s because how we speak to each other is a big part of what our societies represent.”

Iterative science and AI in social research

However, Crosas emphasises that the scientific model becomes more iterative when using text as data and ML techniques. Rather than strictly adhering to the traditional hypothesis-driven approach, researchers can engage in a discovery phase where the data itself informs the conceptualisation of the world. This would then lead to further experimentation and refinement of hypotheses.

“If we go from the hypothesis to data collection to results, this is a more deductive application of the scientific model,” Crosas explains. “Here, when we’re using ML and AI with text data, it’s a lot more iterative… Interventions like that can help to see if there’s a way of changing how these ecosystems work.”
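The iterative, discovery-first workflow Crosas describes can be sketched in miniature. The snippet below is an illustrative toy, not code from any study she mentions: a first pass over a tiny invented corpus surfaces candidate themes, and those themes then suggest a simple hypothesis that is checked back against the same data.

```python
from collections import Counter
import re

# Toy corpus standing in for interview transcripts (illustrative only).
documents = [
    "residents worry about privacy and surveillance in the smart city",
    "sensors improve traffic flow and energy efficiency in the city",
    "privacy concerns grow as surveillance data accumulates",
    "energy savings from sensors make the city more sustainable",
]

def top_terms(docs, stopwords, n=3):
    """Discovery phase: let term frequencies suggest themes in the data."""
    words = re.findall(r"[a-z]+", " ".join(docs))
    counts = Counter(w for w in words if w not in stopwords)
    return [term for term, _ in counts.most_common(n)]

stopwords = {"the", "and", "in", "as", "more", "about", "from", "make", "grow"}

# Iteration 1: an initial pass over the raw text surfaces candidate themes.
themes = top_terms(documents, stopwords)

# Iteration 2: the themes inform a (toy) hypothesis — "privacy talk
# co-occurs with surveillance talk" — which is then checked against the data.
privacy_docs = [d for d in documents if "privacy" in d]
support = sum("surveillance" in d for d in privacy_docs) / len(privacy_docs)

print(themes)   # candidate themes discovered from the text
print(support)  # fraction of privacy documents also mentioning surveillance
```

In a real study the discovery step would use far richer methods (topic models, large language models), but the loop is the same: data suggests concepts, concepts become hypotheses, and hypotheses are refined against further data.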

By combining the foundations of causal inference (the gold standard of randomised controlled trials) and this iterative approach to science using text as data, Crosas believes we can gain valuable insights into our digital societies and propose solutions to the challenges they present. She provides examples of social science studies that have used computational methods and experimentation to understand phenomena such as censorship on Chinese social media and the spread of misinformation on platforms such as Facebook and Twitter/X.
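Why Crosas insists on randomisation can be shown with a small simulation. The sketch below is a hypothetical illustration with invented numbers: a hidden trait drives both who opts in to an intervention and the outcome, so a naive observational comparison is badly biased, while random assignment recovers the true effect.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Simulate a population where an unobserved trait drives both
# self-selection into an intervention and the outcome itself.
population = []
for _ in range(10_000):
    trait = random.gauss(0, 1)   # hidden confounder
    self_selects = trait > 0     # who would opt in on their own
    population.append((trait, self_selects))

TRUE_EFFECT = 0.5  # the causal effect built into the simulation

def outcome(trait, treated):
    return trait + TRUE_EFFECT * treated + random.gauss(0, 1)

# Observational comparison: opt-in users vs everyone else (confounded).
obs_treated = [outcome(t, True) for t, s in population if s]
obs_control = [outcome(t, False) for t, s in population if not s]
obs_estimate = statistics.mean(obs_treated) - statistics.mean(obs_control)

# Randomised controlled trial: a coin flip decides treatment, breaking
# the link between the hidden trait and assignment.
rct_treated, rct_control = [], []
for trait, _ in population:
    if random.random() < 0.5:
        rct_treated.append(outcome(trait, True))
    else:
        rct_control.append(outcome(trait, False))
rct_estimate = statistics.mean(rct_treated) - statistics.mean(rct_control)

print(f"observational estimate: {obs_estimate:.2f}")  # inflated by the confounder
print(f"RCT estimate:           {rct_estimate:.2f}")  # close to the true 0.5
```

The observational estimate roughly quadruples the true effect here, which is exactly the kind of spurious conclusion that well-designed randomisation is meant to rule out.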

Balancing innovation with ethical oversight

As we navigate the challenges and opportunities presented by algorithmic societies, both Halpern and Crosas emphasise the key role of ethical considerations and well-designed experiments in understanding and shaping our digital futures.

“What would it mean to demo without death?” Halpern asks. “What would it mean to use these experimental structures in ways that can reattach or produce new forms of care, whether it’s for more than human forms of life or for what constitutes or enhances the idea of the human? And how might we use these histories to inform and rethink how we’re implementing smart systems and, of course, AI into our built environment?”

The paradoxical role of computers in society

The increasing prevalence of ML algorithms and AI technologies in our digital societies has reignited discussions about the paradoxical role of computers, a concept introduced by computer scientist Joseph Weizenbaum back in 1983. “The computer has long been a solution looking for problems, and the ultimate technological fix, which insulates us from having to look at problems,” says Albert Sabater Coll, director of the Observatory for Ethics in Artificial Intelligence of Catalonia.

This paradox becomes particularly apparent when considering the impact of algorithms on social interactions and relationships within digital communities. While these technologies offer unprecedented opportunities for connection, communication, and information sharing, they also raise significant ethical concerns about privacy, manipulation, and the potential erosion of the social fabric.

“Merging tech such as generative AI and quantum computing could make such scenarios far more interesting and also far more effective,” Sabater Coll warns. “So we need to worry about that, and we need to think about that.”

The paradoxical role of computers is further complicated by the human tendency to be influenced by emotional appeals and to believe information that’s not necessarily true. As Sabater Coll explains, “Humans not only respond to the objective features of their situations, but also of their own subjective interpretations of those situations, even when these societies or these beliefs start thinking about things that are factually wrong.”

This vulnerability to misinformation and manipulation becomes even more concerning when considering the ability of algorithms to exploit these weaknesses more effectively than humans. With the accumulation of vast amounts of data, machines can now test and refine content and ideas to maximise their influence on human behaviour and decision-making.

“As we navigate the challenges and opportunities presented by algorithmic societies, it’s important to consider the ethical implications of conducting experiments with algorithms and data that can potentially affect the social fabric,” Sabater Coll says. “The present moment seems to be another period in which such discussions are difficult to ignore, and that’s one of the main reasons why we need to talk about these issues.”

A smart city in South Korea

Songdo International Business District, situated on 1 500 acres of reclaimed land along Incheon’s waterfront in South Korea, represents one of the most ambitious smart city projects globally. Initiated in 2002 with an estimated investment of $35 billion, Songdo was envisioned as a futuristic urban centre integrating advanced technologies to enhance efficiency, sustainability, and quality of life.

The city’s infrastructure boasts several innovative features. A pneumatic waste disposal system transports trash directly from residential units to processing facilities, eliminating the need for garbage trucks. Extensive sensor networks monitor temperature, energy consumption, and traffic flow with the goal of optimising resource management and reducing environmental impact. Songdo also offers smart-tech homes and globally connected school classrooms that reflect its commitment to integrating technology into daily life.

Despite these advancements, Songdo has encountered significant challenges. The city remains sparsely populated, with occupancy rates lower than anticipated. Factors contributing to this include high living costs, a lack of cultural and social attractions, and a top-down planning approach that overlooks the need for organic urban growth.

Ethical considerations have also emerged regarding the extensive data collection inherent in Songdo’s design. The pervasive monitoring intended to enhance efficiency and security raises questions about privacy and the potential for intrusive surveillance. It also exemplifies broader debates about the balance between technological advancement and individual rights within smart cities.
