Impact of AI technologies – media hype or real concern?

Author: Tony Day, Portman Associate, BSc Dip Arch (Hons) RIBA

Artificial intelligence (AI) and machine learning (ML) were the prime focus of the DCD Connect¹ event held in London earlier this month. This followed a year in which news media regularly reported urgent calls, from senior officers of companies developing AI as well as from governments and regulators around the world, for greater regulation of the use of these technologies.

AI developers suggest their technologies are already delivering social and economic benefit in many areas, potentially with an even greater upside to come. But there are also concerns that, without adequate safeguards, individuals and entire societies face a wide range of potential harms, including fraud, identity theft, distribution of undesirable material, discrimination, fake news, invasion of privacy, violations of human rights, threats to personal safety, copyright infringement, illegal interference in democratic processes, cyber-attacks, and national security breaches.

Following the November 2022 release of OpenAI’s ChatGPT, a generative AI chatbot built on a large language model (LLM) neural network, generative AI has seen exceptional growth in its use. By January 2023 ChatGPT had become the fastest-growing consumer software application to date, with over 100 million users, eclipsing all previous internet product launches from the likes of Facebook, Spotify, and Netflix.

In addition to its core function as a chatbot, it can write and debug computer programs, answer test questions, translate, and write poetry and song lyrics, although it still suffers from the hallucination problem common to LLMs: the tendency to produce plausible-sounding but incorrect or nonsensical answers.

DALL-E, Midjourney, and Stable Diffusion are examples of generative image models that create art and photo-realistic images from a description in natural language, allowing anyone to create things they previously would not have been capable of making. But models of this type are first trained by analysing millions, if not billions, of images of digital art that already exist on the web, raising ethical questions around copyright and potential loss of business for the original artists whose work may have been unintentionally copied.
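By way of illustration only, the short Python sketch below shows how little code is now needed to drive such a text-to-image model. It assumes the openly released Stable Diffusion weights, the Hugging Face diffusers library, and a CUDA-capable GPU; the model identifier and prompt are purely illustrative and not taken from this article.

    # Minimal sketch: generating an image from a natural-language prompt
    # with an openly released Stable Diffusion checkpoint (illustrative only).
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly available checkpoint; the identifier below is an example.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

    # Turn a plain-English description into a photo-realistic image.
    image = pipe("a photo-realistic lighthouse on a rocky coast at dawn").images[0]
    image.save("lighthouse.png")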

March 2023 saw several AI industry leaders in the US call for a pause on all AI developments more advanced than the current version of ChatGPT. For some, the concern is that in its latest variant this application is already nudging rapidly toward the level of artificial general intelligence (AGI): an AI that can perform all human cognitive skills, such as reasoning, learning and problem-solving, in any or all fields, and which by definition would exceed the abilities of any individual human.

In May 2023, along with other AI experts, OpenAI CEO Sam Altman appeared as a witness before a US Senate subcommittee² hearing on potential rules for AI, stating that “as this technology advances, we understand that people are anxious about how it could change the way we live. We are too”, adding that “if this technology goes wrong, it can go quite wrong” and cause “significant harm to the world.”

The ability of machines to match humans in general intelligence has been predicted since the invention of computers in the 1940s. Enthusiastic overprediction by some early pioneers has frequently been followed by slower progress than anticipated, mainly because the technical difficulties of constructing human-level general machine intelligence were underestimated. Today AGI is the near-term goal of many researchers, with ASI (artificial superintelligence, an intellect that greatly exceeds the cognitive performance of humans in any or all fields) the long-term goal.

Such a step was speculated upon in 1965 by mathematician I.J. Good³, formerly chief statistician in Alan Turing’s code-breaking team at Bletchley Park during WWII. He proposed that since the design of machines is an intellectual pursuit carried out by man, an ultraintelligent machine would be able to design even better machines than man could. This “unquestionably” would lead to an “intelligence explosion” in which “the intelligence of man would be left far behind”. This ultraintelligent machine would thus be “the last invention that man” ever needed to make, with the proviso that “the machine is docile enough to tell us how to keep it under control.”

The significant existential risk arising from Good’s speculations has fed the imagination of Hollywood screenwriters, leading to the creation of characters like the ‘Terminator’ and fictitious projects such as ‘Skynet’. Is it desirable, or even possible, to create a superintelligence, and if so, how long would it take?

The publication of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom⁴ in 2014, and his subsequent presentations at IP Expo Europe⁵ in 2016 and to the UK Parliament in October 2017, have done much to raise awareness among regulators, governments, and industry around the world of both the potential of, and the existential risk from, any ‘intelligence explosion’ following the creation of a superintelligent agent (ASI).

Bostrom suggested that “the existence of multiple paths increases the probability that the destination can be reached via at least one of them” and that ASI would follow quickly, perhaps within hours, days or only a few years of achieving AGI. It would therefore be crucial to ensure that the goals of a superintelligence are closely aligned with our own, and that the control problem referred to by Good is already solved and ready to implement before this point is reached.

Expert opinion regarding timelines to achieve these goals varies widely: for AGI, from 2030 to 2050, and for some never, but nobody knows. Similarly, predicted end outcomes for humanity range from extremely good to as bad as human extinction. Hence the concern to consider all the potential dangers of near- and long-term goals now, and how we might avoid them, rather than leaving it until it is already too late.

In September 2023 The Sunday Times published a ‘book of the week’ review beneath the headline “The pioneer of AI at Google is having second thoughts. Fearing his invention will spawn terrorism and superviruses within years, he wonders… What on earth have I unleashed?”

James McConnachie, reviewing the book The Coming Wave: AI, Power and the 21st Century’s Greatest Dilemma by Mustafa Suleyman with Michael Bhaskar⁶, states that Suleyman “has been having second thoughts” since selling DeepMind to Google in 2014, and that “the combination of AI with synthetic DNA and other technologies”, like “robotics and super-powerful quantum computing”, “is driving an ever-accelerating wave of change and threat that will burst upon us within not decades but years”.

Back in 2010, when DeepMind was established with the ambitious goal of replicating human intelligence, AI was still not taken that seriously by many. The work of researchers in this field was thought to be a niche endeavour, or something falling more within the realms of science fiction. Now AI is in use everywhere, in products, services, and devices we use and interact with many times every day, and it already outperforms human intelligence in many individual tasks. Mustafa Suleyman predicts that it will “reach human-level performance across a very wide range of tasks within the next three years”, with “truly profound” implications.

He views this transition period, already underway, as a smooth, gradual evolution in which AI systems become increasingly capable, consistently nudging towards AGI. He argues that we need a new concept, “artificial capable intelligence” (ACI), to describe a middle layer in which an AI achieves goals and tasks with minimal human oversight, prior to full AGI.

Suleyman says the potential threat from a machine with superintelligence is “a colossal red herring”. Rather, the threat is much more imminent and will result from the combination of AI with other fast-developing technologies such as synthetic biology, robotics and super-powerful quantum computing, as noted above.

He stresses the need for both regulation and containment, setting out ten possible steps this might take “to create sensible rate-limiting factors, checks on the speed of development, to better ensure that good sense is implemented as fast as the science evolves.” He adds that “regulation alone doesn’t get us to containment, but any discussion that doesn’t involve regulation is doomed.”

Regulation

Why have we seen so many AI companies themselves asking authorities to regulate them? I would suggest simply because they understand that regulation is both necessary and inevitable. Better to engage voluntarily in assisting the drafting of any regulations than to have a regulator who does not fully understand the technology drive them to an unacceptable place by imposing potentially restrictive or draconian rules upon them.

Among the big tech companies there may also be an element of seeking to monopolise markets, excluding competition while increasing their own control and dominance. The traditional industry pattern to date has been for leading-edge innovation and invention to originate with individuals or small start-up enterprises. Once the risky ‘proof of concept’ stage and a degree of commercial take-up are achieved, the big players move in to commoditise and deliver the solution at scale. I believe it would be a mistake to hinder this successful process.

Mustafa Suleyman suggests that “China, on the face of it, is a regulatory leader of sorts. The Government has issued multiple edicts on AI ethics, seeking to impose wide-ranging restrictions” in a way that “far exceeds anything we’ve yet seen in the West”. “Its regulation is matched by an unparalleled deployment of technology as a tool of authoritarian government power.” “Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one.”

The proposed EU Artificial Intelligence Act places AI governance with a central regulatory body and is much broader in scope than China’s rules, scaling the level of regulation according to how potentially harmful individual AI solutions might be.

The proposed Algorithmic Accountability Act of 2022 in the USA would require companies to assess the impacts of AI, but the national AI framework is so far voluntary. Senate subcommittee chair Richard Blumenthal² said AI companies must proceed with a “do no harm” approach to ensure rules are introduced that are both effective and enforceable.

Rejecting the earlier calls from some industry leaders for a moratorium on new developments, he conceded there would be no pause in AI development while regulators catch up: “The world won’t wait”, and “sticking our head in the sand” is not the answer. In response to Sam Altman’s proposal for a new regulatory agency for AI, he said this presented challenges in providing adequate resources, both financial and in scientific expertise; otherwise private companies will “run circles around” the government.

Witness Christina Montgomery, IBM’s Chief Privacy and Trust Officer, suggested the “EU rules on AI regulating by context” would provide a good lead for US regulators to follow.

The UK approach is not to give responsibility for AI governance to a single new regulator. Rather, the government proposes that existing regulators such as the Health and Safety Executive (HSE), the Equality and Human Rights Commission, and the Competition and Markets Authority develop their own approaches to suit their own sectors. These regulators will also rely on existing laws rather than being given any new powers.

A government white paper outlines five principles that regulators should consider in order to enable safe and innovative use of AI. Over the next year the regulators will issue practical guidance to organisations setting out how to implement these principles.

This light-touch approach has already been criticised in comparison with the approaches taken by other nations. There are questions over the ability of the existing regulators to handle this new role without greater resourcing, given the scope of the challenge in regulating such a rapidly evolving technology.

Conclusion

Application programming interfaces (APIs) have helped to put state-of-the-art machine learning capabilities in the hands of non-specialists. Whilst this opens many exciting possibilities and potential benefits now and for the future, there is another side to this technology which raises serious concerns regarding potential use by bad actors.
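By way of illustration only, the short Python sketch below shows how few lines of code a non-specialist now needs to put a state-of-the-art LLM to work through a hosted API. It assumes the openai Python package (v1.x) and an API key set in the environment; the model name and prompt are purely illustrative and not taken from this article.

    # Minimal sketch: querying a hosted LLM through a public API (illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise the main arguments for regulating AI."},
        ],
    )

    # The generated text is returned in the first choice of the response.
    print(response.choices[0].message.content)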

All governments reacted far too slowly to regulating social media; that rabbit is already out of the hat, and we are now reactively trying to fix the numerous problems and issues it has created.

By comparison, how we effectively regulate and control AI around the globe remains a wide-open question, but a failure to do so will have far greater consequences for humanity. Standards will play an important part, but by their very nature they only codify what we already know, not the cutting edge, and they are slow to amend, typically taking years for new iterations, while this technology is developing very quickly.

Some have proposed forming a ‘global body’ with ultimate oversight, integrating individual nations’ efforts into a code of practice for the benefit of all. Unfortunately, given the current serious geopolitical situation in Europe and the Middle East, it is hard to see the necessary trust and goodwill to collaborate on such a goal existing at this time.

Yes, there has certainly been a degree of media hype around AI, but there are also serious concerns that urgently need to be addressed.

Finally, are you likely to be confronted by a naked terminator materialising in a car park near you and demanding your clothes and keys anytime soon? Absolutely not.

Notes

  1. DCD>Connect London 2023, InterContinental London - The O2.
  2. Senate Judiciary Subcommittee on Privacy, Technology and the Law, hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’, Capitol Hill, Washington, DC, 16 May 2023.
  3. Irving John Good, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers, Vol. 6, edited by Franz L. Alt and Morris Rubinoff, pages 31-88, New York: Academic Press (Elsevier).
  4. Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, Director of the Future of Humanity Institute, Director of the Strategic Artificial Intelligence Research Centre, and Professor in the Faculty of Philosophy and the Oxford Martin School at the University of Oxford. First published in hardback 2014 and in paperback 2016 by Oxford University Press.
  5. IP Expo Europe, 5-6 October 2016, ExCeL, London.
  6. The Coming Wave: AI, Power and the 21st Century’s Greatest Dilemma by Mustafa Suleyman, co-founder of pioneering AI company DeepMind (founded 2010, acquired by Google in 2014), VP of AI product management and AI policy at Google, and co-founder and CEO of Inflection AI, with Michael Bhaskar, writer, tech commentator, and publisher. Published by Bodley Head, 2023.

About the Author

Following deregulation of the power and telecoms markets in the 1990s, Tony Day BSc Dip Arch (Hons) RIBA joined Waterfields Limited, one of the first datacenter design and build companies in Europe, as Technical Director, and was responsible for the planning, design, and construction of over six million sq ft of datacenter space across the UK and Europe.

Tony joined APC in May 2003 as Chief Engineer, Rack Cooling Solutions, and following its acquisition by Schneider Electric in 2007 became a founding member of Schneider’s Innovation Solutions Group within the office of the CTO, working directly with clients on datacenter D&B projects including Ferrari, Sauber F1 (Hinwil), Deutsche Bank and Capgemini.

A few other key highlights in Tony’s career include presenting datacenter infrastructure technology at workshops for Iraq Government ministries in Damascus in 2006, and similar workshops for various clients in Dubai. He was also an industry consultancy development team member for Masdar City in Abu Dhabi, UAE, in 2009; a new city created in the desert and intended to become an exemplar as one of the world’s most sustainable cities.

He represented Schneider Electric in The Green Grid, where he was a member of its work groups, and on the original EU Code of Conduct: Energy Efficiency for Datacenters committee. He is a former company representative on the Birmingham City University Industrial Advisory Board and former Chair of the Datacenters Professionalism Group at industry body techUK.