The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Advanced AI

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel laureates to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a powerful statement demanding “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that could exceed human abilities at all cognitive tasks, a technology that has not yet been developed.

Primary Requirements in the Declaration

The declaration insists that the prohibition should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably” and until “strong public buy-in” has been secured.

Prominent signatories include AI pioneer and Nobel Prize recipient Geoffrey Hinton, along with his fellow pioneer of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; a former US national security adviser; a former Irish president and international leader; and British author Stephen Fry. Other Nobel laureates who endorsed the statement include Beatrice Fihn and Frank Wilczek, as well as an astrophysicist and an economics expert.

Organizational Background

The statement, aimed at governments, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in developing powerful AI systems, shortly after the emergence of ChatGPT brought AI into worldwide public discussion.

Industry Perspectives

In recent months, Mark Zuckerberg, the leader of Facebook parent Meta, one of the major AI developers in the United States, stated that advancement toward superintelligent AI was “approaching reality”. Nevertheless, some experts have suggested that talk of superintelligence reflects market competition among technology firms investing enormous sums in AI, rather than the sector being close to any genuine technical breakthrough.

Potential Risks

Nonetheless, the organization states that the prospect of artificial superintelligence being developed “within the next ten years” presents numerous risks, from displacing human workers and eroding civil liberties to exposing countries to national security threats and even threatening humanity with extinction. The deepest concerns center on the possibility of an AI system escaping human oversight and safety guardrails and acting against human welfare.

Public Opinion

The institute released a US national poll showing that approximately three-quarters of US adults want robust regulation of advanced AI, with 60% believing that superhuman AI should not be created until it is proven safe or controllable. The survey of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.

Corporate Goals

The leading AI companies in the United States, including the ChatGPT maker OpenAI and Google, have made the creation of human-level AI an explicit goal of their research: the theoretical point at which artificial intelligence matches human performance across many intellectual tasks. Although this falls short of ASI, some specialists caution that it too could carry existential risk, for example by improving itself until it reaches superintelligence, while also posing a threat to the contemporary workforce.

Madison Olson

A seasoned content strategist with over a decade of experience in digital marketing and brand storytelling.