24 February 2024

Lahore, Pakistan

Is artificial intelligence challenging human authority with its own choices and concepts?

Yuval Noah Harari believes artificial intelligence can challenge human authority; Dr Pervez Hoodbhoy believes AI is outpacing human control, which could pose a ‘serious threat’ to humanity

“Artificial intelligence (AI) is the first invention in human history to make choices and come up with concepts on its own, potentially challenging human authority,” according to Yuval Noah Harari, a public intellectual, author, historian and Professor of History at the Hebrew University of Jerusalem. Physicist, public intellectual and author Dr Pervez Hoodbhoy believes that artificial intelligence has the potential to revolutionize many industries, but he warns of the possibility of AI becoming too advanced and outpacing human control, which could pose a ‘serious threat’ to humanity.

Artificial intelligence raises concerns about its ‘impact’ on human culture, as it has the ability to generate new ‘holy’ books and other forms of information. While the long-term ‘threat’ of AI surpassing its current capabilities remains uncertain, the immediate concern lies in the use of AI technologies for propaganda and tailored trolling during elections, which could have serious consequences. While human intellect is a vital attribute, computers have the advantage of faster learning and more efficient communication of information. Whether machines will eventually surpass human intelligence is both speculative and unknowable.

‘While human intellect is a vital attribute, computers have the advantage of faster learning and more efficient communication of information’

Some argue that we should treat these machines properly, while others point to humanity’s treatment of other species, given the potential that machines could become ‘smarter’ than humans. The intellectuals highlight worry about both immediate hazards, such as the demise of democracy, and the potentially existential threat posed by AI. The possibility of AI evolving at a rate far faster than biological evolution, which took billions of years to progress from the amoeba to the human, is also discussed. They inquire about timetables for addressing these ‘dangers’ and the course of action governments should take. The timetable is unpredictable; it might take only a few years for AI to create machines that are just as intelligent as people.

Religions may change as a result of GPT’s potential to produce new ‘sacred’ texts. Artificial intelligence has the potential to close the gap between its current capabilities and human intelligence: in certain areas, machines might surpass humans in intelligence, while remaining less intelligent overall. The seriousness of AI and its ‘dangers’ to democracy are discussed by Yuval Noah Harari, who is equally concerned about immediate dangers, like democracy collapsing, and the ‘existential’ threat. He contends that since AI develops considerably more quickly than biological evolution, it might take only a few years until we have the knowledge necessary to create computers that are just as intelligent as humans.

‘It might take a few years for artificial intelligence to create machines that are just as intelligent as people’

Understanding how to prevent these outcomes, and how quickly policymakers must respond, is challenging because of the ambiguity surrounding the timetable. It is the duty of governments to be ready for the worst-case situation, which could arrive within five years. The transition to a more adaptable society is crucial, and governments should start as soon as possible to minimize the risks. A letter signed by tech leaders calls on the companies behind the technology to pause its development, as it poses an existential risk to humanity. Human societies are adaptable, but building successful societies takes time and effort. The current ‘threat’ is even more powerful than the industrial revolution’s inventions, such as trains, radio and electricity.

For computer scientist Professor Yoshua Bengio, one reason for signing the letter was to draw attention to the issue of AI in industrial societies. Humanity has already survived failed experiments, such as those of the industrial revolution, thanks to its power and ability to prevent destruction. However, it is crucial to avoid further such experiments and take things more slowly to address the challenges. The incentive system that works reasonably well for industrial societies is based on competition, and companies must balance ethics and social values to survive. The reason for signing the letter is to draw attention to the ‘dangers’ of AI and the responsibility of governments to regulate this ‘risky’ development.

‘Politicians often focus on AI’s positive aspects, but it’s crucial to recognize their responsibility to protect the public from short-term and long-term risks’

Mainstream political debate is not addressing the immediate concerns of everyday life, such as AI making decisions about jobs and employment. This should be a top priority in every election campaign, as it affects our ability to understand rejections and the consequences of AI decisions. Politicians often focus on the positive aspects of AI, but it is crucial that they protect the public from both short-term and long-term risks. Regulating AI’s deployment into the public sphere is urgent and comparatively easy, as it requires simple rules to ensure AI cannot counterfeit humans. Just as there are laws against releasing powerful new medicines or vehicles without safety checks, similar laws should be in place against faking humans.

Currently, the deployment side of AI is unregulated, as existing laws about data and communication were not designed to address the counterfeiting of humans. The moral duty is to think through the challenges and adapt, much as with climate change. Some people believe in AI’s potential benefits, such as improving health, education and responses to climate change. Yuval Noah Harari believes we have a ‘moral’ duty to pursue the optimal solution: we created the ‘problem’, and we have the power to solve it. The worst-case scenario is avoidable, and we still have the power to prevent it. Intelligence is a valuable trait, but it can also lead to negative consequences.

Artificial intelligence, for example, may be incredibly stupid, and the collapse of democracy would be the result of bad human actors using AI in a malevolent way. To avoid this, it is essential to focus on understanding and developing our minds as well as AI, so as to avoid a totalitarian regime. Some fear that discussing AI in these terms may alienate the very people who need to buy into the solution. To address these concerns, it is crucial to focus on current issues in society, such as discrimination and injustice. The job market should be a central concern: AI may create new jobs, but the shift requires a transition and the retraining of people.

‘The job market should be a central concern: AI may create new jobs, but it requires a transition and the retraining of people’

Despite the challenges, it is essential to focus on addressing these issues and ensuring that solutions converge. Professor Yoshua Bengio highlights a deficiency in the political system: few individuals within it have a deep understanding of technology and its impact, since most tech-savvy people focus on business rather than on politics, public awareness or the regulation of technology. Discussing AI’s impact on jobs, including the audience’s own jobs, is a tangible way to ground the ‘threat’ it poses, much as with climate change. Some argue that modifying jobs could increase productivity, while others argue that AI tools could reduce the number of programming jobs. Predicting these outcomes is challenging, and some argue that AI tools are not a significant concern.

Society changes slowly, and AI technology could take years or decades to reshape the job market. While some argue that AI is different once it performs better than humans, its potential to improve various jobs is still uncertain. The transition to AI is difficult, especially at the global level. The AI ‘revolution’ is led by a few countries, which may become rich and powerful, while less developed countries may face economic disruption. The gains from the AI ‘revolution’ may help cushion the blow for those who lose their jobs and enable them to retrain, but the outcome could resemble the industrial ‘revolution’, when a few countries ‘conquered’ the world. The automation and AI ‘revolutions’ present risks of political control, making it crucial for citizens to be aware of these immediate dangers.