Home Insider Articles Governance of Ai

Governance of Ai

Chatgpt was released in Nov 22. It gathered a million users within a week. And 100 million within 2 months. This generative ai has been so good and powerful that the potential of a doomsday scenario has been voiced out by many including Elon Mask and Steve Wozniak.

Prior to the launch, in August 2022, ai Impacts, US research group, surveyed 700 machine learning researchers about their predictions on ai risks. The result yielded a 5% probability of ai causing ‘extremely bad’ outcome, such as human extinction.

Li Fei Fei, ai luminary at Stanford, talks of a ‘civilisational moment’ for ai. Geoff Hinton, ai bigwig from Uni of Toronto, said, the judgement day is not inconceivable. Robert Trager of Centre for Governance on Ai, said one risk of such large language models is “making it easier to do lots of things – and thus allowing more people to do them (harm)”

In a recent survey of super forecasters and ai experts, median ai expert gave 3.9% chance to an existential catastrophe (<5,000 humans survived) owing to ai by 2100. Median super forecaster, gave only 0.38%. The difference may be probably be due to field selection bias by ai researchers.

So, how do we control ai?

Before Chatgpt4 (C4), openai used several approaches to reduce risk of accidents and misuse. One is called ‘reinforcement learning from human feedback’ (rlhf). Rlhf asked humans to provide feedback on whether the model response to a prompt was appropriate. Then the model is updated based upon feedback. The goal is to reduce the likelihood of producing harmful content when given similar prompts in the future. The drawback is that humans often disagree about what counts as ‘appropriate’. Rlhf also made Chatgpt4 far more capable in conversation, thus propel the ai race.

Another approach, borrowed from war gaming, is ‘red-teaming’. Openai worked with Alignment Research Centre (ARC) to put its model thru a battery of tests. The red-teamer job was to attack the model by getting it to do something it should not, in the hope of anticipating mischief in the real world.

Another idea is to use ai to police ai. Sam Bowman of New York University and of Anthropic ai firm, has written on topic like “Constitutional Ai”, in which a secondary ai model is asked to assess whether the output from main model adheres to certain ‘constitutional principles’.

In general, governments can approach to control ai using one of the following three strategies:

I. Light touch – with no new rules or regulatory bodies. The approach is to apply existing regulations to ai systems. e.g., UK, US.

II. Tougher line – The government would create law categories for different uses of ai classified according to risk, with stringent monitoring and disclosure. Some ai to be banned such as subliminal ad and those with remote biometrics. The government would impose fines for non-compliance. E.g., EU.

III. Toughest – Under this strategy, the government would treat ai like medicines, with a dedicated regulator, strict testing and pre-approval requirements. E.g., China, where ai has to undergo security review before release.

Even if efforts to produce safe models work, future ai models could work around them. E.g., ai models have already made new discoveries in biology. Not inconceivable that one day design dangerous biochemicals themselves.

General attitude of the world seems to be better safe than sorry. Dr Li of Stanford thinks we ‘should dedicate more, much more resources’ to research on ai alignment and governance. Dr Trager of Centre for Governance on Ai, on the other hand, supports creation of bureaucracies to govern ai standards and do safety research.

In the meantime, ai researchers supporting much more funding for safety research has grown from 14% in 2016 to 35% in 2023. ARC is also considering developing a safety standard for ai.

Immediate impacts before the judgement day

The probability of the end of the world may be low enough to cast aside, yet everyone seems to agree that the immediate impact of ai would be on jobs. The big tech have already retrenched tens of thousands of staff in the last twelve months alone. These jobs are not coming back. Tyna Eloundou of openai and colleagues estimated that ‘around 80% of US workforce could have at least 10% of their tasks affected by intro of llms. Based on Ms Eloundou’s estimates, ai would result in a net loss of around 15% of US jobs. Some could move to industries experiencing workers shortages, s.a. hospitality. A big rise in unemployment would follow, may be up to 15% reached during covid.

Edward Felten of Princeton University and colleagues, conducted a similar exercise, legal services, accountancy, travel agencies come out at or near the top of professions most likely to lose out. According to him, 14 of top 20 occupations most exposed to ai are teachers.

Goldman Sachs prediction is somewhat more positive, stating the widespread adoption of ai could drive 7% or almost 7T increase in annual global gdp over 10 years period. Academic studies predict 3% rise in annual labor productivity in firms that adopt ai.

Another concern from this has been who would eventually benefit most from Ai. Ai profits could end up in just one org – openai. Generative ai has some real monopolistic characteristics. C4 reportedly cost more than $100m to train. There is also a lot of proprietary knowledge about data for training the models plus the users feedback.

Should you be worried about that job loss?

In areas of economy with heavy state involvement such as healthcare and education, technology change tends to be super slow. Government may have policy goals such as maximisation of employment levels, that are inconsistent with improved efficiency. These industries are likely to be unionised and unions are good at preventing job losses, according to Mark Andreessen of Andreessen Horowitz. Only the bravest government would replace teachers with ai.

A paper by David Autor of MIT and colleagues, said about 60% of the jobs in today America did not exist in 1940. The ‘fingerprint technician’ was added in 2000. ‘Solar photovoltaic technician’ in 2018. Ai economy is likely to create new occupations which today cannot even be imagined. Personal computer was invented in 1970s. In 1987, Robert Solow, an economist, famously declared that computer age was ‘everywhere except for productivity stats’.

Jobs beyond the reach of Ai are blue collar works, such as construction and farming, accounting for 20% of rich world GDP and in industries where human to human contact is an inherent part of the service, such as hospitality and healthcare.

In summary, we can be less concerned about job losses and individual impacts of Ai. We should be more concerned about the balance of power, changing of the society and nations and destruction of them, simply based on the extrapolation of how damaging social media is on Myanmar alone. Just imagine, C4 is a godsend for a nimby fighting against a government plan or a development program. In five minutes, he can produce a well written 1000 page objection. Someone then has to read and respond. Spam emails would be harder to detect. Fraud cases would soar. Banks will need to spend more on preventing attacks and compensating people who lose out. Combine that with auto creating of comments in social media, it would bring fake news and Mal-information to a whole new level. That’s exactly the future without the strict governance of Ai!