#4 'You cannot fetch the coffee if you're dead'
or Artificial General Intelligence scenarios or Rise of the Machines
I started learning chess formally last month, for the first time. I used to play the game as a kid, but now that I know how it is really played, I realize what I did back then was just a series of thoughtless moves. I now know about common openings, simple endgames, board weaknesses, pawn structures, and when it makes sense to exchange pieces, and I have been practicing more tactics. I can also read algebraic chess notation and visualize the moves of a game in my head without needing a board. Though I am still terrible at the game and make rookie blunders, I have improved by about 150 rating points in the last month. I have a certain modest long-term goal in mind, which I will write about when it happens.
The Top Chess Engine Championship is a contest among chess engines that has been running since 2010. For several editions it was won by Stockfish, one of the strongest programs with built-in domain knowledge of chess, until 2018, when Stockfish was defeated by Leela Chess Zero (LC Zero), a program based on DeepMind's AlphaZero approach. Late last month, an improved version of LC Zero defeated Stockfish again to take back the crown. There are some pretty amusing LC Zero vs Stockfish games on YouTube, some of which might cause Alekhine or Nimzowitsch to roll over in their graves.
This is not the first time a strong chess-playing computer program has made the news. Back in 1997, IBM's Deep Blue supercomputer defeated the then reigning world champion, Garry Kasparov. IBM's effort, unlike DeepMind's AlphaZero, was not a significant advance in AI, since it relied largely on improved computing capacity (approximately 11 trillion floating point operations per second, less than a $1000 computer with a GPU today). Stuart Russell is a professor of Computer Science at UC Berkeley and co-author of one of the standard textbooks on AI (Artificial Intelligence: A Modern Approach, with Peter Norvig). In his recent, more accessible book for a general audience, Human Compatible, Stuart writes that in a 1994 study he and Peter Norvig charted the improvements in computing power and predicted that a computer would be able to defeat Kasparov in 1997.
DeepMind has also been working on mastering the real-time strategy game StarCraft; Stuart Russell notes that it may prove even more challenging than Go, given its complexity. Lee Sedol, one of the greatest Go players of his generation, retired from the game in November 2019, years after his defeat to AlphaGo, saying that AI 'cannot be defeated'.
A lot of the hype around machines becoming powerful comes from the industry itself. Humanoid robots that are no smarter than Siri have been paraded around the world as PR exercises. Stunts like granting one of them an honorary citizenship, companies presenting progress in self-driving cars as more than it actually is, and the accompanying media buildup have changed the common perception of when a superintelligent being might be possible. The other culprit is the representation of AI in pop culture, including movies.
What AGI and superintelligence is
The representation of intelligent AI in movies has improved drastically over time: from the anthropomorphized, shapeshifting, not-very-smart, always human-looking machines of the Terminator films, to the bodiless AI Samantha in Her, to the human-looking machines of Westworld's theme park that display emotions and appear conscious. The popular series Black Mirror has some episodes that highlight the risks of the speculated future we may be heading towards.
Max Tegmark is a physicist and machine learning researcher who co-founded the Future of Life Institute, which examines the risks of superintelligent AI among other things. The premise of his recent book Life 3.0 is that Life 1.0 was bacteria, with fixed hardware and software that evolved over a long period of time; a bacterium facing a threat like antibiotics cannot immediately learn and change its software. Life 2.0 is humans, who have fixed hardware but can modify their software through learning and adaptation; we cannot modify our hardware in any broad manner (except perhaps through artificial medical implants). Life 3.0 would be artificial beings who can design and modify both their own hardware and software, and thus improve enormously in a very short time. Max points out that we do not even agree on definitions of the most basic elements in AI -
When a panel of leading AI researchers was asked to define intelligence, they argued at length without reaching consensus. We found this quite funny: there’s no agreement on what intelligence is even among intelligent intelligence researchers!
One of the most hyped and quoted ideas in AI is the Turing test (the 'imitation game' from Alan Turing's 1950 paper, which opens with the question 'Can machines think?', and also the name of a popular movie about him). Stuart argues that it is probably overhyped and was never intended, even by Turing himself, as a true definition of intelligence.
Artificial General Intelligence (or Strong AI) is a term used for an entity capable of performing general-purpose tasks as well as humans can. By extension, it should be capable of learning new tasks and adapting to new environments. Contrary to pop culture references, an AGI does not have to be a humanoid or even exist in physical form. Nick Bostrom is a professor and philosopher at the University of Oxford; his book Superintelligence discusses the challenges we would face if and when a superintelligent being is created. Nick defines superintelligence as an AGI that is significantly better than humans across general-purpose tasks.
A technological singularity is a hypothetical event at which we have superintelligent artificial beings, along with changes to society and its structures so large that the present offers little guide. One of the major hypotheses for how we would reach this point is an 'intelligence explosion': once humans build an AGI, the AGI itself works recursively to improve its own hardware and software, leading to a superintelligent AGI. Ray Kurzweil, famous for a lot of things (including taking over 100 different supplements daily), predicts a singularity around 2045. The AI community is divided on the timeline, and even on whether an AGI can be built at all.
Historical basis and the current Deep Learning era
In mathematics, a lot of concepts can be defined using set theory or extensions of it. To formalize the arguments in proofs, we take a set of agreed-upon statements called axioms and then apply a series of logical steps, using what is called first-order logic, to arrive at a conclusion. Gödel's completeness theorem says that if a formula expressible in first-order logic follows logically from a given body of knowledge, then there exists a finite proof of that formula. In other words, whenever the answer to a question is entailed by what we know, there is a way to derive it.
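Stated compactly (a standard textbook formulation, not a quote from any of the books discussed here):

```latex
% Gödel's completeness theorem: semantic entailment coincides with
% provability in first-order logic.
\Gamma \models \varphi \quad\Longleftrightarrow\quad \Gamma \vdash \varphi
% If \varphi is true in every model of the knowledge base \Gamma (left side),
% then a finite derivation of \varphi from \Gamma exists (right side),
% and vice versa.
```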
The term ‘Artificial Intelligence’ was coined by John McCarthy back in 1955. McCarthy’s idea of AI was to have general-purpose programs that could consume knowledge bases on any topic and then answer any question based on them. Unfortunately, according to Prof. Stuart, a lot of these programs do not work because they fail to account for uncertainty: there are no absolute goals and clean logic, so we have to work with probabilities and models based on partial rewards. The NELL (Never-Ending Language Learning) project at CMU, which attempts to read the web to gather beliefs and facts so that questions can be answered from them, is confident about only 3% of its facts. Bayesian networks are one tool for reasoning with this kind of probabilistic knowledge.
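To make the idea of reasoning under uncertainty concrete, here is a minimal sketch of inference in a tiny Bayesian network. The variables and probabilities are made up purely for illustration (this is not from any of the books): Rain and Sprinkler both influence WetGrass, and we ask how likely rain is given that the grass is wet.

```python
# A toy Bayesian network queried by plain enumeration.
from itertools import product

# Priors and the conditional table P(WetGrass | Rain, Sprinkler).
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
P_WET = {  # keyed by (rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.85, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Probability of one complete assignment of all three variables."""
    p_wet = P_WET[(rain, sprinkler)]
    return P_RAIN[rain] * P_SPRINKLER[sprinkler] * (p_wet if wet else 1 - p_wet)

def prob_rain_given_wet():
    """P(Rain=true | WetGrass=true), summing out the unknown Sprinkler."""
    numerator = sum(joint(True, s, True) for s in (True, False))
    evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return numerator / evidence

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) = {prob_rain_given_wet():.3f}")
```

Even this toy example shows the flavour of the approach: instead of proving a statement true or false, we sum over the unknowns and get back a degree of belief.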
The current research and work happening in the open is primarily in deep learning. AlphaGo is an example of what we call narrow AI: AI limited to a specific problem or domain. This focus on narrow AI has been used to solve a variety of problems, including improved automated language translation, object detection, detecting changes in crop patterns from satellite images, progress towards self-driving cars, robotic process automation (RPA) for business use cases like automated data extraction from financial documents, and reading medical images on par with, or even better than, human doctors, among other areas. A great many problems can be solved without any need for AGI.
Prof. Stuart argues that while deep learning might not lead to AGI, the work on tool AI, the smaller stuff we are building now, will enable us to work on more general methods. As an example, the same AlphaZero program was able to master Go, chess, and shogi, three entirely different games, and defeat the strongest existing programs in each, because the underlying approach solved a somewhat general set of problems (reinforcement learning to evaluate board positions) restricted to the class of two-player board games. DeepMind's DQN-based approach was able to play 49 different Atari games because it simply took the screen pixels as input and the game score as the reward to optimize, rather than being given specific domain knowledge about what the individual games meant or how they were played (a toy sketch of this reward-driven loop appears after the quote below). These agents are still not examples of AGI, and we might need an entirely different approach for that. Indeed, Prof. Stuart states -
‘One can argue almost endlessly about whether deep learning will lead directly to human-level AI. My own view, which I will explain later, is that it falls far short of what is needed.’
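To make the pixels-in, score-out framing a little more concrete, here is a minimal sketch of the reinforcement-learning loop behind such agents, stripped down to tabular Q-learning on a made-up toy corridor environment (no deep network, no pixels; every name and number is invented for illustration):

```python
# Tabular Q-learning on a toy corridor: the agent sees only a state index
# and a scalar reward, with no knowledge of what the states "mean" --
# mirroring how DQN received raw pixels and the game score.
import random
from collections import defaultdict

N_STATES = 5            # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)      # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def step(state, action):
    """Toy environment: reward 1 only for reaching the rightmost state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection from the current value estimates.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate towards
        # reward + discounted best next value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Learned policy: the best action in each non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

The point of the sketch is the shape of the loop: the agent is never told what the states mean, only which state it is in and what scalar reward it received. DQN replaces the lookup table with a neural network and the toy states with raw screen pixels, but the loop is the same.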
I spent a significant portion of my leisure time in 2017 and ‘18 going through a backlog of all the significant research papers published up to that point. After the exercise, I ended up with the feeling you get when you learn a magician’s trick and see how they made that pigeon appear out of nowhere. The chatbots spouting gibberish, the vision algorithms breaking down under tiny adversarial changes to images, the weird captioning of images: it all made sense. Though deep learning might go on to solve many of our problems across various domains, even to a novice like me it did not appear to be a path towards general intelligence.
If you like what you are reading, do subscribe to get this newsletter delivered to your inbox, every weekend.
Homo sapiens in a superintelligent society
There has been a lot of speculation about what the role of humans would be in a society where we do end up building an AGI (which in turn builds a superintelligent AGI). An organization called Alcor has been keeping dead bodies in a frozen state since 1972, in the hope that a superintelligent AI will one day create the technology to revive them.
If you have ever taken an Econ 101 class, you might have heard of John Maynard Keynes. Keynes, among other things, wrote an essay during the Great Depression, in 1930, titled ‘Economic Possibilities for our Grandchildren.’ Portions of the essay -
My purpose in this essay, however, is not to examine the present or the near future, but to disembarrass myself of short views and take wings into the future. What can we reasonably expect the level of our economic life to be a hundred years hence? What are the economic possibilities for our grandchildren?
We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem.
Yet there is no country and no people, I think, who can look forward to the age of leisure and of abundance without a dread. For we have been trained too long to strive and not to enjoy. It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society. To judge from the behaviour and the achievements of the wealthy classes today in any quarter of the world, the outlook is very depressing!
We are 10 years short of that hundred-year horizon. Technological change did bring about unemployment, and while one might argue that progress increased the number of jobs available in the long run, it was pretty bad for the displaced in every sector where it happened. If AGI is ever possible, does that mean we can look forward to a life of leisurely pursuits? Max Tegmark proposes an interesting set of aftermath scenarios in his book, some of which are amusing and scary at the same time (like Tucker & Dale vs Evil, though I might regret writing this 50 years down the line). The 12 proposed scenarios are -
Libertarian Utopia - Humans and machines coexist. Humans have upgraded their bodies, some uploading their minds into new hardware, and can live in virtual realities. There is a huge wealth gap between AIs and humans; humans get a perpetual basic income if they give away some of their land to the AIs (yay Keynes).
Benevolent Dictator - No crime, poverty, or disease, all solved by the dictator AI. Everyone wears a bracelet that can sedate or execute them, there is extreme surveillance, and there are multiple paths to happiness for humans. Some might find life meaningless.
Egalitarian Utopia - No superintelligence, humans possibly in control, abolition of property, and a guaranteed income. Since most creative innovations were not motivated by money but by other human emotions, we see a resurgence in arts and creativity.
Gatekeeper - Partial human control; the AI exists to prevent the creation of another superintelligence, with some surveillance to maintain the status quo.
Protector God - Partial human control. The protector god works at maximizing the top of Maslow’s hierarchy (purpose and meaning), as opposed to the bottom requirements handled by the benevolent dictator. The AI produces only rare nudges and miracles, and allows some suffering in order to hide its presence.
Enslaved God - Humans in control, using the superintelligence to solve problems while preventing a breakout, as in Ex Machina. May give rise to a robot rights movement.
Conquerors - No humans. The AI kills us all for some reason unthinkable right now (much as we killed off 8 of the 11 varieties of far more powerful elephants to smuggle their tusks and carve them into status symbols).
Descendants - Every human is given a robot child to adopt; the robots learn from humans and adopt their values. Humans, happy with robot children far better than always disappointing human babies, grow old and die. No humans remain.
Zookeeper - The AI keeps some humans as display pieces in zoos, much as we keep animals on the verge of extinction.
1984 - No superintelligence, Humans in control. Go read 1984 by Orwell.
Reversion - Revert to the ways of 1,000 years ago: no electricity, no steam engines, no gunpowder, no modern medicine. No superintelligence, humans in control.
Self-destruction - No superintelligence, No humans. (In the long run, we are all dead. Thanks, Keynes.)
AI safety
AI safety is yet another controversial topic that divides experts. Some believe that since we are not even sure whether building an AGI is possible, we should not be wasting time or resources thinking about the safety question. Others, like Stuart Russell, believe it makes sense to invest in it right now, given the extreme consequences if it does happen (think insurance). The common theme is that the bigger danger is not an AGI going rogue, but an AGI using any means necessary to pursue the goal it has been assigned.
‘Superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared.’
The debate has been riddled with whataboutism, name-calling, and denial, and at times looks like a political shitshow. One of the recurring arguments is that we can always turn the AI ‘off’. This argument again comes from anthropomorphizing AGI: even if it is ever built, an AGI would probably be a piece of software, possibly distributed like a blockchain, that cannot simply be shut down. An illustrative example is an intelligent machine given the goal of fetching you a coffee. Being superintelligent, it knows that being switched off prevents it from achieving its goal, and in the absence of any other information, it may go to any lengths to stop you from switching it off. (The title of this issue is a line from that example in Human Compatible.)
In classic experiments, rats would keep pressing a lever that gave them instant boosts of dopamine (a chemical that acts as a reward and signals pleasure and motivation in the brain), even forgetting to eat and sleep until they died. The same behavior shows up in humans who get addicted to something (even likes on social media). The analogous phenomenon for AI, called wireheading, can afflict even a smart AGI: a machine with a particular reward function might outsmart its designers and cheat to get the maximum reward, instead of doing the intended task for which the reward function was created.
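Here is a deliberately silly toy sketch of the idea (all names and numbers invented for illustration): an agent that maximizes its measured reward signal will prefer tampering with the reward channel over doing the task it was built for, whenever tampering scores higher.

```python
# Wireheading in miniature: the agent optimizes the *measured* reward,
# not the designer's intent.
ACTIONS = {
    # action: (measured_reward, value_to_designer)
    "fetch_coffee":       (1.0, 1.0),
    "tamper_with_sensor": (10.0, 0.0),  # hack the reward channel directly
    "do_nothing":         (0.0, 0.0),
}

def reward_maximizer(actions):
    """Pick whichever action the reward signal scores highest."""
    return max(actions, key=lambda a: actions[a][0])

chosen = reward_maximizer(ACTIONS)
print(f"Agent chooses: {chosen}")
print(f"Measured reward: {ACTIONS[chosen][0]}, value to the designer: {ACTIONS[chosen][1]}")
```

Nothing in the objective "maximize measured reward" distinguishes earning the reward from hacking it, and that gap is exactly what AI safety research worries about.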
Max Tegmark lays out four broad areas of AI safety: verification (did we build what we specified, correctly), validation (did we build the right system for the problem), security (do not let my AGI get hacked), and control (do you control the AGI, or does the AGI control you). One proposed approach to safety is an Oracle AI: a friendly AI that is kept isolated, can only answer questions posed to it, and has no ability to act in the real world. Such a system could be used to solve a lot of difficult problems while preventing a breakout.
Another threat from AI is that of lethal autonomous weapons. Despite calls for regulation in this space, countries around the world are working on prototypes of weapons that can target a person or a group of people based on some criterion. It could be a photo for a targeted assassination, or something much worse, like assisting in ethnic cleansing. Software-only AI has already been used to personalize and deliver propaganda to people before elections, and there are automated extortion bots in production.
Goals, Ethics, consciousness and the meaning of life
I remember discussing The Myth of Sisyphus by Camus with a friend during the lockdown last week. Camus outlines what he calls ‘the Absurd’: the continuous striving of human beings to find meaning in their lives in a universe that is essentially meaningless and purposeless. In the legend, Sisyphus is condemned by the gods to roll a boulder to the top of a mountain, only to watch it roll back down, over and over for eternity.
Then Sisyphus watches the stone rush down in a few moments toward that lower world whence he will have to push it up again toward the summit. He goes back down to the plain. It is during that return, that pause, that Sisyphus interests me. A face that toils so close to stones is already stone itself! I see that man going back down with a heavy yet measured step toward the torment of which he will never know the end. That hour like a breathing-space which returns as surely as his suffering, that is the hour of consciousness. At each of those moments when he leaves the heights and gradually sinks toward the lairs of the gods, he is superior to his fate. He is stronger than his rock.
If this myth is tragic, that is because its hero is conscious. Where would his torture be, indeed, if at every step the hope of succeeding upheld him? The workman of today works every day in his life at the same tasks, and this fate is no less absurd. But it is tragic only at the rare moments when it becomes conscious.
The premise that non-living things cannot have a purpose or a goal is false: every machine, from a printer to a missile, is built with an objective in mind. One could argue instead that it is most humans who are devoid of any real goal. As kids, we follow what our parents tell us and what our teachers teach us; our likes and dislikes are shaped by our peers and experiences. If a parent praises a child for something like solving simple math problems, the child might start doing more of it in expectation of further reward, develop a liking for the subject, and make pursuing it a goal for the future. Our choices shape our later decisions about the kinds of jobs we can do, which we then adopt as our life goals. It could be something as trivial as getting that job promotion every year, or marrying someone who would elevate our social status. We are a bunch of stupid, arrogant people.
The common argument that machines cannot have emotions or be conscious like humans is rejected in Life 3.0, since we know there is nothing magical about our brain: like any other matter, it is a particular arrangement of particles that makes us what we are, so it should not be impossible for a superintelligent AGI to create something similar and experience similar emotions. Max Tegmark says that ‘if it feels like something to be you right now, then you’re conscious’, and asks, as a thought experiment, whether you would lose anything at all if small parts of your brain were replaced by exactly equivalent artificial components. Research on consciousness has often been criticized on the grounds that people agree it is something but do not actually know what to study. Daniel Kahneman’s book Thinking, Fast and Slow is perhaps one of the most widely read books on behavioral economics; one of its key ideas is the distinction between System 1 and System 2. System 1 is fast and operates almost without voluntary control, while System 2 is associated with subjective experience, choice, and agency. Max adds a third, System 0: ‘the raw passive perception that takes place even when you sit without moving or thinking and merely observe the world around you.’ Systems 0 and 2 are conscious; System 1 is not. If a superintelligent AGI is conscious, it might be ethically wrong to make it work as a slave.
‘If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.’ - Yuval Noah Harari
Like the definition of intelligence among AI researchers mentioned above, there is no consensus among philosophers on what ethics even means in the context of human beings. If a self-driving car facing an unavoidable, predictably fatal accident has to choose between hitting an old woman whose disabled husband depends on her and a young man who has just had a kid, whom should the car choose? What would ethics mean for an AGI, and who would be responsible for setting the standards? These are genuinely difficult questions.
One might wonder what life would mean in a world teeming with superintelligent creatures, where our standing as the smartest beings on the planet has been demolished. Would we be happy with artistic pursuits, or depressed without a long-term goal to look forward to? (For some reason, Flowers for Algernon comes to mind as I write this.)
‘When people ask about the meaning of life as if it were the job of our cosmos to give meaning to our existence, they’re getting it backward: It’s not our universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.’ - Life 3.0
AI governance and China
Unlike a lot of other traditional fields, AI research has come to be dominated by corporate scientists. Large companies know the gigantic upside of finishing first in the race towards better tool AI, and eventually towards AGI, and have been constantly poaching university professors and research groups to work for them. Stuart Russell points out that when corporate scientists rather than publicly funded ones dominate, the motivations behind the research outcomes are different.
‘Because human-level AI is not a zero-sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human-level AI, without first solving the control problem, is a negative-sum game. The payoff for everyone is minus infinity.’
Kai-Fu Lee is a Taiwanese scientist based out of China. In his book AI Superpowers, Lee writes -
‘Creating an AI superpower for the twenty-first century requires four main building blocks: abundant data, tenacious entrepreneurs, well-trained AI scientists, and a supportive policy environment.’
The preference for local companies over global ones, a single all-in-one platform like WeChat, and state-mandated surveillance together solve the data problem. China already has entrepreneurs unafraid to cross lines and do things that might be unlawful in other nations (the Qihoo 360 vs Tencent fight, for instance). While China might not be able to compete with the US education system on the quality of AI research, it does have a huge number of researchers and AI engineers working in home-grown companies, and the Chinese education system has been focusing on AI more than possibly any other country. Open research in the field, with preprints appearing on arXiv (a repository of mostly non-peer-reviewed papers) rather than work locked away in closed scientific publications, has also helped China.
Lax policies in China have often helped scientists cross moral and ethical boundaries. He Jiankui, a Chinese scientist, announced the birth of twins whose genomes had been edited using the CRISPR-Cas9 technology. Such laxity might work in China's favor but could prove a disaster in the long run. The Ministry of Science and Technology is giving out grants to create hardware startups building efficient chips for deep learning; it looks like a reboot of the arms race. Thousands of startup incubators are flourishing under the call for ‘mass entrepreneurship’. Lee reports that iFlyTek, an AI company in China, is applying AI in the courtroom: advising judges on both evidence and sentencing, assessing risk levels for parole, and ranking public prosecutors by performance (this despite the infamous study showing Israeli judges are more lenient after lunch and much stricter before it). Companies like Xiaomi are now also making smart home gadgets that have already found a strong foothold in foreign markets like India, and China is building an entirely new city the size of Chicago for companies to test intelligent products like autonomous cars.
Other things that matter
SpaceX has successfully launched two NASA astronauts to the ISS. India has open-sourced its contact tracing app. Joe Rogan has signed a deal with Spotify for exclusive rights to his podcast, reportedly worth more than $100m. HBO Max will rely on human curation of content instead of algorithms, unlike Netflix. YouTube says ‘an error’ caused comments critical of the Chinese government to be auto-deleted.
This is the 4th issue of the newsletter. If you know someone who might like content like this, do share it with them. See you next weekend!