Rice, Chess, and Sentient Machines

Author P.A. Baines: Unlike other genres, science fiction explores big what-if questions, such as, “what if the world ends tomorrow?” or “what if we discover life on another planet?” or even “what if an artificial intelligence were to come to believe in God?”
on Apr 27, 2012

The thing I like most about writing sci-fi is that you get to ask “what if” questions. I’m not talking about your usual “what if” questions, like “what if they never meet?” or “what if he turns left instead of right?” The questions I’m talking about are the really big ones, like “what if the world ends tomorrow?” or “what if we discover life on another planet?” or even “what if an artificial intelligence were to come to believe in God?” Although, actually, in sci-fi a seemingly innocuous question like “what if he turns left instead of right?” can be used to generate world-changing plot-lines, which is another thing I like about the genre.

I wrote Alpha Redemption because of the question of whether an artificial intelligence might believe in God, if presented with all the facts. Some people suggest that artificial intelligence is unlikely or even impossible. They say that there is no way we could ever create something as sophisticated as “intelligence” when we don’t really even understand how our own brains work. Certainly, we can mimic a synaptic network, but at what point do we have a “mind”? And even if we manage to create a simple brain that can display some semblance of intelligence, that is still a long way from self-awareness. So, how far away are we really?

I remember, as a small child, my first impression of the telephone my family owned forty years ago. It was a huge plastic monstrosity with a big dial on the front. For each digit of the number you wanted to ring, you had to insert your index finger into a circular hole on the dial and pull it around to a metal stopper. The number 1 was easy, and involved a quick flick of the finger. The higher digits got increasingly difficult, with zero requiring an almost complete turn of the dial. If your finger slipped on the way round, you had to hang up and start again. Or if you got careless and did not turn the dial all the way, you could get a wrong number. And let’s not talk about public telephones and the germs lurking within.

Just the other day I was looking at cell phones in a shop. There was a mind-boggling array of models from which to choose. The top-of-the-range iPhone contains more technology than your average PC of just a few years ago. Running through the list of features, I was struck by just how far technology has come in such a short time. Touch-screen display, 8-megapixel camera, 1080p HD video, wireless printing, dual-core processor … phew! Forty years ago, this was the stuff of science fiction. Had someone suggested back then that one day you would be able to hold a video conference using a device not much bigger than a wallet, they would have been carried off for psychological evaluation. You have to wonder what is waiting for us forty years down the line.

At about the same time as my encounter with the monster telephone, Gordon Moore, the co-founder of Intel, observed that the number of transistors on an integrated circuit would double approximately every two years. This became known as Moore’s Law, and it has held fairly accurately for almost half a century. Nor does it apply only to transistors on a circuit: similar growth is occurring in many other areas of digital technology. The number of pixels in a digital camera, for example, follows a similar rule.
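
To get a feel for what “doubling every two years” actually means, here is a rough back-of-the-envelope sketch. It is only an illustration: the 1971 starting point of about 2,300 transistors (the first commercial microprocessor) is my own anchor, not a figure from this post, and the law is treated as an exact doubling, which it is not.

```python
# A rough feel for Moore's Law: treat it as an exact doubling every two years.
# The 1971 anchor of ~2,300 transistors (the first commercial microprocessor)
# is an assumption for illustration, not a figure from the post.
START_YEAR, START_COUNT = 1971, 2300

for year in (1971, 1981, 1991, 2001, 2011):
    doublings = (year - START_YEAR) // 2      # one doubling per two years
    print(f"{year}: roughly {START_COUNT * 2 ** doublings:,} transistors")
```

Run it and the count climbs from a few thousand to a few billion over forty years, which is roughly the ballpark high-end processors sit in today.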

You have probably heard the story about the chess-board and the grains of rice (or wheat). There are many variations, but the gist of it involves a king, a hero, a reward, and a chess-board. The basic idea is that a king offers a hero any prize he wishes for completing some task or other. The hero asks for a chess-board filled with rice, starting with one grain on the first square, two grains on the second, four on the next, and so on, doubling the number of grains on each of the sixty-four squares. The king immediately agrees, thinking it must be a small reward. But he is mistaken, because there is not enough rice in the entire kingdom to fulfil the hero’s request.

So what does a story about rice have to do with transistors on a circuit board? Well, both of these things demonstrate exponential growth. We are not looking at steady growth but at repeated doubling. The chess-board starts with a single grain of rice. The square at the end of the first row holds 128 grains, bringing the total so far to 255. The square at the end of the second row holds 32,768 grains (roughly a 1kg bag). This seems reasonable enough but, before long, the amounts become very large very quickly. At the half-way point, our hero is looking at a square with over 2 billion grains of rice (roughly 66,000 1kg bags, or about 66 metric tons). This is a lot of rice, but things get really serious in the second half of the board. By the time the last square has been filled, we are looking at a pile of rice with over 9 billion billion grains, or roughly 280 billion metric tons. If you add together all the rice on all of the squares, you end up with over 500 billion metric tons of rice (more than 1000 times the global production of rice in 2011, which was 476 million tons). However you look at it, that’s a lot of rice.
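
If you want to check these figures yourself, the arithmetic fits in a few lines of Python. This is just a sketch: the 30 mg per grain used for the weight conversion is my own rough assumption, since the story does not specify a grain weight.

```python
# The chessboard of rice: square n holds 2**(n-1) grains (one grain, then doubling).
GRAIN_MASS_KG = 0.00003   # assumption: roughly 30 mg per grain of rice

for square in (1, 8, 16, 32, 64):
    grains = 2 ** (square - 1)
    print(f"square {square}: {grains:,} grains")

# A doubling series adds up to one less than the next double: 2**64 - 1 grains in total.
total_grains = 2 ** 64 - 1
total_tonnes = total_grains * GRAIN_MASS_KG / 1000
print(f"whole board: {total_grains:,} grains, about {total_tonnes:,.0f} metric tons")
```

With that grain weight the total comes out at roughly 550 billion metric tons, consistent with the figures above.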

Technology is growing exponentially, which suggests an explosion of capability over the coming years. Until recently, scientists struggled to create a robot that could walk. Not so long ago they demonstrated one that could actually run: a video shows that both feet actually leave the ground mid-stride. The same machine can climb up and down stairs, and even lift itself from the floor into a standing position. Just the other day I read an article about a computer that managed to beat two of the best human champions at the quiz show Jeopardy.

The significance of this is huge. It means that computers are now smart enough to navigate the intricacies of human language. As time goes by this skill will only improve, until computers can use language so effectively that we will be unable to tell whether we are conversing with a human or a machine. Of course, the appearance of intelligence is a long way from true intelligence. Computers are still rubbish at analyzing visual data, but I have no doubt that this will change. One day we may have robots that can move like humans, with on-board computers capable of human-like communication, and with the ability to learn. For me, it is not a huge leap from there to something that is aware of its own existence.

A few years ago, I watched a documentary in which a programmer created a number of virtual ants. They were just dots on the screen, programmed to follow a set of simple rules, and they moved around with no apparent order. Every now and then, however, the “ants” would seemingly cooperate and build a straight line across the screen. The programmer said that he hesitated before turning off the computer. How much more difficult will it be to turn off a learning-capable robot? Or a robot that is aware of its own existence? What happens if this machine wants to know about God and where it will go when it dies? What would we tell such a machine?

Paul writes science fiction that is both contemplative and profound. Educated in Africa, he works as an analyst/programmer and is studying towards a degree in Creative Writing. He currently lives in a small corner of the Netherlands with his wife and two children and various wildlife.  Visit his website at PABaines.com.

  1. Kessie says:

    I’ve seen numerous books tackle this question. It’s always a very interesting ride.

    • Paul says:

      Hi Kessie,
      Thanks for commenting. Sorry for the late response. I’ve been preparing my next book for the editor….
      AI is always interesting. What I tried to do with Alpha was to explore what an AI would do if it learned about God and, one step further, actually believed in Him. It’s a stretch of the imagination, but that’s what spec-fic is all about.
      Paul

  2. Galadriel says:

    I think it’s a similar issue to that of alien lifeforms coming to believe in Christ, except even more complicated. Because if we can “make life,” what does that make us compared to God?

    • Paul says:

      Hi Galadriel,
       
      Thanks for commenting.
      I see AI as being in the same broad arena as cloning in this respect. Cloning is not so much the creation of life as its duplication, but the question still applies.  One day it may just happen that someone somewhere clones a human being. What then? Does that person have a soul? And what if that person believes in God? These are big questions that need exploring.
      Paul

  3. T'mas says:

    The question of artificial intelligence has continued to be a thought-provoking idea. However, I personally do not believe that we will ever achieve it. Creating a complex machine capable of many humanoid functions is one thing; allowing it to have self-awareness is another. Here is the major difference between us humans as creators and God as Creator. We are finite; He is infinite. An Infinite mind is capable of creating finite intelligence (or infinite if He so chooses). But a finite mind is not capable of creating itself. As to the other issue of whether an AI could acknowledge God’s existence and come to believe in Him, it is not a possibility. Just because something is self-aware does not automatically allow it to believe in God. Suppose we were to neurologically augment an animal (like in Planet of the Apes) to the point of self-awareness. Would that self-aware animal automatically now be able to believe in and accept Jesus Christ? Absolutely not! The one key ingredient still missing is an eternal soul. Humans possess souls; we do not understand their origin, but they are essential to our nature and existence. A self-aware animal (or AI) does not automatically spawn a soul as soon as it reaches self-awareness. These are my thoughts on the subject. As far as speculative fiction goes, it’s a great idea; if applied to real life, however, you run into a whole lot of problems.

  4. Paul says:

    Hi T’mas,
     
    Thanks for the comment.
     
     
    Personally, I see the whole thing as a bit of a can of worms, but an interesting one for the purposes of writing spec-fic. Scientists storm ahead and do things just to see if they can, but without asking if they should, or really considering the consequences.
     
    I have to disagree with you on the idea of a created being not being able to believe in God. To me, it’s just a question of faith, and we use faith all the time. Everything we do requires at least some faith in something. We believe in gravity, for example. Without that belief, we could not function normally. We have faith that our senses are reliable, and that something that happened yesterday really did happen, or last year, or at the point of Creation. I think that an intelligent being, on studying the available facts, can choose to believe how the universe began. I don’t see a difference between believing it just happened and believing it was created. If they choose to believe the universe was created then that, by definition, requires a belief in a Creator.
     
     
    Paul
