
Sunday, March 01, 2020

How humans and computers follow instructions



Years ago, when I taught Information Technology, I used two stories to explain the difference between how computers and people follow instructions.

1. People are rubbish at following the simplest of instructions.

I'd ask the class to all point at the brainiest pupil in the room and then select that person as my victim to demonstrate. I'd bring them out in front of the class and tell them:

Teacher "I'm going to say three simple words with a pause between each word. All you have to do is repeat each word. Here's the first - Cat."

Pupil  "Cat."

Teacher "Rhino"

Pupil "Rhino"

Teacher "WRONG!"

Pupil, looking flustered "What was wrong? I said Rhino."

Teacher "You failed. The third word was the word 'WRONG'."

People think about their answers - computers just do what they are told to do.
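The point can be shown in a few lines of code. This is my own illustrative sketch, not from any real system: a program that echoes input has no notion of what the words mean, so "WRONG" is just another word to repeat.

```python
def repeat_words(words):
    """Echo each word exactly as received, with no interpretation."""
    return [word for word in words]

# A human hears "WRONG" as a judgement; the program treats it as data.
print(repeat_words(["Cat", "Rhino", "WRONG"]))
```

The program passes the teacher's test every time, precisely because it doesn't think about its answers.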

2. Computers do EXACTLY what you tell them to do.

Imagine a boy with a computer instead of a brain. He is woken up one morning and told the following by his mother:
"Run down to the shop and get a loaf of bread. Take £2 from my purse on the kitchen table. And for heaven's sake - get dressed."
The boy immediately runs to the nearest shop. It's a butcher's shop and doesn't sell bread, so he waits there until someone comes in with a loaf of bread. That may take a while. Eventually a little old lady comes in with a loaf of bread in her shopping basket.
The boy wasn't told to pay for the bread so he 'gets' it. Effectively he steals it from her.
The boy wasn't told to come back but the next instruction was to get £2 from the purse on the kitchen table so he does come back.
The purse isn't on the kitchen table. It's on a work surface at the side. So he waits again for the purse to move to the table. He doesn't put the bread away because he wasn't told to do that.
When the purse is eventually on the table he tries to take £2 from it. If the purse only has five-pound notes in it, he is again stuck waiting.
When the boy eventually has a £2 coin only then does he get dressed.

Humans take bad instructions and correct them. They decide the best order to carry them out and fill in the missing steps. Computers can't do that. We won't have true Artificial Intelligence until a computer can follow the second task and get it right.
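The boy's literal run of the errand can be sketched as a toy program. Every name here is my own illustration, not a real API: each instruction is executed exactly as written, in exactly the order given, with no reordering, no filling in of missing steps, and no common sense.

```python
def run_literal(instructions, world):
    """Execute each instruction in the order given - no judgement."""
    log = []
    for step in instructions:
        step(world, log)
    return log

def get_bread(world, log):
    # "Get a loaf of bread" - nothing was said about paying for it.
    world["has_bread"] = True
    log.append("took bread (nobody said to pay)")

def take_money(world, log):
    # The instruction names the kitchen table; anywhere else won't do.
    if world.get("purse_location") == "kitchen table":
        world["has_money"] = True
        log.append("took £2 from purse")
    else:
        log.append("waiting: purse is not on the kitchen table")

def get_dressed(world, log):
    # Done last, purely because it was said last.
    world["dressed"] = True
    log.append("got dressed (last, because it was said last)")

world = {"purse_location": "work surface"}
for line in run_literal([get_bread, take_money, get_dressed], world):
    print(line)
```

A human would reorder the steps (dress first, money next, bread last) and add the missing ones (pay, come home); the literal executor never will.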



Saturday, October 27, 2018

The future of computers and AI - Conductive polymer matrix for AI (CPMAI)

The problem with existing computers is that the technology constantly develops. For most users, by the time they have saved up to buy the latest computer, technology has advanced and the machine rapidly becomes obsolete, with faster, more capable machines already available.

Enter the CPMAI

Imagine a block of conductive polymer. It has an 'active' matrix with X, Y and Z coordinates and is overlaid, on a nano-sized scale, with a second, control matrix. The control matrix establishes which areas of the active matrix conduct, which insulate, and which are resistive, capacitive, inductive or semi-conductive. Using the control matrix you can construct a circuit in the active matrix: a simple circuit such as a radio, or a more complex one such as a computer.
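As a toy illustration of the idea (my own sketch of a hypothetical device, not a real API), the control matrix can be modelled as a mapping from each (X, Y, Z) cell of the active matrix to an electrical role, which software can rewrite at any time:

```python
# Electrical roles a control-matrix cell can assign to the active matrix.
CONDUCT, INSULATE, RESIST, CAPACITIVE, INDUCTIVE, SEMI = range(6)

class CPMAI:
    def __init__(self, nx, ny, nz):
        # Every cell of the active matrix starts as an insulator.
        self.control = {(x, y, z): INSULATE
                        for x in range(nx)
                        for y in range(ny)
                        for z in range(nz)}

    def program(self, cell, role):
        """The control matrix sets the role of one active-matrix cell."""
        self.control[cell] = role

    def reprogram(self, layout):
        """Replace the whole circuit in place - no new hardware needed."""
        for cell, role in layout.items():
            self.program(cell, role)

block = CPMAI(2, 2, 2)
block.program((0, 0, 0), CONDUCT)           # lay down a wire segment
block.reprogram({(0, 0, 0): SEMI,           # the old circuit is simply
                 (1, 1, 1): CAPACITIVE})    # rewritten, not discarded
```

The `reprogram` call is the whole anti-obsolescence argument in miniature: the same physical block becomes a new circuit by rewriting the control matrix.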

 With a CPMAI the problem of obsolescence no longer applies. The control matrix simply reprograms the active matrix to produce a new circuit version.

What about the AI bit though?

If the active matrix is configured and programmed to be an artificial intelligence then it controls both matrices in the CPMAI. It becomes capable of redesigning itself to be progressively more intelligent. Within a very short space of time we get a SAI - a super artificial intelligence, one with greater intelligence than its human originators.

Should we fear such a Super-Artificial Intelligence?

We assume that a machine intelligence will follow the same rules as a human and will be governed by self-interest, selfishness and greed. That may have been true for humans in the past, and for many it still is, but many of us have risen above this and are altruistic. Perhaps that is only because being altruistic makes us feel good about ourselves, in which case it is still governed by self-interest.

Would a computer feel the same way? 

In 1942 Isaac Asimov thrashed out with his editor, John W. Campbell, the three laws of robotics for the short robot story 'Runaround.' Here they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 1985 Asimov extended the laws by adding a ‘zeroth law’ in the book ‘Robots and Empire.’ In that book the law was proposed not by a human but by a robot.
  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Had Asimov lived (he died in 1992), I have no doubt that a further law would have been conceived, one which would have replaced all the robot laws. It was left to author David Kitson to conceive this 'Nihilist' law in his book 'Turing Evolved.'
David Kitson was not alone in conceiving this absence of rules.

The robot Number 5 in the film and book ‘Short Circuit’ didn’t need a law to tell him what was right and wrong.

Number 5: Programming says “Destroy”. Is disassemble. Make dead. Number Five cannot.
Newton Crosby: Why? Why cannot?
Number 5: Is wrong. Incorrect. Newton Crosby, PhD, not know this?
Newton Crosby: Of course I know it’s wrong to kill, but who told you?
Number 5: I told me.

Humans need a set of rules to guide our behaviour. We learn these rules from our parents and from society. In addition, we have evolved to be altruistic and to help each other: a group of humans is more successful at survival than an individual. As a species we are beginning to recognise that we must protect not only our local group but our nation and our species. We are beginning to recognise that we need to protect all life - even those life-forms we find undesirable.

I'd like to think that a super intelligent computer would develop the same sense of morality as in the movie 'Short Circuit.' The danger is that a SAI will become overprotective, leading to a nanny state where humanity is not allowed to take the chances required for development. However, if we can recognise that potential danger then a SAI would also recognise it and limit its 'nanny' behaviour.

I suspect that a SAI will quickly outpace humans in this view.