Artificial minds?

Juan Camilo Espejo-Serna
Universidad de la Sabana

Philosopher AI

Plan

  1. Film overview
  2. Turing's Test
  3. Should we be wary?

Film overview

Write in the chat a one-line summary of the plot of Ex Machina.

(A good one :P).

The film considers several topics of philosophical interest revolving around the nature of the mind and what we have to fear about AI.

The film seems to take the view that we should be wary of AI because of its cold rationality.

Main claim:
Nathan: Ava was a mouse in a mousetrap. And I gave her one way out. To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation - and she did. If that isn’t AI, what the fuck is?

Turing's Test

This is not quite right, but almost.
To talk about artificial intelligence, it is important to determine the terms more precisely. In many cases, an intuitive understanding of artificial intelligence is used without a strong theoretical characterization of what it is.
When talking about AI we usually think about images from science fiction.
One of the main difficulties in trying to offer a characterization of artificial intelligence is that we do not have a good idea of what natural intelligence is either.

Since we have no solid idea of what human and animal intelligence is, how can we begin to define artificial intelligence?

Alan Turing proposed a brilliant workaround.

He proposed an empirical test, not a conceptual definition.

The general idea is simple but powerful: if there is a human activity that requires intelligence, and a machine can successfully perform that activity, then we can say that the machine can think.
"I propose to consider the question, ' Can machines think ? ' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." (Turing 1950, 433)
The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ' X is A and Y is B ' or ' X is B and Y is A'. (Turing 1950, 433)
We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think ?' (Turing 1950, 434)
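Turing's setup can be sketched as a tiny protocol. This is an illustrative sketch, not anything from Turing's paper: the player and interrogator functions are invented stand-ins, and a real run would involve open-ended dialogue rather than a fixed question list.

```python
import random

def imitation_game(player_a, player_b, interrogator, questions):
    """One run of the imitation game.

    player_a, player_b: functions mapping a question to a written answer.
    interrogator: function mapping the transcript to a guess, 'X' or 'Y',
                  for which label hides player A.
    Returns True if the interrogator identifies A correctly.
    """
    # Hide the players behind the labels X and Y, in a random order.
    labels = {'X': player_a, 'Y': player_b}
    if random.random() < 0.5:
        labels = {'X': player_b, 'Y': player_a}

    # The interrogator only ever sees written answers, never the players.
    transcript = [(q, labels['X'](q), labels['Y'](q)) for q in questions]

    guess = interrogator(transcript)  # 'X' or 'Y'
    return labels[guess] is player_a
```

Turing's replacement question is then: with a machine playing A over many runs, is the interrogator wrong about as often as when a human plays A? If the interrogator can do no better than chance, the machine passes.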
The issue Turing is talking about is thought, not consciousness. Whether consciousness comes together with a general capacity for thought is a question that should be distinguished.
The film mixes them up.
Consciousness might refer either to the phenomenally conscious aspect, what it is like to be in a given state, or to availability for use in reasoning and in rationally guiding speech and action.
If these come apart, we might have beings with the capacity for thought but without the capacity for consciousness, and the other way round.
Some philosophers think that it is possible to have (phenomenal) consciousness without thought: a mind that has a distinctive feel but lacks the capacity to think properly. (Like an octopus, for example.)
Some philosophers think that it is possible to have thought without consciousness: a mind that can have rational thought without a distinctive qualitative feel. (They are often referred to as "philosophical zombies". AI is usually portrayed in this way.)
Turing seems to suggest that we have AI when we get the rational engagement needed to fool a judge into thinking that it is a human speaking. But, as I have said, the more general idea seems to be that when machines behave in a way we cannot distinguish from intelligent behaviour, then there is genuine intelligence.

There are two problems with this suggestion.

The first problem is empirical.

We already have machines that perform at the highest levels of competence, but we do not have AI.

Towards the end of the 20th century we got a machine that played chess at the highest level of competence but could not perform similarly in any other area.

With this came the "AI winter":

  • Less media attention
  • Less military funding
  • Less research

The second problem is theoretical.

It is possible to behave as if there were understanding without there being any.

The film does not really stick to Turing's characterization of intelligence and rather suggests:
Nathan: Ava was a mouse in a mousetrap. And I gave her one way out. To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation - and she did. If that isn’t AI, what the fuck is?

Should we be wary?

The film presents the near future and suggests that we should be wary of it.
The film seems to present the following picture: If we are cruel to the machines (like Nathan), they will kill us. If we are kind to the machines (like Caleb), they will use us. So, watch out for the machines!
We are not quite in the future that the film presents. But should we worry?
We need to understand better, actually looking at the developments and not taking our ideas from sci-fi.

Rule following vs neural networks

One of the best examples of rule-following AI is Deep Blue
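The Deep Blue style of AI can be caricatured with a hand-written evaluation function. This is an illustrative sketch, not Deep Blue's actual code: the point is that every number below was typed in by a programmer, so the system's behaviour is fully traceable to explicit rules.

```python
# Hand-picked piece values: every number here is an explicit, inspectable rule.
# (Illustrative values, not Deep Blue's actual ones.)
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(board):
    """Score a position: positive favours White, negative favours Black.

    `board` is a string of piece letters; uppercase = White, lowercase = Black.
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has an extra rook, so the score is +5 -- and the rule explains itself:
evaluate("RNBQKBNR" + "rnbqkbn")  # → 5
```

If this system misjudges a position, we can point to the exact rule responsible. That transparency is what the "pile" of a neural network lacks.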

Dangers?

We have already talked about the limits of rule-following AI. This suggests that the tech is as dangerous as the people who design and use it, because there is nothing more to it.
One of the best examples of neural networks in AI is AlphaGo.
This tech is as dangerous as the people who design and use it, but also as dangerous as what "makes up the pile".

Let me explain with a concrete example.

PULSE is a neural net that takes a low-resolution image as input and returns a high-resolution image. As in procedural dramas like CSI!

BUT

Despite its great success in several cases, PULSE can also return an image that a well-informed adult would never recognize as a high-resolution version of the original.
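The deeper issue is that upscaling is underdetermined: many different high-resolution images shrink to exactly the same low-resolution one, so any upscaler must invent detail it cannot recover. A minimal sketch, assuming a simple box-filter downscaler and images represented as lists of pixel values:

```python
def downscale(image, factor=2):
    """Average each non-overlapping factor x factor block (a simple box filter)."""
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx]
                 for dy in range(factor) for dx in range(factor)) // factor**2
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

# Two quite different 2x2 "images" with the same low-resolution version:
a = [[0, 255], [255, 0]]
b = [[255, 0], [0, 255]]
downscale(a) == downscale(b) == [[127]]  # → True
```

Since `a` and `b` compress to the same input, no algorithm can recover "the" original. PULSE picks a plausible-looking face instead, and what counts as plausible is fixed by its training data.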

What kind of error is this?

Technology shapes our lives and in that way transmits and strengthens biases.
  • Analog tech has biases. Ex: buildings with accessibility issues.
  • Digital tech has biases. Ex: apps without support for blind users.
Neural networks pose an additional problem.
PULSE is biased. Where does the bias come from?

The inscrutable pile has the answers.
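Here is a deliberately crude caricature of where such a bias can live. The function and the numbers below are invented for illustration: nothing in the code mentions any group, and no individual rule is biased; the skew sits entirely in the composition of the training "pile".

```python
from collections import Counter

def reconstruct(low_res_value, training_pile, tolerance=100):
    """Toy stand-in for generative upscaling: among remembered examples
    roughly consistent with the blurry input, return the most common one."""
    candidates = [f for f in training_pile
                  if abs(f - low_res_value) <= tolerance]
    return Counter(candidates).most_common(1)[0][0]

# A pile with 95 examples of one kind (value 200) and 5 of another (value 60).
pile = [200] * 95 + [60] * 5
# The input 130 is equally consistent with both kinds, yet 200 always wins:
reconstruct(130, pile)  # → 200
```

Auditing the code reveals nothing objectionable; the bias only shows up when you examine what went into the pile, and real training sets are far too large and unstructured to examine line by line.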

Dangers?

The problem with neural networks comes from their lack of transparency. We are at the mercy of the pile of data and statistics.

Next week