A little preamble: as well as being very interested in AI, I am also a Computer Science & Engineering undergrad, and I have followed (and passed) a few courses dealing with AI and its problems and applications.
____
To answer the question of whether or not we can control AI, we have to make a distinction between two different types of AI: weak AI and strong AI (or AGI, artificial general intelligence).
All AI which currently exists is weak AI. A very simple example of weak AI would be a bot in a video game: a non-sentient computer intelligence focused on the narrow task of killing the player. This isn't that hard to make (I have even made one!). A more sophisticated example would be something like Apple's Siri or IBM's Watson. These projects require years of R&D and a large team of developers to create. These systems work with what's called a knowledge base; they operate within a limited pre-defined range with predetermined knowledge they are given. There is no genuine intelligence, no self-awareness, no life. This form of AI can easily be controlled, as it only reacts to impulses and does nothing autonomously. This is very much a case of 'we can choose to control it', which has been thrown around in this thread. I'm fairly sure the OP isn't talking about weak AI, though.
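To make the "only reacts to impulses" point concrete, here is a minimal sketch of the kind of game bot described above. Everything in it is hypothetical (the rules, thresholds, and function name are made up for illustration); the point is that the whole "intelligence" is a fixed, predefined rule set with zero autonomy:

```python
# A toy "weak AI" game bot: a hand-written rule set (its entire "knowledge
# base"). It only reacts to inputs; it never does anything on its own.

def bot_action(player_distance: float, bot_health: int) -> str:
    """Pick an action from fixed, predefined rules (all thresholds are arbitrary)."""
    if bot_health < 20:
        return "retreat"   # self-preservation rule
    if player_distance < 5.0:
        return "attack"    # engage when the player is close
    if player_distance < 20.0:
        return "chase"     # close the gap
    return "patrol"        # default behaviour when nothing is happening

print(bot_action(player_distance=3.0, bot_health=80))  # attack
```

However cleverly such rules are tuned, the bot can never do anything outside this table, which is exactly why this kind of AI is trivially controllable.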
Now, strong AI is a whole different, very exciting story. Strong AI does not currently exist, so there is no set definition that satisfies everyone, but there is a consensus among AI researchers. An AGI has to be able to:**
- reason, use strategy, solve puzzles, and make judgments under uncertainty
- represent knowledge, including commonsense knowledge
- plan
- learn
- communicate in natural language
- integrate all these skills towards common goals.
Other important factors are sense, act (robotics), imagination (creativity) and autonomy.
This list covers most cognitive abilities and would mean the AI would be indistinguishable from humans.
** I suggest you check out the AGI Wikipedia page if you want to know more, as this list can be found there with links to most bullet points in the computer science/AI context.
There's also the possibility of weak AI disguising itself as strong AI by merely simulating understanding, as theorized in the Chinese Room thought experiment.
So strong AI is more than a computer program; it's more like an artificially created being. We would effectively have created life, which opens up the debate of whether we ourselves are living in a simulation. But let's save that idea for another time in another thread.
Many people think strong AI will be created before 2050. That would be absolutely amazing and I truly hope I live to experience it, but personally I'm skeptical, because Moore's law is no longer holding up due to physical limitations. We can't make our transistors much smaller, so the exponential growth in processing speed we saw over the past few decades has stopped. This is a problem, because a strong AI needs a LOT of processing power.
For the sake of argument, however, let's assume that it will exist in the near future.
First off, a few things would happen when an AGI is created.
The singularity would be reached. The AGI would be capable of recursive self-improvement: it would redesign itself at an extreme rate. Repetitions of this cycle would likely result in a runaway effect -- an intelligence explosion. As one of my textbooks puts it:
"where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable."
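The runaway effect can be caricatured with a toy growth model. This is a deliberate oversimplification, and the improvement factor is an arbitrary assumption chosen purely for illustration; the point is only that when each generation's capability feeds into designing the next, growth becomes faster than exponential:

```python
# Toy model of recursive self-improvement. Each generation's intelligence
# level determines how large an improvement it can make to its successor.
# The 0.5 coefficient is an arbitrary assumption, not a real-world estimate.

def intelligence_explosion(start: float, generations: int) -> list[float]:
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        # A smarter designer makes a proportionally bigger improvement,
        # so the growth rate itself grows: a runaway feedback loop.
        levels.append(current * (1 + 0.5 * current))
    return levels

levels = intelligence_explosion(start=1.0, generations=5)
# Growth ratio between generations keeps increasing: 1.5x, 1.75x, 2.31x, ...
```

Contrast this with Moore's law, where the doubling rate stays constant; here every cycle shortens the effective doubling time, which is exactly why the quote above calls the outcome unpredictable.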
So, would we be able to control it?
I'm leaning towards no. Initially, probably yes, because the AGI will be developed in an isolated environment (isolated in the computer science sense). If it never leaves this area, then we would likely be fine, as we could cut off the power and 'kill' the AGI. However, we have created this amazing thing called the internet ('the cloud' is the current buzzword). Once the AGI is uploaded there, by either itself or a careless scientist (which I personally don't think can be avoided), we would lose control over it. The internet can't be shut down. Depending on the AGI's intentions, which we really cannot predict, it could even control us, as modern human society is hilariously dependent on the internet and technology.
_____
Now, to respond to some of the posts;
I think that this is an untenable argument. Hundreds (if not thousands) of years ago flight was deemed impossible... yet here we are today, flying around the world. Ultimately it seems impossible for us to definitively determine what is possible in the future.
While you're right in the sense that this thread will be filled with mostly assumptions and speculation, that doesn't mean those speculations have no basis or that we can't logically derive what will be one or a few of the most likely outcomes. That's enough ground to have a debate which isn't untenable.
Can we control an AI? Considering that people are so damn paranoid about this kind of stuff, yeah, we will probably be able to make sure we don't lose control.
Well, you can't just claim this; that's the whole point of the thread. Are we actually able to control it once a strong AI has been created?
Sure we can control AI if we choose to make it controllable.
There is a likelihood that certain governments would want to try to go beyond the remit of controllable AI, at which point the question becomes: "Can we control people? Specifically, people with power and money, both as individuals and as part of an organisational (or governmental) power structure?"
I think we already know the answer to that.
Again, the point is that it's not this simple, it's not a choice we can make.
Story time, children. This story is brought to you by Stephen Hawking (not kidding).
There was once a group of scientists who built an intelligent computer. The first question they asked it was "Is there a God?". The computer replied "There is now", and a bolt of lightning struck the plug so it couldn't be turned off.
Hawking has been very outspoken lately regarding AI. However, it's not his area of expertise at all, so prominent AI researchers have criticized him, as his warnings had very little basis. Do trust him on anything physics/quantum/astrophysics/cosmology, though.
Not really a response to you, just a tidbit of topical info.
All levels of intelligence can be controlled on some level, relative to the amount of intelligence/autonomy possessed: the greater the intelligence, the less control can be forced upon it.
Also, a more intelligent being can exert a higher degree of control over an object than a less intelligent being can.
For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box better than the rabbit can.
On the other end, an omniscient being cannot be controlled by one with less intelligence much like a monkey cannot explicitly control a human without the human's consent.
On to your topic:
The question is not whether we can control it; the question is whether it will let us control it if we make it. If not, we cannot control it. If yes, we can. Since it would be more intelligent than us, it would have more autonomy and therefore the ability to resist and refuse.
I intend to explore this theory of mine further, but I'm on my phone and I've already spent my 10 minutes typing this lol
I kind of agree, though I don't think it's quite this black and white. A 'lesser' being could be conditioned into controlling a 'greater' being, I think. Imagine a police dog: it's capable of controlling criminals by threatening them. When the criminal's only options are being controlled or being killed, then in my book that means he is being controlled, regardless of whether he allows it or not, as not allowing it would result in his death.
I say throw AIs in the ground and keep them there. An AI almost killed a person by making a fire.
I fundamentally disagree. Progress of the human race has always been a result of pushing limits. This is no different; imagine if we did create a strong AI. We would have created 'life'!
Picking up on a couple of apebble's points, intelligence is not the only factor, nor is it a homogeneous thing.
An extreme example where a less intelligent being may control a more intelligent being might be where a robber (or a soldier, or anyone) has a gun to your child's head. The factor here is who has the most to lose. I would suggest that an AI may well hold this advantage over most humans almost all the time, and not all examples of it need to be extreme.
Another example, and I agree that a strong AI could indeed apply this technique much more efficiently than humans.
For the second point I am going to have to mention "emotional intelligence", which you will not catch me doing often, but for the sake of this argument I am going to accept that it exists and that some have more of it than others. I will also posit that it does not directly equate to logical or IQ-style intelligence, being neither mutually exclusive nor mutually required. People with higher "emotional intelligence" can and do (and may well even specialise in) controlling/manipulating people with higher IQ intelligence. I would suggest that emotional intelligence is unlikely to ever be one of an AI's strong points.
Yes, manipulation can definitely be used to control 'smarter' beings. I am not sure how effective it would be on a strong AI however, as it'd be leagues above us. We would be the apes to its human.
It's why I hate the term AI. While, yes, it is artificial intelligence, it's artificial autonomy we're going for.
Semantics, and not really a big deal as the 'intelligence' in AI doesn't actually mean only 'smartness' but already covers autonomy and consciousness etc.
EDIT: Perhaps we could begin with a definition of AI, ideally one that already exists out there rather than one written in pebblespeak.
If we are talking something from a futuristic novel (which I guarantee I have not read) or a film (which I likely have not seen) then it would be good to know.
It is kinda hard to do the philosophical debate thing before defining the premises.
See above; that should cover it.
The Oxford Dictionary defines Artificial Intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Basically, something non-human being able to make human-like decisions because of a programmed “visual perception, speech recognition, decision-making, and translation between languages.”
This definition is very simplified and incomplete. It's more fitting for weak AI exclusively, which is probably not what the OP meant as that would result in a very short debate.
Objections:
Say a dumb brute holds a club to a scientist’s skull and tells him to dance. The scientist is clearly more intelligent, but the brute controls him. What gives?
---Well, this is not the intelligence described in the definition. We are dealing with levels of awareness. Your version of intellect is a tool capable of being used to control someone, but not a defining level of awareness. The capability of control we see comes from potential levels of awareness of the environment. The brute is just as aware of (or capable of being aware of) his environment as the scientist is or could be.
I disagree; the brute could have less awareness and still control the scientist in this scenario. Exploiting the weaknesses of a 'greater' being could allow 'lesser' beings to control it. (In the theme of this debate, we could 'kill' a strong AI which has gained control over us by killing all power worldwide; however, this would also result in the deaths of millions if not billions of humans.)
On to the topic of artificial intelligence and whether a human could control it:
If humans ever created a machine with greater levels of awareness than a human possessed, it would be impossible to control once created without the AI’s consent to being controlled.
Not impossible I think, it would just have massive consequences or drawbacks.