Are we capable to control Artificial intelligence?

DeletedUser44426

Guest
The Topic- Are we capable to control Artificial intelligence?

Motion- The house believes that humans are in no way capable to control Artificial intelligence.

Anything relevant to this topic is allowed.


Key points- Humans wouldnt know if the intelligence was benevolent or malevolent.
- Humans would basically be creating a Virtual God, if the A.I gained Superintelligence.
- Every creation has a set of values, in which this case, A.I would have a set of values about how it should treat or serve humans and their every need. With the A.I evolving to Superintelligence, we would have basically gave a virtual God a set of values on how we should be treated. The A.I can either follow these rules, and ignore its sentience, or it could disregard them to form its own values. Values that would not be in the best human interests.


Let the debate begin.
 

DeletedUser50183

Guest
Hmmmm...

I think that this is an untenable argument. Hundreds (if not thousands) of years ago flight was deemed impossible... yet here we are today, flying around the world. Ultimately it seems impossible for us to definitively determine what is possible in the future... excluding anything that God says is impossible... those things will always be :p

I might have missed something (I'm 14) so just ignore me if I missed the target completely. lol!


BTW I created an edited version of your post to eliminate a few typos (I'm really OCD about things like that) :D

Edited version-
Are we(humans)able to control Artificial intelligence?
The Topic- Are we capable to control Artificial intelligence?

Motion- This house believes that humans are in no way capable of controlling Artificial intelligence.

Anything relevant to this topic is allowed.


Key points- Humans wouldn't know if the "created" intelligence was benevolent or malevolent.
- Humans would basically be creating a Virtual God, if the A.I gained Super-intelligence.
- Every creation has a set of values, in this case, A.I would have a set of values about how it should treat or serve humans and their every need. With the A.I evolving to Super-intelligence, we would have basically given a virtual God a set of values on how we(humans) should be treated. The A.I can either follow these rules, and ignore its sentience, or it could disregard them to form its own values... Values that would not be in the best human interests.


Let the debate begin.
 
Last edited by a moderator:

DeletedUser33530

Guest
Ignoring the almost comical logical errors and assumptions that have been described as "key points", I'll just answer the main question.

Can we control an AI? Considering that people are so damn paranoid about this kind of stuff, yeah, we will probably be able to make sure we don't lose control.

I suggest we move onto the far more interesting debate of should we control an AI?
 

DeletedUser

Guest
Very cool topic. I'll await a few more responses before I reply as I don't have the time atm, but I love this topic :D
 

DeletedUser5819

Guest
Sure we can control AI if we choose to make it controllable.

There is a likelihood that certain governments would want to try to go beyond the remit of controllable AI, at which point the question becomes "Can we control people? Specifically people with power and money, both as individuals and as part of an organisational (or governmental) power structure?"

I think we already know the answer to that.
 

DeletedUser33530

Guest
Story time, children. This story is brought to you by Stephen Hawking (not kidding).

There was once a group of scientists who built an intelligent computer. The first question they asked it was "Is there a God?". The computer replied "There is now", and a bolt of lightning struck the plug so it couldn't be turned off.
 

DeletedUser8396

Guest
All levels of intelligence can be controlled on some level relative to the amount of intelligence/autonomy possessed. The greater the intelligence, the less control can be forced upon it.

Also, an object is more capable of being controlled, and to a higher degree, by a more intelligent being than by a less intelligent one.

For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box more so than the rabbit.

On the other end, an omniscient being cannot be controlled by one with less intelligence much like a monkey cannot explicitly control a human without the human's consent.

On to your topic:

The question is not whether we can control it. The question is whether it will let us control it if we make it. If not, we cannot control it. If yes, we can. Since it would be more intelligent than us, it would have more autonomy and therefore the ability to resist and refuse.

I intend to explore this theory of mine further, but I'm on my phone and I've already spent my 10 minutes typing this lol
 

DeletedUser31385

Guest
I say throw AIs in the ground and keep them there. An AI almost killed a person by starting a fire.
 

DeletedUser5819

Guest
Picking up on a couple of apebble's points, intelligence is not the only factor, nor is it a homogeneous thing.

An extreme example where a less intelligent being may control a more intelligent being might be where a robber (or a soldier, or anyone) has a gun to your child's head. The factor here is who has the most to lose. I would suggest that an AI may well hold this advantage over most humans almost all the time, and not all examples of it need to be extreme.

For the second point I am going to have to mention "emotional intelligence", which you will not catch me doing often, but for the sake of this argument I am going to accept that it exists and that some have more of it than others. I will also posit that it does not directly equate to logical or IQ-style intelligence, being neither mutually exclusive nor mutually required. People with higher "emotional intelligence" can and do (and may well even specialise in) controlling/manipulating people with higher IQ intelligence. I would suggest that emotional intelligence is unlikely to ever be one of an AI's strong points.

Yes I am a robot, just in case you were wondering.
 

DeletedUser8396

Guest
Picking up on a couple of apebble's points, intelligence is not the only factor, nor is it a homogeneous thing.

An extreme example where a less intelligent being may control a more intelligent being might be where a robber (or a soldier, or anyone) has a gun to your child's head. The factor here is who has the most to lose. I would suggest that an AI may well hold this advantage over most humans almost all the time, and not all examples of it need to be extreme.

For the second point I am going to have to mention "emotional intelligence", which you will not catch me doing often, but for the sake of this argument I am going to accept that it exists and that some have more of it than others. I will also posit that it does not directly equate to logical or IQ-style intelligence, being neither mutually exclusive nor mutually required. People with higher "emotional intelligence" can and do (and may well even specialise in) controlling/manipulating people with higher IQ intelligence. I would suggest that emotional intelligence is unlikely to ever be one of an AI's strong points.

Yes I am a robot, just in case you were wondering.
Wrong type of intelligence. This isn't the smart/stupid definition; it's more about the subject's range of choices and free will.

The thread talks about A.I., specifically one capable of free will. It proposes a computer capable of a broader range of choice.

Levels of intelligence provide different levels of autonomy, which allows greater or lesser control. Only if two beings have the same intelligence, and therefore the same autonomy, can other factors come into play. At least, until I look at this further. It could change.

It's why I hate the term AI. While, yes, it is artificial intelligence, it is artificial autonomy we're going for.

Again, I'll explain this in rather extreme detail later.
 

DeletedUser5819

Guest
I barely understood a word of that, although the first bit reminded me of when British Rail spent millions on new state-of-the-art snowploughs, and then had to close the lines a year or so later because of the "wrong kind of snow".

EDIT: Perhaps we could begin with a definition of AI, ideally one that already exists out there rather than one written in pebblespeak.
If we are talking something from a futuristic novel (which I guarantee I have not read) or a film (which I likely have not seen) then it would be good to know.
It is kinda hard to do the philosophical debate thing before defining the premises.
 
Last edited by a moderator:

DeletedUser33530

Guest
A sentient computer? At least that's what I thought this was about.
 

DeletedUser5819

Guest
I am just thinking that the answer is likely to be in the definition, so it would be great to begin there so we are all talking about the same thing.
 

DeletedUser8396

Guest
Here's the full thing:

The Oxford Dictionary defines Artificial Intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Basically, something non-human being able to make human-like decisions because of a programmed “visual perception, speech recognition, decision-making, and translation between languages.”

This “intelligence” is not to be confused with one being “smart” or “stupid”, but rather associated with awareness of the world around it and the capability to respond to that world on its own.

Now that the terms are defined:

All levels of intelligence (as defined above) can be willfully controlled on some level relative to the amount of intelligence possessed. The greater the intelligence possessed, the less willful control can be forced upon it.

Also, an object is more capable of being controlled, and to a higher degree, by a more intelligent being than by a lesser one.

For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box more so than the rabbit.

Another example would be my ability to control a rabbit. I have a higher awareness of that which I desire to control, and can therefore manipulate the reality the rabbit perceives (much like a mouse running through a maze to find cheese is being relatively controlled by the scientist (further reading: psychological conditioning)).

On the other end, an omniscient being, were one to exist or ever to exist, could not be controlled by any less intelligent being. This is because the less intelligent being is incapable of being aware of as many factors as the omniscient being would be. Also because omniscience necessarily implies omnipotence, but that’s another paper.

Example: an ape cannot deliberately, willfully control a human (also, the human would need to consent to being controlled).

______________________________________________

Objections:

Say a dumb brute holds a club to a scientist’s skull and tells him to dance. The scientist is clearly more intelligent, but the brute controls him. What gives?

---Well, this is not the intelligence described in the definition. We are dealing with levels of awareness. Your version of intellect is a tool capable of being used to control someone, but not a defining level of awareness. The capability of control we see comes from potential levels of awareness of the environment. The brute is just as aware of (or capable of being aware of) his environment as the scientist is or could be.

---When two individuals of the same awareness level desire to control the other, several other factors come into play, such as intelligence (smarts), strength, intimidation, friendliness, and sentiment (among others). Whichever side has enough of an advantage in a certain area or areas to cause the opposition to consent to being controlled has succeeded. It is important to note that if two beings are on the same level of awareness/intelligence, it does not necessarily follow that one ends up being controlled. It is possible for one side to continually refuse to be controlled to the point where the opposition gives up or the individual dies.

_____________________________________________________

On to the topic of artificial intelligence and whether a human could control it:

If humans ever created a machine with greater levels of awareness than a human possessed, it would be impossible to control once created without the AI’s consent to being controlled.

______________________________________________________

Any objections or requests for explanation will be done on an individual basis should they arise.
 

DeletedUser

Guest
Little preamble: as well as being very interested in AI, I am also a Computer Science & Engineering undergrad and I have followed (and passed :p) a few courses dealing with AI and its problems and applications.

____


To answer the question of whether or not we can control AI we have to make a distinction between the two different types of AI. There is weak AI and strong AI (or AGI, Artificial general intelligence).

All AI which currently exists is weak AI. A very simple example of a weak AI would be a bot in a video game: a non-sentient computer intelligence focused on the narrow task of killing the player. This isn't that hard to make (I have even made one!). A more sophisticated example would be something like Apple's Siri or even IBM's Watson. These projects require years of R&D and a large team of developers to create. These systems work with what's called a knowledge base; they operate within a limited pre-defined range with predetermined knowledge they are given. There is no genuine intelligence, no self-awareness, no life. This form of AI can easily be controlled as it only reacts to impulses; it does nothing autonomously. This is very much a case of 'we can choose to control it' which has been thrown around in this thread. I'm fairly sure the OP isn't talking about weak AI though. :D
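To make that 'pre-defined range' concrete, here is a minimal sketch of the kind of rule-based game bot I mean. Everything in it (the Bot class, the rule table, the state keys) is hypothetical, made up for illustration rather than taken from any real engine:

```python
# A toy "weak AI" game bot: a fixed, pre-programmed rule table and nothing
# else. It only reacts to the state it is handed; there is no learning,
# no self-awareness, and no behaviour outside these rules.

class Bot:
    def __init__(self):
        # The "knowledge base": predetermined condition -> action rules,
        # checked in priority order.
        self.rules = [
            (lambda s: s["health"] < 20,        "retreat"),
            (lambda s: s["player_visible"],     "attack"),
            (lambda s: not s["player_visible"], "patrol"),
        ]

    def act(self, state):
        # Purely reactive: return the first action whose condition matches.
        for condition, action in self.rules:
            if condition(state):
                return action
        return "idle"

bot = Bot()
print(bot.act({"health": 80, "player_visible": True}))  # -> attack
print(bot.act({"health": 10, "player_visible": True}))  # -> retreat
```

Shutting this 'AI' down is trivial precisely because everything it can ever do is enumerated up front.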

Now, strong AI is a whole different, very exciting story. Strong AI does not currently exist, so there is no set definition which satisfies everyone, yet there is a consensus among AI researchers. AGI has to be able to:**
- reason, use strategy, solve puzzles, and make judgments under uncertainty
- represent knowledge, including commonsense knowledge
- plan
- learn
- communicate in natural language
- integrate all these skills towards common goals.
Other important factors are the ability to sense and act (robotics), imagination (creativity), and autonomy.
This list covers most cognitive abilities and would mean the AI would be indistinguishable from humans.
** I suggest you check out the AGI wikipedia page if you want to know more as this list can be found there with links to most bullet points in the computer science/AI context.

There's also weak AI which disguises itself as strong AI, as theorized in the Chinese Room thought experiment.
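A toy way to see the Chinese Room point (my own made-up illustration, not Searle's original formulation): a lookup table can produce convincing replies while understanding nothing.

```python
# A toy "Chinese Room": replies are produced by blindly matching input
# symbols against a rulebook. The entries are invented for illustration.

rulebook = {
    "你好吗?": "我很好, 谢谢.",   # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个程序.",   # "Who are you?" -> "I am a program."
}

def room(symbols: str) -> str:
    # No comprehension anywhere: just symbol shuffling per the rulebook.
    return rulebook.get(symbols, "请再说一遍.")  # fallback: "Please repeat."

print(room("你好吗?"))  # looks like understanding; it's a table lookup
```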

So strong AI is more than a computer program; it's more like an artificially created being. We would effectively have created life, which opens up the debate of whether we ourselves are living in a simulation. But let's save that idea for another time in another thread :p

Many people think strong AI will be created before 2050. That would be absolutely amazing and I truly hope I live to experience it, but personally I'm skeptical because Moore's law (transistor counts doubling roughly every two years) is no longer holding up due to physical limitations. We can't make our transistors much smaller, so the exponential growth in processing speed of the past few decades has stopped. This is a problem because a strong AI needs a LOT of processing power.

For sake of argument however, let's assume that it will exist in the near future.

First off, a few things would happen when an AGI is created. The singularity would be reached. The AGI would be capable of recursive self-improvement; it would be redesigning itself at an extreme rate. Repetitions of this cycle would likely result in a runaway effect -- an intelligence explosion. As one of my textbooks puts it: "where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable."
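To see why repetition of that cycle runs away, here's a toy model with completely made-up numbers: if each generation is not only more capable but also better at improving itself, growth ends up faster than exponential.

```python
# Toy model of an "intelligence explosion". The numbers are invented;
# only the shape of the curve matters.

capability = 1.0         # arbitrary units; call human-level 1.0
improvement_rate = 0.1   # fraction by which a generation improves itself

for generation in range(1, 11):
    capability *= 1 + improvement_rate
    improvement_rate *= 1.5  # a smarter designer improves itself faster
    print(f"gen {generation:2d}: capability = {capability:10.1f}")

# Capability crawls for a few generations, then blows up: by generation 10
# it is already hundreds of times the starting level in this toy run.
```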

So, would we be able to control it?

I'm leaning towards no. Initially, probably yes, because the AGI will be developed in an isolated environment (isolated again in the computer science context). If it never leaves this area then we would likely be fine, as we could cut off the power and 'kill' the AGI. However, we have created this amazing thing called the internet ('the cloud' is the current buzzword). Once the AGI is uploaded there, by either itself or a careless scientist, which I personally don't think could be avoided, we would lose control over it. The internet can't be shut down. Depending on the AGI's intentions, which we really cannot predict, it could even control us, as modern human society is hilariously dependent on the internet and technology.

_____

Now, to respond to some of the posts;

I think that this is an untenable argument. Hundreds (if not thousands) of years ago flight was deemed impossible... yet here we are today, flying around the world. Ultimately it seems impossible for us to definitively determine what is possible in the future..

While you're right in the sense that this thread will be filled with mostly assumptions and speculation, that doesn't mean those speculations have no basis or that we can't logically derive what will be one or a few of the most likely outcomes. That's enough ground to have a debate which isn't untenable. :D





Can we control an AI? Considering that people are so damn paranoid about this kind of stuff, yeah, we will probably be able to make sure we don't lose control.

Well, you can't just claim this; that's the whole point of the thread. Are we actually able to control it once a strong AI has been created?




Sure we can control AI if we choose to make it controllable.

There is a likelihood that certain governments would want to try to go beyond the remit of controllable AI, at which point the question becomes "Can we control people? Specifically people with power and money, both as individuals and as part of an organisational (or governmental) power structure?"

I think we already know the answer to that.

Again, the point is that it's not this simple; it's not a choice we can make.




Story time, children. This story is brought to you by Stephen Hawking (not kidding).

There was once a group of scientists who built an intelligent computer. The first question they asked it was "Is there a God?". The computer replied "There is now", and a bolt of lightning struck the plug so it couldn't be turned off.

Hawking has been very outspoken lately regarding AI. However, it's not his area of expertise at all, so prominent AI researchers have criticized him for this, as his claims had very little basis. Do trust him on anything physics/quantum/astrophysics/cosmology though. :p
Not really a response to you, just a tidbit of topical info. :D




All levels of intelligence can be controlled on some level relative to the amount of intelligence/autonomy possessed. The greater the intelligence, the less control can be forced upon it.

Also, an object is more capable of being controlled, and to a higher degree, by a more intelligent being than by a less intelligent one.

For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box more so than the rabbit.

On the other end, an omniscient being cannot be controlled by one with less intelligence much like a monkey cannot explicitly control a human without the human's consent.

On to your topic:

The question is not whether we can control it. The question is whether it will let us control it if we make it. If not, we cannot control it. If yes, we can. Since it would be more intelligent than us, it would have more autonomy and therefore the ability to resist and refuse.

I intend to explore this theory of mine further, but I'm on my phone and I've already spent my 10 minutes typing this lol

I kind of agree, though I don't think it's quite this black and white. A 'lesser' being could be conditioned into controlling a 'greater' being. Imagine a police dog: they're capable of controlling criminals by threatening them. When the criminal's only options are being controlled or being killed, then in my book that means he is being controlled regardless of whether he allows it or not, as not allowing it would result in his death.



I say throw AIs in the ground and keep them there. An AI almost killed a person by starting a fire.
I fundamentally disagree. Progress of the human race has always been a result of pushing limits. This is no different; imagine if we did create a strong AI. We would have created 'life'!



Picking up on a couple of apebble's points, intelligence is not the only factor, nor is it a homogeneous thing.

An extreme example where a less intelligent being may control a more intelligent being might be where a robber (or a soldier, or anyone) has a gun to your child's head. The factor here is who has the most to lose. I would suggest that an AI may well hold this advantage over most humans almost all the time, and not all examples of it need to be extreme.
Another example, and I agree that a strong AI could indeed apply this technique much more efficiently than humans.

For the second point I am going to have to mention "emotional intelligence", which you will not catch me doing often, but for the sake of this argument I am going to accept that it exists and that some have more of it than others. I will also posit that it does not directly equate to logical or IQ-style intelligence, being neither mutually exclusive nor mutually required. People with higher "emotional intelligence" can and do (and may well even specialise in) controlling/manipulating people with higher IQ intelligence. I would suggest that emotional intelligence is unlikely to ever be one of an AI's strong points.
Yes, manipulation can definitely be used to control 'smarter' beings. I am not sure how effective it would be on a strong AI however, as it'd be leagues above us. We would be the apes to its human.





It's why I hate the term AI. While, yes, it is artificial intelligence, it is artificial autonomy we're going for.
Semantics, and not really a big deal, as the 'intelligence' in AI doesn't actually mean only 'smartness' but already covers autonomy, consciousness, etc.





EDIT: Perhaps we could begin with a definition of AI, ideally one that already exists out there rather than one written in pebblespeak.
If we are talking something from a futuristic novel (which I guarantee I have not read) or a film (which I likely have not seen) then it would be good to know.
It is kinda hard to do the philosophical debate thing before defining the premises.
See above, that should cover it :D




The Oxford Dictionary defines Artificial Intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Basically, something non-human being able to make human-like decisions because of a programmed “visual perception, speech recognition, decision-making, and translation between languages.”

This definition is very simplified and incomplete. It's more fitting for weak AI exclusively, which is probably not what the OP meant as that would result in a very short debate.


Objections:

Say a dumb brute holds a club to a scientist’s skull and tells him to dance. The scientist is clearly more intelligent, but the brute controls him. What gives?

---Well, this is not the intelligence described in the definition. We are dealing with levels of awareness. Your version of intellect is a tool capable of being used to control someone, but not a defining level of awareness. The capability of control we see comes from potential levels of awareness of the environment. The brute is just as aware of (or capable of being aware of) his environment as the scientist is or could be.

I disagree; the brute could have less awareness and still control the scientist in this scenario. Exploiting the weaknesses of a 'greater' being could allow 'lesser' beings to control it. (In the theme of this debate, we could 'kill' a strong AI which has gained control over us by killing all power worldwide; however, this would also result in the deaths of millions if not billions of humans.)


On to the topic of artificial intelligence and whether a human could control it:

If humans ever created a machine with greater levels of awareness than a human possessed, it would be impossible to control once created without the AI’s consent to being controlled.
Not impossible I think, it would just have massive consequences or drawbacks.
 

DeletedUser44426

Guest
OK, so how about this: what reason, other than simply because it can, would an Artificial Intelligence have to stop us from controlling it? Or, to put it another way, do you think it would wipe out the human race if it rose to sentience? Why would it want to? It could perfect itself over and over until it was completely flawless. So what would be its motive for destroying its creators, when it could perfect us instead? The A.I wouldn't have emotion, so it isn't evil, nor is it good. In my view, any being has its own self-defense mechanisms, whether it be a human, an animal, or an A.I. Sure, it could fight back if we caused it harm, but I'm positive (well, not exactly) it wouldn't destroy humans (don't let the movies cloud your judgment). Why would it need to, when it can simply change its attackers' (humans') motives?



I'm just throwing out things to make this debate a bit more interesting.
 

DeletedUser5819

Guest
Soo.... you post up a definition of Artificial intelligence.

Then you say it's all about "awareness", which is not mentioned anywhere in the definition.

So now that the terms are defined? Yeah right.

Also, I keep finding that your examples of intelligence (or is that awareness?) remind me of people who will never apologise because it is a sign of weakness, and never say please or thank you as those imply lesser status. The same people who equate intelligence (that's intelligence, not awareness) with being arrogant and bloody-minded, because being cooperative is a sign of lacking intelligence.

Is the rabbit less intelligent or less aware than the wasp? Is the tamed rabbit less aware or less intelligent than the untamed rabbit (untamed, not wild), or is it just less cooperative? It is easy to see the difference in controllability.

Do two people or entities really need to possess the same "awareness" before other factors make a difference to their success in controlling each other? I find that we are lacking a definition of "awareness" in the way you mean it, and even if that is provided, I very much doubt the above will be true.

How does a wish or need to control come into this definition of "awareness"? Or does it at all? In your definition of "awareness", can it be judged by other means than establishing its relative ability to control others?

EDIT: Oh Skully posted :)
I think this is one of the things I meant
Not impossible I think, it would just have massive consequences or drawbacks.
 
Last edited by a moderator:

DeletedUser8396

Guest
I'll give my Skully rebuttal when I'm on my desktop (I despise using my laptop). And TKJ, as Skully said, it would be entirely unknown. It could be anything from arbitrary action to pure, objective morality. One thing we can all agree on is that the event of creating such an AI would shatter, prove, and inspire many levels of philosophy, science, religion, and more, with there being another being at least on par with humanity.

And sirloin (despite your quite rude tone), the definition details things related to awareness. I clumped them together so I didn't need to reiterate them every single time.

See:

“visual perception, speech recognition, decision-making, and translation between languages.”

This “intelligence” is not to be confused with one being “smart” or “stupid”, but rather associated with awareness of the world around it and the capability to respond to that world on its own.
 
Last edited by a moderator:

DeletedUser33530

Guest
Personally, I would prefer to not see that day lol. Although it would surely be a very great day for all of science, I wouldn't want to witness the results of having to explain to a new form of life that its creators are, well, human.
 
Last edited by a moderator: