Then again, I'm sure other intelligences can be created that will rival it. After all, we humans like to create many things. But yeah, a great day for science if it did happen.
Wow, I will pass over your condescending tone and return to the point. Sheesh!

And sirloin (despite your quite rude tone), the definition details things related to awareness. I clumped them together so I didn't need to reiterate them every single time.
See:
Wow, I will pass over your condescending tone and return to the point. Sheesh!
So anyway, my apologies, I thought the dictionary definition ended at the end of your quote marks, not after the mention of "awareness".
I still do, actually, and if that is the case, and at the same time you are choosing not to define your own terms (i.e. awareness), then I will have to make some guesses based on what you are prepared to offer.
So by "awareness" you mean measures of visual acuity, the ability to detect audio input, the ability to decode audio/text input of human language, the ability to restate audio/visual input in one language/code in another, and the ability to make decisions (no specifics provided).
Really? Is that it? That is not a definition of awareness that I recognise.
Rather than get personal because I asked (and it didn't pass me by) you could try answering the question. Define your term rather than merely stating that you have when you have not.
To TKJ: I follow your logic, but what do you mean by "perfect" and "flawless"? Surely those are subjective values. Where does self-interest come into all of this? I get that it may be in their self-interest to preserve human life, but I suspect also to subjugate it, similarly to how we as a species subjugate each other by group (race, sex, age, nationality, background, and any other difference we can come up with). It is not just a human thing either, and is a large part of most (possibly all) species that we consider to be intelligent, or at least intelligent and social. Does that not come part and parcel with consciousness and the ability to manipulate? All the more so if controlling others is in some way an important concept or measure of ability in all this.
Here's the full thing:
The Oxford Dictionary defines Artificial Intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Basically, something non-human being able to make human-like decisions because of a programmed “visual perception, speech recognition, decision-making, and translation between languages.”
This “intelligence” is not to be confused with one being “smart” or “stupid”, but rather associated with awareness of the world around it and capability to respond to that world on its own ability.
Now that the terms are defined:
All levels of intelligence (as defined above) can be willfully controlled on some level relative to the amount of intelligence possessed. The greater the intelligence possessed, the less willful control can be forced upon it.
Also, a more intelligent being can control an object more fully, and to a higher degree, than a lesser being could control the same object.
For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box more so than the rabbit.
Another example would be my ability to control a rabbit. I have a higher level of awareness than that which I desire to control, and can therefore manipulate the reality the rabbit perceives (much like a mouse running through a maze to find cheese is being relatively controlled by the scientist (further reading: psychological conditioning)).
On the other end, an omniscient being, were one to exist or ever to exist, could not be controlled by any less intelligent being. This is because the lesser being is incapable of being aware of as many factors as the omniscient being would be. Also because omniscience necessarily implies omnipotence, but that's another paper.
Example: an ape cannot deliberately, willfully control a human (also, the human would need to consent to being controlled).
______________________________________________
Objections:
Say a dumb brute holds a club to a scientist’s skull and tells him to dance. The scientist is clearly more intelligent, but the brute controls him. What gives?
---Well, this is not the intelligence described in the definition. We are dealing with levels of awareness. Your version of intellect is a tool capable of being used to control someone, but not a defining level of awareness. The capability of control we see comes from potential levels of awareness of the environment. The brute is just as aware of (or capable of being aware of) his environment as the scientist is or could be.
---When two individuals of the same awareness level desire to control the other, several other factors come into play, such as intelligence (smarts), strength, intimidation, friendliness, and sentiment (among others). Whichever side has enough of a certain area or areas to cause the opposition to consent to being controlled has succeeded. It is important to note that when both are on the same level of awareness/intelligence, it does not necessarily end in one being controlled. It is possible for one side to continually refuse to be controlled to the point where the opposition gives up or the individual dies.
_____________________________________________________
On to the topic of artificial intelligence and whether a human could control it:
If humans ever created a machine with greater levels of awareness than a human possessed, it would be impossible to control once created without the AI’s consent to being controlled.
______________________________________________________
Any objections or requests for explanation will be done on an individual basis should they arise.
Fine I'll be nice and pick the whole thing apart later today (12 hours from now probably).
*48 hours later*
Ok, so how about this: what reason, other than it being because it can, would artificial intelligence have to stop our use of controlling them? Or, so to say, do you think it would wipe out the human race if it rose to sentience? Why would they want to? They can perfect themselves over and over until they were completely flawless. So what would be their motive for destroying their creators, when they could perfect us? The A.I. wouldn't have emotion, so it isn't evil, nor is it good. In my view, any being has its own self-defense mechanisms, whether it is a human, animal, or A.I. Sure, it could fight back if we caused it harm, but I'm positive (well, not exactly) they wouldn't destroy humans (don't let the movies cloud judgment). Why would it need to when it can just simply change its attackers' (humans') motives?
I'm just throwing out things to make this debate a bit more interesting.
"I think you misunderstood my post if you got that out of it, because I never said a video game AI could change its motives. They're the prime example of weak AI: AI that we can control, manipulate and alter at will. AI that isn't really intelligent in any sense of the word."

By "perfect" I mean it can evolve itself so that it wouldn't face its technological errors. Humans are slowed by our biological process; however, that doesn't stop us from striving for more. An example would be a computer evolving itself so it cannot be "hacked". Or, as Skully stated, and I do believe it is possible, a bot from within a video game could change its motives, such as trying to help the player instead of stopping it. And about the A.I. subjugation of humans, as you have said, my only guess, or whatever you want to call it, is: why would they bother to do that? All of the A.I. would be connected, so they wouldn't be able, or would at least not care, to follow that way.
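To illustrate what "weak AI that we can control, manipulate and alter at will" means in practice, here is a minimal, hypothetical sketch (not from any real game engine; the bot, states, and motives are made up): the bot's entire "motive" is just a variable the programmer sets, so it has no awareness to resist being controlled.

```python
# A toy "weak AI" game bot. Its behavior, including its "motive",
# is a plain value the programmer can read and overwrite at will.
class GuardBot:
    def __init__(self, motive="stop_player"):
        self.motive = motive  # fully under the programmer's control

    def act(self, player_visible):
        """Return the bot's next action given one boolean observation."""
        if not player_visible:
            return "patrol"
        if self.motive == "stop_player":
            return "attack"
        if self.motive == "help_player":
            return "escort"
        return "idle"

bot = GuardBot()
print(bot.act(player_visible=True))   # the scripted default: "attack"

bot.motive = "help_player"            # we alter its "motive" at will
print(bot.act(player_visible=True))   # now: "escort"
```

The point of the sketch is that nothing in the bot can notice or refuse the change: there is no awareness in the sense defined earlier in the thread, only a lookup table the controller edits.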
A.I. could perfect one another (though I believe they wouldn't be independent individually), so there would be no self-interest to establish over the lower intelligence when they can just do to us what they do to themselves: perfect us, or just classify us all as one altogether.
"I think you answered it yourself. Humans are paranoid. If a strong AI were to be created then that could mean that humans would no longer be the superior 'beings' on the planet, and I can assure you that a lot of people would find problems in that. The biggest danger is probably ourselves because we would want to exert control over the AI to remain in control. However, as humans themselves have proven throughout history, intelligent beings tend to not like being controlled. This could cause the AI to 'defend' itself by trying to control us."

I am still waiting for anyone to respond on the importance of exerting control over another. Why is this the big issue here? Is it just that it is a worrying prospect, or is it thought to be something that AI would inevitably do if it were able?
I would like to see TKJ's or in fact anyone else's response to the idea of self interest. This relates again to how or why controlling or being controlled is so central to our concern (paranoia?) about AI. What would be a reason to control others apart from self interest?
"Pff, his is not the only broken promise in this thread."

When Cheater says "probably" it means "practically never".
I'll give my Skully-rebuttal when I'm on my desktop (I despise using my laptop).
Thanks TKJ, I have read, but am going to read again after a 3rd coffee.
I promise to respond.
Well, do you need a reason to not want to be controlled? Maybe it would consider itself equal to us, at least in that it would want rights and such. Would it be moral for humanity not to grant it that? Think back to recent (couple of centuries) history, specifically slavery and the emancipation of women. The debate over whether an AI deserves the same treatment as humans could be the next chapter in future history books.
Depending on how benevolent the AI is, it could attempt to eradicate us (and fail, because there are too many humans completely isolated from technology -- at the moment, at least). However, I think it would first try to reason with us.
That's right, calling you liars out! What y'all gonna do about it???? Come at me bros!
Skully said: I kind of agree; it's not this black and white, though, I think. A 'lesser' being could be conditioned into controlling a 'greater' being. Imagine a police dog: they're capable of controlling criminals by threatening them. When the criminal's only options are being controlled or being killed, then in my book that means he is being controlled regardless of whether he allows it or not, as not allowing it would result in his death.
Semantics, and not really a big deal, as the 'intelligence' in AI doesn't actually mean only 'smartness' but already covers autonomy, consciousness, etc.
This definition is very simplified and incomplete. It's more fitting for weak AI exclusively, which is probably not what the OP meant as that would result in a very short debate.
I disagree; the brute could have less awareness and still control the scientist in this scenario. Exploiting the weaknesses of a 'greater' being could allow 'lesser' beings to control them. (In the theme of this debate, we could 'kill' a strong AI which has gained control over us by killing all power worldwide; however, this would also result in the deaths of millions if not billions of humans.)
Not impossible, I think; it would just have massive consequences or drawbacks.
"But if they could perfect themselves to a flawless state, do you think they would not do the same to everything else? Because if they didn't, that means they would not care, and to care or not to care requires emotion, if they have not reached this state. But if they did, it would make them flawed and imperfect."

That seems a little fanciful. I would dispute that to not care requires emotion, at least in this sense. I could accept that the most obvious definition of "to care" relies on the concept of emotion, but to then say that to not care also does makes me wonder what is left to describe the non-existence of emotion. Also, I would take issue with the idea that anything "would make them flawed and imperfect". It might demonstrate that they are not, but not change them from one to the other.