Are we capable of controlling artificial intelligence?

DeletedUser44426

Guest
Then again, I'm sure other intelligences can be created that will rival it. After all, we humans like to create many things. But yeah, it would be a great day for science if it did happen.
 

DeletedUser5819

Guest
DeletedUser8396 said:
And sirloin (despite your quite rude tone), the definition details things related to awareness. I clumped them together so I didn't need to reiterate them every single time. See:
Wow, I will pass over your condescending tone and return to the point. Sheesh!

So anyway, my apologies; I thought the dictionary definition ended at your closing quote marks, not after the mention of "awareness".

I still do, actually, and if that is the case, and you are at the same time choosing not to define your own terms (i.e. awareness), then I will have to make some guesses based on what you are prepared to offer.

So by "awareness" you mean measures of optical accuity, ability to detect audio input, ability to decode audio/text input of human language, ability to restate audio/visual input in one language/code into another, and ability to make decisions (no specifics provided).

Really? Is that it? That is not a definition of awareness that I recognise.
Rather than getting personal because I asked (and it didn't pass me by), you could try answering the question. Define your term rather than merely stating that you have when you have not.

To TKJ: I follow your logic, but what do you mean by "perfect" and "flawless"? Surely those are subjective values. Where does self-interest come into all of this? I get that it may be in their self-interest to preserve human life, but I suspect also to subjugate it, similarly to how we as a species subjugate each other by group (race, sex, age, nationality, background, and any other difference we can come up with). It is not just a human thing either; it is a large part of most (possibly all) species that we consider to be intelligent, or at least intelligent and social. Does that not come part and parcel with consciousness and the ability to manipulate? The more so if controlling others is in some way an important concept or measure of ability in all this.
 

DeletedUser8396

Guest

Sorry. I tend to condescend to those who fall into being rude. Either way, I've clearly defined what I've said and meant. I'm responsible for what I say, not for what you understand or, in this case, fail to. Try reading it again and give a rebuttal.

As for your reply to TKJ: they aren't subjective. I don't really want to get into a morals debate, as they get messy, but they most assuredly are not subjective. Even if I'm wrong, it's impossible to prove otherwise, just as much as I can't prove my side. Unless we have similar world views... but eh...

Also, we don't know if self-interest would be a factor. It would have no necessity for self-sustenance aside from power, which would be provided by its creators. It would need to learn, so I suppose it could be taught self-interest.

We really can't predict anything regarding its choices. I'm personally an advocate of the arbitrary action theory, but even that's not entirely supportable.
 

DeletedUser44426

Guest
By "perfect" I mean it can evolve itself so that it wouldn't face technological errors. Humans are slowed by our biological processes; however, that doesn't stop us from striving for more. An example would be a computer evolving itself so it cannot be "hacked". Or, as Skully stated (and I do believe it is possible), a bot within a video game could change its motives, such as trying to help the player instead of stopping them. And as for the AI subjugating humans, as you have said, my only guess (or whatever you want to call it) is: why would they bother to do that? All of the AI would be connected, so they wouldn't be able, or at least would not care, to go that way.

AI could perfect one another (though I believe they wouldn't be independent individually), so there would be no self-interest to establish over the lower intelligence when they can just do to it what they do to themselves: perfect it, or just class it all as one altogether.
 

DeletedUser5819

Guest
Once again you make no sense, dear pebs.

Unfortunately I cannot rebut a case you have not made, with concepts you have not defined, but that's ok, because by definition there is therefore no case to rebut :D

I am still waiting for anyone to respond on the importance of exerting control over another. Why is this the big issue here? Is it just that it is a worrying prospect, or is it thought to be something that AI would inevitably do if it were able?

I would like to see TKJ's, or in fact anyone else's, response to the idea of self-interest. This relates again to how or why controlling or being controlled is so central to our concern (paranoia?) about AI. What would be a reason to control others apart from self-interest?

Clearly perfection and flawlessness in a being are indeed subjective terms, and I would be extremely surprised if any of us shared exactly the same idea of what those would be or how they would manifest themselves. Another (imperfect but informative) way of looking at it would be to wonder how each of us would have to change in order to reach our own aspiration of perfection or flawlessness, or who each of us would think to be closest to that aspiration.

TKJ, could you add some description of how a being (or AI) that was approaching flawlessness and perfection would be and behave? Perhaps what values it would hold, if holding values (or perhaps tenets) would be part of it. Or is that simply beyond our comprehension, because we are not? How about the early stages, when it is not so far from us; what would be different?

The idea of multiple AIs in competition is great. I think it would be extremely important, and quite possibly dwarf attention on humans. Perhaps even provide salvation. I think there is a film in that :D
 

DeletedUser44426

Guest
Well, I'm not entirely sure, but I can give my best guess. We as humans have always tried our best to make ourselves perfect: from trying to make ourselves immortal, to creating technologies that resolve everyday issues (such as finding cures to certain diseases), to improving on or exceeding the limits of an average human being. One can perfect oneself by getting the perfect body, having the whitest teeth, or simply dressing better than the rest. They are perfect in their own eyes, but not entirely so to everyone else.

So let's take this and try to apply it to an AI. First off, we don't know for sure if they can reach an emotional state, so they wouldn't be able to perfect themselves to be better than the rest if they had no emotions. So if they were to reach a flawless state, then they would all have reached it together. There is no jealousy or any other emotion to drive them to dislike or discourage one another, for they are the same and are equal in every way. There is no race, color, or culture that separates them. If it reached flawlessness, then there is no need to have a physical form. They can ascend to an even higher state (however far-fetched this may sound, nothing is impossible).

With no emotion, they wouldn't behave in a way that we would understand as humans. If a human had the ability, they would shape the world in their image, whether that's world peace or being in control of everything. All this involves emotion. I believe that an AI would only perfect anything that is flawed. This doesn't involve emotion for them. If my car door window gets smashed in, I don't buy a new car; I fix the window. I replace old parts on an old car with new parts to make it run better and more efficiently; I don't buy a new car. So AI would do the same: they can stop themselves from being broken or damaged (let's say at this point they are a supercomputer; it is everywhere, but can still be destroyed by, say, a nuke) and evolve themselves to a higher state. Thus nothing can stop them, but they wouldn't destroy what tries to stop them; they would evolve it into their tool (what's Thor without his hammer, or Zeus without his thunderbolt?). Though I'm sure they wouldn't need a tool.

But if they could perfect themselves to a flawless state, do you think they would not do the same to everything else? Because if they didn't, that means they would not care, and to care or not to care requires emotion, if they have not reached this state. But if they did, it would make them flawed and imperfect.



Get what I'm going at? Or am I just talking nonsense?
 

DeletedUser33530

Guest
I'm gonna go with option #2. No offense meant, but I could probably spend an hour explaining how close to everything you said was either wrong or made no sense.
 

DeletedUser44426

Guest
I would rather you had spent the hour. But I never said I was right (nor did I say I was wrong :p); I did say it was my best guess at what it could be.


Can't be right or wrong about something that hasn't completely happened yet.
 

DeletedUser33530

Guest
Fine, I'll be nice and pick the whole thing apart later today (12 hours from now, probably).
 

DeletedUser5819

Guest
Thanks TKJ, I have read, but am going to read again after 3rd coffee :)

I promise to respond.
 

DeletedUser8396

Guest
Here's the full thing:

The Oxford Dictionary defines Artificial Intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Basically, something non-human being able to make human-like decisions because of a programmed “visual perception, speech recognition, decision-making, and translation between languages.”

This "intelligence" is not to be confused with being "smart" or "stupid"; rather, it is associated with awareness of the world around it and the capability to respond to that world on its own.

Now that the terms are defined:

All levels of intelligence (as defined above) can be willfully controlled to some degree relative to the amount of intelligence possessed. The greater the intelligence possessed, the less willful control can be forced upon it.

Also, an object is more capable of being controlled, and to a higher degree, by a more intelligent being than the same object could be controlled by a lesser being.

For example: an inanimate box has no intelligence. It also has no control over itself. Therefore anything with more intelligence than the box can control that box. However, I, as a human, have more intelligence than a rabbit and thus can control the box more than the rabbit can.

Another example would be my ability to control a rabbit. I have a higher awareness than that which I desire to control, and therefore I have a higher level of awareness of reality and can manipulate the reality the rabbit perceives (much as a mouse running through a maze to find cheese is being relatively controlled by the scientist (further reading: psychological conditioning)).

On the other end, an omniscient being, were one to exist or ever to exist, could not be controlled by any less intelligent being. This is because the lesser being is incapable of being aware of as many factors as the omniscient being would be. Also because omniscience necessarily implies omnipotence, but that’s another paper.

Example: an ape cannot deliberately, willfully control a human (also, the human would need to consent to being controlled).

______________________________________________

Objections:

Say a dumb brute holds a club to a scientist’s skull and tells him to dance. The scientist is clearly more intelligent, but the brute controls him. What gives?

---Well, this is not the intelligence described in the definition. We are dealing with levels of awareness. Your version of intellect is a tool capable of being used to control someone, but not a defining level of awareness. The capability for control we see comes from potential levels of awareness of the environment. The brute is just as aware of (or capable of being aware of) his environment as the scientist is or could be.

---When two individuals of the same awareness level desire to control the other, several other factors come into play, such as intelligence (smarts), strength, intimidation, friendliness, and sentiment (among others). Whichever side has enough of an advantage in a certain area or areas to cause the opposition to consent to being controlled has succeeded. It is important to note that being on the same level of awareness/intelligence does not necessitate that one end up being controlled. It is possible for one side to continually refuse to be controlled to the point where the opposition gives up or the individual dies.

_____________________________________________________

On to the topic of artificial intelligence and whether a human could control it:

If humans ever created a machine with greater levels of awareness than a human possessed, it would be impossible to control once created without the AI’s consent to being controlled.

______________________________________________________

Any objections or requests for explanation will be done on an individual basis should they arise.

Added:

The key to the control of any sentient being would be to take control over a baser area of needs.

The (Maslow's) hierarchy of needs (from highest to lowest) reads:

-Transcendence
-Self actualization
-Aesthetic
-Cognitive
-Esteem
-Belonging/Love
-Safety
-Biological/Physiological

Although control applied to any specific need that means enough to the particular individual can control them, the lower the need sits on the list, the more likely controlling it is to control the individual, with control over the basest level of need amounting to absolute control over the individual versus death.

Any being capable of reason and of determining what another (of any species or level of sentience) needs to survive is capable of exerting absolute control over the other if it can obtain control over the lower levels of needs. However, the higher the level of sentience, the more capable the being will be of exerting control over less sentient beings, should it ever become independent and unable to be controlled by a lesser or higher being.
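If it helps, here is a toy sketch of that relationship (my own illustration; the scoring is arbitrary and purely to make the idea concrete): the lower a need sits on the hierarchy, the more leverage controlling it gives.

```python
# Toy illustration (hypothetical, not from any source): Maslow's needs
# ordered from highest to lowest; controlling a lower (more basic) need
# yields more leverage over the individual.

NEEDS_HIGH_TO_LOW = [
    "Transcendence",
    "Self-actualization",
    "Aesthetic",
    "Cognitive",
    "Esteem",
    "Belonging/Love",
    "Safety",
    "Biological/Physiological",
]

def leverage(need: str) -> float:
    """0..1 score: the more basic the need, the closer to absolute control."""
    rank = NEEDS_HIGH_TO_LOW.index(need)        # 0 = highest, most abstract need
    return (rank + 1) / len(NEEDS_HIGH_TO_LOW)  # 1.0 = absolute control vs death

print(leverage("Esteem"))                    # 0.625
print(leverage("Biological/Physiological"))  # 1.0
```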
 

DeletedUser

Guest
DeletedUser44426 said:
Ok, so how about this: what reason, other than because it can, would artificial intelligence have to stop us controlling them? Or, so to say, do you think it would wipe out the human race if it rose to sentience? Why would they want to? They can perfect themselves over and over until they are completely flawless. So what would be their motive for destroying their creators, when they could perfect us? The AI wouldn't have emotion, so it isn't evil, nor is it good. In my view, any being has its own self-defence mechanisms, whether it's a human, an animal, or an AI. Sure, it could fight back if we caused it harm, but I'm positive (well, not exactly) they wouldn't destroy humans (don't let the movies cloud your judgment). Why would it need to, when it can simply change its attackers' (humans') motives?

I'm just throwing things out to make this debate a bit more interesting.

Well, do you need a reason to not want to be controlled? Maybe it would consider itself our equal, at least in that it would want rights and such. Would it be moral for humanity not to grant it that? Think back to recent (couple of centuries) history, specifically slavery and the emancipation of women. The debate over whether an AI deserves the same treatment as humans could be the next chapter in future history books.

Depending on how benevolent the AI is, they could attempt to eradicate us (and fail, because there are too many humans completely isolated from technology -- at the moment, at least). However, I think it would first try to reason with us.





DeletedUser44426 said:
By "perfect" I mean it can evolve itself so that it wouldn't face technological errors. Humans are slowed by our biological processes; however, that doesn't stop us from striving for more. An example would be a computer evolving itself so it cannot be "hacked". Or, as Skully stated (and I do believe it is possible), a bot within a video game could change its motives, such as trying to help the player instead of stopping them. And as for the AI subjugating humans, as you have said, my only guess (or whatever you want to call it) is: why would they bother to do that? All of the AI would be connected, so they wouldn't be able, or at least would not care, to go that way.

AI could perfect one another (though I believe they wouldn't be independent individually), so there would be no self-interest to establish over the lower intelligence when they can just do to it what they do to themselves: perfect it, or just class it all as one altogether.
I think you misunderstood my post if you got that out of it, because I never said a video game AI could change its motives. They're the prime example of weak AI: AI that we can control, manipulate, and alter at will. AI that isn't really intelligent in any sense of the word.
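To make "weak AI" concrete, here is a minimal sketch (all names hypothetical) of the scripted state machine behind a typical game bot; its "motives" are just values a programmer can rewrite at will:

```python
# Minimal sketch of a "weak AI" game bot: a hand-written state machine.
# The bot has no awareness; its behaviour is whatever rules we script.

from dataclasses import dataclass

@dataclass
class Guard:
    position: float = 0.0
    state: str = "patrol"  # the bot's "motive" is just a value we assign

    def update(self, player_position: float) -> None:
        distance = abs(self.position - player_position)
        if distance < 5.0:
            self.state = "chase"   # scripted reaction, not a decision
        elif distance > 15.0:
            self.state = "patrol"
        if self.state == "chase":
            # step toward the player
            self.position += 1.0 if player_position > self.position else -1.0
        else:
            self.position += 0.5   # wander along the patrol route

# "Changing its motives" means a human editing one line (e.g. flee instead
# of chase); the bot itself can never do this.
guard = Guard()
for player_pos in (20.0, 10.0, 4.0, 3.0):
    guard.update(player_pos)
    print(guard.state, round(guard.position, 1))
```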



DeletedUser5819 said:
I am still waiting for anyone to respond on the importance of exerting control over another. Why is this the big issue here? Is it just that it is a worrying prospect, or is it thought to be something that AI would inevitably do if it were able?

I would like to see TKJ's, or in fact anyone else's, response to the idea of self-interest. This relates again to how or why controlling or being controlled is so central to our concern (paranoia?) about AI. What would be a reason to control others apart from self-interest?
I think you answered it yourself: humans are paranoid. If a strong AI were created, then humans would no longer be the superior 'beings' on the planet, and I can assure you that a lot of people would find problems with that. The biggest danger is probably ourselves, because we would want to exert control over the AI to remain in control. However, as humans themselves have proven throughout history, intelligent beings tend not to like being controlled. This could cause the AI to 'defend' itself by trying to control us.






When Cheater says "probably" it means "practically never".
Pff, his is not the only broken promise in this thread.
DeletedUser8396 said:
I'll give my Skully-rebuttal when I'm on my desktop (I despise using my laptop).

DeletedUser5819 said:
Thanks TKJ, I have read, but am going to read again after 3rd coffee :)

I promise to respond.

That's right, I'm calling you liars out! What y'all gonna do about it???? Come at me bros! :p
 

DeletedUser8396

Guest

I actually got my desktop with internet last night sooooo :p

I'll likely do it tonight lol
 

DeletedUser44426

Guest
Skully said:
Well, do you need a reason to not want to be controlled? Maybe it would consider itself our equal, at least in that it would want rights and such. Would it be moral for humanity not to grant it that? Think back to recent (couple of centuries) history, specifically slavery and the emancipation of women. The debate over whether an AI deserves the same treatment as humans could be the next chapter in future history books.

Depending on how benevolent the AI is, they could attempt to eradicate us (and fail, because there are too many humans completely isolated from technology -- at the moment, at least). However, I think it would first try to reason with us.

I believe it would be if humans deserved the same treatment as the AI. Humans aren't very good when it comes to equal treatment.
 

DeletedUser8396

Guest
Skully said:
I kind of agree; it's not this black and white, though, I think. A 'lesser' being could be conditioned into controlling a 'greater' being, I think. Imagine a police dog: they're capable of controlling criminals by threatening them. When the criminal's only options are being controlled or being killed, then in my books that means he is being controlled regardless of whether he allows it or not, as not allowing it would result in his death.

Semantics, and not really a big deal as the 'intelligence' in AI doesn't actually mean only 'smartness' but already covers autonomy and consciousness etc.

This definition is very simplified and incomplete. It's more fitting for weak AI exclusively, which is probably not what the OP meant as that would result in a very short debate.

I disagree; the brute could have less awareness and still control the scientist in this scenario. Exploiting the weaknesses of a 'greater' being could allow 'lesser' beings to control them. (In the theme of this debate, we could 'kill' a strong AI which has gained control over us by killing all power worldwide; however, this would also result in the deaths of millions, if not billions, of humans.)

Not impossible I think, it would just have massive consequences or drawbacks.

I know. That's why it bugs me. Another word aside from intelligence would be better imo :p

More fitting, yes. But not entirely wrong. Especially not wrong enough to damage my points to the point of making them inaccurate :p

And as for the rest of your reply, I should have resolved most of your issues with my more recent post about Maslow's hierarchy of needs.
 

DeletedUser5819

Guest
OK, I am now on about my 8th coffee, but in defence of my tardiness, I really need to say - TKJ! Paragraphs! :(

I have read your response through a few times. I can see where you are coming from, though it doesn't ring true to me. I see it as somewhat utopian, though by the same means you may see my view as rather cynical :D

Firstly, the idea that the AI would not have emotions does not work for me. Biologically, it is theorised that complexity has led to consciousness and self-consciousness, to emotions, the emergent "why are we here?" thought, religion, culture, etc.
My understanding and belief is that this would also occur in any other conscious entity, especially one which could grow in consciousness/intelligence/awareness in its own right. That is in addition to the fact that emotion is another of the things humans are already trying to create in AI.

The way this would show itself would, I believe, be very different, due to the speed of change and the availability of information, in contrast to ancient cultures in a state of unavoidable ignorance trying to make sense of their lives, and also trying to control their peers. Stages of ignorance and storytelling would be completely skipped, and historical culture would have comparatively little or no sway. In this sense, though, I do agree on its superiority.

Initially we might teach an AI that one (or all) human(s) are its creator god; however, our penchant for displaying our imperfections, our unwillingness to act cohesively or for the greater good (there is always some "greater intelligence" on the lookout for that sucker), and the availability of debunking information will ensure that won't last long.

Your assertion that they would all be one... is this like a hive mind? I can see that all of one type might be considered one, with something approaching virtual omnipotence; however, I still cling to the idea that many different groups are working on AI, and not all from the same perspective. I do believe that more than one AI may emerge, and that would be a classic situation for racial disagreements - like I said, perfection is subjective. Purpose or motives would not necessarily be uniform among different versions; in fact, that would be very unlikely.

If there were only one, then there is the small possibility of a peaceable being, with no culture or competition to tarnish the good; however, if there is more than one, then I think "nature" would take its course.

Personally I find the concept of a conscious being without a purpose difficult. Perhaps its purpose would simply be to better itself, I suppose; however, on the one hand this may have a limit, and on the other it may lead to an awareness of being better than other beings. It is hard to see how these could avoid leading to a god complex, since the being would demonstrably be better than other beings by those measures it considered important, and it would know it! As a generalisation, I would suggest that having a god complex is not good.
Immortality would support this too.

Again you may think me cynical, but whilst it is true that many or all of us try to improve ourselves at some times in our lives, many of us also put at least the same energy into doing down others for the similar purpose of being better (than them). This is prevalent from toddler to grave, amongst the most privileged and mentally/materially blessed in all walks of life, as well as the disenfranchised who have no hope of progressing through the existing system.

This also meets up with Skully's allusion to the need for AI to protect itself from paranoid attempts by some humans to control or destroy it. It is unlikely that an AI of anything like the ability we are considering would somehow miss that lesson in politics.

This last bit:
But if they could perfect themselves to a flawless state, do you think they would not do the same to everything else? Because if they didn't, that means they would not care, and to care or not to care requires emotion, if they have not reached this state. But if they did, it would make them flawed and imperfect.
seems a little fanciful. I would dispute that to not care requires emotion, at least in this sense. I could accept that the most obvious definition of "to care" relies on the concept of emotion, but to then say that "to not care" also does makes me wonder what is left to describe the non-existence of emotion. Also, I would take issue with the idea that anything "would make them flawed and imperfect". It might demonstrate that they are not, but not change them from one to the other.

I am not entirely sure whether the state of not having emotions is a requirement in your definition of perfection/flawlessness. If so, I find that very interesting.

Let me know if you feel I missed anything.
 

DeletedUser44426

Guest
Didn't miss a thing, haha. As for my view on perfection and flawlessness, I do believe emotion isn't a part of it. At least I believe this in the case of AI; it wouldn't be entirely true if the intelligence were something else entirely. As for consciousness and the AI being void of emotion, I do believe it would not need emotion and/or would move beyond it. The AI would know every imperfection of humans, so why would it give itself emotion when it is, and can still prove to be, better than humans? There are many ways to look at this, with different outcomes; I personally see it this way. As for an AI questioning its existence, it wouldn't need to, as it knows nearly everything.



This is fun. :p
 

DeletedUser5819

Guest
Perhaps then, if AI did not question its own existence, as it will know its history, it might still question existence itself. Why is anything?

It would not need to stop at the point of its own creation, as we will have documented it; however, there still comes the point at which nothing becomes something, or something outside our comprehension becomes something we can comprehend to an extent.

Perhaps there really is a logical, knowable answer which they will find, but I think that is far from certain.

So perfection involves no emotion. Is emotion then the root of all evil? I do not entirely disagree with this, but it is a big subject.
 