Are we capable of controlling artificial intelligence?

DeletedUser33530

Guest
Regarding the promises I have, as some claim, "broken" in this thread: screw you.

The way this would show itself would, I believe, be very different due to the speed of change and the availability of information, in contrast to ancient cultures in a state of unavoidable ignorance trying to make sense of their lives and also trying to control their peers. Stages of ignorance and storytelling would be completely skipped, and historical culture would have comparatively little or no sway. In this sense, though, I do agree on its superiority.
I fail to see how an AI would avoid ignorance. Assuming we maintain control, we'll probably feed it information at a certain rate. It's highly likely it will learn faster than humans ever could, but it won't be instantly all-knowing.


Initially we might teach AI that one (or all) human(s) are their creator god; however, our penchant for displaying our imperfections, our unwillingness to act cohesively or for the greater good (there is always some "greater intelligence" on the lookout for that sucker), and the availability of debunking information will ensure that won't last long.
What? We could totally trick an AI into thinking we are all gods. I'll cite all of the polytheistic religions as proof that it's easy to believe in imperfect gods that are constantly fighting and screwing stuff up. No offense meant by that. Besides, if we teach an AI that our imperfections are our perfections and give it no reason to think otherwise, it will believe it.

Your assertion that they would all be one... is this like a hivemind? I can see that all of one type might be considered one, with something approaching virtual omnipotence; however, I still cling to the idea that many different groups are working on AI, and not all from the same perspective. I do believe that more than one AI may emerge, and that would be a classic situation for racial disagreements - like I said, perfection is subjective. Purpose or motives would not necessarily be uniform among different versions; in fact, that would be very unlikely.
Virtual omnipotence? Meaning it's close to omnipotent, or it is omnipotent but only on the internet?

If there were only one, then there is the small possibility of a peaceable being, with no culture or competition to tarnish good; however, if there is more than one, then I think "nature" would take its course.
So they'll be peaceful hunter-gatherers until they discover farming?
I'm not saying that AIs won't be driven by humans to hate or be jealous, but nature tells us the smarter you are, the more you work together - until your survival or your mating rights are put at risk. That shouldn't happen for an AI. Also, an AI will have no animal instincts or desires (unless we give them to it) anyway.

Personally I find the concept of a conscious being without a purpose difficult. Perhaps its purpose would simply be to better itself, I suppose; however, on the one hand this may have a limit, and on the other it may lead to an awareness of being better than other beings. It is hard to see how these could avoid leading to a god complex, since the being would demonstrably be better than other beings by those measures it considered important, and it would know it! As a generalisation I would suggest that having a god complex is not good.
Immortality would support this too.
Who says it will be better, and, more importantly, even if it were, who says we can't make it believe otherwise?

Again, you may think me cynical, but whilst it is true that many or all of us try to improve ourselves at some times of our lives, many of us also put at least the same energy into doing down others for the similar purpose of being better (than them). This is prevalent from toddler to grave, and amongst the most privileged and mentally/materially blessed in all walks of life as well as the disenfranchised who have no hope of progressing through the existing system.
We do this in order to survive, so we may reproduce, and then protect those offspring. These are all traits of animals. An AI might not have these traits. In fact, I would greatly recommend that we don't give our first few AI attempts a fear of death, because that would give it a reason to prevent its death. Keep in mind that an AI could consider any changes we perform on it to be equivalent to death, as its mind before the changes would be "dead".

This also meets up with skully's allusion to the need for AI to protect itself from paranoid attempts by some humans to control or destroy it. It is unlikely that an AI of anything like the ability we are considering would somehow miss that lesson in politics.
see above
 

DeletedUser

Guest
...
[Image: "Jesus says" meme - "Yo, cheater"]
 