DeletedUser44426
Guest
So perfection involves no emotion. Is emotion then the root of all evil? I do not entirely disagree with this, but it is a big subject.
Sounds like a good debate. Haha
I fail to see how an AI would avoid ignorance. Assuming we maintain control, we'll probably feed it information at a certain rate. It's highly likely it will learn faster than humans ever could, but it won't be instantly all-knowing.

The way this would show itself would, I believe, be very different because of the speed of change and the availability of information, in contrast to ancient cultures in a state of unavoidable ignorance trying to make sense of their lives while also trying to control their peers. Stages of ignorance and storytelling would be completely skipped, and historical culture would have comparatively little or no sway. In this sense, though, I do agree on its superiority.
What? We could totally trick an AI into thinking we are all gods. I'll cite all of the polytheistic religions as proof that it's easy to believe in imperfect gods that are constantly fighting and screwing stuff up. No offense meant by that. Besides, if we teach an AI that our imperfections are our perfections and give it no reason to think otherwise, it will believe it.

Initially we might teach an AI that one (or all) human(s) are its creator god(s); however, our penchant for displaying our imperfections, our unwillingness to act cohesively or for the greater good (there is always some "greater intelligence" on the lookout for that sucker), and the availability of debunking information will ensure that won't last long.
Virtual omnipotence? Meaning it's close to omnipotent, or it is omnipotent but only on the internet. Your assertion that they would all be one... is this like a hive mind?

I can see that all of one type might be considered one, with something approaching virtual omnipotence; however, I still cling to the idea that many different groups are working on AI, and not all from the same perspective. I do believe that more than one AI may emerge, and that would be a classic situation for racial disagreements; like I said, perfection is subjective. Purpose and motives would not necessarily be uniform among different versions; in fact, that would be very unlikely.
So they'll be peaceful hunter-gatherers until they discover farming?

If there were only one, there is the small possibility of a peaceable being, with no culture or competition to tarnish its good nature; however, if there is more than one, then I think "nature" would take its course.
Who says it will be better, and more importantly, even if it were, who says we can't make it believe otherwise?

Personally, I find the concept of a conscious being without a purpose difficult. Perhaps its purpose would simply be to better itself, I suppose; however, on the one hand this may have a limit, and on the other it may lead to an awareness of being better than other beings. It is hard to see how these could avoid leading to a god complex, since the being would demonstrably be better than other beings by those measures it considered important, and it would know it! As a generalisation, I would suggest that having a god complex is not good.
Immortality would support this too.
We do this in order to survive, so we may reproduce and then protect those offspring. These are all traits of animals. An AI might not have these traits. In fact, I would greatly recommend that we don't give our first few AI attempts a fear of death, because that would give them a reason to prevent their death. Keep in mind an AI could consider any changes we perform on it to be equal to death, as its mind before the changes would be "dead".

Again, you may think me cynical, but while it is true that many or all of us try to improve ourselves at some times in our lives, many of us also put at least the same energy into doing down others for the similar purpose of being better (than them). This is prevalent from toddler to grave, amongst the most privileged and mentally/materially blessed in all walks of life, as well as the disenfranchised who have no hope of progressing through the existing system.
See above.

This also meets up with skully's allusion to the need for AI to protect itself from paranoid attempts by some humans to control or destroy it. It is unlikely that an AI of anything like the ability we are considering would somehow miss that lesson in politics.
Who says it will be better, and more importantly, even if it were, who says we can't make it believe otherwise?