AI and AC Are Two Different Things

A growing community of people, laypeople and experts alike, believes that because computers have been getting exponentially more capable for decades, they will become more intelligent than humans sometime within the next fifty years, and that when they do, they will be a major threat to us. I am one such worrywart. But when weighing this claim, many skeptics raise a specific, and horribly wrong, counterargument against the worrywarts. It goes something like this:

“A conscious computer? Like in the movies? Give me a break. Computing technology is centuries away from being able to create a machine that has feelings, awareness, and a sense of selfhood like that of humans. The human brain is far too complex. It’s pointless to worry so much about something that won’t exist for hundreds of years.”

This argument misunderstands what AI actually is. The first artificial superintelligence will not be a conscious being. It won’t have feelings, it won’t “hate humans”, it won’t be aware of its own capacity to think, and it won’t have a mind that can process and reflect on subjective experiences the way ours can. The skeptics are right that, given current trends in technology, we’re probably centuries away from being able to create a truly conscious machine. But that’s not the point, and it never has been.

The point is that consciousness and intelligence are not the same thing. And AI researchers aren’t trying to build artificial consciousness (AC). They’re trying to build artificial intelligence.

The first superintelligent AI won’t be sentient. Its brain won’t be anything remotely like a human brain. Instead, it’ll be a very, very sophisticated computer program. It won’t need empathy or self-awareness to steal our nuclear launch codes. It will just need a defined goal and the ability to outsmart human beings.

We love to anthropomorphize superintelligent machines by depicting them as vengeful war-gods or ultra-benevolent deities, basically asking, “What would humans be like if we had superintelligence?” But it’s far more accurate for us to ask instead, “What would Microsoft Windows be like if it had superintelligence?” And that is a thought that should frighten us all.


For further reading, I highly recommend the novel Blindsight by Peter Watts and the nonfiction book Superintelligence by Nick Bostrom.
