A basic question that follows from accepting the idea that machines can think is: what should I teach a machine to think? Daniel Suarez makes a profound point, which is that we live in an environment in which humans are the vector of discovery for machine AIs. In other words, machine intelligence is weak while human intelligence is strong, and we humans will build, using our intuition, the things that machines cannot intuit for themselves. It is that symbiosis, as it were, that makes machines more powerful than they otherwise would be.
Given that, a secondary question that follows from accepting that machines can think is: what does a machine feel when it is thinking? To the extent that one can conceive of a computer sensing itself, it might learn not to use certain chips when redundant systems are available, the way we might chew on one side of our mouth when we have a toothache. One implication of a sufficiently adaptable machine, then, is that it would come to prefer certain operations. Like humans, it would overuse its strengths. At some point this would become a handicap. The machine would have its own cognitive failures.
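To make that preference concrete, here is a toy sketch, entirely my own illustration with invented names and numbers rather than anything from Suarez: a machine routes work to whichever redundant chip has "felt" most reliable so far, and in doing so wears out its favorite.

```python
import random

class Unit:
    """One redundant component with a hidden, degradable health score."""

    def __init__(self, name, health):
        self.name = name
        self.health = health      # probability an operation succeeds
        self.uses = 0
        self.successes = 0

    def comfort(self):
        # The machine's felt "comfort" with this unit: its observed
        # success rate, with a mild optimistic prior for unused units.
        return (self.successes + 1) / (self.uses + 2)

    def run(self):
        self.uses += 1
        self.successes += random.random() < self.health
        # Overuse slowly wears the unit down.
        self.health = max(0.0, self.health - 0.0002)

units = [Unit("chip-A", 0.95), Unit("chip-B", 0.90), Unit("chip-C", 0.60)]

for _ in range(2000):
    # Always route work to the unit that currently feels best:
    # the machine's own bias toward its strengths.
    max(units, key=Unit.comfort).run()

for u in units:
    print(f"{u.name}: used {u.uses} times, health now {u.health:.2f}")
```

In this toy, the favored chip absorbs nearly all of the work while its health quietly degrades; because the comfort estimate is a lifetime average, it is slow to notice the decline, and the strength becomes the handicap.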
What might emerge from this man-machine symbiosis is that machines will tell us things we refuse to believe, because a machine has no compunction about revealing the truth. So it is with this in mind that I consider Ariely's work. What can machines help us to know about our own irrationality?