Smart Computers or Stupid Humans?

“The problem with the Turing Test is that it presents a conundrum of scientific method. We presume that improvement to machines takes place, so there is a starting state in our experiments where the human is considered ‘smarter’, whatever that means, than the computer. We are measuring a change in human ability to discriminate between human and machine behavior. But the human is defined as the most flexible element in the measurement loop at the start, so how do we know we aren’t measuring a state change in the human, rather than the computer? Is there an experimental difference, in this setup, between computers getting ‘smarter’ and humans getting ‘stupider’? I don’t think so.”

– Jaron Lanier, Agents of Alienation

A Cartoon Version of the World

“An agent’s model of what you are interested in will be a cartoon model, and you will see a cartoon version of the world through the agent’s eyes. It is therefore a self-reinforcing model. This will recreate the lowest-common-denominator approach to content that plagues TV. ‘You’re interested in Balinese ritual, therefore you’re interested in travel, therefore you’re interested in the Infobahn Travel Game Show!’.”

– Jaron Lanier, Agents of Alienation