Machine intelligence struggles with the prospect of failure. Programming a computer, let alone an AI, to make creative decisions in the absence of key knowledge makes the black-box syndrome even more opaque. It also increases the risk of legal liability for having inadvertently or imprecisely programmed a machine to fail. Permitting ever-greater AI participation in real-world decisions without human intervention creates a new environment in which blame for failures becomes hard to assign.
You’re driven by the only information you think is available
Watson and Holmes were discussing a new case they had been engaged to solve. “Well,” Watson mused, “the only thing we know about the victim is that he smelled of alcohol.” “Be careful, Watson,” said Holmes. “Before long you will be imagining situations driven by the apparent use of alcohol. Which pub […]