Intuality AI Magazine

December 16, 2022

Machine intelligence struggles with the prospect of failure. Programming a computer, let alone an AI, to make creative decisions in the absence of key knowledge makes the black-box syndrome even more obscure. It also increases the risk of legal liability for inadvertently or imprecisely programming a machine to fail. Permitting ever-increasing AI participation in real-world decisions without human intervention creates a new environment in which blame for failures becomes hard to assign.

Featured Editorials

This Week's Articles

You’re driven by the only information you think is available

Watson and Holmes were discussing a new case that they had been engaged to solve. “Well,” Watson mused, “the only thing we know about the victim is that he smelled of alcohol.” “Be careful, Watson,” said Sherlock. “Before long you will be imagining situations that are driven by the apparent use of alcohol. Which pub […]

Humanized AI seeks to maximize available hope

Availability of information determines how much truth we receive and perceive. Truth should not be merely a matter of judgment; a claim is either true or not true. But truth cannot be perceived until it is conveyed (available) and received (understood). This simple statement is actually a big controversy today, because truth conveyed by media (availability) involves […]

The two-edged sword of information availability

Humanized AI is defined by the data fed into it. No more; no less. There are usually other data categories that could bring added value to the application environment, either as objective data series for which output predictions and actionable alerts would be necessary, or as influencer data series that could add quality to those […]