Availability of information determines how much truth we receive and perceive. Truth should not be just a matter of judgment; a factoid is either true or not true. But truth cannot be perceived until it is conveyed (available) and received (understood). This simple statement is actually a big controversy today, because truth conveyed by media (availability) involves a delivery system (manipulated selectivity) that intentionally biases our perception of truthiness. Humans cope with incomplete and misleading data all the time, but machine intelligence largely cannot. What can be done?
Too much data can overload both humans and computers, but too little data leaves us all uninformed. Do we have too little information, enough available information, or an overload? We are not enabled to know. Do computers and their AI-advanced iterations know? No! Computers are still programmed by humans with human intentionality, which adds biases to the data, which in turn manipulates and limits both computer and human awareness. Do humans know? No! Someone out there determines what information computers use, and that becomes what the mis-informed and under-informed learn, but it is not necessarily the Truth.
We are all inundated with information, most of it not relevant, but much of it directed at us with some force. Media is a major force, given its massive scale and repetitive access to our consciousness, but availability is not the only force. It is the direction and intent behind the data presented (or not presented) that is the real motivational agent, one that can transform neutral data into actionable decisions and visible results. Such force is amplified by the extent to which the neutrality of data becomes biased in an intended direction. Suddenly bias becomes a force.
Humans have so far been trained to “watch” at most two data series at a time, a vast oversimplification of reality. Much more than the convergence of two variables in a graph becomes too challenging for most human observers. Hence the common test for future recessions: an inversion of the trendlines of long-term bond yields and short-term interest rates. Focusing on such a “simple” solution biases the outcome and the ultimate solution toward just two historically effective sources of information, which usually excludes consideration of third and fourth key variables that might well countermand a decision reached by the first two.
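To make the two-variable example concrete: the classic inversion test reduces to a single comparison between two numbers. The sketch below is purely illustrative, using made-up yield figures rather than real market data or any actual forecasting model:

```python
# Illustrative sketch of the two-variable recession signal described above.
# The yield values are hypothetical, not real market data.

def yield_curve_inverted(long_term_yield: float, short_term_yield: float) -> bool:
    """Classic two-variable test: the curve is 'inverted' when
    short-term rates exceed long-term bond yields."""
    return short_term_yield > long_term_yield

# Hypothetical readings (percent):
print(yield_curve_inverted(3.8, 4.5))  # short rate above long yield -> True
print(yield_curve_inverted(4.5, 3.8))  # normal upward-sloping curve -> False
```

The entire decision rests on one subtraction between two series, which is exactly the oversimplification the paragraph above describes: a third variable (say, employment or credit conditions) has no way to enter the verdict.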
Humans have learned to act despite gaps in data availability, but computers cannot. At least not yet. Despite the human brain's limited capacity to focus on more than a narrow slice of information at a time, humans are actually accustomed to making even weighty decisions without complete information. That is not entirely a matter of human ignorance; the gaps can be overcome with a certain level of human wisdom gained from repeated experience, focusing on the data most relevant to each specific challenge.
Human wisdom resides in widely scattered information and experience, bound together by human intuition. Intuition is hard to teach to a machine, but processing widely available information sources can be the heart of a machine's potential performance advantage. That is why we need AI help. The computer can make the entire world its source of available information, and can multiply its focus across the multiple types of data calculated to be most relevant.
Machine intelligence finds the prospect of failure challenging. Programming a computer, let alone an AI, to make creative decisions in the absence of key knowledge makes the black-box syndrome even more obscure. It also increases the risk of legal liability for inadvertently or imprecisely having programmed a machine to fail. Permitting ever-increasing AI participation in real-world decisions without human intervention creates a new environment in which blame for failures becomes hard to assign.
Humanized AI trains both humans and machines to focus on relevant data from the widest available data sources. IntualityAI trains its systems to monitor all the available data streams, even where deep-data interrelationships have not previously been recognized as correlated. This in turn gives humans access to a disciplined set of multiple data streams and more available data than they could address or absorb by themselves. The point is that both humans and AI have thus far been deprived of the bigger picture: both have sometimes been walking in the dark, with insufficient truth, merely speculating that the available information is sufficient to explain the universe. Humanized AI seeks to maximize available hope.
by Michael Hentschel, Yale and Kellogg anthropologist, economist, venture capitalist