Our humanized AI acts strangely! Sometimes we can anticipate its next ‘alert’ about a future event; other times we cannot. And it has been decades since we were last able to deconstruct the reasoning and logic behind an alert it outputs. As with our own subconscious, we can no longer retrace the reasons for its actions. This holds not just for the investment markets, but for all of its applications to date, like sports, elections, health, opinion, random numbers and more.

Are we now uncomfortably staring into the face of a humanized AI? Is its Intuitive Rationality really simulating a kind of human intuition? Should we now start calling it Jim, Bob, Alice or Jane? Let’s deconstruct this!

Wikipedia says this kind of general intelligence is the “. . . hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can”. 

While it’s an easily stated and lofty goal, it is only the left side of the computer/human equation – a computer system that can appear to be smart and challenge human intelligence. The right side of the equation – the experiential dynamics of human behavior and intuitive decision-making that actually determine our reality – gets little mention in public presentation, discussion and debate. No doubt, sometime in the not-so-distant future, we will observe AI applications that seem to exhibit sentience, self-awareness and consciousness. However, we are convinced that such systems will generally be one step behind their human competitors in the dynamics (an important qualifier) of intuitive thought, creativity and the human ability to perceive new and unique possibilities – the right side of the equation. The qualifier is, ‘if it walks like a duck . . .’ It’s our opinion that this computer/human equation will always be a duck with one lame leg.

Daniel Kahneman, Nobel prize winner and the father of behavioral economics, and his longtime collaborator, Amos Tversky, explain how two systems divide our brain and constantly fight for control of our behavior and actions. The chart, above, shows their basic functions: intuition versus rationality. One is super fast and efficient; the other is slow and lazy, i.e., “To hell with it. I think what I’ve got is good enough!” Our human/computer equation maps nicely: rational thinking = computer (conscious) and intuitive thinking = human (subconscious). Humanized AI does the rational work – a glutton that doesn’t get tired – and attempts the intuitive work by simulating the twelve biases that we have explained in prior magazine issues. The system digests these ever-changing biases into a singular Intuition – a sense of the future – for all the data sources that it sees, not only after each new data event, but for 150 sequential future events. And this is where humanized AI has the edge in the human/computer competition: it’s faster, it consumes vast amounts of data, and it never sleeps. It may demonstrate intuition, but its quality will always lack the uniqueness of the human conceptual, creative experience.
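To make the idea concrete, here is a minimal sketch of what “digesting many bias signals into a singular Intuition, projected over 150 future events” could look like. The system’s actual internals are not described here; the bias values, the equal weights, and the weighted-average blending function are all illustrative assumptions, not the real method.

```python
import random

# Illustrative only: the article names neither the twelve biases nor how
# they are combined. Everything below is a hypothetical stand-in.
NUM_BIASES = 12
HORIZON = 150  # the article's "150 sequential future events"

def bias_signals(event):
    # Stand-in for the twelve ever-changing bias evaluations of a data event,
    # each scored in [-1, 1]. Seeded so each event's signals are reproducible.
    rng = random.Random(event)
    return [rng.uniform(-1.0, 1.0) for _ in range(NUM_BIASES)]

def intuition(signals, weights):
    # One possible way to digest many bias signals into a singular score:
    # a weighted average, which stays within [-1, 1].
    return sum(w * s for w, s in zip(weights, signals)) / sum(weights)

def project(event, weights, horizon=HORIZON):
    # Re-blend the intuition score for each of the next `horizon` events.
    return [intuition(bias_signals(event + step), weights)
            for step in range(1, horizon + 1)]

weights = [1.0] * NUM_BIASES  # equal weights, purely for illustration
forecast = project(event=0, weights=weights)
print(len(forecast))  # one intuition score per future event
```

The design choice worth noting is only structural: many shifting signals are collapsed into one forward-looking number, recomputed after every new event across the whole horizon.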

by Grant Renier, engineering, mathematics, behavioral science, economics