Humanized AI is defined by the data input into it. No more; no less. There are usually other data categories that could add value to the application environment, either as objective data series, for which output predictions and actionable alerts are needed, or as influencer data series that could improve the quality of those objective feeds. The user's selected objective and influencer data define the application's environment.
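The objective/influencer split above can be sketched as a simple data structure. This is a minimal illustration, not IntualityAI's actual API; the class and series names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    # Objective series: the feeds we want predictions and alerts for.
    objective: dict[str, list[float]] = field(default_factory=dict)
    # Influencer series: feeds that may add quality to the objectives.
    influencer: dict[str, list[float]] = field(default_factory=dict)

    def add_objective(self, name: str, series: list[float]) -> None:
        self.objective[name] = series

    def add_influencer(self, name: str, series: list[float]) -> None:
        self.influencer[name] = series

# The user freely selects both kinds of feeds; values here are made up.
env = Environment()
env.add_objective("ca_covid_deaths", [51.0, 54.2, 49.8])
env.add_influencer("sp500_close", [3913.1, 3940.6, 3915.5])
```

The point of the shape is that the user, not the developer, decides which feeds belong in each category; the system's job is only to weigh them.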

In the equity markets, for example, there are many sources of real-time information about corporate status that can be downloaded for analysis. But we know there are other undisclosed, ongoing activities that could affect a stock's price. We would love to get our hands on these insider transactions and have them made immediately available to the public market, so that a more accurate stock price could be determined.

The chart shows IntualityAI's 60-day prediction of COVID-19 death rates in California, made on March 15, 2021. The system made predictions of the equity markets for the same period and included them to measure their possible influence on the COVID-19 predictions. As shown in the chart, the equity data influenced the death-rate prediction in a range of -0.8% to 2.4%. This is a case where that kind of relevance was unknown until the data was made available to the COVID-19 predictions. In hindsight, one could conclude:

1. The predicted rise in COVID-19 deaths in California could drive increased investment in the equity markets, à la the third round of government stimulus funneling cash into those markets, and

2. The increase in equity investments could prompt an optimistic but premature 'opening up' that could, in turn, increase the death rate.
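A crude way to quantify this kind of cross-series influence is a correlation between the two series. This is only a sketch of the general idea, assuming made-up numbers; it is not IntualityAI's method, which is not described here.

```python
from statistics import mean, stdev

def pearson(xs: list[float], ys: list[float]) -> float:
    """Sample Pearson correlation: one crude 'degree of relevance' measure."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical aligned series: deaths falling while equities rise.
deaths = [60.0, 58.0, 55.0, 57.0, 52.0, 50.0]
equity = [100.0, 102.0, 104.0, 103.0, 107.0, 108.0]
r = pearson(deaths, equity)  # negative here: the series move oppositely
```

Correlation says nothing about causal direction, which is exactly why both feedback loops in the list above remain plausible; a system that weighs influencer data has to tolerate that ambiguity rather than demand a proven mechanism up front.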

So, humanized AI must give the user the freedom to add inputs regardless of their perceived influence on future performance. The technology must be given the responsibility of determining the degree of relevance of seemingly related data. This philosophical shift toward increased data availability runs contrary to AI developers' current reluctance to relinquish that 'control'. We are afraid of letting AI know too much.

by Grant Renier, engineering, mathematics, behavioral science, economics