Data and AI in an analytical world
Artificial Intelligence (AI) has a long history, remaining in university labs for a number of years before coming of age. Today, connectivity and computing power have reached a turning point where it is viable to apply algorithms locally. This opens up a whole new set of applications, some of which will make existing practices obsolete because they offer superior solutions, and some of which won’t.
The expectations of what AI will bring are not always realistic. One of the better non-technical articles on the progression of the capabilities of AI is the seven deadly sins of AI predictions. Nevertheless, a tenacious opinion often expressed is that Business Intelligence (BI) is made obsolete by AI, or that AI will bring more value than BI ever will. Both of these opinions are frequently voiced by people not experienced in data science.
Applying insights derived from algorithms is a very different game from what we do with our dashboards in BI. My complexity of information projects series is an in-depth exploration of different kinds of information usage and, significantly, the conditions they set for governance.
The actual mechanics of working with data, and the governance they require, elude a lot of people. While everyone is entitled to their opinions, acting on misconceptions can result in economic harm to the company or organisation in which they work. This list challenges some of the assumptions, which to my mind have emerged from a lack of understanding of those mechanics:
- Data isn’t fairy dust. And it’s not oil either. Data is nothing but the recording of human initiated activities. To better understand the limitations this brings, this example about training computers to understand humanity provides some context.
- While there is useful information to be gathered from data, this can be likened to finding a diamond buried in the ground: more data doesn’t mean more insights, but it does mean more digging, and the diamonds become harder to find.
- If you know what you are looking for, and have collected the relevant data, your chances of finding diamonds increase. That’s about the only comparison to be made with oil companies: investing money to pinpoint where to look does yield greater returns.
- You cannot compare what you know to what you don’t know, or what you think will happen to what you think won’t. You need to gather data about what you don’t know to be able to infer behaviour or classification by applying algorithms. That’s not a game of chance, nor of gathering piles of random data. You need to be clear about what you don’t know and look for data to fill the gaps.
- If you know the causality, algorithms won’t bring any new insights into the relationships. The point of using statistics is to assess whether a hypothesized causal relationship is likely or not. Positive and negative results have equal value. Having more similar data won’t improve your statistics beyond a certain point, but having additional supporting or invalidating data will. But only in the hands of a trained statistician. The percentage of humankind with a knack for statistics hasn’t grown proportionally with the availability of AI.
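The hypothesis-testing point above can be illustrated with a simple permutation test, a minimal sketch of a generic statistical technique (the data and variable names below are made up for illustration, not taken from the article): given two groups of measurements, we ask how often a difference at least as large as the observed one would occur if group membership were random.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate the p-value for the observed difference in means,
    under the null hypothesis that group labels are interchangeable."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical measurements: process times before and after a change.
before = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9]
after = [11.2, 11.0, 11.5, 11.3, 10.9, 11.4]
p = permutation_test(before, after)
```

A small p-value says only that the difference is unlikely under the null hypothesis; it does not by itself establish causality, which is exactly why the trained statistician in the list above still matters.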
Prediction models use the same data that BI solutions use. However, the purpose of what you want to do with that information is different.
There is no hierarchy in these purposes. Prediction models aren’t more valuable than BI solutions, or vice versa. Both bring value if, and only if, an organisation is able to govern the application of the insights for their intended purpose. There is still a lot to be improved on here. The struggle with ‘digitization’ of companies revolves around the governance issues, not the technological implementation.
Reactions through Twitter @MartijntenNapel or e-mail