All humans carry inherent cognitive biases, whether we like it or not. Understanding this is critical to understanding how, through poor design decisions, those same biases can end up present in AI-driven technology.
The human tendency to simplify and deviate systematically from the tenets of rationality – usually through mental shortcuts known as heuristics – often leads to suboptimal decisions. You have likely used heuristics multiple times before breakfast this morning, for example judging how likely something is by how easily examples spring to mind, or anchoring on the first number you saw.
While these mental shortcuts help our brains navigate the massive amounts of sensory data we encounter every day, it is important not to pass these biases on to the technology we create.
The strength of AI is its ability to process data without bias, enabling it to find correlations our minds never could. But if we restrict that capability by designing bias into the technology we implement – restricting inputs or limiting connection points – we are in danger of ending up exactly where we started.
Human operators develop knowledge and intuition around the processes and equipment they control. This allows rapid diagnosis of many problems, but it can also create blindness and bias toward problems that fall outside their personal experience.
To illustrate the importance of removing bias, here is an interesting case study from one of our clients.
Our client runs a refinery. Part of their process requires filtering a solution to remove impurities. The filters would run trouble-free for long periods, then suddenly block, sometimes rapidly, with no obvious or definable cause that anyone could identify.
Once blocked, the filters had to be cleared with jackhammers, an expensive process that also forced a costly shutdown period.
There were many theories as to what caused the blockages, but none of them could be proven. In addition, all of the theories concerned conditions in the immediate vicinity of the filter house.
AI was applied to data from across the entire facility to look for conditions that correlated with the start of a blockage. The time since the last maintenance of the crusher was found to correlate strongly with the blockages.
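The facility-wide search described above can be sketched as a simple correlation screen: take every available signal, correlate each one against a blockage-onset flag, and rank the results. This is an illustrative sketch only, with hypothetical signal names and synthetic data, not the client's actual model or data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000  # hourly samples (synthetic)

# Hypothetical facility signals; only the last one actually drives blockages here.
df = pd.DataFrame({
    "filter_dp": rng.normal(2.0, 0.3, n),             # filter differential pressure
    "slurry_ph": rng.normal(12.5, 0.2, n),            # pH near the filter house
    "hours_since_crusher_maint": np.arange(n) % 720,  # resets every 30 days
})

# Synthetic target: blockages become more likely as crusher maintenance ages.
blockage = (df["hours_since_crusher_maint"] / 720
            + rng.normal(0, 0.1, n)) > 0.8

# Correlate every signal with the blockage flag and rank by absolute strength.
corr = (df.corrwith(pd.Series(blockage.astype(float)))
          .abs()
          .sort_values(ascending=False))
print(corr)
```

In this toy setup the crusher-maintenance signal tops the ranking while the signals local to the filter house show near-zero correlation, mirroring how a whole-of-facility search can surface a cause no one was looking at.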
At the crusher, rocks are broken into smaller pieces to aid downstream processing. Sometimes multiple passes are required to break the rocks into small enough pieces: the rocks go through a sizing screen, where small pieces pass downstream and large pieces are recirculated back into the crusher.
When the crusher is due for maintenance it becomes less effective, so the rocks need to be recirculated more times to be crushed to the required small size.
This is what the AI identified as the root cause of the problem. Because of the recirculation loop, rocks are checked for size after they have been dosed with caustic (an alkali substance). So when rocks need multiple passes through the crusher, the inadvertent and unintended result is that they receive multiple doses of caustic.
This overdosing of caustic raised the pH further down the process, setting off a chain of events that ultimately caused the filters in a completely different building to block repeatedly!
Human operators were not looking at the crusher to resolve this issue, as it did not seem relevant to them. But once the AI had highlighted the correlation, it took them only a short time to understand why the problem was occurring.
This example makes it perfectly clear that when using AI we must be careful not to introduce our own biases by considering only the data we think is relevant. AI can process huge datasets, and we should let it do so, discovering correlations we had never previously considered and investigating them for ways to improve our processes.