Industrial dashboards are full of unprocessed data

Written by Cameron Archer | 4/2/20 10:35 PM

Digital transformation initiatives are full of utopian visions of beautiful dashboards that help engineers, managers, and executives make great decisions every time. But how sure can we be that these dashboards are telling the truth?

Let's talk about science for a second.

Have you ever heard someone proclaim “It’s Science!” with great authority about a fact they are just so sure about? Often delivered with a condescending tone? Perhaps even with a Wikipedia quote or two to back it up? The “It’s Science” retort is usually deployed to dismiss the possibility that an idea contrary to generally accepted theory could hold any water.

But the truth about science is that, in many ways, it isn’t always true. Science is the process of observing natural or synthetic phenomena, repeatedly and hopefully with great care, to arrive at a conclusion about a hypothesis. And while the scientific method may lead us to develop theories or laws about the world around us, history tells us that “science” isn’t always up to snuff.

So the next time somebody dismisses you with “science”, remind them that the world used to be flat and at the center of the universe...

Yep, definitely round. | Photo by The New York Public Library.

The data we collect from Industrial IoT solutions is kind of like “science”: there’s truth hidden in the data, but only careful processing and testing can reveal what is actually going on in the physical world.

At WellAware, we refer to the problem of arriving at bad conclusions from good data as the Unprocessed Data problem. It differs from the other problems we’ve covered in our data problem blog series in a few key ways.

What causes unprocessed data?

Unprocessed data starts out as inherently good raw data: it’s high-quality, has few or no errors, is properly calibrated, and shows up on time. But that’s not enough. The problem of unprocessed data happens when aggregators and analyzers make incorrect interpretations from raw data, just as ancient scientists made incorrect interpretations about the Earth. As a result, unprocessed data leads to faulty dashboards with bad conclusions that lead to worse decisions. If data processors don’t build clear, truthful dashboards with relevant insights, leaders are likely to make wrong decisions that affect the operational health and safety of their people and assets.

Whereas other data issues, such as low-res data or latent data, prevent us from getting to the objective truth because the raw data is not high quality, unprocessed data is a problem of interpretation. High-quality field data has to be processed correctly for it to be useful.

There are several causes of the unprocessed data problem.

First, the office workers who aggregate and study field data may have biases that impact their ability to look at data objectively. They may make assumptions, knowingly or unknowingly, that shape how they construct the dashboards that guide their business leaders.

For example, an operations engineer may know that certain assets have a history of breaking down or malfunctioning. As a result, they analyze data with this historical bias, spinning their interpretation to skew towards a shutdown even when sensor data suggests otherwise.

Second, there are always outliers when it comes to data analysis. Knowing when to discount outliers and when to factor them in is a critical skill: an outlier might be good data, or it could be bad data. Part of good processing is understanding when to throw out data that shouldn’t be included in a dashboard.

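To make the screening step concrete, here is a minimal sketch of one common technique for flagging candidate outliers, an interquartile-range (IQR) fence. The function, sample values, and the conventional 1.5 multiplier are illustrative assumptions, not a description of WellAware’s processing:

```python
import statistics

def iqr_outliers(readings, k=1.5):
    """Flag readings outside Tukey's IQR fences (illustrative sketch)."""
    q1, _, q3 = statistics.quantiles(readings, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in readings if x < low or x > high]

# Hypothetical pipeline pressure readings (psi); 530 is the oddball.
pressures = [412, 415, 409, 418, 411, 530, 414]
print(iqr_outliers(pressures))  # -> [530]
```

The fence only tells you where to look; whether a flagged reading is discarded, kept, or escalated is still a processing decision.
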
Consider an example:

Two data aggregators, Adam and Ben, are looking at the same dataset. In this case, the data accurately represents all field assets (aka it isn’t bad data). Aggregator Adam has been with the organization for a long time and knows that pressure levels in pipeline C tend to increase at certain times of the year due to normal events. He correctly interprets a rising pressure level as a relative outlier that does not require additional evaluation.

Aggregator Ben, on the other hand, is new on the job. He sees the rising pressure level in pipeline C on his dashboard and immediately asks a field operator to take a look. Fortunately, nothing is wrong. However, that field operator is now behind on other important tasks. Aggregator Ben correctly identified a relative outlier, but he processed it incorrectly, leading to operational inefficiency that cascades into other areas of the organization.
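
Part of what makes Adam effective is knowledge that can be written down. As a hypothetical sketch (the months, bands, and thresholds below are invented for illustration), a dashboard could encode pipeline C’s seasonal expectations so that a normal seasonal rise doesn’t read as an emergency to a newcomer like Ben:

```python
# Hypothetical per-month expected pressure bands (psi) for pipeline C,
# derived from historical data. All names and numbers are illustrative.
SEASONAL_BANDS = {
    "JAN": (380, 430),
    "JUL": (430, 490),  # pressures normally run higher in summer
}
DEFAULT_BAND = (380, 430)

def classify_reading(month, pressure_psi):
    low, high = SEASONAL_BANDS.get(month, DEFAULT_BAND)
    if low <= pressure_psi <= high:
        return "normal"    # within the expected seasonal band
    return "escalate"      # outside the band: worth a field check

print(classify_reading("JUL", 470))  # normal -- Adam's seasonal rise
print(classify_reading("JAN", 470))  # escalate -- now dispatch someone
```

Once the seasonal expectation lives in the dashboard itself, the right call no longer depends on who happens to be on shift.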

Don't just strive for beautiful dashboards. Make sure they tell the truth. | Photo by Stephen Dawson.

Who is affected by unprocessed data?

The problem of unprocessed data primarily affects those who make business decisions. Many times, these individuals aren’t the ones digging into spreadsheets and databases. They receive analytical outputs and dashboards created by analysts or engineers.

If leaders are told the wrong “stories” about field data, they can’t make the best possible choices when it comes to managing field assets. The negative impact of unprocessed data compounds the higher up you go. Field workers may end up spending a lot of time dealing with non-issues based on bad guidance from corporate.

What do we need to consider?

Industrial organizations can minimize the effects of the unprocessed data problem by working to eliminate bias from data processing. There are two great ways to do this: teamwork and statistics.

Teamwork is great for many reasons, but one key benefit is that it allows data processors to check their biases against others. If you can pull five experts into a room to make a decision about data that might tell a number of different stories, they are much more likely to make the right decision than a single expert sitting at his computer, weighing the data against all of his historical biases.

Furthermore, businesses need to employ good statistics. When drawing conclusions from datasets, consider the strength of the evidence: How big is the sample size? How statistically significant is the result? Include statisticians on your data teams to ensure that you arrive at appropriate conclusions.

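As one sketch of the kind of check a statistician might insist on before a dashboard declares “pressures are rising” (the readings below are invented, and a real analysis would verify the test’s assumptions first), a simple two-sample t-test can help separate a genuine shift from noise:

```python
from scipy import stats  # assumes SciPy is available

# Hypothetical daily average pressures (psi): last quarter vs. this one.
baseline = [411, 414, 409, 415, 412, 410, 413, 411, 414, 412]
recent   = [413, 416, 411, 418, 415, 412, 417, 414, 416, 415]

# Two-sample t-test: is the apparent rise more than noise?
t_stat, p_value = stats.ttest_ind(recent, baseline)
print(f"p = {p_value:.3f}")

if p_value < 0.05:
    print("Shift looks statistically real; worth a closer look.")
else:
    # With only 10 points per group, absence of significance is not
    # proof of absence -- the sample-size question still matters.
    print("Could be noise; don't rewrite the dashboard story yet.")
```
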
By incorporating statistics into analyses, aggregators increase the likelihood of processing sensor data appropriately and building good dashboards. They can protect field workers from unnecessary work and ensure their leaders have full context around every data point.

One final thought: consider automating your processing. Advances in artificial intelligence and machine learning are making automated data processing practical. Machines are generally much better at suppressing bias and sticking to statistical best practices. There are tradeoffs, of course: while machines can be less biased, they often lack the human eye for nuance, which can still lead to bad processing and bad conclusions.

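As a rough sketch of what that automation can look like, here is an anomaly detector built on scikit-learn’s IsolationForest. The sensor values and settings are hypothetical, and this is one of many possible tools rather than a recommendation:

```python
from sklearn.ensemble import IsolationForest  # assumes scikit-learn

# Hypothetical (pressure psi, temperature F) sensor pairs; the last
# row is deliberately strange.
readings = [
    [412, 71], [415, 72], [409, 70], [414, 73],
    [411, 71], [413, 72], [410, 70], [530, 95],
]

# "contamination" is the share of points assumed anomalous -- a human
# judgment the automated pipeline inherits from whoever configures it.
model = IsolationForest(contamination=0.1, random_state=0)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

for row, label in zip(readings, labels):
    if label == -1:
        print("flag for review:", row)
```

Notice that the contamination setting is itself a human assumption: automation moves the bias, it doesn’t abolish it.
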
What’s at stake for your business?

To understand what’s at stake for your business, ask the following questions:

  • Who in your organization is processing data?
  • What biases might they have?
  • How sophisticated are your data analytics capabilities?
  • Could you incorporate machine learning into your data processing to reduce bias?
  • How talented and diverse is your data analytics team?

Many industrial companies put significant effort into optimizing their IoT networks and dashboards from a design standpoint, but they forget to invest in their data processing competency. As a result, they never realize the full power of their digital transformation.

At WellAware, we understand how to mitigate the unprocessed data problem. Ready to speak to one of our data experts?

Get a Demo of WellAware Today.