The predictive maintenance technology applies machine learning to sensor signal data to predict when a machine failure may occur. The goal of this experience is to raise awareness of potential failure events to increase safety and reduce costs.
Machines are set up with multiple sensors that capture data on vibration, temperature, pressure and speed. The data model ingests all of this data, filters out the noise and pushes alerts to the end user.
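As a rough illustration of that flow, the sketch below shows how raw readings might be smoothed and turned into alerts. It assumes simple per-sensor thresholds and a rolling-median noise filter; the names (`SensorReading`, `raise_alerts`) and the threshold values are hypothetical, and the client's actual model evaluated combinations of sensor signals rather than single limits.

```python
from dataclasses import dataclass
from statistics import median
from typing import Iterable

@dataclass
class SensorReading:
    unit_id: str
    sensor: str        # e.g. "vibration", "temperature", "pressure", "speed"
    timestamp: float   # epoch seconds
    value: float

def smooth(readings: list[SensorReading], window: int = 5) -> list[SensorReading]:
    """Reduce noise with a rolling median over the last `window` readings."""
    smoothed = []
    for i, r in enumerate(readings):
        recent = [x.value for x in readings[max(0, i - window + 1): i + 1]]
        smoothed.append(SensorReading(r.unit_id, r.sensor, r.timestamp, median(recent)))
    return smoothed

# Hypothetical per-sensor limits, purely for illustration.
THRESHOLDS = {"vibration": 7.0, "temperature": 95.0, "pressure": 300.0, "speed": 1800.0}

def raise_alerts(readings: Iterable[SensorReading]) -> list[dict]:
    """Emit an alert payload for any smoothed reading above its threshold."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.sensor)
        if limit is not None and r.value > limit:
            alerts.append({
                "unit_id": r.unit_id,
                "sensor": r.sensor,
                "timestamp": r.timestamp,
                "value": r.value,
                "reason": f"{r.sensor} exceeded {limit}",
            })
    return alerts
```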
An Equipment Analyst's concern is to minimize machine failure and to equip their field crew with as much information as possible to address mechanical issues.
Our client had 42 Equipment Analysts who managed all the units within their region. I was fortunate to speak directly with Craig, a veteran Equipment Analyst, who was also working with data scientists to validate data model rules.
An Equipment Analyst's main goal is to ensure a booster is running at maximum efficiency by minimizing downtime and prioritizing planned and unplanned maintenance events.
The project kicked off at the client site with a design workshop. This workshop was critical not only for my own learning but also to validate the use case for the experience.
This wire chunked out the required information architecture for an alert detail view. It was used to capture user feedback as well as to drive discussions around flow and user tasks.
Filters were used to hint at IA that would fall under user permissions. Seasonal and geography-based filters were discussed for modeling downtime and natural disaster readiness.
A persistent alerts menu allows for easy access and reference throughout the experience.
Fidelity and content were then layered into the cards. This iteration led to discussions around the need for critical interactions such as prioritization, sorting, comparing and archiving.
Users can more readily sort, compare and digest alert priority. The archive tab holds past alerts for alert and work order reference.
Once a user selects an alert from the landing view, the system navigates them to an alert detail view. There, the user can investigate why the alert was created, review the sensor readings in detail and explore time series data visualizations. If an alert warrants further investigation, users can also assign work orders.
This initial version consolidated sensor signal data points and time series data. Upon further validation, I learned that users needed to focus on the time series charts independently of one another.
The user can filter sensor data to view all data points or only those the system defines as priority. Including all data points helped increase trust between the users and the system.
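As a sketch of how the detail view's data might be shaped, assuming the model tags each data point it used when raising an alert — the `AlertDetail`, `AlertDataPoint` and `is_priority` names are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AlertDataPoint:
    sensor: str
    timestamp: float
    value: float
    is_priority: bool  # set by the model when this point contributed to the alert

@dataclass
class AlertDetail:
    alert_id: str
    unit_id: str
    reason: str  # why the alert was created
    data_points: list[AlertDataPoint]

    def visible_points(self, show_all: bool) -> list[AlertDataPoint]:
        """Return every data point, or only those flagged as priority."""
        if show_all:
            return self.data_points
        return [p for p in self.data_points if p.is_priority]
```

Surfacing the non-priority points alongside the flagged ones is what let users cross-check the system's reasoning rather than take the alert at face value.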
Four of the five SMEs voiced concerns about trusting the system. My main contact, Craig, had over 20 years in the field, and his number one concern was the safety of his crew. He wanted to be able to verify that the system was flagging sensor combinations correctly, as he was not convinced the system was foolproof.
Users can view and compare all time series sensor data. They can filter by a specific sensor, zoom in on a specific timestamp for further investigation and adjust the date and time range independently per chart.
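A minimal sketch of that per-chart independence, assuming each chart holds its own filter and zoom state (the `Point`, `ChartFilter` and `apply_filter` names are hypothetical): zooming in on a timestamp is simply a narrower start/end range on one chart, leaving the other charts untouched.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Point:
    sensor: str
    timestamp: float  # epoch seconds
    value: float

@dataclass
class ChartFilter:
    """Filter state held independently by each time series chart."""
    sensor: Optional[str] = None   # None = show every sensor
    start: Optional[float] = None  # None = open-ended range
    end: Optional[float] = None

def apply_filter(points: list[Point], f: ChartFilter) -> list[Point]:
    """Return the points one chart should render, given its own filter/zoom state."""
    return [
        p for p in points
        if (f.sensor is None or p.sensor == f.sensor)
        and (f.start is None or p.timestamp >= f.start)
        and (f.end is None or p.timestamp <= f.end)
    ]
```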
Users can view past and current maintenance intervals.
The design allowed users to view work order information, materials used, total cost of repairs and other pertinent details. This level of detail could not be implemented due to integration issues between Maximo and the Predictive Maintenance platform (we could not pull that data).
Users can create a work order or update the status of an existing alert.
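A rough sketch of that alert-side lifecycle is below. The `AlertStatus` values and `WorkOrder` fields are assumptions for illustration only — in the actual project, detailed work order data lived in Maximo and could not be pulled into the platform.

```python
from dataclasses import dataclass, field
from enum import Enum
import time
import uuid

class AlertStatus(Enum):
    NEW = "new"
    INVESTIGATING = "investigating"
    WORK_ORDER_ASSIGNED = "work_order_assigned"
    ARCHIVED = "archived"

@dataclass
class WorkOrder:
    alert_id: str
    assignee: str
    description: str
    work_order_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: float = field(default_factory=time.time)

def update_alert_status(alert: dict, status: AlertStatus) -> None:
    """Move an existing alert to a new point in its lifecycle."""
    alert["status"] = status

def create_work_order(alert: dict, assignee: str, description: str) -> WorkOrder:
    """Create a work order from an alert and mark the alert accordingly."""
    order = WorkOrder(alert_id=alert["id"], assignee=assignee, description=description)
    update_alert_status(alert, AlertStatus.WORK_ORDER_ASSIGNED)
    return order
```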