3 Types of Feature Engineering For Using AI in Manufacturing
 

Feature Engineering for Manufacturing-Based MicroAI™ Use Cases


14 Oct 20



After the raw input channels have been selected for a manufacturing-based AI solution, the next step is feature engineering. Feature engineering is the process of applying domain knowledge to transform raw data into more useful channels for the AI engine. In other words, well-engineered channels place emphasis on predefined thresholds, helping the AI accurately identify abnormal behavior.

The following sections will cover different types of feature engineering and how they can be applied in a manufacturing setting. The topics covered will include: data decomposition, data contextualization, and data normalization.

Data Decomposition

Data Decomposition is the practice of breaking a signal down to measure a specific aspect of it. This can be applied in multiple ways within a manufacturing use case. For example, fault data is commonly present and logged in manufacturing environments, and the faults are usually registered categorically. Rather than feed the categorical data to MicroAI™ directly, the channel can be decomposed by fault type. So instead of a single channel whose values can be fault_type_1, fault_type_2, or fault_type_3, three binary channels can be created: is_fault_type_1, is_fault_type_2, and is_fault_type_3. Each of these new channels takes a value of 0 or 1 depending on whether that specific fault is present.
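The decomposition above can be sketched as a simple one-hot mapping. This is an illustrative example only, not MicroAI™'s actual implementation; the fault names and the `decompose_fault_channel` helper are hypothetical.

```python
# Hypothetical sketch: decomposing one categorical fault channel into
# binary per-fault channels. Assumes the set of fault types is known
# in advance, as is typical for logged manufacturing faults.

FAULT_TYPES = ["fault_type_1", "fault_type_2", "fault_type_3"]

def decompose_fault_channel(fault_value):
    """Map a single categorical fault reading to binary channels.

    Returns a dict with one is_<fault> key per known fault type,
    set to 1 when that fault is present and 0 otherwise.
    """
    return {f"is_{fault}": int(fault_value == fault) for fault in FAULT_TYPES}

# A reading of "fault_type_2" activates only the is_fault_type_2 channel.
channels = decompose_fault_channel("fault_type_2")
# channels == {"is_fault_type_1": 0, "is_fault_type_2": 1, "is_fault_type_3": 0}
```

Each binary channel can then be streamed to the AI engine alongside the other inputs, letting the engine learn per-fault patterns rather than treating the category label as a single opaque value.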


Data Contextualization

Data Contextualization refers to adding related information to data channels to make the data more meaningful. By creating additional related channels, patterns and breaks in patterns may become more evident to the AI engine. For example, if you are monitoring the movement of a robotic arm on an assembly line that performs the exact same task repeatedly until the end of the shift, a timing contextualization channel may aid the AI.

On top of the raw accelerometer and gyroscope values being used to monitor the robotic arm’s movement, an additional timing channel could be fed to the AI engine. This timing channel would increment every second during a robot’s cycle and reset to 0 when the robot finished a sequence. This would allow the AI to develop a relationship between the specific movements of the robotic arm and how far along the robotic arm should be for its sequence.
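A minimal sketch of such a timing channel is below. The `CycleTimer` class and its interface are hypothetical, assuming the controller can signal when a sequence completes.

```python
class CycleTimer:
    """Hypothetical timing-contextualization channel.

    Counts elapsed seconds within a robotic arm's cycle and resets
    to 0 once the sequence completes, as described above.
    """

    def __init__(self):
        self.elapsed = 0

    def tick(self, sequence_complete):
        """Advance one second; reset when the sequence finishes."""
        if sequence_complete:
            self.elapsed = 0
        else:
            self.elapsed += 1
        return self.elapsed

# Simulate six seconds of a cycle that completes at t == 4.
timer = CycleTimer()
values = [timer.tick(t == 4) for t in range(6)]
# values == [1, 2, 3, 4, 0, 1]
```

Feeding `values` as an extra channel alongside the accelerometer and gyroscope readings gives the AI an explicit notion of where the arm should be within its sequence.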

Data Normalization

The final type of feature engineering to be discussed is data normalization. This is the process of rescaling data to a shared range, typically 0 to 1. These shared ranges allow the AI to better understand the relationships between grouped channels. Because MicroAI™ can perform its training on the edge with a live stream of data, normalization can be tricky for some data channels. For example, any data channel that could increase or decrease indefinitely is a bad candidate for normalization. Conversely, any continuous data channel with known bounds is a good candidate.

In the end, the goal of these feature engineering techniques is to make patterns clearer to the AI engine. However, the results of any given input transformation cannot be known in advance. The only way to determine its usefulness is through a repeated process of experimentation and validation.