Predictive Maintenance: Detect Faults from Sensors with CNN

An interesting approach with Python code and graphic representations

Marco Cerliani, Mar 30

In Machine Learning, the topic of Predictive Maintenance is becoming more popular over time.

The challenges are not easy and are very heterogeneous: it is useful to have good knowledge of the domain, or to be in touch with people who know how the underlying system works.

For these reasons, when a data scientist engages in this new field of battle, they have to follow a linear and rational approach, keeping in mind that the simplest solutions are often the best ones.

In this article, we will take a look at a classification problem.

We will apply a simple but very powerful model built with a CNN in Keras, and we will try to give a visual explanation of our results.

THE DATASET

I decided to take a dataset from the evergreen UCI repository (Condition monitoring of hydraulic systems).

The data set was experimentally obtained with a hydraulic test rig.

This test rig consists of a primary working and a secondary cooling-filtration circuit which are connected via the oil tank.

The system cyclically repeats constant load cycles (duration 60 seconds) and measures process values such as pressures, volume flows and temperatures while the condition of four hydraulic components (cooler, valve, pump and accumulator) is quantitatively varied.

We can imagine having a hydraulic pipe system which cyclically receives impulses due to, for example, the transition of a particular type of liquid in the pipeline.

This phenomenon lasts 60 seconds and was measured by different sensors at different sampling frequencies (units and rates as documented in the UCI dataset description):

Sensor     | Physical quantity   | Unit  | Sampling rate
PS1-PS6    | Pressure            | bar   | 100 Hz
EPS1       | Motor power         | W     | 100 Hz
FS1, FS2   | Volume flow         | l/min | 10 Hz
TS1-TS4    | Temperature         | °C    | 1 Hz
VS1        | Vibration           | mm/s  | 1 Hz
CE         | Cooling efficiency  | %     | 1 Hz
CP         | Cooling power       | kW    | 1 Hz
SE         | Efficiency factor   | %     | 1 Hz

Our purpose is to predict the condition of the four hydraulic components which compose the pipeline.

These target condition values are annotated as integer values (easy to encode) and tell us whether a particular component is close to failure for every cycle.
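Since the targets are integer class codes, a minimal sketch of the encoding step could look like the following (the label values here are illustrative, not the dataset's real annotations; `keras.utils.to_categorical` does the same job):

```python
import numpy as np

# illustrative integer condition codes for three cycles
y = np.array([0, 2, 1])

# one-hot encode for a 3-class softmax classifier:
# row i is the one-hot vector for y[i]
y_onehot = np.eye(3)[y]
print(y_onehot)
```

The one-hot matrix is what a `categorical_crossentropy` loss expects as target.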

READ THE DATA

The values measured by each sensor are available in a specific txt file, wherein each row represents a cycle in the form of a time series.

I decided to take into account the data coming from the temperature sensors (TS1, TS2, TS3, TS4), measured with a frequency of 1 Hz (60 observations for every single cycle).

    import pandas as pd

    # labels: one row per cycle with the condition of each component
    label = pd.read_csv('profile.txt', sep='\t', header=None)

    data = ['TS1.txt', 'TS2.txt', 'TS3.txt', 'TS4.txt']
    df = pd.DataFrame()

    # read and concat data
    for txt in data:
        read_df = pd.read_csv(txt, sep='\t', header=None)
        df = df.append(read_df)

    # scale data (zero mean, unit variance per column)
    def scale(df):
        return (df - df.mean(axis=0)) / df.std(axis=0)

    df = df.apply(scale)

For the first cycle we have these time series from the temperature sensors:

[Figure: Temperature series for cycle 1 from TS1, TS2, TS3, TS4]

THE MODEL

In order to capture interesting features and non-obvious correlations from the series at our disposal, we decided to adopt a 1D CNN.

This kind of model suits the analysis of sensor time sequences very well, and requires reshaping the data into short fixed-length segments.
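As a hedged sketch of that reshaping (the array sizes are illustrative and this step is not shown explicitly in the article), each sensor's (cycles x 60) matrix can be stacked into the (cycles, t_periods, n_sensors) shape that a Conv1D layer expects:

```python
import numpy as np

# illustrative shapes: 5 cycles, 60 time steps, 4 temperature sensors
n_cycles, t_periods, n_sensors = 5, 60, 4
sensor_frames = [np.random.rand(n_cycles, t_periods) for _ in range(n_sensors)]

# stack along a new last axis: each cycle becomes a (t_periods, n_sensors) segment
X = np.stack(sensor_frames, axis=-1)
print(X.shape)  # (5, 60, 4)
```

Each row of `X` is then one training sample for the network described below.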

Developing this workflow, I took inspiration from this post, which adopts a very useful approach.

I picked the same CNN described on the Keras website and refreshed the parameters.

The model was built to classify the status of the Cooler component, taking as input only the temperature time series in array format (t_periods x n_sensors for every single cycle).

    n_sensors, t_periods = 4, 60

    model = Sequential()
    model.add(Conv1D(100, 6, activation='relu', input_shape=(t_periods, n_sensors)))
    model.add(Conv1D(100, 6, activation='relu'))
    # pooling and dropout layers restored from the Keras sequence-classification
    # example the author references
    model.add(MaxPooling1D(3))
    model.add(Conv1D(160, 6, activation='relu'))
    model.add(Conv1D(160, 6, activation='relu'))
    model.add(GlobalAveragePooling1D())
    model.add(Dropout(0.5))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

    BATCH_SIZE, EPOCHS = 16, 10
    history = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
                        validation_split=0.2, verbose=1)

In this case, with only 10 epochs, we are able to achieve incredible results!

    Train on 1411 samples, validate on 353 samples
    Epoch 1/10
    1411/1411 [==============================] - 2s 2ms/step - loss: 0.2581 - acc: 0.9391 - val_loss: 0.0867 - val_acc: 0.9830
    Epoch 2/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.1111 - acc: 0.9731 - val_loss: 0.0686 - val_acc: 0.9830
    Epoch 3/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.0925 - acc: 0.9759 - val_loss: 0.0674 - val_acc: 0.9802
    Epoch 4/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.1093 - acc: 0.9731 - val_loss: 0.0769 - val_acc: 0.9830
    Epoch 5/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.1022 - acc: 0.9731 - val_loss: 0.0666 - val_acc: 0.9802
    Epoch 6/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.0947 - acc: 0.9773 - val_loss: 0.0792 - val_acc: 0.9830
    Epoch 7/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.0984 - acc: 0.9794 - val_loss: 0.0935 - val_acc: 0.9830
    Epoch 8/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.0976 - acc: 0.9738 - val_loss: 0.0756 - val_acc: 0.9802
    Epoch 9/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.0957 - acc: 0.9780 - val_loss: 0.0752 - val_acc: 0.9830
    Epoch 10/10
    1411/1411 [==============================] - 2s 1ms/step - loss: 0.1114 - acc: 0.9738 - val_loss: 0.0673 - val_acc: 0.9802

Making predictions on the test data, the model reaches an ACCURACY of 0.9909 (99.09%)!

                  precision    recall  f1-score   support

               0       0.99      1.00      1.00       151
               1       1.00      0.98      0.99       138
               2       0.99      1.00      0.99       152

    weighted avg       0.99      0.99      0.99       441

KPIs in each class are awesome! This result is particularly important for class 0 (component Cooler 'close to total failure'), because in this way we are able to detect and prevent possible faults in the system.

VISUALIZE THE RESULTS

If we want to have a general overview of the system status and see how good our model really is, it might be useful to have a graphic representation.

To reach this target, we reuse the CNN we built above as an encoder and extract clever features from the time series of every single cycle.

With Keras this is possible in a single line of code:

    # model.layers[-3] is the GlobalAveragePooling1D layer
    emb_model = Model(inputs=model.input, outputs=model.layers[-3].output)

The new model is an encoder which receives input data in the same format as the NN we used for the classification task (t_periods x n_sensors for every single cycle) and returns 'predictions' in the form of embeddings coming from the GlobalAveragePooling1D layer (a row of 160 embedding variables for every single cycle).

Computing the predictions with our encoder on the test data, applying a dimensionality-reduction technique (like PCA or t-SNE) and plotting the results, we can see this magic:

    tsne = TSNE(n_components=2, random_state=42, n_iter=300, perplexity=5)
    T = tsne.fit_transform(test_cycle_emb)

    fig, ax = plt.subplots(figsize=(16, 9))
    colors = {0: 'red', 1: 'blue', 2: 'yellow'}
    ax.scatter(T.T[0], T.T[1], c=[colors[i] for i in y_test])
    plt.show()

[Figure: t-SNE on cycle embeddings for test data]

WOOW!!! This graph tells the truth! Each dot represents a cycle in the test set, and its color is the target class of the Cooler condition.
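Since PCA is mentioned as an alternative reduction technique, here is a minimal PCA-via-SVD sketch; the embedding matrix is random, standing in for the real (n_cycles, 160) encoder output:

```python
import numpy as np

# random stand-in for the (n_cycles, 160) embedding matrix from the encoder
rng = np.random.default_rng(42)
emb = rng.normal(size=(30, 160))

# PCA via SVD: center the data, then project onto the top-2 right singular vectors
centered = emb - emb.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ Vt[:2].T
print(coords_2d.shape)  # (30, 2)
```

Unlike t-SNE, PCA is deterministic and linear, which makes it a cheap first check before the slower t-SNE run.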

It's possible to see how well defined the distinction among the target values of the Cooler component is.

This separation is a key indicator of the performance of our model.

SUMMARY

In this post we tried to solve a Predictive Maintenance problem, in the form of a classification task on time series with a CNN.

A strong model with impressive performance was developed.

We also tried to give a visual representation of the results.

It's important to underline the power of CNNs not only for prediction, but also as an instrument to detect hidden relations in the data.

Keep in touch: Linkedin.
