Thunderstruck: Disaster CNN visualization of AC power lines

NET Centre at VŠB is trying to detect partial discharge patterns from overhead power lines by analyzing power signals.

This Kaggle challenge was a fun one for any electrical power enthusiast.

Ideally, we would be able to detect the slowly increasing damage to the power lines before they suffer a power outage or start an electrical fire.

However, there are many miles of power lines. Also, damage to a power line isn’t immediately apparent: small damage from almost anything (trees, high wind, manufacturing flaws, etc.) can be the start of cascading discharge damage, which increases the likelihood of failure in the future.

It is a great goal.

If we can successfully estimate the lines that need repairs, we can reduce costs while maintaining the flow of electricity.

I mean money talks.

The tabular dataset is massive, with 800,000 points for each signal, and in total it comes to about 10 GB.
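For a sense of scale, you don’t have to pull all 10 GB at once. A minimal sketch, assuming the signals sit in a parquet file keyed by string signal ids (the file name here is hypothetical), reads just a few columns at a time:

```python
import pyarrow.parquet as pq

def read_signals(path, first_signal_id, n_cols=3):
    """Read n_cols signal columns (800,000 samples each) into a DataFrame."""
    # Assumption: columns are keyed by string signal ids ("0", "1", "2", ...).
    cols = [str(i) for i in range(first_signal_id, first_signal_id + n_cols)]
    return pq.read_table(path, columns=cols).to_pandas()

# Hypothetical file name; pull only the first three signals, not the whole ~10 GB table.
signals = read_signals("train.parquet", first_signal_id=0)
print(signals.shape)   # expect (800000, 3)
```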

The bloated set wasn’t what I was looking for, having just come off the Microsoft Malware competition.

I had spent so much time just trying to get that dataset onto my computer and was taken aback at the possibility of doing so much data management again.

I decided to do something crazy and make it a CNN problem and not a tabular problem.

Changing a tabular problem into a CNN problem is particularly insane.

Why would you take perfectly good data and turn it into fuzzier and less accurate data?

It was faster. It was hackier. Also, it was more fun! Kinda… are you ready?

High Voltage

We should first look at some electrical engineering concepts, especially with power lines.

There are three cycles, or phases, associated with the alternating current we see here.

The data provided has a signal for each of these phases, and the phases depend on each other when predicting whether a line has problems.

Therefore, each piece of equipment has three different signals going through it, and there is an interdependence between them.

If one of these looks weird, it impacts the other two.

It is rather trivial to plot signal data onto a graph.

So we can look at one of these signals quickly.
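Plotting one is only a few lines of matplotlib (reusing the signals frame from the reading sketch above):

```python
import matplotlib.pyplot as plt

# Plot a single phase from the `signals` DataFrame read earlier.
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(signals["0"].values, linewidth=0.5)
ax.set_xlabel("sample index (800,000 per signal)")
ax.set_ylabel("amplitude")
ax.set_title("signal 0: a single phase of one measurement")
plt.show()
```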

A rule of thumb with many CNNs can help you decide whether an image representation will be useful for the model: if you can see a difference with your own eyes, then a CNN can do as good of a job or better, and faster.

So let’s see what one signal looks like:

Ok. That makes sense. Now, what would three of them look like?

Can something look super weird? I think so!

Ok, this might work! Granted, I can see the green phase 2 line seems to have a bit of spikiness in this image.

Maybe that is important, maybe not.

However, I imagine that would be what our CNN is trying to detect.

First, we need to convert all the signal files into images; I completed this and saved them to the hard drive (using this jupyter notebook).

It was much faster to go back and load these into memory than to generate them every time (there are lots of signals).

Then I went back and did the same thing for our test set.

I probably could have done a better job with this, but only needed it to run one time.
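The linked notebook has the real conversion; a rough sketch of the idea, with a hypothetical output folder and the assumption that signal ids run in consecutive groups of three per measurement, looks like this:

```python
import matplotlib
matplotlib.use("Agg")                     # render off-screen; we only want image files
import matplotlib.pyplot as plt
import pyarrow.parquet as pq
from pathlib import Path

OUT_DIR = Path("train_images")            # hypothetical output folder
OUT_DIR.mkdir(exist_ok=True)

def save_measurement_image(parquet_path, measurement_id):
    """Render the 3 phases of one measurement into a single PNG on disk."""
    # Assumption: signal ids run in consecutive groups of three per measurement.
    cols = [str(3 * measurement_id + phase) for phase in range(3)]
    phases = pq.read_table(parquet_path, columns=cols).to_pandas()
    fig, ax = plt.subplots(figsize=(8, 6), dpi=100)   # keep the image reasonably large
    for col in cols:
        ax.plot(phases[col].values, linewidth=0.3)
    ax.axis("off")                                    # the CNN only needs the curves
    fig.savefig(OUT_DIR / f"measurement_{measurement_id}.png", bbox_inches="tight")
    plt.close(fig)                                    # avoid leaking figures in a long loop

for m_id in range(10):                                # loop over all measurements in practice
    save_measurement_image("train.parquet", m_id)
```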

Now, image size matters. As you compress data, you lose some of the detail in the signal.

Think about losing detail as you zoom out of a picture.

So I kept a reasonably large image size; besides, we can make the images smaller later during training.

I feel that we can gloss over most of the CNN setup (if interested you can look here).

Nothing is unique except for the transforms.

Unlike cats and dogs, or medical imaging, I did not need to transform or change the images around.

The images we generate should always be similar and, more importantly, will always be in the same format in which our test images appear. In our test set, the images will always be the same size. This is different from other image models, where the test set could include different animals, different angles, and overall different-looking subjects. In that way, transforms would help other models, but not this one.
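The post’s notebook has the real setup; as a rough stand-in, a plain PyTorch/torchvision pipeline with nothing but a resize and a tensor conversion (the folder layout and backbone are assumptions, not necessarily what the post used) would look like:

```python
import torch
from torchvision import datasets, transforms, models

# No flips, rotations, or colour jitter: the generated plots always look the same,
# so the only "transform" is shrinking the large saved images to a trainable size.
tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: train_images/fault/... and train_images/no_fault/...
train_ds = datasets.ImageFolder("train_images", transform=tfms)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet34(num_classes=2)    # an assumed backbone
```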

As we start training, we see the training metrics improving! Good! Validation is getting better, as is the Matthews coefficient! Good! Great, let’s try this and look at some of these.
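For reference, the competition metric, the Matthews correlation coefficient, is a single scikit-learn call; note that a constant all-zeros prediction collapses to 0, while a model anti-correlated with the labels goes negative:

```python
from sklearn.metrics import matthews_corrcoef

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]          # roughly the 10:1 class balance
print(matthews_corrcoef(y_true, y_true))          # 1.0: perfect predictions
print(matthews_corrcoef(y_true, [0] * 10))        # 0.0: the all-zeros baseline
print(matthews_corrcoef(y_true, [1] + [0] * 9))   # about -0.11: worse than all zeros
```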

One thing I noticed is where the attention falls on items with correct predictions. After a couple of epochs, we can look at a heatmap of what the model views as important, both for the items it predicted correctly and for the ones it got wrong.
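The post doesn’t say which attribution method produced these heatmaps; a Grad-CAM-style sketch over a torchvision ResNet (both the method and the backbone are assumptions here) is one way to generate them:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet34(num_classes=2)   # assumed backbone; load real weights in practice
model.eval()

def gradcam_heatmap(img_tensor, class_idx):
    """Heatmap (input H x W) of which image regions drive the score for class_idx."""
    acts = {}
    # Grab the feature maps coming out of the last convolutional block.
    handle = model.layer4.register_forward_hook(lambda m, i, o: acts.update(value=o))
    logits = model(img_tensor.unsqueeze(0))          # (1, 2)
    handle.remove()

    fmap = acts["value"]                             # (1, C, h, w)
    grads = torch.autograd.grad(logits[0, class_idx], fmap)[0]   # same shape as fmap
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel-wise importance
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))      # (1, 1, h, w)
    cam = cam / (cam.max() + 1e-8)
    # Upsample so the heatmap can be overlaid on the original plot image.
    return F.interpolate(cam, size=img_tensor.shape[1:], mode="bilinear",
                         align_corners=False)[0, 0]

# Random image just to show the call; use a real plot image in practice.
demo = gradcam_heatmap(torch.rand(3, 224, 224), class_idx=1)
```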

For the signals that it gets correct, the highlighted region seems to float over the lower signal.

Ok, that’s not ideal.

There is nothing that would suggest this makes sense.

And hells bells, we only see items where there is not a fault.

However, the plots for incorrect predictions are all over the place.

Some of them are showing what I would expect.

This one shows wild fluctuations.

For example, we want to see something like the plot below, where the weights are spread across all 3 signals.

However, there are many more where they highlight areas that don’t make sense.

Like here: Why is it highlighting the area on the right? What during all the training made it treat that as the most significant area? That being said, this prediction should be false, but it really looks like the signal is broken… so perhaps it is the data.

Stiff Upper Lip

So I’m a little nervous at this point. Although the validation set claims we are doing well, the model doesn’t seem to be weighting the items adequately.

The predictions kinda go all over the place, and I’m not sure why.

So I decided to just move on.

Looking further into the predictions, they at least appear to make sense.

We see that we are getting some values in both 0 and 1, which is good.

The set is very biased, with almost a 10:1 discrepancy; however, we would also expect that.

It is relatively close to our dataset.

Let’s map all the test items back appropriately as each image now represents 3 different signals.
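A sketch of that fan-out, assuming one 0/1 prediction per image and the consecutive 3-signals-per-measurement id scheme from the conversion sketch earlier (check the competition’s sample submission for the exact column names):

```python
import pandas as pd

# Placeholder predictions; the real values come from the CNN, one per test image.
image_preds = {0: 0, 1: 1, 2: 0}

rows = []
for measurement_id, pred in image_preds.items():
    for phase in range(3):                              # fan one image out to its 3 signals
        rows.append({"signal_id": 3 * measurement_id + phase, "target": pred})

submission = pd.DataFrame(rows).sort_values("signal_id")
submission.to_csv("submission.csv", index=False)
print(submission["target"].mean())   # sanity check against the roughly 10:1 training balance
```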

Then we submit to Kaggle.

Let there be rock!

Well, that was disappointing.

It scored even less than all 0s.

At this point, it was time to move on to something else.

It’s a long way to the top and I had other projects to tackle.

Other CNNs in the competition got closer to 0.50-ish, and the winner used an LGBM.

Things to improve:

1. If you have signal data, you can certainly try to make it into an image. However, you lose a significant amount of information during the conversion. You also increase the overall size of the data that you are using.

2. This process of converting and combining gets very complicated and confusing quickly, and there are many ways to get things wrong here.

3. Combined with item 1, I am reasonably sure I lost lots of data and may have lost or mislabeled some items. It would be challenging for me to go back and double-check the original images to ensure we didn’t miscreate them.

4. It is also a difficult problem because things I thought were faults weren’t, according to the labeling.

I have seen this method successfully used on other projects.

Perhaps they had the resolution set up better, or more of the information in the images was distinguishable.

References:

Competition
Reading in data with python
CNN LSTM for signal
Power fault detection winner
My Github Code
