Feature Engineering, Explained

And this is important during feature engineering as well. One common practice is to introduce a boolean feature indicating whether a given sample had a missing value in the given feature: it takes the value True if the value was missing and False if everything was in order. This lets the machine learning model know whether it should treat the given value as trustworthy or work around it (a short sketch of this is given at the end of this section).

Another common feature engineering method is bringing the data into a given interval. Why would we do it? The first reason is trivial: computations on a bounded range of numbers prevent some numerical inaccuracies and limit the computational power required. The second reason is that some machine learning algorithms simply handle data better when it is normalized.

There are several approaches to normalizing the data. In nature and in human society, many things are governed by the normal (Gaussian) distribution, which is why we introduce a normalization characteristic to that distribution. It is given by the following equation:

$$X' = \frac{X - \mu}{\sigma}$$

Here X' is our new feature. It is acquired by subtracting the mean value μ of the old feature from every sample X and then dividing by the standard deviation σ, which tells us how widely the values of the feature are spread. The result is centered around 0 with a standard deviation of 1; note that, unlike the min-max scaling below, this does not strictly confine the values to a fixed interval, although most of them will land within a few standard deviations of 0.

An alternative normalization can be done by subtracting the minimal value X_min from the feature and then dividing by its range, given as X_max − X_min:

$$X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$

This normalization maps the given feature into the interval [0, 1].

As we mentioned, different models require different normalization in order to work efficiently. Both variants are sketched in code below.
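To make the missing-value indicator concrete, here is a minimal sketch using pandas. The DataFrame, the "age" column, and the choice of mean imputation are all hypothetical, purely for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical example data: "age" has one missing value.
df = pd.DataFrame({"age": [34.0, np.nan, 52.0, 41.0]})

# Boolean indicator: True where the original value was missing.
df["age_was_missing"] = df["age"].isna()

# The original column can then be imputed (here: with the mean),
# while the indicator preserves the fact that the value was filled in.
df["age"] = df["age"].fillna(df["age"].mean())

print(df)
```

Keeping the indicator alongside the imputed column lets the model learn a separate signal for "this value was filled in", instead of mistaking imputed values for genuine measurements.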
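And here is a minimal NumPy sketch of the two normalizations above; the sample values are made up. In practice, scikit-learn's StandardScaler and MinMaxScaler implement the same formulas for whole datasets:

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Z-score normalization: subtract the mean, divide by the standard deviation."""
    return (x - x.mean()) / x.std()

def min_max_scale(x: np.ndarray) -> np.ndarray:
    """Min-max normalization: map the feature into the interval [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

feature = np.array([2.0, 4.0, 6.0, 8.0, 100.0])  # made-up feature values

print(standardize(feature))    # centered at 0, unit std (not strictly bounded)
print(min_max_scale(feature))  # strictly within [0, 1]
```

Running this on the made-up feature also illustrates the difference in behavior: the outlier 100.0 pushes the standardized values well outside [-1, 1], while min-max scaling squeezes everything into [0, 1] at the cost of compressing the smaller values.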
