# Ensemble Learning In Artificial Intelligence

```python
    print('Datapoint:', datapoint)
    print('Predicted class:', predicted_class)
```

Visualize the test datapoints based on the classifier boundaries:

```python
# Visualize the datapoints
visualize_classifier(classifier, test_datapoints,
        [0]*len(test_datapoints), 'Test datapoints')

plt.show()
```

If you run the code with the `rf` flag, you will get the classifier output, along with output on your Terminal. For each datapoint, the classifier computes the probability of that point belonging to each of our three classes, and we pick the one with the highest confidence.

If you run the code with the `erf` flag, you will get a similar plot and Terminal output. As we can see, the outputs are consistent with our observations.
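The "pick the one with the highest confidence" step can be sketched with plain NumPy. The probability rows below are hypothetical stand-ins for what a call like `classifier.predict_proba(test_datapoints)` would return:

```python
import numpy as np

# Hypothetical per-class probabilities for three test datapoints, standing
# in for the output of a call like classifier.predict_proba(test_datapoints)
probabilities = np.array([
    [0.81, 0.08, 0.11],
    [0.05, 0.90, 0.05],
    [0.20, 0.30, 0.50],
])

# For each datapoint, pick the class with the highest confidence
predicted_classes = np.argmax(probabilities, axis=1)
print(predicted_classes)  # [0 1 2]
```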

## Dealing with class imbalance

A classifier is only as good as the data that's used for training.

One of the most common problems we face in the real world is the quality of data.

For a classifier to perform well, it needs to see an equal number of points for each class.

But when we collect data in the real world, it’s not always possible to ensure that each class has the exact same number of data points.

If one class has 10 times the number of data points of the other class, then the classifier tends to get biased towards the first class.

Hence we need to make sure that we account for this imbalance algorithmically.

Let’s see how to do that.
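Before diving into the code, it helps to see what accounting for imbalance algorithmically means. The sketch below computes by hand the weighting scheme that scikit-learn applies for `class_weight='balanced'` (weights inversely proportional to class frequencies), on hypothetical labels with a 10:1 imbalance:

```python
import numpy as np

# Hypothetical labels: class 0 outnumbers class 1 ten to one
y = np.array([0] * 100 + [1] * 10)

n_samples = len(y)
classes, counts = np.unique(y, return_counts=True)

# The 'balanced' scheme weights each class by
# n_samples / (n_classes * count_per_class),
# so the minority class receives the larger weight
weights = n_samples / (len(classes) * counts)
print({int(c): float(w) for c, w in zip(classes, weights)})  # {0: 0.55, 1: 5.5}
```

A misclassified minority-class point now costs ten times as much during training, which counteracts the classifier's tendency to ignore the smaller class.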

Create a new Python file and import the following packages:

```python
import sys

import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import cross_validation
from sklearn.metrics import classification_report

from utilities import visualize_classifier
```

We will use the data in the file `data_imbalance.txt` for our analysis.

Each line in this file contains comma-separated values.

The first two values correspond to the input data and the last value corresponds to the target label.

We have two classes in this dataset.
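The slicing used to split features from labels can be sketched on a couple of hypothetical rows mirroring this file layout:

```python
import numpy as np

# Two hypothetical comma-separated rows: two input values plus a label,
# mirroring the layout described for data_imbalance.txt
rows = np.array([[2.1, 3.5, 0.0],
                 [1.0, 0.5, 1.0]])

# Everything except the last column is input data;
# the last column is the target label
X, y = rows[:, :-1], rows[:, -1]
print(X.shape, y.shape)  # (2, 2) (2,)
```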

Let's load the data from that file:

```python
# Load input data
input_file = 'data_imbalance.txt'
data = np.loadtxt(input_file, delimiter=',')
X, y = data[:, :-1], data[:, -1]
```

Separate the input data into two classes:

```python
# Separate input data into two classes based on labels
class_0 = np.array(X[y==0])
class_1 = np.array(X[y==1])
```

Visualize the input data using a scatter plot:

```python
# Visualize input data
plt.figure()
plt.scatter(class_0[:, 0], class_0[:, 1], s=75, facecolors='black',
        edgecolors='black', linewidth=1, marker='x')
plt.scatter(class_1[:, 0], class_1[:, 1], s=75, facecolors='white',
        edgecolors='black', linewidth=1, marker='o')
plt.title('Input data')
```

Split the data into training and testing datasets:

```python
# Split data into training and testing datasets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
        X, y, test_size=0.25, random_state=5)
```

Next, we define the parameters for the Extremely Random Forests classifier.

Note that there is a command-line argument called `balance` that controls whether or not we want to algorithmically account for class imbalance. If so, we add a parameter called `class_weight` that tells the classifier to balance the weights so that they are inversely proportional to the number of data points in each class:

```python
# Extremely Random Forests classifier
params = {'n_estimators': 100, 'max_depth': 4, 'random_state': 0}
if len(sys.argv) > 1:
    if sys.argv[1] == 'balance':
        params = {'n_estimators': 100, 'max_depth': 4,
                'random_state': 0, 'class_weight': 'balanced'}
    else:
        raise TypeError("Invalid input argument; should be 'balance'")
```

Build, train, and visualize the classifier using the training data:

```python
classifier = ExtraTreesClassifier(**params)
classifier.fit(X_train, y_train)
visualize_classifier(classifier, X_train, y_train, 'Training dataset')
```

Predict the output for the test dataset and visualize it:

```python
y_test_pred = classifier.predict(X_test)
visualize_classifier(classifier, X_test, y_test, 'Test dataset')
```

Compute the performance of the classifier and print the classification report:

```python
# Evaluate classifier performance
class_names = ['Class-0', 'Class-1']
print("\n" + "#"*40)
print("\nClassifier performance on training dataset\n")
print(classification_report(y_train, classifier.predict(X_train),
        target_names=class_names))
print("#"*40 + "\n")

print("#"*40)
print("\nClassifier performance on test dataset\n")
print(classification_report(y_test, y_test_pred, target_names=class_names))
print("#"*40 + "\n")

plt.show()
```

The full code is given in the file `class_imbalance.py`.

If you run the code, you will see a few screenshots. The first screenshot shows the input data, and the second shows the classifier boundary for the test dataset. The second screenshot indicates that the boundary was not able to capture the actual boundary between the two classes; the black patch near the top represents the boundary.

You should see the following output on your Terminal. There is a warning because the values in the first row of the report are 0, which leads to a divide-by-zero when the f1-score is computed (scikit-learn reports this as an undefined-metric warning rather than raising an exception).
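A minimal numeric sketch of why the f1-score breaks down for a class with no predicted samples (the counts below are hypothetical):

```python
# Hypothetical counts for a class the classifier never predicts:
# precision is 0/0, which is what triggers the warning
true_positives = 0
predicted_positives = 0   # no samples were predicted as Class-0
actual_positives = 25

# Guard the divisions by hand, substituting 0.0 the way scikit-learn does
precision = (true_positives / predicted_positives
             if predicted_positives else 0.0)
recall = (true_positives / actual_positives
          if actual_positives else 0.0)
f1 = (2 * precision * recall / (precision + recall)
      if (precision + recall) else 0.0)
print(precision, recall, f1)  # 0.0 0.0 0.0
```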

Run the code on the terminal using the ignore flag so that you do not see the divide-by-zero warning:

```sh
$ python3 -W ignore class_imbalance.py
```

Now, if you want to account for class imbalance, run it with the `balance` flag:

```sh
$ python3 class_imbalance.py balance
```

The classifier output now captures both classes, and you should see the corresponding report on your Terminal. By accounting for the class imbalance, we were able to classify the datapoints in `Class-0` with non-zero accuracy.

## Finding optimal training parameters using a grid search

When you are working with classifiers, you do not always know what the best parameters are.

You cannot check every possible combination manually; the search space is too large.

This is where grid search becomes useful.

Grid search allows us to specify a range of values and the classifier will automatically run various configurations to figure out the best combination of parameters.
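The idea can be sketched as nested loops over candidate values. The `evaluate` function below is a hypothetical stand-in for training a classifier with the given parameters and returning its cross-validated score:

```python
from itertools import product

# Hypothetical stand-in for training a classifier with the given
# parameters and returning its cross-validated score
def evaluate(n_estimators, max_depth):
    return 0.7 + 0.001 * n_estimators - 0.01 * abs(max_depth - 4)

param_grid = {'n_estimators': [25, 50, 100], 'max_depth': [2, 4, 7]}

# Try every combination and keep the best-scoring one
best_score, best_params = float('-inf'), None
for n_est, depth in product(param_grid['n_estimators'],
                            param_grid['max_depth']):
    score = evaluate(n_est, depth)
    if score > best_score:
        best_score = score
        best_params = {'n_estimators': n_est, 'max_depth': depth}

print(best_params)  # {'n_estimators': 100, 'max_depth': 4}
```

Grid search libraries automate exactly this loop, adding cross-validation and parallelism on top.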

Let’s see how to do it.

Create a new Python file and import the following packages:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn import cross_validation, grid_search
from sklearn.ensemble import ExtraTreesClassifier

from utilities import visualize_classifier
```

We will use the data available in `data_random_forests.txt` for analysis:

```python
# Load input data
input_file = 'data_random_forests.txt'
data = np.loadtxt(input_file, delimiter=',')
X, y = data[:, :-1], data[:, -1]
```

Separate the data into three classes:

```python
# Separate input data into three classes based on labels
class_0 = np.array(X[y==0])
class_1 = np.array(X[y==1])
class_2 = np.array(X[y==2])
```

Split the data into training and testing datasets:

```python
# Split the data into training and testing datasets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
        X, y, test_size=0.25, random_state=5)
```

Specify the grid of parameters that you want the classifier to test.

Usually, we keep one parameter constant and vary the other, then swap the roles to figure out the best combination. In this case, we want to find the best values for `n_estimators` and `max_depth`.

Let's specify the parameter grid, holding one parameter at a fixed value while sweeping the other:

```python
# Define the parameter grid
parameter_grid = [
    {'n_estimators': [100], 'max_depth': [2, 4, 7, 12, 16]},
    {'max_depth': [4], 'n_estimators': [25, 50, 100, 250]}
]
```

Let's define the metrics that the classifier should use to find the best combination of parameters:

```python
metrics = ['precision_weighted', 'recall_weighted']
```

For each metric, we need to run the grid search, where we train the classifier for a particular combination of parameters:

```python
for metric in metrics:
    print("\n##### Searching optimal parameters for", metric)

    classifier = grid_search.GridSearchCV(
            ExtraTreesClassifier(random_state=0),
            parameter_grid, cv=5, scoring=metric)
    classifier.fit(X_train, y_train)
```

Print the score for each parameter combination:

```python
    print("\nGrid scores for the parameter grid:")
    for params, avg_score, _ in classifier.grid_scores_:
        print(params, '-->', round(avg_score, 3))

    print("\nBest parameters:", classifier.best_params_)
```

Print the performance report:

```python
    y_pred = classifier.predict(X_test)
    print("\nPerformance report:\n")
```