A neural network is a machine learning algorithm that can be used for both classification and regression problems. Neural networks are supervised algorithms, that is, they require labeled data to train.
In this shot, we will implement a neural network for classification using the scikit-learn toolkit. We will use the built-in digits dataset from the scikit-learn library and split it into train and test sets. The model will be trained on the training data, and we will use the test data to evaluate it.
We import the dataset from sklearn's built-in sample datasets. We will use the train_test_split function to split the data into train and test samples, and the accuracy_score and confusion_matrix metrics from the sklearn library to evaluate the results. Then, we will use the ready-made neural network model from the sklearn library, MLPClassifier.
The imports will look like this:
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score
Loading the dataset
We will load the dataset from the sklearn library into a local variable. All the data is now stored in the dataset variable. Let’s look at the code given below:
dataset = load_digits()
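If you are curious about what was just loaded, the optional sketch below (not part of the original steps) inspects the data and target attributes exposed by load_digits:

# Optional sketch: inspect the loaded digits dataset
print(dataset.data.shape)    # (1797, 64) -- each 8x8 image is flattened to 64 features
print(dataset.target.shape)  # (1797,) -- one label (0 to 9) per image
print(dataset.target[:10])   # the first ten labels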
Train test split
We will split the dataset into train and test sets using the sklearn library. We will use an 80-20 split, in which 80 percent of the data will be training data and 20 percent will be test data. Let’s look at the code given below:
x_train, x_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.20, random_state=4)
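As a quick sanity check (optional, not part of the original steps), we can print the shapes of the resulting arrays to confirm the 80-20 ratio:

# Optional sanity check: confirm the 80-20 split
print(x_train.shape, x_test.shape)   # roughly 80% and 20% of the 1797 samples
print(y_train.shape, y_test.shape)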
Neural Network classifier
We will make the neural network classifier and call it NN. We will use all the default parameters. More details on the parameters can be found in the scikit-learn documentation.
NN = MLPClassifier()
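If you want to move away from the defaults, the constructor accepts parameters such as hidden_layer_sizes, max_iter, and random_state. The sketch below is only an illustrative alternative configuration (the variable name NN_custom is hypothetical and is not used in the rest of this shot):

# Illustrative sketch of an explicit configuration (not used in this shot)
NN_custom = MLPClassifier(hidden_layer_sizes=(100,),  # one hidden layer with 100 neurons
                          max_iter=500,               # more iterations to help convergence
                          random_state=4)             # make results reproducible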
Training the model
We will train the model on the training data and the training labels.
NN.fit(x_train, y_train)
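After fitting, the classifier exposes attributes such as n_iter_ and loss_, which report how many iterations were run and the final training loss. The check below is optional and only meant to verify that training behaved sensibly:

# Optional: inspect how training went
print("Iterations run:", NN.n_iter_)
print("Final training loss:", NN.loss_)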
Testing the model
We will use the test data to get the model's predictions, and then use the test labels to evaluate them.
y_pred = NN.predict(x_test)
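Before computing the metrics, we can eyeball the first few predictions against the true test labels (an optional sketch):

# Optional: compare a few predictions with the true labels
print("Predicted:", y_pred[:10])
print("Actual:   ", y_test[:10])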
Evaluating the model
We will use the accuracy_score function to get the accuracy of the model and the confusion_matrix function to find the confusion matrix.
accuracy = accuracy_score(y_test,y_pred)*100
confusion_mat = confusion_matrix(y_test,y_pred)
We multiply the accuracy by 100 to express it as a percentage.
Finally, we will print the results.
print("Accuracy for Neural Network is:",accuracy)
print("Confusion Matrix")
print(confusion_mat)
The complete code for all the steps is given below:
# Step 1
# Importing the necessary libraries
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Step 2
# Loading the dataset
dataset = load_digits()

# Step 3
# Splitting the data into test and train
# 80 - 20 split
x_train, x_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.20, random_state=4)

# Step 4
# Making the Neural Network classifier
NN = MLPClassifier()

# Step 5
# Training the model on the training data and labels
NN.fit(x_train, y_train)

# Step 6
# Testing the model, i.e., predicting the labels of the test data
y_pred = NN.predict(x_test)

# Step 7
# Evaluating the results of the model
accuracy = accuracy_score(y_test, y_pred) * 100
confusion_mat = confusion_matrix(y_test, y_pred)

# Step 8
# Printing the results
print("Accuracy for Neural Network is:", accuracy)
print("Confusion Matrix")
print(confusion_mat)
The accuracy for the neural network shown above is 98 percent. If we observe the confusion matrix, we will notice that most of its values lie along the main diagonal, which hints that the accuracy is not skewed toward any particular class. We can improve the accuracy further by fine-tuning the model using the parameters mentioned above.
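One common way to do this fine-tuning is a grid search over candidate parameter values with cross-validation. The sketch below uses scikit-learn's GridSearchCV; the parameter grid shown is only an example, not a recommendation:

from sklearn.model_selection import GridSearchCV

# Example parameter grid -- the values here are only illustrative
param_grid = {
    "hidden_layer_sizes": [(50,), (100,), (100, 50)],
    "alpha": [0.0001, 0.001],
    "max_iter": [500],
}
search = GridSearchCV(MLPClassifier(random_state=4), param_grid, cv=3)
search.fit(x_train, y_train)
print("Best parameters:", search.best_params_)
print("Best cross-validation accuracy:", search.best_score_)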
In practice, fully connected neural networks like this one are considered relatively simple models. For more complex tasks, such as image or sequence processing, CNNs and RNNs are used.