Ensemble methods in Python: averaging

Ensemble methods in machine learning combine the strengths of multiple models to achieve better performance than any single model. Averaging is a key ensemble technique: it combines the predictions of several models by taking the (optionally weighted) average of their outputs. This tends to reduce overfitting and makes the final model more robust, while remaining simple to implement and easy to reason about.
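
To make the idea concrete, here is a minimal sketch (using made-up probability values rather than a real dataset) of how averaging the outputs of two classifiers works:

import numpy as np

# Hypothetical predicted probabilities of the positive class from two models
probs_model_a = np.array([0.90, 0.20, 0.65])
probs_model_b = np.array([0.70, 0.40, 0.55])

# Simple (unweighted) averaging of the two sets of predictions
avg_probs = (probs_model_a + probs_model_b) / 2   # [0.8, 0.3, 0.6]

# Turn the averaged probabilities into class labels
predictions = (avg_probs >= 0.5).astype(int)      # [1, 0, 1]
print(avg_probs, predictions)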

Averaging Algorithm

How to implement averaging using Python

Let’s look at the steps required to implement the averaging algorithm in Python.

Import the libraries

The first step is to import the required libraries.

from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score

Load the dataset

The next step is to load the dataset. We’ll use the breast cancer dataset provided by the sklearn library. It contains 569 samples with 30 features each. The target variable is the diagnosis, where 0 represents malignant and 1 represents benign tumors. The train_test_split function divides the dataset into training and testing sets, holding out 20% of the samples for testing.

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, test_size=0.2, random_state=42)
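
If you want to double-check the dataset’s shape and how the labels are encoded, you can inspect the loaded Bunch object directly:

print(cancer.data.shape)    # (569, 30): 569 samples, 30 features
print(cancer.target_names)  # ['malignant' 'benign']: 0 is malignant, 1 is benign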

Define the base models

The next step is to choose the base models. The averaging ensemble combines the probability estimates of several models, so we need at least two reasonably diverse learners. We’ll use a random forest classifier and a gradient boosting classifier in this example.

rf_model = RandomForestClassifier(n_estimators=10, random_state=42)
gb_model = GradientBoostingClassifier(n_estimators=10, random_state=42)

Implement averaging

We’ll now create a VotingClassifier instance to aggregate predictions from the two base models and fit it on the training data. Setting voting to soft makes the classifier average the models’ predicted class probabilities (equally weighted by default) and select the class with the highest averaged probability.

averaging_model = VotingClassifier(estimators=[('rf', rf_model), ('gb', gb_model)], voting='soft')
averaging_model.fit(X_train, y_train)
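
As an optional sanity check (a minimal sketch that reuses the models and data splits defined in the previous steps), we can verify that soft voting with the default equal weights is the same as manually averaging the base models’ probability estimates:

import numpy as np

# Fit our own copies of the base models; VotingClassifier fits cloned
# estimators internally, but with the same data and random_state the
# results should match.
rf_model.fit(X_train, y_train)
gb_model.fit(X_train, y_train)

# Average the two models' predicted probabilities by hand
manual_avg = (rf_model.predict_proba(X_test) + gb_model.predict_proba(X_test)) / 2

# Should print True: the ensemble's probabilities equal the manual average
print(np.allclose(manual_avg, averaging_model.predict_proba(X_test)))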

Predict and evaluate

Now, we’ll make predictions on the test set and calculate the accuracy.

y_pred = averaging_model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: {:.2f}%".format(accuracy * 100))

Example

The following code shows how we can implement the averaging ensemble classifier in Python:

from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score

# Load and split the dataset
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, test_size=0.2, random_state=42)

# Define base models
rf_model = RandomForestClassifier(n_estimators=10, random_state=42)
gb_model = GradientBoostingClassifier(n_estimators=10, random_state=42)

# Create an ensemble using averaging
averaging_model = VotingClassifier(estimators=[('rf', rf_model), ('gb', gb_model)], voting='soft')
averaging_model.fit(X_train, y_train)

# Predict and evaluate
y_pred = averaging_model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: {:.2f}%".format(accuracy * 100))

Explanation

  • Lines 1–5: We import the required libraries.

  • Line 8: We load the breast cancer dataset from sklearn and store it in the cancer variable.

  • Line 9: This line splits the dataset into training and testing sets.

  • Lines 12–13: We define RandomForestClassifier and GradientBoostingClassifier as the base models for the VotingClassifier.

  • Lines 16–17: We create a VotingClassifier with the specified base models and fit it on the training data. We use the soft-voting method, which averages the models’ probability estimates; a weighted variant is sketched after this list.

  • Line 20: The trained model is used to make predictions on the test data.

  • Lines 21–22: The code calculates the accuracy of the model’s predictions by comparing them to the true labels in the test set. The accuracy is printed as a percentage.
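
The example above uses equal weights for both base models. If one model should contribute more to the average, VotingClassifier accepts an optional weights argument; the 2:1 ratio below is arbitrary and only for illustration:

# Weighted averaging: give the random forest twice the influence of the
# gradient boosting model (an arbitrary choice for illustration).
weighted_model = VotingClassifier(
    estimators=[('rf', rf_model), ('gb', gb_model)],
    voting='soft',
    weights=[2, 1]
)
weighted_model.fit(X_train, y_train)
weighted_accuracy = accuracy_score(y_test, weighted_model.predict(X_test))
print("Weighted accuracy: {:.2f}%".format(weighted_accuracy * 100))

In practice, the weights can be tuned, for example with cross-validation, to reflect how much each base model is trusted.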
