What is domain adaptation in zero-shot learning?

Domain adaptation refers to strategies or approaches intended to handle the domain shift between the testing (unseen classes) and training (seen classes) data. A domain shift takes place when the distribution of data in the training and testing phases differs. Because of this disparity, the model can find it difficult to generalize to classes it has not observed, which can result in a decline in performance.

Domain adaptation strategies

Domain adaptation strategies are methods to address the challenge of domain shift. These methods aim to make models more robust to changes in data distribution, improving their generalization performance. Here are some common domain adaptation methods:

Common strategies for domain adaptation

Feature-level adaptation

To reduce sensitivity to domain differences, this transforms the input features. For example (a short PCA-based sketch follows this list):

  • Domain-invariant feature learning: This includes learning representations that are domain-independent. A common approach is adversarial training, such as domain adversarial neural networks, in which a domain discriminator is added to the model, and the feature extractor is trained to confuse the discriminator.

  • Principal component analysis (PCA): This involves reducing the dimensionality of the feature space using PCA, which can aid in the removal of domain-specific variations.
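
As a rough illustration of the PCA-based approach, the following sketch (using scikit-learn; the variable names, shapes, and number of components are illustrative assumptions) fits a single PCA on the pooled source and target features so that both domains are projected into the same low-dimensional space:

import numpy as np
from sklearn.decomposition import PCA

# Synthetic source and target features with a deliberate distribution shift
rng = np.random.default_rng(0)
src_features = rng.normal(loc=0.0, scale=1.0, size=(500, 50))
tgt_features = rng.normal(loc=0.5, scale=1.2, size=(400, 50))

# Fit PCA on the pooled data so both domains share one projection
pca = PCA(n_components=10)
pca.fit(np.vstack([src_features, tgt_features]))

# Project both domains into the shared low-dimensional space
src_reduced = pca.transform(src_features)
tgt_reduced = pca.transform(tgt_features)
print(src_reduced.shape, tgt_reduced.shape)  # (500, 10) (400, 10)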

Instance-level adaptation

To enhance model performance across domains, this adjusts individual instances. For example (an instance-weighting sketch follows this list):

  • Instance weighting: Domain shift can be mitigated by assigning different weights to instances during training based on their relevance to the target domain.

  • Instance normalization: Aligning the distributions can be aided by normalizing instances to have similar statistics across domains.
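
One simple way to realize instance weighting (a sketch under illustrative assumptions, not the only formulation) is to train a domain classifier that separates source from target samples and reuse its probabilities as importance weights for the source instances. The scikit-learn snippet below uses hypothetical data and variable names:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
src_features = rng.normal(0.0, 1.0, size=(500, 20))
tgt_features = rng.normal(0.4, 1.0, size=(400, 20))
src_labels = rng.integers(0, 5, size=500)

# Train a domain classifier: 0 = source, 1 = target
domain_X = np.vstack([src_features, tgt_features])
domain_y = np.concatenate([np.zeros(len(src_features)), np.ones(len(tgt_features))])
domain_clf = LogisticRegression(max_iter=1000).fit(domain_X, domain_y)

# Importance weight ~ p(target | x) / p(source | x) for each source sample
probs = domain_clf.predict_proba(src_features)
weights = probs[:, 1] / np.clip(probs[:, 0], 1e-6, None)

# Use the weights when training the task classifier on the source data
task_clf = LogisticRegression(max_iter=1000)
task_clf.fit(src_features, src_labels, sample_weight=weights)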

Label-level adaptation

To better suit the target domain, this adapts the labels or the output space of the model. For example (a label embedding sketch follows this list):

  • Label embedding: To bridge the gap between source and target domains, it maps class labels into a shared space, which is typically learned during training.

  • Class prior estimation: To account for differences in class distributions, class priors can be adjusted based on the target domain.
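
To make label embedding concrete, here is a minimal sketch in the spirit of classic ZSL: each class label is represented by an attribute vector in a shared semantic space, a linear map from features to that space is fit on the seen classes with ridge regression, and an unseen class is predicted by the nearest attribute vector. The attribute vectors, shapes, and names below are synthetic placeholders:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
num_seen, num_unseen, feat_dim, attr_dim = 5, 3, 50, 10

# Synthetic attribute vectors for seen and unseen classes (the shared label space)
seen_attrs = rng.normal(size=(num_seen, attr_dim))
unseen_attrs = rng.normal(size=(num_unseen, attr_dim))

# Seen-class training data: features and the attribute vector of each sample's class
X_seen = rng.normal(size=(500, feat_dim))
y_seen = rng.integers(0, num_seen, size=500)
attr_targets = seen_attrs[y_seen]

# Learn a linear map from feature space to attribute space
mapper = Ridge(alpha=1.0).fit(X_seen, attr_targets)

# Zero-shot prediction: map a new sample and pick the nearest unseen-class attribute vector
x_new = rng.normal(size=(1, feat_dim))
scores = unseen_attrs @ mapper.predict(x_new).ravel()
print("Predicted unseen class:", int(np.argmax(scores)))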

Data-level adaptation

To reduce the domain gap between the source and target domains, it manipulates the training data. For example:

  • Data augmentation: Models can be adapted to unknown distributions by creating synthetic data or perturbing existing data in a way that mimics the target domain.

  • Self-ensembling: This is an approach in which pseudo-labels are created from a model’s predictions on unlabeled target domain data after the model has been trained on the source domain. The model is then trained further under the supervision of these pseudo-labels, as in the simplified sketch below.
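
The following rough sketch illustrates the pseudo-labeling idea behind self-ensembling in a heavily simplified form: a classifier trained on the labeled source domain labels the unlabeled target data, only its reasonably confident predictions are kept as pseudo-labels, and the model is retrained on the combined data. The data, confidence threshold, and names are hypothetical:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
src_X = rng.normal(0.0, 1.0, size=(500, 20))
src_y = rng.integers(0, 3, size=500)
tgt_X = rng.normal(0.3, 1.0, size=(400, 20))  # unlabeled target domain

# Step 1: train on the labeled source domain
clf = LogisticRegression(max_iter=1000).fit(src_X, src_y)

# Step 2: keep confident target predictions as pseudo-labels (threshold is illustrative)
probs = clf.predict_proba(tgt_X)
confident = probs.max(axis=1) > 0.5
pseudo_X, pseudo_y = tgt_X[confident], probs[confident].argmax(axis=1)

# Step 3: retrain on the source data plus the pseudo-labeled target data
clf.fit(np.vstack([src_X, pseudo_X]), np.concatenate([src_y, pseudo_y]))
print(f"Kept {int(confident.sum())} pseudo-labeled target samples")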

Model-based adaptation

To adapt to the target domain, this modifies the learning algorithm or architecture. For example (a fine-tuning sketch follows this list):

  • Fine-tuning: A model can be made to fit the new distribution by fine-tuning it on the target domain after it has been trained on the source domain.

  • Transfer learning: Using pretrained models on a related task or domain and then fine-tuning them on the target domain can be a beneficial approach.
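
Here is a minimal Keras sketch of fine-tuning, assuming (for illustration only) that a small amount of labeled target data is available: the model is pretrained on the source domain, the shared layer is frozen, and only the classification head is retrained on the target samples. Shapes, layer names, and hyperparameters are illustrative:

import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
src_X = rng.random((500, 50)).astype("float32")
src_y = rng.integers(0, 5, size=500)
tgt_X = rng.random((60, 50)).astype("float32")  # small labeled target set (hypothetical)
tgt_y = rng.integers(0, 5, size=60)

# Pretrain a simple classifier on the source domain
model = models.Sequential([
    layers.Dense(64, activation="relu", name="shared"),
    layers.Dense(5, activation="softmax", name="head"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(src_X, src_y, epochs=5, verbose=0)

# Freeze the shared layer and fine-tune only the classification head on the target data
model.get_layer("shared").trainable = False
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")  # recompile after changing trainability
model.fit(tgt_X, tgt_y, epochs=5, verbose=0)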

Ensemble-based adaptation

To improve overall performance and robustness, this combines predictions from multiple models trained on the source and target domains. For example (an ensembling sketch follows this list):

  • Model ensembling: Robustness can be improved by combining predictions from several models trained on various source domains or with various adaptation strategies.

  • Domain ensembling: To create a more comprehensive understanding of the distribution, this combines the models trained on different subsets of the target domain.
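
As a minimal sketch of model ensembling, two stand-in scikit-learn classifiers below take the place of models trained with different adaptation strategies, and their predicted class probabilities on target samples are simply averaged before taking the argmax; the data and model choices are illustrative:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
src_X = rng.normal(size=(500, 20))
src_y = rng.integers(0, 4, size=500)
tgt_X = rng.normal(0.3, 1.0, size=(100, 20))

# Two models standing in for models trained with different adaptation strategies
model_a = LogisticRegression(max_iter=1000).fit(src_X, src_y)
model_b = RandomForestClassifier(n_estimators=50, random_state=0).fit(src_X, src_y)

# Average the predicted class probabilities on the target domain
avg_probs = (model_a.predict_proba(tgt_X) + model_b.predict_proba(tgt_X)) / 2.0
ensemble_pred = avg_probs.argmax(axis=1)
print(ensemble_pred[:10])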

Unsupervised domain adaptation (UDA)

When labeled target domain data is limited or nonexistent, this entails adapting a model trained on a labeled source domain to an unlabeled target domain. For example (a minimal MMD sketch follows this list):

  • Cycle consistency: Distributions can be aligned by making sure that the mapping from the source domain to the target domain and back is consistent.

  • Maximum mean discrepancy (MMD): This involves reducing the disparity between the means of source and target domain feature representations.
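
To make MMD concrete, the sketch below computes its simplest, linear form on synthetic features: the squared distance between the mean feature vectors of the two domains, which could be added to a training objective as an alignment penalty. The data and function name are illustrative:

import numpy as np

rng = np.random.default_rng(0)
src_feats = rng.normal(0.0, 1.0, size=(500, 64))
tgt_feats = rng.normal(0.5, 1.0, size=(400, 64))

def linear_mmd(a, b):
    # Squared distance between the domain means (the simplest MMD estimate)
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

print("Linear MMD between source and target:", linear_mmd(src_feats, tgt_feats))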

Note: The specific characteristics of the source and target domains, the availability of labeled data, and the nature of the task at hand all influence the choice of domain adaptation strategy.

How does it work?

The general workflow for domain adaptation in zero-shot learning (ZSL) is as follows:

Domain adaptation working mechanism
  1. Source domain training

    1. Use a source domain with labeled examples to train a model. Typically, this source domain includes information from known classes or concepts.

    2. From these known classes, the model learns the ability to identify and categorize instances.

  2. Feature alignment

    1. Examine how the feature distributions of the source and target domains differ from one another. This is important because if the features are significantly different, the model might have trouble generalizing to the target domain.

    2. The feature spaces of the source and target domains are aligned using strategies such as domain adversarial training or other domain adaptation techniques.

  3. Knowledge transfer

    1. Transfer knowledge from the source domain to the target domain. Both feature-level adaptation and the adaptation of semantic information or class representations can be included in this transfer.

    2. To make generalization to the target domain easier, either the model parameters are modified or a mapping function is learned.

    3. This step is crucial because, in zero-shot learning, the model must identify and categorize instances from classes that were not seen during source domain training.

  4. Zero-shot inference

    1. Perform zero-shot inference on the target domain using the adapted model.

    2. The model should be able to recognize and classify instances from classes that it has never seen before during training.

    3. This entails making use of the transferred knowledge, which includes information at the class level as well as feature-level adaptation.

Code example

The following code is an implementation of a domain adaptation model using a basic neural network architecture. It aims to adapt the model to perform well on the target domain even when labeled data isn’t available in the target domain:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers as tf_layers, models as tf_models, optimizers as tf_optimizers

# Generate synthetic source domain data
num_src_samples = 500
num_classes = 5
num_features = 50
src_features = np.random.rand(num_src_samples, num_features)
src_labels = np.random.randint(0, num_classes, size=num_src_samples)

# Generate synthetic target domain data
num_tgt_samples = 400
tgt_features = np.random.rand(num_tgt_samples, num_features)

# Define a simpler domain adaptation model
class CustomDomainAdaptationModel(tf_models.Model):
    def __init__(self, num_classes, num_features):
        super(CustomDomainAdaptationModel, self).__init__()
        self.shared_layers = self._build_shared_layers(num_features)
        self.label_predictor = self._build_label_predictor(num_classes)
        self.domain_classifier = self._build_domain_classifier()

    def _build_shared_layers(self, input_dim):
        shared_model = tf_models.Sequential([
            tf_layers.Dense(64, activation='relu', input_dim=input_dim),
        ])
        return shared_model

    def _build_label_predictor(self, num_classes):
        return tf_layers.Dense(num_classes, activation='softmax')

    def _build_domain_classifier(self):
        return tf_layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        shared_features = self.shared_layers(inputs)
        domain_pred = self.domain_classifier(shared_features)
        label_pred = self.label_predictor(shared_features)
        return label_pred, domain_pred

# Loss functions for domain adversarial training
def custom_domain_adversarial_loss(domain_labels, domain_pred):
    # Binary cross-entropy between the domain labels (1 = source, 0 = target) and predictions
    domain_loss = tf.keras.losses.binary_crossentropy(domain_labels, domain_pred)
    return tf.reduce_mean(domain_loss)

def custom_classification_loss(true_labels, label_pred):
    return tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(true_labels, label_pred))

# Domain adaptation training
def train_custom_domain_adaptation(src_features, src_labels, tgt_features, num_epochs):
    model = CustomDomainAdaptationModel(num_classes, num_features)
    optimizer = tf_optimizers.Adam()

    for epoch in range(num_epochs):
        # Training step on source domain data
        with tf.GradientTape() as tape:
            src_label_pred, src_domain_pred = model(src_features)
            src_label_loss = custom_classification_loss(src_labels, src_label_pred)
            src_domain_loss = custom_domain_adversarial_loss(tf.ones_like(src_domain_pred), src_domain_pred)
            total_loss = src_label_loss + src_domain_loss

        gradients = tape.gradient(total_loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        # Evaluation on target domain data (no labels)
        _, tgt_domain_pred = model(tgt_features)
        tgt_domain_loss = custom_domain_adversarial_loss(tf.zeros_like(tgt_domain_pred), tgt_domain_pred)

        if epoch % 10 == 0:
            print(f"Epoch {epoch}: Source Label Loss: {src_label_loss.numpy()}, "
                  f"Source Domain Loss: {src_domain_loss.numpy()}, "
                  f"Target Domain Loss: {tgt_domain_loss.numpy()}")

# Training hyperparameters
num_epochs_custom = 20

# Training the domain adaptation model
train_custom_domain_adaptation(src_features, src_labels, tgt_features, num_epochs_custom)

Code explanation

Here’s a breakdown of the code:

  • Lines 3–5: We import the necessary libraries.

  • Lines 8–12: Here, we generate num_src_samples samples with num_features features for the source domain, along with random labels from num_classes classes.

  • Lines 15–16: Here, we generate num_tgt_samples samples with num_features features for the target domain.

  • Line 19: The code defines a class named CustomDomainAdaptationModel that inherits from tf_models.Model, indicating it’s a Keras model.

  • Line 20: The class has an __init__ method that initializes the model. It takes num_classes and num_features as parameters.

  • Lines 21–24: The line super(CustomDomainAdaptationModel, self).__init__() calls the constructor of the parent class (tf_models.Model).

    • We create and initialize three attributes:

      • self.shared_layers: This is the shared layers of the model, created using the _build_shared_layers method.

      • self.label_predictor: This is the label predictor layer, created using the _build_label_predictor method.

      • self.domain_classifier: This is the domain classifier layer, created using the _build_domain_classifier method.

  • Lines 26–30: The _build_shared_layers method builds and returns a shared model. The shared model consists of a single dense layer with 64 units, relu activation, and an input dimension specified by input_dim.

  • Lines 32–33: The _build_label_predictor method builds and returns a label predictor layer. The label predictor is a dense layer with num_classes units and softmax activation, suitable for multiclass classification.

  • Lines 35–36: The _build_domain_classifier method builds and returns a domain classifier layer. The domain classifier is a dense layer with a single unit and sigmoid activation, suitable for binary classification (source or target domain).

  • Lines 38–42: The call method is where the forward pass of the model is defined. Given the inputs, it passes the inputs through the shared layers to obtain shared_features. The domain_pred is the output of the domain classifier applied to the shared features. The label_pred is the output of the label predictor applied to the shared features. The method returns a tuple (label_pred, domain_pred) representing the model’s predictions for class labels and domain classification.

  • Lines 45–51: We define two loss functions:

    • custom_domain_adversarial_loss: This computes the binary cross-entropy loss for the domain classifier.

    • custom_classification_loss: This computes the sparse categorical cross-entropy loss for the label predictor.

  • Lines 54–56: The train_custom_domain_adaptation function is defined to train the domain adaptation model. The training process involves alternating between source domain training and target domain evaluation. The model is trained using both classification loss and domain adversarial loss. The Adam optimizer is used to minimize the total loss.

  • Lines 60–67: These perform the training step on the source domain data.

  • Lines 70–76: Here we perform evaluation on the target domain data.

  • Line 79: The number of training epochs (num_epochs_custom) is set to 20.

  • Line 82: The train_custom_domain_adaptation function is called with the source and target domain data.

  • Expected output: The code prints the source label loss, source domain loss, and target domain loss every 10 epochs during training.

Domain adaptation vs. deep domain adaptation

In ZSL scenarios, both domain adaptation and deep domain adaptation aim to address the challenge of adapting models to new, unknown domains. They differ in terms of the techniques employed, the degree to which they’re integrated with deep learning frameworks, and the level of complexity involved in achieving domain alignment:

Features | Domain Adaptation | Deep Domain Adaptation
Scope | This is a broader term encompassing various methods to align different domains. | This focuses on leveraging deep learning architectures to achieve domain adaptation.
Complexity | This typically involves less complex architectures. | This typically involves more complex architectures due to the integration of deep learning techniques within the adaptation process.
Representation learning | This often relies on shallower or hand-crafted feature representations. | This often emphasizes learning more abstract and domain-invariant representations within the neural network layers.
