The standard convolution employed in Convolutional Neural Networks for image processing slides a kernel over the input image, so each output value summarizes a small, contiguous neighborhood of pixels.
Dilated convolution is a variant of convolution in which the kernel is inflated according to a dilation factor before the convolution is performed.
The following steps outline the process:
The kernel size is increased according to the dilation factor by adding zeros to the right, bottom, and diagonal of each entry in the original kernel, except for the entries in the last row and column. The number of zeros added is one less than the dilation factor.
For instance, a dilation size of 3 would mean that 2 zeros are added along each direction.
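As a sketch, the inflation step above can be written in NumPy. The helper `dilate_kernel` is illustrative, not from the original; it places the kernel entries on a zero grid with a stride equal to the dilation factor, which inserts exactly `dilation - 1` zeros between neighboring entries.

```python
import numpy as np

def dilate_kernel(kernel, dilation):
    """Inflate a square kernel by inserting (dilation - 1) zeros
    between adjacent entries along both axes."""
    k = kernel.shape[0]
    # Effective size: k entries plus (k - 1) gaps of (dilation - 1) zeros
    size = k + (k - 1) * (dilation - 1)
    dilated = np.zeros((size, size), dtype=kernel.dtype)
    # Scatter the original entries onto the zero grid with stride `dilation`
    dilated[::dilation, ::dilation] = kernel
    return dilated

kernel = np.arange(1, 10).reshape(3, 3)
print(dilate_kernel(kernel, 3))  # a 7 x 7 kernel with 2 zeros between entries
```

For a 3 x 3 kernel and dilation 3, the result is a 7 x 7 kernel: the original nine entries survive, separated by two zeros in each direction, matching the rule above.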
Following the dilation of the kernel as demonstrated above, the input is convolved with the dilated kernel:
Preservation of resolution: The image resolution is not reduced, as it is with max-pooling.
Captures more contextual details: Greater spatial coverage helps capture more distantly related features.
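The growth in spatial coverage follows directly from the inflation rule: a k x k kernel at dilation d spans k + (k - 1)(d - 1) pixels per side. A minimal sketch (the helper name is illustrative):

```python
def effective_kernel_size(k, d):
    """Spatial extent, in pixels, covered by a k x k kernel at dilation d."""
    return k + (k - 1) * (d - 1)

# A 3 x 3 kernel covers progressively more context as dilation grows,
# while the number of learned weights stays fixed at 9.
for d in (1, 2, 3, 4):
    print(f"dilation {d}: covers {effective_kernel_size(3, d)} pixels per side")
```

This is why stacking dilated convolutions enlarges the receptive field quickly without pooling away resolution or adding parameters.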