
Data Augmentation for Bounding Boxes: Rethinking Image Transforms for Object Detection
How to adapt major image augmentation techniques for object detection purposes. We also cover the implementation of horizontal flip augmentation.
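To make the idea concrete, here is a minimal sketch of a horizontal flip applied to both the image and its boxes. It assumes the image is an H x W x C NumPy array and the boxes are stored as [x1, y1, x2, y2] corner coordinates; that format is an illustrative convention for this sketch, not necessarily the exact one used in the article.

```python
import numpy as np

def horizontal_flip(img, bboxes):
    """Flip an image and its bounding boxes horizontally.

    img:    H x W x C NumPy array
    bboxes: N x 4 array of [x1, y1, x2, y2] corners (assumed format)
    """
    img = img[:, ::-1, :]                 # mirror the pixel columns
    w = img.shape[1]
    x1 = bboxes[:, 0].copy()
    x2 = bboxes[:, 2].copy()
    bboxes[:, 0] = w - x2                 # new left edge = width - old right edge
    bboxes[:, 2] = w - x1                 # new right edge = width - old left edge
    return img, bboxes
```

Only the x-coordinates change: the flipped box's left edge is the image width minus the old right edge, and vice versa.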
We implement the Scale and Translate augmentation techniques and cover what to do if a portion of your bounding box lies outside the image after the augmentation.
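As a rough illustration of that last point, the sketch below clips boxes to the image boundary and drops any box that loses too much of its area. The [x1, y1, x2, y2] format and the min_visible threshold are assumptions made for the example.

```python
import numpy as np

def clip_boxes(bboxes, img_w, img_h, min_visible=0.25):
    """Clip [x1, y1, x2, y2] boxes to the image and drop boxes whose
    remaining area falls below `min_visible` of the original area."""
    area = (bboxes[:, 2] - bboxes[:, 0]) * (bboxes[:, 3] - bboxes[:, 1])

    clipped = bboxes.copy()
    clipped[:, [0, 2]] = np.clip(clipped[:, [0, 2]], 0, img_w)
    clipped[:, [1, 3]] = np.clip(clipped[:, [1, 3]], 0, img_h)

    new_area = (clipped[:, 2] - clipped[:, 0]) * (clipped[:, 3] - clipped[:, 1])
    keep = new_area / np.maximum(area, 1e-6) >= min_visible
    return clipped[keep]
```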
This is part 3 of the series where we are looking at ways to adapt image augmentation techniques to object detection tasks. In this part, we will cover how to rotate and shear images as well as bounding boxes using OpenCV's affine transformation features.
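For a flavour of the OpenCV side, here is a hedged sketch that builds a rotation matrix with cv2.getRotationMatrix2D and applies it with cv2.warpAffine, plus a helper for transforming box corner points with the same matrix. The full box bookkeeping (computing enclosing boxes, resizing back) is covered in the article and omitted here.

```python
import cv2
import numpy as np

def rotate_image(img, angle):
    """Rotate an image about its centre using an affine warp."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def transform_points(points, M):
    """Apply a 2x3 affine matrix to an N x 2 array of (x, y) points,
    e.g. the four corners of each bounding box."""
    ones = np.ones((points.shape[0], 1))
    return np.hstack([points, ones]) @ M.T
```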
Previously, we covered a variety of image augmentation techniques such as flipping, rotation, shearing, scaling, and translation. This part is about how to bring it all together and bake it into the input pipeline for your deep network.
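A minimal sketch of such a pipeline is shown below, assuming each augmentation is a callable that takes (img, bboxes) and returns the transformed pair; the class name Sequence and the probability handling are illustrative choices for this sketch.

```python
import random

class Sequence:
    """Chain several augmentations, each applied with some probability."""

    def __init__(self, augmentations, probs=0.5):
        self.augmentations = augmentations
        self.probs = probs  # a single float, or one float per augmentation

    def __call__(self, img, bboxes):
        for i, aug in enumerate(self.augmentations):
            p = self.probs[i] if isinstance(self.probs, list) else self.probs
            if random.random() < p:
                img, bboxes = aug(img, bboxes)
        return img, bboxes
```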
In this article, we'll use Quilt to transfer versioned training data to a remote machine. We'll start with the Berkeley Segmentation Dataset, package the dataset, then train a PyTorch model for super-resolution imaging.
Batch Normalisation does NOT reduce internal covariate shift. This post looks into why internal covariate shift is a problem and how batch normalisation is used to address it.
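For reference, a minimal NumPy sketch of the batch normalisation transform itself, for a (batch, features) array; running statistics for inference and the backward pass are omitted for brevity.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalise each feature to zero mean / unit variance over the batch,
    then rescale and shift with the learnable gamma and beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```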
In this post, we will learn how to train a style transfer network with Paperspace's Gradient° and use the model to create an interactive style transfer mirror.
A look into how various activation functions like ReLU, PReLU, RReLU and ELU are used to address the vanishing gradient problem, and how to choose one amongst them for your network.
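As a quick reference, here are NumPy sketches of these activations; the alpha arguments are illustrative (PReLU learns alpha during training, while RReLU samples it from a uniform range).

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def prelu(x, alpha):
    # PReLU: alpha is a learnable parameter
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # ELU: saturates smoothly towards -alpha for large negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))
```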
In this post, we will learn how to train a language model using an LSTM neural network on your own custom dataset and use the resulting model so you will be able to sample from it directly in the browser!
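For orientation, a minimal PyTorch sketch of a character-level LSTM language model; the class name and layer sizes are illustrative and not taken from the post.

```python
import torch.nn as nn

class CharLSTM(nn.Module):
    """Embed tokens, run them through an LSTM, and project the hidden
    states back to vocabulary logits for next-character prediction."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden=None):
        out, hidden = self.lstm(self.embed(x), hidden)
        return self.fc(out), hidden
```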