Translation Invariance & Equivariance in Convolutional Neural Networks
In this article, we examine two features of convolutional neural networks: translation equivariance and invariance.
Oreolorun is a machine learning engineer with a particular interest in computer vision applications. He loves to write about nuanced concepts in machine learning in an easy-to-understand manner.
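As a quick taste of the article's subject, here is a minimal PyTorch sketch (illustrative, not taken from the article itself). With circular padding, convolution is exactly equivariant to circular shifts, while global average pooling discards position and so makes the pooled features invariant to them:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Circular padding keeps the convolution exactly equivariant to
# circular shifts, even at the image borders
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1, padding_mode="circular", bias=False)
image = torch.randn(1, 1, 28, 28)
shifted = torch.roll(image, shifts=3, dims=-1)  # translate 3 pixels to the right

# Equivariance: convolving the shifted image equals shifting the convolved image
print(torch.allclose(conv(shifted), torch.roll(conv(image), shifts=3, dims=-1), atol=1e-6))

# Invariance: global average pooling collapses spatial positions, so the
# pooled features do not change when the input is translated
gap = nn.AdaptiveAvgPool2d(1)
print(torch.allclose(gap(conv(shifted)), gap(conv(image)), atol=1e-6))
```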
In this article, we examine a game-theoretic approach to explaining the outputs of machine learning models: SHapley Additive exPlanations, or SHAP. We then demo the technique using sample images in a Gradient Notebook.
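A minimal sketch of how such an explanation might be set up with the shap library; the randomly initialized classifier and random tensors here are stand-ins for the notebook's real model and images:

```python
import torch
import torchvision.models as models
import shap

# A stand-in classifier; the notebook explains its own trained model
model = models.resnet18(weights=None).eval()

# Background samples anchor SHAP's expected-value baseline; random
# tensors are placeholders for real images here
background = torch.randn(8, 3, 224, 224)
to_explain = torch.randn(2, 3, 224, 224)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)  # one attribution map per class
```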
In this follow-up article, we take a look at another beneficial use of autoencoders: we explore how an autoencoder's encoder can be used as a feature extractor, with the extracted features then compared using cosine similarity in order to find similar images.
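A sketch of the idea, with a hypothetical untrained encoder standing in for the trained autoencoder's encoder half:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A hypothetical convolutional encoder; in practice this would be the
# encoder half of a trained autoencoder
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
)

query = torch.randn(1, 3, 32, 32)      # image to search with
gallery = torch.randn(100, 3, 32, 32)  # images to search through

with torch.no_grad():
    q_feat = encoder(query)    # (1, D) feature vector
    g_feat = encoder(gallery)  # (100, D) feature vectors

# Cosine similarity between the query and every gallery image
sims = F.cosine_similarity(q_feat, g_feat, dim=1)
top5 = sims.topk(5).indices  # indices of the 5 most similar images
```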
In this article, we take a look at one of the uses of autoencoders: image denoising. We show how an autoencoder's representation learning allows it to learn mappings effective enough to fix corrupted pixels/datapoints.
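A sketch of one denoising training step, assuming a toy autoencoder and synthetic noise; the article's actual model and data will differ:

```python
import torch
import torch.nn as nn

# A toy autoencoder; the article trains a deeper convolutional one
autoencoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(32, 1, 28, 28)  # placeholder clean images
noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)

# The key idea: the network sees corrupted inputs but is penalized
# against the clean targets, so it learns to undo the corruption
optimizer.zero_grad()
loss = loss_fn(autoencoder(noisy), clean)
loss.backward()
optimizer.step()
```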
In this article, we examine the processes of training, validation, and obtaining accuracy metrics, explained theoretically at a high level. We then demonstrate them by combining all three processes in a class and using it to train a convolutional neural network.
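Schematically, such a class might look like the following sketch (the class and method names here are ours, not necessarily the article's):

```python
import torch
import torch.nn as nn

class ConvNetTrainer:
    """Bundles training, validation, and accuracy into one object."""

    def __init__(self, model, lr=1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        self.loss_fn = nn.CrossEntropyLoss()

    def train_epoch(self, loader):
        self.model.train()
        for images, labels in loader:
            self.optimizer.zero_grad()
            loss = self.loss_fn(self.model(images), labels)
            loss.backward()
            self.optimizer.step()

    @torch.no_grad()
    def accuracy(self, loader):
        self.model.eval()
        correct = total = 0
        for images, labels in loader:
            preds = self.model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total
```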
In this article, we explore what dimensions imply in a convolutional neural network context. We show how to create a custom convnet as a baseline model, and then proceed to create new versions of it: one with increased width, one with increased depth, and a last one with both increased depth and width.
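Illustratively (the channel counts and block structure here are placeholders, not the article's exact models), widening means more channels per layer, while deepening means more layers:

```python
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

# Baseline: two blocks with 16 and 32 channels
baseline = nn.Sequential(block(3, 16), block(16, 32))

# Wider: same depth, more channels per layer
wider = nn.Sequential(block(3, 32), block(32, 64))

# Deeper: same channel widths, one extra block
deeper = nn.Sequential(block(3, 16), block(16, 32), block(32, 32))
```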
This tutorial provides step-by-step instructions on how to handle class imbalance in image datasets when training computer vision models.
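One common step in such a pipeline, sketched here with PyTorch's WeightedRandomSampler and placeholder data, is to oversample the minority class during loading:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Placeholder imbalanced dataset: 900 images of class 0, 100 of class 1
images = torch.randn(1000, 3, 32, 32)
labels = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])
dataset = TensorDataset(images, labels)

# Weight each sample by the inverse frequency of its class so that
# minority-class images are drawn as often as majority-class ones
class_counts = torch.bincount(labels)
sample_weights = 1.0 / class_counts[labels].float()
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)
```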
In this article, we take a look at data augmentation as an upsampling technique for handling class imbalance, examining 5 sample methods. Thereafter, we augment a dataset, train a convnet on it, and show how augmentation improves accuracy and recall scores.
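A sketch of what such an augmentation pipeline could look like with torchvision transforms; these five transforms are illustrative, not necessarily the article's five methods:

```python
import torchvision.transforms as T
from PIL import Image

# Five sample augmentations of the kind the article covers
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=15),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.RandomResizedCrop(size=32, scale=(0.8, 1.0)),
    T.GaussianBlur(kernel_size=3),
])

image = Image.new("RGB", (32, 32))  # placeholder for a minority-class image
# Each pass produces a new random variant, upsampling the minority class
extra_samples = [augment(image) for _ in range(5)]
```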
Batch normalization is a term commonly mentioned in the context of convolutional neural networks. In this article, we are going to explore what it actually entails and its effects, if any, on the performance or overall behavior of convolutional neural networks.
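For orientation, batch normalization typically slots in between a convolution and its activation, as in this minimal sketch:

```python
import torch.nn as nn

# The usual placement: convolution -> batch norm -> activation
convnet_with_bn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),  # normalizes each of the 16 channels across the batch
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)
```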