Translation Invariance & Equivariance in Convolutional Neural Networks
In this article, we examine two key properties of convolutional neural networks: translation equivariance and translation invariance.
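As a minimal illustration of the equivariance property, the 1-D sketch below (plain NumPy, no deep-learning framework; the signal and kernel values are made up) shows that shifting the input to a convolution shifts its output by the same amount, while a global max pool on top makes the result shift-invariant:

```python
import numpy as np

def conv1d_valid(x, k):
    # plain 1-D cross-correlation with "valid" padding
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.array([0., 1., 2., 3., 0., 0., 0., 0.])  # toy signal
k = np.array([1., -1.])                         # toy edge-detector kernel

y = conv1d_valid(x, k)
x_shift = np.roll(x, 2)          # translate the input by 2 positions
y_shift = conv1d_valid(x_shift, k)

# equivariance: the feature map shifts along with the input
print(np.allclose(np.roll(y, 2), y_shift))  # → True

# invariance: global max pooling discards the position entirely
print(y.max() == y_shift.max())             # → True
```

Here the zero padding around the signal keeps `np.roll`'s wrap-around from affecting the comparison; in a real CNN the same argument holds away from the image borders.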
Learn how to customize your diffusion model images with multiple concepts!
In this post, we present the LSTM architecture and use it to construct a weather-forecasting model, demonstrating its effectiveness as a subclass of RNNs designed to detect patterns in sequential data, including numerical time-series data.
In this article, we examine a game-theory-based approach to explaining the outputs of machine learning models: SHapley Additive exPlanations (SHAP). We then demo the technique on sample images in a Gradient Notebook.
In this tutorial, we cover semantic search with sentence embeddings using Cohere in a Gradient Notebook.
In this follow-up article, we take a look at another beneficial use of autoencoders: using an autoencoder's encoder as a feature extractor, with the extracted features then compared via cosine similarity to find similar images.
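The feature-comparison step can be sketched with toy vectors (the feature values below are made up for illustration; a real pipeline would use the trained encoder's outputs for each image):

```python
import numpy as np

# Hypothetical encoder outputs: each row is the feature vector
# the encoder produces for one image.
features = np.array([
    [1.0, 0.0, 2.0],   # image A
    [2.0, 0.0, 4.0],   # image B (same direction as A, so most similar)
    [0.0, 3.0, 0.0],   # image C
])

def cosine_similarity(a, b):
    # cosine of the angle between two feature vectors, in [-1, 1]
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = features[0]
sims = np.array([cosine_similarity(query, f) for f in features])
# rank the other images by similarity to the query (skip the query itself)
best = int(np.argmax(sims[1:])) + 1
print(best)  # → 1 (image B)
```

Cosine similarity ignores vector magnitude, which is why B (a scaled copy of A's features) scores higher than C despite having larger values.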
In this article, we talk about what Dense Passage Retrieval is, how it works, and its uses. We also show how to implement it using the Simple Transformers python library in a Gradient Notebook.
In this article, we review some improvements to GANs over time, then go through the revolutionary ProGAN paper to see how it works and understand it in depth.
In this tutorial, we show how to apply model interpretability algorithms from Captum to simple models. We demo building a basic model and use attribution algorithms such as Integrated Gradients, Saliency, DeepLift, and NoiseTunnel to attribute the image's label to the input pixels and visualize the results.