How JavaScript Libraries Are Training Neural Networks on Web Browsers


By Vihar Kurama

For years, JavaScript has been one of the most-loved programming languages among developers. It’s primarily used for building web browser UIs and backend business logic (with Node.js). In surveys conducted by Stack Overflow and GitHub, it consistently ranks #1 in terms of the number of repositories it appears in and the pull request activity it generates. Deep learning, on the other hand, is widely used to train machines to recognize patterns and automate tasks: computing the shortest route from phone location data, detecting and identifying faces on cameras, and spotting complex abnormalities in medical imaging, to name a few.

Perhaps you’re wondering how JavaScript, a programming language mainly used for building the web, relates to Artificial Intelligence. To answer that, in this article we’ll look at how JavaScript blends with Deep Learning, a territory otherwise dominated by languages such as Python and C++. Below is the content we’ll be going through.

  • The Traditional Working of Deep Learning
  • Pipeline to Build Deep Learning Models
  • JavaScript for Deep Learning and Popular Libraries/Frameworks
  • Deep Learning on Web Browsers
  • Building a Neural Network for Color Classification with ml5.js
  • Conclusion


The Traditional Working of Deep Learning

Deep Learning, a subset of Artificial Intelligence, was born in the 1950s from the idea of imitating the brain. Deep learning algorithms are implemented as neural networks, traditionally programmed in languages such as Python and C++. If you are not familiar with neural networks, don't worry! Loosely speaking, a neural network is an artificial brain consisting of layers through which data is passed and patterns are learned. The learning phase is referred to as "training." It typically involves large quantities of data and models with many parameters, which is why GPUs are used to accelerate the computation. After these networks are trained, they must be deployed somewhere to make predictions on real data (classifying images and text, detecting events in audio or video streams, and so on). Without deployment, training a neural network is just a waste of computing power.

Below are a few popular programming languages and frameworks that are widely put into use to build or use deep learning models.

  • Python has always been a top choice for building AI. Google’s TensorFlow and Facebook’s PyTorch are its most-used deep learning frameworks.
  • C++ is also used to develop state-of-the-art neural networks where computations need to run as fast as possible.
  • R and Julia are also used in some cases to develop custom models; however, their deep learning ecosystems are more limited, and it can be hard to build a wide variety of neural networks because their communities are relatively small.

Pipeline to Build Deep Learning Models

Before we discuss deep learning on the web, let’s first examine the standard pipeline we follow to build neural networks. With a few minor tweaks, the same strategy carries over to implementing neural networks in web browsers. Here’s the process (a minimal TensorFlow.js sketch of the full pipeline follows the list):

  1. Data Sourcing and Loading: The core of deep learning is data; the more data we have, the better neural networks perform. All of this data is fed to the network as tensors, an n-dimensional data type. Anything from plain numbers to text, images, and videos can be converted into the tensor format. Tensors also enable multi-process data loading and batch processing, meaning you can train your neural network on multiple samples simultaneously.
  2. Defining the Model: The step immediately after loading the data is to define the neural network. This is the brain of the entire pipeline, which continuously learns from the data provided. This step involves a lot of hyperparameters and experimentation to achieve state-of-the-art performance. Commonly tuned hyperparameters include batch_size, number_of_epochs, loss_function, optimization_function, and learning_rate. We then design the network itself, deciding on the number of layers and parameters required; there are different types of layers and architectures to choose from, depending on the data, its structure, and the required output.
  3. Training the Model: The previous step covered what designing a model entails, but the process is still half-baked; a model is only useful once it has been trained on data. A model with its parameters defined is trained for a certain number of epochs by passing the data through it iteratively and computing the error with a loss function. The backpropagation algorithm then updates the network's weights at every iteration to reduce that loss and improve the model's metrics. After training is complete, the model's weights are saved in whatever format the framework supports; we call this the trained model.
  4. Model to Cloud and Production: Trained models can be large, depending on the network used; they range from kilobytes to gigabytes. To use them in production, we store them on the cloud and load them again for deployment.
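
Since the rest of this article stays in JavaScript, here is a minimal sketch of that same four-step pipeline using TensorFlow.js. The toy tensors, layer sizes, and save destination below are made-up values for illustration; the sketch simply shows data loading, model definition, training, and saving in one place.

// Assumes TensorFlow.js has been loaded via a <script> tag, so `tf` is a global.

// 1. Data sourcing and loading: toy inputs and labels as tensors.
const xs = tf.tensor2d([[0, 0], [0, 1], [1, 0], [1, 1]]); // 4 samples, 2 features
const ys = tf.tensor2d([[0], [1], [1], [0]]);             // 4 labels

// 2. Defining the model: a small fully-connected network.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [2], units: 8, activation: "relu" }));
model.add(tf.layers.dense({ units: 1, activation: "sigmoid" }));
model.compile({ optimizer: tf.train.adam(0.01), loss: "binaryCrossentropy" });

// 3. Training the model for a fixed number of epochs.
model.fit(xs, ys, { epochs: 50, batchSize: 4 }).then(() => {
  // 4. Saving the trained weights (here, to the browser's local storage).
  return model.save("localstorage://toy-model");
});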

In the next section, we’ll discuss the mechanism to embed models into web browsers using JavaScript.

JavaScript for Deep Learning and Popular Libraries

As discussed, JavaScript has been one of the most popular programming languages since its birth in 1995. However, up until around 2015 it was not considered for Deep Learning and Artificial Intelligence work, for three main reasons:

  1. JavaScript was widely perceived as a slow language.
  2. Operations like matrix multiplication and normalization were difficult to implement efficiently.
  3. The majority of machine learning libraries (Scikit-learn, TensorFlow, etc.) target Python.

Yet now there are a handful of libraries based on JavaScript for Deep Learning. Let's have a look.

  • TensorFlow.js: TensorFlow.js is a JavaScript library for building neural networks and running them directly in web browsers. One cool feature is that it can convert existing Python-based TensorFlow models to work in the browser. It also lets you implement advanced networks for facial recognition, movement detection, and much more, directly in the browser.
  • Brain.js: Brain.js is one of the popular JavaScript libraries for building neural networks; it also supports GPU acceleration directly from web browsers and Node.js. Common network types such as feedforward networks and recurrent networks (RNN, LSTM, GRU) can be implemented with this library (a minimal Brain.js example follows this list).
  • ml5.js: This is a friendly framework built on top of TensorFlow.js; it has good community support and documentation for people who are getting started with deep learning directly with JavaScript.
  • ConvNetJS: ConvNetJS was one of the first JavaScript libraries used for deep learning. Originally developed by Andrej Karpathy to implement simple classification and convolutional models, it is still maintained by its community and supports several common network types.
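
To give a sense of how concise in-browser training can be, here is a minimal Brain.js sketch that learns the XOR function. The data is a toy example, and the script assumes brain.js has been loaded via a <script> tag so that the global brain object is available.

// Assumes brain.js is loaded via a <script> tag, exposing the global `brain`.
const net = new brain.NeuralNetwork({ hiddenLayers: [3] });

// Train on the XOR truth table.
net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

// Run a prediction; the result should be close to 1 for the input [1, 0].
console.log(net.run([1, 0]));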

Now that we understand the deep learning pipeline along with the JavaScript libraries available, it’s time to see how to merge JavaScript with Deep Learning.

Deep Learning on Web Browsers

On desktops and laptops alike, web browsers like Chrome, Firefox, and Safari are the main means by which users access the content and services of the internet. This broad reach makes the web browser a logical choice for deploying deep learning models, as long as the kinds of data the models require are available from the browser. But what kinds of data are available from the browser? The answer is many! Let’s now talk about some real use cases. Consider YouTube, where there are billions of videos; Instagram, which is filled with images; and Twitter, which is full of text.

Here’s a question for you: when you search for songs or videos on YouTube or Spotify, you'll see recommendations for other items that might interest you. Ever wondered how this happens? The magic behind it is a lot of machine learning and deep learning running in browsers and on backend servers, and this is where JavaScript plays a crucial role: fetching the data, binding it to the browser, and wiring it up to intelligent algorithms. Below are a few examples of deep learning running in web browsers with JavaScript, followed by a short code sketch of one of them.

Real-time face detection in the web browser with TensorFlow.js
Real-time text toxicity detection in the web browser
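
As a concrete sketch of the second example, the snippet below loads the pretrained TensorFlow.js toxicity model and classifies a sentence directly in the browser. The threshold and the sample sentence are arbitrary values chosen for illustration, and the script assumes that tfjs and the @tensorflow-models/toxicity script have already been loaded via <script> tags so that the global toxicity object is available.

// Assumes tfjs and @tensorflow-models/toxicity are loaded via <script> tags,
// exposing the global `toxicity` object.
const threshold = 0.9; // minimum confidence for a label to count as a match

toxicity.load(threshold).then((model) => {
  return model.classify(["You are a wonderful person!"]);
}).then((predictions) => {
  // Each prediction has a label (e.g. "insult") and per-sentence match results.
  predictions.forEach((p) => {
    console.log(p.label, p.results[0].match);
  });
});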

Alright! Let’s now dive into the code, where we’ll learn to predict the name of a color we pick in the web browser.

Building a Neural Network for Color Classification

In this section, we'll use the ml5.js library to create and train a neural network in the web browser. Our goal is to predict the name of a given color in the web browser with our trained neural network. Make sure you have Node installed on your computer; if not, you can install it from the following link. Node allows you to run and install all the packages necessary to create our neural networks with ml5.js. Now, let's follow the steps below.

Step 1: Create an npm project and the necessary files

To get started with our project, first create an npm project. This can be done by running the command npm init in your terminal or command prompt (make sure you have Node and npm installed; if not, follow the instructions from the website linked here). Below is a screenshot of my terminal after I created my project; you can add a description of your project at this stage. As shown in the image below, the command creates a package.json file, which acts as the entry file for your project and lists the dependencies and versions of the libraries installed in it.
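
For reference, a freshly generated package.json looks roughly like the sketch below. The name and description are placeholder values chosen for this project, and a dependencies section will appear once you install packages with npm install.

{
  "name": "color-classifier",
  "version": "1.0.0",
  "description": "Color classification in the browser with ml5.js",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}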

Step 2: Add Scripts to index.html

In this step, we’ll write some HTML. We’ll include p5.js, a JavaScript library for creative coding, which helps us visualise training and add some animations to the project. Below is the code snippet:

<html>
   <head>
      <meta charset="UTF-8" />
      <title>Color Classifier - Neural Network</title>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.min.js"></script>
      <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/addons/p5.dom.min.js"></script>
      <script
         src="https://unpkg.com/ml5@0.5.0/dist/ml5.min.js"
         type="text/javascript"
         ></script>
   </head>
   <body>
      <h1>Color Classifier - Neural Network</h1>
      <script src="sketch.js"></script>
   </body>
</html>

Here, as you can see, p5.js and ml5.js have been added in the HTML script tags. You can think of this as importing p5.js (the visualisation library) and ml5.js (the deep learning library) into your web browser.

In the <body> tag, an H1 tag contains the name of the project, and an additional script tag loads sketch.js, where all the deep learning logic goes. Before diving into that, download the training data and put it on your local machine.

Step 3: Download the Dataset

You can download the dataset from the link here. Make sure you create a folder named data inside your project and move the downloaded file there (the code below expects it at data/colorData.json).

{
 "entries": [
   {
     "b": 155,
     "g": 183,
     "label": "green-ish",
     "r": 81,
     "uid": "EjbbUhVExBSZxtpKfcQ5qzT7jDW2"
   },
   {
     "b": 71,
     "g": 22,
     "label": "pink-ish",
     "r": 249,
     "uid": "fpqsSD6CvNNFQmRp9sJDdI1QJm32"
   }
 ]
}

The JSON file consists of entries like the ones above, where r, g, and b stand for red, green, and blue; these are the inputs given to our neural network. The "label" key is the corresponding output for that particular RGB value. Building the neural network comes next.

Step 4: Building the Neural Network

First, we define a few JavaScript variables that will interact with the browser. Below is the code snippet.

 let neuralNetwork;
 let submitButton;
 
 let rSlider, gSlider, bSlider;
 let labelP;
 let lossP;

The neuralNetwork variable will hold the neural network defined in the upcoming functions, and submitButton sends input to the trained model. The slider variables let us choose different colors as inputs for prediction. Lastly, labelP renders the predicted label text, and lossP prints the training loss.

Next, let’s define the setup() function to initialise all of these variables; below is the code snippet.

function setup() {
 lossP = createP("loss");
 
 createCanvas(100, 100);
 
 labelP = createP("label");
 
 rSlider = createSlider(0, 255, 255);
 gSlider = createSlider(0, 255, 0);
 bSlider = createSlider(0, 255, 255);
 
 let nnOptions = {
   dataUrl: "data/colorData.json",
   inputs: ["r", "g", "b"],
   outputs: ["label"],
   task: "classification",
   debug: true,
 };
 neuralNetwork = ml5.neuralNetwork(nnOptions, modelReady);
}

In this function, lossP and labelP are created with p5.js's createP() function, initially holding the placeholder strings "loss" and "label". The next three variables (rSlider, gSlider, bSlider) are created with createSlider() and correspond to the red, green, and blue color values respectively, each ranging from 0 to 255.

Next, the whole configuration of the neural network is collected in the nnOptions object. The dataUrl property points to the dataset, the inputs property lists the dataset keys r, g, and b, and the outputs property points to the label key. Finally, the task property is set to "classification" to tell the neural network that classification is the desired operation.

Lastly, the neural network is created with the ml5.neuralNetwork() function, passing nnOptions as the configuration and modelReady as the callback to run once the data has loaded.

Step 5: Setting Hyperparameters and Training the Model

In this step, the declared neural network is trained by calling the train() method on the neuralNetwork variable.

function modelReady() {
 // Scale the raw RGB values to a common range before training.
 neuralNetwork.normalizeData();
 // Hyperparameters for training.
 const trainingOptions = {
   epochs: 20,
   batchSize: 64,
 };
 neuralNetwork.train(trainingOptions, whileTraining, finishedTraining);
 // Start guessing while training!
 classify();
}
 
// Called after every epoch; updates the loss display on the page.
function whileTraining(epoch, logs) {
 lossP.html(`Epoch: ${epoch} - loss: ${logs.loss.toFixed(2)}`);
}
 
// Called once training completes.
function finishedTraining(anything) {
 console.log("done!");
}
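
The training code above calls classify(), which isn't shown in the snippet. A minimal version, adapted from the ml5.js color classifier example, is sketched below: it reads the current slider values, asks the trained network for a label, displays it via labelP, and keeps re-classifying so the prediction updates as you move the sliders. The draw() function is the standard p5.js loop and simply paints the canvas with the currently selected color.

// p5.js draw loop: fill the canvas with the color picked by the sliders.
function draw() {
  background(rSlider.value(), gSlider.value(), bSlider.value());
}

// Read the slider values and ask the network for a label.
function classify() {
  const inputs = {
    r: rSlider.value(),
    g: gSlider.value(),
    b: bSlider.value(),
  };
  neuralNetwork.classify(inputs, gotResults);
}

// Callback for classify(): show the top label, then classify again.
function gotResults(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  labelP.html(`label: ${results[0].label}`);
  classify();
}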

Now, to run this project, you can serve index.html with any web server, or deploy the files to Netlify. I’m currently using `python3 -m http.server` to serve the project locally. It runs on port 8000, so you can navigate to localhost:8000 and see something like the screenshot below.

Training and predictions in the web browser using ml5.js

This code is adapted from the ml5.js examples; to explore more like it, browse through the following link.

Conclusion

In this article, we've learned how neural networks can be trained in web browsers using JavaScript libraries, and how this differs from the traditional pipeline. We also worked through an example using ml5.js, a library for building neural networks in JS, to create a color classifier. If you want to explore more examples and build your own networks, check out the TensorFlow.js and Brain.js libraries, which let you work with large image, text, and audio datasets.
