How Multi-Processing Helped Me Speed Up My JavaScript Application By 4x

Written by rusanov | Published 2023/07/25
Tech Story Tags: nodejs | multiprocessing | javascript | javascript-tutorial | javascript-fundamentals | cpu | guide | hackernoon-top-story | hackernoon-es | hackernoon-hi | hackernoon-zh | hackernoon-vi | hackernoon-fr | hackernoon-pt | hackernoon-ja


JavaScript has been inherently designed as a single-threaded beast. Yet, in the wild terrains of computing, the predators known as 'multiprocessing' and 'multi-core processors' are waiting to be tamed, ready to boost your code execution to unheard-of speeds. 💪🚀

I dared to step into this jungle, put my code to the ultimate survival test, and emerged with astonishing results. 🏆 Now, it's your turn to join me on this enthralling quest. We'll delve deep into the enigma of multiprocessing in Node.js, armed with riveting code examples and shining the torch 🔦 on the spectacular fruits of my experiments. 🍎🍏

Get ready to set sail on this adrenaline-fueled adventure of supercharging JavaScript performance through the magic of multiprocessing! Buckle up and brace yourself as we're about to launch into the mesmerizing realm of high-octane coding.

Before we venture too deep, let's equip ourselves with some trusty tools. We'll fashion a few auxiliary functions to simulate the often arduous computational work. Let's create a new artifact, a file named utils.js, and inscribe these essential incantations there.

// utils.js

// fills an array with `size` random numbers between 0 and 1
function generateRandomData(size) {
  const data = [];

  for (let i = 0; i < size; i++) {
    data.push(Math.random());
  }

  return data;
}

function processData(data) {
  // performs some calculations on the array
  // to simulate high resource intensity

  let sum = 0;
  for (let num of data) {
    for (let j = 0; j < 1000000; j++) {
      sum += Math.sqrt(num);
    }
  }

  return sum;
}

module.exports = {
  generateRandomData,
  processData,
};

Single-threaded version

Execution in a single thread represents a hardworking and reliable approach to problem-solving. The single-threaded version's code is quite straightforward: we generate the data and send it all off for processing in one go.

// sync.js
const { generateRandomData, processData } = require("./utils");

const data = generateRandomData(30000);

console.time("single-thread. Time:");
processData(data);
console.timeEnd("single-thread. Time:");

We launch the script with the command: node sync.js

We wait... and wait... and wait...

And after all that waiting, we receive a message indicating the script's execution time.

single-thread time: 25.888s

This approach fits the bill for most cases. But there's one hiccup. Who, in their right mind, adores the art of waiting? To overcome this agonizing delay, we ought to harness the full firepower of our computers! After all, most modern computers come loaded with more than a single CPU core!

So, why should we let those additional cores sit idle when they could be crunching numbers and supercharging our code execution? It's time to light up those sleeping giants and unlock the raw power of multiprocessing! Let's dive in!
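
Curious how many cores your own machine brings to the party? Node's built-in os module can tell you. Here's a tiny check (the cores.js filename is just for illustration, a side note rather than part of the main experiment):

// cores.js
const os = require("os");

// prints the number of logical CPU cores Node can see
console.log(`This machine has ${os.cpus().length} CPU cores`);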

Multi-Processed version of the script

By adopting the multiprocessed approach, we can leverage multiple cores of our CPU, propelling our script's performance several times over.

In essence, we're simply partitioning a sizable dataset into segments and assigning each segment for processing to a discrete CPU core.

Create a file entitled multi-process.js and populate it with the following content.

// multi-process.js
const childProcess = require("child_process");
const utils = require("./utils");

const data = utils.generateRandomData(30000);
const chunkSize = Math.ceil(data.length / 4);
const chunks = [];

for (let i = 0; i < 4; i++) {
  const start = i * chunkSize;
  const end = start + chunkSize;
  chunks.push(data.slice(start, end));
}

console.time("multiProcessed");
const workers = [];
let results = []; // result collection array

for (let i = 0; i < chunks.length; i++) {
  const worker = childProcess.fork("./worker.js");
  // pass the worker's number and its chunk of data to the child process
  worker.send({ workerNumber: i, data: chunks[i] });

  workers.push(
    new Promise((resolve, reject) => {
      worker.on("message", (result) => {
        results.push(result); // add the result to the result array
        resolve();
      });
      worker.on("error", reject);
      worker.on("exit", (code) => {
        if (code !== 0) {
          reject(new Error(`Worker stopped with exit code ${code}`));
        }
      });
    })
  );
}

Promise.all(workers)
  .then(() => {
    console.timeEnd("multiProcessed");
    console.log("Processing results:", results);
  })
  .catch((err) => console.error(err));
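
Each forked child runs a separate file, worker.js. A minimal version, consistent with the steps described below, might look like this (the exact console messages are my best guess from the log output further down):

// worker.js (a minimal sketch; details may differ)
const { processData } = require("./utils");

process.on("message", ({ workerNumber, data }) => {
  console.log(`Worker ${workerNumber} started`);

  // crunch this worker's slice of the data
  const result = processData(data);

  // hand the result back to the main process
  process.send(result);

  console.log("====================");
  console.log(`Worker ${workerNumber} finished`);

  // terminate cleanly so the main process sees exit code 0
  process.exit(0);
});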

This code reveals the orchestration of a solitary worker process in the symphony of multi-processed data handling in Node.js.

In brief, here's what's happening:

  • The worker receives data and its number from the main process via process.on('message').

  • The processData function carries out calculations on the portion of data assigned to this worker.

  • The result is sent back to the main process via process.send().

  • The worker terminates with code 0 via process.exit().

Fire up the script with the command: node multi-process.js

Hold tight for the turbo boost...

And the verdict is in: the code finished in just over 5 seconds!

Worker 0 started
Worker 1 started
Worker 2 started
Worker 3 started
====================
Worker 1 finished
====================
Worker 2 finished
====================
Worker 3 finished
====================
Worker 0 finished
multiProcessed: 5.266s
Processing results: [
  4971422688.053512,
  4989646323.157899,
  4999088030.661542,
  5008034869.924775
]

Our script ran more than four times faster! Isn't that magnificent?

The Great Unveiling: Testing Results

With an eager curiosity, I ran both scripts on a computer blessed with a 4-core processor, waiting to witness the magic unfold:

  • The solo artist, our single-threaded script, diligently processed the data in 25.8 seconds.

  • The power-packed team, our multi-processed script, knocked it out of the park in just 5.2 seconds!

Behold the power of multiprocessing: from 25.8 seconds down to 5.2, more than quadrupling the speed of computations!

These stark contrasts highlight how multiprocessing can drastically amplify the computational capabilities of your machine and trim down execution time.

Final Thoughts

Our thrilling exploration paints a vivid picture of how multiprocessing can turbocharge computational tasks in Node.js. Unleashing your code onto every single processor core offers a tangible leap in performance, akin to shifting from walking to teleportation!

It's definitely worth adding this arrow to your coding quiver and experimenting with this approach in your projects. And with the advent of Worker Threads (the worker_threads module) in Node.js, running work in parallel has become a breeze.
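
For a taste of that approach, here's a minimal worker_threads sketch (the threads-demo.js filename and structure are purely illustrative, not the child_process code from the experiment above):

// threads-demo.js (a minimal worker_threads sketch)
const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");
const { generateRandomData, processData } = require("./utils");

if (isMainThread) {
  // main thread: split the data and spawn one worker thread per chunk
  const data = generateRandomData(30000);
  const chunkSize = Math.ceil(data.length / 4);

  for (let i = 0; i < 4; i++) {
    const chunk = data.slice(i * chunkSize, (i + 1) * chunkSize);
    const worker = new Worker(__filename, { workerData: chunk });
    worker.on("message", (sum) => console.log(`Chunk ${i} result:`, sum));
  }
} else {
  // worker thread: process its chunk and post the result back
  parentPort.postMessage(processData(workerData));
}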

Got an adrenaline rush reading this? Feel free to share your own adventures with multiprocessing in Node.js in the comments below! Let's continue to unravel the mysteries of high-speed coding together.


Written by rusanov | Former lawyer, now tech enthusiast. Sharing tips and insights on AI, coding, and how tech boosts life quality
Published by HackerNoon on 2023/07/25