Practical hosting options for Node.js

Node.js is an application runtime environment that enables using JavaScript for building server-side applications. Thanks to its non-blocking, event-driven I/O model, Node.js excels at scalable and real-time workloads.

As a single-threaded, asynchronous platform, Node.js differs from most backend platforms, and that difference shapes nearly every operational decision. One of the most important consequences of this unique nature is how Node.js applications should be hosted.

Node.js Endpoint Configuration

We recommend configuring each server to start shedding requests whenever the event-loop delay significantly exceeds its average. When a Node.js REST service responds to the load balancer with a 503 status code, the balancer can re-route the request to healthy nodes while the overloaded instance drains its event queue, instead of letting problems cascade. Based on this fact alone, whatever hosting platform we choose has to run our production code behind a load balancer, with multiple Node.js instances available to handle requests for each service.


  • Heroku
    Apps run in lightweight containers called dynos. The platform provides limited autoscaling, plus manual scaling via the web interface, CLI, and API. Only “performance dynos” have dedicated resources and therefore offer consistently reliable performance. Other dyno types share resources, and while Heroku apps usually exhibit decent performance, if your dyno is colocated with a resource-hungry dyno owned by another customer, your performance will suffer. Heroku allows developers to persist and cache data using popular open source solutions, including Postgres, Redis, and Apache Kafka. MySQL, MongoDB, and other solutions are available as add-ons.
    Heroku offers database and application rollback features, which can be useful in some situations.
    The Heroku environment closely resembles the local development environment, making the initial DevOps pipeline easy to set up. Overall, Heroku is one of the easiest platforms to manage.

  • AWS Elastic Beanstalk
    Apps run in EC2 virtual machines, with decent autoscaling support and better resource isolation. Elastic Beanstalk works with Postgres, MySQL, Oracle, Redis, and MongoDB provisioned as resources (enabled via extensions), as well as with the AWS cloud database, Amazon Aurora.


  • AWS Lambda
    Autoscaling out of the box. Eliminating cold starts and invoking Lambda functions directly, without going through API Gateway, lets them execute with latency comparable to apps hosted on EC2. AWS Lambda is stateless; if it needs to persist data, it must use another resource, such as an Amazon cloud database, Amazon S3, or an external database and/or cache, which could be hosted on EC2 or any REST-accessible cloud service.
    Since Lambda is an autoscaling solution, debugging Lambda code presents additional challenges compared to code running on a single host.

  • Google Cloud Functions and Google Cloud Functions for Firebase
    Autoscaling out of the box, with an API similar to Express.js. The optional Firebase integration provides access to analytics, authentication, and a real-time database.
    In our experience, Google’s FaaS offerings usually show slightly better warm-up response times, yet somewhat lower throughput and slightly higher latency than their AWS counterparts.

Node.js Process Managers

  • pm2
    The most popular Node.js process manager, pm2 comes with a variety of advertised features, including cluster mode, hot reload, monitoring, and more. There is no remote management, and each node runs unaware of the others.

  • strong-pm
    The StrongLoop process manager is an attempt at a production-grade Node.js process manager, with improved clustering support.


Container Orchestration

  • Amazon ECS
    Supports deployment to both managed (AWS Fargate) and EC2-backed clusters, offers private registry support, scaling, and a management API, and integrates with AWS services, including security, data storage, and others. Conceptually, Amazon ECS is similar to Kubernetes.

  • Microsoft Azure Container Service (ACS)
    Azure Container Service (ACS) is based on Apache Mesos, an open source orchestration system. Microsoft ACS comes with tools similar to those offered by AWS ECS. ACS can host containers on Kubernetes, DC/OS, or Docker Swarm clusters.

  • Kubernetes
    Kubernetes is emerging as the de facto standard platform for cloud-scale workloads. While Docker Swarm and Apache Mesos, which offer similar technical functionality, are both excellent products, their market share is nowhere near that of Kubernetes. Available for on-premises, single-cloud, and multicloud hosting, Kubernetes offers a truly ubiquitous platform for deploying software solutions at cloud scale. Kubernetes hosting is offered by many vendors, including Amazon AWS, Google (GKE), and Microsoft Azure.
    While Kubernetes is a great platform for achieving scale, managing Kubernetes requires deep domain expertise. If you are new to Kubernetes, please visit our DevOps blog for practical advice.


While all of the above are viable options, and both pm2 and strong-pm are good products, we usually do not use Node.js process managers, nor do we host Node.js on bare metal or bare VMs, where process managers would be a more natural choice. Hosting Node.js on a fixed number of machines makes dynamic scaling difficult.

For simple apps, we believe PaaS and Serverless/FaaS solutions from leading cloud hosting vendors offer excellent performance and reliability, with costs far below hosting on VMs or hardware.
For large microservices, architects should consider both Serverless and Kubernetes. FaaS is generally easier to build on and manage, while Kubernetes gives operations far more control over runtime behavior. Unlike FaaS, operating Kubernetes requires significant technical expertise and DevOps resources.
