What the hell does serverless mean?

Written by aaronedell | Published 2018/01/28
Tech Story Tags: serverless | cloud-computing | infrastructure | machine-learning | containers

source: Trek10 | https://www.trek10.com/blog/serverless-framework-for-processes-projects-and-scale/

Serverless computing is most likely going to be the infrastructure ROI king of 2018. It would be wise for any business looking to scale to take a hard look at it. So… what the hell is serverless computing?

How can you not have servers?

It seems a bit counterintuitive to think that the future of cloud computing and scale is a concept that sounds like philosophical bullshit. “Who needs servers, man…?” But it’s actually quite serious.

Let’s clear the air on the most irritating aspect of serverless: serverless computing still requires servers, so the name is totally misleading.

The difference between traditional cloud computing and serverless is that you, the customer who requires the computing, don’t pay for underutilized resources. Instead of spinning up a server in AWS, for example, you’re just paying for code execution time. The serverless computing service takes your functions as input, performs the logic, returns your output, and then shuts down. You are only billed for the resources used during the execution of those functions.
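To make that pricing model concrete, here’s a rough back-of-the-envelope comparison in Python. All the prices and workload numbers here are made-up assumptions for illustration, not anyone’s actual rate card.

```python
# Illustrative only: rough cost comparison between an always-on server and
# per-invocation billing. Every number below is an assumption for the example.

ALWAYS_ON_MONTHLY = 150.00         # hypothetical monthly cost of a mid-size VM
PRICE_PER_MILLION_REQUESTS = 0.20  # hypothetical per-request charge
PRICE_PER_GB_SECOND = 0.0000167    # hypothetical compute charge

def faas_monthly_cost(requests, avg_seconds, memory_gb):
    """Estimate a month of function invocations billed only for execution time."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A bursty workload: 2 million invocations, 200 ms each, 512 MB of memory.
print(f"FaaS:      ${faas_monthly_cost(2_000_000, 0.2, 0.5):.2f}")
print(f"Always-on: ${ALWAYS_ON_MONTHLY:.2f}")
```

The point isn’t the exact dollars; it’s that idle time simply disappears from the bill.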

Function as a Service (FaaS) Platforms

Serverless computing should really be called function-as-a-service. If you’ve heard of AWS Lambda or Google Cloud Functions, then you’ve heard of FaaS. The benefit of these platforms is that developers don’t have to think about multi-threading or load balancing. They can just focus on their code and trust the FaaS to handle all the resource management for them. It also turns out to be a lot cheaper than being billed for a fixed quantity of servers.
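To give you a feel for it, here’s a minimal sketch of what a Lambda-style function can look like in Python. The event shape is an assumption for illustration; real payloads depend on whatever triggers the function.

```python
# Minimal AWS Lambda-style handler (Python). The "name" field in the event is
# an assumed example; real event payloads depend on the trigger (HTTP, queue, etc.).

import json

def lambda_handler(event, context):
    """Receive an event, perform some logic, return a response, then exit.

    The platform handles provisioning, scaling, and per-invocation billing;
    this code never touches a server.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```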

There are some downsides to using FaaS from cloud companies as well. For one, they will spin down your runtime environments if you’re not using them a lot, so infrequently called functions suffer cold-start delays. Paradoxically, they also cap the total amount of resources available to you, which introduces latency and limits high-performance workloads. Monitoring, debugging, and security are also tricky with these cloud providers (as they would be with any cloud computing workflow) due to the fact that it… well… runs in a public cloud that you don’t have access to or control over.

Old school (sort of)

Machine Learning

It’s no secret that most enterprises, companies, and startups are going to spend time and money on AI and machine learning this year. What you will see is a slow realization that machine learning, containerization, and FaaS are all meant for each other. Machine learning is a very specific set of calculations that, in a lot of cases, run as discrete units. For example, you may need to process every image your social network uploads for nudity so you can flag it as inappropriate (or very appropriate… depending on what your social network does). Each image will require a call to something like Nudebox by Machine Box, which will return some information that you can then act on or store (or both). Nudebox can process those calls individually, in parallel, at scale… it doesn’t really matter, as long as it’s fast. As your machine learning needs grow, you’ll want to scale, but it’s not cost-effective to be spinning up big servers with GPUs in them unless you’re going to be using every bit of their resources all the time they’re online. (Machine Box boxes don’t require GPUs, and are incredibly lightweight.)
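As a rough sketch, here’s what that per-image call could look like against a locally running Nudebox container. The endpoint path, upload field, and response field names are assumptions here, so check the Nudebox docs for the real API.

```python
# Hedged sketch: send an uploaded image to a locally running Nudebox container
# and flag it if the returned score crosses a threshold. The endpoint path and
# response field below are assumptions; consult the Nudebox docs for the real API.

import requests

NUDEBOX_URL = "http://localhost:8080/nudebox/check"  # assumed local container address

def is_inappropriate(image_path: str, threshold: float = 0.8) -> bool:
    """Return True if the box thinks the image should be flagged."""
    with open(image_path, "rb") as f:
        resp = requests.post(NUDEBOX_URL, files={"file": f})
    resp.raise_for_status()
    score = resp.json().get("nude", 0.0)  # assumed response field name
    return score >= threshold

if __name__ == "__main__":
    print(is_inappropriate("upload.jpg"))
```

Each call is a small, self-contained unit of work, which is exactly the shape FaaS platforms are built to run.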

Containers

Learn about containers, and learn about them now. They are the natural evolution of virtual machines. They essentially solve a similar problem to FaaS: they make things cheaper because you’re not paying for unused stuff… like all that time you sit around waiting for a Windows VM to boot just to run your software. They are going to be a critical part of the machine learning/FaaS future, because containers do the heavy lifting. Containers are where the machine learning models run.
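To see how little ceremony is involved, here’s a hedged sketch that starts a machine-learning container using the Docker SDK for Python (`pip install docker`). The image name and the MB_KEY variable follow Machine Box conventions but are assumptions in this example; substitute your own.

```python
# Hedged sketch: start a model-serving container with the Docker SDK for Python.
# The image name and MB_KEY environment variable are assumed examples.

import os
import docker

client = docker.from_env()

container = client.containers.run(
    "machinebox/nudebox",                                   # assumed image name
    environment={"MB_KEY": os.environ.get("MB_KEY", "YOUR_KEY_HERE")},  # assumed key variable
    ports={"8080/tcp": 8080},                               # expose the box's HTTP API locally
    detach=True,
)

print(f"Container {container.short_id} is running; the model is now served over HTTP.")
```

The model, its dependencies, and its API all travel inside one image, so scaling is just running more copies.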

Save money now

This is how you can save money now: build your own serverless platform for your machine learning needs. It sounds hard, but it’s really not. A lot of these capabilities are open source and/or really affordable, and they scale nicely. My company Machine Box is $499/month for unlimited machine learning action. OpenFaaS, Docker, and Kubernetes are free unless you need enterprise support, and even then it’s still a lot cheaper than running all of this in the public cloud.
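To show how small the code side of that platform is, here’s a minimal OpenFaaS-style Python handler, following the shape of the official Python templates (a `handle(req)` function). The deployment wiring (stack file, faas-cli commands) is left out, so treat it as a sketch rather than a finished service.

```python
# handler.py - a minimal OpenFaaS-style Python function. OpenFaaS routes the
# HTTP request body to handle() and returns whatever it gives back; scaling of
# the underlying containers is handled by Docker or Kubernetes.

def handle(req):
    """Take the request body as input and return the response body."""
    text = (req or "").strip()
    return f"You sent {len(text)} characters."
```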

But it isn’t just about the cost of running all this; it’s about the time it’s going to save you on implementing, deploying, and scaling. In about one hour (I’m not exaggerating, time it and write me back if I’m wrong), you can deploy an ultra-scalable, production-ready, fully enterprise-grade machine learning platform running on containers and serverless infrastructure.

That means you can go to your boss and tell them you’ve saved them millions of dollars… IN ONE HOUR! I guarantee you they’ll give you a promotion and a raise on the spot*.

*I cannot guarantee this.

So do yourself a favor and learn more about these tools. You’re welcome.

What is Machine Box?

Machine Box puts state-of-the-art machine learning capabilities into Docker containers so developers like you can easily incorporate natural language processing, facial detection, object recognition, and more into your own apps very quickly.

The boxes are built for scale, so when your app really takes off just add more boxes horizontally, to infinity and beyond. Oh, and it’s way cheaper than any of the cloud services (and they might be better)… and your data doesn’t leave your infrastructure.

Have a play and let us know what you think.

What is OpenFaaS?

https://www.openfaas.com/

From their website: “With OpenFaaS you can package anything as a serverless function — from Node.js to Golang to CSharp, even binaries like ffmpeg or ImageMagick.

You can try OpenFaaS in 60 seconds or write and deploy your first Python function in around 10–15 minutes. So grab a coffee and learn how the FaaS-CLI makes serverless functions simple.

So bring your laptop, your own on-prem hardware or create a cluster in the cloud. Pick Docker or Kubernetes to do the heavy lifting enabling you to build a scaleable, fault-tolerant event-driven serverless platform for your applications.

Our core values are: developer first, operational simplicity and community centric.”

