Admittedly, this sentiment isn’t always seen as a good thing from the business perspective of a project. The “new stuff” is rarely battle-tested or a known quantity, and our natural tendency as humans is to be cautious when considering an unknown. Unfortunately, this behavioral pattern is not always a helpful one: many organizations that achieve success in the market today are “first movers” that aren’t afraid to use the newest and best tools available to them. Today, many organizations are realizing that the ability to adapt quickly to a changing business environment and varying client needs is essential to survival.

Cloud computing has brought us a wealth of options and flexibility for delivering value to the end-user. However, learning how to use the myriad of new tools while delivering greater value and higher efficiency is much easier said than done.

Two of the hottest technologies that people are talking about these days are serverless computing and network service mesh.

So, what does serverless actually mean?

Serverless is on-demand, distributed computing wherein the cloud provider manages the server and the environment. Don’t be misled by the “-less” suffix, though: a server is still required to run your code. The serverless paradigm packages snippets of code, called functions, that run on cloud provider runtimes referred to as functions as a service (FaaS); a minimal sketch of such a function follows the list below. Naturally, we can understand functions in the context of the application continuum:

  • Functions are to containers what containers are to VMs
    • Monolith → applications → services → functions
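
To make this concrete, here is a minimal sketch of what one of these functions can look like. The event/context handler signature below follows the convention used by providers such as AWS Lambda; the field names in the event payload are illustrative assumptions rather than a fixed standard.

    import json

    def handler(event, context):
        """Entry point invoked by the provider's runtime for each request.

        `event` carries the request payload (assumed here to be an API
        gateway-style JSON body) and `context` carries runtime metadata.
        Both are supplied by the platform; we never manage the server.
        """
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")

        # The function does one small piece of work and returns; the platform
        # handles scaling, routing, and tearing the environment back down.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }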

What does that buy us?

  • For developers, it offers greater scalability, flexibility, faster time to release, and reduced costs
    • Reduces cost through lower overall resource utilization when a long-running process can be transformed into one or more functions, because code is only loaded and executed at the time of a request
    • Closer to a true pay-for-what-you-use model, minimizing what we pay for in idle CPU time (see the rough cost sketch after this list)
  • Auto-scaling for high and low request periods can happen much faster
    • Functions are typically smaller artifacts than containers because they don’t come bundled with system dependencies
    • Ideally, functions are deployable within milliseconds
  • Functions are decoupled, which allows for code isolation and the ability to release updates, patches, fixes, and new features faster
  • No need to worry about management of environment, libraries, and backend services – we only need to provide code, which is run by the vendor
  • Strong use case for the IoT paradigm, where pieces of code can be executed at the network’s “edge” while storing little or no data locally (e.g., just a trained neural network model)
  • Certified Cloud Foundry (CF) platform providers are supporting hybrid architectures that allow CF platforms to leverage both containers and functions
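
To see why the pay-for-what-you-use point matters, here is a rough back-of-the-envelope comparison. Every rate and count below is a made-up placeholder, not any provider’s actual pricing; the takeaway is the shape of the calculation, not the numbers.

    # Illustrative comparison of an always-on VM vs. per-invocation function
    # pricing. All rates are placeholders -- check your provider's price sheet.

    SECONDS_PER_MONTH = 60 * 60 * 24 * 30

    # Hypothetical always-on VM: billed for every second, busy or idle.
    vm_rate_per_second = 0.000015          # placeholder $/s
    vm_monthly_cost = vm_rate_per_second * SECONDS_PER_MONTH

    # Hypothetical function: billed only while handling requests.
    requests_per_month = 2_000_000
    avg_duration_seconds = 0.2
    gb_memory = 0.5
    rate_per_gb_second = 0.0000167         # placeholder $/GB-s
    request_fee = 0.0000002                # placeholder $/invocation

    faas_monthly_cost = (
        requests_per_month * avg_duration_seconds * gb_memory * rate_per_gb_second
        + requests_per_month * request_fee
    )

    print(f"Always-on VM: ${vm_monthly_cost:,.2f}/month")
    print(f"Function:     ${faas_monthly_cost:,.2f}/month")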

What are the challenges?

Ultimately, not every application in a business’s portfolio is suitable for converting into a suite of functions. Here are a couple of things to consider when creating a new function or migrating to a serverless architecture.

Latency:

Functions are not long-running processes, so issues with request latency may arise: the function is instantiated at the time of the request (the “cold start”), and that startup cost is paid before any useful work begins. Depending on the nature of the use case, this may not be acceptable.
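
One common way to soften this penalty is to do expensive setup once per runtime instance rather than on every request, so only the first (“cold”) invocation pays the full price. A hypothetical sketch, reusing the handler convention from the earlier example:

    import time

    # Work done at module import time happens once per runtime instance,
    # i.e., only on a cold start. Warm invocations reuse the results.
    _start = time.time()
    DB_CONNECTION = object()   # stand-in for an expensive client or connection setup
    MODEL = object()           # stand-in for loading a large model or configuration
    COLD_START_SECONDS = time.time() - _start

    def handler(event, context):
        # Per-request work stays small; the heavy lifting above is amortized
        # across every request served by this warm instance.
        return {
            "statusCode": 200,
            "body": f"cold-start setup took {COLD_START_SECONDS:.3f}s",
        }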

Environment:

Although we no longer need to manage our own environment and system dependencies, this also means that we hand over control of the environment, language versions, and libraries to the vendor.

What is a Network Service Mesh?

The second hot technology, Network Service Mesh (NSM), is a new addition to the Cloud Native Computing Foundation (CNCF) as of April 2019, backed by Cisco, VMware, and Red Hat. The project focuses on bringing the cloud native paradigm to Layers 2 and 3[i] by creating an abstraction layer that exposes a networking interface.

It can be helpful to think of NSM as the cloud native implementation of Network Function Virtualization (NFV). The NFV framework is composed of Virtualized Network Functions (VNF), which are responsible for handling specific network functionalities for one or more VMs. Furthermore, VNFs are components that can be chained together much like microservices or functions.

What does that buy us?

  • Since NSM can be deployed as a standard Kubernetes resource, we don’t need to learn the syntax of yet another configuration and deployment tool
  • Simply deploy and manage network services with abstracted compute and storage, bringing the cloud computing paradigm to network functionality
  • Reduce capital expenditure (CapEx) for proprietary hardware – with virtualized networking, features are implemented by code instead of hardware
  • More quickly respond to and address security concerns
    • No need to update configuration of each machine – update configurations from your version control system and updates are applied to all machines 

Real Life Use Case

As a developer, it can be a challenge to get code running in a production environment for many reasons. One such hurdle can be getting DNS entries for the application or service. This process often requires emailing someone on the operations team with a title like “system administrator” to request a DNS change that forwards requests to Cloud Foundry routes. I have found myself waiting days for these requests to be resolved. Fortunately, there is a better way: using the network service mesh abstraction to let our service describe the routing capabilities it needs via a Kubernetes deployment.
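
Here is a sketch of what that self-service request could look like, using the official Kubernetes Python client. The annotation key and network service name are illustrative assumptions about how a workload might declare the connectivity it needs; the exact keys depend on the NSM release in use, and the image name is hypothetical.

    from kubernetes import client, config

    # Connect to the cluster (use load_incluster_config() when running in a pod).
    config.load_kube_config()

    # Illustrative annotation: the workload declares the network service it
    # needs instead of a human filing a ticket to change routing by hand.
    # The annotation key and value here are assumptions for the example.
    pod_annotations = {"ns.networkservicemesh.io": "my-routing-service"}

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hello-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(
                    labels={"app": "hello-app"},
                    annotations=pod_annotations,
                ),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="hello", image="example/hello:1.0")
                    ]
                ),
            ),
        ),
    )

    # The deployment is created like any other Kubernetes resource; no
    # separate ticketing workflow or vendor-specific tooling is involved.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)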

The real promise of cloud computing is not just new technologies and buzzwords. Playing with the new stuff is great, but the real goal is to solve real client needs, deliver tangible value, and increase the efficiency of resource usage. Functions and Network Service Mesh are truly meant to help organizations create workflows that:

  • Enable developers and platform maintainers to do their jobs with as little friction as possible. Create a self-service approach that executes predictable steps for expected outcomes
  • Increase visibility into what's happening in real time
  • Increase transparency between developers, operations, and business sponsors for better collaboration
  • Gather data that can be used to better respond to issues and to build a better product

Navigating the ever-evolving world of technology is challenging. It is important to understand which of the “shiny new things” are worth your investment. Working with an expert partner such as CGI can help cut through the noise and find the right solutions for your organization.

Learn more about how we help clients tackle complex cloud-based solutions in our latest whitepaper.

 

[i] Layer 2 is responsible for transferring data between adjacent network nodes in a wide area network (WAN) or between nodes on the same local area network (LAN) segment. Layer 3 is responsible for packet forwarding, including routing through intermediate routers.

About this author


Tito Yslas

Consultant

Tito Yslas is an alumnus of the University of Southern California where he focused his studies on economics, mathematics, and finance. Immediately after graduating from USC, Tito served as a business development associate at a merchant banking firm. After his experience in the financial industry, ...
