Tito Yslas

Consultant

Admittedly, this sentiment isn’t always seen as a good thing from the perspective of a project’s business interests. The “new stuff” is likely not battle-tested or a known quantity, and our natural tendency as humans is to be cautious when considering an unknown.

Unfortunately, this behavioral pattern is not always a helpful one – many organizations that achieve success in the market today are “first-movers” that aren’t afraid to use the newest and/or best tools available to them. Today, many organizations are realizing that the ability to adapt quickly to a changing business environment with varying client needs is essential to survival.

New cloud computing tools can improve value and efficiency

Cloud computing has brought us a wealth of options and flexibility for delivering value to the end-user. However, learning how to use myriad new tools while delivering greater value and higher efficiency is much easier said than done.

Two of the hottest cloud computing technologies that people are talking about these days are:

  1. Serverless computing
  2. Network service mesh

What is serverless computing?

Serverless is on-demand, distributed computing in which the cloud provider manages the server and the environment. However, don’t be misled by the “-less” suffix; a server is still required to run your code. The serverless paradigm runs snippets of code, called functions, on cloud provider runtimes referred to as function as a service (FaaS). Naturally, we can understand functions in the context of the application continuum:

  • Functions are to containers what containers are to VMs
  • Monolith → applications → services → functions
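
To make the idea concrete, here is a minimal sketch of a function written in Python in the AWS Lambda handler style; the event fields and the greeting logic are illustrative assumptions rather than part of any particular project:

    import json


    def handler(event, context):
        """Entry point the FaaS runtime invokes on each request.

        The cloud provider supplies `event` (the request payload) and
        `context` (runtime metadata); we only supply this code.
        """
        # Illustrative assumption: the event carries a JSON body with a "name" field.
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")

        # Return an HTTP-style response; the provider maps it back to the caller.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }


    if __name__ == "__main__":
        # Local smoke test with a fake event; in production the provider calls handler().
        print(handler({"body": json.dumps({"name": "cloud"})}, context=None))

The provider decides when to instantiate this code and how many copies to run; the developer never provisions or patches the underlying server.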

What are the benefits of serverless computing?

For developers, there are many pros of serverless computing, including:

  1. Reduced costs
  2. Scalability
  3. Speed
  4. Flexibility

1. Serverless computing can reduce costs

Costs can be reduced because overall resource utilization drops when a long-running process is transformed into one or more functions: code is only loaded and executed at the time of a request. This is closer to a true pay-for-what-you-use model, minimizing what we pay for idle CPU time.
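
As a back-of-the-envelope illustration, the sketch below compares an always-on virtual machine with a function billed only for the compute it actually uses; every rate and traffic figure is an assumed placeholder, not a real vendor’s pricing:

    # Hypothetical monthly cost comparison: always-on VM vs. pay-per-use functions.
    # All rates and volumes below are illustrative assumptions, not real pricing.
    HOURS_PER_MONTH = 730

    # Always-on VM: billed for every hour, busy or idle.
    vm_hourly_rate = 0.05                      # assumed $/hour
    vm_monthly_cost = vm_hourly_rate * HOURS_PER_MONTH

    # Functions: billed per request plus per GB-second of execution time.
    requests_per_month = 2_000_000
    avg_duration_s = 0.2                       # 200 ms per invocation
    memory_gb = 0.5
    price_per_million_requests = 0.20          # assumed $
    price_per_gb_second = 0.0000166            # assumed $

    gb_seconds = requests_per_month * avg_duration_s * memory_gb
    faas_monthly_cost = (
        (requests_per_month / 1_000_000) * price_per_million_requests
        + gb_seconds * price_per_gb_second
    )

    print(f"VM (always on):   ${vm_monthly_cost:.2f}/month")
    print(f"Functions (FaaS): ${faas_monthly_cost:.2f}/month")

Under these assumed numbers the function bill is a fraction of the always-on machine, because idle hours simply disappear from the invoice.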

2. Scalable cloud computing

Auto-scaling for high and low request periods can happen much faster with serverless computing, as these functions are typically smaller artifacts than containers because they don’t come bundled with system dependencies. Ideally, functions are deployable within milliseconds.

3. Serverless computing is faster

These easily deployable functions are also decoupled, which allows for code isolation. It also offers the ability to release updates, patches, fixes, and new features faster.

4. Flexible cloud computing

With serverless computing, there’s no need to worry about managing the environment, libraries, and backend services; we only need to provide code, which the vendor runs. This also makes a strong use case for the IoT paradigm, where pieces of code can be executed at the network’s “edge” without having to store data, or while storing only a small amount of it (e.g., a trained neural network model).
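
As a sketch of that edge scenario (the sensor fields, threshold, and model weights below are invented for illustration), a function can score a reading locally and forward only the decision, never the raw data:

    import json
    import math

    # Illustrative assumption: a tiny pre-trained model shipped with the function
    # as hard-coded weights, so no data store is needed at the edge.
    WEIGHTS = {"temperature": 0.08, "vibration": 1.4}
    BIAS = -6.0


    def handler(event, context):
        """Score a sensor reading at the edge and return only the verdict."""
        reading = json.loads(event.get("body") or "{}")

        # Logistic-regression inference on the incoming reading.
        score = BIAS + sum(WEIGHTS[k] * reading.get(k, 0.0) for k in WEIGHTS)
        probability = 1.0 / (1.0 + math.exp(-score))

        # Only the decision leaves the device; the raw reading is discarded.
        return {
            "statusCode": 200,
            "body": json.dumps({"anomaly": probability > 0.5}),
        }


    if __name__ == "__main__":
        fake_event = {"body": json.dumps({"temperature": 85.0, "vibration": 4.2})}
        print(handler(fake_event, context=None))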

Certified Cloud Foundry (CF) providers (certified platforms) support hybrid architectures that give CF platforms the flexibility to leverage both containers and functions.

What are the serverless computing challenges?

Ultimately, not every application in a business’s portfolio is suitable for converting into a suite of functions. Here are a couple of things to consider when creating a new function or migrating to a serverless architecture:

  • Latency: Functions are not long-running processes, so issues with request latency may arise: the function is instantiated at the time of the request, and depending on the use case this “cold start” delay may not be acceptable (see the sketch after this list).
  • Environment: Although we no longer need to manage our own environment and dependencies, this also means that we hand over control of the environment, language, and libraries used to the vendor.
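
To make the cold-start concern concrete, here is a rough sketch that times a first (cold) invocation against subsequent (warm) ones; FUNCTION_URL is a hypothetical placeholder for a deployed function’s HTTP endpoint:

    import time
    import urllib.request

    # Hypothetical placeholder; point this at a real deployed function to test.
    FUNCTION_URL = "https://example.com/my-function"


    def timed_call(url):
        """Return the round-trip time of one request in milliseconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return (time.perf_counter() - start) * 1000


    # The first call after an idle period typically includes instantiation
    # ("cold start"); later calls usually reach an already-warm instance.
    print(f"cold call:   {timed_call(FUNCTION_URL):.0f} ms")
    for i in range(3):
        print(f"warm call {i + 1}: {timed_call(FUNCTION_URL):.0f} ms")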

What is a network service mesh?

The second hot technology, Network Service Mesh (NSM), is a new addition to the Cloud Native Computing Foundation (CNCF) as of April 2019, backed by Cisco, VMware, and Red Hat. The project focuses on bringing the cloud native paradigm to Layers 2[i] and 3 by creating an abstraction layer that exposes a networking interface.

It can be helpful to think of NSM as the cloud native implementation of Network Function Virtualization (NFV). The NFV framework is composed of Virtualized Network Functions (VNF), which are responsible for handling specific network functionalities for one or more VMs. Furthermore, VNFs are components that can be chained together, much like microservices or functions.

Top benefits of network service mesh

  • Since NSM can be deployed using standard Kubernetes resources, we don’t need to learn the syntax of yet another configuration and deployment tool.

  • Simply deploy and manage network services with compute and storage abstracted away, bringing the same paradigm as cloud computing to network functionality.

  • Reduce capital expenditure (CapEx) for proprietary hardware – with virtualized networking, features are implemented by code instead of hardware.

  • Respond to and address security concerns more quickly: there’s no need to update the configuration of each machine by hand; update configurations in your version control system and the changes are applied to all machines.

Real-life network service mesh use case

As a developer, it can be a challenge to get code running in a production environment for many reasons. One such hurdle can be getting DNS entries for the application or service. This process often requires emailing someone on the operations team with a title like “system administrator” to request that a DNS entry be changed to forward requests to Cloud Foundry routes.

I have found myself waiting days for these requests to be resolved. Fortunately, there is a better way: use the network service mesh abstraction to let our service describe the routing capabilities it needs via a Kubernetes deployment.
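
A minimal sketch of that idea, using the official Python Kubernetes client, might look like the following; the deployment name, image, and network service name are hypothetical, and the annotation key is an assumption drawn from early NSM examples, so check the current NSM documentation for the exact convention:

    from kubernetes import client, config


    def deploy_with_network_service():
        """Create a Deployment whose pods declare the network service they need."""
        config.load_kube_config()  # use the local kubeconfig for cluster access
        apps = client.AppsV1Api()

        labels = {"app": "my-web-service"}  # hypothetical application name
        pod_template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(
                labels=labels,
                # Assumed NSM annotation: the pod describes the routing capability
                # it needs instead of a human filing a DNS ticket.
                annotations={"ns.networkservicemesh.io": "external-http-routing"},
            ),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(name="web", image="example/my-web-service:1.0")
                ]
            ),
        )

        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="my-web-service"),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels=labels),
                template=pod_template,
            ),
        )
        apps.create_namespaced_deployment(namespace="default", body=deployment)


    if __name__ == "__main__":
        deploy_with_network_service()

Because this deployment lives in version control alongside the application code, the networking request travels with the service instead of through an email thread.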

The real promise of cloud computing is not just new technologies and buzzwords. Playing with the new stuff is great, but the real goal is to solve real client needs, deliver tangible value, and increase efficiency of resource usage. Functions and Network Service Mesh are truly meant to help organizations create workflows that:

  • Enable developers and platform maintainers to do their jobs with as little friction as possible. Create a self-service approach that executes predictable steps for expected outcomes.

  • Increase visibility into what's happening in real time.

  • Increase transparency between developers, operations, and business sponsors for better collaboration.

  • Gather data that can be used to better respond to issues and to build a better product.

Navigating the ever-evolving world of technology is challenging. It is important to understand which of the “shiny new things” are worth your investment. Working with an expert partner such as CGI can help to cut through the noise to find the right solutions for your organization.

[i] Layer 2 is responsible for transferring data between adjacent network nodes in a wide area network (WAN) or between nodes on the same local area network (LAN) segment. Layer 3 is responsible for packet forwarding, including routing through intermediate routers.

About this author

Tito Yslas

Consultant

Tito Yslas is an alumnus of the University of Southern California where he focused his studies on economics, mathematics, and finance. Immediately after graduating from USC, Tito served as a business development associate at a merchant banking firm. After his experience in the financial industry, ...