
On-Prem but Cloud-Like?

For most businesses, the cloud is the most efficient way to deliver applications, letting them focus fully on business growth instead of spending time on infrastructure operations. But at a certain point in that growth, it makes sense to move some or all workloads from the public cloud to private infrastructure.

“You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”

– Martin Casado, “The Cloud Paradox”


For cloud-born businesses, this may be the first time they have to consider many aspects of infrastructure management on their own. As an IT manager or CTO, you’re about to get a crash course in infrastructure management.

On-prem adds friction

In the cloud, no thought has to be given to the details of network operations. The cloud is great because it delivers infrastructure ready to be consumed, hiding away the complexities of modern application delivery at every moment: global BGP routing, server load balancing, traffic encryption, network security, and all of the associated risk of human error.

That’s one of the big challenges of cloud repatriation. We’ve gotten used to the frictionless workflows of our favorite cloud provider and want to replicate that experience, but in many cases we don’t have the necessary skill set; besides, there’s not much business value in spending time on detailed, legacy-style network operations.

Configuring a BGP router or a network switch isn’t rocket science: you can find plenty of online guides, Stack Overflow articles, and Reddit posts, and there are thousands of consulting companies happy to do this sort of work for you.
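To give a sense of what that work looks like, here is a minimal FRR-style BGP configuration sketch. The ASN, router ID, peer address, and prefix below are all illustrative placeholders, not values from any real deployment:

```
! /etc/frr/frr.conf — minimal eBGP session (hypothetical ASN, peer, and prefix)
router bgp 65001
 bgp router-id 192.0.2.1
 ! Peer with the upstream provider's router
 neighbor 203.0.113.1 remote-as 65000
 !
 address-family ipv4 unicast
  ! Announce our assigned public prefix to the upstream
  network 198.51.100.0/24
 exit-address-family
```

Simple enough on its own; the friction comes from keeping dozens of fragments like this consistent across every device as the network grows.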

The same can be said for configuring network load balancers; open source software provides a lot of great options and configuration examples. Common Linux-based packages include HAProxy and NGINX, and there are hundreds of commercial offerings here as well.
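As a sketch of what an L4 setup in HAProxy might look like, the fragment below balances TCP traffic across two backend servers. The backend names and IP addresses are hypothetical:

```
# /etc/haproxy/haproxy.cfg — L4 load balancing sketch (hypothetical backends)
frontend web_in
    bind *:443
    mode tcp
    default_backend app_servers

backend app_servers
    mode tcp
    balance roundrobin
    # Health-checked application servers on the private network
    server app1 10.0.0.11:443 check
    server app2 10.0.0.12:443 check
```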

And naturally you’ll need some sort of firewall or packet filter to protect your application servers. You can easily use iptables to perform basic filtering, as well as NAT to translate your private addresses to your assigned public IPs. Couple this with WireGuard and you’ve got the encryption capabilities to connect to other data centers or trusted third parties.
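A rough sketch of that filtering-plus-NAT setup with iptables is shown below. All interfaces and addresses are hypothetical, and these commands would need root privileges on the gateway box:

```
# Sketch only — hypothetical addresses and interfaces; run as root.

# Allow return traffic and the one exposed service port; drop everything else.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -p tcp -d 10.0.0.11 --dport 443 -j ACCEPT
iptables -A FORWARD -j DROP

# DNAT the assigned public IP to the internal server, SNAT on the way out.
iptables -t nat -A PREROUTING  -d 198.51.100.10 -p tcp --dport 443 \
         -j DNAT --to-destination 10.0.0.11
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 \
         -j SNAT --to-source 198.51.100.10

# Bring up a WireGuard tunnel (wg0 configured separately) to a peer site.
wg-quick up wg0
```

Again, none of this is hard in isolation; the cost is carrying these rules forward correctly every time an application is added or moved.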

Not so fast…

There is also the issue of performance. Proprietary network devices are expensive for a reason: they typically include a custom processor, an ASIC, that handles networking tasks at traffic levels that were once impossible to achieve with a normal Linux server.

But these days the big cloud providers don’t use proprietary commercial equipment; they buy commodity hardware and employ teams of people to develop fully automated software stacks for their customers to use. They also don’t use standard network interface cards (NICs); they buy SmartNICs, which are designed to offload network tasks from the full OS and kernel data path. Coupling a SmartNIC with custom data plane software like DPDK, a normal Linux server can offload network forwarding to the SmartNIC and thus perform network functions on par with specialized hardware.

This server/software design can improve packet forwarding performance from single-gigabit limits to line rate: Layer 3 routed packets can be forwarded at up to 100 gigabits. A popular routing suite that can be fully accelerated with this method is the Free Range Routing project (FRR for short).

So commodity hardware and open source software achieve the same performance levels as all of the big networking companies’ devices. But remember that the mega-cloud companies have hundreds of developers to make all of this work properly (and easily).

So let’s imagine that you completed all of the configuration tasks described above and have your first cloud-native application running. What happens when a developer says you need to launch another application or increase capacity? Spoiler alert: you’re going to have to repeat all of these tasks all over again.


This operational overhead doesn’t exist when you use the public cloud: you just request a new elastic IP address with a security group and point it at your application farm. In your new on-prem environment, these configuration tasks will absolutely start to slow down your application deployments, to the point at which you’ll likely say:

“Gee, this was so much simpler in the public cloud!”

How can we get the cloud-like networking experience on-prem?

Netris was created by long-time NetOps practitioners to deliver a cloud-like user experience in your on-prem, colo, edge, or private cloud environments. Netris has a simple and intuitive UI, a modern REST API, and seamless integration with application and cloud-management tools like Kubernetes, Terraform, etc. Just like in the public cloud, you only need to describe what you need, and Netris will automatically figure out how to configure your physical network to deliver the necessary services. Perhaps most importantly, you can continue leveraging the same toolset and skills that you’ve been using in the public cloud.

As an example, Netris can integrate directly with the Kubernetes management plane.  When an application is deployed that requires exposing an IP to the internet, Netris will automatically do the following tasks:

  • Place the k8s ingress nodes into the proper VLAN/VXLAN (if necessary)
  • Create an L4 load balancing instance using the TCP ports that are to be exposed by the ingress
  • Assign an external IP address to be used for the service
  • Translate the internal VIP IP address to the external IP address
  • Create an Access Control List rule to only permit the desired exposed port number
  • Announce this new external IP address to the global upstream BGP routers
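On the Kubernetes side, the whole sequence above can be triggered by nothing more than an ordinary Service of type LoadBalancer. The manifest below is an illustrative sketch; the service name, labels, and ports are hypothetical:

```
# Illustrative manifest — name, selector, and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: web-ingress
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: https
      port: 443        # port exposed on the external IP
      targetPort: 8443 # port the application pods listen on
```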

This takes the Netris controller seconds; it undoubtedly takes a lot longer for all of these operations to be completed (correctly!) by humans.

This is the core functionality of Netris: it intelligently couples together multiple configuration tasks across the entire infrastructure and delivers the necessary changes automatically, as soon as they are needed. This is the wonderful experience that the public clouds deliver, but in your own data center, on your own terms, and at a substantially reduced cost to your business.

We call this new approach to networking: Automatic NetOps.

Netris is freely available for on-prem or cloud-hosted evaluations, so pick whichever way of giving Netris a shot you prefer.

To get started, click the button below and select your desired evaluation method.

Alex Saroyan has been involved in architecting and operating large-scale data center networks for telcos and online companies globally since 2001. Passionate about efficiency, simplicity, and goodness, Alex co-founded Netris to bring a cloud-like user experience to private on-prem networks.