Transactions-per-second is the raw base measure of system performance. An array of factors contributes to that performance, but one of them is the throughput of the system, and that can be maximised by using a load balancer.
To get the maximum performance out of a system, AVI Networks' Vantage Platform uses software-defined principles to separate the central management system, or control plane, from the data plane of distributed load balancers. The company believes that it is this basic architectural difference that gives it an advantage over its competitors.
ZDNet recently spoke to AVI CEO Amit Pandey to find out more.
You claim that you can scale a web application from one to one million TPS. How is this achieved?
Pandey: We were working with a large retail customer who wanted a system that could deal with big peaks in network traffic, like those it experienced on days like Cyber Monday.
Working with them, we were able to scale an application to one million SSL TPS at speed. Now, it is important to point out that these are SSL transactions: they are more secure, but they also tend to be more computationally demanding. Even so, we did it in under ten minutes on Google Cloud.
Now, this removes the need for over-provisioning. No one is going to provision themselves to over a million SSL TPS. However, if there is a denial-of-service attack, or a massive run on your website because you have released some spectacularly good product, then you can be assured that if you have AVI in place, you can scale to those levels.
Also, we stopped at a million, not because we couldn’t do more but because we thought that it was a good, meaningful number and probably more than most people will need.
The two really meaningful things about it were how quickly we could do it and the fact that it was automated. And the trigger here was performance: the system scaled to keep performance steady.
To do that, we started scaling, and we scaled out to almost a million in under 10 minutes, and then we were able to scale back down as needed. So for the retailer it meant that they could grow their application at will and not be panicked. But the most amazing thing was that this whole operation cost about $50. You know, it's mind-boggling.
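The performance-triggered, automated scale-out Pandey describes can be sketched as a simple control loop: add data-plane capacity while a measured latency exceeds a target, and shrink back once traffic subsides. The function name, thresholds, and doubling/halving policy below are illustrative assumptions, not AVI's actual algorithm.

```python
# Hypothetical sketch of performance-triggered autoscaling: scale out when
# latency degrades, scale back in when there is ample headroom.
# All names and thresholds are illustrative, not AVI's implementation.

def plan_capacity(current_instances: int,
                  p95_latency_ms: float,
                  target_latency_ms: float = 100.0,
                  max_instances: int = 1000) -> int:
    """Return the desired instance count for the next control-loop tick."""
    if p95_latency_ms > target_latency_ms and current_instances < max_instances:
        # Performance is degrading: scale out aggressively (double).
        return min(current_instances * 2, max_instances)
    if p95_latency_ms < target_latency_ms * 0.5 and current_instances > 1:
        # Plenty of headroom: scale back in gradually (halve).
        return max(current_instances // 2, 1)
    return current_instances
```

Run on each tick of a monitoring loop, a policy like this grows capacity only when the performance trigger fires, which is why nothing close to peak needs to be provisioned up front.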
Tell me a little about the company.
We have now been shipping product for about two years. Some of [our customers] are the largest financial companies, both in Europe and the US, as well as some of the largest retailers along with some of the biggest telcos.
You can probably understand why it’s the largest companies we are going after. But they are also the ones who have the most sensitivity around performance and security.
What sort of networking do you do?
We are in the business of application networking services. It's Layers 4 through 7 [from the Transport Layer to the Application Layer], above the routing and switching companies like Cisco, Nuage, etc.
We are in a richer layer that has more information about the application and goes all the way through to the presentation layers. What that means is that we have a lot more information about the application itself. It’s generally considered a rich and strategic layer to work in.
The services in this area range from load balancing, where companies like F5 or Citrix work, to other things that need application-level support. Things like network performance monitoring, with the ability to look at end-to-end timings of a client's request to your assets and to answer questions like: 'how long does a typical round-trip request to your website take, where are the bottlenecks, and what levels of security are employed to deliver that information?'.
And then there is the third level which is the security itself. Can you understand when a denial-of-service (DoS) attack is happening and can you do something about it? Can you be a firewall? This is probably the biggest part of this layer.
The strategy is really a dart board, with the bull's eye being load balancing. The load balancer is essentially a traffic manager: when a request comes in, it directs the customer's request to the right server, but unlike a router, it does so at the application level.
For example, I can have two, three, or four different instances of the application on the same server or VMs, and I can make sure a request goes to the right one, and then I can balance based on security, response time, or anything else I want. I could also, and this is something that is fairly unique to us, direct that request from a data centre based in, say, Santa Clara to an Amazon data centre in the cloud somewhere, or to an Azure or Google cloud instance of your application.
That's often called cloud bursting, or hybrid cloud. It's a way of handling peak capacity that many companies are contemplating, with an application having many instances in different locations.
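The application-level routing and cloud bursting described above can be sketched in a few lines: because a Layer 7 load balancer sees the request itself, it can map different URL paths to different server pools, and spill overflow traffic to a public-cloud instance when the local pool nears capacity. The pool names and the single load metric are hypothetical simplifications.

```python
# A minimal sketch of Layer 7 request routing with a cloud-burst fallback.
# Pool names and the capacity model are illustrative, not AVI's design.

def route_request(path: str, local_load: float,
                  burst_threshold: float = 0.9) -> str:
    """Pick a backend pool for a request based on its URL path and current load."""
    # Unlike a router, an L7 load balancer can inspect the request itself,
    # so different paths of one site can map to different server pools.
    pool = "checkout-pool" if path.startswith("/checkout") else "web-pool"
    # Cloud bursting: once the on-premises pool nears capacity, spill
    # overflow traffic to an instance of the application in a public cloud.
    if local_load >= burst_threshold:
        return f"cloud:{pool}"
    return f"onprem:{pool}"
```

A production system would of course weigh many more signals (health, geography, security policy), but the shape of the decision is the same.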
Then the next ring of the dart board for us is performance monitoring, or network performance monitoring. We are able to give you timings on how your users are doing, how the application is performing, and so on.
We can tell you if there's a spike in traffic, and what the issue is. Is there a customer surge because of something like Cyber Monday, or is there a DoS attack underway?
What’s your unique selling point [USP] in this market?
The reason we got into this business, and we started with that central core of load balancing, was because we realised that a lot of companies were moving towards software-defined principles in Layers 2 and 3. Nuage Networks, VMware's NSX, and Nicira are all software-defined networking plays, but nobody was doing SDN for Layers 4 through 7.
When we saw this we said, 'this is nuts', because everybody was going towards commodity hardware with distributed, hybrid architectures. They needed a solution that is software-defined, with a centralised controller and a distributed data plane.
That doesn't exist in our space, so it was a head-scratcher for us, because we saw customers moving towards those kinds of flexible networks. That's when our founders said, 'hey, there's a gap in the market. We need software-defined in the Layer 4 through 7 space, with a distributed architecture that can work in the cloud and in hybrid environments that include traditional data centres'.
That is our value proposition: we can behave like a load balancer that distributes traffic across environments, very much like an SDN.
The second aspect is that we are the only load balancer that combines network functionalities like network performance management and micro-segmentation. What we have done is separate the brains, if you will, from the actual legs of the device. The brains are the controller, and the controller and the data plane can each be scaled out separately.
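The "brains versus legs" separation can be sketched as a toy model: one central controller holds the desired configuration and pushes it to many distributed data-plane proxies, each of which can be added or removed independently. All class and method names here are illustrative assumptions, not AVI's actual API.

```python
# Toy sketch of a control-plane/data-plane split: a central controller
# fans configuration out to distributed forwarding engines.
# Names are hypothetical, not AVI's product interfaces.

class DataPlaneProxy:
    """A distributed load-balancing engine; it only forwards traffic."""
    def __init__(self) -> None:
        self.config: dict = {}

    def apply(self, config: dict) -> None:
        self.config = dict(config)  # replace the running config wholesale


class Controller:
    """The centralised brain: owns configuration, policy, and analytics."""
    def __init__(self) -> None:
        self.desired_config: dict = {}
        self.proxies: list[DataPlaneProxy] = []

    def register(self, proxy: DataPlaneProxy) -> None:
        self.proxies.append(proxy)
        proxy.apply(self.desired_config)  # new capacity comes up configured

    def update(self, config: dict) -> None:
        self.desired_config = dict(config)
        for proxy in self.proxies:  # one change, fanned out everywhere
            proxy.apply(self.desired_config)
```

The design point is that scaling the legs (adding proxies) never requires touching the brain's logic, and one configuration change propagates to every engine.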
Can your product do all these things on the fly? Can it separate out the functionality so that the devices focus on different things and switch between them as required?
Absolutely. Our system is so concurrent, or parallel, that many of our customers call it a network DVR because they can go back in time and play back an issue that may have happened 6, 12, or 18 hours ago.
The specific USP that we bring is the ability to deploy the same architecture, the exact same load balancer, the same service across hybrid clouds. So whether it is a container on Amazon, in my traditional server, or a VM on bare metal, I can manage it as one application on a virtual estate.
Our second differentiator in this market is our ability to troubleshoot quickly. Our customers say that they find that the time to do this can go from days to minutes.
Our third differentiator is cost. Because we are a software product, you can use us on commodity hardware.
And finally because we are elastic, you don’t have to buy more than you need. With our competitors’ customers, when they start, they tend to size for peak load. With us, that’s not necessary.