10 messages in net.launchpad.lists.openstack: Re: [Openstack] Performance metrics
From / Sent On
Neelakantam Gaddam / Jun 20, 2012 5:56 am
Dan Wendlandt / Jun 20, 2012 6:25 am
Sandy Walsh / Jun 20, 2012 6:54 am
Rick Jones / Jun 20, 2012 9:36 am
Huang Zhiteng / Jun 20, 2012 8:09 pm
Rick Jones / Jun 21, 2012 9:16 am
Narayan Desai / Jun 21, 2012 12:41 pm
Rick Jones / Jun 21, 2012 2:21 pm
Narayan Desai / Jun 21, 2012 7:06 pm
Rick Jones / Jun 29, 2012 2:35 pm
Subject: Re: [Openstack] Performance metrics
From: Rick Jones
Date: Jun 21, 2012 2:21:20 pm

On 06/21/2012 12:41 PM, Narayan Desai wrote:

We did a bunch of similar tests to determine the overhead caused by kvm and the limitations of the nova-network architecture. We found that VMs were able to consistently saturate the network link available to the host system, whether it was 1GE or 10GE, with relatively modern node and network hardware.

With the default VLANManager network setup, there isn't much you can do to scale your outbound connectivity beyond the hardware you can reasonably drive with a single node, but using multi-host nova-network we were able to run a bunch of nodes in parallel, scaling up our outbound bandwidth linearly. We managed to get 10 nodes, with a single VM per node, each running 4 TCP streams, up to 99 gigabits on a dedicated cross-country link.

There was a bunch of tuning that we needed to do, but it wasn't anything particularly outlandish compared with the tuning needed for doing this with bare metal. We've been meaning to do a full writeup, but haven't had time yet.
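For reference, a parallel bulk-transfer measurement of this sort can be sketched with netperf (the host address and stream count below are illustrative, and a netserver is assumed to be listening on the target):

```shell
#!/bin/sh
# Sketch: several concurrent netperf TCP_STREAM tests against one
# target, summing the per-stream throughput. HOST is an assumed
# netserver address; STREAMS is illustrative.
HOST=${HOST:-192.168.1.10}
STREAMS=4

if command -v netperf >/dev/null 2>&1; then
    # -f m reports in 10^6 bits/s; -P 0 -v 0 suppress headers so each
    # instance prints only its throughput figure, which awk can sum.
    ( for i in $(seq 1 "$STREAMS"); do
          netperf -t TCP_STREAM -H "$HOST" -l 30 -f m -P 0 -v 0 &
      done
      wait ) | awk '{s += $1} END {print s " Mbit/s aggregate"}'
fi
```

Scaling beyond one node, as described above, is then just a matter of launching the same script from a VM on each host in parallel.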

TSO and GRO can cover a multitude of path-length sins :)

That is one of the reasons netperf does more than just bulk transfer :) When measuring the "scaling" of an SMP node, I use aggregate, burst-mode, single-byte netperf TCP_RR tests to maximize the packets per second while minimizing the actual bandwidth consumed.
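Such an aggregate burst-mode run might be sketched as follows (the host, stream count, and -b burst depth are illustrative; -b requires netperf configured with --enable-burst, and a netserver is assumed on the target):

```shell
#!/bin/sh
# Sketch: concurrent single-byte burst-mode TCP_RR tests, maximizing
# packets/sec while keeping bandwidth low. HOST, STREAMS, and BURST
# are assumptions, not values from the thread.
HOST=${HOST:-192.168.1.10}
STREAMS=4
BURST=16

# Sum one transaction rate per line from stdin.
sum_rates() { awk '{s += $1} END {printf "%.2f\n", s}'; }

if command -v netperf >/dev/null 2>&1; then
    # -r 1,1: one-byte request and one-byte response; -b: transactions
    # kept in flight; -D: set TCP_NODELAY; -P 0 -v 0: print only the
    # transaction rate, so the instances' outputs can be summed.
    ( for i in $(seq 1 "$STREAMS"); do
          netperf -t TCP_RR -H "$HOST" -l 30 -P 0 -v 0 -- -r 1,1 -b "$BURST" -D &
      done
      wait ) | sum_rates
fi
```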

And if there is a concern about flows coming and going, there is the TCP_CRR test, which is like the TCP_RR test except that each transaction is a freshly created and torn-down TCP connection.
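The invocation differs only in the test name; a minimal sketch (host illustrative, netserver assumed):

```shell
#!/bin/sh
# Sketch: TCP_CRR, where every transaction includes TCP connection
# setup and teardown, so the rate reflects connection-churn cost.
HOST=${HOST:-192.168.1.10}   # assumed netserver address
CMD="netperf -t TCP_CRR -H $HOST -l 30 -- -r 1,1"
echo "would run: $CMD"
if command -v netperf >/dev/null 2>&1; then
    $CMD || echo "netperf failed (is netserver running on $HOST?)"
fi
```

Comparing the TCP_CRR rate against the TCP_RR rate for the same path gives a rough measure of how much connection establishment and teardown cost on top of the per-transaction work.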

happy benchmarking,