On Thu, Jun 21, 2012 at 12:36 AM, Rick Jones <rick...@hp.com> wrote:
I do not have numbers I can share, but do have an interest in discussing
methodology for evaluating "scaling," particularly as regards
"networking." My initial thought is simply to start with what I have done
for "network scaling" on SMP systems (as vaguely instantiated in the likes
of the runemomniaggdemo.sh script under
http://www.netperf.org/svn/netperf2/trunk/doc/examples/ ), then expand
it by adding more and more VMs/hypervisors etc. as one goes.
By 'network scaling', do you mean the aggregated throughput
(bandwidth, packets/sec) of the entire cloud (or part of it)? I think
picking netperf as a microbenchmark is just the first step; there's more
work that needs to be done. For an OpenStack network, there's 'intra-cloud'
and 'cloud-to-external-world' throughput. If we care about the
performance seen by end users, then reasonable numbers (for network scaling)
should be captured inside the VM instances. For example, spawn 1,000 VM
instances across the cloud, then pair them up to run netperf tests in order
to measure 'intra-cloud' network throughput.
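The pairing step above could be sketched as follows. This is a minimal, hypothetical sketch: the hostnames (vm-0 ... vm-999), the even/odd pairing scheme, and the use of ssh are my assumptions, not anything agreed in this thread. It only generates the netperf command lines; actually running them requires netserver to already be listening on each target VM.

```python
# Hypothetical sketch: pair VM instances and emit netperf command lines
# for an aggregate intra-cloud throughput test.
# Assumptions (not from the thread): hostnames vm-0..vm-999, ssh access
# from the driver host, netserver running on every target VM.

def pair_instances(hosts):
    """Pair each even-indexed host (sender) with the next host (receiver)."""
    return list(zip(hosts[0::2], hosts[1::2]))

def netperf_cmdline(src, dst, duration=30):
    """Build the TCP_STREAM command that sender `src` would run against `dst`."""
    return f"ssh {src} netperf -H {dst} -t TCP_STREAM -l {duration}"

if __name__ == "__main__":
    vms = [f"vm-{i}" for i in range(1000)]
    for src, dst in pair_instances(vms):
        print(netperf_cmdline(src, dst))
```

Summing the per-pair throughputs (netperf's final Mbit/s column) would then give the aggregate number; whether pairs should be scheduled on the same or different hypervisors is exactly the kind of methodology question being discussed here.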
While netperf (or its like) is simply a microbenchmark, and so somewhat
removed from "reality," it does have the benefit of not (directly at least :)
leaking anything proprietary about what is going on in any one vendor's
environment. And if something scales well under the rigors of netperf
workloads, it will probably scale well under "real" workloads. Such scaling
under netperf may not be necessary, but it should be sufficient.