I am doing a simulation in ns-2 of a datacenter network, and I just 
realised that I am not sure exactly how I would go about calculating 
the network throughput.

I know that to calculate the throughput over a single link I would trace 
the last-hop link, add up the received packet sizes for the correct 
source-destination pair over the interval I want (1 second is probably 
the norm), and then repeat for the next interval. That lets me plot a 
graph of throughput vs. time, but only for one link and one pair of nodes.
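
For the single-link case, this is roughly the post-processing I have in 
mind (a quick Python sketch; it assumes the standard ns-2 wired trace 
format, and the trace file name, the link node ids and the src/dst 
addresses are just placeholders for my own scenario):

# Per-link throughput for one source-destination pair, per 1-second interval.
# Assumes the standard ns-2 wired trace format:
#   event time from to type size flags fid src dst seq pktid
from collections import defaultdict

TRACE_FILE = "out.tr"            # placeholder trace file name
LINK_FROM, LINK_TO = "2", "3"    # the last-hop link being monitored
SRC, DST = "0.0", "3.0"          # source/destination addresses (node.port)
INTERVAL = 1.0                   # seconds

bytes_per_interval = defaultdict(int)

with open(TRACE_FILE) as f:
    for line in f:
        fields = line.split()
        if len(fields) < 12 or fields[0] != "r":   # receive events only
            continue
        t = float(fields[1])
        link = (fields[2], fields[3])
        size = int(fields[5])
        src, dst = fields[8], fields[9]
        if link == (LINK_FROM, LINK_TO) and (src, dst) == (SRC, DST):
            bytes_per_interval[int(t / INTERVAL)] += size

for k in sorted(bytes_per_interval):
    mbps = bytes_per_interval[k] * 8 / INTERVAL / 1e6
    print(f"{k * INTERVAL:.1f}-{(k + 1) * INTERVAL:.1f}s  {mbps:.3f} Mbps")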

What I am wondering is: how do you do this for the whole network?

I am thinking that I MUST trace all the last-hop links to the hosts 
(i.e. in a fat tree, as in my case, I would have to monitor the links 
between the hosts and the switches at the lowest level of each pod).

[figure: fat-tree topology with k=4]

In the case above I would monitor the links from the black boxes to the 
first blue circles. But do I add all received packets regardless of 
source-destination pair? And do I add them for every interval?
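
If aggregating is the right approach, this is the kind of extension I 
have in mind (same assumed trace format; the set of last-hop links is a 
placeholder and would really come from my topology script):

# Aggregate throughput over all last-hop links, any source-destination pair.
from collections import defaultdict

TRACE_FILE = "out.tr"
INTERVAL = 1.0
# (edge_switch, host) node-id pairs for every last-hop link -- placeholders
LAST_HOP_LINKS = {("16", "0"), ("16", "1"), ("17", "2"), ("17", "3")}

bytes_per_interval = defaultdict(int)

with open(TRACE_FILE) as f:
    for line in f:
        fields = line.split()
        if len(fields) < 12 or fields[0] != "r":
            continue
        t = float(fields[1])
        if (fields[2], fields[3]) in LAST_HOP_LINKS:   # any flow counts
            bytes_per_interval[int(t / INTERVAL)] += int(fields[5])

for k in sorted(bytes_per_interval):
    mbps = bytes_per_interval[k] * 8 / INTERVAL / 1e6
    print(f"{k * INTERVAL:.0f}s  {mbps:.3f} Mbps aggregate")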

At the same time I also want to measure queue length (over time). In 
this case, do I monitor the queues of all the nodes and average their 
lengths over the same period of time?
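
For the queues, this is the kind of averaging I mean (it assumes the 
queues are sampled with monitor-queue into a trace file whose lines 
start with the time, the two link endpoints and the queue size; the 
exact column order would have to be checked against the ns-2 version in 
use):

# Average queue length over all monitored queues, per interval.
from collections import defaultdict

QUEUE_TRACE = "queue.tr"   # placeholder name for the monitor-queue output
INTERVAL = 1.0

samples = defaultdict(list)   # interval index -> queue-size samples

with open(QUEUE_TRACE) as f:
    for line in f:
        fields = line.split()
        if len(fields) < 4:
            continue
        t = float(fields[0])
        qsize = float(fields[3])     # assumed column; check your trace
        samples[int(t / INTERVAL)].append(qsize)

for k in sorted(samples):
    avg = sum(samples[k]) / len(samples[k])
    print(f"{k * INTERVAL:.0f}s  average queue length: {avg:.2f}")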

Any help/hints/clarification is appreciated.


