Hello!
Thank you very much for responding. I'd send you my paper, but unfortunately we
are required to write it in our local language. I don't think it would be of
much help. But I will explain my idea more thoroughly.
The network looks the same as the first example on this page (but is set up
differently):
http://nile.wpi.edu/NS/simple_ns.html
The idea is to test the existing congestion control algorithms on new 100 Gb/s
networks and report observations. I want to show simple results, like cwnd
growth, queue usage, delays, RTTs and so on.
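For the cwnd measurements, ns-2 can trace a TCP agent's congestion-window variable directly to a file (a small sketch; the file name and the `$tcp` variable are my own, standing in for whatever the script calls its TCP agent):

```tcl
# Dump samples of cwnd_ to a file whenever the variable changes,
# for later plotting (e.g. with gnuplot).
set cwndf [open cwnd.tr w]
$tcp attach $cwndf
$tcp trace cwnd_    ;# the same works for rtt_, ssthresh_, etc.
```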
There are two TCP FTP senders on nodes n0 and n1. All three links are 100 Gb/s
with 100 ms delay and DropTail queues. As stated in my previous mail, the
bandwidth-delay product is quite large. Actually, just now I realized I made a
big mistake calculating it: the result is missing three zeros. It should be
1 250 000 000 B. This is even larger than the largest window available with the
window scale option (according to Wikipedia, the maximum window size with
window scaling is 1 073 725 440 B). The main problem I'm facing is that the
queues sit on the links rather than in the nodes. This is a problem, because
the two left links should not have queues that drop anything (I think). As far
as I know, routers have output queues, but senders and receivers have
send/receive buffers and do not drop packets (especially when sending); instead
they adjust the receive and congestion windows. The drops on the first two
links cause my algorithms to respond to congestion when I'm quite sure they are
not supposed to (the chokepoint router is not congested at that time).
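For reference, the corrected arithmetic can be checked in a few lines of plain Tcl (the same interpreter ns-2 scripts run in); the 1 073 725 440 B figure is 65 535 × 2^14, the maximum window with the RFC 1323 window scale option:

```tcl
# Bandwidth-delay product for 100 Gb/s bandwidth and 100 ms delay
set bw_bps  100e9          ;# link bandwidth in bits per second
set delay_s 0.1            ;# one-way delay in seconds

set bdp_bytes [expr {$bw_bps * $delay_s / 8.0}]
puts "BDP = $bdp_bytes B"  ;# 1.25e9 B, i.e. 1.25 GB

# Largest TCP window with the window scale option (RFC 1323):
# 65535 * 2^14 = 1 073 725 440 B -- smaller than the BDP above.
set max_window [expr {65535 * (1 << 14)}]
puts "max scaled window = $max_window B"
```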
So if I give the first two links large queues, packets are not dropped, but the
simulation slows down and after some time dies from lack of memory. I'm growing
desperate and seriously thinking of testing ns-3; maybe it deals better with
extremely large BDPs. I have tried turning off namtrace-all, increasing MWS and
removing unnecessary headers. I will try lowering the delay to 50 ms, thus
reducing the BDP. If anyone has any more ideas, please share.
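One thing worth noting about those large queues: ns-2 queue limits are counted in packets, not bytes, so "large" does not have to mean gigabytes of simulator state per queue. A sketch, assuming the usual ns-2 link API and node names as in the simple_ns.html example: set the access-link limits comfortably above the number of packets that can ever be in flight, and keep a realistic limit only at the bottleneck, so drops can only happen at the chokepoint:

```tcl
# Access links n0->n2 and n1->n2 get queue limits far above the
# in-flight maximum, so they never drop; only the router->receiver
# link keeps a small, realistic DropTail buffer.
$ns duplex-link $n0 $n2 100Gb 100ms DropTail
$ns duplex-link $n1 $n2 100Gb 100ms DropTail
$ns duplex-link $n2 $n3 100Gb 100ms DropTail

# BDP ~= 1.25e9 B; with 1000 B packets that is ~1.25e6 packets in flight.
$ns queue-limit $n0 $n2 2000000   ;# effectively unlimited access queues
$ns queue-limit $n1 $n2 2000000
$ns queue-limit $n2 $n3 10000     ;# bottleneck buffer, tune as needed
```

Whether this avoids the memory blow-up depends on how many packets are actually queued at once, but a limit that is never reached costs nothing by itself.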
Thanks for any insights.
From: ajay.bidan...@gmail.com
To: nutcracke...@hotmail.com
Subject: [ns] Running simulations of long fat networks (LFNs)
Date: Sun, 15 Jul 2012 06:57:25 +0530
Hi Andy,
We have solved a similar paper/problem. I need your base paper so that I can
easily solve your problem. If you need any clarification, please feel free to
call me.
Basavraj bidanoor
+919632766055
Project manager
Ns2 project development center.
[ns] Running simulations of long fat networks (LFNs)
nut cracker
Fri, 13 Jul 2012 05:21:35 -0700
Hello!
I'm making some simple simulations of long fat networks, i.e. networks with a
large bandwidth-delay product. I'm working on a simple Y-shaped network with
two senders, a chokepoint router and one receiver. My goal is to simulate
networks with 100 Gbps bandwidths and 100 ms delays (all three links have these
settings). If I calculated correctly:
BDP = 100Gb/s * 0.1s = 10Gb / 8 = 1 250 000 B
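The Y-shaped topology described above might look roughly like this in ns-2 (a sketch along the lines of the simple_ns.html example; the node and agent names are my own):

```tcl
set ns [new Simulator]

set n0 [$ns node]   ;# sender 1
set n1 [$ns node]   ;# sender 2
set n2 [$ns node]   ;# chokepoint router
set n3 [$ns node]   ;# receiver

# All three links: 100 Gb/s, 100 ms, DropTail
$ns duplex-link $n0 $n2 100Gb 100ms DropTail
$ns duplex-link $n1 $n2 100Gb 100ms DropTail
$ns duplex-link $n2 $n3 100Gb 100ms DropTail

# FTP over TCP from each sender to the receiver
foreach src [list $n0 $n1] {
    set tcp [new Agent/TCP]
    $ns attach-agent $src $tcp
    set sink [new Agent/TCPSink]
    $ns attach-agent $n3 $sink
    $ns connect $tcp $sink
    set ftp [new Application/FTP]
    $ftp attach-agent $tcp
    $ns at 0.1 "$ftp start"
}

$ns at 10.0 "exit 0"
$ns run
```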
So there could be roughly 1 250 000 bytes in transit through the network (even
more if the queue is full). The problem I'm having is that packets are going
through queues on the senders. If I run my simulation in nam, I can see packets
dropping from the nodes which are actually sending the data (they have TCP and
FTP attached to them), because the queues are too small. I'm dealing with this
by setting larger queues. But on networks with such a large BDP, the queues on
the sender nodes must be 1 GB in size to avoid drops in places where they
should not even happen, which slows the simulation down to a halt.
Does anybody know how to bypass the sending queues in this matter? Or any other
solution to this problem? Please, this is very important to me and I'm getting
desperate. Thanks for any information.
Regards,
Andy