...

We'll see host-1 start to send data to host-2. The transfer will show us an update every second (-i 1) and will terminate after 10 seconds (-t 10). On host-1 and host-2, we can see the number of packets that failed to make it from host-1 to host-2. We should see that all packets make it: (0 lost)/(total transferred).

...

Question(s)

Please limit your responses to at most three sentences each. Don't write run-on sentences either; less is more.

...

The queue above should be installed on ethY, where ethY is the interface leading from the switch to host-2.

...

To quickly cover the parameters of interest to us: "rate XBps" specifies the service rate as X bytes/second, and "burst YB" specifies the queue length as Y bytes. (Ignore the "limit 1512" parameter.) We can manipulate these parameters to achieve a desired network performance.
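As a quick numeric sketch of what these parameters mean, consider the assumed example values of rate = 1512 bytes/second and burst = 1512 bytes with this lab's 1512-byte packets. This is a simplified view that ignores tbf's token-bucket internals:

```python
# Illustrative arithmetic only -- substitute the rate/burst values you configure.
rate_bps = 1512      # "rate 1512Bps": service rate in bytes/second
burst_bytes = 1512   # "burst 1512B": queue length in bytes
packet_bytes = 1512  # packet size used throughout this lab

# How many whole packets the queue can buffer at once:
queue_capacity_pkts = burst_bytes // packet_bytes

# How long the server takes to transmit one packet:
service_time_per_pkt = packet_bytes / rate_bps  # seconds

# How long a completely full queue takes to drain:
drain_time_s = burst_bytes / rate_bps  # seconds

print(queue_capacity_pkts, service_time_per_pkt, drain_time_s)
```

With these example values the queue holds exactly one packet and the server needs a full second to transmit it.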

Question(s)

  1. We want to control the flow of traffic with our queue from host-1 to host-2. Explain why we installed the queue on the interface leading from the switch to host-2 and not on the arriving link from host-1. Hint: what is the "server", or resource, of the queueing system that our packets are trying to access at the switch VM with respect to the host-1-to-host-2 traffic flow?

...

Run Experiments with a Queue on the Switch

Experiment 1

Let's check the performance of our network between host-1 and host-2 after having added the queue.

If your iperf server is not running anymore, start it up again on host-2:

Code Block
languagebash
host-2$ iperf -s -u

And on host-1, start the iperf client:

Code Block
languagebash
host-1$ iperf -c 10.0.0.2 -u -i 1 -t 10

Check the results in the iperf server. We should see much worse performance than we did without the queue.

Question(s)

  1. How many packets were successfully received by the server? Explain why this makes sense given a packet size of 1512 bytes, a queue length of 1512 bytes, and a service rate of 1512 bytes/second.
  2. Are there any unaccounted-for packets in your observation for (1) above? If so, where do you think the error lies?
  3. Is the queueing system stable given the present configuration?
  4. Assume the queueing system has been optimized and cannot be changed from the given configuration. What can be done to increase the reliability of data transfer between host-1 and host-2?
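As a sanity check on your reasoning, here is a toy per-second simulation of this experiment's queue. It is a deliberately crude model: the ~83 packets/second arrival rate is an assumption (read the real offered rate off your iperf client), arrivals are batched before service each second, and tbf's token mechanics and UDP/IP overhead are ignored, so expect your measured numbers to differ slightly.

```python
# Crude per-second model of Experiment 1 -- all constants are assumptions.
ARRIVALS_PER_SEC = 83   # assumed iperf send rate in packets/second; verify on host-1
DURATION_S = 10         # iperf -t 10
RATE_BPS = 1512         # service rate, bytes/second
BURST_B = 1512          # queue length, bytes
PKT_B = 1512            # packet size, bytes

capacity = BURST_B // PKT_B          # packets the queue can hold
serve_per_sec = RATE_BPS // PKT_B    # packets served per second

queue = served = dropped = 0
for _ in range(DURATION_S):
    for _ in range(ARRIVALS_PER_SEC):   # arrivals during this second
        if queue < capacity:
            queue += 1
        else:
            dropped += 1                # queue full: packet is lost
    n = min(queue, serve_per_sec)       # server drains what it can
    queue -= n
    served += n

received = served + queue   # anything still queued drains after the send ends
print(received, dropped)
```

Under this crude model, roughly one packet per second gets through while the rest are dropped at the full queue; your measured count may be off by a packet or two.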

Experiment 2

We've observed a very poorly performing queueing system in Experiment 1. Let's try to tune the performance to increase the throughput between host-1 and host-2. First, let's double the size of our queue:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 1512Bps burst 3024B limit 1512

Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Then, let's double the queue size again:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 1512Bps burst 6048B limit 1512

Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Question(s)

  1. What can you observe about the number of successfully transmitted packets as the queue size increases? Explain your observation.
  2. Is the queueing system stable given the largest configured queue size of 6048 bytes?
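The same crude per-second model from Experiment 1 can be swept over the three queue sizes to see the trend. As before, the arrival rate is an assumed placeholder (check your iperf client) and the model ignores tbf's token mechanics, so treat the outputs as a trend, not a prediction:

```python
# Crude per-second model sweeping the queue size -- all constants are assumptions.
ARRIVALS_PER_SEC = 83   # assumed iperf send rate in packets/second; verify on host-1
DURATION_S = 10         # iperf -t 10
RATE_BPS = 1512         # service rate, bytes/second
PKT_B = 1512            # packet size, bytes

def received_packets(burst_b):
    """Packets delivered over the whole run for a given queue size (bytes)."""
    capacity = burst_b // PKT_B
    serve_per_sec = RATE_BPS // PKT_B
    queue = served = 0
    for _ in range(DURATION_S):
        for _ in range(ARRIVALS_PER_SEC):
            if queue < capacity:
                queue += 1      # excess arrivals are dropped
        n = min(queue, serve_per_sec)
        queue -= n
        served += n
    return served + queue       # queued packets drain after the send ends

for burst in (1512, 3024, 6048):
    print(burst, received_packets(burst))
```

Note that in this model a bigger buffer only adds the packets that remain queued when the send ends; real tbf behavior will differ somewhat.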

Experiment 3

Now, we're going to see how the service rate impacts the queueing system's performance. Configure the queue with double the original service rate:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 3024Bps burst 3024B limit 1512

Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Then, let's double the service rate again:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 6048B limit 1512

Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Question(s)

  1. What can you observe about the number of successfully transmitted packets as the service rate increases? Explain your observation.
  2. Is the queueing system stable given the largest configured service rate of 6048 bytes/second?
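The crude per-second model can also be swept over service rates to see how the trend differs from growing the queue. Again, the arrival rate is an assumed placeholder and the model is a simplification, so read it as a trend only:

```python
# Crude per-second model sweeping the service rate -- all constants are assumptions.
ARRIVALS_PER_SEC = 83   # assumed iperf send rate in packets/second; verify on host-1
DURATION_S = 10         # iperf -t 10
PKT_B = 1512            # packet size, bytes

def received_packets(rate_bps, burst_b):
    """Packets delivered over the run for a given service rate and queue size."""
    capacity = burst_b // PKT_B
    serve_per_sec = rate_bps // PKT_B
    queue = served = 0
    for _ in range(DURATION_S):
        for _ in range(ARRIVALS_PER_SEC):
            if queue < capacity:
                queue += 1      # excess arrivals are dropped
        n = min(queue, serve_per_sec)
        queue -= n
        served += n
    return served + queue       # queued packets drain after the send ends

for rate, burst in ((1512, 1512), (3024, 3024), (6048, 6048)):
    print(rate, received_packets(rate, burst))
```

In this model, doubling the service rate roughly doubles the number of delivered packets, a much stronger effect than doubling the buffer alone.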

Experiment 4

Now, we're going to combine an increased service rate with an increased queue size:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 4536B limit 1512

Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Then, let's increase the queue size again:

Code Block
languagebash
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 6048B limit 1512

Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.

Question(s)

  1. What can you observe about the number of successfully transmitted packets? Explain your observation.
  2. Is the queueing system stable?
  3. We've seen how an improperly tuned queueing system can impact the throughput of a data transfer. Given what you know about the max arrival rate (hint: check the iperf client), and a packet size of 1512 bytes, set the minimal queue size and service rate of the queueing system to achieve stability. Show all your work and evaluate the performance of your improved queueing system using iperf. Please provide the results.
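One way to sanity-check a candidate configuration before rerunning iperf is a back-of-the-envelope stability check. The arrival rate below is an assumed placeholder (read the real offered load off your iperf client), and the check applies the textbook stability condition, service rate at least the arrival rate, rather than anything tbf-specific:

```python
# Back-of-the-envelope stability check -- ARRIVAL_PPS is an assumed placeholder.
PKT_B = 1512                      # packet size, bytes
ARRIVAL_PPS = 83                  # assumed packets/second; verify with iperf
arrival_bps = ARRIVAL_PPS * PKT_B # offered load in bytes/second

def is_stable(rate_bps):
    """Stable iff the server can keep up with the offered load."""
    return rate_bps >= arrival_bps

print(arrival_bps)             # minimal service rate (bytes/s) under this model
print(is_stable(1512))         # the original configuration
print(is_stable(arrival_bps))  # a rate matched to the load
```

Once you have the real arrival rate from your iperf client, substitute it in, pick a rate satisfying the condition, and verify with an actual iperf run as the question asks.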

Experiment 5

Now, we're going to reverse our data transfer to send packets from host-2 to host-1.

Kill the iperf server on host-2 and start it on host-1:

Code Block
languagebash
host-1$ iperf -s -u

And on host-2, start the iperf client:

Code Block
languagebash
host-2$ iperf -c 10.0.0.1 -u -i 1 -t 10

Check the results in the iperf server.

Question(s)

  1. Do you observe any lost packets transmitting data in the reverse direction? Please explain your reasoning.
  2. How could we create a queue to control the performance of the reverse data transfer?

Submission

  1. On the due date indicated in the calendar, please submit a hard copy of your responses to the questions posed inline above.