Project 1: My First Queue and Little's Law
This project assumes you are familiar with how to access GENI and how to design, deploy, log in to, and test a topology of your own. Please refer to this tutorial if you need a refresher.
Introduction
In this project, we will create a simple topology where we can design and implement a real queueing system. The goal is to demonstrate Little's Law as we play with queue parameters and observe traffic flow.
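As a quick refresher, Little's Law relates the long-run average number of packets in a queueing system (L) to the average packet arrival rate (λ) and the average time a packet spends in the system (W):

L = λW

Keep this relationship in mind as you adjust the queue length and service rate in the experiments below.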
Design Topology
In order to implement a queue and to observe its effect on network traffic, we first need:
- Two hosts to generate and receive packets. Let's call these "host-1" and "host-2".
- A switch on which to implement the queue. Let's call this VM the "switch".
As such, let's draw a GENI topology in Jacks to accomplish this. Our topology will have host-1 and host-2, each of which is connected to the single switch. If host-1 wants to relay packets to host-2, the packets must traverse the switch (and likewise for host-2 to host-1).
It is important that the VM serving as the switch use the "Ubuntu 14 with OVS by Niky" image. The other VMs can use the default image; however, it is recommended that you use the "Ubuntu 14 with OVS by Niky" image for them as well, just to keep everything simple and consistent.
Once we finish our design, let's select an aggregate on which to deploy our GENI resources – make sure it's an InstaGENI aggregate. Recall that we can view the utilization of the various InstaGENI aggregates by visiting this page.
Discover How Resources are Connected
With GENI, we can ask for links between resources, such as VMs. These appear as network interfaces on the VMs. If a VM has more than one link, it is important to note the network interface belonging to each link. This can be done by observing the unique IP subnets GENI assigns to each link. In our simple topology, we have two links – one between host-1 and switch and another between host-2 and switch.
host-1$ ifconfig # note the IP address/subnet corresponding to our data plane interface ethV, likely eth1
host-2$ ifconfig # note the IP address/subnet corresponding to our data plane interface ethW, likely eth1
switch$ ifconfig # note the IP addresses/subnets corresponding to our data plane interfaces ethX and ethY, likely eth1 and eth2
As an example, suppose ethV is 20.0.1.1/24, ethW is 20.0.2.1/24, ethX is 20.0.1.2/24, and ethY is 20.0.2.2/24. If this is the case, then due to the /24 (i.e. 255.255.255.0) subnet masks, we have two distinct networks: 20.0.1.0/24 and 20.0.2.0/24. Since ethV at 20.0.1.1 and ethX at 20.0.1.2 are on the same subnet, they must be the interfaces of the link between host-1 and switch. Likewise, because ethW at 20.0.2.1 and ethY at 20.0.2.2 are on the same subnet, they must be the interfaces of the link between host-2 and switch. By mapping the network interfaces to subnets, we can deduce the links between the VMs. It is important we do this now, before we modify the network interfaces in the steps below.
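If you prefer, the newer iproute2 tools report the same information; this is just an optional alternative to ifconfig:

switch$ ip -4 addr show # list each interface's IPv4 address and subnet
switch$ ip route # list routes, including one per directly connected subnet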
You might have seen in Jacks that you can specify link IP addresses for each VM. We could have pre-assigned our IP addresses in Jacks to avoid having to figure out which network interfaces belong to which links. However, it's a good exercise to do it manually for simple topologies like ours.
Setup Resources
On host-1 and host-2, we will need to install iperf, the tool we will use to generate packets.
host-1$ sudo apt-get install iperf
and don't forget host-2:
host-2$ sudo apt-get install iperf
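If apt-get cannot find the iperf package on a freshly provisioned VM, refreshing the package index usually helps (this is a general Ubuntu tip, not a project requirement):

host-1$ sudo apt-get update # refresh the package index, then retry the install (same on host-2 if needed)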
Next, let's set up the switch by adding an Open vSwitch (OVS) bridge:
switch$ sudo ovs-vsctl add-br my-switch # add an OVS instance (i.e. a bridge)
switch$ sudo ovs-vsctl add-port my-switch ethX # attach ethX to the switch
switch$ sudo ovs-vsctl add-port my-switch ethY # attach ethY to the switch
switch$ sudo ifconfig ethX 0 # remove ethX IP address (and also remove its route)
switch$ sudo ifconfig ethY 0 # remove ethY IP address (and also remove its route)
where ethX and ethY are the interfaces associated with the LANs leading from the switch to host-1 and host-2, respectively. Hint: they're not eth0; they are likely eth1 and eth2. (Please leave eth0 alone, since it is the VM's control interface.)
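Before moving on, it's worth confirming that the bridge and its ports look right; ovs-vsctl can print the current OVS configuration:

switch$ sudo ovs-vsctl show # verify that my-switch exists and lists ethX and ethY as ports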
Now, if we were to test connectivity from host-1 to host-2 via the switch using ping, we'd be unsuccessful. Why? Take a look at the route tables on host-1 and host-2 and you'll see that neither host has a route to the other. To fix this, we can assign host-1 and host-2 IP addresses on the same subnet. You can use whatever private IPs you want, but I'll use the following so we can better understand the context of the iperf commands we'll run later:
host-1$ sudo ifconfig ethV 10.0.0.1/24 # set ethV IP on host-1
host-2$ sudo ifconfig ethW 10.0.0.2/24 # set ethW IP on host-2 (must be different from host-1 but on the same subnet)
where ethV is the network interface on host-1 that connects to the LAN leading to the switch and ethW is the network interface on host-2 that connects to the LAN leading to the switch.
After this, we should be able to ping between host-1 and host-2. Verify that you can do this before proceeding.
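A minimal connectivity check, assuming the example addresses above, looks like this:

host-1$ ping -c 3 10.0.0.2 # expect three replies and 0% packet loss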
Run Experiment without a Queue on the Switch
First, let's run an iperf test to establish a baseline. iperf pumps either TCP or UDP traffic into the network (whichever we specify) to test the network's performance. TCP will attempt to completely saturate the link without losing any data; UDP, on the other hand, will send data at a defined rate from the source without regard for whether the data actually reaches the recipient. We'll use UDP for this experiment to gain better control over the data rate.
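For contrast (not needed for the experiments below), a TCP test would simply omit the -u flag on both ends:

host-2$ iperf -s # TCP iperf server
host-1$ iperf -c 10.0.0.2 -t 10 # TCP iperf client; reports achievable throughput rather than packet loss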
We'll designate host-1 as the client and host-2 as the server, where the client will send data to the server (i.e. a file upload).
On host-2:
host-2$ iperf -s -u # run an iperf server (-s) process to accept UDP (-u) on the default port of 5001
And on host-1:
host-1$ iperf -c 10.0.0.2 -u -i 1 -t 10 # run an iperf client (-c <ip>) process that sends UDP (-u) packets to iperf server 10.0.0.2
We'll see that host-1 starts to send data to host-2. The client will print an update every second (-i 1) and will terminate after 10 seconds (-t 10). On host-1 and host-2, we can see the number of packets that failed to make it from host-1 to host-2. We should see that all packets make it: (0 lost)/(total transferred).
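For reference, iperf also exposes knobs for the UDP send rate (-b) and datagram size (-l). We rely on iperf's defaults throughout this project, so the line below is purely a hypothetical example in case you want to explore on your own later:

host-1$ iperf -c 10.0.0.2 -u -b 2M -l 1000 -i 1 -t 10 # hypothetical: send 1000-byte datagrams at 2 Mbit/s (do not use for the graded experiments)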
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- What do the results tell us about the utilization of the queueing system on the switch?
- What is the service rate of the switch? You do not have to give an exact answer.
Setup Queue on the Switch
Let's make things more interesting and set up a queue that we can control on the switch.
To set up our queue, we'll use a program called tc, or traffic control. This is a relatively nice, high-level way to manipulate queueing on our VM network interfaces.
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 1512Bps burst 1512B limit 1512
The queue above should be installed on ethY, where ethY is the interface leading from switch to host-2.
To quickly cover the parameters of interest: "rate XBps" specifies the service rate as X bytes/second, and "burst YB" specifies the queue length as Y bytes. (Ignore the "limit 1512" parameter.) We can manipulate these parameters to achieve the desired network performance.
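At any point, we can inspect the qdisc we installed, along with its sent and dropped packet counters, which is handy when reasoning about the experiments below:

switch$ sudo tc -s qdisc show dev ethY # show the tbf parameters plus sent/dropped statistics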
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- We want to control the flow of traffic from host-1 to host-2 with our queue. Explain why we installed the queue on the interface leading from the switch to host-2 and not on the arriving link from host-1. Hint: What is the "server" or resource of the queueing system that our packets are trying to access at the switch VM, with respect to the host-1-to-host-2 traffic flow?
Run Experiments with a Queue on the Switch
Experiment 1
Let's check the performance of our network between host-1 and host-2 after having added the queue.
If your iperf server is not running anymore, start it up again on host-2:
host-2$ iperf -s -u
And on host-1, start the iperf client:
host-1$ iperf -c 10.0.0.2 -u -i 1 -t 10
Check the results in the iperf server. We should see much worse performance than we did without the queue.
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- How many packets were successfully received by the server? Explain why this makes sense given a packet size of 1512 bytes, a queue length of 1512 bytes, and a service rate of 1512 bytes/second.
- Are there any unaccounted-for packets in your observations from the previous question? If so, where do you think the error lies?
- Is the queueing system stable given the present configuration? Explain.
- Assuming the queueing system cannot be changed from the given configuration, what can be done to increase the reliability of data transfer (i.e. eliminate lost packets) between host-1 and host-2?
Experiment 2
We've observed a very poor-performing queueing system in Experiment 1. Let's try to tune it to increase the throughput between host-1 and host-2 by doubling the size of our queue:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 1512Bps burst 3024B limit 1512
Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Then, let's double the queue size again:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 1512Bps burst 6048B limit 1512
Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- What can you observe about the number of successfully transmitted packets as the queue size increases? Explain your observation.
- Is the queueing system stable given the largest configured queue size of 6048 bytes?
Experiment 3
Now, we're going to see how the service rate impacts the queueing system performance. Double the service rate:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 3024Bps burst 3024B limit 1512
Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Then, let's double the service rate again:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 3024B limit 1512
Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- What can you observe about the number of successfully transmitted packets as the service rate increases? Explain your observation.
- Is the queueing system stable given the largest configured service rate of 6048 bytes/second?
Experiment 4
Now, we're going to combine an increased service rate with an increased queue size:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 4536B limit 1512
Run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Then, let's increase the queue size again:
switch$ sudo tc qdisc replace dev ethY handle 1 parent root tbf rate 6048Bps burst 6048B limit 1512
Once again, run iperf on host-1 and host-2 as we did in Experiment 1, and record the results reported by the server on host-2.
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- What can you observe about the number of successfully transmitted packets? Explain your observation.
- Is the queueing system stable?
- We've seen how an improperly tuned queueing system can impact the throughput of a data transfer. Given what you know about the maximum arrival rate (hint: check the iperf client) and a packet size of 1512 bytes, choose a minimal queue size and service rate for the queueing system that achieve stability. Show all your work and evaluate the performance of your improved queueing system using iperf. Please provide the results of your experiment(s).
Experiment 5
Now, we're going to reverse our data transfer to send packets from host-2 to host-1.
Kill the iperf server on host-2 and start it on host-1:
host-1$ iperf -s -u
And on host-2, start the iperf client:
host-2$ iperf -c 10.0.0.1 -u -i 1 -t 10
Check the results in the iperf server.
Question(s)
Please limit your responses to at most three sentences each, but be specific.
- Do you observe any lost packets when transmitting data in the reverse direction? Please explain your reasoning.
- How could we create a queue to control the performance of the reverse data transfer?
Submission
On the due date indicated in the calendar, please submit a hard copy of your responses to the questions posed inline above.