Table of Contents
...
By default, the SOS controller’s REST API runs at the controller’s IP address on TCP port 8080. Any HTTP client can be used to access the API; the popular utility curl is used as the example HTTP client throughout this document. For more information on curl, please refer to curl’s documentation. The following is a general example of how to use curl:
...
To keep track of which transfers should have SOS performed, the controller maintains a whitelist: all transfers on the list will use SOS, while all transfers absent from it will be handled outside of SOS. By default, the controller emulates an L2 learning switch for all transfers that are not proactively whitelisted.
The following is an example of adding a whitelist entry:
```
curl http://192.168.1.1:8080/wm/sos/whitelist/add/json -X POST -d '{"server-ip-address":"10.0.0.4", "server-tcp-port":"5001", "client-ip-address":"10.0.0.2"}' | python -m json.tool
```
...
which returns:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   191    0   101  100    90  16248  14478 --:--:-- --:--:-- --:--:-- 16833
{
    "code": "0",
    "message": "WhitelistEntry successfully added. The entry may initiate the data transfer."
}
```
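For scripted setups, the same whitelist call can be issued from Python’s standard library instead of curl. The following is a minimal sketch, not part of SOS itself; the controller address is the example address used in this document, and the payload keys mirror the curl example above.

```python
import json
from urllib import request

# Example controller address from this document; adjust for your deployment.
CONTROLLER = "http://192.168.1.1:8080"

def whitelist_payload(server_ip, server_port, client_ip):
    """Build the JSON body for /wm/sos/whitelist/add/json.

    All values are sent as strings, matching the curl example above.
    """
    return json.dumps({
        "server-ip-address": server_ip,
        "server-tcp-port": str(server_port),
        "client-ip-address": client_ip,
    })

def add_whitelist_entry(payload, controller=CONTROLLER):
    """POST a whitelist entry and return the decoded JSON response.

    Requires a live controller, so it is not invoked below.
    """
    req = request.Request(
        controller + "/wm/sos/whitelist/add/json",
        data=payload.encode("utf-8"),
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Build (but do not send) the payload from the example above.
print(whitelist_payload("10.0.0.4", 5001, "10.0.0.2"))
```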
Tune Transfer Parameters
SOS's performance can be tuned by adjusting the number of parallel connections to use between the agents, the agent application's TCP receive buffer size (in bytes) per connection, and the reordering queue length on the receiving agent. The OpenFlow flow timeouts can also be adjusted, but it is recommended that they be left at their defaults of 0 for the hard timeout and 60 seconds for the idle timeout.
To adjust the number of parallel connections to use between the agents, one can do the following:
```
curl http://192.168.1.1:8080/wm/sos/config/json -X POST -d '{"parallel-connections":"4096"}' | python -m json.tool
```
which returns:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    88    0    57  100    31   9961   5417 --:--:-- --:--:-- --:--:-- 11400
{
    "code": "0",
    "message": "Parallel connections set to 4096"
}
```
Likewise, to adjust the TCP receive buffer size in the agent application, one can do the following:
```
curl http://192.168.1.1:8080/wm/sos/config/json -X POST -d '{"buffer-size":"70000"}' | python -m json.tool
```
which returns:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    72    0    49  100    23   8105   3804 --:--:-- --:--:-- --:--:-- 9800
{
    "code": "0",
    "message": "Buffer size set to 70000"
}
```
And lastly, to adjust the agent application's reordering queue length, one can do the following:
```
curl http://192.168.1.1:8080/wm/sos/config/json -X POST -d '{"queue-capacity":"5"}' | python -m json.tool
```
which returns:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    70    0    48  100    22   5710   2617 --:--:-- --:--:-- --:--:-- 6000
{
    "code": "0",
    "message": "Queue capacity set to 5"
}
```
If you would like to experiment with idle timeouts, you may adjust the idle timeout as follows:
```
curl http://192.168.1.1:8080/wm/sos/config/json -X POST -d '{"idle-timeout":"60"}' | python -m json.tool
```
which returns:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    68    0    47  100    21   9712   4339 --:--:-- --:--:-- --:--:-- 11750
{
    "code": "0",
    "message": "Idle timeout set to 60"
}
```
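All of the tuning parameters above (parallel-connections, buffer-size, queue-capacity, and idle-timeout) go through the same /wm/sos/config/json endpoint, differing only in the key they set. A small Python helper can therefore cover them all. This is a sketch under the assumption, matching the curl examples above, that the endpoint accepts one key-value pair per request; the controller address is the example address used throughout this document.

```python
import json
from urllib import request

CONTROLLER = "http://192.168.1.1:8080"  # example address from this document

# Keys accepted by /wm/sos/config/json, per the curl examples above.
CONFIG_KEYS = {"parallel-connections", "buffer-size",
               "queue-capacity", "idle-timeout"}

def config_payload(key, value):
    """Build the one-key JSON body the config endpoint expects."""
    if key not in CONFIG_KEYS:
        raise ValueError("unknown SOS config key: %s" % key)
    return json.dumps({key: str(value)})  # values are sent as strings

def set_sos_config(key, value, controller=CONTROLLER):
    """POST a single tuning parameter; requires a live controller."""
    req = request.Request(
        controller + "/wm/sos/config/json",
        data=config_payload(key, value).encode("utf-8"),
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Build (but do not send) the payloads from the examples above.
print(config_payload("parallel-connections", 4096))
print(config_payload("idle-timeout", 60))
```

The key check is only a convenience so that a typo fails locally rather than with an opaque controller error.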
The idle timeout defaults to 60 seconds, meaning each flow expires and is automatically removed from its switch 60 seconds after that flow's last packet. This is a long time, but it is safe: it gives the agents time to clear their buffers and finish transferring data, which could otherwise be terminated prematurely by a combination of shorter timeouts and poor parallel-connection and buffer-size choices.
Changing the hard timeout is also supported, but it does not make sense at present, so I will not include an example. A hard timeout could serve as a way to evict transfers that have exceeded their allotted share of time, but that feature is not supported. As such, it is recommended that the hard timeout be left at 0 seconds (infinite).
Check Controller Readiness
Before a transfer is initiated, the controller should be queried to ensure all systems are ready to react to the transfer and perform SOS. This works well in a single-user environment where the user performs sequential transfers, and it is the model used at this point. However, a solution is nearing completion that allows the controller to pre-allocate resources for a user during a specific time period (which is more realistic); these are the start and end times indicated in the whitelist REST API above.
To probe the controller to see if it is ready, one can use the status API as follows:
```
curl http://192.168.1.1:8080/wm/sos/status/json -X GET | python -m json.tool
```
which returns:
Initiate Transfer
The first thing to do prior
...