It’s been a while since the last post, so I thought I’d write a follow-up to the article which focused on bandwidth limiting in a datacenter environment using tc and iproute2.

I’ve taken the same script but tweaked the IPs and bandwidth values for my office. Previously I was on a 24mbit down / 2.5mbit up DSL connection courtesy of bethere. The office is only about 800m from the closest exchange, which is quite nice – I generally find I get 18+mbit down and 1.5+mbit up. Not only great bandwidth, but latency is also very low and responsiveness is great, especially as a regular [constant] SSH user. Recently, despite having no business justification whatsoever, I ordered the same again for the same office. This one clocks in at about 19mbit down and 1.7mbit up – even better! Some ISPs support line bonding – I don’t believe that many in the UK do, and seeing as, at the time of writing, bethere were the only ISP to support anywhere close to 24mbit, I wasn’t going to try and find another.

I did have a similar idea of writing a VPN appliance that could run on a local machine/router and connect to a machine that had greater bandwidth than both lines combined:

[ Local Machine ] – [ Nat Router (VPN Client) ] == Over 2 Connections == [ Remote Box in NOC (VPN Server) ]

Complicated, difficult, and it would probably never work. I’m not sure how you’d overcome sequencing issues, delays, congestion, etc., so I decided not to pursue that one.

There’s a great article on lartc that I used as a guideline for creating the split access setup. On my Linux router (Debian Etch, kernel 2.6.24) I used iproute2 and some shell magic [I’ve blocked out the IPs] – eth0 is my local interface. Here’s my routerboard load balancing script:

IF1=eth1 #Interface on conn1 (adjust to match your hardware)
IF2=eth2 #Interface on conn2
IP1=XX.XXX.8.214 #IP on conn1
IP2=YY.YYY.123.235 #IP on conn2
P1=XX.XXX.0.1 #Gateway on conn1
P2=YY.YYY.120.1 #Gateway on conn2
P1_NET=XX.XXX.0.0 #Network address conn1
P2_NET=YY.YYY.120.0 #Network address conn2
ip route add $P1_NET dev $IF1 src $IP1 table 1
ip route add default via $P1 table 1
ip route add $P2_NET dev $IF2 src $IP2 table 2
ip route add default via $P2 table 2
ip route add $P1_NET dev $IF1 src $IP1
ip route add $P2_NET dev $IF2 src $IP2
ip rule add from $IP1 table 1
ip rule add from $IP2 table 2
ip route add default scope global nexthop via $P1 dev $IF1 weight 1 nexthop via $P2 dev $IF2 weight 1
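After running the script, it’s worth sanity-checking that the rules and per-provider tables actually took effect. These are all read-only iproute2 commands, so they’re safe to run at any point:

```shell
# List the policy rules - you should see a "from <IP>" rule per connection
ip rule show
# Inspect each per-provider table: the connection's network route plus a default via its gateway
ip route show table 1
ip route show table 2
# Confirm the balanced default route with both nexthops is in the main table
ip route show | grep -A2 default
```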

Job done, now set up iptables for masquerading:

$IPTABLES -t nat -F
$IPTABLES -t nat -X
$IPTABLES -t filter -F
$IPTABLES -t filter -X
# Masquerade outbound traffic on both uplinks
$IPTABLES -t nat -A POSTROUTING -o $IF1 -j MASQUERADE
$IPTABLES -t nat -A POSTROUTING -o $IF2 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward

Done. We now have net access over both connections. To control how long the kernel caches its routing decisions, I use:

echo "14400" > /proc/sys/net/ipv4/route/secret_interval

14400 seconds is 4 hours. This means that we flush dead routes after 4 hours.
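Rather than hard-coding the value, you can let the shell do the hours-to-seconds arithmetic, which makes the intent obvious:

```shell
# 4 hours expressed in seconds: 4 * 60 minutes * 60 seconds
INTERVAL=$((4 * 60 * 60))
echo "$INTERVAL"   # 14400
# Then apply it (requires root):
# echo "$INTERVAL" > /proc/sys/net/ipv4/route/secret_interval
```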

The system works as follows. Each time an outbound connection is established, the router decides which link to route it over. The method above gives a rough 50/50 split; with further configuration you can change the ratio. The decision is based on congestion and a few other factors. However, this isn’t the true multi-link access you’d receive with a bonded line – a single connection only ever travels over one route. Say, for example, I start a download: it will only ever utilize a single connection, and another download from the same IP will also utilize that connection, as this is where the kernel has cached the route. A third download to a new server will likely be established over the second connection, though, as the first is congested and the second is idle. Four hours after that first download completes, a new decision will be made if the original server is contacted again.
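You can see which link the kernel has picked for a given destination with `ip route get` (the destination address below is just a documentation example, not one of mine):

```shell
# Ask the kernel which route it would use for this destination.
# The "via" gateway and "dev" in the output show which uplink was chosen.
ip route get 198.51.100.7
# Flush the cached decisions immediately instead of waiting for secret_interval:
ip route flush cache
```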

With iptables, using the statistics match in random mode, you could probably achieve a cleaner 50/50 split: if one connection was established over ISP1, a second connection to the same server could be established over the other link with no regard for the existing cached route. However, I wouldn’t recommend doing this at the iptables level.
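For the curious, here’s a rough sketch of what that iptables-level split might look like, using the statistics match together with connection marks so that all packets of one connection stay on the same link (the table number matches the routing tables above; this is untested and, as I said, not something I’d recommend):

```shell
# Mark roughly half of all NEW forwarded connections at random
iptables -t mangle -A PREROUTING -m conntrack --ctstate NEW \
  -m statistics --mode random --probability 0.5 \
  -j CONNMARK --set-mark 1
# Restore the mark onto subsequent packets of the same connection
iptables -t mangle -A PREROUTING -m conntrack --ctstate ESTABLISHED,RELATED \
  -j CONNMARK --restore-mark
# Send marked connections out via table 2 regardless of the cached route
ip rule add fwmark 1 table 2
```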

As a single user, using the net as a regular home user does, I’m not sure how useful such a setup would be or even if any improvement would be noticed.

It works best in a multi-user environment where multiple connections are being opened and closed frequently and simultaneously. It’s also great for torrents such as the debian CD archives: as you’re opening hundreds of connections, the system works perfectly, and I’ve peaked at 3.8MB/sec before.