Global DNS-Based Load Balancing

The Current Situation

Currently many projects, such as public NTP servers, software mirrors/archives, web sites and chat services, consist of a number of identical or near-identical sites (from fewer than 5 to hundreds). Each of these requires the end user (who may not be technically minded) to manually choose one of a large number of servers, adjust their configuration to use it, and possibly change their settings every so often when the server of their choice is no longer online or no longer the best to use.

For our examples we'll assume that example.org is a free software project that has servers in the USA, Japan and Germany from which its software can be downloaded.

Currently they would probably create download.us.example.org, download.jp.example.org and download.de.example.org, all of which would probably be linked off the main page, with additional domains (such as download.uk.example.org and download.za.example.org) pointing at them. People who wish to download the software would either have to explicitly choose the nearest server or (as many would do) would end up just using the default one.

My Proposed Solution

My proposed solution is for the DNS to be manipulated in such a way that all end users can configure the same server name and will automatically be connected to (usually) the closest real server. Thus in the example.org case people would just connect to download.example.org. If they were in the US the DNS would return the IP of the US-based server; if they were in Europe they would tend to get the German server, and so on.

example.org would be able to do this by checking which IP address the DNS query for download.example.org comes from, and matching that IP against its nearest server. Since end users usually use a DNS server relatively close to them, the server closest to the name server will usually also be the closest to the user.

This method is similar (AFAIK) to that used by Akamai and 3-DNS. For a sample host load-balanced by Akamai, try tracerouting/pinging "isux-t.activeupdate.trendmicro.com"

For non-commercial projects, the "view" statement in BIND 9 provides a straightforward way for the source IP of a DNS query to be used to determine the result.
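
As a minimal sketch of how this could look (the client networks, ACL names and zone file names here are invented for illustration, not from a real deployment), a BIND 9 configuration might use views like this:

```
// Which client networks see which answers; real deployments would
// generate these ACLs from the network-group data described below.
acl "us-clients" { 192.0.2.0/24; 198.51.100.0/24; };
acl "de-clients" { 203.0.113.0/24; };

view "us" {
    match-clients { us-clients; };
    zone "example.org" {
        type master;
        file "example.org.us";      // A record points download.example.org at the US server
    };
};

view "de" {
    match-clients { de-clients; };
    zone "example.org" {
        type master;
        file "example.org.de";      // A record points download.example.org at the German server
    };
};

view "default" {
    match-clients { any; };         // everyone else gets a fallback server
    zone "example.org" {
        type master;
        file "example.org.default";
    };
};
```

Views are matched in order, so the catch-all "default" view must come last.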

Detailed implementation

What I propose is that a framework of data and software be created that would be usable by non-commercial projects to implement their own balanced server setups. The data and scripts should be sufficient for most projects to use as-is, although some would want to extend them. At a minimum the project would need:
  1. A list of groups of networks that are associated with each other obtained via examination of the global BGP table and other sources.
  2. One or two test IPs for each of the above.
  3. A simple program (which should be less than 20 lines of Perl) that can be installed on each "server" and can regularly ping each test IP.
  4. Simple scripts to convert (1), (2) and the output of (3) into BIND configuration files.
Item (1) should be obtainable by inspection of the global BGP table. If the full table is obtained (at around the same time) from several sites, networks which share the exact same AS path on all routers should be regarded as originating from the same topological area. I would expect that there would be perhaps 10,000 to 20,000 such groups of networks.
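
The grouping step could be sketched as follows. This is a hypothetical Python illustration (not the project's actual tooling): the table format and vantage-point names are invented, and real input would come from parsed BGP dumps.

```python
from collections import defaultdict

def group_prefixes(tables):
    """Group prefixes that have the exact same AS path at every vantage point.

    `tables` maps vantage-point name -> {prefix: AS-path string}.  Prefixes
    whose path-tuple across all vantage points is identical are assumed to
    originate from the same topological area.
    """
    vantages = sorted(tables)
    # Only consider prefixes visible from every vantage point.
    common = set.intersection(*(set(tables[v]) for v in vantages))
    groups = defaultdict(list)
    for prefix in common:
        key = tuple(tables[v][prefix] for v in vantages)
        groups[key].append(prefix)
    return [sorted(g) for g in groups.values()]

# Invented example data: two prefixes share an origin (AS 7657), one does not.
tables = {
    "lon": {"203.109.252.0/22": "701 7657", "203.173.192.0/19": "701 7657",
            "198.51.100.0/24": "701 64500"},
    "nyc": {"203.109.252.0/22": "2914 7657", "203.173.192.0/19": "2914 7657",
            "198.51.100.0/24": "2914 64500"},
}
```

Here the two AS 7657 prefixes end up in one group because their paths agree at both vantage points.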

Item (2) would require each of the above groups to have one or two test IPs so that servers could determine their distance to the entire group. in-addr.arpa records might be a good starting point for obtaining these. Alternatively we could wait until name servers in each group start to query our project's name servers and then use those.

A sample group entry might look like:

as7657-1:203.109.252.42:203.109.252.0/22,203.173.192.0/19,203.109.144.0/20
This shows group "as7657-1", a test IP of "203.109.252.42" and three networks in the group.
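
Parsing such an entry is trivial; a sketch (in Python rather than Perl, purely for illustration), assuming the colon/comma format shown above:

```python
def parse_group(line):
    """Split a 'name:test_ip:net1,net2,...' group entry into its parts."""
    name, test_ip, nets = line.strip().split(":")
    return name, test_ip, nets.split(",")

entry = "as7657-1:203.109.252.42:203.109.252.0/22,203.173.192.0/19,203.109.144.0/20"
name, test_ip, networks = parse_group(entry)
```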

Item (3) would be a simple Perl program that took the list of test IPs from Item (2) and pinged them every so often. One ping per day or less should be sufficient. Since this program would have to be installed on every server for each project, it should be simple and something that a system administrator can check in minutes. For each project it would forward its data to a central source for the project.
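
To make the shape of item (3) concrete, here is a hedged Python sketch (the text proposes Perl; the flags assume a Linux-style ping, and forwarding the results to the central source is left out):

```python
import re
import subprocess

def ping_rtt(ip, count=1, timeout=5):
    """Average RTT in ms to `ip` via the system ping (Linux-style flags),
    or None if there was no reply."""
    out = subprocess.run(["ping", "-c", str(count), "-W", str(timeout), ip],
                         capture_output=True, text=True)
    # Linux ping summary line: "rtt min/avg/max/mdev = 0.045/0.051/0.060/0.006 ms"
    m = re.search(r"= [\d.]+/([\d.]+)/", out.stdout)
    return float(m.group(1)) if m else None

def measure(groups, rtt_fn=ping_rtt):
    """Ping the test IP of each (group_name, test_ip) pair and return
    {group_name: rtt_ms or None}, ready to forward to the project's
    central collection point (transport not shown)."""
    return {name: rtt_fn(ip) for name, ip in groups}
```

Keeping the measurement this small is the point: an administrator can audit it in minutes before installing it on a server.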

Item (4) would be a simple program each project would run on the data obtained from (3). At its simplest it would find which server obtained the lowest ping time to the test IPs for each group and then ensure that all queries from that group were mapped to that server. I would expect that this would consist of a short Perl program.
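
The selection step at its simplest might look like this (again a Python sketch with invented server names and RTT figures; the output mapping would then be turned into the BIND ACL and view stanzas):

```python
def best_servers(rtts):
    """Given {server_name: {group_name: rtt_ms or None}}, pick the server
    with the lowest RTT for each group.  Groups no server could reach are
    omitted and would fall through to a default view."""
    servers_per_group = {g for per_server in rtts.values() for g in per_server}
    assignment = {}
    for group in servers_per_group:
        candidates = [(per_server[group], server)
                      for server, per_server in rtts.items()
                      if per_server.get(group) is not None]
        if candidates:
            assignment[group] = min(candidates)[1]  # lowest RTT wins
    return assignment

# Invented measurements from three example.org servers.
rtts = {
    "us": {"as7657-1": 140.0, "as64500-1": 20.0},
    "de": {"as7657-1": 310.0, "as64500-1": 95.0},
    "jp": {"as7657-1": 35.0,  "as64500-1": None},  # no reply from this group
}
```

With these numbers, queries from "as7657-1" would be answered with the Japanese server and "as64500-1" with the US one.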

Items (1) and (2) will be implemented by the overall project, while items (3) and (4) above will be implemented by each "customer". I would expect that some customers would replace (4) with their own software that perhaps took account of server load as well as distance, or other factors.

To be done

Items to be done
  • Testing of BIND: Testing needs to be done to make sure that BIND can handle ACLs with tens of thousands of entries and hundreds of "view" entries.
  • Format of data: The format in which the gathered data should be kept must be decided. This should be as simple as possible for easy importation into BIND.
  • Gathering of data: We need to work out the groups of networks and test IPs.
  • Software: The scripts in (3) and (4) need to be written.
  • Large-scale testing: A test rollout should be attempted. The NTP community may be interested here.

Feedback Requested

This is a draft idea, please email me with any comments, potential problems, offers of help or ideas you may have.