E-Voting News and Analysis, from the Experts

Monday November 01, 2004

Designing for DoS in Elections

Filed under: — Joseph Lorenzo Hall @ 5:53 pm PST

Along with all the benefits of using networked technology in election administration and politics come all the bugaboos.

Specifically, we’ve seen intentional and unintentional denial-of-service (DoS). At the time of this writing, the wonderful mypollingplace.com has been brought to its knees by the heightened attention of voters trying to find their polling place (no word on whether this is malicious). We’ve also seen DoS problems in Georgia, Tennessee, Florida, and Texas with the failure of electronic “poll books” in polling places connected to central registration databases. (Note: Here is a complete list of poll-book problems courtesy of VotersUnite! and their database of problems reported in the news for this election.)

In fact, the gracious Rob Malda allowed the VVF/EFF folks to plant a story on Slashdot in order to test the resiliency of a few critical web services and contingency plans that will be used by the massive Election Protection Coalition in tomorrow’s election. You might think that all of this is a tad paranoid, but there’s current litigation in New Hampshire involving a plot in the November 2002 election where one partisan group hired a telemarketing firm to keep the voter protection hotline of another group busy for most of E-Day.



  1. Interesting. If this had been noted publicly before, I’m sure more people could have come up with info and resources to avoid it…

    (In particular, round-robin DNS records with a set of reverse WWW proxies are good for avoiding DDoSes against static sites, but it takes a while to get enough proxies set up.)

    Comment by Justin Mason — Monday November 01, 2004 @ 7:15 pm PST

  2. I was requested to elaborate a little on the previous comment, so here goes!

    Basically, a common DDoS attack scenario is for the attacker to use hundreds or thousands of machines to connect to a website, request a URL, wait for the response to start, receive some (or all) of the response data, and disconnect. (This is the general DDoS attack pattern covering most of the recent attacks I’ve heard of.) In this case, a good way to mitigate it is to reduce the number of requests processed by the server. One approach is to set up “reverse proxies”.

    In other words, the DNS record for www.example.com doesn’t list the “real” server; instead it lists the address of a reverse proxy, which is configured to look like a normal web server, but when a request is received, it should reply with data from its cache where possible, and to query the “real” host’s address only if that data is not already cached (or if the cached data is too old and has expired). It’s configured in advance to know what the “real” host’s address is. The Squid HTTP proxy, and similar commercial products, are capable of this.

    It can be made more resilient by setting up multiple reverse proxies, each set up to contact the single back-end server. The DNS A record for www.example.com would then use DNS round-robin load balancing or similar to list all of the multiple reverse proxies, and thereby spread the load from an attack between the proxies.
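    The round-robin spreading can be modelled in a few lines. A real DNS server rotates the order of the A records it returns in each response; the sketch below models that rotation with `itertools.cycle`, and the proxy addresses (drawn from the reserved documentation ranges) are hypothetical.

```python
from itertools import cycle

# Hypothetical addresses for three reverse proxies on separate networks.
PROXY_ADDRESSES = ["192.0.2.10", "198.51.100.20", "203.0.113.30"]

# Round-robin DNS effectively rotates the A-record list per response,
# so successive resolvers end up at different proxies.
_rotation = cycle(PROXY_ADDRESSES)

def resolve_www_example_com():
    """Model one DNS lookup: return the next proxy in rotation."""
    return next(_rotation)
```

    Over many lookups the attack load splits roughly evenly across the proxies, so no single proxy (or its upstream link) absorbs the whole flood.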

    This approach works well for static data like a site’s front page; however, more complex sites use dynamic data fetched from a database in response to specific personalised user data to generate the pages, and putting a reverse proxy in front of those won’t work too well – since each user will need different personalised data. For that data to be personalised, the request really needs to hit the back-end “real” server. I don’t know of a generalised solution to that one ;)

    The difficulty, time-wise, with setting up reverse proxies is that you want multiple reverse proxies on separate networks. (This avoids the problem where an attacker can still attack a single bottleneck – the upstream connection.) But getting multiple machines configured on multiple networks takes time – and often requires volunteer help, which needs social coordination in advance.

    In addition, it takes time for DNS changes to propagate through the DNS infrastructure, so getting the DNS A record for www.example.com changed to list all those proxies won’t happen immediately either. (However, the time taken for this is typically on the order of a few hours.)
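    The "few hours" figure above follows from record caching: a resolver that fetched the old A record just before the change keeps serving it until that record's TTL expires. A one-line sketch of that bound (the function name is mine, not standard terminology):

```python
def worst_case_switchover_seconds(old_record_ttl_seconds):
    """Upper bound on propagation delay: a resolver that cached the old
    A record just before the change serves it until the TTL expires, so
    the last resolver to see the new proxies waits at most one old TTL."""
    return old_record_ttl_seconds
```

    So a record published with a 4-hour TTL can leave some resolvers pointing at the old address for up to 4 hours after the change.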

    Comment by Justin Mason — Monday November 01, 2004 @ 7:44 pm PST
