Duplicate HTTP requests

We first discovered that our webapp was receiving duplicate HTTP AJAX requests from clients, each of which resulted in a database insert. Fortunately jQuery appends a no-cache timestamp to each AJAX request, so we could recognize the duplicate and reject the 2nd request.
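A minimal sketch of that duplicate check, assuming the tag is jQuery's cache-busting `_` query parameter (a millisecond timestamp); the class and method names here are our own illustration, not the production code:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: jQuery's cache-buster ("_" query parameter) uniquely tags each
// AJAX call, so a request whose tag was already seen can be rejected
// before the database insert runs a second time.
public class DuplicateFilter {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given cache-buster value is seen. */
    public boolean firstTime(String cacheBuster) {
        return seen.add(cacheBuster); // Set.add is false for duplicates
    }

    public static void main(String[] args) {
        DuplicateFilter filter = new DuplicateFilter();
        System.out.println(filter.firstTime("1355631180123")); // true
        System.out.println(filter.firstTime("1355631180123")); // false (duplicate)
    }
}
```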

As we tried to narrow down the cause, we found that it happens on both GET and POST requests, across a variety of browsers and network providers. After days of trying to reproduce the behavior, we resorted to browser automation with iMacros and finally managed to reproduce it occasionally. What was surprising was that the client received the response to the 2nd request, which was the rejection message, while the database insert had succeeded! It was absolutely confusing, so we set up the automation again, this time with Wireshark capturing.

The packet analysis confirmed that the client browser was sending the duplicate request. However, Firebug showed that only a single AJAX call was made! It didn’t make sense to any of us until I noticed a pattern in the Wireshark logs: the 1st request was always terminated without a response (either a TCP connection reset or a proper TCP connection close initiated by the server), and the 2nd request was then made on another kept-alive TCP connection. Following that symptom with Google, I chanced upon a blog post that highlighted section 8.2.4 of the HTTP/1.1 RFC (RFC 2616):

If an HTTP/1.1 client sends a request which includes a request body, but which does not include an Expect request-header field with the “100-continue” expectation, and if the client is not directly connected to an HTTP/1.1 origin server, and if the client sees the connection close before receiving any status from the server, the client SHOULD retry the request.

Could this be it? We mocked up an HTTP server using raw Java ServerSockets, intentionally closing the first connection and answering on the second. We logged the requests on the server side and voilà! The mock server showed that it received two HTTP requests, Firebug showed that only one was sent, and Wireshark showed that the browser sent two.
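A self-contained sketch of that experiment with plain ServerSockets. The original test used a real browser as the client; here a client thread stands in for it, retrying once when the connection closes before any status line arrives, as the RFC permits (all names are ours):

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

// Sketch of the ServerSocket experiment: close the 1st connection without
// responding, answer the 2nd, and count how many requests actually arrive.
public class RetryExperiment {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            int port = server.getLocalPort();
            Thread client = new Thread(() -> simulateBrowser(port));
            client.start();

            // 1st request: accept it, then close with no response at all.
            try (Socket first = server.accept()) {
                readRequestLine(first); // request arrives...
            }                           // ...but the connection just closes

            // 2nd request (the silent retry): answer normally.
            int requestsSeen = 1;
            try (Socket second = server.accept()) {
                readRequestLine(second);
                requestsSeen++;
                second.getOutputStream().write(
                    ("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                     + "Connection: close\r\n\r\nok").getBytes(StandardCharsets.US_ASCII));
            }
            client.join();
            System.out.println("server saw " + requestsSeen + " requests");
        }
    }

    // Plays the browser: one logical POST, retried once on premature close.
    static void simulateBrowser(int port) {
        String request = "POST /insert HTTP/1.1\r\nHost: localhost\r\n"
                + "Content-Length: 4\r\n\r\ndata";
        for (int attempt = 1; attempt <= 2; attempt++) {
            try (Socket s = new Socket("localhost", port)) {
                s.getOutputStream().write(request.getBytes(StandardCharsets.US_ASCII));
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
                if (in.readLine() != null) return; // got a status line, done
            } catch (IOException ignored) { }
            // connection closed/reset before any status: retry once
        }
    }

    static void readRequestLine(Socket s) throws IOException {
        new BufferedReader(new InputStreamReader(
                s.getInputStream(), StandardCharsets.US_ASCII)).readLine();
    }
}
```

Although only one logical request is made, the server-side count comes out as two, matching what Wireshark showed.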

Essentially, we had just proved that the browsers adhere to section 8.2.4 of the HTTP/1.1 RFC…

6 Comments

  1. Andrew Otis said,

    December 16, 2012 at 4:13 am

    Hey! Did you guys figure out a way to work around this? I believe I’m having the same problem. You can’t explicitly force an Expect: 100-Continue header when using XMLHttpRequest… what did you end up doing to resolve this?

  2. Andrew Otis said,

    December 16, 2012 at 4:22 am

    Also, did you have any query string as part of your request? I am sending a unique md5 value as a URL parameter just for an additional layer of testing, and in most cases, after making the ajax call *ONCE*, I see 2-4 requests coming to the server in my access logs as well as in some custom logging I’ve configured in my php script. The *last* request to the server contains the correct md5 value; the second most recent request shows the md5 value from the last test/page load. SO strange!!! Sometimes I just get one request on the page load, and in every case where there is just one request, it’s always the correct/current md5 value. It makes my web application completely unusable, as it is extremely ajax heavy; in some cases I am doing an ajax call to insert/delete something in MySQL, then a 2nd call right after that to retrieve the most current list of things (I am using the ajaxq plugin to queue these requests and ensure they fire in the correct sequence, but this problem occurs with or without the plugin). I know I can update some things in my js code to potentially work around this issue, but it’s the principle of the thing now and I want to get to the bottom of this! 🙂

  3. Andrew Otis said,

    December 16, 2012 at 4:30 am

    Oh, and rejecting a database query based on the cachebuster value will not work for me. I’m doing sequential ajax calls, like the example provided above (one to set, one to get), and if the server responds to the get request with the incorrect ID/value, the issue becomes very noticeable to the user 🙂

  4. geek said,

    December 18, 2012 at 9:32 pm

    For us the request was identical, even when we timestamped the request parameters: we received the same parameters on the server. The tell-tale sign for us was a single request from the browser (as observed in Firebug or another browser debugging tool), but multiple requests in Wireshark.

    I see that in your case you are receiving different URL parameters? So I’m not so sure if our cause is the same. You may want to verify that two identical requests were received.

    In our case we needed an immediate fix, and we happened to have a cache server in front, so we configured it to return the same response to the 2nd request (you really want the 2nd request to get a success response, as that is the response the browser will receive). The more correct fix is to design the requests to be idempotent, if possible, so that repeating one always produces the same result.

  5. Thomas said,

    June 7, 2013 at 11:08 pm

    HTTP/1.1 RFC 8.2.4 is not the answer, because the retry also happens when the client *is* directly connected to an HTTP/1.1 origin server; see http://stackoverflow.com/questions/15155014/inconsistent-browser-retry-behaviour-for-timed-out-post-requests

    Although I’m not really sure what “directly connected to an HTTP/1.1 origin server” means: does this only cover explicit proxy (known to the client)?

  6. Prevent Duplicate Form Submission | Shaiekh's Notebook said,

    October 10, 2013 at 12:01 am

    […] Duplicate HTTP requests from browser. […]
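The idempotent design suggested in comment 4 can be sketched minimally as follows; the idempotency-key scheme and all names here are our own illustration, assuming the client tags each logical operation with a unique key:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of an idempotent request handler: a retried request carrying the
// same key returns the stored response instead of inserting again, so the
// browser's silent retry is harmless.
public class IdempotentHandler {
    private final Map<String, String> responses = new ConcurrentHashMap<>();
    private final AtomicInteger inserts = new AtomicInteger();

    public String handle(String idempotencyKey) {
        // computeIfAbsent runs the insert at most once per key
        return responses.computeIfAbsent(idempotencyKey, k -> {
            int id = inserts.incrementAndGet(); // stands in for the DB insert
            return "inserted row " + id;
        });
    }

    public static void main(String[] args) {
        IdempotentHandler h = new IdempotentHandler();
        System.out.println(h.handle("req-42")); // performs the insert
        System.out.println(h.handle("req-42")); // retry: same response, no 2nd insert
        System.out.println("inserts performed: " + h.inserts.get());
    }
}
```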
