Last modified: 2012-10-29 17:24:16 UTC

Wikimedia Bugzilla is closed!

Wikimedia has migrated from Bugzilla to Phabricator. Bug reports should now be created and updated in Wikimedia Phabricator; create a Phabricator account and add your Bugzilla email address to it.
Wikimedia Bugzilla is read-only: any attempt to edit or create a bug report there will show an intentional error message.
To reach the Phabricator task corresponding to a Bugzilla report, remove "static-" from its URL.
You can still run searches in Bugzilla and access your list of votes, but bug reports in Bugzilla are no longer kept up to date.
Bug 13777 - SquidUpdate blocks for a long time on the first URL it is given
Status: NEW
Product: MediaWiki
Classification: Unclassified
Component: General/Unknown (Other open bugs)
Version: 1.12.x
Hardware: All
OS: All
Priority: Low
Severity: normal
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Depends on:
Blocks:
Reported: 2008-04-17 17:16 UTC by Tom Hughes
Modified: 2012-10-29 17:24 UTC
CC: 0 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Tom Hughes 2008-04-17 17:16:28 UTC
SquidUpdate employs a somewhat curious scheme: rather than parsing the HTTP response from Squid, it simply waits until it has received a certain number of bytes or until a certain amount of time has passed.

When it opens the first socket to each server it sends the first request as a test of some sort, then collects the response. Once all the connections are open, it loops reading responses and sending further requests, before finally collecting the last response.

Unfortunately, for the first socket the response has already been collected when the socket was opened, so the later code never reads any more bytes and has to wait for the timeout to expire.
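The flow above can be simulated in a few lines. This is a hypothetical sketch, not the actual SquidUpdate code: FakeSocket, send_purge, read_response, and purge are invented names, and the fake socket just counts queued responses, with an empty read standing in for the timeout wait.

```python
# Minimal simulation of the flow described above (hypothetical names,
# not the real MediaWiki SquidUpdate code). A FakeSocket queues one
# response per request sent; a read with nothing queued models the
# wait-for-timeout behaviour.

class FakeSocket:
    def __init__(self):
        self.pending = 0      # responses available to read
        self.timed_out = 0    # reads that found nothing queued

    def send_purge(self, url):
        self.pending += 1     # squid will answer each request

    def read_response(self):
        if self.pending:
            self.pending -= 1
        else:
            self.timed_out += 1   # models the 200 x usleep(20) wait


def purge(servers, urls):
    socks = []
    for _ in servers:
        s = FakeSocket()
        s.send_purge(urls[0])   # first request sent as a test ...
        s.read_response()       # ... and its response collected here
        socks.append(s)
    for url in urls[1:]:
        for s in socks:
            s.read_response()   # first pass: nothing left to read!
            s.send_purge(url)
    for s in socks:
        s.read_response()       # collect the final response
    return socks


socks = purge(["sq1", "sq2"], ["/a", "/b", "/c"])
print([s.timed_out for s in socks])  # prints [1, 1]: one empty read per socket
```

Each socket's first response is consumed at open time, so the read loop's first pass over that socket finds nothing and, in the real code, burns the full timeout.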

That timeout appears to be intended to be 4 milliseconds, as it is 200 passes of a loop with a 20 microsecond delay call (200 × 20 µs = 4 ms). Very few machines can actually complete a usleep(20) that quickly, though, so in reality each sleep is longer and so is the total timeout.

On the system where I encountered this problem each call to usleep(20) actually took about 40 milliseconds, which when repeated 200 times gave a total timeout of 8 seconds. The result is that each call to SquidUpdate() takes about 8s (or a multiple thereof with more than one Squid server).
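The overrun is easy to reproduce. A quick measurement sketch (Python standing in for the PHP loop, with invented names ITERATIONS and DELAY_US): each 20 µs sleep is rounded up to the OS scheduler's timer granularity, so the loop's total far exceeds the intended 4 ms.

```python
# Measure how long 200 passes of a nominal 20-microsecond sleep
# actually take, illustrating why the intended 4 ms timeout overruns.
import time

ITERATIONS = 200   # passes of the delay loop
DELAY_US = 20      # nominal per-pass delay, in microseconds

start = time.monotonic()
for _ in range(ITERATIONS):
    time.sleep(DELAY_US / 1_000_000)   # nominally 20 us per pass
elapsed_ms = (time.monotonic() - start) * 1000

print(f"intended {ITERATIONS * DELAY_US / 1000:.0f} ms, "
      f"measured {elapsed_ms:.1f} ms")
```

The measured figure varies by kernel and timer resolution, but on most systems it is tens to thousands of times the intended 4 ms, consistent with the 8-second stalls reported here.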
Comment 1 Siebrand Mazeland 2008-08-18 20:29:50 UTC
Is this behaviour still current? (version 1.14alpha)
Comment 2 Tom Hughes 2008-08-21 23:25:05 UTC
It doesn't look like there has been any significant change between 1.12.0 and trunk that would fix this.
