Last modified: 2010-05-15 15:33:29 UTC
Server: SuSE 9.1 pro, Apache 2.0.49, PHP 4.3.4, SSL. Client: Firefox 1.0 (Windows and Linux)

Periodically (about one in ten requests) Firefox will prompt to download index.php as an "application/octet-stream". The problem seems most frequent when clicking an Edit link, but viewing articles directly will also trigger it sometimes. It is intermittent in that the same link will sometimes work fine. Additionally, a moderate percentage of page views take up to a minute to return from the server; of these, some return normal pages and some have the download problem.

The problem appears to happen only over https (OpenSSL); via http it is not seen. It only happens in Firefox; IE and Konqueror both seem okay. No errors are logged on the server.

I have also tested on a SuSE 9.2 server with Apache 2.0.50 and PHP 4.3.8, same result. I have also tested with MediaWiki-1.3.9, same result.
Just some further notes:
- Other PHP applications are running successfully over SSL on this machine, with no similar issues.
- I have tried with and without memcached.

I'm interested to know if others can duplicate this behavior.
The stream that Firefox is prompting to save appears to be a gzipped file of some sort? Headers follow, and I've attached the full file (binary).

HTTP/1.1 200 OK
Date: Thu, 16 Dec 2004 18:01:34 GMT
Server: Apache/2.0.49 (Linux/SuSE)
X-Powered-By: PHP/4.3.4
Set-Cookie: PHPSESSID=9e101ab5143e451d11f0eebae4e6225c; path=/
Content-language: en
Vary: Accept-Encoding,Cookie
Expires: -1
Cache-Control: private, must-revalidate, max-age=0
Content-Encoding: gzip
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Created attachment 174 [details] Sample downloaded content stream.
My last comment led me to look at the output compression options in MediaWiki. Commenting out the compression line (20) in LocalSettings.php solved the symptoms, if not the problem.
=====
## Compress output if the browser supports it
# if( !ini_get( 'zlib.output_compression' ) ) @ob_start( 'ob_gzhandler' );
=====
My guess is that the SSL host is not configured to handle the compressed output in the same way that the normal http host is, even though they're part of the same Apache installation, but that's about the extent of my understanding at this point.
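For what it's worth, a minimal LocalSettings.php sketch of that guess would be to keep compression for plain-http requests and skip it when the request came in over SSL. The $_SERVER['HTTPS'] check here is my own assumption about how one might test for that, not anything from the MediaWiki defaults:
=====
## Compress output if the browser supports it, but leave SSL requests
## uncompressed, per the hypothesis above (sketch only, not a tested fix)
if ( empty( $_SERVER['HTTPS'] ) && !ini_get( 'zlib.output_compression' ) ) {
	@ob_start( 'ob_gzhandler' );
}
=====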
I've been having this same problem with MediaWiki 1.4.0 (final version) running on Apache 1.3.33 (PHP 4.3.10) and Firefox 1.02 on Windows. I'm *not* running SSL. I tried changing the default MIME type on Apache to application/octet-stream, but the reported behavior still happens. Weird.
*** Bug 2256 has been marked as a duplicate of this bug. ***
Some users have reported similar behaviour on WMF sites, although not over SSL. I looked at the contents of the stream and it seems to be gzip compressed, but the content was corrupted (it was obviously a monobook page, but the contents were mangled somehow). This may have been related to smellie's later memory problems, though, rather than a software or configuration issue... speak to sj if you want more information about this, he has been seeing it a lot.
I'm having this problem on Win XP Pro SP2 (Slovenian add.). Klemen Kocjancic
I tried to open http://sl.wikipedia.org/wiki/Lemuel_Cornick_Shepherd_mlaj%C5%A1i and got this on the page: <html><body></body></html>
When you see <html><body></body></html> in Mozilla or Firefox's "View Source" window, that is not actually what was sent. That's inserted mysteriously by Mozilla when the document actually had no contents at all or some other problem occurred.
I don't know if this is the same bug... I was working on http://sl.wikipedia.org/w/index.php?title=Seznam_glasbenih_vsebin&action=submit and got this (plus on other pages):

Warning: mysql_query(): Unable to save result set in /usr/local/apache/common-local/php-1.4/includes/Database.php on line 349
Warning: mysql_query(): Unable to save result set in /usr/local/apache/common-local/php-1.4/includes/Database.php on line 349
Error in fetchObject(): Server shutdown in progress

Backtrace:
* GlobalFunctions.php line 510 calls wfbacktrace()
* Database.php line 505 calls wfdebugdiebacktrace()
* Parser.php line 3087 calls databasemysql::fetchobject()
* Parser.php line 182 calls parser::replacelinkholders()
* GlobalFunctions.php line 1149 calls parser::parse()
* SkinTemplate.php line 321 calls wfgetsitenotice()
* OutputPage.php line 429 calls skinmonobook::outputpage()
* OutputPage.php line 636 calls outputpage::output()
* Database.php line 386 calls outputpage::databaseerror()
* Database.php line 333 calls databasemysql::reportqueryerror()
* Parser.php line 3081 calls databasemysql::query()
* Parser.php line 182 calls parser::replacelinkholders()
* OutputPage.php line 239 calls parser::parse()
* OutputPage.php line 233 calls outputpage::addwikitexttitle()
* Article.php line 1182 calls outputpage::addwikitextwithtitle()
* Article.php line 1145 calls article::showarticle()
* EditPage.php line 313 calls article::updatearticle()
* EditPage.php line 68 calls editpage::editform()
* EditPage.php line 164 calls editpage::edit()
* index.php line 176 calls editpage::submit()
> Periodically (about one in ten requests) will result in firefox prompting to download the
> index.php as an "application/octet-stream".

This problem has existed at least since 1.3, and in my opinion it is unrelated to SSL (because I've never been using SSL) and also to gzip (because every response uses that, not just those that exhibit the problem).

Back when I set up CambridgeWiki, I had the problem constantly. About 1 in 3 requests would exhibit the problem. With newer versions, this seems to have gone down, and it is now running 1.3.7.

However, recently the problem has come up more and more on Wikimedia's own servers. I see it about once in every 10 to 20 requests, but it is more common with &action=edit/history/diff; very rarely when just viewing an article. Unfortunately, since the problem is unpredictable, it is not reproducible, and as such, I cannot show you the response headers sent by the server when it happens.
(In reply to comment #11) > I don't know if this is the same bug... >... That looks like bug 2159.
Incompatibility in "product" between:

MediaWiki: bug 1109 - Intermittent content-type of "application/octet-stream" prompts download window in Firefox
Wikimedia web sites: bug 2159 - http://en.wikipedia.org/wiki/Special:Allpages not working

---
Question: how is "vote limitation" handled at Bugzilla if a bug is "moved" to another "product"?
Regards Reinhardt [[user:gangleri]]
Created attachment 598 [details] Uncompressed page from previous attachment

(In reply to comment #7)
If the first body line is removed (3 characters long: bdf), the file can be decompressed successfully (attached).
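For anyone who wants to repeat that on their own captured stream, a rough PHP sketch of the manual steps might look like the following. The filenames are placeholders, and gzdecode() needs a reasonably recent PHP; on older versions, gzinflate() on the body minus the 10-byte gzip header does the same job:
=====
<?php
// Strip the leading chunk-size line ("bdf" is the chunk length in hex,
// left over from the chunked transfer encoding), then gunzip the rest.
$raw  = file_get_contents( 'sample-stream.bin' );      // placeholder filename
$body = substr( $raw, strpos( $raw, "\r\n" ) + 2 );    // drop the "bdf" line
$html = gzdecode( $body );                             // or gzinflate( substr( $body, 10 ) )
file_put_contents( 'sample-stream.html', $html );
?>
=====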
Is anybody working on this? This is really annoying.
I've still had no success actually reproducing this problem on Wikimedia's servers or my test servers, but was able to confirm it on someone else's server and took some network packet captures to try to figure it out.

The client-side problem may be this Mozilla bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=197426

With output compression on, MediaWiki ends up sending a little bit of data (an empty gzip header) along with, for instance, '304 Not Modified' responses which are not supposed to have any data[1]. Under some circumstances this seems to confuse Mozilla/Firefox, with the results as seen above.

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5

The deadly combination seems to be:
* HTTP 1.1 persistent connections (Keep-Alive on in Apache config)
* PHP's ob_gzhandler engaged in LocalSettings.php
* Splitting HTTP headers and the data block across two packets
* SECRET INGREDIENT: Slowness/size/multiple requests/something??

<speculation>
Since data isn't expected with a 304 response anyway, Mozilla may be considering the request done at the end of the packet. When interpreting results for the next request on the connection, it ends up seeing the extra data from the previous request before the headers. It apparently assumes the server just made a wee little mistake and forgot to send any headers, and treats the data+headers+data as a single set of returned data such as the attached sample stream. Since it looks binary, it offers to download rather than display as text inline.
</speculation>

I'd like anybody experiencing this with their server to try the following, in turn:

a) Try turning off keepalive in httpd.conf:

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive Off

(If it's _on_ and you're still having this problem, let me know.)

b) Disable output compression in MediaWiki:

## Compress output if the browser supports it
/*
if( !ini_get( 'zlib.output_compression' ) )
	@ob_start( 'ob_gzhandler' );
*/

c) Try turning those two back on and apply the experimental patch I'm attaching. This will prevent that extra empty compressed data blob from being output on 304 Not Modified responses.
Created attachment 659 [details] Adds an ob_end_clean() call on 304 Not Modified returns Please test this patch if experiencing this problem on your server.
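For anyone curious before applying it, the idea of the patch is roughly the following; the surrounding code is illustrative only (the condition variable, the function it sits in, and the exact file are placeholders, not the literal diff):
=====
// Sketch of the approach: when we decide to answer with 304 Not Modified,
// throw away whatever ob_gzhandler has buffered so the empty gzip header
// never follows the bodyless response onto the wire.
if ( $modifiedMatches ) {                  // hypothetical condition
	header( 'HTTP/1.1 304 Not Modified' );
	@ob_end_clean();                       // discard the pending output buffer
	return true;
}
=====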
No feedback given so far. I'm marking this FIXED provisionally; I've committed the patch on CVS HEAD and it's been live for a few days.
This problem happened to me for the first time last Saturday (October 28th). It's the same problem as #12. It only happens in monobook. For the last couple of days I have been running simple.css and that is no problem. But when switching to monobook I get, for example:

"You have chosen to open
Special:Recentchanges
which is a: application/octet-stream
from: http://sv.wikipedia.org
Would you like to save this file?"

It happens about every 10 requests, and with any kind of action (edit, article, recent changes..., new article). Firefox 2.0, but also in 1.5.

//StefanB on Swedish Wikipedia http://sv.wikipedia.org/wiki/User:StefanB
Created attachment 2642 [details] attachment to #20
Created attachment 2643 [details] attachment 2 to #20
Created attachment 2644 [details] new attachment to #20
Created attachment 2645 [details] new attachment 2 to #20
Both of these are gzipped with empty contents; that sounds like the (supposedly fixed) problem I mention above. I should check whether we've had a regression...
The original fix broke due to an additional output buffer layer being added recently on our servers; only the topmost buffering layer was canceled, so the compression handler was now left running and started appending its junk again. I've improved the fix in r17452 to peel back *all* output buffers, which should get things happy again. StefanB, please confirm if this resolves things for you.
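Conceptually the improved cleanup loops over every buffering layer instead of cancelling only the top one. This is a sketch of the idea, not the literal r17452 diff:
=====
// Peel back *all* output buffers so ob_gzhandler can't be left running
// underneath a wrapper buffer and append its empty gzip junk to the 304.
while ( ob_get_level() > 0 ) {
	@ob_end_clean();
}
=====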
Yes, I've clicked about 100 links without having the problem, so it seems to be fixed. Thank you.