
Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and apart from displaying bug reports and their history, links may be broken. See T54253, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 52253 - Protocol-relative URLs are poorly supported or unsupported by a number of HTTP clients
Status: NEW
Product: MediaWiki
Classification: Unclassified
Component: General/Unknown
Version: 1.22.0
Hardware: All
OS: All
Priority: Low
Severity: normal
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Depends on:
Blocks:
Reported: 2013-07-30 00:00 UTC by Tim Starling
Modified: 2013-09-02 08:25 UTC
CC: 7 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Tim Starling 2013-07-30 00:00:19 UTC
Protocol-relative URLs might be various kinds of awesome, and they work with every major browser. The problem with them is that they break many things that are not browsers. It's easy to write an app that fetches some HTML with HTTP, but less easy to correctly interpret that HTML. Using obscure features like protocol-relative URLs causes the less carefully written HTTP clients to break.
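
For illustration, a minimal Python sketch (with made-up example URLs) of the resolution step that careful clients perform and sloppy ones skip:

  # Illustrative only: resolving a protocol-relative (network-path) reference
  # against the URL the HTML was fetched from, per RFC 3986.
  from urllib.parse import urljoin

  base = "https://en.wikipedia.org/wiki/Foo"     # page the HTML came from
  href = "//upload.wikimedia.org/example.png"    # protocol-relative href in that HTML

  print(urljoin(base, href))
  # -> https://upload.wikimedia.org/example.png
  #
  # A client that skips this step and treats the href as a path on the same
  # host would instead request
  # https://en.wikipedia.org//upload.wikimedia.org/example.png -- the sort of
  # bogus request a broken client ends up making.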

Access logs demonstrate that there are many broken clients: 
http://paste.tstarling.com/p/HonYcW.html

Note that the browser-like UA strings might not be fake -- Flash and Java apps running under the browser send the UA string of their host. Of the UA strings without "Mozilla" in them, we have:

* Three versions of perl, two Java libraries
* Microsoft BITS (a background download helper, probably used by an offline reader app)
* KongshareVpn (website defunct)
* Instapaper (an offline downloader/reader for Android)
* Googlebot-Image (IP confirmed to be Google)
* A couple of phone models (Dorado, Symbian)

The length of this list is limited by the sample size, not by the actual number of broken scripts. It's a long, long tail.

This is just a gripe bug; I don't have any concrete plan for replacing protocol-relative URLs in our infrastructure. Krinkle asked me about it at I4155c740, so I thought I may as well document the problem.
Comment 1 Alex Monk 2013-07-30 00:36:08 UTC
I'm tempted to say this is an issue with said badly written clients, not MediaWiki, and therefore this is invalid...
Comment 2 Tim Starling 2013-07-30 00:48:05 UTC
(In reply to comment #1)
> I'm tempted to say this is an issue with said badly written clients, not
> MediaWiki, and therefore this is invalid...

It's not about technical correctness, it's about courtesy. Yes, there are hundreds of clients that are broken in this way, and it is the fault of the hundreds of developers who individually wrote those clients, but the easiest place to fix the problem is in MediaWiki and the WMF frontend.

In any case, client compatibility is an essential part of developing web server software. If the site was completely broken in IE or Firefox, you wouldn't say the bug was invalid. The only difference is scale -- whether we should care about the long tail of badly-written HTML parsers. I am saying that we should care.
Comment 3 MZMcBride 2013-07-30 01:43:02 UTC
(In reply to comment #0)
> Protocol-relative URLs might be various kinds of awesome, and they work with
> every major browser.

This bug somewhat annoyingly fails to mention why protocol-relative URLs are being used. Bug 20342 comment 0 gives a decent overview of why protocol-relative URLs were implemented on Wikimedia wikis.

In reality, most (see rough stats below) URLs in articles are simply relative ("/wiki/Foo" or "/w/index.php?title=Foo"), not protocol-relative ("//wikiproject.org/w/index.php?title=Foo"). Resources (bits.wikimedia.org and upload.wikimedia.org) rely on protocol-relativity more than anything else (sending mixed content causes ugly warnings in browsers).

Switching bits and upload to always be HTTPS would resolve a good portion of the issue being discussed here. A few minor edits to interface messages would also go a long way in resolving most of this bug.

(In reply to comment #1)
> I'm tempted to say this is an issue with said badly written clients, not
> MediaWiki, and therefore this is invalid...

We sometimes accommodate stupid clients (as well as stupid users, for that matter). I was somewhat inclined to say that this bug is a duplicate of bug 20342 ("Support for protocol-relative URLs (tracking)") or bug 47832 ("Force all Wikimedia cluster traffic to be over SSL for all users (logged-in and anon)"), but assuming the numbers I ran below are correct, this bug could simply be a tracking bug with a few (relatively easy) dependencies.

---

Recently featured articles on the English Wikipedia [[Main Page]]:
* [[Barber coinage]]
* [[Harold Davidson]]
* [[War of the Bavarian Succession]]


For [[Barber coinage]]:

$ curl -s --compressed "https://en.wikipedia.org/wiki/Barber_coinage" | grep -o '="//' | wc -l
      76

$ curl -s --compressed "https://en.wikipedia.org/wiki/Barber_coinage" | egrep -o '="/[^/]' | wc -l
     330
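
(In these commands, the pattern '="//' counts attribute values that begin with a protocol-relative URL, while '="/[^/]' counts values that begin with a host-relative path such as "/wiki/Foo".)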


For [[Harold Davidson]]:

$ curl -s --compressed "https://en.wikipedia.org/wiki/Harold_Davidson" | grep -o '="//' | wc -l
      46

$ curl -s --compressed "https://en.wikipedia.org/wiki/Harold_Davidson" | egrep -o '="/[^/]' | wc -l
     233


For [[War of the Bavarian Succession]]:

$ curl -s --compressed "https://en.wikipedia.org/wiki/War_of_the_Bavarian_Succession" | grep -o '="//' | wc -l
      91

$ curl -s --compressed "https://en.wikipedia.org/wiki/War_of_the_Bavarian_Succession" | egrep -o '="/[^/]' | wc -l
     362
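
As a quick sanity check, the "simply relative" percentages in the summary below can be reproduced from the counts above with a short Python sketch:

  # Sketch: (protocol-relative count, simply-relative count) per article,
  # hard-coded from the grep output above.
  counts = {
      "Barber coinage": (76, 330),
      "Harold Davidson": (46, 233),
      "War of the Bavarian Succession": (91, 362),
  }
  for title, (proto, plain) in counts.items():
      print(f"{title}: {100 * plain / (proto + plain):.2f}% simply relative")
  # Barber coinage: 81.28% simply relative
  # Harold Davidson: 83.51% simply relative
  # War of the Bavarian Succession: 79.91% simply relative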


Summary:
* [[Barber coinage]]: 81.28% of links are simply relative; of the protocol-relative links (76), 71.05% are from bits (18) or upload (36)

* [[Harold Davidson]]: 83.51% of links are simply relative; of the protocol-relative links (46), 63.04% are from bits (15) or upload (14)

* [[War of the Bavarian Succession]]: 79.91% of links are simply relative; of the protocol-relative links (91), 56.04% are from bits (15) or upload (36)



Of the protocol-relative links, there is a lot of very low-hanging fruit that could easily be made to always use HTTPS and that is deflating the numbers directly above. For example:

* <a href="//wikimediafoundation.org/wiki/Privacy_policy" title="wikimedia:Privacy policy">Privacy policy</a> [twice]
* <a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact Wikipedia</a>
* <a href="//wikimediafoundation.org/"><img src="//bits.wikimedia.org/images/wikimedia-button.png" width="88" height="31" alt="Wikimedia Foundation"/></a>
* <a href="//www.mediawiki.org/"><img src="//bits.wikimedia.org/static-1.22wmf11/skins/common/images/poweredby_mediawiki_88x31.png" alt="Powered by MediaWiki" width="88" height="31" /></a>
* <a rel="license" href="//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License">Creative Commons Attribution-ShareAlike License</a>
* <a rel="license" href="//creativecommons.org/licenses/by-sa/3.0/" style="display:none;"></a>
* <a href="//donate.wikimedia.org/wiki/Special:FundraiserRedirector?utm_source=donate&amp;utm_medium=sidebar&amp;utm_campaign=C13_en.wikipedia.org&amp;uselang=en" title="Support us">Donate to Wikipedia</a>
* <link rel="copyright" href="//creativecommons.org/licenses/by-sa/3.0/" />


These amount to 9 links. Re-running the numbers, we find:

* [[Barber coinage]]:
  protocol-relative links: 76
    bits.wikimedia.org:    18
    upload.wikimedia.org:  36
    interface:              9
    easily eliminated: 82.89%

* [[Harold Davidson]]:
  protocol-relative links: 46
    bits.wikimedia.org:    15
    upload.wikimedia.org:  14
    interface:              9
    easily eliminated: 82.61%

* [[War of the Bavarian Succession]]:
  protocol-relative links: 91
    bits.wikimedia.org:    15
    upload.wikimedia.org:  36
    interface:              9
    easily eliminated: 65.93%




Basically, as I see it, if, instead of griping, you simply provisioned a few more resource servers (upload and bits) and submitted a few edits to MediaWiki messages or changesets to Gerrit, you could resolve somewhere between two-thirds and four-fifths of the problem you're describing in comment 0 without breaking a sweat.

That number gets even higher (probably somewhere around 90–95%) with an additional single edit to [[m:Interwiki map]] and an adjustment to the interlanguage links output, which make up most of the remaining protocol-relative URLs.

For what it's worth.
Comment 4 Tim Starling 2013-07-30 04:06:21 UTC
(In reply to comment #3)
> Switching bits and upload to always be HTTPS would resolve a good portion of
> the issue being discussed here. A few minor edits to interface messages would
> also go a long way in resolving most of this bug.

My estimate on bug 51002 was that sending all traffic through HTTPS would require the HTTPS cluster to be expanded by a factor of 10. Since the relevant metric is connection rate, not object rate, bits and upload would probably be most of that, since browsers open multiple concurrent connections to those servers during a normal request. So you'd be looking at maybe 80 additional servers. Maybe it would be worthwhile, but it wouldn't be cheap, either in terms of capital cost or staff time. 

Writing an nginx module to rewrite the URLs would probably be simpler than setting up a cluster of 80 servers.
Comment 5 Nemo 2013-07-30 11:39:53 UTC
Maybe worth mentioning, due to protocol-relative URLs our feeds do not validate: "id must be a full and valid URL", e.g. <http://validator.w3.org/feed/check.cgi?url=https%3A%2F%2Fwww.mediawiki.org%2Fw%2Findex.php%3Ftitle%3DSpecial%3ARecentChanges%26feed%3Datom%26limit%3D50>

This seems to have caused problems with some version of Thunderbird (bug 44647) and at some point I was unable to use our feeds in Facebook (or maybe FriendFeed; I've not tested recently).
Comment 6 MZMcBride 2013-08-01 06:07:32 UTC
(In reply to comment #4)
> My estimate on bug 51002 was that sending all traffic through HTTPS would
> require the HTTPS cluster to be expanded by a factor of 10. Since the
> relevant metric is connection rate, not object rate, bits and upload would
> probably be most of that, since browsers open multiple concurrent connections
> to those servers during a normal request. So you'd be looking at maybe 80
> additional servers.

This may be a stupid question, but I got asked today and I didn't know the answer: if Wikimedia currently has a fairly large number of Web servers providing HTTP access, couldn't most of those servers be re-provisioned to serve HTTPS instead? I'm not sure why you would need 80 additional servers (not that the Wikimedia Foundation couldn't easily afford them, in any case).
Comment 7 Tim Starling 2013-08-01 06:35:54 UTC
(In reply to comment #6)
> This may be a stupid question, but I got asked today and I didn't know the
> answer: if Wikimedia currently has a fairly large number of Web servers
> providing HTTP access, couldn't most of those servers be re-provisioned to
> serve HTTPS instead? I'm not sure why you would need 80 additional servers
> (not that the Wikimedia Foundation couldn't easily afford them, in any case).

The reason you need more servers is because serving HTTPS is more expensive than serving HTTP, because of the encryption overhead. You could have the same servers doing both HTTP and HTTPS, and I believe that is indeed the plan, but you would still need more servers because the CPU cost of serving an HTTPS connection is higher, so you need more cores to do the same connection rate.

Note that the 80 figure is just the current host count (9) multiplied by 9 and rounded off. Ryan pointed out to me that 4 of those 9 servers are old and have only 8 cores, whereas the newer servers have 24 cores. Since CPU is the limiting factor, it's really the core count that should be multiplied by 9. We have 152 cores currently doing HTTPS, so we would need an extra 1368, implying we need 57 additional 24-core servers.
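
For reference, the same arithmetic as a short Python sketch, using the numbers in this comment:

  # 4 old 8-core servers plus 5 newer 24-core servers currently serve HTTPS;
  # expanding total capacity by a factor of 10 means 9x that core count in
  # additional cores.
  current_cores = 4 * 8 + 5 * 24          # 152
  extra_cores = current_cores * 9         # 1368
  extra_servers = extra_cores // 24       # 57 additional 24-core servers
  print(current_cores, extra_cores, extra_servers)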
Comment 8 Nemo 2013-08-01 11:38:40 UTC
(In reply to comment #7)
> The reason you need more servers is because serving HTTPS is more expensive
> than serving HTTP, because of the encryption overhead. You could have the
> same
> servers doing both HTTP and HTTPS, and I believe that is indeed the plan, but
> you would still need more servers because the CPU cost of serving an HTTPS
> connection is higher, so you need more cores to do the same connection rate.

Didn't Google produce stats claiming that the CPU cost was basically negligible?
Comment 9 Derk-Jan Hartman 2013-09-02 08:25:52 UTC
I just found out that there is an intermittent bug in IE with protocol-relative stylesheet inclusion, which causes IE to sometimes download the URL twice if it is not yet cached.

http://www.stevesouders.com/blog/2010/02/10/5a-missing-schema-double-download/

A Microsoft employee responds in that thread:
"Internal to Trident, the download queue has “de-duplication” logic to help ensure that we don’t download a single resource multiple times in parallel.
Until recently, that logic had a bug for certain resources (like CSS) wherein the schema-less URI would not be matched to the schema-specified URI and hence you’d end up with two parallel requests.
Once the resource is cached locally, the bug no longer repros because both requests just pull the cached local file."
