Last modified: 2011-03-13 18:04:33 UTC
Some browsers (in particular Internet Explorer) may barf and die when served content with application/xhtml+xml. A wiki running in strict XHTML mode for savvy browsers should be able to automatically downgrade for browsers that won't take it, perhaps based on the Accept: headers. Without this, the XHTML MIME type mode is useful only for testing. Note that this may have implications for caching.
A simple way to do this in PHP would be:

    // Pick the richest XML type the client claims to accept, in order of preference.
    if ( stristr( $_SERVER['HTTP_ACCEPT'], 'application/xhtml+xml' ) ) {
        header( 'Content-type: application/xhtml+xml' );
    } elseif ( stristr( $_SERVER['HTTP_ACCEPT'], 'application/xml' ) ) {
        header( 'Content-type: application/xml' );
    } elseif ( stristr( $_SERVER['HTTP_ACCEPT'], 'text/xml' ) ) {
        header( 'Content-type: text/xml' );
    } else {
        header( 'Content-type: text/html' );
    }

However, this is probably something better done at the Squids.
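Whichever layer does the negotiation, the caching implication noted above presumably means telling caches that the response varies by the request's Accept header. A minimal sketch, meant to sit alongside the negotiation code above:

    // Without this, a Squid (or any shared cache) could hand the
    // application/xhtml+xml variant to a browser that only asked for text/html.
    header( 'Vary: Accept' );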
What benefit would this give? The only difference I can see in Firefox is that extended characters don't work (apparently it ignores the <meta> content-type, so you need <?xml version="1.0" encoding="iso-8859-1"?>).
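If the content negotiation above were used, the XML declaration would presumably also need to be emitted only for the XML content types, since XML parsers ignore the <meta> content-type but the prolog is known to push IE 6 into quirks mode when the page is served as text/html. A minimal sketch, assuming a hypothetical $isXml flag set by the negotiation code and an $encoding variable holding the wiki's output encoding:

    if ( $isXml ) {
        // XML parsers ignore the <meta> content-type, so the encoding
        // has to be declared in the XML prolog instead.
        echo '<?xml version="1.0" encoding="' . $encoding . '"?>' . "\n";
    }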
At the moment, the main benefit of using the application/xhtml+xml content-type is for testing, to ensure we're producing well-formed output and not relying on quirks of the HTML renderer. Since we can't yet guarantee well-formed output at all times, we wouldn't actually want to turn on an auto-detect at the moment. The meta content-type is not actually necessary, as a real Content-Type header is used. Its only benefit is for people who save the page to an .html file on disk and reload it from disk.
Reassigning severity to enhancement, taking off the XHTML validity tracking bug.
(In reply to comment #3)
> At the moment, the main benefit of using the application/xhtml+xml
> content-type is for testing, to ensure we're producing well-formed output

There's also the benefit of being able to embed other XML namespaces inside our XHTML, e.g. MathML and SVG.
Keep in mind that Mozilla's XML display is not yet incremental: https://bugzilla.mozilla.org/show_bug.cgi?id=18333
Hi, I am running a wiki with XHTML output in order to view MathML (MathML only works with XHTML in Mozilla browsers), and I think I have some interesting issues here. I have two problems with it:

1. This problem is not serious, but it is a bit annoying: the edit buttons disappear. I haven't investigated it, but the button implementation is apparently not compatible with XHTML.

2. This is the serious problem: every badly formed tag structure makes the page stop working. That is unacceptable for user-authored content. Worse, some of the errors are created by the wiki sanitizer, not the user; I won't post them here, as I will file them as separate bugs.

These are my conclusions:
- The MathML feature will not work unless the wiki outputs XHTML.
- MediaWiki is still not ready for XHTML: any error makes the wiki page crash and burn. It still needs some heavy parser/sanitizer fixes for this to work.

My suggestions:
- Fix the wiki parser/sanitizer so it does not generate badly formed tags.
- Make the sanitizer check and fix the whole page for valid XHTML after all parsing is done, not just the user content (a sketch of such a check follows below).
- Alternatively, make the sanitizer recognize badly formed XHTML after parsing and redirect the user to edit the page, warning where the error is. I don't much like this solution because it forces the wiki user to fix the errors, making the wiki less flexible. Besides, some of the errors are generated by the wiki itself.
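The second suggestion (checking the whole page for well-formedness after all parsing is done) could presumably be built on PHP's libxml bindings. A minimal sketch, assuming a hypothetical $pageXhtml string that holds the fully parsed and sanitized page output:

    // Collect parse errors instead of emitting warnings.
    libxml_use_internal_errors( true );
    $doc = new DOMDocument();
    if ( !$doc->loadXML( $pageXhtml ) ) {
        foreach ( libxml_get_errors() as $error ) {
            // e.g. log the problem, or fall back to serving text/html for this page.
            error_log( 'Not well-formed at line ' . $error->line . ': ' . trim( $error->message ) );
        }
        libxml_clear_errors();
    }

Whether the sanitizer could then repair the markup automatically, rather than just detect the problem, is a separate question.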
I think we're just not going to do this. The web standards world has rebelled against XHTML 2.0 and is looking to HTML 5 work for the future. While we'll still do our best to always make compliant output, serving to typical end-users as application/xhtml+xml just isn't going to be our target. It's more likely that we'll eventually migrate to HTML 5's non-XML formulation.