Last modified: 2006-10-21 16:38:31 UTC

Bug 7646 - Allow Raw XML
Status: RESOLVED INVALID
Product: MediaWiki
Classification: Unclassified
Component: Parser (Other open bugs)
Version: unspecified
Hardware/OS: All / All
Importance: Normal enhancement with 1 vote
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Depends on:
Blocks:
Reported: 2006-10-20 16:57 UTC by Jamie Hari
Modified: 2006-10-21 16:38 UTC (History)
CC List: 0 users
See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Description Jamie Hari 2006-10-20 16:57:21 UTC
Similar to how we allow embedding of raw HTML, JavaScript, PHP, et al.


Implementation:

LocalSettings.php would have a switch allowing small pieces of XML to be
embedded in the content of pages. ( $wgAllowRawXML = true; or false )


Example content formatting IN the articles might look like this:


<article_title>MediaWiki</article_title>

<definition>A fantastic piece of software.</definition>

<type>Wiktionary Entry</type>

<version>1.0</version>


OR


<subject_title>Spider-Man</subject_title>

<real_name>Peter Parker</real_name>

<height>5'11"</height>

<timestamp>12:34:56-12/25/06</timestamp>
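As a sketch of how an external application might consume such fields, the fragments above can be parsed with a standard XML library. This is purely illustrative: the tag names follow the examples in this report and are not a real MediaWiki schema, and the fragments are wrapped in a synthetic root element so the parser accepts them.

```python
import xml.etree.ElementTree as ET

# Hypothetical article body using the raw-XML fields proposed above
# (not a real schema; tag names are taken from this report's examples).
article = """
<subject_title>Spider-Man</subject_title>
<real_name>Peter Parker</real_name>
<height>5'11"</height>
"""

# Wrap the loose fragments in a synthetic root so they form valid XML.
root = ET.fromstring("<article>" + article + "</article>")

# Collect each field into a simple tag -> text mapping.
fields = {child.tag: child.text for child in root}
print(fields["real_name"])  # Peter Parker
```

A client could then synchronize only the fields it cares about instead of re-downloading whole articles.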


Reasoning:

This would allow external applications (which could be web based, or
Windows/Mac/Linux client software) not only to pull full content out of
MediaWiki (similar to the functionality in API bug 208 and XML export), but
also, once they have the contents of an article, to pull detailed, specific
data out of the article itself.

The above situation(s) could be universally useful to Wiktionary and Wikipedia.
Currently, a third-party company called Webaroo(.com) takes full database dumps
of Wikipedia to provide offline content to PDAs and laptops. 

Imagine the power of being able to dynamically pull an entire dictionary, or
parts thereof, to your PDA (offline) and 'synchronize' it periodically. Surely,
these are only a few of the possibilities...

P.S. Is there already some sort of functionality/API/methodology that might
allow this, or something similar?
Comment 1 Aryeh Gregor (not reading bugmail, please e-mail directly) 2006-10-20 20:26:00 UTC
This can already be done in any number of ways, such as storing them in template
parameters that don't show up as anything.  Templates by themselves are pretty
easy for anyone to parse; see also [[Wikipedia:Persondata]].  What advantage
would we get by disappearing anything unrecognized that looks like XML?  And
where is this XML to be output, if anywhere?
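The template-parameter approach mentioned here can be sketched in a few lines. This is not MediaWiki's parser, just a minimal illustration of how easy a simple, non-nested template invocation (in the style of [[Wikipedia:Persondata]]) is to parse; the parameter names are hypothetical.

```python
import re

# Hypothetical wikitext holding structured data in template parameters.
wikitext = "{{Persondata|NAME=Peter Parker|SHORT DESCRIPTION=Superhero}}"

# Grab the parameter list of the first Persondata invocation
# (assumes no nested templates or pipes inside values).
m = re.search(r"\{\{Persondata\|(.*?)\}\}", wikitext)

# Split "key=value" pairs into a dictionary.
params = dict(p.split("=", 1) for p in m.group(1).split("|"))
print(params["NAME"])  # Peter Parker
```

Since template parameters never need to render as visible output, this achieves the same machine-readability without changing the parser.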
Comment 2 Brion Vibber 2006-10-21 16:38:31 UTC
You can do this trivially as a custom extension.

Of course this would never ever be allowed on a public site like Wikipedia since, being 
identical to $wgRawHtml, it would be a huge security risk.


