Last modified: 2005-06-05 15:42:10 UTC

Wikimedia Bugzilla is closed!

Wikimedia has migrated from Bugzilla to Phabricator. Bug reports should be created and updated in Wikimedia Phabricator instead. Please create an account in Phabricator and add your Bugzilla email address to it.
Wikimedia Bugzilla is read-only. If you try to edit or create any bug report in Bugzilla you will be shown an intentional error message.
In order to access the Phabricator task corresponding to a Bugzilla report, just remove "static-" from its URL.
You can still run searches in Bugzilla or access your list of votes, but bug reports in Bugzilla will obviously not be up to date.
Bug 2328 - A Wiki Dedicated Browser
Product: MediaWiki
Classification: Unclassified
Component: Special pages (Other open bugs)
Hardware: PC All
Importance: Normal normal (vote)
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Keywords: parser
Depends on:
Reported: 2005-06-05 02:00 UTC by Diego
Modified: 2005-06-05 15:42 UTC (History)
0 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Description Diego 2005-06-05 02:00:40 UTC
Hi, I want to request the implementation of a simple web script that does not
return a web page but returns the raw wikicode, so that browser programs can
download only the code and parse it on the client PC.

What we gain with this:
- Better browsing experience for all users.
- Reduced bandwidth use.
- Reduced server processing (which is currently a lot for old machines like my MMX).
- Better OS integration (I'm sure that all Linux distributions would add a
program like M$ Encarta but with the content of all wikis).

A feature like this is not hard to do; you just need a script that does not
parse the wikicode and can be called by client programs.
With the same feature, a program like Babylon Translator could be implemented,
using the GPL'd content of a translation Wiktionary, with short content to fit
in small windows.

Also, with the same idea (parsing on the client machine), an option could be
implemented to parse in the client web browser with JavaScript, using the same
script mentioned.
This could even accelerate browsing a lot: the client's cached page wouldn't
need to replace the entire HTML code, just the wikicode and nothing more.

Look, I was thinking of this pseudo-code:

1) The client requests the page.
2) The server sends a wiki main page with the JavaScript parser included.

2 bis) The JavaScript parser can be stored completely in a cookie.
       So, before the server sends the page:
       a) It checks a parser version key in the cookie.
       b) If it is outdated, the script saves the new JavaScript parser in the
       cookie.
       c) The server sends the HTML code with a JavaScript snippet that loads
       and runs the parser from the cookie.

3) The parser finds and stores in a variable the content of the wiki page (the
wikicode).
4) If the user clicks a link, it just re-downloads the new wikicode and
discards the old one; the main HTML does not change.
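The steps above can be sketched roughly as follows (in Python for illustration; the toy wikicode rules, the `WikiClient` class, and the page titles are all invented for this sketch and are not part of MediaWiki):

```python
import re

def parse_wikicode(text):
    """Toy wikicode-to-HTML parser: handles '''bold''', ''italic'' and [[links]]."""
    html = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)           # bold before italic
    html = re.sub(r"''(.+?)''", r"<i>\1</i>", html)
    html = re.sub(r"\[\[(.+?)\]\]", r'<a href="/wiki/\1">\1</a>', html)
    return html

class WikiClient:
    """Simulates the proposed client: the HTML shell (with the parser) is sent
    once; only raw wikicode is re-fetched when the user follows a link."""
    def __init__(self, pages):
        self.pages = pages          # stands in for the server's raw-wikicode endpoint
        self.shell_downloads = 0    # how many times the full HTML shell was sent
        self.content = ""

    def load(self, title):
        if self.shell_downloads == 0:
            self.shell_downloads = 1    # step 2: shell + parser sent once
        raw = self.pages[title]         # steps 3/4: fetch only the wikicode
        self.content = parse_wikicode(raw)

pages = {
    "Main": "Welcome to the '''wiki'''. See [[Help]].",
    "Help": "''Help'' page.",
}
client = WikiClient(pages)
client.load("Main")
client.load("Help")   # second navigation re-downloads only wikicode, not the shell
```

The point of the simulation is the counter: after two navigations the HTML shell was transferred only once, which is the bandwidth saving the proposal is after.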

This method would save a lot of bandwidth and server processing. I think
the server load could be reduced to something like 20%, which is a very
small figure.

People or organizations willing to make a GPL'd documentation wiki of
anything wouldn't need to carry all the data processing on their servers;
people who want to read it would process the data on their own computers.
Comment 1 Brion Vibber 2005-06-05 02:32:18 UTC
Special:Export or action=raw can already be used to return raw wikitext of a page.
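As a sketch of this suggestion, a client only needs to build an `action=raw` URL to get the wikicode instead of rendered HTML (the base URL below is just an example; an actual fetch would use something like `urllib.request`):

```python
from urllib.parse import urlencode

def raw_wikitext_url(base, title):
    """Build the index.php URL whose action=raw parameter makes MediaWiki
    return the page's raw wikicode instead of rendered HTML."""
    return "%s/index.php?%s" % (base, urlencode({"title": title, "action": "raw"}))

url = raw_wikitext_url("https://en.wikipedia.org/w", "Main Page")
# urlencode handles spaces and special characters in the title for us
```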
Comment 2 Diego 2005-06-05 15:42:10 UTC
Excellent, I'm working on it.  :)

But a method to return the search results in XML would be nice too.
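A rough sketch of what consuming XML search results could look like on the client side (the XML shape here is purely hypothetical; the real format would be whatever the server chose to emit):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape for search results, invented for this example.
sample = """<search query="browser">
  <result title="Web browser" snippet="A web browser is a program..."/>
  <result title="Browser wars" snippet="The browser wars were..."/>
</search>"""

root = ET.fromstring(sample)
titles = [r.get("title") for r in root.findall("result")]
# A client program can now render the result list however it likes
```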
