
Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and apart from displaying bug reports and their history, links may be broken. See T4328, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 2328 - A Wiki Dedicated Browser
Status: RESOLVED INVALID
Product: MediaWiki
Classification: Unclassified
Component: Special pages (Other open bugs)
Version: unspecified
Hardware: PC All
Importance: Normal normal (vote)
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Keywords: parser
Depends on:
Blocks:
Reported: 2005-06-05 02:00 UTC by Diego
Modified: 2005-06-05 15:42 UTC (History)
CC List: 0 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Diego 2005-06-05 02:00:40 UTC
Hi, I want to request the implementation of a simple web script that does not
return a web page but instead returns the raw wikicode, so that browser
programs can download only the code and parse it on the client PC.

What we gain with this:
- Better browsing experience for all users.
- Saved bandwidth.
- Saved server processing (which currently is a lot for old machines like my MMX).
- Better OS integration (I'm sure that all Linux distributions would add a
program like M$ Encarta but with the content of all wikis).

A feature like this is not hard to do: you just need a script that does not
parse the wikicode and that programs can call like this:
www.somewiki.com/wc.php?=some_wiki_content
With the same feature, a program like Babylon Translator could be implemented,
using the GPL'd content of a translation Wiktionary, with short content that
fits in small windows.
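
A minimal sketch of such a non-parsing endpoint, written here as a small
Node/TypeScript server rather than the proposed wc.php; the "page" query
parameter and the in-memory page store are illustrative assumptions, not an
existing MediaWiki interface:

import { createServer } from "node:http";

// Illustrative in-memory store; a real wiki would read the wikitext
// from its own database.
const pages = new Map<string, string>([
  ["Main_Page", "'''Hello''' from [[Main Page]] in raw wikitext."],
]);

// The whole point of the endpoint: return the wikicode untouched, so that
// all parsing happens on the client.
createServer((req, res) => {
  const title = new URL(req.url ?? "/", "http://localhost").searchParams.get("page") ?? "";
  const wikitext = pages.get(title);
  if (wikitext === undefined) {
    res.writeHead(404, { "Content-Type": "text/plain; charset=utf-8" });
    res.end("Page not found");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });
  res.end(wikitext);
}).listen(8080);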


Also, with the same idea (parsing on the client machine), an option could be
implemented to parse in the client web browser with JavaScript, using the same
script mentioned above.
This could even speed up the web a lot: the client's cached page does not need
to replace the entire HTML code, just the wiki code and nothing more.

Look, I was thinking of this pseudo-code (a code sketch follows the list).

1) The client calls the page www.somewiki.com
2) The server sends a wiki main page with the JavaScript parser included.

2 bis) The JavaScript parser could be stored completely in a cookie.
       So, before the server sends the page:
       a) It checks a parser version key in the cookie.
       b) If it is outdated, the script saves the new JavaScript parser in the
cookie.
       c) The server sends the HTML code with a JavaScript snippet that loads
and runs the parser from the cookie.

3) The parser finds the content of the wiki page (the wiki code) and stores it
in a variable.
4) If the user clicks on a link, it just re-downloads the new wikicode and
discards the old one, but the main HTML does not change.
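
A rough TypeScript sketch of steps 3 and 4, assuming the raw-wikitext endpoint
sketched above and a data-wiki-title attribute on wiki links; the parser here
is a deliberately tiny placeholder (it only handles '''bold''') rather than a
real wikitext parser:

// Placeholder client-side parser: a real one would cover full wikitext
// syntax; here only '''bold''' is handled to keep the sketch short.
function parseWikitext(wikitext: string): string {
  return wikitext.replace(/'''(.*?)'''/g, "<strong>$1</strong>");
}

// Steps 3-4: download only the wikicode, parse it locally, and swap the
// rendered result into the page without replacing the main HTML.
async function navigateTo(title: string): Promise<void> {
  const url = `https://www.somewiki.com/wc.php?page=${encodeURIComponent(title)}`;
  const wikitext = await fetch(url).then((response) => response.text());
  const content = document.getElementById("content");
  if (content) {
    content.innerHTML = parseWikitext(wikitext);
  }
}

// Intercept clicks on wiki links so only the new wikicode is downloaded.
document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement | null;
  const link = target?.closest<HTMLAnchorElement>("a[data-wiki-title]");
  const title = link?.dataset.wikiTitle;
  if (link && title) {
    event.preventDefault();
    void navigateTo(title);
  }
});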


This method would save a lot of bandwidth and server processing. I think the
server load could be reduced to something like 20%, which is a very small
figure.


People or organizations willing to make a GPL'd documentation wiki of anything
would not need to carry all the data processing on their servers. People who
want to read it would process the data on their own computers.
Comment 1 Brion Vibber 2005-06-05 02:32:18 UTC
Special:Export or action=raw can already be used to return raw wikitext of a page.
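
For illustration, a minimal sketch of the two mechanisms named here, with
en.wikipedia.org and Main_Page as placeholder host and title:

// action=raw on index.php returns the unparsed wikitext as text/plain:
//   https://en.wikipedia.org/w/index.php?title=Main_Page&action=raw
// Special:Export returns an XML dump of the page instead:
//   https://en.wikipedia.org/wiki/Special:Export/Main_Page
const rawUrl = "https://en.wikipedia.org/w/index.php?title=Main_Page&action=raw";
fetch(rawUrl)
  .then((response) => response.text())
  .then((wikitext) => console.log(wikitext)); // raw wikicode, ready for a client-side parser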
Comment 2 Diego 2005-06-05 15:42:10 UTC
Excellent, I'm working on it. :)

But a method to return the search results in XML would be fine too.
