Last modified: 2010-10-12 13:57:47 UTC
I notice that these title requests, such as http://commons.wikimedia.org/w/api.php?action=parse&text=%7B%7BSequenceTitleBlackBG%0A%7CTitle%3D+end+title%0A%7D%7D&title=Sequence%3ACats&format=json, currently use GET XHR calls. This should probably use POST, because of the maximum size of HTTP URIs: "Microsoft states that the maximum length of a URL in Internet Explorer is 2,083 characters, with no more than 2,048 characters in the path portion of the URL." A large piece of CJK-scripted text could run into that limitation.
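To illustrate the limit: each CJK character percent-encodes to 9 characters (three UTF-8 bytes, each as `%XX`), so a fairly modest title text already blows past IE's cap. A minimal sketch (the 250-character sample text is a made-up example, not from the actual sequences):

```javascript
// Base of the action=parse GET request from the report above.
const base = 'http://commons.wikimedia.org/w/api.php?action=parse&format=json&text=';

// Hypothetical 250-character CJK title text. encodeURIComponent turns each
// character into 9 chars ("%E7%8C%AB" for U+732B), i.e. 2,250 chars total.
const cjkText = '猫'.repeat(250);

const url = base + encodeURIComponent(cjkText);
console.log(url.length); // → 2319, past IE's 2,083-character limit
```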
Sounds reasonable. Switched to POST in r74650. The one potential downside is that the action=parse requests made when many users open the same sequence for editing won't be cached. But in practice it may not matter, since editing will be relatively rare and a custom string won't hit the parser cache anyway.
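For reference, the POST variant amounts to moving the parameters into a form-encoded request body. A rough sketch (parameter names are the standard MediaWiki action=parse ones; the helper name and surrounding sequencer code are assumptions for illustration):

```javascript
// Build a POST request for action=parse so the wikitext travels in the
// request body instead of the URL, sidestepping URI length limits.
function buildParseRequest(wikitext, title) {
  const params = new URLSearchParams({
    action: 'parse',
    text: wikitext,
    title: title,
    format: 'json',
  });
  return {
    url: 'http://commons.wikimedia.org/w/api.php',
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: params.toString(), // e.g. "action=parse&text=...&title=...&format=json"
  };
}
```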
(In reply to comment #1)
> The one potential down side is the action parse requests during many users
> opening the same sequence for editing won't be cached. But in practice it may
> not matter since editing will be relatively rare and it won't hit the parser
> cache anyway when editing a custom string.

Custom strings don't hit the parser cache. You can try to have them hit Squid cache by setting an &smaxage param (in which case you're right that you need GET), but AFAIK making such unlikely-to-be-revisited requests cacheable just slows things down: there is virtually no caching benefit, and the Squids need to write all this stuff to cache, then evict it later.
Right. The smaxage param would let you cache the parser output of the many title screens of a long sequence, but since this is just for editing, it's overkill. If we ever move towards SMIL playback (rather than flattened files), this would be better optimized via a server-side SMIL output system that resolves all the dynamic templates and transcluded sequences into a display representation (similar to wikitext -> html). That 'resolved SMIL' file could be cached as output for the Sequence page.