Bug 15402 - Performance optimization for sortable tables
Status: RESOLVED FIXED
Product: MediaWiki
Classification: Unclassified
Component: Parser
Version: unspecified
Hardware/OS: All / All
Importance: Normal enhancement (1 vote)
Target Milestone: ---
Assigned To: DieBuche
Keywords: patch, patch-need-review
Depends on:
Blocks:
Reported: 2008-09-01 02:21 UTC by SharkD
Modified: 2011-04-21 07:32 UTC
CC: 6 users

Attachments:
unified diff file (11.19 KB, patch), 2008-09-01 02:21 UTC, SharkD
HTML demonstration files (167.22 KB, application/zip), 2008-09-02 03:44 UTC, SharkD
unified diff file (10.89 KB, patch), 2008-09-02 21:25 UTC, SharkD

Description SharkD 2008-09-01 02:21:33 UTC
Created attachment 5239 [details]
unified diff file

Various performance optimizations. Some are general in scope (such as storing a value in a variable once before a loop instead of re-evaluating it on every iteration); others are specific to particular portions of code.
Comment 1 Daniel Friesen 2008-09-01 09:24:29 UTC
I'm no JS Guru, but how is this pile of junk optimization? You've got any guides showing that these are actual optimizations?


-	for (var i = 0; i < tableBodies.length; i++) {
+	for (var i = tableBodies.length - 1; i > -1; i--) {
-	for (var i = rowStart; i < table.rows.length; i++) {
+	for (var i = rowStart, n = table.rows.length; i < n; i++) {
-		if (table.rows[i].cells.length > column) {
-			itm = ts_getInnerText(table.rows[i].cells[column]);
+		if (rowCells.length > column) {
+			itm = ts_getInnerText(rowCells[column]);

How is this kind of thing optimized? Firstly, this looks far less readable than normal.
And how is something like "tableBodies.length" different from "n" they're both variables?
Additionally, you've changed things from starting at the start of the page and going to the end, to starting at the end and going to the start. What kind of optimization is this?

I personally dislike this:
-			tables[ti].setAttribute('id','sortable_table_id_'+idnum);
-			++idnum;
+			thisTable.setAttribute('id', 'sortable_table_id_' + idnum++);
This makes the code harder to understand. We avoid this kind of thing inside of core, why should it be done in JS?

And errhm... this?
-	sortfn = ts_sort_caseinsensitive;
...
-	if (itm.match(/^[\d.,]+\%?$/))
-		sortfn = ts_sort_numeric;
...
+	else if (itm.match(/^[\d.,]+\%?$/))
+		var sortfn = ts_sort_numeric;

If a browser followed any pattern of logic, then those statements would basically create the sortfn variable inside of the condition, then discard it right after...
Comment 2 Bryan Tong Minh 2008-09-01 12:32:46 UTC
(In reply to comment #1)
> I'm no JS Guru, but how is this pile of junk optimization? You've got any
> guides showing that these are actual optimizations?
> -       for (var i = 0; i < tableBodies.length; i++) {
> +       for (var i = tableBodies.length - 1; i > -1; i--) {
> -       for (var i = rowStart; i < table.rows.length; i++) {
> +       for (var i = rowStart, n = table.rows.length; i < n; i++) {
> -               if (table.rows[i].cells.length > column) {
> -                       itm = ts_getInnerText(table.rows[i].cells[column]);
> +               if (rowCells.length > column) {
> +                       itm = ts_getInnerText(rowCells[column]);
> How is this kind of thing optimized? Firstly, this looks far less readable than
> normal.
> And how is something like "tableBodies.length" different from "n" they're both
> variables?
> Additionally, you've changed things from starting at the start of the page and
> going to the end, to starting at the end and going to the start. What kind of
> optimization is this?
It is only one variable lookup instead of two. That said, I don't think the performance gain is noticeable. It is also not very readable.

Comment 3 SharkD 2008-09-01 15:50:01 UTC
(In reply to comment #1)
> I'm no JS Guru, but how is this pile of junk optimization? You've got any
> guides showing that these are actual optimizations?
> -       for (var i = 0; i < tableBodies.length; i++) {
> +       for (var i = tableBodies.length - 1; i > -1; i--) {
> -       for (var i = rowStart; i < table.rows.length; i++) {
> +       for (var i = rowStart, n = table.rows.length; i < n; i++) {
> -               if (table.rows[i].cells.length > column) {
> -                       itm = ts_getInnerText(table.rows[i].cells[column]);
> +               if (rowCells.length > column) {
> +                       itm = ts_getInnerText(rowCells[column]);
> How is this kind of thing optimized? Firstly, this looks far less readable than
> normal.
> And how is something like "tableBodies.length" different from "n" they're both
> variables?

See: http://blogs.msdn.com/ie/archive/2006/08/28/728654.aspx

By using "tableBodies.length", both "tableBodies" and its property "length" need to be looked up each time the loop is traversed. By caching the property in its own variable, "n", only a single lookup needs to be done. This is a savings of 50%.
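
For illustration, the two forms side by side (a sketch; "doSomething" is a stand-in, not from the patch):

	// Uncached: "tableBodies" and its "length" property are both
	// resolved on every iteration.
	for (var i = 0; i < tableBodies.length; i++) {
		doSomething(tableBodies[i]);
	}

	// Cached: the length is read once into the local variable "n".
	for (var i = 0, n = tableBodies.length; i < n; i++) {
		doSomething(tableBodies[i]);
	}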


> Additionally, you've changed things from starting at the start of the page and
> going to the end, to starting at the end and going to the start. What kind of
> optimization is this?

In JavaScript, looping in reverse order can result in performance gains: the termination test compares the counter against a constant, and the length property is read only once.
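
Sketched with the same illustrative stand-in as above:

	// "i > -1" compares against a constant, and the length property
	// is read exactly once, at initialization.
	for (var i = tableBodies.length - 1; i > -1; i--) {
		doSomething(tableBodies[i]);
	}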


> I personally dislike this:
> -                      
> tables[ti].setAttribute('id','sortable_table_id_'+idnum);
> -                       ++idnum;
> +                       thisTable.setAttribute('id', 'sortable_table_id_' +
> idnum++);
> This makes the code harder to understand. We avoid this kind of thing inside of
> core, why should it be done in JS?

Again, this change results in the variable being looked up only once.


> And errhm... this?
> -       sortfn = ts_sort_caseinsensitive;
> ...
> -       if (itm.match(/^[\d.,]+\%?$/))
> -               sortfn = ts_sort_numeric;
> ...
> +       else if (itm.match(/^[\d.,]+\%?$/))
> +               var sortfn = ts_sort_numeric;
> If a browser followed any pattern of logic, then those statements would
> basically create the sortfn variable inside of the condition, then discard it
> right after...

In JavaScript, conditional statements don't possess their own scope; only functions do. So this is not an issue.

Also, in the original code the "sortfn" variable is given global scope, which means each lookup has to search the local *and* the global scope to resolve the variable's value.
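
A sketch of the hoisting behavior being described (the wrapper function is illustrative; the ts_* names are from the script):

	function pickSortFunction(itm) {
		var sortfn = ts_sort_caseinsensitive;  // default
		if (/^[\d.,]+\%?$/.test(itm)) {
			// Despite the "var", this assigns to the same
			// function-scoped variable; blocks create no new scope.
			var sortfn = ts_sort_numeric;
		}
		return sortfn;
	}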
Comment 4 SharkD 2008-09-01 16:46:03 UTC
(In reply to comment #3)
> (In reply to comment #1)
> > And how is something like "tableBodies.length" different from "n" they're both
> > variables?
> See: http://blogs.msdn.com/ie/archive/2006/08/28/728654.aspx
> By using "tableBodies.length", both "tableBodies" and its property "length"
> need to be looked up each time the loop is traversed. By cacheing the poperty
> in its own variable, "n", only a single lookup needs to be done. This is a
> savings of 50%.

Whoops. Not sure what I was thinking. The savings are (n-1)/n (not 50%), where n is the number of times the loop body runs.
Comment 5 Aryeh Gregor (not reading bugmail, please e-mail directly) 2008-09-01 19:35:26 UTC
To quote Donald Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."  Optimized code is often a lot less readable and often insignificantly faster, and attempting to optimize will always have a risk of introducing bugs.  Do *not* optimize unless you know that there's some specific performance problem that you're aiming to fix.  Almost none of these changes are likely to be a big performance improvement, and some are probably slower.  For instance:

>-	for ( var i = start; i < finish; i++ ) {
>+	for ( var i = finish - 1; i > start - 1; i-- ) {

This is not going to do anything.  > is not faster than <.  In fact, this is likely to be slower if anything, by comparing against an arithmetic expression on every iteration instead of a plain variable.  The reason that decreasing loops are likely to be slightly faster is when you make a change like

>-	for ( var i = 0; i < nodeList.length; i++ ) {
>+	for (var i = nodeList.length - 1; i > -1; i--) {

which changes a comparison of two variables to a comparison of a variable and a constant.  But even this optimization is trivial compared to the work the loop does: only a very tight loop that runs many times would see a measurable difference here.

Basically, if you want any micro-optimization to any part of the code checked in, please provide benchmarks, before and after, demonstrating that it really does provide some significant benefit.
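
For what it's worth, a minimal timing harness of the kind being requested might look like this (a sketch; all names are illustrative):

	function benchmark(label, fn, runs) {
		var start = new Date().getTime();
		for (var i = 0; i < runs; i++) {
			fn();
		}
		var elapsed = new Date().getTime() - start;
		return label + ': ' + (elapsed / runs) + ' ms per run';
	}

	// e.g. benchmark('sort (old code)', function () { sortColumn(); }, 10);
	// run once against the old code and once against the patched code.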
Comment 6 SharkD 2008-09-02 01:17:18 UTC
I'll try to create a large table and see what difference it makes when sorting.

BTW, here's the largest sortable table I've personally worked on:

http://en.wikipedia.org/wiki/Chronology_of_console_role-playing_games
Comment 7 SharkD 2008-09-02 03:44:48 UTC
Created attachment 5255 [details]
HTML demonstration files

I've attached three demonstration files showing the effect of the various optimizations. Forgive the naming scheme... I added one later and forgot to rename them.

#1. html_table_sorting_original.html - no optimizations at all--more or less the original state of the script before any changes
#2. html_table_sorting_unoptimized.html - includes all optimizations except optimizations to "for" loops
#3. html_table_sorting_optimized.html - includes all optimizations including optimizations to "for" loops

The difference between #1 and #2 is huge: something like a factor of 20.

The difference between #2 and #3 is not so great. After eight trials of sorting the "Platform" column, the average was 578ms for #2 and 537ms for #3--a savings of about 7%.

For the "Release year" column I got an average of 805ms for #2 and 756ms for #3--a difference of about 6%. Not huge, but also not insignificant.

Note also that both the latter two versions include some optimizations (not related to reversing "for" loops) I missed earlier, so I should probably create a new patch to include them.
Comment 8 SharkD 2008-09-02 05:11:00 UTC
(In reply to comment #7)
> ... so I should probably create a new patch to include them.

... but not include the "for" loop optimizations, I guess.

Comment 9 SharkD 2008-09-02 20:25:38 UTC
Another optimization would be to change the regexps from this format:

if (itm.match(/^\d\d[\/. -][a-zA-Z]{3}[\/. -]\d\d\d\d$/))

to this format:

if (/^\d\d[\/. -][a-zA-Z]{3}[\/. -]\d\d\d\d$/.test(itm))
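
The difference, sketched (the sample string is illustrative):

	var itm = '01/Jan/2008';
	// match() builds and returns an array of match results (or null);
	// test() returns only a boolean, so it can skip allocating the results.
	var isDate = /^\d\d[\/. -][a-zA-Z]{3}[\/. -]\d\d\d\d$/.test(itm);  // true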
Comment 10 SharkD 2008-09-02 20:31:39 UTC
Another is to define arrays literally. E.g., change:

var thing = new Array();

to:

var thing = [];
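
For illustration:

	var a = new Array();   // constructor lookup and call
	var b = [];            // literal: no constructor involved
	var c = new Array(5);  // caution: a length-5 array, not [5]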
Comment 11 SharkD 2008-09-02 21:25:41 UTC
Created attachment 5262 [details]
unified diff file

Here's an updated version of the optimizations, minus the edits to the "for" loops.
Comment 12 Aryeh Gregor (not reading bugmail, please e-mail directly) 2008-12-24 23:26:34 UTC
Sorry, I'm just not going to apply any patch resembling the ones you've provided on this bug.  It mixes things that are probably useful with way too much stuff that's ugly (converting if to ternary operator, adding temp variables to avoid array accesses) and unlikely to help anything.  If you want to narrow it down to one patch per exact type of change, with benchmarks for each change individually (probably only two or three that are significant), reopen the bug.
Comment 13 SharkD 2008-12-26 07:23:46 UTC
> It mixes things that are probably useful with way too
> much stuff that's ugly (converting if to ternary operator, 
> adding temp variables to avoid array accesses)

Not sure what you're talking about here. There are no instances of if statements being changed to ternary form. In fact there are only three types of optimizations being made: storing frequently used objects and object properties in variables, using literal definitions to initialize objects, and sorting the regular expressions in order of likelihood.

> and unlikely to help anything.

Also not sure what you're talking about here. The use of variables to store common objects accounts for the bulk of the difference in performance.

-Mike
Comment 14 Aryeh Gregor (not reading bugmail, please e-mail directly) 2008-12-26 15:10:10 UTC
(In reply to comment #13)
> > It mixes things that are probably useful with way too
> > much stuff that's ugly (converting if to ternary operator, 
> > adding temp variables to avoid array accesses)
> 
> Not sure what you're talking about here. There are no instances of if
> statements being changed to ternary form.

-	var ta;
-	if ( doId ) {
-		ta = [doId];
-	} else {
-		ta = window.ta;
-	}
+	
+	var ta = doId ? [doId] : window.ta;

> Also not sure what you're talking about here. The use of variables to store
> common objects accounts for the bulk of the difference in performance.

Each instance of this must be separately justified, because every one makes the code harder to read (more lines of code, and you have to remember what more local variables mean).  For instance, consider the first one in the file:

+			var thisTa = ta[id];
+			var thisId = thisTa[0];

These will each be referenced a couple of times for each element of the ta array (if that's even defined).  But there are also two initializations for each element of the ta array.  thisId is initialized and then used *three times*, and thisTa is initialized and then used *once* (not counting the initialization of thisId).  This optimization is only even possibly useful if the variable is referred to many times -- I'm quite sure that initializing an extra local variable is much more expensive than an extra array access.  If you can demonstrate any detectable performance improvement from this change -- and I mean detectable in normal situations as the code will actually execute, not detectable when you run the code ten million times over in a loop to amplify any differences by a factor of ten million -- I will eat my hat.

And my hat is fairly expensive.

I am not going to commit any optimization unless either 1) it improves or at least does not harm code readability, or 2) the *exact* optimization is shown to give significant performance improvements.  "Significant" means that in some plausible scenario, such as an extremely large sortable table, the difference would actually be noticeable to a normal human being, or at least would come close to being noticeable.  And I'm not going to commit 94 lines changed because some of them, somewhere, cause a significant benefit -- you have to show the exact, specific change that causes the benefit, and I'll commit that, not the rest of the stuff that will do nothing.

Please note that this is not some stubborn idea of mine.  I'm not just refusing to commit this patch because I personally think it's not well done.  I'm quite sure that if I committed it, it would be reverted for exactly the reasons I gave -- just because I have commit access doesn't mean that I can commit anything I like.  Supposed optimizations that reduce code quality are pretty generally reverted in MediaWiki if they don't give evidence of clear improvement.  It doesn't do you or me any good for me to commit things that I believe are bad code.  I can't commit anything that I don't believe is up to the project's standards.
Comment 15 SharkD 2008-12-27 08:07:52 UTC
(In reply to comment #14)
> (In reply to comment #13)
> > > It mixes things that are probably useful with way too
> > > much stuff that's ugly (converting if to ternary operator, 
> > > adding temp variables to avoid array accesses)
> > 
> > Not sure what you're talking about here. There are no instances of if
> > statements being changed to ternary form.
> -       var ta;
> -       if ( doId ) {
> -               ta = [doId];
> -       } else {
> -               ta = window.ta;
> -       }
> +       
> +       var ta = doId ? [doId] : window.ta;

For you to single out a single (and extremely minor) change toward a format that is used 14 other times in the file--and then cite it as "too hard to read"--is completely silly. You're being unreasonable and wasting everyone's time.

> > Also not sure what you're talking about here. The use of variables to store
> > common objects accounts for the bulk of the difference in performance.
> Each instance of this must be separately justified, because every one makes the
> code harder to read (more lines of code, and you have to remember what more
> local variables mean).

Do you really think that coders here are so underqualified that this will seriously throw them off track? Is the average coder who might be affected by this even likely to have his or her changes committed? This would need to be the case for your comment to be meaningful. It's not like you have to remember each variable throughout the entirety of the code. Rather, they're only significant on a block-per-block basis and can be safely forgotten once a block is completed. Anyone who actually knows what they are doing won't be tripped up by this.

> For instance, consider the first one in the file:
> +                       var thisTa = ta[id];
> +                       var thisId = thisTa[0];
> These will each be referenced a couple of times for each element of the ta
> array (if that's even defined).  But there are also two initializations for
> each element of the ta array.  thisId is initialized and then used *three
> times*, and thisTa is initialized and then used *once* (not counting the
> initialization of thisId).  This optimization is only even possibly useful if
> the variable is referred to many times -- I'm quite sure that initializing an
> extra local variable is much more expensive than an extra array access.

In fact, you're wrong. First of all, in the original code, *two* array lookups are done each time, not just one:

ta[id][n]

Secondly, I've measured almost a 20% performance increase when not doing the direct array lookups, with as few as four variables. So, if you're right about something it's significant, but if you're wrong it's not? That seems a bit biased to me.

> If you
> can demonstrate any detectable performance improvement from this change -- and
> I mean detectable in normal situations as the code will actually execute, not
> detectable when you run the code ten million times over in a loop to amplify
> any differences by a factor of ten million -- I will eat my hat.
> And my hat is fairly expensive.

Again, this is silly. It's akin to judging a house to be insufficient because its bricks are small.

> I am not going to commit any optimization unless either 1) it improves or at
> least does not harm code readability, or 2) the *exact* optimization is shown
> to give significant performance improvements.  "Significant" means that in some
> plausible scenario, such as an extremely large sortable table, the difference
> would actually be noticeable to a normal human being, or at least would come
> close to being noticeable.  And I'm not going to commit 94 lines changed
> because some of them, somewhere, cause a significant benefit -- you have to
> show the exact, specific change that causes the benefit, and I'll commit that,
> not the rest of the stuff that will do nothing.

Your criterion of code readability is subjective. A multitude of coding techniques are used inconsistently in the code, reflecting the patchy nature of its growth. There is no *one* standard used throughout. Methods you have deemed deficient are already used with considerable frequency, yet you fail to make yourself heard in those cases.

> Please note that this is not some stubborn idea of mine.  I'm not just refusing
> to commit this patch because I personally think it's not well done.  I'm quite
> sure that if I committed it, it would be reverted for exactly the reasons I
> gave -- just because I have commit access doesn't mean that I can commit
> anything I like.  Supposed optimizations that reduce code quality are pretty
> generally reverted in MediaWiki if they don't give evidence of clear
> improvement.  It doesn't do you or me any good for me to commit things that I
> believe are bad code.  I can't commit anything that I don't believe is up to
> the project's standards.

It does in fact seem like you're making stuff up, because you could have mentioned all this way back when I filed my initial report. Instead, you appear to be tacking on more and more conditions that didn't exist beforehand in each written exchange. For someone who claims to care about not wasting programmers' time, you are doing exactly that.
Comment 16 SharkD 2008-12-27 08:20:01 UTC
(In reply to comment #15)
> In fact, you're wrong. First of all, in the original code, *two* array lookups
> are done each time, not just one:
> ta[id][n]
> Secondly, I've measured almost a 20% performance increase when not doing the
> direct array lookups with as little as four variables. So, if you're right
> about something it's significant, but if you're wrong it's not? That seems a
> bit biased to me.

The *decrease* in performance for a single instance is only 5%.
Comment 17 x00000000 2008-12-27 14:01:27 UTC
(In reply to comment #14)
> I'm quite sure that initializing an
> extra local variable is much more expensive than an extra array access.

No, a variable declared with |var| at function scope is very cheap. It's similar to a register in any decent ECMAScript implementation. Using as many local variables as possible (if each is used more than once) is the most important optimization apart from better algorithms. An array access in ECMAScript usually needs conversion of the index to a string and a hash table lookup. An ECMAScript array is just an object with a special prototype and a magic |length| property. If the array or object is a host object (like in |table.rows[i].cells|), accesses may also need security checks, which are the most expensive thing.

BTW, an additional optimization would be to store |document| in a local variable if used more than once. It's almost guaranteed to need a security check at every access as a global property. And |this| is faster than |window| where they are equivalent (|window|, like |self|, is a self-reference).
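
Sketched (names are illustrative):

	var doc = document;  // one lookup of the host object, security check and all
	for (var i = 0, n = idList.length; i < n; i++) {
		var el = doc.getElementById(idList[i]);  // cheap local lookup each pass
		// ... work with el ...
	}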
Comment 18 Brion Vibber 2008-12-27 19:59:33 UTC
If I may interject -- please make a *minimal* patch for the sorting optimization and include some basic benchmark data. (something like "Sorting this table took 2000ms with the old code and 150ms with the new code, on a machine with this configuration and with this browser")

Don't change style or code from other functions that's not vital to this particular issue, so we can actually discuss the merits of the relevant code instead of making general arguments about why very large patches with vague descriptions are to be avoided.
Comment 19 Aryeh Gregor (not reading bugmail, please e-mail directly) 2008-12-27 23:33:24 UTC
(In reply to comment #15)
> For you to single out a single (and extremely minor) change toward a format
> that is used 14 other times in the file--and then cite it as "too hard to
> read"--is completely silly. You're being unreasonable and wasting everyone's
> time.
> 
> . . .
> 
> Do you really think that coders here are so underqualified that this will
> seriously throw them off track? Is the average coder who might be affected by
> this even likely to have his or her changes committed?
>
> . . .
>
> You're criteria of code readability is subjective. A multitude of coding
> techniques are used inconsistently in the code, reflecting the patchy nature of
> its growth. There is no *one* standard used throughout. Methods you have deemed
> deficient are already used with considerable frequency, yet you fail to make
> yourself heard in these cases.
> 
> ...
> 
> It does in fact seem like you're making stuff up

You know what, forget it.  If you're not willing to listen to my code review, you can get someone else to review and commit your patches.  I've been patient, but I'm no longer interested in dealing with you.
Comment 20 Aryeh Gregor (not reading bugmail, please e-mail directly) 2008-12-27 23:52:58 UTC
To expand a little.  If you want code committed to MediaWiki, you will have to write it in a form that one or more people with commit access is willing to review and commit.  If you think the way you're asked to do things is stupid, that's fine, you have every right to your opinion, but MediaWiki is not a pluralistic democracy where everyone gets to write code that they personally think is good.

If you think your code is good and MediaWiki developers think it's bad, you have two choices.  You can acknowledge that when writing patches for MediaWiki, it's the MediaWiki developers who have the right to decide what goes into their project, and write your patches to their standards (however silly you find them).  Or you can hold your nose up in the air and refuse to do things the way the people with commit access are asking you to, and then your code won't get committed.  You've so far chosen the latter path as often as not, and that simply makes it a waste of time for me to bother reviewing your code.

And regardless of coding standards, being rude to people is a surefire way of getting them to give up on you.  I know you're a volunteer, but so am I.  You're under no obligation to write patches, but I'm under no obligation to review or commit them.  Regardless of any other circumstances, if you don't even take the basic step of being polite to me, I'm not going to be willing to review your code, because it's not going to be worth the unpleasantness.
Comment 21 SharkD 2008-12-28 05:10:45 UTC
(In reply to comment #18)
> If I may interject -- please make a *minimal* patch for the sorting
> optimization and include some basic benchmark data. (something like "Sorting
> this table took 2000ms with the old code and 150ms with the new code, on a
> machine with this configuration and with this browser")
> Don't change style or code from other functions that's not vital to this
> particular issue, so we can actually discuss the merits of the relevant code
> instead of making general arguments about why very large patches with vague
> descriptions are to be avoided.


The latest patch *is* minimal, in that it only includes changes pertinent to the scope of this submission, i.e. changes that improve the performance of the script, or changes required for those improvements to function properly.

Secondly, I've *already* included a set of files which you can use yourself to benchmark the performance increases. On my system, performance improved from ~14000 milliseconds to between ~1400 and ~750 milliseconds, depending on which column of the table you click.

-Mike
Comment 22 SharkD 2008-12-28 05:22:32 UTC
(In reply to comment #20)
> And regardless of coding standards, being rude to people is a surefire way of
> getting them to give up on you.  I know you're a volunteer, but so am I. 
> You're under no obligation to write patches, but I'm under no obligation to
> review or commit them.  Regardless of any other circumstances, if you don't
> even take the basic step of being polite to me, I'm not going to be willing to
> review your code, because it's not going to be worth the unpleasantness.

*I'm* being rude? I've been continually assailed with rhetorical nonsense meant to belittle me, such as "How is this kind of thing optimized?", "What kind of optimization is this?" and "And don't give me the ... crap"; not to mention your own joking about my treatment in this regard ("Come on, guys, don't bite the newbies."), as well as your *hat*.

Talk about stroking your ego.

-Mike
Comment 23 SharkD 2008-12-28 05:24:04 UTC
(In reply to comment #17)
> BTW, an additional optimization would be to store |document| in a local
> variable if used more than once. It's almost guaranteed to need a security
> check at every access as a global property. And |this| is faster than |window|
> where they are equivalent (|window|, like |self|, is a self-reference).

Thanks for the tip.

-Mike
Comment 24 SharkD 2008-12-28 05:55:44 UTC
(In reply to comment #19)
> You know what, forget it.  If you're not willing to listen to my code review,
> you can get someone else to review and commit your patches.  I've been patient,
> but I'm no longer interested in dealing with you.

Then why are you still doing so? The fact is that you're just flaunting your memorization of "policies" and "rules" and know next to nothing about the actual subject. Do you think this makes your actual involvement meaningful?

-Mike
Comment 25 SharkD 2008-12-30 03:59:39 UTC
It seems that most if not all of the optimizations don't affect Firefox 3. You can see this with the HTML demonstration files I uploaded. On my machine, sorting each column takes around 14 seconds in IE7 without the optimizations, and between 0.5 and 1.5 seconds (depending on the column) with the optimizations. In Firefox 3 there's no measurable difference as far as I can tell; in either case it is roughly as fast as IE7 with the optimizations.

-Mike
Comment 26 Roan Kattouw 2009-01-07 14:28:04 UTC
(In reply to comment #25)
> It seems that most if not all of the optimizations don't affect Firefox 3. You
> can see this with the HTML demonstration files I uploaded. On my machine,
> sorting each column takes around 14 seconds in IE7 without the optimizations,
> and between 0.5 and 1.5 seconds (depending on the column) with the
> optimizations. In Firefox 3 there's no measurable difference as far as I can
> tell; in either case it is roughly as fast as IE7 with the optimizations.
> 
> -Mike
> 

I guess that means Firefox 3 optimizes stuff on its own pretty well.

(In reply to comment #24)
> (In reply to comment #19)
> The fact is that you're just flaunting your
> memorization of "policies" and "rules" and know next to nothing about the
> actual subject. Do you think this makes your actual involvement meaningful?
This wasn't directed to me, of course, so I'll just note that to get code committed, you need to write code that works well (which means you gotta know what you're doing pretty well) *and* follow our rules concerning stuff like code style. Even if someone can't help you out on the first part, they can (and, in this case, do) still point out which rules you aren't following, which is obviously meaningful.
Comment 27 SharkD 2009-01-11 01:44:55 UTC
(In reply to comment #26)
> This wasn't directed to me, of course, so I'll just note that to get code
> committed, you need to write code that works well (which means you gotta know
> what you're doing pretty well) *and* follow our rules concerning stuff like
> code style. Even if someone can't help you out on the first part, they can
> (and, in this case, do) still point out which rules you aren't following, which
> is obviously meaningful.

The only person who has mentioned any rules with any specificity was Ilmari Karonen, and that was on the Wikipedia Village Pump.

-Mike
Comment 28 Mike.lifeguard 2009-01-11 01:49:02 UTC
(In reply to comment #6)
> I'll try to create a large table and see what difference it makes when sorting.
> 
> BTW, here's the largest sortable table I've personally worked on:
> 
> http://en.wikipedia.org/wiki/Chronology_of_console_role-playing_games
> 

If a large sortable table is needed for testing, http://meta.wikimedia.org/wiki/User:COIBot/XWiki is quite large on bad days (like today)
Comment 29 DieBuche 2011-04-21 07:32:36 UTC
r86088 is faster than the proposed patch; marking as fixed.
