Last modified: 2013-01-22 20:59:18 UTC
Spezial:Import should not try to import and execute the whole file at once. I think it would be better if the import were done step by step, page by page, and the result/progress were displayed via AJAX. Then even bigger files could be imported with a shorter execution time and a smaller memory limit.
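The page-by-page idea could look roughly like the following sketch, which streams a MediaWiki XML export with `iterparse` and frees each page element after processing, so memory use stays bounded regardless of file size (the namespace URI and the inline sample dump are illustrative assumptions, not the exact export schema version any given wiki produces):

```python
# Hypothetical sketch: stream a MediaWiki XML dump page by page instead of
# loading the whole file, so memory stays flat and progress can be reported
# incrementally (e.g. to an AJAX poll) after each page.
import xml.etree.ElementTree as ET
from io import StringIO

# Assumed export namespace; real dumps state their own schema version.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

SAMPLE = """<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/">
  <page><title>Page A</title><revision><text>one</text></revision></page>
  <page><title>Page B</title><revision><text>two</text></revision></page>
</mediawiki>"""

def import_pages(xml_file):
    """Return (title, revision_count) per page, clearing each element after use."""
    imported = []
    for event, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            revs = len(elem.findall(NS + "revision"))
            imported.append((title, revs))  # here a real importer would write to the DB
            elem.clear()  # drop the processed page so memory does not grow
    return imported

result = import_pages(StringIO(SAMPLE))
```

Each completed page is a natural point to update a progress counter that an AJAX request could poll.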
The import process could be improved enormously by loading the file and then processing it in the background, instead of making the user wait until the import has completed.
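A minimal sketch of that background idea, assuming a worker thread and a job object the client polls for progress (the `ImportJob` class and the sleep stand-in are hypothetical, not MediaWiki code):

```python
# Hypothetical sketch: hand the uploaded dump to a background worker and
# return immediately; the client then polls the job object for progress.
from concurrent.futures import ThreadPoolExecutor
import time

class ImportJob:
    """Tracks progress of one import so a status request can report it."""
    def __init__(self, pages):
        self.total = len(pages)
        self.done = 0

    def run(self, pages):
        for page in pages:
            time.sleep(0.01)  # stand-in for importing one page
            self.done += 1

pages = ["Page A", "Page B", "Page C"]
job = ImportJob(pages)
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(job.run, pages)
future.result()  # in a real server the request returns at once and the client polls job.done
```

The point of the design is that the HTTP request that accepts the upload is decoupled from the request(s) that report progress.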
It's a good idea; I've thought of it myself... Actually I think the ideal import process should resemble a DVCS one, like 'git fetch' :-) And just out of interest - what are your use cases for import, what do you use it for? (We use it for regular replication between wiki installations.)
I use it to aggregate pages from different wikis under CC BY-SA (mainly the German Wikipedia), add semantic annotations to them in my wiki, and add further information. Because of this I have to import the complete history (as required by the CC BY-SA license), which can take very long for big and old pages.