Last modified: 2014-10-07 03:36:05 UTC
I think the servers currently insist on receiving the complete file before verifying its size. It would be better if they only accepted the portion allowed by the upload quota. In other words, if a user is uploading a 500MB file and the quota is 5MB, the upload should be canceled the moment it exceeds 5MB rather than waiting for the entire 500MB file to arrive. This can also be exploited maliciously.
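To illustrate the idea (a rough sketch only; $maxUploadSize is a made-up quota variable, not an existing setting): compare the size the client declares against the quota and refuse before doing any work. For a normal form upload PHP has already read the body by the time the script runs, so a real early abort would still need lower-level support.

    <?php
    // Sketch only: refuse an oversized upload based on the declared
    // Content-Length. $maxUploadSize is a hypothetical quota, 5MB here.
    $maxUploadSize = 5 * 1024 * 1024;

    $declared = isset( $_SERVER['CONTENT_LENGTH'] )
        ? (int)$_SERVER['CONTENT_LENGTH'] : 0;

    if ( $declared > $maxUploadSize ) {
        // For a standard multipart POST, PHP has already consumed the body
        // by this point, so this only gives a clean error message; truly
        // aborting the transfer early needs support from PHP / the server.
        header( 'HTTP/1.1 413 Request Entity Too Large' );
        echo "Upload exceeds the {$maxUploadSize}-byte quota.";
        exit;
    }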
I'm pretty sure this would require special support from PHP to abort the connection early; not sure if that's present.
Not sure whether the information from http://au2.php.net/manual/en/function.stat.php would be helpful or not.
Nope.
(In reply to comment #0)
> This can also be exploited maliciously.

I don't see how; please send details to security@wikimedia.org.
Email was sent to security@wikimedia.org
Summary is roughly "if you upload a lot of stuff, the servers' drives may fill up". Well, yeah. That's really a pretty general issue, since we keep all the submitted data even if it's deleted. ;)

The only operational issue specific to this would be:

1) If you upload a file much, much larger than the file upload limit, then
2) it may fill up the victim web server's disk during the process.

That's a question of how PHP handles its upload limit. Uploaded file data is saved to temporary files, usually in a /tmp directory, and discarded at the end of the request. The question is: if your upload is _bigger_ than the file size limit, does PHP discard the file *at the point where the limit is reached* or only *after saving the entire file*? I would expect it to discard the file at the point where the limit is reached, but we'd have to check...

Ok, a quick peek at PHP's rfc1867.c, which implements HTTP file upload handling, suggests it deletes the temporary file when the upload is canceled, which is the safest course.

Filling up servers' /tmp areas could have some DoS consequences, but that's about the limit of it even if there were a problem. To perform the attack you'd have to upload a large number of files _under_ the size limit and try to keep them there as long as possible. On Wikimedia's own configuration you'd also have to do enough of these to keep several hundred separate web servers *all* full. An annoyance at best, and hardly worth it.
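For reference, this is roughly how an over-limit upload looks from userland; the error constants are standard PHP, and the form field name 'file' is just an example:

    <?php
    // Sketch: inspecting the standard PHP upload error codes. By the time
    // the script runs, an over-limit temp file has (per the rfc1867.c
    // reading above) already been discarded.
    if ( isset( $_FILES['file'] ) ) {
        switch ( $_FILES['file']['error'] ) {
            case UPLOAD_ERR_OK:
                // Temp file is at $_FILES['file']['tmp_name'] until the end
                // of the request; move it or PHP will clean it up.
                break;
            case UPLOAD_ERR_INI_SIZE:  // exceeded upload_max_filesize
            case UPLOAD_ERR_FORM_SIZE: // exceeded MAX_FILE_SIZE from the form
                echo 'The uploaded file exceeds the configured size limit.';
                break;
            case UPLOAD_ERR_PARTIAL:
                echo 'The file was only partially uploaded.';
                break;
        }
    }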
Any improvements in handling behavior may have to happen upstream (e.g., better error reporting), assuming there's even a good way of handling it. You'd have to cut off the HTTP connection, I guess, so I'm not sure how you'd report the error cleanly.
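One userland trick for at least reporting the error cleanly, in the case where the whole POST exceeds post_max_size: PHP then drops the body and hands the script empty $_POST/$_FILES arrays, but the declared Content-Length is still visible, so the script can tell the user what happened instead of showing a blank form. A sketch, not existing behavior:

    <?php
    // Sketch: detect that the POST body exceeded post_max_size. PHP drops
    // the body silently, leaving $_POST and $_FILES empty, but the declared
    // Content-Length is still available for a friendlier error message.
    $postLimit = ini_get( 'post_max_size' ); // e.g. "8M"

    if ( $_SERVER['REQUEST_METHOD'] === 'POST'
        && empty( $_POST ) && empty( $_FILES )
        && (int)$_SERVER['CONTENT_LENGTH'] > 0
    ) {
        echo "The upload was too large (limit is $postLimit); nothing was saved.";
    }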