Last modified: 2011-11-29 03:20:56 UTC
The download progress page hasn't been updated since 12-Feb-2009 at 06:52 UTC. Either the dump processes are dead, or the process that updates the web page is.
The box they were running on (srv31) is dead. We'll reassign them over the weekend if we can't bring the box back up.
The dump processes are dead again since February 26.
Bah... the machine went down again and we didn't get the worker threads restarted afterwards. Starting them now...
Is it possible to run five dump processes as before? Two processes are not enough, because one is occupied with enwiki while the other works through nlwiki, then frwiki, then dewiki. That takes far too long for a running dump system.
The process for nlwiki seems to have been dead since yesterday at 18:55. Can you please restart with five processes as before?
There is a new problem. Look at:
http://download.wikipedia.org/plwiki/20090407/
http://download.wikipedia.org/plwiki/20090406/
http://download.wikipedia.org/plwiki/20090405/
http://download.wikipedia.org/plwiki/20090404/
http://download.wikipedia.org/plwiki/20090403/
http://download.wikipedia.org/plwiki/20090402/
or
http://download.wikipedia.org/ruwiki/20090407/
http://download.wikipedia.org/ruwiki/20090406/
http://download.wikipedia.org/ruwiki/20090405/
Each of these aborts at the step "Creating split stub dumps...", and the next day a new attempt starts. With only two dump processes this is a real problem. Is this step really necessary? If not, could you skip it? Before January 29, 2009 this step produced status output, see: http://download.wikipedia.org/frwikisource/20090129/. But since January 31, 2009 there is no output, see: http://download.wikipedia.org/dewikinews/20090131/. Best regards, Andim
The same problem occurs at:
http://download.wikipedia.org/ptwiki/20090418/
http://download.wikipedia.org/ptwiki/20090417/
http://download.wikipedia.org/ptwiki/20090416/
http://download.wikipedia.org/ptwiki/20090415/
It seems there is a problem with "Creating split stub dumps..." on the larger wikis.
All the dumps are back on and have been working since last Friday, 5/1. We've added capacity and are monitoring this closely.