Last modified: 2010-01-07 10:31:47 UTC
Bots are assumed to log in before making any changes. As far as I can determine there are two ways to do that, and both start with /w/. In robots.txt, bots are denied access to /w/. So while a bot is supposed to log in to be conformant, a conformant bot actually can't log in.
That's what the API is for.
(In reply to comment #1)
> That's what the API is for.

Nice theory, but that doesn't really work, because the API is under the /w/ directory. :-) http://en.wikipedia.org/w/api.php
Hi,

It seems my mention of two ways to log in is causing confusion for some. The two ways I know for a bot to log in are:

* http://wiki.project.org/w/api.php, which is under /w/, disallowed by robots.txt.
* http://wiki.project.org/w/index.php, which is under /w/ as well.

How, then, does the standards-aware programmer make his bot log in?

(I'd continue about the actual changing of pages having the same problem, but as it happens my own bot is just a reader. Then again, the new version of the bot could just as easily write, if there were a way to do that legally.)
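To make the bind concrete: here is a minimal sketch of what a conformant bot does, using Python's standard urllib.robotparser against a robots.txt excerpt mirroring the rule described above (the bot name is hypothetical).

```python
from urllib.robotparser import RobotFileParser

# Excerpt mirroring the site's robots.txt rule described in this report
robots_txt = """\
User-agent: *
Disallow: /w/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A conformant bot consults robots.txt before fetching anything:
print(rp.can_fetch("MyBot/1.0", "http://en.wikipedia.org/w/api.php"))        # False
print(rp.can_fetch("MyBot/1.0", "http://en.wikipedia.org/wiki/Main_Page"))   # True
```

So the login endpoint is exactly the path a standards-respecting bot refuses to touch, while ordinary article pages remain allowed.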
Robots.txt is meant to apply to general web crawlers, not to bots designed specifically to work with the MediaWiki software. In any case, the proper way for a bot to access modern versions of MediaWiki is via api.php only; if the API can't do something you need to do, file a bug on it. I'm closing this one as INVALID.