Last modified: 2010-05-15 15:41:11 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and apart from displaying bug reports and their history, links may be broken. See T10338, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 8338 - spider repellent for boring pages
Status: RESOLVED INVALID
Product: MediaWiki
Classification: Unclassified
Component: Parser
Version: 1.7.x
Hardware/OS: PC Linux
Importance: Low minor
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Depends on:
Blocks:
Reported: 2006-12-20 17:23 UTC by Dan Jacobson
Modified: 2010-05-15 15:41 UTC
CC List: 0 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Dan Jacobson 2006-12-20 17:23:05 UTC
On a small wiki, analyzing the web logs shows that 90% of the traffic
is search engines indexing the most useless places: Allmessages,
old revisions, you name it. Every nook and cranny they've crawled
into.

Therefore please increase the use of nofollow links, or whatever works. Indeed,
perhaps just exclude everything except current pages and categories, and keep
crawlers out of the Special: and MediaWiki: namespaces too.

I see there is $wgNamespaceRobotPolicies etc., but the sysops of small wikis
would not dare to tinker with these, so there should be better defaults.
Comment 1 Rob Church 2006-12-23 23:33:02 UTC
Read includes/DefaultSettings.php:

/**
 * Robot policies for namespaces
 * e.g. $wgNamespaceRobotPolicies = array( NS_TALK => 'noindex' );
 */
$wgNamespaceRobotPolicies = array();

Doesn't seem all that hard to me.
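
For illustration, here is roughly what overriding that default from
LocalSettings.php might look like on a small wiki. This is only a sketch; the
namespaces and policy strings chosen below are assumptions, not anything
proposed in this bug.

# Hypothetical LocalSettings.php excerpt for a small wiki: ask crawlers
# not to index (or follow links from) a few non-content namespaces.
# The namespace selection here is illustrative only.
$wgNamespaceRobotPolicies = array(
	NS_TALK      => 'noindex,nofollow',
	NS_USER_TALK => 'noindex,nofollow',
	NS_MEDIAWIKI => 'noindex,nofollow'
);

The reporter's request is essentially that something along these lines become
the default rather than an opt-in tweak.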
Comment 2 Darkoneko 2006-12-23 23:36:38 UTC
Hmmm, do we have something like a SPECIAL => 'noindex' too?
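
Special pages are generated rather than stored under a content namespace, so
it is not obvious from this bug whether the namespace setting above reaches
them at all. A common workaround that is independent of MediaWiki
configuration is a robots.txt rule on the web server; the sketch below assumes
the conventional /w/ script path and /wiki/ article path, which this bug does
not specify.

# Hypothetical robots.txt for a wiki serving articles at /wiki/ and
# index.php (edits, history, old revisions) under /w/.
User-agent: *
Disallow: /w/
Disallow: /wiki/Special: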
