lkcl and redi have commented on the ongoing trust metric attack on mod_virgule sites, noting the effects on Advogato. The same thing is happening to other mod_virgule sites, including robots.net and ghostscript. I emailed Raph a warning about this activity in May, when I first noticed automated programs creating large numbers of identical accounts on the three sites. I don’t want to link to any examples directly, but try googling “dltxprt” or manually typing in the user URL to see an example user on all three of the mentioned sites. I’ve been tracking IPs and account names on robots.net so I can kill them all off if needed, but so far the trust metric has resisted the attack effectively.
The spammer is using the notes field of each account for search engine link spamming but otherwise isn’t causing much immediate harm beyond resource abuse. I have working code to delete mod_virgule accounts, but I’m still pondering how best to use it to remove the evildoers in this case.
The blog spam seems limited to Advogato for some reason. If it starts on robots.net, I think my solution will be to remove the A tag from the list of tags that can be used by observers. I don’t want to remove the ability of observers to post blog entries, as lkcl suggested, because that’s the only way we find out enough about some new users to decide whether they should receive a higher trust ranking.
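For the curious, the fix could be as simple as a tag whitelist applied to observer posts. This is just a hypothetical Python sketch, not mod_virgule’s actual C implementation, and the allowed-tag list here is made up; the point is that dropping the A tag kills the spam links while leaving observers able to write normal diary entries.

```python
import re

# Hypothetical whitelist of tags observers may use; note "a" is absent.
ALLOWED_OBSERVER_TAGS = {"p", "b", "i", "tt", "blockquote", "ul", "ol", "li"}

def strip_disallowed_tags(html: str) -> str:
    """Remove any tag not on the whitelist, keeping the text between tags."""
    def keep_or_drop(match):
        tag = match.group(1).lower()
        return match.group(0) if tag in ALLOWED_OBSERVER_TAGS else ""
    # Matches opening and closing tags like <a href="...">, </a>, <p>
    return re.sub(r"</?\s*([a-zA-Z][a-zA-Z0-9]*)[^>]*>", keep_or_drop, html)
```

So a post like `<p>hi <a href="http://spam.example">x</a></p>` would come out as `<p>hi x</p>`: the link markup is gone but the entry text survives.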
One interesting thing to note is that almost all of the spammer’s accounts certify each other, creating what Google refers to as a “bad neighborhood” in webpage trust rank terminology. If you have a legitimate webpage and link into a “bad neighborhood”, it can adversely affect your own page’s rank. It might be wise to implement something similar in mod_virgule: if a legitimate, trusted user certifies an untrusted user in a “bad neighborhood”, maybe it should decrement the trust of the legitimate user rather than increase the trust of the bogus user. Just a thought.
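To make the idea concrete, here is a toy Python sketch of that penalty. It is not Advogato’s actual network-flow trust metric, and every name and threshold in it is an assumption: it flags a cluster of accounts whose certifications are overwhelmingly reciprocal, then docks (rather than boosts) any outside user who certifies into the cluster.

```python
def mutual_cert_clusters(certs, min_size=3):
    """Flag accounts whose outgoing certs are overwhelmingly reciprocal.

    certs maps each account name to the set of accounts it has certified.
    The 0.9 reciprocity ratio and min_size are arbitrary illustrative values.
    """
    suspects = set()
    for user, outgoing in certs.items():
        if not outgoing:
            continue
        reciprocal = {u for u in outgoing if user in certs.get(u, set())}
        if len(reciprocal) / len(outgoing) > 0.9:
            suspects.add(user)
    # Only call it a "neighborhood" if the cluster is big enough
    return suspects if len(suspects) >= min_size else set()

def apply_penalty(trust, certs, bad, penalty=1):
    """Decrement the trust of anyone certifying into the bad neighborhood."""
    for user, outgoing in certs.items():
        if user not in bad and outgoing & bad:
            trust[user] = max(0, trust[user] - penalty)
    return trust
```

With a spam ring a/b/c all certifying each other and a legitimate user d who certifies a, the cluster detector flags a, b, and c, and d’s trust goes down by one instead of a’s going up. Whether that’s too harsh on well-meaning users who certify a spammer by mistake is exactly the open question.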