This debate has rumbled on for years, but the Guardian’s Mark Sweney reported only last week that MPs are asking web companies to do more to vet content on their sites. It’s not new – remember when ISPs were sued for failing to take down libellous websites quickly enough?
The problem? Well, when you’re YouTube and you receive millions of submissions and updates each day, who checks what, when and how? But things could get tricky if sites don’t get proactive and self-regulate, or sign up to an informal code of practice.
Can technology help filter out user-generated content? It varies from CMS to CMS, and I’d bet that some post-moderated sites hunt for abusive language via the front-end search box. But even if it’s true that some of the big UGC sites have search technology that uses an algorithm to track down copyrighted music or TV content, how difficult would it be to get these sites to share it? Video search technology is big business, and anything that can dynamically identify video patterns, human actions or faces is going to be worth zillions, not least to the authorities and security agencies. Imagine the potential of a video search tool that could recognise and flag up drunken fights or car thieves on a city’s 2,000+ CCTV cameras, effectively doing away with the laborious effort of a human trying to watch them all at once. An extreme example, but you get my point.
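For the simpler text side of this, a post-moderation keyword filter is almost trivial to sketch. This is purely illustrative – the blocklist, function name and word-matching approach here are my own assumptions, not how any real UGC site does it, and a production system would need a maintained list plus context-aware classification:

```python
import re

# Hypothetical blocklist -- a real moderation system would use a far
# larger, curated list and more sophisticated matching.
BLOCKLIST = {"scam", "counterfeit", "pirated"}

def flag_for_review(comment: str) -> bool:
    """Return True if a post-moderated comment should be queued for
    human review, based on a simple word-boundary blocklist match."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKLIST for word in words)

print(flag_for_review("Selling pirated DVDs here"))  # flagged: True
print(flag_for_review("Great video, thanks!"))       # clean: False
```

Even something this crude shows why the hard part isn’t the technology but the volume: at millions of submissions a day, every false positive still lands on a human moderator’s desk.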
[Read more about the MPs comments at BrandRepublic]