In a move perhaps inspired by Google Maps' adoption of Wikipedia content and Google's overall preferential ranking of Wikipedia, Facebook has been testing out article pages that are highly similar to Wikipedia's. In fact, in a great many cases I've seen thus far, Facebook's article pages have actually sucked in the opening content of the corresponding Wikipedia articles:
From my perspective, this breaks one of the core benefits of hypertext that made the internet great: linking to source content.
Of course, there are a whole lot of sites out there which adopt the content of other sites wholesale and redisplay it on their own sites for the purpose of capturing keyword traffic. We call those sites “scrapers”.
Actually, I'm being mildly facetious; I'm fully aware that Wikipedia's content is published under the Creative Commons Attribution-ShareAlike license, which fully allows the articles to be shared, altered, and rehosted.
But what I'm getting at is whether this is efficient. It's an effort to capture and hold Facebook users within a walled garden, simply to gain further traffic and market share for Facebook. I may not be thrilled by Google's seemingly preferential treatment of Wikipedia pages, but at least those pages have become a common source of information on a great many topics. If Facebook's effort continues, I think the articles will increasingly fall out of sync as Facebook users contribute content within their silo while Wikipedia's articles continue evolving separately.
A great many of these pages have been seeded into Facebook for locations in particular, such as for Boulder, Colorado, as I showed in the pic above. Could this be part of Facebook's drive to become an authoritative source for local information, too? It's hard for me to see the struggle between Facebook, Google, and Twitter to take over locations as all that good of a thing: these companies do not know everything about local areas, and they're displacing genuinely local information sources such as chambers of commerce, city government websites, newspapers, and other local sites.
Seeing Facebook's initial implementation of this has made me reassess the motive: I don't think they were necessarily seeking to nab Wikipedia's traffic so much as Google's.
Facebook's strategy is also reminiscent of Google's Knol project, which has yet to become a real contender for Wikipedia's traffic. However, Facebook's user base may be inclined to stay within the walls of its garden, and it may not prove very difficult to shift users' behavior from searching Google for information to using Facebook's own search box.
Tags: Facebook, local information, Wikipedia
I wonder whether hiding WP content behind FB’s walled garden is really compatible with the “Share Alike” obligation of the applicable CC license?
One thing that makes many of us uncomfortable is that many aspects of CC and GFDL licensing really have not been fully tested. For instance, if one takes and modifies Wikipedia content, that modified content is also supposed to be licensed under CC/GFDL. So, as Facebook takes a portion of a Wikipedia article and places it on its page, how much of that page then becomes free and open for anyone to take and use? Facebook continues to display its own copyright notice at the bottom of these hybrid pages, but doesn't its adoption of the CC/GFDL content negate that claim of copyright? I don't believe a court has yet set precedent on this question.
It reminds me of the recent kerfuffle between WordPress and Thesis over whether Thesis could realistically defend copyright of its work built on top of the GPL-licensed WordPress platform…