New York Times paywall: round-up of the analysis

Nieman Journalism Lab’s Mark Coddington has a great round-up of the decision by the New York Times to introduce a paywall:

There were a couple pieces written supporting the Times’ proposal: Former CBS digital head Larry Kramer said he’d be more likely to pay for the Times than for the tablet publication The Daily, even though it’s far more expensive. The reason? The Times’ content has consistently proven to be valuable over the years. (Tech blogger John Gruber also said the Times’ content is much more valuable than The Daily’s, but wondered if it was really worth more than five times more money.) Nate Silver of Times blog FiveThirtyEight used some data to argue for the Times’ value.

The Times’ own David Carr offered the most full-throated defense of the pay plan, arguing that most of the objection to it is based on the “theology” of open networks and the free flow of information, rather than the practical concerns involved with running a news organization. Reuters’ Felix Salmon countered that the Times has its own theology — that news orgs should charge for content because they can, and that it will ensure their success. Later, though, Salmon ran a few numbers and posited that the paywall could be a success if everything breaks right.

There were more objections voiced, too: Mathew Ingram of GigaOM and former newspaper journalist Janet Coats both called it backward-looking, with Ingram saying it “seems fundamentally reactionary, and displays a disappointing lack of imagination.” TechDirt’s Mike Masnick ripped the idea that people might have felt guilty about getting the Times for free online.

One of the biggest complaints revolved around the Times’ pricing system itself, which French media analyst Frederic Filloux described as “expensive, utterly complicated, disconnected from the reality and designed to be bypassed.” Others, including Ken Doctor, venture capitalist Jean-Louis Gassee, and John Gruber, made similar points about the proposal’s complexity, and Michael DeGusta said the prices are just too high. Poynter’s Damon Kiesow disagreed about the plan structure, arguing that it’s well-designed as an attack on Apple’s mobile paid-content dominance.


Where to next for the Google Book Settlement?

This week a US judge ruled against the Google Book Settlement, the latest development in a seven-year legal saga that I’ve covered in some depth here.

Jerry Brito has a good explainer of the background of the case:

In mid-2005, the Author’s Guild and the American Association of Publishers filed suit to stop Google from scanning any more books. Soon the Author’s Guild’s case was certified as a class-action lawsuit, meaning that anyone who had ever published a book—millions of authors—would be part of the class represented and would be bound by the result of the case.

An Unsettling Settlement

Three years later, after extensive negotiations, the parties announced they had reached a settlement. Google would pay $125 million up front and would then be allowed to continue scanning books and making them available online. More importantly, Google would be allowed to offer not just snippets, but to sell the entire text of books as well. The copyright holder would get about 2/3 of the revenues and Google would keep 1/3.

On its surface, the proposed settlement was a boon for all involved. Google would get to continue digitizing books, authors and publishers would get a cut of the profits, and consumers would get universal access to almost all of the world’s books. But reading between the lines, the settlement proved to be problematic.

Because it was a settlement to a class-action lawsuit, it meant that all authors who had ever published a book were bound. Google could scan any book without first asking for permission. If an author didn’t want his book to be scanned or included in Google’s database, he had to contact Google and opt-out. This would have turned copyright on its head.

As a result, many authors protested. The Author’s Guild and the publisher’s association had negotiated on behalf of millions of authors, and many felt the deal didn’t represent their wishes. Almost 7,000 authors wrote to the court asking to be removed from the lawsuit’s plaintiff class.

Saving the Orphans

Another contentious aspect of the settlement was how it treated “orphan works,” books the authors of which are unknown or can’t be found. It’s a well-known problem in copyright that members of Congress have tried to fix several times.

The problem is that if a company like Google wants to digitize a copyrighted book, and it can’t find its author to ask for permission, then its choices are 1) scan the book anyway and face heavy penalties if the author surfaces later and sues, or 2) leave the book undigitized and out of a universal library. As a result, hundreds of thousands of books are in a kind of limbo, not accessible to readers even if the author may well have been fine with digitization.

The Google Books settlement presented a solution to the problem. Because it bound all authors—known and unknown—Google could proceed to scan orphan works without having to worry. If an author later surfaced who didn’t want his book used, he could no longer sue Google. He could opt out of the program and claim a check for the revenues associated with his book, but no more.

Some welcomed this solution to the problem, but others, including the Department of Justice, pointed out to the court that it would give Google a monopoly over orphan works. Because the settlement would only apply to Google, if another party like Amazon or the Internet Archive wanted to create its own digital library that included orphan works, it would not get the same protection.

And it wouldn’t be easy for others to get the same deal. Short of Congressional action, the only way a company like Amazon could get similar treatment would be to settle a class action suit of its own—a very difficult and time-consuming set of events to replicate. Additionally, because the authors and publishers who negotiated the Google deal are getting a cut of revenue, some have suggested that it would be in their interest to make sure Google remained a monopoly and would therefore not settle as easily with other parties.

What’s Next

Because class-action lawsuits can be as controversial as this one, the law requires that a court approve a settlement before it becomes binding. The court accepted over 500 briefs from various parties supporting or opposing the settlement and early last year held a hearing on its fairness. It rejected the settlement yesterday.

The options available now to Google and the authors and publishers are:

  1. Continue litigating the original lawsuit, which is an unlikely scenario.
  2. Amend the settlement to make it opt-in, meaning that authors would have to give permission before their books are scanned.
  3. Appeal the judge’s decision to a higher court.

Judge Chin seemed to invite a new settlement, saying in his opinion that “Many of the concerns raised in the objections would be ameliorated if the [settlement] were converted from an ‘opt-out’ settlement to an ‘opt-in’ settlement.”

In the New York Times, Robert Darnton, himself a librarian and a strident if highly informed critic of the deal, weighed in with this opinion piece:

This decision is a victory for the public good, preventing one company from monopolizing access to our common cultural heritage.

Nonetheless, we should not abandon Google’s dream of making all the books in the world available to everyone. Instead, we should build a digital public library, which would provide these digital copies free of charge to readers. Yes, many problems — legal, financial, technological, political — stand in the way. All can be solved.

The Chronicle of Higher Education carries a good interview with Pamela Samuelson:

It’s the only ruling really that the judge, I think, could have made. The settlement was so complex, and it was so far-reaching. With the Department of Justice and the governments of France and Germany stridently opposed to the settlement, it seems to me that the judge really didn’t have all that much choice. So the ultimate ruling, that the settlement is not fair, reasonable, and adequate to the class, is one that I think was inevitable.

The thing that surprised me about the opinion was that he took seriously the issues about whether the Authors Guild and some of its members had adequately represented the interests of all authors, including academic authors and foreign authors. That was very gratifying because I spent a lot of time crafting letters to the judge saying that academic authors did have different interests. Academic authors, on average, would prefer open access. Whereas the guild and its members, understandably, want to do profit maximization.

The EFF’s Corynne McSherry has this analysis:

On the policy front, the court recognized – as do we – the extraordinary potential benefits of the settlement for readers, authors and publishers. We firmly believe that the world’s books should be digitized so that the knowledge held within them can be made available to people around the world. But the court also recognized that the settlement could come at the price of undermining competition in the marketplace for digital books, giving Google a de facto monopoly over orphan books (meaning, works whose owner cannot be located). The court concluded that solving the orphan works problem is properly a matter for Congress, not private commercial parties. Sadly, Congress has thus far lacked the will to do so. Perhaps yesterday’s decision will finally spur Congress to revisit this important issue and pass comprehensive orphan works legislation that allows for mass book digitization.

That said, the court also got some things fundamentally wrong in its copyright analysis. For example, it states that “a copyright owner’s right to exclude others from using his property is fundamental and beyond dispute” and then proceeds to quote at length from the letters of numerous authors (and their descendants) who share the misguided notion that a copyright is, by definition, an exclusive right to determine how a work can be used. We respectfully disagree. Copyright law grants to authors significant powers to manage exploitation of creative works as a function of spurring the creation of more works, not as a natural or moral right. And those powers are subject to numerous important exceptions and limitations, such as the first sale and fair use doctrines. Those limits are an essential part of the copyright bargain, which seeks to encourage the growth and endurance of a vibrant culture by both rewarding authors for their creative investments and ensuring that others will have the opportunity to build on those creative achievements. Thus, as the Supreme Court has explained, such limits are “neither unfair nor unfortunate” but rather “the means by which copyright advances the progress of science and art.” If the legal issues raised in the underlying lawsuit are ever litigated on the merits, let’s hope this or any future judge keeps the traditional American copyright bargain firmly in mind.

Michael Liedtke of the Associated Press thinks this is a microcosm of the larger antitrust and monopoly challenges facing Google:

This week’s ruling from U.S. Circuit Judge Denny Chin did more than complicate Google’s efforts to make digital copies of the world’s 130 million books and possibly sell them through an online book store that it opened last year. It also touched upon antitrust, copyright and privacy issues that are threatening to handcuff Google as it tries to build upon its dominance in Internet search to muscle into new markets.

“This opinion reads like a microcosm of all the big problems facing Google,” said Gary Reback, a Silicon Valley lawyer who represented a group led by Google rivals Microsoft Corp. and Amazon.com Inc. to oppose the digital book settlement.

Google can only hope that some of the points that Chin raised don’t become recurring themes as the company navigates legal hurdles in the months ahead.

The company is still trying to persuade the U.S. Justice Department to approve a $700 million purchase of airline fare tracker ITA Software nearly nine months after it was announced. Regulators are focusing their inquiry on whether ITA would give Google the technological leverage to create an unfair advantage over other online travel services. Google argues it will be able to provide more bargains and convenience for travellers if it’s cleared to own ITA’s technology.

In Europe and the state of Texas, antitrust regulators are looking into complaints about Google abusing its dominance of Internet search to unfairly promote its own services and drive up its advertising prices.

And Google is still trying to fend off an appeal in another high-profile copyright case, one stemming from its 2006 acquisition of YouTube, the Internet’s leading video site. Viacom Inc. is seeking more than $1 billion in damages after charging YouTube with misusing clips from Comedy Central, MTV and other Viacom channels. A federal judge sided with Google, saying YouTube had done enough to comply with digital copyright laws in its early days.

One of my favourite commentators on Google is of course the one-and-only Siva Vaidhyanathan, who is quoted in this excellent Inside Higher Ed piece:

Siva Vaidhyanathan, a media studies professor at the University of Virginia and a notable Google gadfly, said the company overplayed its hand by essentially trying to rewrite the rules governing the copying and distribution of book content through a class-action settlement. “Google clearly flew too close to the sun on this one,” he wrote in an e-mail. “…This is not what class-action suits and settlements are supposed to do.”

Vaidhyanathan said that Google now faces the choice of either continuing to fight for its interpretation of copyright law in the courts or scaling back its plans for a digital bookstore. “If Google decides to take the modest way out, it can still ask Congress to make the needed changes to copyright law that would let Google and other companies and libraries compete to provide the best information to the most people,” the media scholar says. “Congress should have been the place to start this in the first place.”


Jenna Newman on the Google book settlement

The first 2011 issue of the journal Scholarly and Research Communication carries a masterful exploration by Jenna Newman of the cultural and legal issues surrounding the Google book settlement. At 75 pages, this monograph-length essay is probably the most comprehensive, and certainly the most current, examination of the issues underlying this giant experiment in digital publishing.

It’s not really possible to sum up the entire essay, so I’ll just cut to the chase and quote from her conclusion, which firstly establishes in extraordinary detail just how good the deal is for Google:

If the settlement is approved, Google can congratulate itself on a particularly excellent deal. It avoids years of uncertainty, not to mention ongoing legal fees, in litigation. It avoids prohibitive transaction costs by not having to clear rights individually for the works it has scanned already and all the works covered by the settlement and yet unscanned. It will receive a blanket licence to use a broad swath of copyrighted works, and it will enjoy an exclusive position, both as a market leader and with legal peace of mind, in the realm of digital rights: its private licence goes much further than current copyright legislation, particularly with respect to orphan works, for which rights are currently unobtainable in any market. Low transaction costs and legal certainty are key requirements for any mass digitization or digital archiving project (McCausland, 2009). The settlement offers both, to Google and Google alone. It will be years ahead of any potential competitors digitizing print works and may easily end up with an effective monopoly and a leading stake in the emerging markets for digital books. And all this costs Google only U.S.$125 million—a mere 0.53% of its gross revenue, or 1.92% of its net income, for 2009 alone (Google Inc., 2010b)
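As a quick back-of-envelope check on those closing percentages (this is my own arithmetic, assuming Google’s commonly reported 2009 results of roughly US$23.65 billion in gross revenue and US$6.52 billion in net income, the figures Newman cites as Google Inc., 2010b):

    \[
      \frac{\$125\ \text{million}}{\$23{,}650\ \text{million}} \approx 0.53\%
      \qquad
      \frac{\$125\ \text{million}}{\$6{,}520\ \text{million}} \approx 1.92\%
    \]

Both ratios line up with Newman’s figures, underlining her point that the settlement price is trivial at Google’s scale.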

Newman suggests that the deal is far more equivocal for publishers and authors, but that given the other options on the table (including the risk of a music-industry-style failure to establish a viable digital publishing platform until after piracy has eroded much of the value of the market), it may represent the “best deal available.”

But the real implications are for copyright law and communications policy:

The settlement may serve publishers’ and authors’ individual or immediate interests even as it erodes their collective and long-term ones. The public, too, has a significant vested interest in the subjects of the settlement—the books themselves, repositories to centuries of knowledge and creativity—as well as the legal and cultural environment the settlement endorses. A detailed account of the settlement’s economic and cultural costs and benefits is instructive, but more importantly the settlement highlights the structural and technological deficiencies of existing copyright law. Long copyright terms and the presumption of total rights protection have created a copyright regime that privileges the potential for commercial exploitation regardless of whether that exploitation is feasible or even desired by the creators themselves. This regime is also particularly ill equipped to recognize digital possibilities. Whatever happens to this settlement, such tensions continue to strain copyright’s rules.

A number of conditions on approval could address criticisms of the settlement, but perhaps the best way to ensure Google, publishers, and authors are all treated fairly is to pursue copyright reform, not private contracts, to address the legislative problems that the settlement tries to engage. Legislative changes with respect to intellectual property rights have been slow to reflect everyday technological realities. The existence of the settlement, and much of its reception, demonstrates that private interests and public appetites are eager to move beyond the limits of the current regulations. Copyright reform will be fraught with challenges of its own, but the existing legal framework—in Canada as in the U.S.—is increasingly inadequate for accommodating common and emerging practices and capabilities: copyright law has swung out of balance. The settlement may serve as an early test bed for certain possibilities, including digital distribution and access, and the imposition of limited formalities on rights-holders. However, as a private contract, it is an insufficient guide for legislative development. The trouble with copyright does not affect Google alone. The public interest demands more broadly applicable solutions, and these will be achieved—eventually, and possibly with great difficulty—through copyright legislation. We may get copyright reform wrong, as arguably we have done in the past, but that fear should be allayed if we also recall that we have the power to revise our legislative interventions until we get them right.


The Times paywall: what do the numbers tell us?

The preliminary numbers on The Times paywall are in … and no-one quite knows what to make of them.

Paid Content argues that while web readership has fallen off a cliff (as expected), the modest number of ongoing subscribers offers some hope for the future.

Roy Greenslade says it’s early days, but the numbers probably don’t add up:

I am told that iPad numbers are “jumping around” all the time.

But there has been no attempt to counter my source’s view that there has been a measure of disappointment about online-only take-up.

Many people who tried out access in the early weeks have not returned. However, it is also true to say that some daily subscribers have been impressed enough to sign up on a weekly basis.

And it is also the case that the Sunday Times’s iPad app has yet to launch. It is hoped that this will boost figures considerably, though I have my reservations about that.

I think, once we delve further into these figures, they will support the view that News Int’s paywall experiment has, as expected, not created a sufficiently lucrative business model.

Clay Shirky argues the paywall means a retreat from broad-based newspaper-style publishing to narrowcast newsletter publishing:

One way to think of this transition is that online, the Times has stopped being a newspaper, in the sense of a generally available and omnibus account of the news of the day, broadly read in the community. Instead, it is becoming a newsletter, an outlet supported by, and speaking to, a specific and relatively coherent and compact audience. (In this case, the Times is becoming the online newsletter of the Tories, the UK’s conservative political party, read much less widely than its paper counterpart.)

Murdoch and News Corp, committed as they have been to extracting revenues from the paywall, still cannot execute in a way that does not change the nature of the organizations behind the wall. Rather than simply shifting relative subsidy from advertisers to users for an existing product, they are instead re-engineering the Times around the newsletter model, because the paywall creates newsletter economics.

As of July, non-subscribers can no longer read Times stories forwarded by colleagues or friends, nor can they read stories linked to from Facebook or Twitter. As a result, links to Times stories now rarely circulate in those media. If you are going to produce news that can’t be shared outside a particular community, you will want to recruit and retain a community that doesn’t care whether any given piece of news spreads, which means tightly interconnected readerships become the ideal ones. However, tight interconnectedness correlates inversely with audience size, making for a stark choice, rather than offering a way of preserving the status quo.

This re-engineering suggests that paywalls don’t and can’t rescue current organizational forms. They offer instead yet another transformed alternative to it. Even if paywall economics can eventually be made to work with a dramatically reduced audience, this particular referendum on the future (read: the present) of newspapers is likely to mean the end of the belief that there is any non-disruptive way to remain a going concern.


Vadim Lavrusik on the future of social media in journalism

1stVideo is a video editing app for the iPhone. Analysts such as Alfred Hermida think it may become an important platform for mobile video reporting.

At Mashable, Vadim Lavrusik has a thoughtful and informative piece on future trends in social media and journalism. He runs through trends many of you will already know about, but he also offers some interesting insights of his own and collects a bunch of very worthwhile links. Lavrusik begins by pointing out that:

The future journalist will be more embedded with the community than ever, and news outlets will build their newsrooms to focus on utilizing the community and enabling its members to be enrolled as correspondents. Bloggers will no longer be just bloggers, but be relied upon as more credible sources.

Other trends Lavrusik notes include collaborative reporting (with links to Jay Rosen and David Clinch), journalists as community managers, reporting on social media as a recognised beat, online curation for a time-poor audience, the growth of roles such as social media editors, the rise of branded aggregations of blogs, and the importance of mobile technologies like Twitter for reporting.

Recommended.

Rebutting Christopher Madden: part 1

Recently I had a piece published in Overland magazine calling for radical reform, perhaps even abolition, of the Australia Council for the Arts. This week, the Overland website carries a response by cultural policy analyst Christopher Madden.

I think Madden’s rebuttal is misguided in several important respects, and so today I’m going to unpick his piece item by item … but before I do that, I think it’s worth saying that we agree on many things. More than that, I welcome this debate – it’s exactly what I hoped to provoke with the piece. Madden’s response to my article is robust, informed, detailed and well-intentioned. It’s also, I think, quite wrong.

The heritage wars heat up

The Adelaide Symphony Orchestra plays Mahler in 2007. Source: Victoria Anderson

Well well, who would have thought an essay about cultural policy would generate so much heat?

A book chapter for the Centre for Policy Development by Marcus Westbury and myself has started to gain some serious attention in the high arts in Australia. I’ve already covered Richard Mills’ reaction to it here. But today in The Australian, there is a long article from Rosemary Sorensen about the debate, which includes the first formal response from Australia Council CEO Kathy Keele:

Australia Council chief executive Kathy Keele agrees that the dichotomy is unhelpful. “It’s not either-or,” she says, “it’s about doing it all.”

While she welcomes the debate, Keele does say that Westbury, who has created festival events under the Australia Council’s banner and also helped write one of its arts guides, has not done his homework well enough.

“He’s talking about an Australia Council that does not exist,” she says. “This whole conversation about heritage is not relevant: it’s really that we need more funding for the arts across the board.”

Keele laughs off the idea that an orchestra playing Bach or a theatre company performing Shakespeare is somehow out of date because the composer or the playwright is no longer with us.

“The people going to see it are not dead,” she says.

“Those performances are still about today.”

Of course, regular readers of this blog will know that this is in turn a misrepresentation of Marcus’ arguments, and I humbly suggest Keele should have a careful read of our essay. In it, we don’t actually say that Shakespeare or Bach are “out of date” because they are “no longer with us”, but we do point out the Australia Council’s overwhelming funding bias towards a small number of cultural organisations and a narrow range of cultural expressions.

As we point out in the essay, while there is substantial funding for organisations to perform works by Bach or Shakespeare (including funding for an entire company devoted to Shakespeare), only 2% of Australia Council music funding goes to contemporary music, only 5% of the arts funding in this country is devoted to living artists making new work, and the Australia Council gives five times as much funding to one opera company as it does to its entire Aboriginal and Torres Strait Islander Arts Board.

The debate is set to continue, so stay tuned.

Why we need to reform the Australia Council

Protests over a government decision to close The Tote showed cultural policy matters. Photo: The Age / James Boddington

Marcus Westbury has an article in The Age today in which he asks whether the Australia Council has had its day.

We need a real debate about whether the well-intentioned but increasingly archaic central role of the Australia Council has had its day. Formed in the 1970s by the Whitlam government, the “OzCo” introduced meaningful support for artists and organisations across theatre, dance, visual arts and literature for the first time. But times have moved on – or forward, as some slogans might prefer. The Australia Council’s structure and artistic focus are still hard-wired in an act written for it almost four decades ago. It defines both what culture is and how it should be administered in ways that are hopelessly out of date. As a result the Australia Council is increasingly irrelevant. It has had little meaningful engagement with the digital cultural revolution.

From today, Marcus and I are going to be campaigning for Australia Council reform. We’re calling for real and much-needed reform to the way the Australia Council operates, to its funding responsibilities and, more generally, to the entire cultural policy paradigm in this country. Specifically, we argue Australia needs a new cultural agency that will fund the new and contemporary cultural expressions the Australia Council won’t.

We’ve authored a book chapter for an upcoming Centre for Policy Development book on the issue, which is up on the CPD website in full here.

Let the debate begin!

John Naughton on the internet

Writer and academic John Naughton. Source: Memex 1.1

The Observer has a great feature article by John Naughton on the internet. It shows Naughton’s typically lateral, playful prose style, and also mentions the work of Neil Postman, a writer whom Naughton and I both appear to admire:

Many years ago, the cultural critic Neil Postman, one of the 20th century’s most perceptive critics of technology, predicted that the insights of two writers would, like a pair of bookends, bracket our future. Aldous Huxley believed that we would be destroyed by the things we love, while George Orwell thought we would be destroyed by the things we fear.

Postman was writing before the internet became such a force in our societies, but I believe he got it right. On the one (Huxleyan) hand, the net has been a profoundly liberating influence in our lives – creating endless opportunities for information, entertainment, pleasure, delight, communication, and apparently effortless consumption, to the point where it has acquired quasi-addictive power, especially over younger generations. One can calibrate the extent of the impact by the growing levels of concern among teachers, governments and politicians. “Is Google making us stupid?” was the title of one of the most cited articles in Atlantic magazine in 2008. It was written by Nicholas Carr, a prominent blogger and author, and raised the question of whether permanent access to networked information (not just Google) is turning us into restless, shallow thinkers with shorter attention spans. (According to Nielsen, a market research firm, the average time spent viewing a web page is 56 seconds.) Other critics are worried that incessant internet use is actually rewiring our brains.

On the other (Orwellian) hand, the internet is the nearest thing to a perfect surveillance machine the world has ever seen. Everything you do on the net is logged – every email you send, every website you visit, every file you download, every search you conduct is recorded and filed somewhere, either on the servers of your internet service provider or of the cloud services that you access. As a tool for a totalitarian government interested in the behaviour, social activities and thought-process of its subjects, the internet is just about perfect.

There’s plenty more to stimulate your thinking in this stylish piece. You can check out Naughton’s blog here.

Stephen Burt in the LRB on Facebook, MySpace and the era of social networking

This long and rewarding essay is currently up at the London Review of Books. Burt notes:

Facebook is big, and it seems to be everywhere. Founded in early 2004, restricted first to Harvard, then to students at American universities, and now open to anyone, the site claims 500 million users, having passed MySpace to become the largest social networking site in the world. Social networks – Facebook, MySpace, Orkut, Bebo (big in the UK) and QQ (big in China) – let users build a page about themselves, containing everything from romantic status (‘married’, ‘single’, ‘it’s complicated’) to video clips; each user’s page is linked to the pages of ‘friends’. Social networks aren’t the entire internet (no more than porn, or Google), though (like porn, like Google) they are a big slice: 80 per cent of Britons with internet connections use them. And (like porn, like Google) they are a synecdoche for the internet generally: social networking sites put us in touch with strangers who share our odd interests; reduce the effects of geographic distance; promote bitesize units of image and text; spread up to the minute news; suck away hours; and change how we see ourselves as social beings.

The full essay is here.