New York Times paywall: round-up of the analysis

Nieman Journalism Lab’s Tom Coddington has a great round-up of the decision by the New York Times to introduce a paywall:

There were a couple of pieces written supporting the Times’ proposal: Former CBS digital head Larry Kramer said he’d be more likely to pay for the Times than for the tablet publication The Daily, even though it’s far more expensive. The reason? The Times’ content has consistently proven to be valuable over the years. (Tech blogger John Gruber also said the Times’ content is much more valuable than The Daily’s, but wondered if it was really worth more than five times the price.) Nate Silver of Times blog FiveThirtyEight used some data to argue for the Times’ value.

The Times’ own David Carr offered the most full-throated defense of the pay plan, arguing that most of the objection to it is based on the “theology” of open networks and the free flow of information, rather than the practical concerns involved with running a news organization. Reuters’ Felix Salmon countered that the Times has its own theology — that news orgs should charge for content because they can, and that it will ensure their success. Later, though, Salmon ran a few numbers and posited that the paywall could be a success if everything breaks right.

There were more objections voiced, too: Mathew Ingram of GigaOM and former newspaper journalist Janet Coats both called it backward-looking, with Ingram saying it “seems fundamentally reactionary, and displays a disappointing lack of imagination.” TechDirt’s Mike Masnick ripped the idea that people might have felt guilty about getting the Times for free online.

One of the biggest complaints revolved around the Times’ pricing system itself, which French media analyst Frederic Filloux described as “expensive, utterly complicated, disconnected from the reality and designed to be bypassed.” Others, including Ken Doctor, venture capitalist Jean-Louis Gassee, and John Gruber, made similar points about the proposal’s complexity, and Michael DeGusta said the prices are just too high. Poynter’s Damon Kiesow disagreed about the plan structure, arguing that it’s well-designed as an attack on Apple’s mobile paid-content dominance.

Where to next for the Google Book Settlement?

This week a US judge ruled against the Google Book Settlement, the latest in a seven-year legal saga that I’ve covered in some depth here.

Jerry Brito has a good explainer of the background of the case:

In mid-2005, the Author’s Guild and the American Association of Publishers filed suit to stop Google from scanning any more books. Soon the Author’s Guild’s case was certified as a class-action lawsuit, meaning that anyone who had ever published a book—millions of authors—would be part of the class represented and would be bound by the result of the case.

An Unsettling Settlement

Three years later, after extensive negotiations, the parties announced they had reached a settlement. Google would pay $125 million up front and would then be allowed to continue scanning books and making them available online. More importantly, Google would be allowed not just to offer snippets, but to sell the entire text of books as well. The copyright holder would get about two-thirds of the revenues and Google would keep one-third.

On its surface, the proposed settlement was a boon for all involved. Google would get to continue digitizing books, authors and publishers would get a cut of the profits, and consumers would get universal access to almost all of the world’s books. But reading between the lines, the settlement proved to be problematic.

Because it was a settlement to a class-action lawsuit, it meant that all authors who had ever published a book were bound. Google could scan any book without first asking for permission. If an author didn’t want his book to be scanned or included in Google’s database, he had to contact Google and opt-out. This would have turned copyright on its head.

As a result, many authors protested. The Author’s Guild and the publishers’ association had negotiated on behalf of millions of authors, and many felt the deal didn’t represent their wishes. Almost 7,000 authors wrote to the court asking to be removed from the lawsuit’s plaintiff class.

Saving the Orphans

Another contentious aspect of the settlement was how it treated “orphan works,” books whose authors are unknown or can’t be found. It’s a well-known problem in copyright that members of Congress have tried to fix several times.

The problem is that if a company like Google wants to digitize a copyrighted book, and it can’t find its author to ask for permission, then its choices are 1) scan the book anyway and face heavy penalties if the author surfaces later and sues, or 2) leave the book undigitized and out of a universal library. As a result, hundreds of thousands of books are in a kind of limbo, not accessible to readers even if the author may well have been fine with digitization.

The Google Books settlement presented a solution to the problem. Because it bound all authors—known and unknown—Google could proceed to scan orphan works without having to worry. If an author later surfaced who didn’t want his book used, he could no longer sue Google. He could opt-out of the program and claim a check for the revenues associated with his book, but no more.

Some welcomed this solution to the problem, but others, including the Department of Justice, pointed out to the court that it would give Google a monopoly over orphan works. Because the settlement would only apply to Google, if another party like Amazon or the Internet Archive wanted to create its own digital library that included orphan works, it would not get the same protection.

And it wouldn’t be easy for others to get the same deal. Short of Congressional action, the only way a company like Amazon could get similar treatment would be to settle a class-action suit of its own—a very difficult and time-consuming set of events to replicate. Additionally, because the authors and publishers who negotiated the Google deal are getting a cut of revenue, some have suggested that it would be in their interest to make sure Google remained a monopoly and would therefore not settle as easily with other parties.

What’s Next

Because class-action lawsuits like this one can be controversial, the law requires that a court approve a settlement before it becomes binding. The court accepted over 500 briefs from various parties supporting or opposing the settlement and early last year held a hearing on the settlement’s fairness. It rejected the settlement yesterday.

The options available now to Google and the authors and publishers are:

  1. Continue litigating the original lawsuit, which is an unlikely scenario.
  2. Amend the settlement to make it opt-in, meaning that authors would have to give permission before their books are scanned.
  3. Appeal the judge’s decision to a higher court.

Judge Chin seemed to invite a new settlement, saying in his opinion that “Many of the concerns raised in the objections would be ameliorated if the [settlement] were converted from an ‘opt-out’ settlement to an ‘opt-in’ settlement.”

In the New York Times, Robert Darnton, himself a librarian and a strident if highly informed critic of the deal, weighed in with this opinion piece:

This decision is a victory for the public good, preventing one company from monopolizing access to our common cultural heritage.

Nonetheless, we should not abandon Google’s dream of making all the books in the world available to everyone. Instead, we should build a digital public library, which would provide these digital copies free of charge to readers. Yes, many problems — legal, financial, technological, political — stand in the way. All can be solved.

The Chronicle of Higher Education carries a good interview with Pamela Samuelson:

It’s the only ruling really that the judge, I think, could have made. The settlement was so complex, and it was so far-reaching. With the Department of Justice and the governments of France and Germany stridently opposed to the settlement, it seems to me that the judge really didn’t have all that much choice. So the ultimate ruling, that the settlement is not fair, reasonable, and adequate to the class, is one that I think was inevitable.

The thing that surprised me about the opinion was that he took seriously the issues about whether the Authors Guild and some of its members had adequately represented the interests of all authors, including academic authors and foreign authors. That was very gratifying because I spent a lot of time crafting letters to the judge saying that academic authors did have different interests. Academic authors, on average, would prefer open access. Whereas the guild and its members, understandably, want to do profit maximization.

The EFF’s Corynne McSherry has this analysis:

On the policy front, the court recognized – as do we – the extraordinary potential benefits of the settlement for readers, authors and publishers. We firmly believe that the world’s books should be digitized so that the knowledge held within them can be made available to people around the world. But the court also recognized that the settlement could come at the price of undermining competition in the marketplace for digital books, giving Google a de facto monopoly over orphan books (meaning, works whose owner cannot be located). The court concluded that solving the orphan works problem is properly a matter for Congress, not private commercial parties. Sadly, Congress has thus far lacked the will to do so. Perhaps yesterday’s decision will finally spur Congress to revisit this important issue and pass comprehensive orphan works legislation that allows for mass book digitization.

That said, the court also got some things fundamentally wrong in its copyright analysis. For example, it states that “a copyright owner’s right to exclude others from using his property is fundamental and beyond dispute” and then proceeds to quote at length from the letters of numerous authors (and their descendants) who share the misguided notion that a copyright is, by definition, an exclusive right to determine how a work can be used. We respectfully disagree. Copyright law grants to authors significant powers to manage exploitation of creative works as a function of spurring the creation of more works, not as a natural or moral right. And those powers are subject to numerous important exceptions and limitations, such as the first sale and fair use doctrines. Those limits are an essential part of the copyright bargain, which seeks to encourage the growth and endurance of a vibrant culture by both rewarding authors for their creative investments and ensuring that others will have the opportunity to build on those creative achievements. Thus, as the Supreme Court has explained, such limits are “neither unfair nor unfortunate” but rather “the means by which copyright advances the progress of science and art.” If the legal issues raised in the underlying lawsuit are ever litigated on the merits, let’s hope this or any future judge keeps the traditional American copyright bargain firmly in mind.

Michael Liedtke of the Associated Press thinks this is a microcosm of the larger antitrust and monopoly challenges facing Google:

This week’s ruling from U.S. Circuit Judge Denny Chin did more than complicate Google’s efforts to make digital copies of the world’s 130 million books and possibly sell them through an online book store that it opened last year. It also touched upon antitrust, copyright and privacy issues that are threatening to handcuff Google as it tries to build upon its dominance in Internet search to muscle into new markets.

“This opinion reads like a microcosm of all the big problems facing Google,” said Gary Reback, a Silicon Valley lawyer who represented a group led by Google rivals Microsoft Corp. and Amazon.com Inc. to oppose the digital book settlement.

Google can only hope that some of the points that Chin raised don’t become recurring themes as the company navigates legal hurdles in the months ahead.

The company is still trying to persuade the U.S. Justice Department to approve a $700 million purchase of airline fare tracker ITA Software nearly nine months after it was announced. Regulators are focusing their inquiry on whether ITA would give Google the technological leverage to create an unfair advantage over other online travel services. Google argues it will be able to provide more bargains and convenience for travellers if it’s cleared to own ITA’s technology.

In Europe and the state of Texas, antitrust regulators are looking into complaints about Google abusing its dominance of Internet search to unfairly promote its own services and drive up its advertising prices.

And Google is still trying to fend off an appeal in another high-profile copyright case, one stemming from its 2006 acquisition of YouTube, the Internet’s leading video site. Viacom Inc. is seeking more than $1 billion in damages after charging YouTube with misusing clips from Comedy Central, MTV and other Viacom channels. A federal judge sided with Google, saying YouTube had done enough to comply with digital copyright laws in its early days.

One of my favourite commentators on Google is of course the one-and-only Siva Vaidhyanathan, who is quoted in this excellent Inside Higher Ed piece:

Siva Vaidhyanathan, a media studies professor at the University of Virginia and a notable Google gadfly, said the company overplayed its hand by essentially trying to rewrite the rules governing the copying and distribution of book content through a class-action settlement. “Google clearly flew too close to the sun on this one,” he wrote in an e-mail. “…This is not what class-action suits and settlements are supposed to do.”

Vaidhyanathan said that Google now faces the choice of either continuing to fight for its interpretation of copyright law in the courts or scaling back its plans for a digital bookstore. “If Google decides to take the modest way out, it can still ask Congress to make the needed changes to copyright law that would let Google and other companies and libraries compete to provide the best information to the most people,” the media scholar says. “Congress should have been the place to start this in the first place.”

Jenna Newman on the Google book settlement

The first 2011 issue of the journal Scholarly and Research Communication carries a masterful exploration of the cultural and legal issues surrounding the Google book settlement by Jenna Newman. At 75 pages, this monograph-length essay is probably the most comprehensive and certainly the most current exploration of the issues underlying this giant experiment in digital publishing.

It’s not really possible to sum up the entire essay, so I’ll just cut to the chase and quote from her conclusion, which firstly establishes in extraordinary detail just how good the deal is for Google:

If the settlement is approved, Google can congratulate itself on a particularly excellent deal. It avoids years of uncertainty, not to mention ongoing legal fees, in litigation. It avoids prohibitive transaction costs by not having to clear rights individually for the works it has scanned already and all the works covered by the settlement and yet unscanned. It will receive a blanket licence to use a broad swath of copyrighted works, and it will enjoy an exclusive position, both as a market leader and with legal peace of mind, in the realm of digital rights: its private licence goes much further than current copyright legislation, particularly with respect to orphan works, for which rights are currently unobtainable in any market. Low transaction costs and legal certainty are key requirements for any mass digitization or digital archiving project (McCausland, 2009). The settlement offers both, to Google and Google alone. It will be years ahead of any potential competitors digitizing print works and may easily end up with an effective monopoly and a leading stake in the emerging markets for digital books. And all this costs Google only U.S.$125 million—a mere 0.53% of its gross revenue, or 1.92% of its net income, for 2009 alone (Google Inc., 2010b).
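Newman’s percentages check out against Google’s reported 2009 results (roughly US$23.65 billion in revenue and US$6.52 billion in net income; the rounded figures in this sketch are my approximations):

```python
settlement = 125e6      # the up-front settlement payment
gross_2009 = 23.65e9    # Google's reported 2009 revenue (approx.)
net_2009 = 6.52e9       # Google's reported 2009 net income (approx.)

print(f"share of gross revenue: {settlement / gross_2009:.2%}")  # ~0.53%
print(f"share of net income:    {settlement / net_2009:.2%}")    # ~1.92%
```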

Newman suggests that the deal is far more equivocal for publishers and authors, but that given the other options on the table (including the risk of a music-industry-style failure to establish a viable digital publishing platform until after piracy has eroded much of the value of the market), it may represent the “best deal available.”

But the real implications are for copyright law and communications policy:

The settlement may serve publishers’ and authors’ individual or immediate interests even as it erodes their collective and long-term ones. The public, too, has a significant vested interest in the subjects of the settlement—the books themselves, repositories to centuries of knowledge and creativity—as well as the legal and cultural environment the settlement endorses. A detailed account of the settlement’s economic and cultural costs and benefits is instructive, but more importantly the settlement highlights the structural and technological deficiencies of existing copyright law. Long copyright terms and the presumption of total rights protection have created a copyright regime that privileges the potential for commercial exploitation regardless of whether that exploitation is feasible or even desired by the creators themselves. This regime is also particularly ill equipped to recognize digital possibilities. Whatever happens to this settlement, such tensions continue to strain copyright’s rules.

A number of conditions on approval could address criticisms of the settlement, but perhaps the best way to ensure Google, publishers, and authors are all treated fairly is to pursue copyright reform, not private contracts, to address the legislative problems that the settlement tries to engage. Legislative changes with respect to intellectual property rights have been slow to reflect everyday technological realities. The existence of the settlement, and much of its reception, demonstrates that private interests and public appetites are eager to move beyond the limits of the current regulations. Copyright reform will be fraught with challenges of its own, but the existing legal framework—in Canada as in the U.S.—is increasingly inadequate for accommodating common and emerging practices and capabilities: copyright law has swung out of balance. The settlement may serve as an early test bed for certain possibilities, including digital distribution and access, and the imposition of limited formalities on rights-holders. However, as a private contract, it is an insufficient guide for legislative development. The trouble with copyright does not affect Google alone. The public interest demands more broadly applicable solutions, and these will be achieved—eventually, and possibly with great difficulty—through copyright legislation. We may get copyright reform wrong, as arguably we have done in the past, but that fear should be allayed if we also recall that we have the power to revise our legislative interventions until we get them right.

 

The diffusion of the printing press in Europe, 1450-1500

These maps are just too pretty not to re-post. They come from Jeremiah Dittmar’s fascinating new paper, Information Technology and Economic Change: The Impact of the Printing Press.

The diffusion of the printing press, 1450-1500. Source: Jeremiah Dittmar.

There’s a good summary of the paper at Vox, but the take-home message is probably in two parts. Firstly:

  • First, the printing press was an urban technology, producing for urban consumers.
  • Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth.
  • Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).

And secondly:

I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
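Dittmar’s difference-in-differences strategy amounts to comparing growth changes across the 1500 cutoff for adopting and non-adopting cities. A toy calculation makes the logic concrete (the pre-period growth figures below are hypothetical; only the 21-point advantage and 30-point mean growth come from the quote above):

```python
# Toy difference-in-differences: growth in percentage points, before
# and after 1500, for printing-press adopter vs. non-adopter cities.
pre = {"adopter": 20.0, "non_adopter": 20.0}   # 1450-1500: no prior advantage
post = {"adopter": 51.0, "non_adopter": 30.0}  # 1500-1600: mean growth ~30pp

# The estimated effect is the change for adopters minus the change
# for non-adopters, which nets out any shared background growth.
did = (post["adopter"] - pre["adopter"]) - (post["non_adopter"] - pre["non_adopter"])
print(f"Estimated printing-press effect: {did:.0f} percentage points")  # 21
```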

The worsening woes of the (recorded) music industry

From the Guardian‘s inestimable Charles Arthur comes a must-read post on the gloomy future of the record industry. Because it’s so good, I’ve re-posted it here in full:

Bad news for the music industry. And it comes in threes.

First, Warner Music (which might be thinking of buying EMI from Citigroup?) reported its numbers for the fourth calendar quarter of 2010 (which is actually its fiscal first quarter). Oh dear. Total revenue ($789m) down 14% from 2009, down 12% on a constant currency basis (ie allowing for exchange rate fluctuation); digital revenue of $187m was 24% of total revenue (yay!), up 2% from last year (oooh), but sequentially down by 5%, or 7% on constant currency.

Operating income before depreciation and amortisation down 20% to $90m, from $112m a year ago. All of which led to a net loss of $18m, compared to a net loss of $17m a year before. In other words, things are still bad there. And it’s still got some heavy gearing: cash is $263m, long-term debt is $1.94bn. Warner might want to buy EMI, but it would put a hell of a strain on it. And the music business isn’t exactly looking like a place where you’d want a bank putting your money.

Second, Fred Wilson, a venture capitalist who spends upwards of $60 per month – and by his estimate around $2,000 annually – on music and music subscriptions was forced to turn pirate in order to get hold of the new Streets album:

“[I] searched the Internet for the record. It was not even listed in iTunes or emusic. It was listed on Amazon US as an import that would be available on Feb 15th, but only in CD form. I’m not buying plastic just to rip the files and throw it out. Seeing as it was an import, I searched Amazon UK. And there I found the record in mp3 form for 4 pounds. It was going to be released on Feb 4th. I made a mental note to come back and get it when it was released. I got around to doing that today. I clicked on “buy with one click” and was greeted with this nonsense.”

Which was Amazon saying that because he wasn’t in the UK, he couldn’t buy it. Unable to find a VPN that would let him masquerade as a Briton, he took the next step:

“So reluctantly, I went to a bit torrent search. I found plenty of torrents for the record and quickly had the record in mp3 form. That took less than a minute compared to the 20+ minutes I wasted trying pretty hard to buy the record legally.

“This is fucked up. I want to pay for music. I value the content. But selling it to some people in some countries and not selling it to others is messed up. And selling it in CD only format is messed up. And posting the entire record on the web for streaming without making the content available for purchase is messed up.”

Well, you could argue that an inability to actually wait for the few weeks, perhaps a month, before he could hear the songs via a licensed US label was what’s messed up. Is there no other music in the world that he can hear first? Nobody else? True, it would make sense if contracts were signed so that everything happened at once. But the record industry is still rather like the book industry: because it generates most of its money from physical things, it organises itself around those things.

And finally to Mark Mulligan, music analyst at Forrester Research. Writing on the Midem blog, Mulligan points out that “Digital music is at an impasse” because “it has not achieved any of its three key objectives”, specifically:

1 – to offset the impact of declining CD sales
2 – to generate a format replacement cycle and
3 – to compete effectively with piracy.

Mulligan notes that

“the divergence between emerging consumer behaviour and legitimate music products is widening at an alarming rate. And consumers are voting with their feet: Forrester’s latest consumer data shows digital music activity adoption is flat across ALL activity types compared to 1 year previously (in fact the data shows a slight decline).”

The hope on the part of the music business that the iPod, and the iTunes Store, and then digital music stores of all sorts, would be its saviour has turned out to be false. As Mulligan notes,

“all music activity is niche, except for video. Just 10% of Europeans and 18% of US consumers pay for digital music. Only music video has more than 20% adoption (and only in Europe at that): YouTube is digital music’s killer app.”

(If you are, or know, any young teenagers you’ll know that this is absolutely true. YouTube, and of course in Europe also Spotify. The problem with Spotify being, in the eyes of the record companies, that it simply doesn’t pay them enough. Whereas in Spotify’s eyes the record companies have for too long demanded too much.)

Mulligan adds that the “transition generation” – the 16-24 year-olds – aren’t the future. Instead, the future lies with the 12-15 year olds.

“In fact, when you look closely at the activities where 16-24s over-index [do more than other age cohorts], you can see that their activity coalesces around recreating analogue behaviours in a digital context. The 16-24s started out in the analogue era. They are the transition generation with transitional behaviours.

“The 12-15 year olds, though, don’t have analog baggage. All they’ve known is digital. Online video and mobile are their killer apps. These Digital Natives see music as the pervasive soundtrack to their interactive, immersive, social environments. Ownership matters less. Place of origin matters less. But context and experience are everything. The Digital Natives are hugely disruptive, but their disruption needs harnessing.”

So why does this matter, asks Mulligan? Because

“current digital music product strategy is built around the transition generation with transition products to meet their transitional needs and expectations. Neither the 99 cent download nor the 9.99 streaming subscription is the future. They are transition products. They were useful for bridging the gap between analogue and digital, to get us on the first step of the digital path, but now it’s time to start the journey in earnest. We’d be naïve to argue that we’re anything close to the end game yet. But the problem is that consumer demand has already outpaced product evolution, again.”

It’s time, he argues, for the music companies to deal with the world as it is, rather than as it used to be or as they liked it. Many in the business will tell you that that is exactly what they are doing; and nothing that Mulligan says in any way detracts from the (real) efforts that are being made by many record executives, who are not as clueless or uninformed as many would like to think. Instead, they’re frequently dealing with institutional and sector-based inertia that’s hard to get moving. Plus if Simon Cowell can discover a singer on a talent show and propel her to the top of the UK and US album charts (the first British act since the Beatles to achieve that), selling millions of CDs, well, is his strategy so wrong and everyone else’s somehow so right? Realities like that give even the most digital executive pause.

Back to Mulligan, who points out that

“the digital natives have only ever known a world with on-demand access based music experiences. …And the experience part is crucial. In a post-content-scarcity world where all content is available, experience is now everything. Experience IS the product. With the contagion of free infecting everything the content itself is no longer king. Experience now has the throne.”

So what’s needed? He thinks future music products need “SPARC” (no, not the Sun processor architecture). Digital music products, he says, must be:
• Social: put the crowd in the cloud
• Participative: make them interactive and immersive
• Accessible: ownership still matters but access matters more
• Relevant: ensure they co-exist and join the dots in the fragmented digital environment
• Connected: 174m Europeans have two or more connected devices. Music fans are connected and expect their music experiences to be also.

His parting shot: “Music products must harness disruption, that isn’t in question. What is, is whether they do so quickly enough to prevent another massive chunk of the marketplace disappearing for good?”

I think Warner may have answered that already, actually.

My commentary: after reading this, if you were a music industry executive you’d probably want to slash your wrists. But things may be both worse and better than they seem for the big music publishers. Here’s why.

Firstly, experience can be excluded, branded and sold. The predominant form of musical experience today is not the download but the live music festival or concert. Large multinationals are already aggressively into this space (think LiveNation) and we should expect this to continue. Secondly, experience can be a good as well as a service: that is, really well produced and packaged vinyl can be an experience (although only a niche experience – but then again, all music is niche now anyway). Finally, certain aspects of the music market are not being disrupted in the same way as downloadable songs – for instance, royalty streams where the end customer is large enough to warrant legal pursuit by collection agencies.

On the other hand, in some ways, things really are as bad if not worse than the Forrester report suggests. Free music is not going away, and today’s teenagers really don’t expect to pay for it. That battle is over. So the future for recorded music may really be truly non-excludable and free. That’s a challenge that no-one in the industry seems willing to face up to, even those advocating streaming or subscription models. Finally, the recent history of the music industry suggests that music publishing executives – indeed, musicians themselves – struggle to understand the new paradigm, even twelve years after Napster.

John Lanchester: Can newspapers survive?

Lanchester in the LRB.

Short answer: not in print, but yes online.

Longer answer:

I feel equally certain in saying that what the print media need, more than anything else, is a new payment mechanism for online reading, which lets you read anything you like, wherever it is published, and then charges you on an aggregated basis, either monthly or yearly or whatever. For many people, this would be integrated into an RSS feed, to create what amounts to an individualised newspaper. I would be entirely happy to pay to subscribe to Anthony Lane on movies in the New Yorker, and Patricia Wells on restaurants in the Herald Tribune, and Larry Elliott on economics in the Guardian, and David Pogue on technology in the New York Times, and I also want to feel free to read anything else which catches my eye, whenever I feel like it – I just don’t want to have to think about paying every time I click on the article to read it. I want a monthly or yearly charge, taken off my credit card without my having to think about it. That charge could mount up pretty high over the course of a year, but not as high as the current costs – $4.99 for a single digital issue of the New Yorker, for example. Papers can charge different amounts for their content, and we the readers will be the market who decides what is worth what. The charging process has to be both invisible and transparent: invisible at the moment of use, and transparent when I want to see what I’ve paid. The idea is for a cross between a print version of Spotify, with a dash of Amazon and a dash of iTunes. All those players have the expertise to do it, as do the credit card companies. From the technical perspective it should not be all that hard to do, and it would, I believe, work in remonetising the newspaper business. Let us pay – we’re happy to pay.
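Lanchester’s mechanism (invisible at the point of use, transparent on the monthly bill) is essentially a metering problem, of the kind the credit card companies already solve. A minimal sketch, with entirely hypothetical publications and per-article prices in cents:

```python
from collections import Counter

# Hypothetical per-article prices set by each publication (in cents);
# in Lanchester's scheme each paper prices its own content.
PRICES = {"new_yorker": 15, "guardian": 5, "nyt": 10}

class Meter:
    """Silently records each article read, then bills once a month."""
    def __init__(self):
        self.reads = Counter()

    def read(self, publisher):
        # Invisible at the moment of use: just count the click.
        self.reads[publisher] += 1

    def monthly_bill(self):
        # Transparent on inspection: one itemised, aggregated charge.
        return {pub: n * PRICES[pub] for pub, n in self.reads.items()}

m = Meter()
for pub in ["guardian", "guardian", "nyt", "new_yorker"]:
    m.read(pub)
print(m.monthly_bill())  # cents owed to each publication this month
```

The billing arithmetic is trivial; the hard part Lanchester glosses over is getting every publisher into a single clearing system in the first place.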

Lanchester also links to this excellent piece by Alan Rusbridger and the OECD’s important 2010 study The Evolution of News and the Internet.

The Times paywall: what do the numbers tell us?

The preliminary numbers on The Times paywall are in … and no-one quite knows what to make of them.

Paid Content argues that while web readership has fallen off a cliff (as expected), the modest number of ongoing subscribers offers some hope for the future.

Roy Greenslade says it’s early days but the numbers probably don’t add up:

I am told that iPad numbers are “jumping around” all the time.

But there has been no attempt to counter my source’s view that there has been a measure of disappointment about online-only take-up.

Many people who tried out access in the early weeks have not returned. However, it is also true to say that some daily subscribers have been impressed enough to sign up on a weekly basis.

And it is also the case that the Sunday Times’s iPad app has yet to launch. It is hoped that this will boost figures considerably, though I have my reservations about that.

I think, once we delve further into these figures, they will support the view that News Int’s paywall experiment has, as expected, not created a sufficiently lucrative business model.

Clay Shirky argues the paywall means a retreat from broad-based newspaper-style publishing to narrowcast newsletter publishing:

One way to think of this transition is that online, the Times has stopped being a newspaper, in the sense of a generally available and omnibus account of the news of the day, broadly read in the community. Instead, it is becoming a newsletter, an outlet supported by, and speaking to, a specific and relatively coherent and compact audience. (In this case, the Times is becoming the online newsletter of the Tories, the UK’s conservative political party, read much less widely than its paper counterpart.)

Murdoch and News Corp, committed as they have been to extracting revenues from the paywall, still cannot execute in a way that does not change the nature of the organizations behind the wall. Rather than simply shifting relative subsidy from advertisers to users for an existing product, they are instead re-engineering the Times around the newsletter model, because the paywall creates newsletter economics.

As of July, non-subscribers can no longer read Times stories forwarded by colleagues or friends, nor can they read stories linked to from Facebook or Twitter. As a result, links to Times stories now rarely circulate in those media. If you are going to produce news that can’t be shared outside a particular community, you will want to recruit and retain a community that doesn’t care whether any given piece of news spreads, which means tightly interconnected readerships become the ideal ones. However, tight interconnectedness correlates inversely with audience size, making for a stark choice, rather than offering a way of preserving the status quo.

This re-engineering suggests that paywalls don’t and can’t rescue current organizational forms. They offer instead yet another transformed alternative to it. Even if paywall economics can eventually be made to work with a dramatically reduced audience, this particular referendum on the future (read: the present) of newspapers is likely to mean the end of the belief that there is any non-disruptive way to remain a going concern.


The long-tail of publishing

The following post first appeared on the website of The Wheeler Centre for Books, Writing and Ideas, on October 4th 2010.

When was the last time you bought a CD?

If you’re like most young Australians, the answer is: a while ago. The advent of digital file sharing technologies has completely transformed the music publishing business. Since Napster was invented in 1999, CD sales have plunged, major record labels are struggling – but concert and festival attendances have boomed.

Now it’s the publishing industry’s turn to feel the destructive gale of technological change. A recent article in the Wall Street Journal is only the latest of many to chronicle the declining fortunes of traditional book publishers, particularly in fields like literary fiction:

From an e-book sale, an author makes a little more than half what he or she makes from a hardcover sale. The lower revenue from e-books comes amidst a decline in book sales that was already under way. The seemingly endless entertainment choices created by the Web have eaten into the time people spend reading books.


Publishers and authors face declining revenues and profits in the digital world. Source: LJK Literary Agents, Wall Street Journal


The sea-change in the publishing industry illustrates the new economics of digital distribution. It’s a phenomenon dubbed “the long tail” by Wired editor Chris Anderson. (Anderson borrowed the term from technology economist Erik Brynjolfsson).

The long tail is illustrated in the image below. The “tail” is simply the long, rightward-sloping end of the curve. Inside the long tail are all the unpopular and obscure titles that never used to get published – but that can nonetheless sell in small numbers online. Aggregated together by a business model such as Amazon’s, this vast global back-catalogue can add up to real profits. In a nutshell: falling costs of publishing and distribution have allowed an avalanche of content to find new audiences. They are small audiences, but they are real.
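The aggregation effect is easy to see with a toy calculation. The numbers below are illustrative assumptions (a simple power-law sales curve), not the Brynjolfsson data cited later; the point is only that a long tail of individually tiny sellers can sum to a meaningful share of total sales.

```python
# Toy long-tail illustration: assume the k-th most popular title
# sells in proportion to 1 / k**alpha (a power law). Hypothetical numbers.

def tail_share(n_titles, head_size, alpha=1.0):
    """Fraction of total sales from titles ranked below the 'head'
    (i.e. the obscure titles a physical store wouldn't stock)."""
    sales = [1 / k**alpha for k in range(1, n_titles + 1)]
    return sum(sales[head_size:]) / sum(sales)

# With 1,000,000 titles and a 'bricks-and-mortar' head of the top
# 100,000, the obscure tail still carries a substantial share of sales.
share = tail_share(n_titles=1_000_000, head_size=100_000)
print(f"Tail share of sales: {share:.0%}")  # → Tail share of sales: 16%
```

The exact percentage depends entirely on the assumed exponent and catalogue size; the qualitative point – many tiny sellers aggregating into real revenue – is what the long-tail argument rests on.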


MIT economist Erik Brynjolfsson analysed sales data from Amazon and found that 30-40% of Amazon book sales are titles that wouldn’t normally be found in bricks-and-mortar stores. Source: Erik Brynjolfsson, Jeffrey Hu and Michael Smith (2006) “From Niches to Riches: Anatomy of the Long Tail.”


What this means for writers is beginning to emerge. The long tail contains nearly everything that isn’t a commercially viable proposition: in other words, most writers, bloggers and poets. But these new technologies can help once-obscure writers and bloggers to connect directly to audiences, and even allow them to make a modest but sustainable living from their craft. As technology writer Kevin Kelly has observed, artists and writers may only need “1000 true fans” to build a career, and cheap and easy global access to blogging engines makes this easier than ever before.
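The arithmetic behind Kelly’s idea is simple; the dollar figure below is a commonly used illustrative assumption, not Kelly’s prescription.

```python
# Back-of-the-envelope arithmetic behind the "1000 true fans" model.
# Assume each true fan spends $100 a year on an artist's work.
true_fans = 1000
spend_per_fan = 100

gross_income = true_fans * spend_per_fan
print(f"Annual gross income: ${gross_income:,}")  # → Annual gross income: $100,000
```

A gross income in that range is no superstar salary, but it is the “modest but sustainable living” the paragraph above describes.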

The ability of technology to put publishing in the hands of writers won’t create many superstars, but we’re already seeing its potential to allow amateurs to reach meaningful readerships, and journalists, academics and other literary professionals to add second strings to their bows. Increasingly, writers are making money the way musicians are: by monetising their speeches, presentations and merchandise. The inter-connectedness of blogs, which rely on many reciprocal links between a community of interest in a particular niche, helps this process.

Bottom line: the long-tail economics of blogging might be unsettling for writers and publishers used to the old models, but it’s a trend that’s here to stay.

The best article about freelancing – ever

Writing at The Awl, Richard Morgan has produced the best article about freelancing I’ve ever read.

Excerpt below, but make sure you read the entire bitter-sweet funny-sad melancholy-hilarious magnum opus.

Heroes be damned; a writer should not model themselves after an editor. That is probably the single best realization I have made as a freelancer.

Moss said the thing that all editors inevitably tell all writers—something along the lines of “I really admire your determination, because I tried freelancing and didn’t last six months.” Editors like to talk about how much they need freelancers and how much they envy our freedom and our work ethic and our Rolodex. Whenever a friend loses his staff job at a magazine or newspaper, his ensuing panic reminds me that they put all their eggs in one basket and that I am cushioned because I have my eggs spread across so many baskets (which is a different kind of panic). Freelancing has great rewards, but trajectory is not really one of them. You do not go from being a freelance writer to a freelance editor to a freelance deputy managing editor. Essentially, I’m doing the same thing I was doing in 2003. The market for my vaudevillian sales of wonder tonic can dry up at any moment. An editor leaves. A magazine folds. And poof! Gone.

For the record, I’ve been freelancing for nine years and I’ve liked nearly all the editors I’ve ever written for. Editors are a writer’s best friend!

Jed Perl on writing in the digital age

In The New Republic, Jed Perl has a thoughtful essay on the demands and rewards of writing in the age of instant publication:

Writers write in order to be read. This is obvious. But the speed with which words, once written, are now being read—a speed shaped by technological innovations long before the Internet turned the quick turnaround into the virtually instantaneous turnaround—has set me to thinking about the extent to which writing, for the writer, ought to have a freestanding value, a value apart from the reader. There is too much talk about the literary marketplace, the cultural marketplace, and the marketplace of ideas. We need to remember that a book—or a painting or a piece of music—begins as the product of an individual imagination, and can retain its power even when largely or even entirely ignored. (The paintings of Piero della Francesca were overlooked for several centuries.) I do not for one moment minimize the economic pressures on writers to publish—and to publish, if they are lucky enough to have the choice, in higher-paying places rather than lower-paying ones. I’ve made my living as a writer for 30 years, and I know how difficult it can be. But writers who live for their readers—or for what their editors imagine their readers want—may end up with an impoverished relationship with those readers.