New York Times paywall: round-up of the analysis

Nieman Journalism Lab’s Mark Coddington has a great round-up of the decision by the New York Times to introduce a paywall:

There were a couple pieces written supporting the Times’ proposal: Former CBS digital head Larry Kramer said he’d be more likely to pay for the Times than for the tablet publication The Daily, even though it’s far more expensive. The reason? The Times’ content has consistently proven to be valuable over the years. (Tech blogger John Gruber also said the Times’ content is much more valuable than The Daily’s, but wondered if it was really worth more than five times more money.) Nate Silver of Times blog FiveThirtyEight used some data to argue for the Times’ value.

The Times’ own David Carr offered the most full-throated defense of the pay plan, arguing that most of the objection to it is based on the “theology” of open networks and the free flow of information, rather than the practical concerns involved with running a news organization. Reuters’ Felix Salmon countered that the Times has its own theology — that news orgs should charge for content because they can, and that it will ensure their success. Later, though, Salmon ran a few numbers and posited that the paywall could be a success if everything breaks right.

There were more objections voiced, too: Mathew Ingram of GigaOM and former newspaper journalist Janet Coats both called it backward-looking, with Ingram saying it “seems fundamentally reactionary, and displays a disappointing lack of imagination.” TechDirt’s Mike Masnick ripped the idea that people might have felt guilty about getting the Times for free online.

One of the biggest complaints revolved around the Times’ pricing system itself, which French media analyst Frederic Filloux described as “expensive, utterly complicated, disconnected from the reality and designed to be bypassed.” Others, including Ken Doctor, venture capitalist Jean-Louis Gassee, and John Gruber, made similar points about the proposal’s complexity, and Michael DeGusta said the prices are just too high. Poynter’s Damon Kiesow disagreed about the plan structure, arguing that it’s well-designed as an attack on Apple’s mobile paid-content dominance.

 


Where to next for the Google Book Settlement?

This week a US judge ruled against the Google Book Settlement, the latest development in a seven-year legal saga that I’ve covered in some depth here.

Jerry Brito has a good explainer of the background of the case:

In mid-2005, the Author’s Guild and the American Association of Publishers filed suit to stop Google from scanning any more books. Soon the Author’s Guild’s case was certified as a class-action lawsuit, meaning that anyone who had ever published a book—millions of authors—would be part of the class represented and would be bound by the result of the case.

An Unsettling Settlement

Three years later, after extensive negotiations, the parties announced they had reached a settlement. Google would pay $125 million up front and would then be allowed to continue scanning books and making them available online. More importantly, Google would be allowed not just to offer snippets, but to sell the entire text of books as well. The copyright holder would get about 2/3 of the revenues and Google would keep 1/3.

On its surface, the proposed settlement was a boon for all involved. Google would get to continue digitizing books, authors and publishers would get a cut of the profits, and consumers would get universal access to almost all of the world’s books. But reading between the lines, the settlement proved to be problematic.

Because it was a settlement to a class-action lawsuit, it meant that all authors who had ever published a book were bound. Google could scan any book without first asking for permission. If an author didn’t want his book to be scanned or included in Google’s database, he had to contact Google and opt out. This would have turned copyright on its head.

As a result, many authors protested. The Author’s Guild and the publisher’s association had negotiated on behalf of millions of authors, and many felt the deal didn’t represent their wishes. Almost 7,000 authors wrote to the court asking to be removed from the lawsuit’s plaintiff class.

Saving the Orphans

Another contentious aspect of the settlement was how it treated “orphan works,” books the authors of which are unknown or can’t be found. It’s a well-known problem in copyright that members of Congress have tried to fix several times.

The problem is that if a company like Google wants to digitize a copyrighted book, and it can’t find its author to ask for permission, then its choices are 1) scan the book anyway and face heavy penalties if the author surfaces later and sues, or 2) leave the book undigitized and out of a universal library. As a result, hundreds of thousands of books are in a kind of limbo, not accessible to readers even if the author may well have been fine with digitization.

The Google Books settlement presented a solution to the problem. Because it bound all authors—known and unknown—Google could proceed to scan orphan works without having to worry. If an author later surfaced who didn’t want his book used, he could no longer sue Google. He could opt out of the program and claim a check for the revenues associated with his book, but no more.

Some welcomed this solution to the problem, but others, including the Department of Justice, pointed out to the court that it would give Google a monopoly over orphan works. Because the settlement would only apply to Google, if another party like Amazon or the Internet Archive wanted to create its own digital library that included orphan works, it would not get the same protection.

And it wouldn’t be easy for others to get the same deal. Short of Congressional action, the only way a company like Amazon could get similar treatment would be to settle a class action suit of its own—a very difficult and time-consuming set of events to replicate. Additionally, because the authors and publishers who negotiated the Google deal are getting a cut of revenue, some have suggested that it would be in their interest to make sure Google remained a monopoly, and that they would therefore not settle as easily with other parties.

What’s Next

Because class-action lawsuits can be as controversial as this one, the law requires that a court approve a settlement before it becomes binding. The court accepted over 500 briefs from various parties supporting or opposing the settlement and early last year held a hearing on its fairness. It rejected the settlement yesterday.

The options available now to Google and the authors and publishers are:

  1. Continue litigating the original lawsuit, which is an unlikely scenario.
  2. Amend the settlement to make it opt-in, meaning that authors would have to give permission before their books are scanned.
  3. Appeal the judge’s decision to a higher court.

Judge Chin seemed to invite a new settlement, saying in his opinion that “Many of the concerns raised in the objections would be ameliorated if the [settlement] were converted from an ‘opt-out’ settlement to an ‘opt-in’ settlement.”

In the New York Times, Robert Darnton, himself a librarian and a strident if highly informed critic of the deal, weighed in with this opinion piece:

This decision is a victory for the public good, preventing one company from monopolizing access to our common cultural heritage.

Nonetheless, we should not abandon Google’s dream of making all the books in the world available to everyone. Instead, we should build a digital public library, which would provide these digital copies free of charge to readers. Yes, many problems — legal, financial, technological, political — stand in the way. All can be solved.

The Chronicle of Higher Education carries a good interview with Pamela Samuelson:

It’s the only ruling really that the judge, I think, could have made. The settlement was so complex, and it was so far-reaching. With the Department of Justice and the governments of France and Germany stridently opposed to the settlement, it seems to me that the judge really didn’t have all that much choice. So the ultimate ruling, that the settlement is not fair, reasonable, and adequate to the class, is one that I think was inevitable.

The thing that surprised me about the opinion was that he took seriously the issues about whether the Authors Guild and some of its members had adequately represented the interests of all authors, including academic authors and foreign authors. That was very gratifying because I spent a lot of time crafting letters to the judge saying that academic authors did have different interests. Academic authors, on average, would prefer open access. Whereas the guild and its members, understandably, want to do profit maximization.

The EFF’s Corynne McSherry has this analysis:

On the policy front, the court recognized – as do we – the extraordinary potential benefits of the settlement for readers, authors and publishers. We firmly believe that the world’s books should be digitized so that the knowledge held within them can be made available to people around the world. But the court also recognized that the settlement could come at the price of undermining competition in the marketplace for digital books, giving Google a de facto monopoly over orphan books (meaning, works whose owner cannot be located). The court concluded that solving the orphan works problem is properly a matter for Congress, not private commercial parties. Sadly, Congress has thus far lacked the will to do so. Perhaps yesterday’s decision will finally spur Congress to revisit this important issue and pass comprehensive orphan works legislation that allows for mass book digitization.

That said, the court also got some things fundamentally wrong in its copyright analysis. For example, it states that “a copyright owner’s right to exclude others from using his property is fundamental and beyond dispute” and then proceeds to quote at length from the letters of numerous authors (and their descendants) who share the misguided notion that a copyright is, by definition, an exclusive right to determine how a work can be used. We respectfully disagree. Copyright law grants to authors significant powers to manage exploitation of creative works as a function of spurring the creation of more works, not as a natural or moral right. And those powers are subject to numerous important exceptions and limitations, such as the first sale and fair use doctrines. Those limits are an essential part of the copyright bargain, which seeks to encourage the growth and endurance of a vibrant culture by both rewarding authors for their creative investments and ensuring that others will have the opportunity to build on those creative achievements. Thus, as the Supreme Court has explained, such limits are “neither unfair nor unfortunate” but rather “the means by which copyright advances the progress of science and art.” If the legal issues raised in the underlying lawsuit are ever litigated on the merits, let’s hope this or any future judge keeps the traditional American copyright bargain firmly in mind.

Michael Liedtke of the Associated Press thinks this is a microcosm of the larger antitrust and monopoly challenges facing Google:

This week’s ruling from U.S. Circuit Judge Denny Chin did more than complicate Google’s efforts to make digital copies of the world’s 130 million books and possibly sell them through an online book store that it opened last year. It also touched upon antitrust, copyright and privacy issues that are threatening to handcuff Google as it tries to build upon its dominance in Internet search to muscle into new markets.

“This opinion reads like a microcosm of all the big problems facing Google,” said Gary Reback, a Silicon Valley lawyer who represented a group led by Google rivals Microsoft Corp. and Amazon.com Inc. to oppose the digital book settlement.

Google can only hope that some of the points that Chin raised don’t become recurring themes as the company navigates legal hurdles in the months ahead.

The company is still trying to persuade the U.S. Justice Department to approve a $700 million purchase of airline fare tracker ITA Software nearly nine months after it was announced. Regulators are focusing their inquiry on whether ITA would give Google the technological leverage to create an unfair advantage over other online travel services. Google argues it will be able to provide more bargains and convenience for travellers if it’s cleared to own ITA’s technology.

In Europe and the state of Texas, antitrust regulators are looking into complaints about Google abusing its dominance of Internet search to unfairly promote its own services and drive up its advertising prices.

And Google is still trying to fend off an appeal in another high-profile copyright case, one stemming from its 2006 acquisition of YouTube, the Internet’s leading video site. Viacom Inc. is seeking more than $1 billion in damages after charging YouTube with misusing clips from Comedy Central, MTV and other Viacom channels. A federal judge sided with Google, saying YouTube had done enough to comply with digital copyright laws in its early days.

One of my favourite commentators on Google is of course the one-and-only Siva Vaidhyanathan, who is quoted in this excellent Inside Higher Ed piece:

Siva Vaidhyanathan, a media studies professor at the University of Virginia and a notable Google gadfly, said the company overplayed its hand by essentially trying to rewrite the rules governing the copying and distribution of book content through a class-action settlement. “Google clearly flew too close to the sun on this one,” he wrote in an e-mail. “…This is not what class-action suits and settlements are supposed to do.”

Vaidhyanathan said that Google now faces the choice of either continuing to fight for its interpretation of copyright law in the courts or scaling back its plans for a digital bookstore. “If Google decides to take the modest way out, it can still ask Congress to make the needed changes to copyright law that would let Google and other companies and libraries compete to provide the best information to the most people,” the media scholar says. “Congress should have been the place to start this in the first place.”


The Australian government reviews its tax concessions to independent film production

The following article appeared in Crikey on March 4th:

The federal government has just finished a review of federal film financing arrangements — and given itself a rather large pat on the back. The result is an endorsement of film financing arrangements in which more and more taxpayers’ money is being given to Hollywood studios.

Confirming Sir Humphrey Appleby’s famous principle that you should “never commission an inquiry without knowing the outcome first”, the federal Arts Department’s 2010 Review of the Australian Independent Screen Production Sector makes a series of rosy findings about the state of the sector and the effectiveness of the government’s Australian Screen Production Incentive, a large tax refund to film producers.

More money is certainly leaving Treasury coffers: the report states that “in the three years since the introduction of the Australian Screen Production Incentive, the government has provided $412.1 million in support through the tax system, compared to $136.7 million in the three years before the package.”

But delve further into the report, and all sorts of questions start to pop up. First and foremost is the crucial question of whether those extra taxpayer dollars are really stimulating an upswing in domestic production across the board, or merely co-financing large Hollywood studio films such as Happy Feet 2 and Australia.

Arts Minister Simon Crean trumpeted the review’s findings. “The boost in government funding is a great achievement and contributing to the viability of the local film production industry,” he announced in a media release.

“Although it’s still early days, the increase in activity, particularly the production of Australian large budget films, such as Baz Luhrmann’s Australia and George Miller’s Happy Feet 2, and the box office performance of films such as Tomorrow, When the War Began shows the government support for the sector is having a significant impact.”

In fact, a close reading of the review suggests that the effect of the new funding arrangements is far less positive than the minister and the department claim. Much of the extra money — $169 million, in fact — has gone to foreign movie studios in the form of international production subsidies, though that’s not a fact that the review chose to highlight. But despite this, levels of foreign production in Australia have actually been falling, as the strengthening Aussie dollar and strong competition from other countries and locations have made the foreign production incentives less attractive.

More private investment has been attracted to Australian feature films, however, and more films are being made. Despite this, the domestic box office takings of Australian feature films have risen only slightly, from 3.8% in 2005–2007 to 4.4% in 2008–2010. That’s better than the subterranean levels of 2004, but still worse than the performance of Australian features in the early 2000s — let alone the 1990s.

As for television, the report found that while drama budgets had increased, total hours for Australian-produced adult television drama had remained steady. The reason? Television production is driven by local content quotas. To quote the report, “Australian television production levels remain stable over time and are closely linked to requirements under the Australian Content Standard.” In other words, the television networks are receiving more taxpayers’ money to produce drama they are already required to make under the regulations. It’s a nice deal if you can get it.

Most of the money continues to flow to the big productions, such as Luhrmann’s upcoming Great Gatsby. These are loved by the industry, as they provide lots of employment for local casts and crew. But the review points out that a large part of the Australian screen sector is made up of small companies, many of which produce documentaries. These smaller firms have struggled to access the tax refunds, owing to high production thresholds. Features and documentaries made for less than $1 million or $250,000 respectively are ineligible for the offset, ruling out a large swathe of the independent sector.

Yet the review thinks this is a good thing, as it precludes the low-budget and arthouse features and documentaries that would be unlikely to make a return in any case. “Lowering the offset threshold for feature films to ensure access for emerging producers would to an extent alter the intent of the offset,” it says, “from one encouraging commercially focused features, to one that includes films less likely to be market and box office driven.”

The review confirms a subtle shift in Australian screen funding priorities away from backing emerging film-makers and new voices and towards big budget, Hollywood-financed productions. This may result in bigger box offices for bigger-budget Australian films — or it may not. The federal government’s last effort at supporting commercial film finance was the Film Finance Corporation, a 20-year initiative that acted as a for-profit investor in feature production. The FFC lost more than a billion dollars in that time-frame, booking investment returns of negative 80%.

The new policy gets around this problem by simply giving tax refunds to big producers, regardless of how much money their film eventually makes. And it’s uncapped and open-ended: the bigger the budget of the film, the larger the taxpayer contribution.

Jenna Newman on the Google book settlement

In the 1st issue for 2011 of the journal Scholarly and Research Communication comes a masterful exploration of the cultural and legal issues surrounding the Google book settlement by Jenna Newman. At 75 pages, this monograph-length essay is probably the most comprehensive and certainly the most current exploration of the issues underlying this giant experiment in digital publishing.

It’s not really possible to sum up the entire essay, so I’ll just cut to the chase and quote from her conclusion, which firstly establishes in extraordinary detail just how good the deal is for Google:

If the settlement is approved, Google can congratulate itself on a particularly excellent deal. It avoids years of uncertainty, not to mention ongoing legal fees, in litigation. It avoids prohibitive transaction costs by not having to clear rights individually for the works it has scanned already and all the works covered by the settlement and yet unscanned. It will receive a blanket licence to use a broad swath of copyrighted works, and it will enjoy an exclusive position, both as a market leader and with legal peace of mind, in the realm of digital rights: its private licence goes much further than current copyright legislation, particularly with respect to orphan works, for which rights are currently unobtainable in any market. Low transaction costs and legal certainty are key requirements for any mass digitization or digital archiving project (McCausland, 2009). The settlement offers both, to Google and Google alone. It will be years ahead of any potential competitors digitizing print works and may easily end up with an effective monopoly and a leading stake in the emerging markets for digital books. And all this costs Google only U.S.$125 million—a mere 0.53% of its gross revenue, or 1.92% of its net income, for 2009 alone (Google Inc., 2010b)
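Newman’s percentages are easy to sanity-check. The quick calculation below assumes Google’s published 2009 results of roughly US$23.65 billion in gross revenue and US$6.52 billion in net income; those round figures are my own addition, not from the essay.

```python
# Sanity-check of Newman's percentages. The 2009 revenue and net
# income figures below are rounded assumptions, not from the essay.
settlement = 125e6            # the one-off US$125 million payment

revenue_2009 = 23.65e9        # assumed gross revenue
net_income_2009 = 6.52e9      # assumed net income

pct_of_revenue = settlement / revenue_2009 * 100
pct_of_net_income = settlement / net_income_2009 * 100

print(f"{pct_of_revenue:.2f}% of gross revenue")    # ~0.53%
print(f"{pct_of_net_income:.2f}% of net income")    # ~1.92%
```

Both results land on Newman’s 0.53% and 1.92%, which underlines her point about how cheap the deal is for Google.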

Newman suggests that the deal is far more equivocal for publishers and authors, but that given the other options on the table (including the risk of a music-industry-style failure to establish a viable digital publishing platform until after piracy has eroded much of the value of the market), it may represent the “best deal available.”

But the real implications are for copyright law and communications policy:

The settlement may serve publishers’ and authors’ individual or immediate interests even as it erodes their collective and long-term ones. The public, too, has a significant vested interest in the subjects of the settlement—the books themselves, repositories to centuries of knowledge and creativity—as well as the legal and cultural environment the settlement endorses. A detailed account of the settlement’s economic and cultural costs and benefits is instructive, but more importantly the settlement highlights the structural and technological deficiencies of existing copyright law. Long copyright terms and the presumption of total rights protection have created a copyright regime that privileges the potential for commercial exploitation regardless of whether that exploitation is feasible or even desired by the creators themselves. This regime is also particularly ill equipped to recognize digital possibilities. Whatever happens to this settlement, such tensions continue to strain copyright’s rules.

A number of conditions on approval could address criticisms of the settlement, but perhaps the best way to ensure Google, publishers, and authors are all treated fairly is to pursue copyright reform, not private contracts, to address the legislative problems that the settlement tries to engage. Legislative changes with respect to intellectual property rights have been slow to reflect everyday technological realities. The existence of the settlement, and much of its reception, demonstrates that private interests and public appetites are eager to move beyond the limits of the current regulations. Copyright reform will be fraught with challenges of its own, but the existing legal framework—in Canada as in the U.S.—is increasingly inadequate for accommodating common and emerging practices and capabilities: copyright law has swung out of balance. The settlement may serve as an early test bed for certain possibilities, including digital distribution and access, and the imposition of limited formalities on rights-holders. However, as a private contract, it is an insufficient guide for legislative development. The trouble with copyright does not affect Google alone. The public interest demands more broadly applicable solutions, and these will be achieved—eventually, and possibly with great difficulty—through copyright legislation. We may get copyright reform wrong, as arguably we have done in the past, but that fear should be allayed if we also recall that we have the power to revise our legislative interventions until we get them right.

 

Why AFACT’s piracy statistics are junk

Yesterday, the Australian Federation Against Copyright Theft (let’s call them AFACT or perhaps ‘Big Content’ for short) lost their appeal in the long-running and important copyright infringement suit against Australian ISP iiNet. As usual, some of the best commentary can be found by Stilgherrian (who really does need a second name, don’t you think?):

If you came in after intermission, you’ll pick up the plot quick enough. AFACT said iiNet’s customers were illegally copying movies, which they were, but iiNet hadn’t acted on AFACT’s infringement notices to stop them. AFACT reckoned that made iiNet guilty of “authorising” the copyright infringement, as the legal jargon goes. iiNet disagreed, refusing to act on what they saw as mere allegations. AFACT sued.

In the Federal Court a year ago, Justice Dennis Cowdroy found comprehensively in favour of iiNet. It was a slapdown for AFACT. AFACT appealed, and yesterday lost. Headlines with inevitable sporting metaphors described it as a two-nil win for iiNet.

But read the full decision and things aren’t so clear-cut.

One of the three appeals judges was in favour of AFACT’s appeal being dismissed. Another was also in favour of dismissal, but reasoned things differently from Justice Cowdroy’s original ruling. But the third judge, Justice Jayne Jagot, supported the appeal, disagreeing with Justice Cowdroy’s reasoning on the two core elements — whether iiNet authorised the infringements and whether, even if they had so authorised them, they were then protected by the safe harbour provisions of the Copyright Act.

There’s plenty of meat for an appeal to the High Court, and that’s exactly where this will end up going. Wake me when we get there.

As I argued today, also in Crikey, it’s ironic that Big Content seems to be about the only business lobby group in the country arguing for more regulation and red tape.

But the copyright case also comes in the wake of an interesting little micro-controversy about piracy statistics, released by AFACT late last week. Aided by an economics consultancy and a market research firm, AFACT released an impressive-seeming report that claimed that movie piracy was costing Australia $1.4 billion and 6,100 jobs a year.

Electronic Frontiers Australia made some pretty valid criticisms of the research, including the following:

1. The assumption that 45% of downloads equal lost sales is unproven and insufficient evidence is provided to support it. The survey method cited is better than assuming 100% of downloads are lost sales, but there is better analysis in other studies – for example this piece by Lawrence Lessig. If the study was correct, sales of DVDs and attendance at cinemas would be much more reduced than the reported industry figures. In fact, the movie industry is making record profits.

2. It can’t be ignored that downloads have an advertising effect both on the product downloaded and future releases. To the extent sales may be lost, these must be offset against other gains from advertising.

3. Gross revenue is not the relevant metric, due to variables such as investment in capital, distribution and costs of sales. Many of the movies downloaded may not have been available to view or buy in Australia. Profit is the metric of importance, but this is never studied.

4. Flow-on effects to other industries are wholly speculative, and lost tax on profits assumes the entities pay Australian company tax on sales pro-rata to revenue, which is not intuitive or evidenced. It also assumes that money not spent on movies is lost to the economy, instead of helping to create jobs in other sectors.

5. Peer to peer file sharing is merely the latest in a sequence of technologies since the 19th century which have been claimed to be the ruin of the creative arts. See chapter 15 “Piracy” by Adrian Johns (University of Chicago Press 2009) – the copyright owners said the same thing about copies of sheet music, tape recorders, every iteration of personal recording system and indeed public radio. However, “home piracy” acts not only as a loss to industry but also as a boon to distribution, bypassing censorship and limitations on sales by official outlets.

6. The report suffers, as have other industry-funded studies, from “GIGO”. With an assumption that “downloads = losses” unproven, all conclusions estimating the size of the loss are equally unproven. What if a vibrant sharing culture increases total sales for media respected as quality by consumers, but reduces sales of hyped media? (Research has shown that the biggest downloaders in fact spend more on entertainment than non-downloaders.)

7. The call-to-action of this report is obviously to “crack down on piracy”, shifting the cost of file-sharing from the industry to the taxpayer via increased law-enforcement. No industry, let alone the foreign-dominated entertainment industry, deserves a free ride for its business model. If instead, the industry noted that the report says 55% of downloads created a market for sales, much of which is unsatisfied due to current restrictive trade practices, then its future profitability would be in its own hands.

8. Repeated studies have demonstrated that the entertainment industry vies for money and commitment of time with all other forms of entertainment. The Internet, computer games and mobile telecommunication applications take “eyeballs and dollars” away from DVD and CD sales, but also sports arenas, sales of board games and printed works. Magazines are also suffering from a reduced value proposition with the Internet, and some forms of entertainment and some businesses in the industry will no doubt find it difficult to remain vibrant. Change is consumer-driven, and it’s futile for the industry to try to hold fast to a business model and methods of content distribution which are dying with or without fierce law enforcement of copyrights.

Unsurprisingly, AFACT have responded, attacking EFA’s arguments.

Notably, AFACT replies that:

“The study does not assume that ‘downloads = losses’. As stated above, some 32 per cent of respondents said that they viewed an authorised version of a movie after watching the pirated version. As a result, 32 per cent of ‘all pirate views’ were removed from the ‘lost revenue’ calculations and were treated as ‘sampling’.”

This is a valid argument. AFACT has indeed removed these later viewings from their lost revenue calculations. But, as I’ll explore below, this doesn’t mean that AFACT’s methodology is sound.
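To see what that adjustment does and doesn’t fix, here is a minimal sketch of the lost-revenue arithmetic, with every number invented for illustration. Even after the 32% of “sampling” views is removed, the headline figure still scales one-for-one with the assumed substitution rate, which is precisely the figure EFA disputes.

```python
# Minimal sketch of the lost-revenue arithmetic being debated.
# Every number here is an illustrative assumption, not a figure
# from the AFACT report.
pirate_views = 1_000_000     # hypothetical annual pirated views
sampling_share = 0.32        # views followed by an authorised viewing
substitution_rate = 0.45     # assumed share of remaining views that
                             # displace a sale -- the disputed figure
avg_price = 15.0             # assumed revenue per displaced sale

non_sampling_views = pirate_views * (1 - sampling_share)
lost_sales = non_sampling_views * substitution_rate
lost_revenue = lost_sales * avg_price

print(f"estimated lost revenue: ${lost_revenue:,.0f}")
# Halving the substitution rate halves the headline loss:
print(f"at half the rate:       ${lost_revenue / 2:,.0f}")
```

The sampling deduction is a fixed one-off adjustment; the substitution rate multiplies straight through to the final figure, so any error in it propagates undiminished into the $1.4 billion headline.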

AFACT’s other replies are far less persuasive. Take this line:

“It should be clearly noted that in almost all of these cases government or technology provided a barrier to prevent continued rampant infringement. In the case of public radio, legislation provided statutory copyright royalties. VHS and cassette tape may have been efficient technologies for recording, but in terms of cost and quality (analog degrades with time) they proved not to be efficient for distribution at that time. Laws were also designed to prevent mass distribution of pirated VHS tapes. Solutions, whether legislative, technological or otherwise are currently required to prevent or deter the unfettered digital distribution of pirated versions of copyrighted content.”

Not to put too fine a point on it, this is a rubbish argument. Statutory copyright royalties for broadcasters were not barriers to listeners – they were income streams to publishers. And, in fact, as EFA points out, radio proved to be such a powerful marketing tool for music labels that record companies regularly resorted to payola and other measures to get their songs on high-rating radio stations. AFACT’s argument is classically circular: because AFACT believes that regulatory barriers are necessary to prevent infringement, it argues that the reason previous technologies didn’t lead to “rampant infringement” was that they were strictly regulated. You don’t need a degree in logic to spot the flaw in this argument.

So who’s right?

On the whole, EFA has the better of the exchange. Indeed, there are plenty more holes you can pick in AFACT’s methodology if you wish. To start with, let’s examine their laughable “Annex 1” in the full report. This purports to explain how ABS input-output tables are used to generate a final figure for total piracy impact in terms of lost sales and job losses.

I’d like to say I carefully checked their methodology for its econometric accuracy. Unfortunately, I can’t – because the authors at Oxford Economics and Ipsos don’t publish their equations; nor do they publish their raw data.

Just as an exercise, I downloaded the ABS input-output tables and attempted to match the ABS data to the AFACT report. It’s impossible. The data tables in the AFACT report which might allow that kind of scrutiny are missing.

What Annex 1 does tell us is that Oxford Economics and Ipsos have made all sorts of behind-the-scenes calculations to do with the exact value of the multipliers they use and the precise allocation of various ABS industry data to various categories of their assumptions. But they don’t tell us how these figures were arrived at. To get a flavour of the opacity of the modelling, here’s their full explanation of two of the multipliers they use:

Type II multipliers of 2.5 (Gross Output) and 1.1 (GDP) were estimated. This covers activity in the Australian motion picture exhibition, production and distribution industries as well as TV VOD, internet VOD, downloads of motion pictures and the retailing of these motion pictures

There is no further explanation of how the numbers of 2.5 and 1.1 were “estimated” and no equation which shows us what they multiply. Hence, it is literally impossible to verify, cross-check or otherwise scrutinise these figures. Indeed, the full report contains no true methods section. In other words, the academic credibility of these figures should be zero.
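To see what is at stake in those undisclosed figures, here is a minimal sketch of what a Type II multiplier calculation does. The direct-loss figure below is a made-up placeholder, not AFACT’s data; only the 2.5 and 1.1 multipliers are quoted from the report, and the real model would involve allocations across ABS industry categories that the report does not publish.

```python
# Illustrative sketch only: the report does not publish its equations or
# raw data, so the direct-loss input here is hypothetical. A Type II
# multiplier scales a direct industry impact into a total economic impact
# (direct + indirect supply-chain + induced household spending).

def total_impact(direct_loss, multiplier):
    """Total impact implied by an input-output Type II multiplier."""
    return direct_loss * multiplier

direct_loss = 100.0  # hypothetical direct lost sales, $m

gross_output_impact = total_impact(direct_loss, 2.5)  # 2.5 quoted in the report
gdp_impact = total_impact(direct_loss, 1.1)           # 1.1 quoted in the report

print(gross_output_impact)      # 250.0
print(round(gdp_impact, 1))     # 110.0
```

The point of the exercise: every dollar of claimed total impact depends entirely on the multiplier chosen and on what counts as a “direct” loss, which is precisely what the report leaves unverifiable.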

This rubbish is just another example of how lobby groups use consultants-for-hire to create vocal scare campaigns based on fictitious figures. It’s junk modelling, ordered up for the express purpose of industry rent-seeking.

Crikey’s Bernard Keane explained it helpfully for us in relation to climate lobbying in 2010:

This is what you do:

  1. Commission a report from one of the many economics consultancies that have broken out like a plague of boils in the past decade.  This should feature modelling demonstrating the near-apocalyptic consequences of even minor reform.  Even if your industry is growing strongly, you should refer to any lower rates of future growth as costing X thousands of jobs, without letting on that those jobs don’t actually exist yet, and might never exist due to a variety of other factors.
  2. Dress up the report as “independent”, slap a media-friendly press release on the top and circulate it to journalists before release, with the offer of an interview of the relevant industry or company head.
  3. Hire a well-connected lobbyist to press your case in Canberra.  When the stakes are high, commission some polling to demonstrate that a crucial number of voters in crucial marginal seats are ready to change their vote on this very issue.

The diffusion of the printing press in Europe, 1450-1500

These maps are just too pretty not to re-post. They come from Jeremiah Dittmar’s fascinating new paper, Information Technology and Economic Change: The Impact of the Printing Press.

The diffusion of the printing press, 1450-1500. Source: Jeremiah Dittmar.

There’s a good summary of the paper at Vox, but the take-home message is probably in two parts. Firstly:

  • First, the printing press was an urban technology, producing for urban consumers.
  • Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth.
  • Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).

And secondly:

I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
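For readers unfamiliar with the method, the difference-in-differences logic Dittmar describes can be sketched in a few lines. The group means below are invented for illustration; they are not his data, though the invented numbers are chosen to echo the magnitudes he reports.

```python
# Hypothetical illustration of a difference-in-differences estimate.
# Growth figures are mean city population growth in percentage points;
# "adopter" cities established printing presses 1450-1500.

def diff_in_diff(adopter_pre, adopter_post, control_pre, control_post):
    """DiD estimate: change for adopters minus change for controls."""
    return (adopter_post - adopter_pre) - (control_post - control_pre)

# Invented group means (percentage points of growth):
effect = diff_in_diff(adopter_pre=10.0, adopter_post=55.0,
                      control_pre=10.0, control_post=20.0)

print(effect)  # 35.0 -- equal pre-period growth ("no prior advantage"),
               # so the whole post-period gap is attributed to adoption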

Elif Batuman on the double-entry book-keeping of writing

Elif Batuman. Image: ecu essays.

I must be the very last person in the literary world to discover the complex delight of reading Elif Batuman, but this piece of writing by her really did my head in.

It’s the first chapter of her doctoral dissertation, and it’s quite possibly the most erudite, dextrous and fleet-footed jaunt through the literary theory of the modern novel I’ve read since … well, since Borges:

The time of writing is not problematic for all novelists; only for 1) professional, full-time writers, who 2) maintain a strict allegiance to the raw material of lived experience.  The time of writing is not problematic for Casanova, because he takes up writing only in his retirement: far from scribbling his memoirs in the fear that he would die before completing his work, he actually tried to draw out his writing as long as possible, to fill his remaining years.  At the opposite end of the spectrum, metaliterary gamesters like Sterne or Diderot feel no epistemological responsibility to base their works on real experiences; to the contrary, epistemological self-sufficiency becomes for them a point of pride.  A much-cited passage from Tristram Shandy, for instance, testifies equally to a vivid awareness of the time of writing and a complete indifference towards “research”:

I am this month one whole year older than I was this time twelve-month; and having got… almost into the middle of my fourth volume—and no farther than to my first day’s life—’tis demonstrative that I have three hundred and sixty-four days more life to write… so that instead of advancing, as a common writer, in my work… I am just thrown so many volumes back.

Shandy delights precisely in his own ability to keep writing with no new material at all.  Life does not interrupt Shandy’s writing; Shandy interrupts his own writing, congratulating himself on the inexhaustible nature of his new amusement (“I shall lead a fine life out of this self-same life of mine”), and on its capability to stimulate the “manufactures of paper.”  He is not battling an inescapable condition, but inventing a gratuitous obstacle, protracting his “Life” with digressions, deferrals and ruptures.  That Shandy himself  sees these obstacles as voluntary is borne out by his claim that they were “never applicable before to any one biographical writer since the creation of the world,” and would “never hold good to any other, until its final destruction” (198): engaged in willful play, he has no idea of having stumbled onto an inherent novelistic problem.  In similar fashion, Diderot gleefully protracts the story of Jacques’s loves:  “What is there to prevent me from marrying off the master and having him cuckolded?  Or sending Jacques off to the Indies?  And leading his master there?  And bringing them both back to France on the same vessel?  How easy it is to make up stories!”37  “Qu’il est facile de faire des contes”: for Cervantes or Boswell or Proust, it is not so easy.  The artificial hurdle becomes, in their works, an organic barrier.  Play becomes work—or at least a more arduous game, with a stringent new rule: the epistemological obligation to “make up” stories from something, some real material.  “Faire des contes” becomes, in this way, “faire des comptes”: each narrative element—each obstacle, separation and reunion—is a debit which must be balanced, in the credit column, with some experiential knowledge.  To introduce the central metaphor of this dissertation, I propose that this balance can be construed as such an account in the style of double-entry bookkeeping:

 

Debit | Credit
The time of research, lived experience | The time of writing
Material for a book | Unhappiness, knowledge, experience
Ginés’s crimes | Ginés’s terms in the galley
Marcel’s experiences; the dinner invitation | Marcel’s solitude; the writing notebook

If in this light we reconsider Boswell’s metaphor of reaping no more than he can sow—living no more than he can record—we see that it is essentially an economical one: if his experiences are too numerous to write about in the remaining time, Boswell will have misspent his life.

A major new talent.

The Australia Council’s recent Arts and Creative Industries report

The following article appeared in Crikey on February 4th. There’s been quite a bit of debate over at Crikey in the comments pages of this article, so head on over to see the discussion.

The plan to provoke a profound shake-up to the arts

In a week where so much has happened in the world, it’s not surprising a report from the Australia Council has not made the news. But in the rarefied atmosphere of arts policy, the release of a report entitled Arts and creative industries will make waves — the document, if followed to its logical conclusions, implies a profound shake-up to the current status quo. 

Authored by a team of QUT academics led by Professor Justin O’Connor, Arts and creative industries is a long, detailed and rigorous examination of the context, shape and setting of arts and cultural policy in Australia. It’s not quite the Henry Tax Review, but it’s certainly the most academically informed piece of research to be released by the Australia Council in a long time.

Beginning with a historical overview of 19th century culture and the genesis of “cultural policy” in postwar Britain, the report then examines each of the issues that has bedeviled the arts debate: the role of public subsidy, the growth of the industries that produce popular culture, the divide between high art and low art, and the emergence of the so-called “creative industries” in the 1990s. It’s as good a summary of the current state of play as you’re likely to find anywhere, including in the international academic literature.

O’Connor and his co-writers conclude that “the creative industries need not be —  indeed should not be — counter posed to cultural policy; they are a development of it” and that economic objectives (in other words, industry policy) should be a legitimate aim of cultural policy.

Taken as a whole, the argument has big implications for the way Australia currently pursues the regulation and funding of culture. For instance, it argues that “the ‘free market’ simply does not describe the tendencies of monopoly, agglomeration, cartels, restrictive practices, exploitation and unfair competition which mark the cultural industries” and that this in turn justifies greater regulation of cultural industries like the media. That’s a conclusion that few in the Productivity Commission or Treasury — let alone Kerry Stokes or James Packer — are likely to agree with.

The report also argues the divide between the high arts and popular culture has now largely disappeared, and that therefore “it is increasingly difficult for arts agencies to concern themselves only with direct subsidy and only with the non-commercial”. This is an argument which directly challenges the entire basis of the Australia Council’s funding model, in which opera and orchestral music receive 98% of the council’s music funding pie. No wonder the Australia Council’s CEO, Kathy Keele, writes in the foreword: “This study proposes to challenge many of our current conceptions, definitions, and even policies.”

Intriguingly, the report stops short of any concrete policy recommendations. Perhaps this is because some existed, but were excised from the report. Or perhaps it’s because any recommendations that genuinely flowed from this report would imply the break-up or radical overhaul of the Australia Council itself.

As Marcus Westbury this week observed in The Age: “While the Australia Council isn’t backward in promoting research, reports and good news stories that validate the status quo, there is not much precedent for it challenging it.”

That’s because the real guardian of the current funding model is not the Australia Council, but the small coterie of large performing arts companies and high-status impresarios that are its greatest beneficiaries. It won’t be long before a coalition of high arts types, from Richard Tognetti to Richard Mills, start clamouring to defend their privilege.

The worsening woes of the (recorded) music industry

From the Guardian’s inestimable Charles Arthur comes a must-read post on the gloomy future of the record industry. Because it’s so good, I’ve re-posted it here in full:

Bad news for the music industry. And it comes in threes.

First, Warner Music (which might be thinking of buying EMI from Citigroup?) reported its numbers for the fourth calendar quarter of 2010 (which is actually its fiscal first quarter). Oh dear. Total revenue ($789m) down 14% from 2009, down 12% on a constant currency basis (ie allowing for exchange rate fluctuations); digital revenue of $187m was 24% of total revenue (yay!), up 2% from last year (oooh), but sequentially down by 5%, or 7% on constant currency.

Operating income before depreciation and amortisation down 20% to $90m, from $112m a year ago. All of which led to a net loss of $18m, compared to a net loss of $17m a year before. In other words, things are still bad there. And it’s still got some heavy gearing: cash is $263m, long-term debt is $1.94bn. Warner might want to buy EMI, but it would put a hell of a strain on it. And the music business isn’t exactly looking like a place where you’d want a bank putting your money.
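As a quick sanity check, the quoted figures do hang together. The dollar amounts below come from the post; the derived numbers are rounded, and the implied prior-year figure is only approximate since the reported percentages are themselves rounded.

```python
# Cross-checking the quoted Warner Music figures (calendar Q4 2010).
total_revenue = 789.0    # $m, as reported
digital_revenue = 187.0  # $m, as reported

digital_share = digital_revenue / total_revenue
print(round(digital_share * 100))  # 24 -- matches "24% of total revenue"

# "down 14% from 2009" implies a prior-year quarter of roughly:
prior_year_revenue = total_revenue / (1 - 0.14)
print(round(prior_year_revenue))   # 917 ($m, approximate)
```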

Second, Fred Wilson, a venture capitalist who spends upwards of $60 per month – and by his estimate around $2,000 annually – on music and music subscriptions was forced to turn pirate in order to get hold of the new Streets album:

“[I] searched the Internet for the record. It was not even listed in iTunes or emusic. It was listed on Amazon US as an import that would be available on Feb 15th, but only in CD form. I’m not buying plastic just to rip the files and throw it out. Seeing as it was an import, I searched Amazon UK. And there I found the record in mp3 form for 4 pounds. It was going to be released on Feb 4th. I made a mental note to come back and get it when it was released. I got around to doing that today. I clicked on “buy with one click” and was greeted with this nonsense.”

Which was Amazon saying that because he wasn’t in the UK, he couldn’t buy it. Unable to find a VPN that would let him masquerade as a Briton, he took the next step:

“So reluctantly, I went to a bit torrent search. I found plenty of torrents for the record and quickly had the record in mp3 form. That took less than a minute compared to the 20+ minutes I wasted trying pretty hard to buy the record legally.

“This is fucked up. I want to pay for music. I value the content. But selling it to some people in some countries and not selling it to others is messed up. And selling it in CD only format is messed up. And posting the entire record on the web for streaming without making the content available for purchase is messed up.”

Well, you could argue that an inability to actually wait for the few weeks, perhaps a month, before he could hear the songs via a licensed US label was what’s messed up. Is there no other music in the world that he can hear first? Nobody else? True, it would make sense if contracts were signed so that everything happened at once. But the record industry is still rather like the book industry: because it generates most of its money from physical things, it organises itself around those things.

And finally to Mark Mulligan, music analyst at Forrester Research. Writing on the Midem blog, Mulligan points out that “Digital music is at an impasse” because “it has not achieved any of its three key objectives”, specifically:

1 – to offset the impact of declining CD sales
2 – to generate a format replacement cycle and
3 – to compete effectively with piracy.

Mulligan notes that

“the divergence between emerging consumer behaviour and legitimate music products is widening at an alarming rate. And consumers are voting with their feet: Forrester’s latest consumer data shows digital music activity adoption is flat across ALL activity types compared to 1 year previously (in fact the data shows a slight decline).”

The hope on the part of the music business that the iPod, and the iTunes Store, and then digital music stores of all sorts, would be its saviour has turned out to be false. As Mulligan notes,

“all music activity is niche, except for video. Just 10% of Europeans and 18% of US consumers pay for digital music. Only music video has more than 20% adoption (and only in Europe at that): YouTube is digital music’s killer app.”

(If you are, or know, any young teenagers you’ll know that this is absolutely true. YouTube, and of course in Europe also Spotify. The problem with Spotify being, in the eyes of the record companies, that it simply doesn’t pay them enough. Whereas in Spotify’s eyes the record companies have for too long demanded too much.)

Mulligan adds that the “transition generation” – the 16-24 year-olds – aren’t the future. Instead, the future lies with the 12-15 year olds.

“In fact, when you look closely at the activities where 16-24’s over-index [do more than other age cohorts], you can see that their activity coalesces around recreating analogue behaviours in a digital context. The 16-24’s started out in the analogue era. They are the transition generation with transitional behaviours.

“The 12-15 year olds, though, don’t have analog baggage. All they’ve known is digital. Online video and mobile are their killer apps. These Digital Natives see music as the pervasive soundtrack to their interactive, immersive, social environments. Ownership matters less. Place of origin matters less. But context and experience are everything. The Digital Natives are hugely disruptive, but their disruption needs harnessing.”

So why does this matter, asks Mulligan? Because

“current digital music product strategy is built around the transition generation with transition products to meet their transitional needs and expectations. Neither the 99 cent download and the 9.99 streaming subscription are the future. They are transition products. They were useful for bridging the gap between analogue and digital, to get us on the first step of the digital path, but now it’s time to start the journey in earnest. We’d be naïve to argue that we’re anything close to the end game yet. But the problem is that consumer demand has already outpaced product evolution, again.”

It’s time, he argues, for the music companies to deal with the world as it is, rather than as it used to be or as they liked it. Many in the business will tell you that that is exactly what they are doing; and nothing that Mulligan says in any way detracts from the (real) efforts that are being made by many record executives, who are not as clueless or uninformed as many would like to think. Instead, they’re frequently dealing with institutional and sector-based inertia that’s hard to get moving. Plus if Simon Cowell can discover a singer on a talent show and propel her to the top of the UK and US album charts (the first British act since the Beatles to achieve that), selling millions of CDs, well, is his strategy so wrong and everyone else’s somehow so right? Realities like that give even the most digital executive pause.

Back to Mulligan, who points out that

“the digital natives have only ever known a world with on-demand access based music experiences. …And the experience part is crucial. In a post-content-scarcity world where all content is available, experience is now everything. Experience IS the product. With the contagion of free infecting everything the content itself is no longer king. Experience now has the throne.”

So what’s needed? He thinks future music products need “SPARC” (no, not the Sun processor architecture). Digital music products, he says, must be:
• Social: put the crowd in the cloud
• Participative: make them interactive and immersive
• Accessible: ownership still matters but access matters more
• Relevant: ensure they co-exist and join the dots in the fragmented digital environment
• Connected: 174m Europeans have two or more connected devices. Music fans are connected and expect their music experiences to be also.

His parting shot: “Music products must harness disruption, that isn’t in question. What is, is whether they do so quickly enough to prevent another massive chunk of the marketplace disappearing for good?”

I think Warner may have answered that already, actually.

My commentary: after reading this, if you were a music industry executive you’d probably want to slash your wrists. But things may be both worse and better than they seem for the big music publishers. Here’s why.

Firstly, experience can be excluded, branded and sold. The predominant form of musical experience today is not the download but the live music festival or concert. Large multinationals are already moving aggressively into this space (think LiveNation) and we should expect this to continue. Secondly, experience can be a good as well as a service: that is, really well-produced and packaged vinyl can be an experience (although only a niche experience – but then again, all music is niche now anyway). Finally, certain aspects of the music market are not being disrupted in the same way as downloadable songs – for instance, royalty streams where the end customer is large enough to warrant legal pursuit by collection agencies.

On the other hand, in some ways, things really are as bad if not worse than the Forrester report suggests. Free music is not going away, and today’s teenagers really don’t expect to pay for it. That battle is over. So the future for recorded music may really be truly non-excludable and free. That’s a challenge that no-one in the industry seems willing to face up to, even those advocating streaming or subscription models. Finally, the recent history of the music industry suggests that music publishing executives – indeed, musicians themselves – struggle to understand the new paradigm, even twelve years after Napster.

Audience demographics for New South Wales museums and galleries

Museums and Galleries NSW, the peak body for that sector in the Australian state of New South Wales, has just released an extensive audience survey. Entitled Guess Who’s Going to the Gallery? A Strategic Audience Evaluation and Development Study, it’s a fascinating trove of demographic information about 32 museums and galleries across New South Wales.

Some of the top-line findings highlighted by the report’s authors include:

  • One persistent finding across galleries and across regions is the skew towards females and towards the over-55s in the audience base.
  • Around 2 in 3 visitors are female (rule of thumb) and nearly half (47%) of the audience is over 55. Both of these groups are over-represented in gallery audiences compared to the relevant ABS data.
  • Metro audiences are younger than regional audiences (41% vs 30% under 44 years old). However, the regional population is generally older than the metro population (37% vs 18% over 55 years).
  • It is also interesting that public gallery audiences skew away from the under-35s, whereas the age group in the middle (35-54, ie “the family age band”) is relatively proportionate to ABS data (around a third, or 32%). In other words, the increase in over-55s appears to be offset by the dip in under-35s.
  • Audiences in NSW public galleries are showing a skew towards tertiary degrees, particularly post-graduate degrees.
  • Interest in the types of events, public programs and exhibitions at the gallery varies primarily by demographic segment. In general, younger audience members (under 35) show different tastes to older audience members; in particular, younger audiences have a greater interest in live performance and music at the gallery (whereas older audiences are more interested in artist talks and workshops), and a greater interest in contemporary art and emerging art forms such as digital media arts.