Australian federal budget 2011: wrap-up of arts and cultural funding

The following article appeared in Crikey on Friday May 13th 2011. 

The 2011 federal budget contained some modest announcements for the arts and culture.

In the Arts portfolio, the government delivered on its 2010 election promise for $10 million over five years in new grants for artists to create work. The funding will support “up to 150 additional artistic works, presentations and fellowships over the next five years through the New Support for the Arts program.”

As well, $400,000 has been found for the federal government’s Contemporary Music Touring Program, a successful program which supports popular mid-level contemporary music acts to tour regional areas.

In broadcasting, $12.5 million has been provided for the proverbially penurious community radio sector, an increase of 25% for a critical area of broadcasting that generally receives very little government support.

There was also a package for the screen industry, with a headline figure of $66 million (as we will see, it is actually less than this). Much of the extra money goes to production subsidies through the tax system in the form of lower qualifying thresholds for the Screen Production Incentive. According to Screen Australia, the changes include:

  • Lowering the threshold for Producer Offset eligibility from $1 million to $500,000, for features, TV and online programs

  • Replacing the Producer Offset for low-budget docos with a Producer Equity payment

  • Converting the 65 episode cap to 65 commercial hours for TV

  • Exempting documentaries from the 20% above-the-line cap

  • A reduction in qualifying Australian production expenditure thresholds, and allowances for a broader range of expenses to be eligible for QAPE.

Some really good news is the restoration of the Australian Bureau of Statistics’ screen industry survey, which provided gold-standard data on the state of the industry and which hasn’t been performed since 2007-08 (shortly before the Rudd government slashed funding to the ABS in its first budget).

But how much new money for screen is really here? Go to Budget Paper 2 and you will find that the total extra funding is only $8 million. This is because, quoting from the budget papers, “these changes will be partly offset by $48 million in savings over four years from 2011-12 by removing the Goods and Services Tax (GST) amounts from [qualifying production expenditure] for the film tax offsets and increasing the minimum expenditure thresholds for documentaries to $500,000 in production (from the current threshold of $250,000).”

Money is also being clawed back from cultural agencies through the increased efficiency dividend. Rising to 1.5% in future years, the efficiency dividend hits smaller agencies much harder than big ones. And everything in the arts is small.

The efficiency dividend measures mean the Australia Council is being asked to save $3.3 million over the forward estimates, the Australian Film Television and Radio School will have to find $1 million, the National Film and Sound Archive $1.1 million, the National Gallery $1.4 million, the National Library $2.1 million, the National Museum $1.7 million, and Screen Australia $759,000. That’s more than $12 million in funding cuts for cultural agencies over the forward estimates.

If we look a little closer at the portfolio budget statements, for instance from the Australia Council, we can see the effects of the efficiency dividend in falling support for artists and cultural organisations. This year there will be “a decrease of approximately $2.5 million in forecast grants expenses compared with 2010-11.” Australia Council grants funding will be only 2% above 2010 levels in 2014-15. But CPI is forecast to run at 3% annually, meaning Australia Council support for artists and organisations will fall in real terms — by perhaps as much as 10%.
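
As a rough check on that figure, here is a minimal sketch of the arithmetic, assuming (as cited above) that grants are 2% higher in nominal terms in 2014-15 than in 2010-11 and that CPI compounds at 3% a year over the intervening four years.

```python
# Rough real-terms check, using the figures cited above: nominal grants
# 2% higher in 2014-15 than in 2010-11, CPI compounding at 3% a year.
nominal_growth = 1.02        # grants in 2014-15 relative to 2010-11
cpi_annual = 1.03            # assumed annual CPI
years = 4                    # 2010-11 through 2014-15

price_level = cpi_annual ** years            # ~1.126 after four years
real_value = nominal_growth / price_level    # ~0.906 of 2010-11 purchasing power

print(f"Real value in 2014-15: {real_value:.3f} of 2010-11 levels")
print(f"Real-terms fall: {(1 - real_value) * 100:.1f}%")  # roughly 9-10%
```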

In other words, the “New Funding for the Arts” money announced in this budget will be almost completely clawed back by the effects of static funding and the increased efficiency dividend on the Australia Council.

The one really big-ticket spending item in culture was of dubious policy value: the $376 million spend on helping pensioners and senior Australians to make the switch to digital TV. Opposition leader Tony Abbott has already pilloried the program as “Building the Entertainment Revolution”, while our own Bernard Keane and Glenn Dyer have pointed out “the political imperative of ensuring pensioners aren’t left without television as analog signals switch off”.

Personally, I’m sympathetic to the argument that television represents an important human service that allows older Australians to stay connected with the broader community. But the spending program should also be seen in the context of the broader budget, in which $211 million in spending is being “saved” from aged care itself. The government appears to be prioritising access to daytime television over places in aged-care facilities.

Money for art and culture is often spuriously disparaged by critics as diverting resources away from the critical services that governments provide. In reality, of course, the numbers are tiny compared to the investments annually in roads, schools and hospitals. But in this case it really does seem as though the owners of television networks are getting a subsidy at the expense of much-needed investment in aged care infrastructure.



Jenna Newman on the Google book settlement

In the 1st issue for 2011 of the journal Scholarly and Research Communication comes a masterful exploration of the cultural and legal issues surrounding the Google book settlement by Jenna Newman. At 75 pages, this monograph-length essay is probably the most comprehensive and certainly the most current exploration of the issues underlying this giant experiment in digital publishing.

It’s not really possible to sum up the entire essay, so I’ll just cut to the chase and quote from her conclusion, which firstly establishes in extraordinary detail just how good the deal is for Google:

If the settlement is approved, Google can congratulate itself on a particularly excellent deal. It avoids years of uncertainty, not to mention ongoing legal fees, in litigation. It avoids prohibitive transaction costs by not having to clear rights individually for the works it has scanned already and all the works covered by the settlement and yet unscanned. It will receive a blanket licence to use a broad swath of copyrighted works, and it will enjoy an exclusive position, both as a market leader and with legal peace of mind, in the realm of digital rights: its private licence goes much further than current copyright legislation, particularly with respect to orphan works, for which rights are currently unobtainable in any market. Low transaction costs and legal certainty are key requirements for any mass digitization or digital archiving project (McCausland, 2009). The settlement offers both, to Google and Google alone. It will be years ahead of any potential competitors digitizing print works and may easily end up with an effective monopoly and a leading stake in the emerging markets for digital books. And all this costs Google only U.S.$125 million—a mere 0.53% of its gross revenue, or 1.92% of its net income, for 2009 alone (Google Inc., 2010b)

Newman suggests that the deal is far more equivocal for publishers and authors, but that given the other options on the table (including the risk of a music-industry-style failure to establish a viable digital publishing platform until after piracy has eroded much of the value of the market), it may represent the “best deal available.”

But the real implications are for copyright law and communications policy:

The settlement may serve publishers’ and authors’ individual or immediate interests even as it erodes their collective and long-term ones. The public, too, has a significant vested interest in the subjects of the settlement—the books themselves, repositories to centuries of knowledge and creativity—as well as the legal and cultural environment the settlement endorses. A detailed account of the settlement’s economic and cultural costs and benefits is instructive, but more importantly the settlement highlights the structural and technological deficiencies of existing copyright law. Long copyright terms and the presumption of total rights protection have created a copyright regime that privileges the potential for commercial exploitation regardless of whether that exploitation is feasible or even desired by the creators themselves. This regime is also particularly ill equipped to recognize digital possibilities. Whatever happens to this settlement, such tensions continue to strain copyright’s rules.

A number of conditions on approval could address criticisms of the settlement, but perhaps the best way to ensure Google, publishers, and authors are all treated fairly is to pursue copyright reform, not private contracts, to address the legislative problems that the settlement tries to engage. Legislative changes with respect to intellectual property rights have been slow to reflect everyday technological realities. The existence of the settlement, and much of its reception, demonstrates that private interests and public appetites are eager to move beyond the limits of the current regulations. Copyright reform will be fraught with challenges of its own, but the existing legal framework—in Canada as in the U.S.—is increasingly inadequate for accommodating common and emerging practices and capabilities: copyright law has swung out of balance. The settlement may serve as an early test bed for certain possibilities, including digital distribution and access, and the imposition of limited formalities on rights-holders. However, as a private contract, it is an insufficient guide for legislative development. The trouble with copyright does not affect Google alone. The public interest demands more broadly applicable solutions, and these will be achieved—eventually, and possibly with great difficulty—through copyright legislation. We may get copyright reform wrong, as arguably we have done in the past, but that fear should be allayed if we also recall that we have the power to revise our legislative interventions until we get them right.

 

Why AFACT’s piracy statistics are junk

Yesterday, the Australian Federation Against Copyright Theft (let’s call them AFACT or perhaps ‘Big Content’ for short) lost their appeal in the long-running and important copyright infringement suit against Australian ISP iiNet. As usual, some of the best commentary comes from Stilgherrian (who really does need a second name, don’t you think?):

If you came in after intermission, you’ll pick up the plot quick enough. AFACT said iiNet’s customers were illegally copying movies, which they were, but iiNet hadn’t acted on AFACT’s infringement notices to stop them. AFACT reckoned that made iiNet guilty of “authorising” the copyright infringement, as the legal jargon goes. iiNet disagreed, refusing to act on what they saw as mere allegations. AFACT sued.

In the Federal Court a year ago, Justice Dennis Cowdroy found comprehensively in favour of iiNet. It was a slapdown for AFACT. AFACT appealed, and yesterday lost. Headlines with inevitable sporting metaphors described it as a two-nil win for iiNet.

But read the full decision and things aren’t so clear-cut.

One of the three appeals judges was in favour of AFACT’s appeal being dismissed. Another was also in favour of dismissal, but reasoned things differently from Justice Cowdroy’s original ruling. But the third judge, Justice Jayne Jagot, supported the appeal, disagreeing with Justice Cowdroy’s reasoning on the two core elements — whether iiNet authorised the infringements and whether, even if they had so authorised them, they were then protected by the safe harbour provisions of the Copyright Act.

There’s plenty of meat for an appeal to the High Court, and that’s exactly where this will end up going. Wake me when we get there.

As I argued today, also in Crikey, it’s ironic that Big Content seems to be about the only business lobby group in the country arguing for more regulation and red tape.

But the copyright case also comes in the wake of an interesting little micro-controversy about piracy statistics, released by AFACT late last week. Aided by an economics consultancy and a market research firm, AFACT released an impressive-seeming report that claimed that movie piracy was costing Australia $1.4 billion and 6,100 jobs a year.

Electronic Frontiers Australia made some pretty valid criticisms of the research, including the following:

1. The assumption that 45% of downloads equal lost sales is unproven and insufficient evidence is provided to support it. The survey method cited is better than assuming 100% of downloads are lost sales, but there is better analysis in other studies – for example this piece by Lawrence Lessig. If the study was correct, sales of DVDs and attendance at cinemas would be much more reduced than the reported industry figures. In fact, the movie industry is making record profits.

2. It can’t be ignored that downloads have an advertising effect both on the product downloaded and future releases. To the extent sales may be lost, these must be offset against other gains from advertising.

3. Gross revenue is not the relevant metric, due to variables such as investment in capital, distribution and costs of sales. Many of the movies downloaded may not have been available to view or buy in Australia. Profit is the metric of importance, but this is never studied.

4. Flow-on effects to other industries are wholly speculative, and lost tax on profits assumes the entities pay Australian company tax on sales pro-rata to revenue, which is not intuitive or evidenced. It also assumes that money not spent on movies is lost to the economy, instead of helping to create jobs in other sectors.

5. Peer to peer file sharing is merely the latest in a sequence of technologies since the 19th century which have been claimed to be the ruin of the creative arts. See chapter 15 “Piracy” by Adrian Johns (University of Chicago Press 2009) – the copyright owners said the same thing about copies of sheet music, tape recorders, every iteration of personal recording system and indeed public radio. However, “home piracy” acts not only as a loss to industry but also as a boon to distribution, bypassing censorship and limitations on sales by official outlets.

6. The report suffers, as have other industry-funded studies, from “GIGO”. With an assumption that “downloads = losses” unproven, all conclusions estimating the size of the loss are equally unproven. What if a vibrant sharing culture increases total sales for media respected as quality by consumers, but reduces sales of hyped media? (Research has shown that the biggest downloaders in fact spend more on entertainment than non-downloaders.)

7. The call-to-action of this report is obviously to “crack down on piracy”, shifting the cost of file-sharing from the industry to the taxpayer via increased law-enforcement. No industry, let alone the foreign-dominated entertainment industry, deserves a free ride for its business model. If instead, the industry noted that the report says 55% of downloads created a market for sales, much of which is unsatisfied due to current restrictive trade practices, then its future profitability would be in its own hands.

8. Repeated studies have demonstrated that the entertainment industry vies for money and commitment of time with all other forms of entertainment. The Internet, computer games and mobile telecommunication applications take “eyeballs and dollars” away from DVD and CD sales, but also sports arenas, sales of board games and printed works. Magazines are also suffering from a reduced value proposition with the Internet, and some forms of entertainment and some businesses in the industry will no doubt find it difficult to remain vibrant. Change is consumer-driven, and it’s futile for the industry to try to hold fast to a business model and methods of content distribution which are dying with or without fierce law enforcement of copyrights.

Unsurprisingly, AFACT have responded, attacking EFA’s arguments.

Notably, AFACT replies that:

“The study does not assume that ‘downloads = losses’. As stated above, some 32 per cent of respondents said that they viewed an authorised version of a movie after watching the pirated version. As a result, 32 per cent of ‘all pirate views’ were removed from the ‘lost revenue’ calculations and were treated as ‘sampling’.”

This is a valid argument. AFACT has indeed removed these later viewings from their lost revenue calculations. But, as I’ll explore below, this doesn’t mean that AFACT’s methodology is sound.
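
To see why, it helps to sketch the kind of lost-revenue arithmetic in dispute. In the sketch below, the 32% “sampling” share comes from AFACT’s reply, and the 45% substitution rate is the assumption EFA queries; the number of pirate views and the average price are hypothetical, chosen purely for illustration. The point is that the headline loss scales directly with whatever substitution rate is assumed.

```python
# Hedged sketch of the lost-revenue arithmetic in dispute. The 32% sampling
# share is from AFACT's reply and 45% is the substitution assumption EFA
# queries; the views and price figures below are hypothetical illustrations.
pirate_views = 1_000_000      # hypothetical annual pirated views
sampling_share = 0.32         # views later followed by an authorised viewing
avg_price = 15.0              # hypothetical average legitimate price (A$)

def lost_revenue(substitution_rate: float) -> float:
    """Estimated 'lost sales' under a given substitution-rate assumption."""
    relevant_views = pirate_views * (1 - sampling_share)
    return relevant_views * substitution_rate * avg_price

for rate in (0.10, 0.25, 0.45):
    print(f"substitution {rate:.0%}: A${lost_revenue(rate):,.0f}")
```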

AFACT’s other replies are far less persuasive. Take this line:

“It should be clearly noted that in almost all of these cases government or technology provided a barrier to prevent continued rampant infringement. In the case of public radio, legislation provided statutory copyright royalties. VHS and cassette tape may have been efficient technologies for recording, but in terms of cost and quality (analog degrades with time) they proved not to be efficient for distribution at that time. Laws were also designed to prevent mass distribution of pirated VHS tapes. Solutions, whether legislative, technological or otherwise are currently required to prevent or deter the unfettered digital distribution of pirated versions of copyrighted content.”

Not to put too fine a point on it, this is a rubbish argument. Statutory copyright royalties for broadcasters were not barriers to listeners – they were income streams to publishers. And, in fact, as EFA point out, radio proved to be such a powerful marketing tool for music labels that record companies regularly resorted to payola and other measures to get their songs on high-rating radio stations. This argument is a classic tautology: because AFACT believe that regulatory barriers are necessary to prevent infringement, they argue that the reason previous technologies didn’t lead to “rampant infringement” was that they were strictly regulated. You don’t need a degree in logic to spot the flaw in this argument.

So who’s right?

On the whole, EFA has the better of the exchange. Indeed, there are plenty more holes you can pick in AFACT’s methodology if you wish. To start with, let’s examine their laughable “Annex 1” in the full report. This purports to explain how ABS input-output tables are used to generate a final figure for total piracy impact in terms of lost sales and job losses.

I’d like to say I carefully checked their methodology for its econometric accuracy. Unfortunately, I can’t – because the authors at Oxford Economics and Ipsos don’t publish their equations; nor do they publish their raw data.

Just as an exercise, I downloaded the ABS input-output tables and attempted to match the ABS data to the AFACT report. It’s impossible. The data tables in the AFACT report which might allow that kind of scrutiny are missing.

What Annex 1 does tell us is that Oxford Economics and Ipsos have made all sorts of behind-the-scenes calculations to do with the exact value of the multipliers they use and the precise allocation of various ABS industry data to various categories of their assumptions. But they don’t tell us how these figures were arrived at. To get a flavour of the opacity of the modelling, here’s their full explanation of two of the multipliers they use:

Type II multipliers of 2.5 (Gross Output) and 1.1 (GDP) were estimated. This covers activity in the Australian motion picture exhibition, production and distribution industries as well as TV VOD, internet VOD, downloads of motion pictures and the retailing of these motion pictures

There is no further explanation of how the numbers of 2.5 and 1.1 were “estimated” and no equation which shows us what they multiply. Hence, it is literally impossible to verify, cross-check or otherwise scrutinise these figures. Indeed, the full report contains no true methods section. In other words, the academic credibility of these figures should be zero.
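
To make the objection concrete, here is a minimal sketch of how a Type II output multiplier is conventionally applied. The 2.5 multiplier is the figure quoted above; the direct-loss input is entirely hypothetical, because the report publishes neither the equations nor the data that would let anyone reproduce its intermediate steps.

```python
# Minimal sketch of a Type II output-multiplier calculation (illustrative only).
# The 2.5 multiplier is the figure quoted in Annex 1; the direct loss is a
# made-up input, since the report publishes neither its equations nor its data.
type_ii_output_multiplier = 2.5       # quoted gross-output multiplier
direct_loss = 400_000_000             # hypothetical direct lost output (A$)

# Total impact = direct loss x multiplier (direct + indirect + induced effects).
total_output_impact = direct_loss * type_ii_output_multiplier
flow_on_impact = total_output_impact - direct_loss

print(f"Total output impact:          A${total_output_impact:,.0f}")
print(f"Flow-on (indirect + induced): A${flow_on_impact:,.0f}")
```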

This rubbish is just another example of how lobby groups use consultants-for-hire to create vocal scare campaigns based on fictitious figures. It’s junk modelling, ordered up for the express purpose of industry rent-seeking.

Crikey’s Bernard Keane explained it helpfully for us in relation to climate lobbying in 2010:

This is what you do:

  1. Commission a report from one of the many economics consultancies that have broken out like a plague of boils in the past decade. This should feature modelling demonstrating the near-apocalyptic consequences of even minor reform. Even if your industry is growing strongly, you should refer to any lower rates of future growth as costing X thousands of jobs, without letting on that those jobs don’t actually exist yet, and might never exist due to a variety of other factors.
  2. Dress up the report as “independent”, slap a media-friendly press release on the top and circulate it to journalists before release, with the offer of an interview of the relevant industry or company head.
  3. Hire a well-connected lobbyist to press your case in Canberra.  When the stakes are high, commission some polling to demonstrate that a crucial number of voters in crucial marginal seats are ready to change their vote on this very issue.

Cory Doctorow rebuts Evgeny Morozov

We’ve all heard about (though I’ve not yet read) The Net Delusion.

Now, a leading thinker/practitioner in the field of new media reviews Morozov’s book, rebutting his thesis:

At its core, there is some very smart stuff indeed in The Net Delusion. Morozov is absolutely correct when he forcefully points out that technology isn’t necessarily good for freedom – that it can be used as readily to enslave, surveil, and punish as it can to evade, liberate and share.

Unfortunately, this message is buried amid a scattered, loosely argued series of attacks on a nebulous “cyber-utopian” movement, whose views are stated in the most general of terms, often in the form of quotes from CNN and other news agencies who are putatively summing up some notional cyber-utopian consensus. In his zeal to discredit this ideology (whatever it is), Morozov throws whatever he’s got handy at anyone he can find who supports the idea of technology as a liberator, no matter how weak or silly his ammunition.

Read the rest in The Guardian.

Also worth a look is Clay Shirky’s Foreign Affairs piece on the political power of social media (firewalled).

Wikileaks, information and democracy

The scene outside Julian Assange's extradition hearing at Westminster Magistrates Court, London, December 7th 2010. Image: AP Photo/Kirsty Wigglesworth

Like most of the rest of the world, I’ve been fascinated by the recent developments in the world of new media.

“New media” is a much-abused phrase, but in the case of Wikileaks and Twitter, the phrase is literally accurate. Wikileaks and Twitter really are new mediums: they are less than five years old.

A wiki and a social network like Twitter are both ultimately also platforms that rely on older and more established media and communications infrastructure: the internet itself, including the servers, routers and undersea data cables that criss-cross the world. And because of that, they can take advantage of the unique benefits bestowed by the distributed architecture created by Leonard Kleinrock, Vint Cerf and the other architects of the ARPANET – ironically, a defence project created to ensure researchers had access to significant national computing resources (and not to create redundancy in the event of a Soviet nuclear attack). The internet, in other words, began life as a communications and data-sharing technology, and the open network architecture of that initial design philosophy continues to affect the way the internet works today.

This week, courtesy of Wikileaks, we learnt a lot more about the sinews of political and financial power that link the modern internet to the security and executive agencies of the contemporary nation-state. The content of these lessons has much to teach us about the state of our democratic societies.

Under sustained pressure from US politicians, several important aspects of Wikileaks’ infrastructure were shut down by the corporations that manage them. First, Amazon shut down Wikileaks’ servers. Then PayPal stopped processing online donations to Wikileaks from supporters.

Interestingly, Wikileaks is not really a “wiki”, in the sense that Wikipedia is: it can’t be collaboratively edited and it is very far from open access.

Nor are its philosophies necessarily original: they are in fact an amalgam of the Enlightenment ideas of Locke, Mill and Paine, and the 1980s and 90s techno-millenarianism of writers such as John Perry Barlow. But in its technological sophistication, its intent and most importantly its impact, Wikileaks is a recognisably new phenomenon. There have been many attempts by internet companies and media organisations to encourage whistleblowers and apply the ideas of scrutiny to monitor governments. But none have had the political impact that Wikileaks has achieved in just a few short years. Wikileaks is new — not because it is on the internet, but because it is making powerful elites in the government and media genuinely uneasy.

Wikileaks itself is a web publisher that relies on clever encryption and on distributed servers and publishing platforms, all of them built on the older internet infrastructure and open network architecture described above.

Wikileaks is certainly more than merely a very clever whistle-blower protection and publication system. While the encryption and other information security aspects of the site are impressive, perhaps more important is that Wikileaks allows disgruntled would-be leakers to turn the power of modern information technology against the nation-states and large corporations that now rely on it.

In an ironic turn that Michel Foucault would surely have applauded, the sheer amount of information now hiding behind government and corporate firewalls makes that information increasingly vulnerable to disclosure. The current cache of Wikileaks cables being released, for instance, was distributed on the US government’s SIPRNET, which stands for Secret Internet Protocol Router Network. In this context, however, “secret” is something of a euphemism. As Kevin Rudd himself has pointed out, more than two million US officials have access to SIPRNET, and more than 180 US agencies were signed up to it by 2005. No wonder much of this content eventually made its way into the public domain. The wonder is that it wasn’t leaked sooner.

Some of the sharpest thinking about what Wikileaks means has come from the intelligence community itself. US security think-tank Stratfor, for instance, points out that there is a “culture of classification” rampant inside the US government, in which even relatively mundane documents are classified under Executive Order 13526 as “confidential” or “secret”. Consequently, according to Stratfor’s Scott Stewart, “this culture tends to create so much classified material that stays classified for so long that it becomes very difficult for government employees and security managers to determine what is really sensitive and what truly needs to be protected.”

Information probably doesn’t “want to be free”, as the activist and technologist Stewart Brand famously announced, but there are plenty of people who would like it to be. Some of them work in the US military, including Private First Class Bradley Manning.

The content of the Wikileaks releases so far has been devastating, not for what it says, but because it has cut through the lies, disinformation and media spin on which modern democracies increasingly depend. Many citizens will not be surprised by the dark truths that Wikileaks reveals, but they will scarcely be energised to a new optimism about their governments. That US forces violate rules of engagement to gun down innocent civilians, or that the war in Afghanistan is going badly, or that the US State Department actively spies on the UN, or that the Saudis want Iran’s nuclear facilities destroyed: none of these revelations are particularly surprising. But they tear away the veil of deceit behind which politicians and other democratic officials routinely operate in the course of their daily affairs. In the face of truth, deniability is implausible.

Much of what has been written about Wikileaks has missed this fundamental point. It is interesting that Assange himself justifies the cable releases by pointing to the lies of governments to their own people in justifying wars, writing, “there is nothing more wrong than a government lying to its people about [just] wars, then asking these same citizens to put their lives and their taxes on the line for those lies.”

As The Guardian’s John Naughton has pointed out,  there is a delicious irony to the relatively indiscriminate way in which Wikileaks has attacked the sacred cows of the left and the right. It was Wikileaks, remember, that published the hacked emails of UK climate researchers — leaks which commentators and politicians on the right were happy to seize upon as incontrovertible evidence of a giant cover-up in climate science.

Now that Wikileaks has turned the blowtorch on the cherished organs of US national security, those same right wing commentators are calling for punitive action to shut down the organisation.

Many on the left have been equally discomforted, as the confused and savage reaction of many in the Australian Labor Party demonstrates. As Simon Longstaff argued yesterday on The Drum, “it would seem incumbent on those who criticise Wikileaks to renounce the use of leaks in general”.

As with every revolution, Wikileaks has also forced politicians, corporations and officials to make snap decisions about where they stand — and with whom they stand. In the case of US internet firms like Amazon and PayPal, that decision was to side quickly and decisively with the US government. Further down in his article, Naughton makes the point that:

the attack of WikiLeaks also ought to be a wake-up call for anyone who has rosy fantasies about whose side cloud computing providers are on … you should not put your faith in cloud computing – one day it will rain on your parade.

 

The other really penetrating account of Wikileaks comes from European media theorists Geert Lovink and Patrice Riemens. In “Twelve Theses on Wikileaks”, they make a number of telling observations — including that some of the most uncomfortable Wikileaks revelations involve the rapidly declining potency of the media itself. They write:

The steady decline of investigative journalism caused by diminishing funding is an undeniable fact. Journalism these days amounts to little more than outsourced PR remixing. The continuous acceleration and over-crowding of the so-called attention economy ensures there is no longer enough room for complicated stories. The corporate owners of mass circulation media are increasingly disinclined to see the workings and the politics of the global neoliberal economy discussed at length. The shift from information to infotainment has been embraced by journalists themselves, making it difficult to publish complex stories. WikiLeaks enters this state of affairs as an outsider, enveloped by the steamy ambiance of “citizen journalism”, DIY news reporting in the blogosphere and even faster social media like Twitter.

 

Or, as Assange told the Sydney Morning Herald back in June, “how is it that a team of five people has managed to release to the public more suppressed information, at that level, than the rest of the world press combined? It’s disgraceful.”

Instead, of course, much of the media coverage has concentrated on Julian Assange’s sensational personal conduct, and the sexual assault allegations levelled against him by two Swedish women.

This is a different — although obviously connected — issue. It should be possible to distinguish the Wikileaks website and organisation from the personal conduct of Julian Assange. If allegations presented to the British court by Swedish authorities are true — allegations which have yet to be tested — Assange has committed a crime.

It is frankly disturbing to see many on the left who one would expect to see defending the rights of women, like Naomi Wolf (Naomi Wolf!), make disparaging remarks about the seriousness of these allegations. One of the allegations is of rape under Swedish law: a non-consensual sex act in which Assange allegedly forced the claimant’s legs open and “[used] his body weight to hold [her] down in a sexual manner.” The facts of this matter can and should be established in a free and fair judicial process. But as a matter of principle, no should still mean no.

Ultimately, the importance of Wikileaks may be that it is beginning to reveal the contours of a new sort of social contract between citizens and their rulers: a type of relationship that historian and academic John Keane has called “monitory democracy.” For Keane, “monitory democracy is a new historical type of democracy, a variety of ‘post-Westminster’ politics defined by the rapid growth of many different kinds of extra-parliamentary, power-scrutinising mechanisms.”

Monitory democracy, in which non-government and non-media organisations start to exert meaningful and impactful scrutiny of the state and the corporation, holds the promise for a more balanced informational relationship between ordinary citizens and the power elites. But it also implies some disturbing corollaries.

There is a reason conservative commentators are likening Wikileaks to a kind of informational terrorist group: it uses its military-grade encryption tools for the political goal of destabilising governments and states. In this sense, Wikileaks and especially Anonymous, the hacking group suspected of attacking Amazon, Visa and other sites in retaliation for the Wikileaks crackdown, are “non-state actors” — the term given by security and international relations analysts to terrorist groups like Al Qaeda.

We aren’t really at the beginning of the first global “information war”, but there is a grain of truth to the claims that the willingness of hackers and cyber-activists to attack web infrastructure represents something new and important. And in this analysis, the flip-side of monitory democracy is informational insurrection.

 

Subsidising paid digital content: cultural policy, French style

Ars Technica notes that:

France has decided to try something… novel. The country will attempt to prop up the digital music industry by subsidizing legal music consumption by young people. Under the initiative, citizens between 12 and 25 years old will be able to purchase a “carte musique”—a prepaid card  usable on subscription-based music websites. The card will come with €50 worth of credit, but customers only have to pay €25. The rest will be paid by the French government.

It is interesting to see a national government try a subsidy where out-and-out regulation has failed. But will it work?

The Liberal Party’s arts policy

Well, it’s four days after the Australian election and we still don’t know who will form government.

Over at the ABC’s website, I’ve published my general thoughts about the election wash-up, which I won’t repeat here.

But, given the Liberal Party remains quite likely to form the next Australian government, it might be worth having a look at their arts policy. (I had a look at Labor’s arts policy last week). This post was written in the last week of the campaign, but I didn’t quite get around to posting it.

The Liberal Party of Australia does, finally, have an arts policy. In 2007, it didn’t actually release one, although George Brandis did issue a ringing defence of the Howard Government’s arts policies in a speech to the National Press Club.

The Liberal Party’s arts policy was launched in the last week of the campaign at Jupiters Casino on the Gold Coast by Stephen Ciobo. It’s narrow and targeted at the screen industry and regional arts, but that doesn’t mean it’s not a significant engagement with those areas.

Regional arts organisations have long been the poor cousins of the big-city cultural institutions (which, it must be said, got a very good run from the Howard Government), and cater to audiences that are desperately under-served for the kinds of cultural experiences that those who live in inner-city Sydney or Melbourne enjoy. The policy promises $10 million for regional galleries to buy Australian work, plus $3.85 million for the Regional Art Fund, and nearly $10 million for regional touring and exhibitions.

The Screen Producers Association is delighted with the screen announcements, including a $60 million “temporary” production subsidy that will top up existing commercial investment in mid-tier feature films with budgets in the $7-30 million range. That doesn’t sound like big money in Hollywood terms, but it effectively cuts out most Australian indie features, which have budgets in the $1 to $5 million range. Television apparently misses out.

Ciobo claims that the loans “will be recouped by the Commonwealth in line with the industry’s standard recoupment schedule,” but the dismal history of Australian government film investment suggests that most of this money will never return to Treasury coffers. Similarly, Ciobo’s claims that the new production subsidy will create 18 features and 1100 jobs must be taken with a grain of salt – the figures come from the Screen Producers Association itself.

The proposal to extend HECS-style loans to classical musicians is innovative and will help aspiring instrumentalists acquire the very expensive tools of their trade. Again, however, questions must be asked as to why students of mainly classical music institutions get to benefit from such a scheme, while the practitioners of artforms that enjoy far more support in the free market, such as pop and rock, miss out. Much like the screen subsidy, it sounds a bit like picking winners to me.

None of the arts lobbyists have mentioned it, but the real difference between the major parties’ arts policies may well be Australia’s mooted National Broadband Network. As Marcus Westbury and I have been trying to point out for some time now, the NBN is in fact a significant piece of cultural infrastructure. As readers here will be well aware, cultural content is already one of the most significant components of internet traffic, and this is only likely to grow as bandwidth allows Australians to access much faster video streaming, game playing and other forms of cultural expression and interaction. Labor’s NBN is the largest investment in cultural infrastructure ever announced by an Australian government.

All in all, the Liberals have at least got on the scoreboard in arts and cultural policy, even if they trail by some margin the party with the most comprehensive policy in this sphere: The Greens.

Jock Given

It’s time for a bit of fan post about Jock Given, Swinburne’s Professor of Media and Communications.

Why a fan post? Maybe it’s his recent in-depth dissection of the Australian Government’s implementation plan for the National Broadband Network. Maybe it’s his long review essay, also in Inside Story, about the future of books and print. Maybe it’s his fine monograph of 2003, Turning Off the Television, about the history and future of Australian broadcasting and communications policy.

In fact, any way you slice it, Given’s work has become central to this field. He’s got that rare combination of incisive analysis and clear, witty prose.

Take, for example, his discussion of the National Broadband Network, one of the best short introductions to this bewilderingly complex topic you’re likely to find:

WHAT McKinsey and KPMG have delivered is the most substantial public analysis of an Australian communications infrastructure project since the domestic satellite system in the 1980s. This is a major benefit, though not necessarily a good omen. AUSSAT racked up $800 million in debt within a few years. Voluminous public documentation doesn’t always lead to great decisions.

Indeed, in Australian communications, the size of the study is generally indirectly proportional to its influence. The bulky Davidson Inquiry recommending competition in telecommunications and the multi-volume Broadcasting Tribunal inquiry recommending the introduction of cable TV, both in the early 1980s, achieved close to zero. The Productivity Commission’s year-long inquiry into broadcasting in 2000 was largely ignored. But Kim Beazley’s few-page statement about telecommunications competition in 1990 blew the industry apart. By this standard, the two-and-a-half-page media release announcing the NBN in April 2009 was bound to change the world.

The McKinsey/KPMG study is testimony to the sea-change in telecommunications policy in the last two and a half years. For twenty years, both sides of politics have been getting the government out of the telecommunications business, first by allowing private competitors to take on the state-owned monopoly that ran the country’s telecoms for ninety years, then selling down the state’s ownership of it. When new mobile and fixed-line networks were built in the 1990s and 2000s, communications ministers didn’t pore over technology choices, costs, revenues, capital allocation and geographic priorities the way Postmasters-General used to do. Parliament had decided that governments made lousy decisions about those kinds of things.

At least, they weren’t supposed to be poring over these things the way Postmasters-General used to do. The truth was they still did quite a lot of it. The Coalition government crawled all over Telstra’s timetable for shutting down its analogue mobile phone network and applied immense pressure on its plans to build and later close a CDMA network. In his book Wired Brown Land?, Paul Fletcher, chief of staff to long-term Howard government communications minister Richard Alston and now the Liberal member for Bradfield on Sydney’s north shore, says Ziggy Switkowski was not even on the shortlist of candidates for CEO until Alston insisted he be there. This was at a time when Howard and Alston were pushing their reluctant backbench to support privatisation. The government, they said, had no business controlling a telecommunications company.

But out in the new marketplace, the cable TV and eventually broadband network built in the mid 1990s by the new wholly private telco, Optus, didn’t work very well. The still-public Telstra proved more nimble and ruthless than some expected, building a similar network down many of the same streets. Both companies had to write off billions of dollars. It seemed telcos in commercial markets, even privately owned ones, could make lousy decisions too. Optus’s subsequent caution about investment in fixed-line networks and the curiously widespread, renewed enthusiasm for monopoly is the deep legacy of that time.

The government’s response has been to get back to controlling a telecommunications company. It is not the vertically integrated Telstra, it’s the wholesale-only NBN Co. McKinsey/KPMG’s Implementation Study contains a set of recommendations that are not yet government policy, but it tells us a great deal about this new, old world.

We have a good idea – the best yet – about how much it might cost. We have lots of data and discussion about what it might earn in revenue. We have an argument about “viability,” but this is really an argument about whether the now fairly well-articulated financial returns that can be expected from the project are justified by the economic and social benefits that might not be captured by the financial modelling.

This is where faith and politics take over.

Spinning the media: important new research from the Australian Centre for Independent Journalism

Public relations content across 10 Australian newspapers, Sept 7-11 2009. Source: Crikey/Australian Centre for Independent Journalism

Crikey and the Australian Centre for Independent Journalism have today jointly released the findings of their important new research project, entitled Spinning the Media. It finds that “over half your news is spin”, that is, written directly or indirectly from press releases and driven in some part by the PR industry.

This impressive research project examined 2203 articles in 10 major daily newspapers during one five-day week  in September 2009. (Of course, the irony is that this story is also PR-driven, in the sense that I am reporting on the findings of a research project which has been announced through a story on a news website – which shows you how difficult it can be to source genuinely novel news for any journalist).

The key finding for me was this one:

Of 2203 articles, more than 500 or 24% had no significant extra perspective, source or content added by reporters.

That figure is truly scary – indicating that approximately one quarter of daily newspapers’ news content is simply regurgitated media releases from government or corporate sources.
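
As a quick sanity check, here is a minimal sketch using only the two figures reported above (2,203 articles sampled, 24% with nothing significant added).

```python
# Quick consistency check on the reported study figures.
total_articles = 2203
share_no_added = 0.24   # reported share with no significant added content

print(round(total_articles * share_no_added))  # ~529, i.e. "more than 500"
```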

I’ve reproduced the key graphs from the study over the fold.

More commentary on the iiNet case

In New Matilda, Raena Lea-Shannon has an excellent summary of the arguments and judgement in the movie studios vs. iiNet case over internet piracy:

In order to prove the studios’ case, it was necessary to show firstly that there had been infringement by users, and secondly that iiNet’s failure to do anything about the notices amounted to authorising the illegal conduct of the users. While the Court was satisfied that the detailed forensic evidence showed that a number of users of iiNet’s service had infringed copyright regarding a number of the studio’s films, it found that the notices were by no means conclusive in all circumstances or easy to decipher:
“Regardless of the actual quality of the evidence gathering of DtecNet, copyright infringement is not a straight ‘yes’ or ‘no’ question. The Court has had to examine a very significant quantity of technical and legal detail over dozens of pages in this judgement in order to determine whether iiNet users, and how often iiNet users, infringe copyright by use of the BitTorrent system.”
The Court also remarked that these notices were not verified as an affidavit or a statutory declaration would be.[…]
In making this argument, one of the main cases upon which the studios relied was the Kazaa case, in which it was found that the operators of the Kazaa file sharing software had a vested interest in illegal downloading, and further, that despite the fact that not all activity using Kazaa was illegal, it was predominantly illegal file sharing.
In that case, the Court also took into consideration the exhortations made by Sharman Networks, the operators of Kazaa, to its users to join the “revolution”, that is, the illegal file sharing revolution. The studios argued that iiNet was no better than Sharman Networks, and that the entire internet was as much a hotbed of piracy as was Kazaa’s file sharing network; iiNet was letting its users get away with daylight robbery.
Justice Cowdroy considered the key cases governing this idea of “authorisation”, and while acknowledging some differences in their reasoning he drew from them what he considered to be the underlying principle of authorisation of infringement. Authorisation, he concluded, requires the authoriser to provide the means of infringement.
In another key precedent, the 1975 Moorhouse case,  the University of NSW Library was found to have authorised an infringement by providing the photocopiers used to do the infringing; in the Kazaa case Sharman Networks provided the file sharing software.
However, Justice Cowdroy found that in this case the means of infringement was the BitTorrent system of file sharing — not the entire internet — and while iiNet made access to the internet possible, it had no control over how its users obtained and used BitTorrent and shared the studios’ copyright. “iiNet,” he concluded “has no control over the BitTorrent system and is not responsible for the operation of the BitTorrent system.” To press home the point and to conclusively distinguish these circumstances from the Kazaa case, he said:

“While the Court expressly does not characterise access to the internet as akin to a ‘human right’ as the Constitutional Council of France has recently, one does not need to consider access to the internet to be a ‘human right’ to appreciate its central role in almost all aspects of modern life, and, consequently, to appreciate that its mere provision could not possibly justify a finding that it was the ‘means’ of copyright infringement.”