Researching Personal Data
June 21, 2012
Last weekend I attended the Open Internet of Things Assembly here in London. You can read more comprehensive accounts of the weekend here. The purpose was to collaboratively draft a set of recommendations/standards/criteria to establish what it takes to be ‘open’ in the emerging ‘Internet of Things’. This vague term describes an emerging reality where our bodies, homes, cities and environment bristle with devices and sensors interacting with each other over the internet.
A huge amount of data is currently collected through traditional internet use – searches, clicks, purchases. The proliferation of internet-connected objects envisaged by Internet-of-Things enthusiasts would make the current ‘data deluge’ seem insignificant by comparison.
At this stage, asking what an Internet of Things is for would be a bit like travelling back to 1990 to ask Tim Berners-Lee what the World Wide Web was ‘for’. It’s just not clear yet. Like the web, it probably has some great uses, and some not so great ones. And, like the web, much of its positive potential probably depends on it being ‘open’. This means that anyone can participate, both at the level of infrastructure – connecting ‘things’ to the internet, and at the level of data – utilising the flows of data that emerge from that infrastructure.
The final document we came up with, which attempts to define what it takes to be ‘open’ in the internet of things, is available here. A number of salient points arose for me over the course of the weekend.
When it comes to questions of rights, privacy and control, we can all agree that there is an important distinction to be made between personal and non-personal data. What also emerged for me over the weekend were the shades of grey that complicate this apparently clear-cut distinction. Saturday morning’s discussions were divided into four categories – the body, the home, the city, and the environment – which I think are spread relatively evenly across the spectrum between personal and non-personal.
Some language emerged to describe these differences – notably, the idea of a ‘data subject’ as someone whom the data is ‘about’. Whilst helpful, this term also points to further complexities. Data about one person at one time can later be mined or combined with other data sets to yield data about somebody else. I used to work at a start-up which analysed an individual’s phone call data to reveal insights into their productivity. We quickly realised that when it comes to interpersonal connections, data about you is inextricably linked to data about other people – and this only intensifies the more data you have. This renders any straightforward division between personal and non-personal data inadequate.
During a session on privacy and control, we considered whether the right to individual anonymity in public data sets is technologically realistic. Cambridge computer scientist Ross Anderson’s work concludes that absolute anonymity is impossible – datasets can always be mined and ‘triangulated’ against others to reveal individual identities. It is only possible to raise or lower the cost of de-anonymisation. Perhaps the best that can be said is that it is incumbent on those who publish data publicly to make efforts to limit personal identification.
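To make the ‘triangulation’ point concrete, here is a toy sketch of a linkage attack: re-identifying people in an ‘anonymised’ dataset by joining it against a public dataset on shared quasi-identifiers (postcode, birth date, sex). All names, fields and records below are invented for illustration – this is the principle, not anyone’s actual method:

```python
# An 'anonymised' dataset: names removed, but quasi-identifiers kept.
medical = [
    {"zip": "02138", "birth": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1960-01-02", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g. a voter roll) that does include names.
voters = [
    {"name": "Alice Example", "zip": "02138", "birth": "1945-07-21", "sex": "F"},
    {"name": "Bob Example",   "zip": "02139", "birth": "1960-01-02", "sex": "M"},
]

def reidentify(anonymised, public):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for a in anonymised:
        for p in public:
            if (a["zip"], a["birth"], a["sex"]) == (p["zip"], p["birth"], p["sex"]):
                matches.append((p["name"], a["diagnosis"]))
    return matches

print(reidentify(medical, voters))
```

Real attacks use fuzzier matching over much larger populations, but the lesson is the same: removing names is not the same as removing identity, and each extra published dataset lowers the cost of the join.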
Unlike its current geographically-untethered incarnation, the internet of things will be bound to the physical spaces in which its ‘things’ are embedded. This means we need to reconsider the meaning of and distinction between public and private space. Adam Greenfield spoke of the need for a ‘jurisprudence of open public objects’. Who has stewardship over ‘things’ embedded in public spaces? Do owners of private property have exclusive jurisdiction over the operation of the ‘things’ embedded on it, or do the owners of the thing have some say? And do the ‘data subjects’, who may be distinct from the first two parties, have a say? Mark Lizar pointed out that under existing U.S. law, you can mount a CCTV camera on your roof, pointed at your neighbour’s back garden (but any footage you capture is not admissible in court). Situations like this are pretty rare right now but will be part and parcel of the internet of things.
I came away thinking that the internet of things will be both wonderful and terrible, but I’m hopeful that the good people involved in this event can tip the balance towards the former and away from the latter.
June 14, 2012
The philosopher Immanuel Kant once said that if the world were an infinite plane, then all the problems of political philosophy would be solved. If one citizen disagreed with the way his society was run, he could pack up and start a new one over there. In reality, we’re stuck with this spherical earth, and if you keep relocating over there, eventually you’ll end up back here, to face whatever it is you were trying to get away from in the first place. So it looks like we’re stuck with each other, and the challenge of modern society is to find a compromise.
One thing Kant probably didn’t imagine is that in the 21st century, we would spot an opportunity for a new over there. Last week was the third annual conference on Seasteading. The Seasteading movement aims to create small floating cities in international waters. They envision experimental societies, intentionally-formed communities free from the regulation of national governments and the influence of social mores.
In reality, the most serious interest in seasteading has come from rich venture capitalists. Peter Thiel, the billionaire founder of PayPal and noted libertarian, donated $500,000 to the Seasteading Institute in 2008. As a Silicon Valley venture capitalist, Thiel knows first-hand the downsides of government regulation. He has seed-funded a seastead off the shore of California, which will provide day trips to the mainland and promises to get around the restrictive work visa system, allowing the unrestricted flow of international capital and labour.
But for some, seasteading is more than just a legal hack. It’s an opportunity to apply the scientific method to society; each seastead an experiment to test an economic or political idea. Do financial transactions taxes really chill innovation? What are the consequences of zero welfare provision? How about if we legalise all drugs? Policies which would be impossible in a large democracy with a divided citizenry become possible in smaller communities of like-minded individuals.
So it is no surprise that seasteading is popular amongst libertarians like Thiel. And libertarian seasteads may indeed prove highly successful. But to see them as experiments in the ‘science’ of politics is a rather dangerous mistake. Such ‘experiments’ have flawed validity; the citizenry of libertarian seasteads would end up a selective group blessed with talents and riches, who spend at least as much of their resources on keeping the wrong people out as on letting the right people in.
Thiel criticises the US government’s immigration policy, as it prevents skilled foreign programmers from working in Silicon Valley. But the libertarian view of immigration has an ironic nuance. On the one hand, libertarians often advocate open borders, arguing – admirably, if unrealistically – that no government should interfere with an individual’s freedom to roam the world as he wishes. On the other hand, in a libertarian society, where private property is absolute and everything is privatised, undesirable immigrants would have the same rights as trespassers, i.e. none. Some seasteaders, fearful of climate change, have even begun building self-sustainable floating islands, impenetrable to climate refugees. Those foreign programmers on Silicon Island may be welcomed, but only at their host’s discretion. The poor, the destitute, the dispossessed, and the sick need not apply. The taxpayers on the mainland who funded the seasteaders’ education can also forget about getting anything back.
Libertarian seasteads will be the preserve of the rich, and cut free from the draining demands of the rest of society they may well thrive. But this would hardly be a lesson for the rest of us. Those of us who know that the earth is not an infinite plane, also know that the challenge of building a good society means caring for all. The success of selective libertarian islands would constitute the failure of humanity to work together for an equitable future in a prosperous world.
June 13, 2012
This is a follow-up to my last post, in which I argue that one of the copyright industry’s favourite arguments against digital piracy can be generalised to other ways of consuming media – including watching ad-supported content whilst ignoring the ads.
Some of the main objections from those who read the piece include (I hope I’m faithfully representing them here):
Advertisers know and expect that many people will ignore their adverts, so there’s no obligation to watch/click on them. My answer is that the same thing could be said about the copyright industries, who know and expect that their content will be copied in a variety of infringing ways by many people. They take this risk, hoping that enough people will pay to make their investment worthwhile.
If there aren’t enough people clicking on adverts to sustain the future creation of the content, it doesn’t matter because publishers and/or advertisers will change their business models to support future content. Again, the same can be said about the copyright model; it doesn’t matter if there aren’t enough people paying for their material legitimately through current channels, because content providers (whether artists themselves or their publishers) will eventually change their business models. And in fact, this is already happening. This heartening industry report from the folks at TechDirt charts the growth of alternative business models for the creative industries, which have nothing to do with copyright enforcement.
I certainly didn’t mean to imply that ignoring adverts is legally equivalent to digital piracy. I agree that legally/contractually speaking, copyright owners have every right to prosecute digital pirates, while no-one has such a right with regards to ‘advert ignorers’. But the free-rider argument deployed by the copyright industries is a moral argument, the purpose of which is to convince us that copyright law as it stands is worth adhering to. It’s no good claiming that piracy is different to ignoring adverts because there are pre-existing laws and expectations regarding piracy, but different laws/expectations regarding ignoring adverts. Such an argument has no force against digital pirates, who we can assume have very different expectations and don’t believe in the laws as they currently stand. What they need is an independent reason to pay for content via legitimate channels – and the main one offered so far (the free-rider argument) unfortunately generalises to another, seemingly innocent, practice: ignoring adverts.
Finally, it was suggested that the free-rider argument could be given either a consequentialist or a ‘contractualist’ interpretation, and the latter might not generalise to advert-ignorers. I think there is an important distinction between contractualism as ethical theory and as legal theory. The first could serve as a sound justification for the law, while the second is merely a description of the law and fails to have independent moral force against pirates for the same reasons outlined above. Perhaps you could have a moral defence of copyright in contractualist terms, but that would be a different argument and certainly not something that copyright defenders routinely appeal to.
Lastly, @oliverbills brought up the history and future of automatically blocking adverts in the browser, noting how pop-up ads were killed by browser plugins, only to be replaced by something else. It certainly does seem like an arms race which is likely to continue for a long while yet. The better we (or our browsers) get at ignoring or blocking adverts, the better they will get at attracting our attention and circumventing browser plugins. The one form of advertising I can think of which is immune to this is product placement in films. I don’t imagine we’ll see a plugin capable of blocking out, say, Tom Cruise’s Ray-Ban sunglasses in Risky Business any time soon.
June 8, 2012
Is downloading copyright-infringing material morally wrong?
It’s a question which the more conscientious members of my generation have probably asked themselves at some point. My best answer to date is still an unsatisfying “Sometimes yes, sometimes no”. There are dozens of different ways of looking at it, and it really does just depend.
Those who believe it is wrong (and have given some thought to the justification of their belief), tend to settle on a couple of distinct arguments. For me, the most compelling is based on the idea that by engaging in piracy, you are undermining the creation of the very thing you enjoy. If you don’t pay anything for the content you consume – whether it is music, film or literature – then future works may not be created. Thus, the problem is that piracy is a kind of ‘free ride’ (indeed, this was the title of a recent book on the subject).
Now, clearly, there are several bones which one could pick here: if you wouldn’t have paid for the media anyway, then consuming it illegally makes no difference; so-called ‘piracy’ is often the basis for new creativity; and the funnel between copyright industry revenues and pure creative activity is hardly direct, so we are only morally obliged to pay a fraction of the current prices. But let’s go along with the ‘free-ride’ argument for now, as it seems to be one that most reasonable and outspoken critics of file-sharing turn to.
My question then, is this: if piracy is wrong because it is a kind of free-ride, is ignoring adverts when enjoying ad-supported media any different? How many of us avert our gaze from the banners beside our favourite news site or social media platform? How many people mute the TV during breaks in shows, skip sponsored videos on YouTube, or even use a browser plugin to block out all adverts from the web? The content we enjoy consuming from these spaces can only survive by allowing companies to effectively advertise their products and eventually get consumers to part with their cash. If we don’t even at least look at some of these ads, let alone click on them, aren’t we undermining the future creation of the very material we enjoy?
Of course, there are differences between an ad-supported vs paid business model. But if what makes digital piracy wrong is that it is free-riding, then this should generalise to any business model. If you want to ensure the future creation of content, then you have to play along with whatever it is that sustains that content. This is true whether that means paying for copyrighted media, or subjecting your eyeballs to the adverts that support it.
This conclusion, that ignoring adverts is wrong, may strike the reader as rather strange. How could something that we do every day, without even thinking, be morally wrong? At which point the habitual file-sharer says: ‘Exactly’. If you don’t see anything wrong with ignoring the adverts that sustain your free content, then you shouldn’t see anything wrong with file-sharers ignoring the copyright model that sustains their free content. Conversely, if you still think piracy is wrong, then you’d better start clicking on those ads once in a while.
May 9, 2012
I heard last week that UK internet service providers are going to begin censoring file-sharing link aggregator The Pirate Bay. I don’t use TPB, but I went straight to the site to see if it was still accessible (many others evidently did the same, causing an unprecedented traffic spike). As it happens, my broadband provider (BT) haven’t yet decided whether to join in the censorship. So I probably have a little while left to note down the IP address or install appropriate circumvention tools (such as this browser plugin), if I ever want to access TPB in future.
Last year I put together a review and map of some of the academic literature in this area, addressing the question of how effectively governments can censor the web. It is by no means comprehensive (leaving out some important commentators in the area such as Rebecca MacKinnon), but I’ve tried to include a representative sample of the various disciplines I think are needed to answer this question. It doesn’t just boil down to a technical question about tools for censorship and circumvention – we have as much to learn from sociological, legal, political and economic research. I break the issue down into three factors:
• Technical tools and infrastructure – what do governments have at their disposal?
• Circumvention – how successful and widespread are citizens’ attempts?
• Limitations on government power – both constitutional and influence over private industry
I’ve represented the research relevant to each factor in the map below:
You can read the rest of the report here (PDF).
I’m also interested in trying out OONI-probe, a new tool that anyone can deploy to detect censorship on the ground. In addition to the various annual reports (from the Open Net Initiative, HerdictWeb, and others) this should prove an invaluable tool for tracking online censorship in future.
April 15, 2012
This work takes part in the Future of Copyright Contest.
<?xml version="7" encoding="UTF-8" ?>
<publicationTitle>Journal of Information History</publicationTitle>
<articleTitle>Book Review: ‘History of Copyright 2012 – present’ by Engstrom and Grucht</articleTitle>
<authorName>Lu Xu Fei</authorName>
<pubDate>14 April, 2072</pubDate>
In History of Copyright 2012 – Present, Erik Engstrom and Karel Grucht explore the recent history of copyright, from its heyday in the early 2000’s through the middle of the century. In revisiting these issues – many of which are now of merely historical interest – the authors hope to give the modern reader a sense of the road travelled towards current creative production. I here present some highlights of the book, before reflecting on its broader themes and lessons for the modern day.
The histories collected here indicate the sheer complexity of the copyright crisis of the 21st century, and the diversity of responses to that crisis from governments, businesses, artists and consumers. They also show how the disputes over copyright which took place during that time stemmed from fundamental economic, political and philosophical questions; questions about the nature of creativity, incentives and rewards, rights and freedoms, and the value of immaterial, infinitely copyable goods – questions which remain equally pertinent today.
Despite the many changes in knowledge production which have taken place over the last eight decades, the format in which most knowledge is curated – the academic journal – has remained relatively stable since its initial incarnation in the 17th century (in the form of Philosophical Transactions of the Royal Society). This very journal, which is now nearing its 60th anniversary, is testament to the resilience of the format.
What has changed is access to this realm of knowledge. When Information History was first published in 2013, it was part of a growing minority of open access academic journals. At that time, most of the world’s peer-reviewed knowledge was locked up behind paywalls; only the most well-endowed institutions could afford access to the whole catalogue. But through a slow and steady movement for open access, the dream of a free online library of the world’s knowledge was eventually realised.
This change is documented by Engstrom and Grucht, in the first chapter ‘The Demise of Closed-Access Academic Publishing’. They argue that the incumbent publishing industry eventually crumbled due to three major events. The first was driven by research funding bodies. Their increasing adoption of open-access mandates ensured the fruits of their research grants were published open-access. The second came when academics began saying ‘No’; no more submissions to closed-access journals, no more refereeing, no more editorial work. Starved of this free labour, the closed-access publishers began to lose their only source of value.
The final nail in the coffin, argues Engstrom, came when the remaining three major publishing companies went bankrupt after losing a high-profile joint lawsuit. Elsevier-Wiley-Springer vs ScholarSec (2022) was a landmark case in which the defendants, a group of students, had harvested several million academic articles from behind a paywall and disseminated them online. After a stirring defence, the publishers lost and could not afford the legal costs. The bankrupt publishers’ assets – millions of copyrighted papers – were then seized and turned over to the public domain.
With the adoption of open licensing of academic literature as the default, educational opportunities opened up not only to scholars but also to those outside the walls of academia. Health workers in the developing world could access medical research. Concerned citizens could better scrutinise the scientific evidence cited in government policy-making. High school students around the world now had exactly the same informational resources as a Harvard professor – significantly levelling the playing field.
At a time when data mining was truly taking off in all areas of business, it became possible to apply these techniques to the vast trove of scientific literature. Where the legacy publishers had prohibited researchers from mining datasets attached to scientific papers, open access led to a wave of new research based on these techniques. Meta-studies proliferated, allowing researchers to gain a broader perspective on their own disciplines. New insights came from statistical inferences drawn from the mass of data. Even the humanities and social sciences were transformed by the new trend in data-driven ‘culturomics’.
(It is worth noting this chapter has great personal significance for one of the authors; Engstrom is the former CEO of Elsevier Publishing who, after a Damascene conversion in 2021, quit to become an open access advocate and historian.)
In the second chapter ‘Copyright Policy Behind Closed Doors: International Trade Agreements of the 2010’s’, Engstrom and Grucht take us back to the 2010’s, when governments of the then ‘developed’ world began attempts to negotiate their intellectual property arrangements in secret. Previous attempts to push agreements through democratic scrutiny had resulted in failure. The Anti-Counterfeiting Trade Agreement (ACTA), which consolidated a number of anti-piracy measures, had been signed by several national governments before being put to the EU Parliament in 2012. A workshop on ACTA was organised by the Commission for members of parliament and civil society groups. The Commission’s approach demonstrated their disdain for public engagement. Archived twitter messages from the time indicate that when the audience clapped to show appreciation for the case made against ACTA, they were asked to be quiet or leave. But after a successful citizen campaign, the Parliament rejected ACTA, and the international treaty was abandoned.
However, this was just the beginning. The dead body of ACTA came back, zombie-like, again and again. Subsequent proposals were negotiated in secret and had equally obscure acronyms: TPP (Trans-Pacific Partnership), CAUD (Coalition Against Unauthorised Duplication), PITTA (Preventing Idea Theft Technology Alliance), PASTA (Preserving Artificial Scarcity Trade Agreement), and many more. While drafted by the governments of developed nations, each was the result of heavy input from copyright industry lobby groups.
At first it was just Hollywood and the software industry who wanted worldwide legislation to control what individuals could do with their networked personal computers. But as computation became ever more embedded in products – from fridges to cars to pacemakers – and these products became connected to the emerging internet of things, virtually every consumer goods industry had an economic interest in copyright enforcement. Owning copyright over the code that runs on their products became an essential part of their business model, allowing them to control the way customers interacted with their new computerised environments. The combined weight of their lobbying efforts accelerated government attempts at global copyright enforcement into overdrive.
Some developing economies accepted these extreme measures, for fear of invoking hostile trade relations with first world governments. Others did not, and suffered trade embargoes as a result. But the price they paid was, in many ways, worth the benefit: rapid economic development fuelled by free access to knowledge and the fruits of technological innovation. Several rogue European states – especially those who had fared worse in the collapse of the eurozone – also elected to reject the new copyright enforcement measures. In their conclusion to this chapter, the authors argue that this was the beginning of the current divide in the global economy between ‘open’ and ‘closed’ economic models.
Chapter 3 deals with the entrance of 20th century works into the public domain, with the illuminating case study of Martin Luther King’s ‘I Have A Dream’ speech. Unlike many public speeches, audio and video of the speech were under copyright (administered by EMI Publishing) and were not widely available. In 2038, King’s speech was released into the public domain in several countries (U.S. citizens are still waiting; copyright over the work is due to expire in 2068, thanks to a 20-year extension of the copyright term). For many, this was the first time they had seen the most famous footage of the US civil rights movement in full. The video was watched by millions, and reinterpreted in light of the political and civil rights struggles of the 2030’s. The authors track how it very quickly became a viral meme, remixed and cut into thousands of new works, exploring just about every issue affecting 2030’s and 40’s society.
The fourth chapter charts the history of two international organisations: the IAA (Independent Artists’ Alliance) and the CCA (Content Consumers Alliance). They had their origins on opposing sides of the copyright debate, but by mid-century it became clear that each held the key to solving the other’s problems; their individual interests were actually in alignment towards a common future.
The IAA had its origins in disputes between the representative bodies of big content companies and the artists they represented. The former had a deserved reputation as staunch defenders of copyright maximalism, having lobbied heavily for SOPA, PIPA, ACTA and their various later incarnations (covered in the second chapter of this volume). By the late 2010’s, sentiment against these industry lobbies was fermenting among a majority of artists. The RIAA in particular was increasingly seen by artists as the mouthpiece of recording industry executives, at the expense of the artists it claimed to represent. The Independent Artists Alliance was then formed by a group of artists who had rebelled against their own labels by encouraging fans to illegally download their music. The IAA became the new de facto representative body for music artists whose interests were no longer represented by the RIAA. It later joined forces with authors, video makers, games designers and others, all of whom were similarly at odds with their former representative bodies.
The Content Consumers Alliance (CCA) was an initiative started in 2019 by a worldwide coalition of consumer rights bodies, technology companies and activists pushing for the reform of copyright laws. Despite traditionally being positioned on opposite sides of the copyright debate, by the mid 2020’s, the CCA and IAA had begun to see their interests as aligned around a set of emerging funding models. Their first joint campaign was to liberate the huge portion of 20th and 21st century culture – music, movies, books, art – for which legal copies were effectively unavailable. The CCA had published figures estimating that around 95% of cultural works created in the preceding 70 years were no longer available. Large multinational companies who owned the copyright were unable or unwilling to release them, while they continued to promote a narrow selection of mainstream content. Most fans had turned to the darknet, where illegal copies thrived.
The IAA successfully lobbied for new laws governing contracts between artists and companies. Artists were granted new rights to renegotiate unfair contracts, and to reclaim copyright over their works if companies failed to make them legally available for a pre-negotiated period of time. Meanwhile, the CCA facilitated crowdfunding campaigns to encourage 20th century artists, many of whom had faded nearly into obscurity, to make use of their new powers, regain copyright over their work and release it to fans. Engstrom and Grucht argue that this combination of legal reforms lobbied for by artists, and fan-based crowdfunding, gradually freed up a treasure trove of cultural works. It led to a renaissance of 20th century subcultures, reinterpreted and appreciated by 21st century sensibilities.
In their final chapter, Engstrom and Grucht attempt what must be a near impossible task; an overview of the business models which gradually overtook the default 20th century model of copyright. This one-size-fits-all model of ‘create, copyright, then sell copies’ is now just one model out of many, a niche practice which works only in a particular set of social, legal and technological circumstances. The authors are quick to point out that there are almost as many business models as there are types of creative practice. And even when they focus on particular creative practices, broad generalisations prove impossible.
Take, for instance, film and video. In the first decades of the century, the film industry had argued in vain that movies needed strong copyright enforcement to survive. In reality, their business model was supplanted by not one, but dozens of alternatives. As increasingly high quality film cameras were embedded in every personal device, and sophisticated CGI effects became available to bedroom amateurs, many more films were produced than ever before, and at lower cost than ever before. Some blockbusters were shown for free and relied on product placement and merchandise for profits. Others survived and even thrived on ticket sales alone, by building in live audience interaction and participation as an essential feature (it is no surprise that the 2030’s saw a revival of all-night cinema parties reminiscent of the 1970’s Rocky Horror Picture Show). But perhaps the most significant change came with film-makers turning to fans for capital. By 2026, 57 out of the top 100 U.S. box-office hits were financed primarily through crowd-funding websites, where thousands of fans put up the cash to get the films they wanted to see, made.
The film industry was not an isolated case. In every niche of every content industry, from romance novels to political documentaries, from collaborative storytelling to augmented reality games, new business models and revenue streams proliferated. But very few of them had anything to do with enforcing copyright. Their combined effect on the cultural landscape was huge, highly unpredictable and incredibly varied. If there’s one thing we can learn from Engstrom and Grucht’s final chapter, it’s that there was not one answer to fixing the 20th century’s copyright crisis. There were hundreds.
But in reading this book, one common theme clearly emerges. For most creators of video, music, art or text in the 21st century, it was no longer a case of selling copies of immaterial goods. Their recipients were no longer consumers interested in buying discrete digital products which could be infinitely copied at zero cost. Instead, they had become patrons, who wanted to support culture through real-life experiences and human relationships. And unlike digital goods, which are by nature infinitely copyable, experiences and relationships cannot be pirated.
Lu Xu Fei, 14th April 2072.
This work is licensed under CC-BY-SA.
March 25, 2012
I’ve been a fan of the Open Rights Group – the UK’s foremost digital rights organisation – for a few years now, but yesterday was my first time attending ORGcon, their annual gathering. The turnout was impressive; upon arrival I was pleasantly surprised to see a huge queue stretching out of Westminster University and down Regent Street.
The day kicked off with a rousing keynote from Cory Doctorow on ‘The Coming War On General-Purpose Computing’ (a version of the talk he gave at the last Chaos Communication Camp, [video]). In his typical sardonic style, Doctorow argued that in an age when computers are everywhere – in household objects, medical implants, cars – we must defend our right to break into them and examine exactly what they are doing. Manufacturers don’t want their gadgets to be general-purpose computers, because this enables users to do things that scare them. They will disable computers that could be programmed to do anything, lock them down and turn them into appliances which operate outside of our control, obscured from our oversight.
Doctorow mocked the naive attempts of the copyright industries to achieve this using digital locks – but warned of the coming legal and technological measures which are likely to be campaigned for by industries with much greater lobbying power. In the post-talk Q&A session, an audience member linked the topic to the teaching of IT in schools: the need for children to understand from an early age how to look inside gadgets, how they work, and that they may be operating against the user’s best interests.
As is always the way with parallel sessions, throughout the day I found myself wanting to be in multiple places at once. I opted to hear Wendy Seltzer give a nice summary of the current state of digital rights activism. She likened the grassroots response to SOPA and PIPA to an immune system fighting a virus. She warned that, like an overactive immune system, we run the risk of attacking the innocuous. If we cry wolf too often, legislators may cease to listen. She went on to imply that the current anti-ACTA movement is guilty of this. Personally, I think that as long as such protest is well informed, it cannot do any harm and hopefully will do some good. Legislators are only just beginning to recognise how serious these issues are to the ‘net generation’, and the more we can do to make that clear, the better.
The next hour was spent in a crowded and stuffy room, watching my Southampton colleague Tim Davies grill Chris Taggart (OpenCorporates), Rufus Pollock (OKFN), and Heather Brooke (journalist and author) about ‘Raw, Big, Linked, Open: is all this data doing us any good?’ The discussion was interesting, and it was good to see this topic, which has until recently been confined to a relatively niche community, brought to an ORG audience.
After discussing university campus-based ORG actions over lunch, I went along to a discussion of the future of copyright reform in the UK in the wake of the Hargreaves report. Peter Bradwell went through ORG’s submission to the government’s consultation on the Hargreaves measures. Saskia Walzel from Consumer Focus gave a comprehensive talk and had some interesting things to say about the role of consumers and artists themselves in copyright reform. Emily Goodhand (more commonly known as @copyrightgirl on twitter) spoke about the University of Reading’s submission, and her perspective as Copyright and Compliance Officer there. Finally, Professor Charlotte Waelde, head of Exeter Law School, took the common call for more evidence-based copyright policy and urged us to ask ‘What would evidence-based copyright policy actually look like?’. Particularly interesting for me, as both an interdisciplinary researcher and a believer in evidence-based policy, was her question about what mixture of disciplines is needed to produce conclusions that can inform policy. It was also encouraging to see an almost entirely female panel and chair in what is too often a male-dominated community.
I spent the next session attending an open space discussion proposed by Steve Lawson, a musician, about the future of music in the digital age. It was great to hear the range of opinions – from data miners, web developers and a representative from the UK Pirate Party – and hear about some of the innovations in this space. I hope to talk to Steve in more detail soon for a book I’m working on about consumer ethics/activism for the pirate generation.
Finally, we were sent off with a talk from Larry Lessig, on ‘recognising the fight we’re in’. His speech took in a bunch of different issues: open access to scholarly literature; the economics of the radio spectrum (featuring a hypothetical three-way battle between economist Ronald Coase, dictator Joseph Stalin and actress Hedy Lamarr [whom I’d never heard of, but who apparently co-invented ‘frequency hopping’, which paved the way for modern-day wireless communication]); and corruption in the US political system, the topic of his latest book.
In the Q+A I asked his opinion on academic piracy (the time-honoured practice of swapping PDFs to get around a lack of institutional access, which has now evolved into the twitter hashtag phenomenon #icanhazPDF), and whether he prefers the ‘green’ or ‘gold’ routes to open access. He seemed to generally endorse PDF-swapping. He came down on the side of ‘gold’ open access (where publishers become open-access), rather than ‘green’ (where academic departments self-archive), citing the importance of being able to do data-mining. I’m not convinced that data-mining isn’t possible under green OA, so long as self-archiving repositories are set up right (for example, Southampton’s EPrints software is designed to enable this kind of thing).
After Lessig’s talk, about a hundred sweaty, thirsty digital rights activists descended on a nearby pub, then pizza, then said our goodbyes until next time. All round it was a great conference; roll on ORGcon2013.
January 27, 2012
I’ve just put a post up on the Students for Free Culture Europe blog, about the latest goings-on in the EU’s attempt to push through ACTA. Over the next few months I’ll be helping to organise a potential Students for Free Culture conference in the EU parliament in June; the ratification of ACTA looks set to be a central theme for discussion.
October 26, 2011
This is my attempt to articulate what seems like a contradiction in our modern attitudes to the production and consumption of physical versus digital goods. It’s not a new observation, but I often find it lurking in the background of much of what I think and read about.
On the one hand, it is increasingly clear that we have begun to push the planet to its limits. We use more and more of the earth’s finite resources, plundering them faster than they can be replaced. Throughout the ages, we have been able to do this without facing negative consequences. Why replant the forest when you can go and chop down another tree? Why create new energy sources when we can continue drilling for oil? This way of thinking is deeply ingrained in our economic model. Growth relies on consumption, and the resulting environmental degradation is not easily factored into the calculation. But even as it becomes clear that the natural world can no longer be treated as abundant, we continue to act as if it is.
On the other hand, intellectual goods – by which I mean knowledge, culture, art, music, literature – are now more abundant than ever. They have, for most of history, been bounded by the scarce physical matter which allowed their transmission from one mind to another. The production and dissemination of knowledge and literature was for a long while dependent on paper, printing presses and costly distribution chains. Music was limited first by proximity to musicians, and later, by the material format on which sound was stored. Now, with the advent of the web, the cost of a copy of a book, song or image approaches zero. Modern technology enables us to have more intellectual goods than we could ever consume in a lifetime.
And yet the prevailing economic model for the production of intellectual goods requires us to behave as if they are scarce. The ‘content’ industries – those whose products exist as particular strings of 1s and 0s – have to limit the supply of their product to maintain its value. If just anyone can access the particular string of 1s and 0s which makes up an mp3 audio file, then the intellectual good loses its value in the marketplace. According to some, this ultimately leads to no new intellectual goods being produced in the first place, but that’s another story. In any case, this imposed scarcity is artificial in the sense that there is no technological reason why everybody cannot access those bits or run that piece of code.
In both cases, our beliefs about the value and availability of a given resource are grounded in the reality of the past. For centuries, the earth’s resources really were abundant, and the dominant attitude towards them was appropriate; it allowed human civilization to progress. Likewise, intellectual goods actually were scarce, so our consumption of them really did have to be limited. But now that the situation is reversed, our assumptions have failed to catch up. We treat our natural resources as if they are abundant, and intellectual goods as if they are scarce, when the environmental and technological realities suggest the exact opposite.
October 23, 2011
This year’s Open Government Data Camp, hosted by the Open Knowledge Foundation, was held in Warsaw, in the incredible post-industrial Soho Factory. A gathering of open government data enthusiasts from around the world, it was a platform for sharing experiences, tracking progress and debating pressing issues for the future of the movement.
This being my first visit to an event of this kind, I was impressed by the number of attendees – apparently a significant increase on last year – as well as their diversity (although it was disappointing to see no female keynoters). I joined on the second day, which got off to a swift and serious start with keynote presentations.
Andrew Rasiej made a rousing case against ‘E-government’ and in favour of ‘WE-government’. The former implies governments delivering wasteful IT services to citizens, while the latter is about governments opening up their datasets and allowing anyone to build on top of them. Tom Steinberg’s presentation about MySociety was a perfect example of what can be achieved with this approach. Chris Taggart from OpenCorporates set a sober tone by outlining why he believes the open government data movement will probably fail. The majority of the world’s data is held by a relatively small number of companies which show no sign of opening it up, and there are too many open data projects and initiatives operating in silos. He concluded that even with hard work, the odds are still stacked against the movement. Andrew Stott (the UK Cabinet Office’s Director of Digital Engagement) urged the audience to watch Yes, Minister, the classic British TV comedy set in the corridors of Whitehall, in order to understand how ‘they’ think and the barriers to opening up data.
Nigel Shadbolt outlined a number of important developments in open data, and briefly mentioned another issue which is set to grow in importance over the next few years: that of individuals getting access to the data that companies are gathering on them. Personally, I see this being manifested in two ways. The first is a government- and business-led approach, along the lines of the UK government’s recently announced ‘MyData’ initiative (for which Nigel is an advisor). The idea is that companies will release their customers’ data to individuals, who then give it to third parties, who use it to create services to sell back to the customer – imagine, for instance, an app which tracks your calorie intake by analysing your supermarket purchases. The other is a bottom-up, consumer-led approach, the beginnings of which we can already see in the fast-growing ‘Europe against Facebook‘ campaign, which aims to give Facebook users control over the data stored on the social networking site. It will be interesting to see whether and how these two approaches interact in the near future, and how they both relate to the open data movement.
Tom Steinberg explained how his latest project – FixMyTransport – was actually designed to ‘trick people into their first act of civic engagement’. The words ‘activism’ or ‘campaign’ don’t appear on the website, because that kind of language can often be alienating to the target audience, who just want to sort out a problem with their daily commute. The simple interface makes it very easy for a user to make a complaint. One complaint on its own has very little effect, but the site makes it very easy for individual complaints to aggregate publicly. With the support of five or more people, transport operators tend to take notice. The site is a few months old, and some early successes suggest the approach could work on a large scale. I really liked the idea of enticing ordinary people with no interest in or knowledge of open data to take part by creating a really simple and attractive interface and purposefully leaving out any political language.
The enigmatically titled ‘Open… ‘ session turned out to be a somewhat philosophical discussion led by Andrew Rasiej and Nigel Shadbolt about the meaning of terms like ‘open’ and ‘public’ when applied to government data. Does data published as a PDF count as public, or does it need to be machine-readable? In a world where more and more of our information-processing is done by machines, ‘public access’ to data which can only be processed via feeble human eyes means very little. Data which has to be scraped from a website is not, Nigel suggested, good enough. Clearly, the ideal would be a presumption that ‘public’ entailed access to the data in formats which allow sophisticated manipulation rather than mere eyeball-scanning.
Generally, there seemed to be surprisingly little discussion of the Open Government Partnership (an intergovernmental initiative to secure commitments to open up government data). When it was mentioned it was often accompanied by scepticism. Although there may be problems with the approach, and it may yet turn out to be another opportunity for governments to enthuse about open data without actually doing much, I wonder if it deserves more optimistic engagement at this stage. That said, there were so many conversations going on in parallel sessions that I may have missed the more positive opinions floating around the camp.
All in all, it was a fascinating snapshot of the current state of the open government data movement. While many challenges lie ahead, the next year is sure to be interesting. I look forward to attending the next event.