Chapter 2 – The Beginnings of the End

Through a shift in time

For 400,000 years, human beings have been talking to each other. But we’ve only been doing so with the help of mechanically animated moving images for a little over a century. In a world saturated with TV and film, it’s hard to imagine that before the end of the 19th century, the only way we could see other walking, talking human beings was by observing them physically, in the flesh. But now virtually everyone in the Western world, and the majority in many other parts of the globe, can simply flick a switch on a device located in one or more rooms of their homes, and moving images of other human beings will stream out.

For the first half of the 20th century, we had to head off to a big darkened hall to experience this pleasure. Only those in large enough towns could have the privilege, and they had to accept what was showing in their local cinema – or travel to another city for a different choice. This was a primarily passive experience, with the darkness and seating arrangements in cinemas designed to immerse viewers in the film world. We could choose to ignore what was on the screen, talk to our neighbours, or engage in more involved back-row activities. But we couldn’t change what was onscreen. The multiplex has only partially improved choice, whilst the return of 3D has merely reinforced isolated, passive immersion.

When TV started to become widespread in Western countries in the middle of the 20th Century, suddenly we could choose to stay in instead – if we were rich enough to afford a set. Then, a few years later, in the UK there was a choice of two channels, and around a decade later a third. In the US, narrower wavebands, amongst other factors, enabled broadcast TV to offer more channels from an earlier date, and cable TV has existed for decades longer too. So channel choice has been a more ingrained part of American TV culture than of most other parts of the world.

But it was still slightly inconvenient to change channels for the first few decades of TV, encouraging you to stay with one channel for longer periods. In the days of the dial, dexterity was required to get the tuning right. And you still had to get out of your seat even when your TV had pre-programmed buttons. Only with the arrival of the remote control did the more viewer-focused notion of “channel surfing” arrive. Although designed in the mid 1950s, the remote control didn’t become common until the 1970s, and was still far from universal even in the 1980s.

Around the same time, the 1980s, the VCR arrived to add another dimension of choice, providing the opportunity to choose not only what to watch, but when to watch it. More recent developments such as the personal video recorder (or PVR) and TiVo box have enhanced and automated this process. Although the concept is basically the same as the VCR’s, these devices have created a very negative climate for advertisers. It is now so easy to skip commercial breaks that the value of these slots has plummeted to crisis levels. However, this in itself hasn’t been enough to engender the tectonic shift now shaking the world of television.

Alongside these technological developments, the number of channels available has multiplied like a warren full of audiovisual rabbits. Cable, satellite and the increasingly available digital terrestrial TV (DTT) services have seen choice grow from four or five analogue channels in the UK to 30 or more on DTT, with hundreds available on cable and satellite. The most recent of these developments have happened only within the last few years, too. After 400,000 years of human language, in the space of little more than a century audiovisual entertainment has gone through an exponential rate of change. From very little choice, the range of options is increasing every day. But that was before the arrival of the Internet. This could be seen as merely a continuation of the trend towards a greater range of selection. But it’s much more of a qualitative upheaval than previous developments, even if broadcasters are continuing to pigeonhole the Internet as merely a new distribution method for essentially the same content.

The contrast is best illustrated by comparing the BBC’s iPlayer and its US equivalents with YouTube. The former repackage existing TV content for the Internet, making it simply a more convenient PVR. You don’t even need to remember to set your machine to record your content, as everything broadcast will be available for you to watch later online, whenever it takes your fancy. Such services do allow you to share your favourite programmes with friends, but YouTube, with questionable legality, has allowed the sharing of favourite portions of programmes, although copyright holders do clamp down on this regularly. Even more significantly, the variety of user-generated content has gone far beyond what we’re used to seeing on TV.

You could see much of this content as merely individual examples of the kinds of clips already featured on programmes such as America’s Funniest Home Videos or the UK’s Adam and Joe Show. However, there are numerous differences, and where these kinds of TV programme do feature Internet-originated video clips, they strip them of their context within Web culture. Video bloggers or Internet characters such as Annoying Orange, Smosh or Yogscast gain large regular followings for content which would not be cost effective to produce on TV, or would end up as filler between longer programming, although some do make it onto mainstream broadcast TV, such as Annoying Orange’s series on Cartoon Network.

But these series garner their huge followings due to their status within a community of sharing, and it’s hard to understand their popularity from outside the communities that share these clips between them. As Henry Jenkins has argued, one of the primary reasons why YouTube succeeded amongst numerous other similar video streaming websites was thanks to its built-in social facilities. A comments system similar to blogging sites was built into YouTube at an early stage, and even more importantly the code for embedding clips in external websites was made extremely easy to extract (just copy and paste) and actively encouraged.

So although YouTube is still very much a destination site you are meant to go to in order to browse for content, and indeed this has made it one of the top three Internet destinations in the world, the true secret of its success is the ease of using YouTube content elsewhere, as part of a wider cultural conversation. It’s the underlying engine for most video embedding, accounting for around 82 per cent of this activity according to Sysomos, and early on it provided the tools for sharing favourite clips you have discovered via all the many available channels – email, blogs and social networks. The strength of a clip is increasingly not so much how many people have watched it, but how many pass it on to their friends. For this reason, online video tracking company Unruly Media’s Viral Video Chart switched to counting shares rather than views in its listing of the most significant clips of the moment.
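For readers curious about the mechanics, the “copy and paste” embedding described above boils down to a single fragment of HTML that YouTube hands to anyone who asks for it. A sketch of what such a snippet looks like – VIDEO_ID here is a placeholder for the identifier that appears in any clip’s own web address:

```html
<!-- Embedding a YouTube clip in any web page: paste a fragment like this
     into the page's HTML. VIDEO_ID is a placeholder; the real embed code
     supplied by YouTube fills it in automatically. -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="Embedded YouTube clip"
        allowfullscreen>
</iframe>
```

It is precisely this one-fragment simplicity – no servers to run, no video files to copy – that allowed clips to propagate across blogs and social networks as part of that wider cultural conversation.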

During the history of TV, content has been narrowly arranged into channels. Individual shows have always been important, but still broadly arranged within the remit of a channel. In the UK, the arrival of Channel 4, with its unusual part-public service, part-commercial remit, allowed an explosion of content which hadn’t previously fitted within the remits of either BBC or ITV channels. This model has also acted as a gatekeeper, as only content capable of a certain level of viewing figures has been able to remain with a regular commission, or get shown at all in the first place. Now, though, the Internet has seen the potential demise of content channels as the most significant force. Instead, individual programmes essentially become content channels, and don’t necessarily even need an audience at all to get made and put up online. The rise of the multi-series DVD box set could also be viewed in this context, becoming separated from the channel that hosted the series originally – a trend that has continued into online rental services such as Netflix and Lovefilm.

It would be particularly telling here to consider how TV has been adopted in the developing and non-Western world, which as a subject could fill a whole series of books rather than merely part of a chapter in one. However, most relevant for the current argument is the notion that developing countries have the option to skip a generation, or two. Mobile phones have frequently found favour over landlines in many developing countries, because the infrastructure is often cheaper to install than brand-new physical wiring, and also easier to protect where copper wire theft is common.

The uptake of television is a little more complex, as mass broadcasting has different political implications from one-to-one personal communications. The smartphone, which we will be returning to in a later chapter, further complicates the issue. But as the political upheavals in Egypt, Libya and Syria, collectively known as the Arab Spring, and the riots in the UK in the summer of 2011 have shown, the relevance of mass communications can easily be sidelined by social media and viral usage of one-to-one or one-to-few communications. It’s a powerful thing when networks of individuals can be mobilised to mass effect in a relatively non-hierarchical manner. The Arab Spring in particular has shown that the mass model of TV can be rather impotent when faced with motivated social sharing of content.

The rise of guerilla video

It really isn’t a surprise that the Internet video challenge to broadcast TV comes primarily from the US, and not just because the Internet itself is a very American-centred cultural phenomenon. There were other conditions which made the US the natural breeding ground for YouTube and its competitors. Making your own videos as an individual used to be impossible unless you were rather wealthy, and that includes the Super 8 cine camera craze, which wasn’t exactly the common people’s choice due to the cost. But the arrival of the video portapak in the late 1960s made a whole new era of independent video production possible, with significant political consequences. The considerable drop in production costs meant that stories of the underground and alternative culture could be recorded. Deirdre Boyle has traced this development in great detail in her excellent book Subject to Change.

Although 16mm film and synchronised sound using the Nagra had already given cinema much greater flexibility, spawning the Direct Cinema and Cinéma Vérité movements, it was really with the arrival of video that production could broaden out well beyond a dedicated elite. The seminal Video Toaster further made elaborate video effects available to a much wider group, acting as a harbinger of what was to come. In the 1990s, digital camcorders and non-linear computer video editing further democratised the means of producing video, making it cheaper and easier to shoot and cut content. But one piece of the puzzle was still missing – how do you make your video available for people to watch?

Until around the turn of the millennium, this still amounted to organising events to show your work. At the most basic level, this meant inviting friends and family round your house or a rented space. Or maybe you could be lucky and have your film accepted for a short film festival. In the UK, there were also film clubs showing short pieces, such as The Halloween Society and Exploding Cinema, or there was the Undercurrents VHS distribution system. But this was still a very small, underground audience.

In the US, however, with its ubiquitous cable TV service, public access cable meant anyone had a right to put something on TV – even if it ended up being shown at 3am. This fuelled a much more vibrant underground videomaking movement in America than in most other countries. There were even collectives sharing satellite bandwidth across the whole country, such as Deep Dish TV and Paper Tiger Television. Facilities such as Downtown Community Television in New York City were set up to cater to those looking to produce this “guerilla video” content, with cheap rates for editing suites and equipment hire. Indeed, the American public access TV show became an icon, immortalised in the popular movie Wayne’s World. Outside the US, people just thought this film was funny. But its subject matter represented the considerably greater access to the means of broadcasting already enjoyed in the US compared to Europe and elsewhere.

With public access cable TV as its forebear, the US was already ripe for what the Internet had to offer. It was obvious well before the days of broadband, even when all you could put up was a clip the size of a postage stamp, that the Internet could potentially provide a way to “cut out the middle man”, and allow video producers to present their content directly to viewers, without a broadcaster or other distributor sitting in between. In the US, where it was already possible to produce your own public access TV, and where local TV stations were often a good deal smaller than the national stations typical elsewhere, Internet video sharing was a natural progression.

So camcorders, then digital camcorders, and now camcorder-equipped smartphones, have provided the means of shooting video, whilst cheap computers and software have supplied the means of editing it. Sites like YouTube, Metacafe, Vimeo and the plethora of alternatives have added the means of distributing video. In fact, with audiovisual content becoming more seamlessly intertwined with website coding than ever before in the fifth generation of HTML, even these video hosting sites aren’t absolutely necessary. But neither this nor a hosting site like YouTube provides access to an audience, unless you happen to get very lucky. This is something we will begin to look at in the next chapter.
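To illustrate why the fifth generation of HTML makes hosting sites dispensable: HTML5 gives web pages a native video element, so anyone can serve video straight from their own web space with no plugin or intermediary. A minimal sketch – the filename is a made-up placeholder for a video file you have uploaded yourself:

```html
<!-- HTML5's native video element: the page plays the file directly,
     with no Flash plugin and no hosting site in between.
     "my-film.mp4" is a placeholder filename for illustration. -->
<video controls width="640">
  <source src="my-film.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>
```

The browser supplies the play, pause and volume controls itself; all the videomaker needs is somewhere to put the file.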


Chapter 1 – Introduction

The gogglebox. The tube. The idiot box. If you were born after 1950 and haven’t spent your entire life living in the remotest Amazonian rainforest, you will have grown up with television. Maybe your family didn’t actually own one until the 1970s, or maybe you think it’s mostly full of puerile rubbish and never watch. But we now take that luminescent screen sitting in the corner of our living rooms for granted. Like sliced bread or global warming, TV feels like it’s here to stay, despite having existed for little more than half a century.

But even though the television has become a standard household appliance like a fridge or cooker, viewing figures have reached saturation. The World Cup may still be garnering billions of viewers every four years, with FIFA claiming increasingly inflated figures each time. However, the general trend for the most popular examples of everyday programming is down – in fact, considerably down. Until the London Olympics in 2012, the top ten most-watched programmes in the UK were all from the 20th Century, and most from the 1980s or earlier. The most popular episodes of the most popular programmes, usually soap operas, used to achieve 20 million viewers or more on a regular basis. In the 21st Century they are lucky to surpass ten million. The US is even further down this route, with half of the top ten programmes ever dating back to the 1970s. Some new formats have engendered a mild renaissance, in particular the hybridisation of reality programming and talent show best epitomised in the UK by ITV’s X Factor. But these still haven’t returned viewing figures to the glory days of the 20th Century.

Running parallel to our familiarity with TV, we now think it’s perfectly normal that films are around two hours long and we see them in large darkened public rooms. The trend for 3D hasn’t altered this fundamental format in any serious way. Yet this was also a format that took a few decades to form. Before D. W. Griffith’s seminal 1915 epic The Birth of a Nation proved that films longer than an hour could garner large audiences, movies were much more varied in length. Indeed the first actualité movies were just the duration of a single reel of film. Edison also conceived the movie as a personal viewing experience rather than a theatrical performance to be seen by large groups.

In contrast, television has clearly become a more domestic pastime, watched alone or in small groups. But there was no guarantee at the beginning of the 20th Century that these would be the forms our foremost duopoly of audiovisual entertainments would take. During the 1936 Olympics, for example, Germany broadcast near-live footage of sports events to salons and clubs equipped with screens, in an early precursor of today’s sports pubs and bars, or the giant screens redistributing live sports occasions to parks and public squares. Television wasn’t originally conceived purely as a home device. But public screening didn’t turn out to be the dominant format when TV took off around the world after the Second World War, primarily driven by the US and UK.

Since then, TV settled into a relatively stable form for the rest of the 20th century. More channels, and the remote control’s ease of changing between them, have given viewers greater control over what they watch. The VCR and more recently the PVR have allowed us to choose when we watch our favourite programmes. Satellite, cable and digital TV have expanded the choice still further, and the TiVo has made it possible to fit our viewing habits even more closely around our personal preferences rather than vice versa. Yet we still watch programmes on the (increasingly large) screen in our living rooms with similar formats to the ones we did in the 1950s. There are game shows, dramas, news, documentaries, and comedy.

But as the 21st century gets into full swing, TV’s dominance has come under increasing attack. Thanks to the rise of the home computer, Internet and smartphone, more and more of us are obtaining our audiovisual content in different ways. At the beginning of 2012, YouTube was delivering over three billion videos a day to 800 million users a month. Even by May 2010, the BBC’s iPlayer was receiving 123 million play requests a month. According to comScore, by mid 2007, 75 per cent of Internet users in the US were watching 181 minutes of video per month online. An ICM survey for the BBC in 2006 found that nearly half of those watching video online consumed fewer hours of television as a result of their online viewing. The trend has continued upwards since all these statistics were reported.

The format of that content is changing to fit the new way we’re watching, too. YouTube only altered its policy to allow videos longer than ten minutes in July 2010, and its previous focus on short pieces has encouraged a rather different range of formats than has dominated TV over its reign. Short comedy sketches, video blog diaries, favourite clippings from popular TV shows, and – most importantly – opportune moments from life best epitomised by Charlie Bit My Finger… again! have racked up some incredible viewing statistics, with a few topping hundreds of millions of plays. A cat with seemingly ninja-like skills of stealth may be considered puerile compared to carefully constructed TV programmes, but people want to watch such things, and often in great numbers.

Together, these factors raise the question: are we witnessing the beginning of the end for TV as we know it, or is this trend just a fad? Those with vested interests in the technology and commerce of traditional TV will be hoping the latter is true, but there are many indications that it’s not. The figures show that for decades in the UK we have been watching an average of 25–35 hours of TV a week, depending on the time of year, and that viewing has been spread over an ever-increasing choice of channels – a trend mirrored in most developed nations. The real growth in audiovisual consumption is elsewhere. This book traces the rise of alternative viewing modes and novel formats, looking towards a future where TV itself could become marginal, like music hall and travelling mummery before it. Television may not be about to cease existing entirely. But its dominance is under serious threat.
