In Camera: a Video Practice of Living, Learning and Connecting

The State of Video


Chapter 2: The State of the Art

Having set forth a theoretical framework interested in the material details of medium specificity as they relate both to tangible technologies and socio-cultural context, in this chapter I explore the spectrum of contemporary video technologies, platforms, practices and communities. Foregrounding the inherent diversity of video practices, I seek productive understanding in both the tensions and continuities threaded through the field, from the most casual life-flow video making and sharing up through the most mature forms of industrial entertainment production. I am especially interested in shaping a clear picture of the middle of the spectrum: the boundary between amateur and professional, where we find enthusiastic and independent practices spreading innovation up and down the video ecosystem. This middle space is the space from which I forged my own path and understanding of video practice, riding the wave of the digital transition in the early years of the 21st century, learning alongside other practitioners who embraced new tools and shared their progress in online communities such as dvxuser.com. The key tension within this space was an almost paradoxical process in which video makers went to great lengths to use their tools to replicate the material qualities of film, emphasizing frame rates and depths of field that sacrificed acuity in favor of mimicry, chasing some notion of aesthetic legitimacy. As I survey the state of the art, I hope to interrogate these divisive pulls and suggest ways in which video practice can retain its innovative capacities without subsuming its distinct identity into the high-end Hollywood practices that have slowly enveloped and adapted video into their own workflows.




Say Yes to Vertical Video  

Even the popular dialog around the most casual video practices is inflected with aesthetic expectations based on industrial standards of entertainment production. With the proliferation of smartphone devices equipped with video and still cameras of ever-increasing resolution has come a proliferation of online video shot and shared in a vertical aspect ratio. This prompted a tongue-in-cheek backlash video called “Vertical Video Syndrome: A PSA” that uses puppets and sarcasm to decry the use of vertical aspect ratios. Though clearly meant to be humorous, the video still articulates a deep-seated sense of superiority and sophistication based on an understanding of ‘how video is supposed to look.’ The video received millions of views and spawned numerous blog posts and comments across the internet reiterating the opinions expressed therein, and I can attest through recent personal experience that being dismissive of vertical video is still a culturally recognizable mantra of superior aesthetic knowledge of moving images.

What interests me in particular about the “Vertical Video Syndrome” piece is the way it invokes entertainment industry standards and history to portray its warnings of a dire future. The video explains that movies, movie theaters and televisions have always been horizontal and that if people continue to create vertical videos on their mobile devices, this infrastructure will have to be scrapped and new high-rise vertical theaters built. Worse still, it will create an opportunity for George Lucas to go back and re-release Star Wars in a vertical aspect ratio. The hypothesis is an absurd gesture of humor, yet the implication that popular image-making practices might threaten entrenched industrial technologies and practices is telling. There is a tension between the potentially exciting proliferation of video making opportunities for the masses, and a fear that those same masses will use the technology the wrong way. And though the ‘wrong way’ may be determined by entertainment industry practices, the opinion is often articulated and enforced not from the top but from the middle, from a video bourgeoisie who want to distinguish themselves from the ignorant rabble: they, too, are using affordable cameras and means of distribution to tell their own stories, and they legitimate their practices by mimicking the perceived Hollywood gold standards as closely as possible. The makers of “Vertical Video Syndrome” are prolific web series video makers, clearly benefitting from innovations in affordable digital video acquisition and distribution, yet they have no trouble mocking the popular uptake of these tools when it exhibits formal characteristics that deviate from their aesthetic ideology. Of course, “Vertical Video Syndrome” is a comedy video, but later in the chapter I will return to this core tension and explore the ways in which it threatens to mire video practice in the material and aesthetic conventions of an antique medium.


Hold to Record

Opinions on vertical video aside, popular use of mobile and smartphone technologies to make and share video offers some very interesting and novel materialities to consider. I would even argue that vertical video has some distinct merits as an aspect ratio: it fits the human form, placing an upright human body in the center of the screen, and it fits the screen of the device as we hold it in our hand. The smartphone screen is increasingly our window on the world, so why not create videos that fill that window and fit the orientation that suits us? Most videos recorded directly with a smartphone’s native recording application end up on video sharing services like YouTube or on Facebook. The process, though, is not seamless, and in terms of social media flows, video has generally been subordinate to the still image. The most visible forms of video sharing were the exceptions: viral videos passed around for some remarkable quality. The still image, by contrast, can be easily and carefully composed as a visual moment of time, then consumed at a glance in the cascading flow of social media streams. Here, I am talking about the way we share media amongst networks of friends and family, not necessarily seeking a wider public audience, but not necessarily warding against one either. In this space, the Instagram app came to dominate still image making and sharing.

The qualities that made the Instagram app popular were also those that made me wary of it. I indulged my own snobbishness, though for slightly different reasons than those described above. The Instagram app limits the resolution and aspect ratio of an image, creating a standardized square image that is easy to display across a wide variety of contexts and that creates a uniform visual experience. The other signature feature of Instagram is its use of filters; along with the square aspect ratio, the app supplies built-in processing options to create an image reminiscent of vintage-looking instant photography. My reluctance to participate in this trend was due to what I saw as a mucking up of the intrinsic material qualities of the medium being employed: why would you want to throw away pixel information and inflect your image with a degraded aesthetic that arises from no internal motivation of the subject?

It was not until Instagram added video that I took a second look at the app and realized that my previous reading was limited. I had been judging the software as something that got in the way of the full capability of the tool, and imagined that it was therefore limiting potential experimentation and innovation. This view had grown out of my experience of working with cameras whose software capabilities were limited. The smartphone is a different tool altogether, and the software-based image processing built into the Instagram app simply utilizes a different aspect of its materiality. I may not personally care for the choices, but they do represent a different kind of innovation. In assessing new forms of social video capture and platforms, I have tried to keep my mind more open and to consider them on their own terms and within a framework that considers materiality.

Instagram video came after Vine. Vine attempted to bring video into the social flow by placing strict and careful boundaries around what a video could be: Vine videos (Vines) are limited to approximately six-second loops and square aspect ratios. Rather than limiting uptake and use, these constraints have actually spawned whole new genres of video making and micro storytelling. Separate from Vine content, I am interested in the way that the app creates a different kind of capture experience. To create a Vine recording, the user presses and holds on the smartphone screen and a status bar fills up, showing how much of the allotted recording time is left. Lifting the finger pauses the recording, so that multiple shots can be stitched together until the allotment is reached. Although the gesture is simple, it creates a powerful physical aspect to the process of digital capture: a feeling that the tangible act of pressing and holding on the screen is responsible for filling up the recording. The duration of the video is now linked sensorially to the duration of a physical act. And though the ‘press and hold’ is a simple act, it has a feeling of immediacy, almost urgency, that evokes a pleasant sense of focus when creating this new kind of video.
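The interaction logic itself is simple enough to sketch. What follows is a minimal, hypothetical model of a hold-to-record allotment, not Vine’s actual code; the six-second figure and the names are my own shorthand. Its only point is to show how touch duration and recording duration become the same quantity:

    # A minimal sketch of a 'hold to record' capture loop (hypothetical, not Vine's code).
    # Recording accumulates only while the finger is down, across multiple touches,
    # until a fixed allotment is used up.

    ALLOTMENT = 6.0  # seconds of total recording time (Vine-style constraint, assumed)

    class HoldToRecord:
        def __init__(self, allotment=ALLOTMENT):
            self.allotment = allotment
            self.recorded = 0.0   # seconds captured so far
            self.shots = []       # duration of each press-and-hold segment

        def hold(self, seconds):
            """Finger is down for `seconds`; capture until the allotment is full."""
            capture = min(seconds, self.allotment - self.recorded)
            if capture > 0:
                self.recorded += capture
                self.shots.append(capture)
            return self.remaining()

        def remaining(self):
            return self.allotment - self.recorded

    clip = HoldToRecord()
    clip.hold(2.5)   # first press and hold
    clip.hold(1.0)   # lift, reframe, press again
    clip.hold(4.0)   # only 2.5 s of this hold is captured; the bar is now full
    print(clip.shots, clip.remaining())   # [2.5, 1.0, 2.5] 0.0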

Other video apps aimed at the social space employ a similar ‘hold to record’ functionality, but with a variety of other affordances and constraints concerning the form of video produced. Instagram’s video recording allows for a length between 3 and 15 seconds. Instagram also allows users to include an existing video clip from their device’s internal storage. Earlier versions of the Instagram app included more extensive controls for integrating existing clips, allowing users to define the section of video to include and to determine how the video would be cropped to adapt a widescreen image to the square Instagram aspect ratio, but later iterations limited this functionality and removed the ability to mix new recordings with saved content. This sort of progress marked by the retraction of controls and features is indicative of the goals of an app like Instagram, which aims to make an easy-to-use and easy-to-understand experience for as many people as possible. It also points to how quickly video can become unwieldy and complex. Authoring with time and space can quickly break the sheen of uniformity that is an inherent goal of an app like Instagram. Instagram is meant to make it easy to share video snapshots of life, to allow a user to carefully construct a shareable vision of their experience in and through the app, the device on which it resides, and the platforms on which the flow of images propagates and spreads. The touch screen is the portal by which pressing and holding to record transmits the real body into real virtuality based on the constraints of the tool being employed.

In contrast to the intentionally constrained capabilities of Instagram and Vine, Mixbit attempts to create a social video experience that is much more aligned with database identity, the flows of real virtuality, and the infinitely recombinatorial nature of life expressed through media. The recording interface is similar and the press-to-record feature persists. But whereas the other apps have a finite allotment of recording time that quickly fills the status bar, Mixbit does not limit recording time at all. Each new shot is represented with a different colored bar, and the shots themselves are easy to reorder and trim in the editing phase, something that is less easily achieved in the other apps. But the real difference of Mixbit is that even after a video has been finished, saved and shared, the integrity of each constituent shot is preserved; by accessing their content through the web-based Mixbit interface, a user can always go back to the source material and remix or remake their work. And other users can do the same: all users have the ability to remix other users’ work at the level of the shot. This preservation of the basic units of the source material is an important distinction. When video workflows bake in editing choices and the original shots are lost, the sense of life, identity and expression that crystallizes in a given work becomes an isolated cell, linked from one work to the next only through potential recontextualization, while the potential to revisit and relearn from individual moments is hampered. When the raw material of a video work is preserved and made accessible, there is always the possibility of a deeper kind of continuity, where even a finished, edited, derivative work maintains a meaningful connection to the source flow; the branching system of roots remains intact from one outgrowth to the next.
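To make that architectural distinction concrete, here is a minimal, hypothetical sketch of the kind of non-destructive model Mixbit’s behavior implies; the class names and fields are my own invention, not Mixbit’s actual schema. A published video is nothing but a list of references into immutable source shots, so any remix can always reach back to the originals:

    # A minimal sketch of a non-destructive, remixable video model (hypothetical,
    # not Mixbit's actual schema). Published videos are just edit decisions that
    # point back into immutable source shots.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass(frozen=True)
    class SourceShot:
        shot_id: str
        owner: str
        duration: float  # seconds of original, untrimmed material

    @dataclass
    class EditDecision:
        shot: SourceShot
        t_in: float      # trim in-point within the source shot
        t_out: float     # trim out-point within the source shot

    @dataclass
    class PublishedVideo:
        author: str
        timeline: List[EditDecision] = field(default_factory=list)

        def remix(self, new_author: str) -> "PublishedVideo":
            # A remix copies edit decisions, not pixels; source shots remain shared.
            return PublishedVideo(new_author, list(self.timeline))

    hike = SourceShot("shot-001", owner="alice", duration=42.0)
    summit = SourceShot("shot-002", owner="alice", duration=18.0)
    original = PublishedVideo("alice", [EditDecision(hike, 5.0, 9.0),
                                        EditDecision(summit, 0.0, 6.0)])
    remix = original.remix("bob")
    remix.timeline[0] = EditDecision(hike, 20.0, 30.0)  # reach back into the full shot
    print(len(original.timeline), len(remix.timeline))  # 2 2 -- the original is untouched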

The mobile devices we use to make and share images of our lives are also the conduits through which we enact a large proportion of our cultural rituals and social activities. Using them to converse with friends and family in text-based conversations, and then switching to their camera functions, creates a practice of mediation imbricated with all other aspects of our digital selves. The off- and on-screen worlds blur together seamlessly, and the potential for projecting our identities in and through malleable media spaces increases as new forms and affordances arise. The recently introduced Hyperlapse is made by the makers of Instagram but stands alone as a separate app. Meant exclusively for video, Hyperlapse offers a way of condensing long experiences into short videos. Instagram and Vine are suited to video snapshots: moving moments of time and micro narratives. But their affordances break at the prospect of a longer arc. In its simplest sense, Hyperlapse allows the user to make a long recording and then process and preserve it at a high playback speed. In this way, a user could potentially keep their mobile device on and recording throughout the duration of a long-form experience such as a music festival or wilderness hike. The practice is already there for many; people already have their devices in hand or close to hand, already framing the experience through the screen of the device, whether they are taking a series of still images or various video clips. Hyperlapse enables a sensible step in the evolution of such practices, allowing for almost continuous recording that is still watchable in a condensed version. And the trace of the original is eradicated once a playback speed is baked in during the processing phase. So even as the user experiences an event in real time, they are projecting themselves into a potential future where time collapse is already accepted and understood, so that even in the moment of recording, their perception reaches out towards an altered state of understanding where sound is lost and motion is smoothed.
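At its core, the time collapse is simple frame decimation: keep one frame out of every N and play the result back at the normal rate. The sketch below is my own simplified illustration of that principle; Hyperlapse’s actual processing also stabilizes each frame, which this ignores:

    # A simplified sketch of time-lapse condensation by frame decimation
    # (illustrative only; Hyperlapse also stabilizes frames, which is omitted here).
    def condense(frames, speedup):
        """Keep every `speedup`-th frame so playback runs `speedup` times faster."""
        return frames[::speedup]

    fps = 30
    one_hour = [f"frame_{i}" for i in range(fps * 60 * 60)]  # 108,000 frames recorded
    clip = condense(one_hour, speedup=12)                    # 9,000 frames remain
    print(len(clip) / fps, "seconds of playback")            # 300.0 seconds = 5 minutes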

Those two features of Hyperlapse, the loss of sound and the smoothing of motion, are less visible than the time collapse but equally important. In its first iteration, at least, sound recording is elided entirely in favor of a smooth visual experience. And smooth visuals are emphasized through an impressive image stabilization technique that uses the device’s internal gyroscope to create a video product that seems almost to float. All signs of shaky hands, the percussion of footfalls, the flutter of heartbeats that would otherwise be accentuated through high-speed playback of handheld video are smoothed away in Hyperlapse. And though this might seem like a disconnect or an erasure of the body, subsuming the original perspective in the algorithmic vision of the machine, it actually more closely resembles the experience of the moment of recording: when you hold the screen in your own hand, your body moves in sync, your eyes and hand and screen all moving together, so that shifts and jerks and bumps seem smooth until you see them played back on a fixed monitor. Watching a Hyperlapse feels more like your own conception of intentional movement through space. Even the collapse of time feels germane to the dynamic sense of time’s passing that each of us experiences at different speeds, with different weights, depending on our feelings, moods and intentions. Hyperlapse is gimmicky and meant to be fun. But that sense of experimentation and fun, anchored to a simple concept and a tweak of time, is an important form of video experimentation, offering new ways of experiencing the self in explicitly social spaces. Hyperlapse, Vine, Instagram, Mixbit: all are meant to be used to make and share immediately, to shorten the spaces between experience, recording and distribution to the point that the boundaries blur and all of life is a potential video, whether the camera is rolling or not, and choosing the tools and moments for actual recording is a pleasure, just another action of deciding who to be and how to live.


Dumb Devices in the Middle Ground

When the tools and actions of recording are not linked explicitly to immediate sharing within the social flow of digital life, the stakes of practice change. I want to look now at the practices and social currents surrounding the middle ground of contemporary video practice, highlighting similarities and differences between casual social use and the more costly, intentional uses of enthusiasts, independent artists and professionals, and delving deeper into the defensive identity issues proposed at the beginning of the chapter, issues that run tenuously all along the spectrum of video practices. I want to start by looking at aspect ratios and frame rates.

My first video camera purchase was a Canon GL1 in 1999. The GL1 had an integrated zoom lens and recorded a standard definition digital video signal onto MiniDV tapes. It was a 3CCD camera with a native aspect ratio of 4:3 that recorded a 60i signal. Each of these features was standard for North American video at the time. What made the camera interesting, and marked it as a transitional product at a transitional time, was the recording format and the presence of an IEEE 1394 ‘FireWire’ port that allowed for a lossless transfer from the MiniDV tape into the digital space of a computer hard drive and non-linear editing system. These features had been cropping up in high-end consumer cameras and leading more people to want to use them for independent cinema projects; the image quality was high enough that they were suited to a level of production beyond home movies. DV recording in the full-size DVCAM format represented higher quality and higher cost, while Betacam SP and Digital Betacam (Digibeta) remained the professional formats of choice at the high end. But low-end video still looked too much like video. Now that resolution was sharpening, people were looking for other ways to make their video look more like film. This came down, to a large extent, to frame rate, aspect ratio and depth of field, and led to the kind of intermediate steps visible in the GL1, each of which degraded the quality of the video signal in order to enhance its aesthetic appeal to those desirous of a ‘film look.’

The aspect ratio of the GL1’s image sensors was 4:3, but it had a feature that would create a 16:9 aspect ratio by effectively cropping and then stretching the image during recording. Once properly digitized, video editing and playback software could render the image back in a 16:9 aspect ratio. This achieved the desired widescreen appearance at the cost of image resolution. Along the same lines, the GL1 had a feature called ‘frame’ mode that approximated the whole-frame look of film. A standard video signal at the time was composed of interlaced fields: half the horizontal lines and then the other half of the horizontal lines that compose the frame of the screen, rendered in alternation at roughly 60 fields per second to generate roughly 30 full frames of video. In film, each frame is whole and distinct and there are 24 of them per second. The term ‘progressive’ refers to a video image where all of the horizontal lines of a frame are rendered at once, rather than alternating in an interlaced manner. The GL1 approximated this look with its interlaced CCDs by scanning more lines at once, roughly three quarters of them, and deriving the remainder algorithmically through interpolation. This produces an image where each individual frame is whole and more detailed than a single interlaced field, and looks more like a film image, but it sacrifices the ability to render fast motion smoothly.
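The cost of the in-camera 16:9 mode is easy to quantify in rough terms. Assuming NTSC DV’s 480 active lines and ignoring pixel aspect ratio subtleties, cropping a 4:3 frame to 16:9 keeps only three quarters of the vertical detail before the stretch fills the frame back out:

    # Back-of-the-envelope arithmetic for an in-camera 16:9 mode on a 4:3 sensor,
    # assuming NTSC DV's 480 active lines and ignoring pixel aspect ratio.
    full_lines = 480
    retained_fraction = (4 / 3) / (16 / 9)           # = 0.75
    retained_lines = full_lines * retained_fraction  # 360 lines actually sampled
    print(retained_lines, full_lines - retained_lines)
    # 360.0 120.0 -- a quarter of the vertical detail is discarded before the
    # image is stretched back out to fill a 480-line recording.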

This trend of trade-offs continued at this level of the video camera market, with cameras from Canon, Sony, Panasonic and JVC taking different approaches to similar features. The next ‘big thing’ came with the introduction of 24P video recording on the Panasonic DVX100 in 2002. Whereas cameras such as the Canon GL1 approximated a progressive image, the DVX100 was able to capture a true 24-frames-per-second progressive image, dubbed 24P. Because it still recorded its video stream in a system that required a standard video frame rate, the camera performed something like an on-the-fly telecine, the process by which film is transferred to video and extra ‘pulldown’ fields are created at a certain cadence to conform the 24 frames per second of film to the not-quite 30 frames per second of video. When paired with the right editing software, DVX100 footage could be run through something like a reverse telecine, where the extra pulldown material was removed, resulting in a clean 24-frames-per-second image stream. The advantages were that it looked more like film, and that it made it easier to create a film print from the video, which was still necessary at the time for theatrical distribution. These features and possibilities were very popular, but in practice many people simply recorded in 24P without ever removing the pulldown, editing and distributing their work with the extra frames left in, resulting in blended frames and an uneven sense of motion rather than the intended film-like cadence.
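The cadence itself can be sketched in a few lines. In the standard 2:3 pattern, four film frames are spread across five interlaced video frames, two of which mix fields from neighboring film frames; this is what a reverse telecine has to untangle, and it is why the DVX100’s ‘advanced’ 24PA mode used a 2:3:3:2 variant that is easier to remove cleanly:

    # A minimal sketch of standard 2:3 pulldown: four progressive frames (A-D)
    # become ten fields, paired into five interlaced video frames.
    def pulldown_2_3(film_frames):
        fields = []
        for i, f in enumerate(film_frames):
            # film frames alternately contribute 2 and 3 fields
            fields.extend([f] * (2 if i % 2 == 0 else 3))
        # consecutive field pairs form the recorded video frames
        return [tuple(fields[j:j + 2]) for j in range(0, len(fields), 2)]

    print(pulldown_2_3(list("ABCD")))
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
    # The third and fourth video frames mix two different film frames; leaving
    # them in is what produces the uneven cadence described above.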

The tricks and trade-offs described so far all came built in from the manufacturer. But as these cameras became more popular, enthusiastic users went to even greater lengths to make their video cameras behave like film, and a whole market grew up around aftermarket modification. With regard to aspect ratio, the DVX100 had two ways of creating a 16:9 image from its 4:3 sensors: it could simply matte the footage, or it could do an anamorphic stretch the way the Canon GL1 did. Either way, lines of resolution were lost. A new option emerged in the form of an anamorphic lens adapter that screwed onto the front of the DVX100’s built-in zoom lens. This made the camera bulkier and less sensitive to light, but it provided an optical means of altering the aspect ratio that allowed the user to retain the full resolution of the camera’s sensors.

This trend went even further with the next generation of mid-range enthusiast cameras and the move toward high-definition (HD) video. At its outset, HD was too expensive for anyone but the highest-end industrial users because the capture medium, HDCAM tape, was so expensive. With the HVX200, Panasonic introduced a solid-state HD acquisition system that allowed users to record to proprietary P2 memory cards rather than to HDCAM tape, to transfer their footage directly to computer hard drives for storage and editing, and to reuse the P2 cards ad infinitum. The HVX200 had 16:9 image sensors, so aspect ratio was no longer an issue. However, since its imagers were small, the depth of field of the image was fairly deep. In order to get a shallower depth of field, many independent filmmakers began using 35mm lens adapter systems, like those from Redrock Micro, that allowed them to mount all kinds of different lenses in front of the built-in zoom lens; the adapted lens projected its image onto a ground glass, which the built-in lens then re-photographed. This made the whole camera system bulky and hard to move, it cut down light sensitivity, and it resulted in an inverted image. But it was still a popular choice if you wanted your video to look like film.
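The optics behind that deep focus can be approximated with the standard thin-lens depth-of-field relation, DoF ≈ 2Ncs²/f². The numbers below are hypothetical and only illustrative: matching the framing of a 35mm-style camera on a small chip requires a much shorter focal length, and depth of field grows roughly with the inverse square of that focal length:

    # A rough, illustrative depth-of-field comparison using hypothetical numbers.
    # Approximation: DoF ~ 2*N*c*s^2 / f^2, reasonable when the subject distance
    # is well inside the hyperfocal distance. It is rough for the small chip,
    # whose true depth of field is even greater, which only strengthens the point.
    def approx_dof_mm(focal_mm, f_number, subject_mm, coc_mm):
        return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

    # Matched framing of a subject 2 m away at f/2.8 (assumed values):
    dof_35mm_style = approx_dof_mm(focal_mm=50, f_number=2.8, subject_mm=2000, coc_mm=0.025)
    dof_small_chip = approx_dof_mm(focal_mm=7, f_number=2.8, subject_mm=2000, coc_mm=0.004)
    print(round(dof_35mm_style), "mm vs", round(dof_small_chip), "mm")  # ~224 mm vs ~1829 mm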

Vibrant online communities sprang up around these technologies so that users could share their practices, learn from each other, and stay up to date on tools and techniques. DVXuser.com was the hub of such communities, and even now it remains a vibrant source of discussion around the successive generations of technology that have emerged since its eponymous camera slid into obsolescence. Common thread topics included detailed technical discussions of the tools themselves as well as links to work that legitimized the tools as ‘professional.’ These common posts exemplified the sometimes conflicted nature of the practitioners, at once excited by and dedicated to the tools that allowed them to advance their craft, but also defensive against appearing like amateurs for using them. This defensiveness is visible in posts that highlight how the cameras have been used successfully by commercial productions, and in posts that demonstrate a fluency in the aesthetic language of mainstream Hollywood image making. Focusing on these two common types of discussion thread is not meant to take away from the generally positive and productive nature of these forums; many users, including myself, benefitted from the free exchange of knowledge and the sense of community.

One positive aspect of these online communities was the way in which they increased the direct loops of feedback and interaction between users and manufacturers. On DVXuser.com, Jan Crittenden, *title*, became a fixture on the message boards, representing Panasonic and engaging in active dialog about product developments and feature wishes for future updates and products. This sort of feedback and direct interaction evolved in several different, though interrelated, directions. As dialog increased with the big manufacturers, many new, smaller manufacturers also began introducing aftermarket accessories suited to the needs and practices of video makers in the middle ground.

  • Trends towards dialog with manufacturers: first Panasonic, and then the growing companies making aftermarket accessories, lens adapters, stabilizers, etc.

  • RED

  • DSLR

  • Stabilization and movement as kind, Kickstarter videos.

  • How the practices at the middle pushed hard enough to change the top. But they were so consumed with looking like 35mm that the ripples that rose to the top didn’t really change anything; they were already suited to that style of production. Maybe some changes with Peter Jackson and HFR etc.

  • But for me, there is a threat. We live through this tech and the cinema industry positions us in the dark and immobile space. If we are so committed to that single aesthetic, then we are making it real in the world around us and possibly bringing along the regressive power relationships inherent in a mature capitalist system.

  • Need to push in other ways, because the other side of those practices is inspirational, that our desire to frame and share our experience of the world can drive whole new ground-up businesses and manufacturing, where before, these were the hardest to break through.

  • Exciting possibilities of immersive, aerial, ultra high frame rate and ultra high def.

  • The ways in which the bottom pushes on the middle. The touch screens and video game aesthetics.

  • The database distribution of ‘pro content’ in systems like Netflix etc. starts to look more like how we shape and share our online identities. This is where I see a real opportunity: how to connect the social and our own content into the platforms that invite sit-back attention in the living room.

  • How we pick and use tools on our own terms, for our own needs, to strengthen our own voices and enhance our understanding of ourselves in and of the world.

  • Positive examples, very cool experimentation, video art like Hockney’s Yorkshire Landscapes (other examples from Australia show), r/stabilization and the stabilized GIFs. Enhance a new kind of looking on the world, also emphasize materiality and make the process visible in the presentation, and use the tools in an elegant, personal way. Not bending over backwards (mimicking film) to try to HIDE the materiality, the apparatus, trying to make it look like something else so that it can reaffirm the same worldview that has pervaded Hollywood output. Acknowledge why that aesthetic has valid merit - it makes an image acceptable in a certain kind of way to a certain kind of audience, and it has beauty in its light and movement. I am not saying it’s bad, just that it is important to interrogate the motive for why you want your images to look a certain way, and that there is much that can be gained by exploring the tools on their own terms as well. My own efforts to use video tools for expression, but also as a way of knowing.

  • In the next chapter, I trace my personal development alongside some of the tools mentioned here, my successes and challenges, balancing wanting to make images that are comprehensible but also operate on their own terms. That drive



Touch screens in other cameras, the difference between a button on the camera body and gestures on the screen, an interface between you and reality.

Aspect ratios and frame rates. The transition to digital video and early workarounds to get a 16:9 aspect ratio and film-like frame rates. Mucking up the signal, throwing away pixels or putting extra glass in front of the lens. Frame rates adding motion blur and pulldown frames that most people just left in instead of removing in the NLE. Obscures some of the real potential of these tools. Some of the tensions exist even at the high end.

High end and high frame rate. The Hobbit example and Jackson using Red cameras and high frame rates. A smooth look that is disconcerting to narrative movie consumers, but that technically gives a more lifelike view. But also more video-game-like.

Other places of convergence between computer imagery and video imagery. My focus on video made with cameras, not pure simulation. But the move towards making cameras capable of images that look more like simulations. Examples like immersive, drones, ultra high frame rate (wedding slo mo booths) etc. Emphasis on movement and stabilization.

Innovation in stabilization and movement devices. Kickstarter projects and excitement. This kind of amateur enthusiast practice that can harness demo and design fiction video to become a viable manufacturing company.

New video camera makers get into it too. Red is a great example, but also GoPro and Blackmagic and now the Digital Bolex. So that middle ground practice is no longer just adapting the technological offerings of big corporations, but making its own tools. These practitioners were already giving feedback and engaging in direct conversations with the big companies through internet forums, driving the evolution of the technology that way, as we saw first in the 24P MiniDV shift with the DVX and rippling on through the DSLR revolution, and now they are actually making the tools themselves.


Distribution

Life flow video snaking through the platforms through which we exercise our database identities.

Digital cinema packages and the complexity of commercial digital theatrical distribution.

The middle road of internet video platforms on smart TVs and set-top boxes like Roku. Different from encountering YouTube videos on your phone or computer. It is a sit-back experience that you can navigate with a few buttons on a remote control. Grid-view databases with different qualities and different ways of suggesting content.

Distributors becoming content creators. Content creators looking for ways of spreading and sharing.

I think this is an important space for innovation: making better and easier ways of bringing non-commercial content into living rooms and even theatrical spaces. The process of making is enhanced by mobility, but in many cases the process of viewing still benefits greatly from sitting back, sitting still, and really watching.


First, I circle back from the theoretical to the very practical. In Chapter 2, taking my cues from Friedberg, Cubitt and others discussed in Chapter 1, I offer an in-depth look at the state of video technologies and practices, including camera devices, editing tools, and platforms for sharing and distribution. At each of these levels there are powerful and compelling tensions and currents: video being remade in the image of a dying (or at least utterly transforming) film industrial complex, but also proliferating and democratizing, even in the commercial space, where DIY ambitions drive Kickstarter campaigns that create real products for camera stabilization, lights, and now a whole new generation of cameras. Manufacturing decisions are moving from distant engineers and product managers to crowd-sourced input. GoPro. Smartphones. YouTube to Instagram and Vine. Netflix, Hulu, Amazon. But what is it like to actually make video?


Lenses


Codecs




Lit

Manovich recent?

Materiality, recent book

Boyd?



Tech -


Digital video

Movement and movement devices

Stabilization hardware vs software

Immersion

Oculus viewer and drone combo as the ultimate example (though still imperfect) of the screen filling your field of view and the camera being able to move freely in space in a way that was previously the domain of video games and CGI.


Touch screen cameras, inviting the body into the capture process in new ways. Touch to focus and adjust, ‘here, this is where to look’ by touching the screen. Video as interface. Already augmented reality.


The onscreen displays of the GH4 and its touch functions and high res.


Touch to record: Vine, Instagram, etc. Holding down to fill up the allotment. Analysis of these platforms and how they fit with flows of real virtuality and database identities. Short clips vs Mixbit and remixability.


