Where am I? - A Thought Experiment into the nature and implications of new media art.

I’m sitting here which will become there. I’m sitting here now which will become there, then.

Through the actual utterance of these concepts around where and when I am doing what I am doing, I aim to evoke a sense of mental disconnectedness from linear ideas of space, place, time and perhaps narrative by calling a mental subroutine of imagination.

This imaginative subroutine call or process, this re-imagining and re-construction of time, space and linearity as, initially, a purely mental process, will hopefully allow me to investigate new media art as a means of understanding ourselves and the world around us.

The term New Media is problematic at best. Here is a definition from Webopedia (http://www.webopedia.com/TERM/N/new_media.html):

“A generic term for the many different forms of electronic communication that are made possible through the use of computer technology. The term is in relation to “old” media forms, such as print newspapers and magazines, that are static representations of text and graphics. New media includes: 

• Web sites 

• streaming audio and video 

• chat rooms 

• e-mail 

• online communities 

• Web advertising 

• DVD and CD-ROM media 

• virtual reality environments 

• integration of digital data with the telephone, such as Internet telephony 

• digital cameras 

• mobile computing”

This definition helps to contextualise some of the issues involved in trying to find a starting point from which we can investigate new media:

Firstly, this definition is itself a new media “product” or artifact in that it is an (HTML) web page that links to other webpages and other social sites via hyperlinks. It is a subjective artifact of that which it tries to define.

Secondly, this HTML page is transient in nature – although this definition may hold today (and even that is not certain), it does not follow that it will hold true tomorrow. This subjective artifact of the medium it purports to define is time-specific.

Thus there is a peculiar (perhaps even peculiarly postmodern) aspect to the very definition of new media, whereby it can be said that new media does not necessarily know what it is and a definition will always be hard to nail down because of the word “new”. 

In this way, new media can be seen not only as transient, fluid and ever changing in terms of both its form and content, but also as being in a liminal state; a state of “becoming”, of evolving, of metamorphosing from what it is “now” to what it “will be”.

At this point I am going to halt my own linear narrative and take a little time to reflect on what I have just written, in order to gain the critical distance necessary for a “reflective experience” (Lee, 2009).

What I will do however is place a mental hyperlink from this point to a point in the future where I can re-open this particular window of investigation.

So, jumping back to our fleeting and fluid definition of new media, it suggests that “electronic communication and computing technologies” are at the heart of new media. So let us open that window and investigate…

(@Schrodingers cat has just tweeted me saying he’s been sick in his box!)

Bill Nichols, in his essay “The Work of Culture in the Age of Cybernetic Systems”, describes the computer as “..more than an object: it is also an icon and a metaphor that suggests new ways of thinking about ourselves and our environment, new ways of constructing images of what it means to be human and to live in a humanoid world.” (in Caldwell, Theories of the New Media).

The iconic computer, from Xerox PARC through the Amigas and Spectrums to the iPhone and iPad, represents a scenario unique in cultural and social history.

For the first time, the means of production (the tools), the routes of dissemination (the network) and the means of consumption are all mediated through the same technologies, based on the computer interface. 

From work to leisure we are now near-ubiquitous users of computer technology, and while these developments can promote the democratisation and participatory nature of personalised media and art production, the corporate, capitalist, “traditional” re-mediated media are ever present, perhaps seeking to control and redefine the new in order to develop “TV adverts only better”.

However, before this story moves toward an exploration of what Bob Stein refers to as the “M word” (for both Marketing and Marxism) in new media, I feel it important for me to browse more laterally towards an examination of the computer as interface (the GUI).

Page loading…

So I’m still sitting here/there (actually I’m somewhere and some-when different) and trying to articulate some of the plethora of thinking around new media art. 

As ever I find myself in front of a computer and, more precisely, in front of a screen. The display, the screen itself, is often cited as the critical area of investigation in new media, the one that perhaps offers the widest potential in terms of understanding ourselves and the world around us.

As Hyun Jean Lee states in “The screen as boundary object in the realm of imagination”:

“As an object at the boundary between virtual and physical reality, the screen exists as both a displayer and as a thing displayed, thus functioning as a mediator. The screen’s virtual imagery produces a sense of immersion in its viewer, yet at the same time the materiality of the screen produces a sense of rejection from the viewer’s complete involvement in the virtual world. The experience of the screen is thus an oscillation between these two states of immersion and rejection.”

The screen and the frame, the viewed and the viewer are all familiar areas of investigation in the history of art but the key difference between new media art and what has gone before is this sense of “immersion” and interactivity.

Whereas in traditional media forms there is a sense of broadcast, a one way mediated “lecture” between producers and audience via the frame or screen, the new media screen offers a dialogue between the window object of perception/presentation and the viewer/user.

This shift from viewer to user forms a central investigation point in new media criticism – the concept of a realisation and representation in real time of a window that can also mirror and reflect a user's actions offers up new forms of experimentation and investigation.

The key here is the digital nature of new media forms. At the heart of any digital representation is digital data which, due to its construction (made up of 0s and 1s at its most fundamental), can free form from content and can therefore be manipulated and re-represented:

Thus in the digital world of the window mirror we can re-represent data feeds from weather stations as digital graphs, visualise audio data and even “listen” to digital images. 
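To make this concrete, here is a minimal sketch in Processing (the environment I use for other experiments described later) which re-reads the raw bytes of any file, whatever its original form, as grey values in an image. The filename is only a placeholder for whatever file happens to sit in the sketch's data folder:

byte[] raw;

void setup() {
  size(400, 400);
  raw = loadBytes("anyfile.mp3");   // placeholder: any file at all - audio, text or image
  noLoop();
}

void draw() {
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    int v = raw[i % raw.length] & 0xFF;   // re-read each byte as a brightness value
    pixels[i] = color(v);
  }
  updatePixels();
}

The same bytes could just as readily be mapped onto frequencies and “listened” to; the point is simply that once data is digital, its form of presentation is no longer bound to its content.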

As experimental art practice has taken advantage of digital data's ability to free form from content, the folding of both space and time through digital manipulation and re-mapping has led to interesting insights and a re-imagining of our perceptions of the world around us.

For example, the work of Tamás Waliczky, who describes his “time crystal” works as aiming

“..to preserve in frozen form brief moments in an individual’s life. These crystals exist simultaneously alongside each other in space, and a virtual camera (whose viewing angle is to some extent the lofty vantage point of God) can observe them from any desired location. By travelling through the time crystals, the camera can re-produce the original movement, but from a diverse range of perspectives and at varying speeds.” (http://www.waliczky.net/pages/waliczky_sculptures1-frame.htm)

Other works of note that involve the folding and re-representation of time and space include:

Zbigniew Rybczyński’s “The 4th Dimension”, where the artist “uses images like geological layers. He does not play with the image as a background/form, but as a geological mound. For him, each line becomes a system that can be isolated, the way a geologist approaches and analyzes each single stratum” (http://www.zbigvision.com/The4Dim.html)

And Steina’s “Bent Scans” (2002)

“The installation uses four computers resulting in four different image projections. Though all four computers have the same camera input, a different program on each creates a very different video image on each projection. By stepping into the camera view, the visitor will experience a different view of him or herself in an immediate past time.” (http://www.vasulka.org/Steina/Steina_BentScans/MOV_Bent_Scans.html)

This small sample of works that emphasise our relationship with time and space perhaps has its origins in experimental moving image work that introduced specific time- and space-based interventions in real time:

For example, Peter Campus's “Origins” (1972), Peter Weibel's 1973 work “Observation of the Observation: Uncertainty” and Dan Graham's “Present Continuous Past(s)” (1974).

The commonality between these works often involves the shifting perspective of the participant/viewer when engaging in the interactive feedback loop offered through the screen/frame that represents both window and mirror. The user participant uses the screen/frame both as a window through which to look into a parallel space-time and as a mirror that reflects his own interaction and participation in the feedback loop. In this way the screen becomes the mediator of the experience as well as existing as a central object in the space of the work.

This notion of feedback and interaction is critical in investigating how the re-visualisation of digital data can impact on the viewer/user.

Interaction in art is not a new media phenomenon; there has always been a dialogue between artist/producer and viewer that exists as a feedback loop, whereby the viewer (of a traditional static artwork) can reflect his own interpretation onto the work.

This act of interaction as interpretation has traditionally allowed a critical space for reflection on, and reading of, an artwork. Within the screen of new media, however, that room for reflection, a space and time in which to evaluate the experience, is often lacking, as the screen/frame itself, as part of the experience, begins to dissolve and the critical distance between artwork and viewer diminishes.

Therefore, in order to encourage that critical reflective space, the new media window/mirror finds itself in a shifting state of presence: sometimes window, sometimes mirror and sometimes traditional frame, in order that the viewer/user can interact, be immersed and reflect on the experience.

Thus in new media the frame of the screen becomes mirror and window, and as the complexity of the spatio-temporal layers of interaction increases, so the frame becomes a frame within a frame; windows that look at mirrors and mirrors that reflect disparate times in that same windowed space.

The window I am looking at now as I type this, contains a representation of the time I have spent doing this typing. Simultaneously, in the same space where I type this there are several other windows that offer me a view into other “times” and “places”. 

As these windows literally open up in terms of the GUI of the screen, we are faced with interacting with a mirror world that is a multi-temporal space.

This spatio-centric world of my desktop thus offers me windows into (or onto) digital imagery and film (as I wait for my film to upload to Vimeo), the digital content of my computer, a program I am developing in Processing, my website etc. whilst in the same space I am engaged with and interacting with what is now and what will be as I continue to type.

This multi-temporal, spatio-centric world of the computer, with which we all now seem familiar, is suggested by Manovich as an underlying condition of “digital cinema”, whereby we may come to expect an increasing emphasis on spatial elements of arrangement, of montage and of experience, as opposed to the “show and replace” temporal approach of traditional moving image practice.

This is but one example of thought around the implications for narrative in the new media frame. In Manovich's digital cinema, temporal narrative is seen to be replaced by a rediscovered spatial narrative akin to the art of Giotto and Courbet.

A second concern around the nature of narrative in a new media context loops and links back to my earlier exposition of the interactive new media loop in the ever-morphing window/mirror. Manovich insists that the re-discovery of the loop as a narrative driver for new media has its basis in the mechanical, as in the cranking, circular loop of early movie cameras and the zoetrope. So the loop has a rightful place in the digital cinema as a “re-found” means of communication and representation.

However, the loop of the new media window/mirror can be seen to break with traditional “Aristotelian” narrative (with its beginning, middle and end, its archetypes and its prescriptive form).

The conflict between Aristotle's Poetics and new media can be seen to centre on “..his idea of ‘mimesis’ as a truthful reflection of ‘reality’…(which) cannot hold since today it would make more sense to talk of ‘multiple realities’ for different readers (users). The reader activity and background therefore becomes much more important in thinking about what happens when hypertext (new media) narratives are ‘read’ ” (http://www.cyberartsweb.org/cpace/ht/hoofd3/)

The nature of this narrative break may lie in the digital nature of contemporary new media. Aristotle's Poetics relies on a production-line system in which one process follows another in a linear fashion. In the digital world this can be compared to early DOS programming, where commands were executed sequentially, one after the other, so as to produce a linear system.

With the advent of Object-Oriented Programming (OOP) (notably with the development of Apple's operating system), systems became co-existent and able to communicate with each other at any time, depending on the route of the data flow.

For example, the word processor I am using now holds the text objects themselves as I type, but simultaneously there exist a number of tools that can affect the form of that content if I choose to activate them (changing the font, colour, size and so on).
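A sketch of that separation, again in Processing and purely as an illustration rather than any actual word processor's internals, might hold the content and its formal attributes on the same object, with the “tools” as methods that can act on the form at any time:

TextBlock para;

class TextBlock {
  String content;                      // what is said
  String fontName = "Serif";           // how it is said: the form, held separately
  float fontSize = 18;
  color fontColour = color(0);
  TextBlock(String c) { content = c; }
  void setFont(String name, float size) { fontName = name; fontSize = size; }
  void setColour(color c) { fontColour = c; }
  void display(float x, float y) {
    fill(fontColour);
    textFont(createFont(fontName, fontSize));
    text(content, x, y);
  }
}

void setup() {
  size(500, 200);
  para = new TextBlock("The content stays the same while its form is re-made around it.");
}

void draw() {
  background(255);
  if (mousePressed) para.setColour(color(200, 0, 0));   // activating a "tool" restyles rather than rewrites
  para.display(20, 100);
}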

Once again we witness windows within windows, form and content separated, and a multitudinous narrative spreading out before us that is dependent on interaction via a GUI, and that is both dependent on and a result of the digital nature of the truly multi-media machine (literally, many ways or forms of communicating).

So this narrative break, whether heading towards a more spatially based narrative and/or away from Aristotle's Poetics, poses an issue in new media art.

Within the digital context as investigated here, narrative seems to be in a constant process of creation through interaction, which leaves the idea of an “ending” somewhat redundant.

So, because of the very nature of the user/viewer interaction and the doubling multi-path options provided by that interaction in the non-linear window/mirror interface, there is a sense of new media art being un-finished, in flux or becoming.

This notion of “un-finishedness” fits well with the fleeting temporal nature of new media, although it can create a sense of “failure” and even death. The term “unfinished” is tainted with failure as well as with the romanticism of imagination it stirs within us.

Consider, for example, how Benjamin would have formulated and presented a final version of his “Arcades Project”. How would Schubert's unfinished symphony have turned out? And what impact might the continued work of Keith Haring and Paul Monette have had on the world?

Thus the unfinished inspires imagination but also conjures up concepts of failure and the incomplete.

But perhaps, as Peter Lunenfeld expresses in his essay “Unfinished Business”, it is the process of becoming, rather than the end, that is in need of celebration.

A constant flux, movement and dialogue between what is and what may be, can perhaps shine a light on that which “..is not a resolution, but rather a state of suspension” within which there are constantly emerging new opportunities and developments whereby we are encouraged to re-imagine, reflect and re-invent ideas of our own perceptions.

New media with its constant re-invention of itself through its own mediation perhaps has an affinity with the unfinished – to quote Ted Nelson “Everything changes every six weeks now”.

The new platforms, new programs, new approaches and new tools that develop, that obsolesce and that are replaced perhaps exemplify the position of new media at the forefront of investigation into this philosophy of the unfinished narrative of the self.

Lunenfeld goes on to explore how new media activities and engagement can be compared to the flâneur and to the mid-20th-century avant-garde movement the Situationist International (SI).

The flâneur, with his altered, aloof and observational vantage point, and the SI's notion of the “dérive” encourage a re-examination of the urban cityscape in order to “engage with the city as an open-ended place of play and investigation”.

The derive can be described as “a technique of rapid passage through varied ambiences. Dérives involve playful-constructive behavior and awareness of psycho-geographical effects, and are thus quite different from the classic notions of journey or stroll.” (http://www.bopsecrets.org/SI/2.derive.htm)

This notion of the meander through the post-modern isolation of the cityscape can be paralleled with the new media experience in terms of both experience and language: we often “browse” the internet and create complex cyber-psycho-geographies of the online world.

We follow where mood and links (hyper and physical, HTML, feet and trains) take us in order to discover something new, or reflect on something known or to gain a new perspective.

This flâneuring typifies my meandering through this particular presentation of new media. My dérive seems to drive or link me to a playful analogy between my imagination as a system for the re-mapping of space, place and time and the world I gaze into, the windows I use and the mirrors in which I see my thoughts reflected.

If, therefore, new media possesses aspects of the “un-finished” and elements of the “dérive”, this helps me to hyperlink back to the anchor icon I left earlier when discussing liminality.

As indicated through the constant re-invention and the very “new-ness” of new media, any art/media produced within this all-consuming, re-mediating space possesses an element of “becoming” and thus an aspect of the liminal: between states, neither this nor that, somehow intermediate.

The consequences of this line of thought lead to a number of different scenarios that can offer unique insights and discourse with regard to an examination of ourselves and our environment.

The cybernetic world of the “post-human”, exemplified through the work of Orlan and Stelarc, offers new ways and platforms to discuss issues around gender, feminism and politics, as the physically augmented self (through cosmetic surgery, pacemakers and dialysis, to name but three current cybernetic enhancements) “becomes other than (themselves) which is mediated through the new technology which determines it” (Clarke, in The Cybernetic Experiments).

However, with this liminality comes an opposing view of the liminal as “monstrous, diseased, queer, black, female, insane” and “polluted” (Clarke). 

As much of the literature draws on concepts behind moving image works such as The Terminator (1984), Robocop (1987), Johnny Mnemonic (1995), Blade Runner (1982) and the multitude of “Frankenstein” adaptations, the cybernetic investigation calls us to question our nature in terms of how we react to “the other”, how we can come to terms with the “unheimlich” of the cybernetically “altered”, and what this represents in terms of our understanding of our own natures.

If we apply these concerns to new media, the sense of the “unfinished” and the “dérive” of non-linear narrative bring into question the supposed “rationality” of the machine.

We are often halted in our everyday pursuit of operational procedure by claims of “illegal operations” and “fatal exceptions”, as the temporal logic of the digital production-line command clashes with the “illogical”; the “irrational”.

This error; the polluted, the illogical, the irrational, the corrupt, the bug, the glitch, can often stop a process dead – “finished”. 

But the glitch itself, the ghost in the machine, the “irrational” is what may help us glean a deeper understanding of ourselves as irrational systems; it can force a new perspective.

Thus, the glitch, the error, the corrupt, may inform a digital derive that we had not thought of before and hyperlink us into a new loop that helps lay out a new, new media narrative that is closer to our irrational selves and may help us to understand, interact with and perceive the world around us in new ways.

So, as I hyper-jump via an irrational glitch to a new window/mirror, I will not finish this but instead loop back to my imagined starting point, wherever and whenever that may now exist, and restart in the middle of the narrative with a quick “Where was I?” and “Where am I going?”

</start>

Bibliography

Bignell, J. (2000), Postmodern Media Culture. Edinburgh: Edinburgh University Press

Bolter, J.D. and Grusin, R. (2000), Remediation: Understanding New Media. Cambridge, MA: MIT Press

Bolter, J.D. and Gromala, D. (2005), Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge, MA: MIT Press

Caldwell, J. T. (Editor), (2000), Theories of the New Media: A Historical Perspective. London: Athlone Press

Lee, H. J. (2009), The Screen as Boundary Object in the Realm of Imagination. Atlanta, GA: Georgia Institute of Technology

Lunenfeld, P. (Editor), (1999), The Digital Dialectic: New Essays on New Media. Cambridge, MA: MIT Press

Manovich, L. (2001), The Language of New Media. Cambridge, MA: MIT Press

Rieser, M. (Editor), Zapp, A. (Editor), (2004). New Screen Media: Cinema/Art/Narrative. London: BFI Publishing

Zylinska, J. (Editor), (2002), The Cybernetic Experiments. London: Continuum

Data Moshing #4

As opposed to doing some work (which explains why I am posting so much this weekend) I was aiming to datamosh an explanation of datamoshing as part of my writings on the nature of new media interactive art.

As this is for me a purely experimental process, the product I was perhaps looking for did not necessarily emerge. However, it seems to me that this cyber-flâneuring in which I am currently engaged really does underline the use of the glitch to free oneself from the temporal narrative of traditional moving image experience and practice. Away from the loop (and maybe because of it) we enter into the realm of the unfinished and the dérive:

"a technique of rapid passage through varied ambiences. Dérives involve playful-constructive behavior and awareness of psycho-geographical effects, and are thus quite different from the classic notions of journey or stroll."

So, borrowing heavily from the ideas of the Situationist International, my meanderings this weekend have followed a dérive that leads me not to that but to this, not to there but to here….

More DataMoshing

This time it’s straight from the datamoshing recipe (links over on the blog):

…and a bit of randomness that perhaps gives more of an insight than it should..?

Getting my Glitch on…

After getting extremely excited about datamoshing, databending and all things glitch I’ve started some experiments using Processing to bend some jpgs.

This is part of my ongoing investigations into the nature of the “ghost in the machine”, randomness and new media. As I'm putting together an investigation into the nature of interaction in new media art, I thought this might be a good starting point.
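For anyone curious, the core of these Processing experiments is nothing more sophisticated than reading a JPEG as raw bytes and corrupting a few of them past the header, roughly along these lines (the filename and the numbers are just placeholders for the values I happen to be playing with):

void setup() {
  byte[] b = loadBytes("source.jpg");       // the untouched image, in the sketch's data folder
  for (int i = 0; i < 20; i++) {
    int pos = int(random(600, b.length));   // skip the first few hundred bytes so the header survives
    b[pos] = byte(int(random(256)));        // overwrite a byte of compressed data at random
  }
  saveBytes("bent.jpg", b);                 // the glitched copy
  exit();
}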

Definitely worth checking out in relation to these ideas (plus the data bending/moshing stuff over on the blog) is this book: http://www.openhumanitiespress.org/immersion-into-noise.html

Projection Mapping Experiments

Projection Mapping

Using projected images that are mapped to the surface features of, for example, a building has presented me with a great deal of inspiration regarding the re-mapping and re-imagining of space and place with specific regard to the moving image.

Although familiar with projecting live and composited images in VJing, I had never before come across this idea of using structures, their textures and nuances as a projection screen (examples here: http://mashable.com/2011/04/24/3d-projection-mapping/#OqLFYcretDg)

Inspired both by how this method can transform a place and by the potential scale of what could be achieved using Puredata and nothing more than a projector, I wanted to investigate the impact of projection mapping on both a small and a large scale and also develop means of incorporating my research to date into a final piece.

For these experiments and the final piece I wanted to revisit some of the themes that had drawn my attention during previous modules, including Experimental Practices and Histories and Theories, and use these as a basis to inform my development of a projection mapping patch in Puredata.

TEST #1: During the two-week period of 11th to 21st July 2011, I was able to set up an installation as part of the “Great Big Empty Shop Experiment” (http://cci.glam.ac.uk/big-shop/), which was inspired by my Experimental Practices work around re-mapping time and space, the sense of the uncanny (das Unheimliche) (Freud, 1919), how the outside can become inside and daytime in the dark (Carroll, 1996), and the divided cinema (Cubitt, 2004).

(Stills from the Great Big Empty Shop experiment. The installation was only viewable through a reversed spy hole on the door.)

This experience gave me a great opportunity to experiment with perception (the viewer only being able to witness a fish-eye view of the installation) and the sense of place (the projected images were close-ups of lush greenery, somehow out of place amongst a room of shop mannequins).

Taking these ideas forward and using projection mapping techniques developed through Puredata, I produced the following test piece, which again explores the idea of the unheimlich (the familiar made unfamiliar) through re-mapping and animating the mannequin's face.

TEST #2: Proceeding with the idea of smaller-scale projection mapping, in this test I was interested in getting much closer to the objects being mapped and getting a sense of texture, and a sense of re-mapped space and time.

The “still life” here had some issues of spill due mainly to the arrangement being moved prior to projecting, although the handheld camera phone does allow the viewer to get a sense of the textural nature of the piece, especially the bottle as it is not completely covered.

(Mapping a still life with animated textural grid)

TEST #3 (Final Piece): For this final test piece I wanted to bring together some of the ideas, technologies and theory discussed in this document, including projection mapping (this time on a larger scale), OSCeleton, perception of space and place, QR tags and Blender simulations. Although at the time of writing there is no ReacTIVision component, it could be incorporated as a Puredata patch (as I did with the OSCeleton patch), but I feel it would be more suited to a collaborative interactive piece.

Another addition to this piece is the iPad OSC controller developed through the open-source Mrmr (http://mrmr.noisepages.com/) interface designer. This controller links to Puredata via a computer-to-computer network and, while it can be developed to send multiple data streams (including accelerometer data), for the purposes of this test it mainly controls the activation of video sources.
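The Puredata patch itself is of course graphical, so as a rough textual sketch of the same routing, here is how the incoming OSC toggles might be received in Processing using the oscP5 library; the /video/1 to /video/4 address patterns and the port are my own naming for this illustration, not something fixed by Mrmr or by my actual patch:

import oscP5.*;
import netP5.*;

OscP5 osc;
boolean[] videoActive = new boolean[4];          // four assumed video sources

void setup() {
  size(220, 200);
  osc = new OscP5(this, 8000);                   // listen on the port the iPad controller sends to
}

void oscEvent(OscMessage m) {
  for (int i = 0; i < videoActive.length; i++) {
    if (m.checkAddrPattern("/video/" + (i + 1))) {
      videoActive[i] = m.get(0).floatValue() > 0.5;   // toggles arrive as 0/1 floats
    }
  }
}

void draw() {
  background(0);
  for (int i = 0; i < videoActive.length; i++) {
    fill(videoActive[i] ? color(0, 255, 0) : color(80));
    rect(20 + i * 45, 80, 40, 40);               // a simple on/off indicator per source
  }
}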

(Developing my Mrmr iPad interface controlling Puredata)

(Final Piece)

Conclusions, Issues and Development

These projection mapping tests have allowed me to bring together various areas of interest into one piece that affects the perception and experience of space and place, and of the “real” and the “virtual”, and shows how these can help re-map and re-envisage a space.

Whilst these are very much test pieces I think that they can inform the direction of my development moving towards my main production project.

In terms of development, the main positive to come out of this research is a library of Puredata patches that can be re-used and re-configured for different purposes.

Kinect Investigations

Microsoft Kinect Sensor and OpenNI Framework

Originally known as Project Natal, the Kinect is a motion-sensing device used as a hands-free controller with Microsoft's Xbox games console. The Kinect uses motion-sensing data that includes depth data (via infrared) to allow users to control gameplay through gestures, motion and speech.

In 2010, open-source drivers started to become available for the Kinect, and it can now be used with multiple open-source libraries as a sensing device, much like a webcam, that will output data relating to X, Y and Z positions and movement velocity via OSC (Open Sound Control) (http://opensoundcontrol.org/).

Thus the data captured from the Kinect can be used to control, manipulate and interact with space and place by routing the data accordingly.

For the purposes of these three tests, I was mainly interested in how to unpack the Kinect OSC data using Puredata and re-represent that data in a number of different ways in order to control virtual objects and interact with the spaces.

TEST #1: This model was based on the “picture cube” that I remember from the 1970s, and was developed to be shown at the annual digital storytelling festival in Aberystwyth in June 2011. In the demonstration we were attempting to describe how new media applications and new technology might impact on the “traditional” form of digital storytelling.

The original application consisted of two picture cubes controlled by two users (in this case myself and a member of Dr. David Frohlich's team), where the concept was to locate and tell the story related to a picture that evoked a particular memory. By “stepping out” of the space and then re-entering, the second user was able to take control of the first cube and vice versa, encouraging a collaborative storytelling effort based on memories of the same picture.

(The “picture cubes” at DS6, June 2011)

For the purposes of demonstrating this application, the following is a single-user version of the picture cube. Using Apple's Quartz Composer (http://developer.apple.com/graphicsimaging/quartzcomposer/) to build a 3D cube and render a picture on each of the six sides, and the OpenNI framework for capturing the Kinect data, this application allows a user to control the rotation of the picture cube across multiple axes using their hands as controllers.

(OpenNI client on the left and the Quartz picture cube on the right)

TEST #2: OpenNI2TUIO and Puredata. This is a proof of concept aimed at extracting and using the depth data from the Kinect to control virtual objects (in this case a random film player screen). The data is unpacked and routed in Puredata via the TUIO protocol.

(OpenNI2TUIO interface on the right and PureData output on the left)

TEST #3: OSCeleton and Puredata. OSCeleton is another Kinect library, based on the OpenNI build, that provides a much more comprehensive data set, translating each joint position on a tracked skeleton. In this example the texture of the joints is mapped from the random video player in TEST #2.
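As a rough illustration of what that data set looks like once unpacked (sketched here in Processing with oscP5 rather than in my Puredata patch, and assuming OSCeleton's standard /joint messages of joint name, user id and then x, y, z floats on its default port):

import oscP5.*;
import netP5.*;

OscP5 osc;
HashMap<String, PVector> joints = new HashMap<String, PVector>();

void setup() {
  size(640, 480);
  osc = new OscP5(this, 7110);              // the port OSCeleton sends to by default
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/joint")) {
    String name = m.get(0).stringValue();   // e.g. "head", "l_hand", "r_knee"
    float x = m.get(2).floatValue();        // m.get(1) is the user id
    float y = m.get(3).floatValue();
    joints.put(name, new PVector(x * width, y * height));
  }
}

void draw() {
  background(0);
  fill(255);
  noStroke();
  for (PVector p : joints.values()) {
    ellipse(p.x, p.y, 10, 10);              // one dot per tracked joint
  }
}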

Conclusions, Issues and Development

Over the last two years the Kinect has become a major focus of interest for those involved in pervasive computing, and a simple Google search will reveal a plethora of projects, innovations and libraries that can really open up the possibilities of this device as an interactive controller able to re-map and re-visualise space and place.

My main interest in this device has been to gain an insight into its potential applications within a moving image and new media context. The Kinect is very adept at communicating with other devices and other software in order to produce work that goes far beyond “blob tracking” (as in CVC and other tracking software), even allowing users to appear and interact in virtual spaces such as OpenSim and Second Life (http://www.youtube.com/watch?v=YDVDQJLStYo).

Throughout these experiments it has seemed to me that the most relevant client is OSCeleton, with its ability to motion-track multiple users in 3D; however, all of these OSC clients have presented their own issues around unpacking and routing the data appropriately in Puredata. Having said that, once proof of concept is established, the Puredata patches can easily be saved as abstractions and called into future projects. In this way I am building a repository of Kinect patches which can be modified and re-used depending on the needs of a project.

In terms of developing the application of the Kinect in interactive spaces, more libraries and clients are appearing almost weekly as the online community develops new and innovative uses for the device. One of many examples that I am currently investigating involves interaction with a 3D video wall using only Kinect-based gestures: as the user “touches” the video screen, the movement of the hand can interact with and change the playback speed, audio track and so on (http://www.keyboardmods.com/2010/12/howto-kinect-openninite-skeleton.html).

The Kinect seems to represent a shifting perception of experience, blurring the lines between “real” and “virtual” as actions performed in “real” spaces (both individually and collaboratively) become the triggers of “virtual” experience in “virtual” places.

Speculative Shoots Final Shots

Spec Shoots #1

First experimental shot. Using time and clocks as the focus of the shot, this sequence explores the representation and experience of time from a cinematic point of view. Time is not necessarily experienced in a linear way in our conscious lives, and this piece examines the relationship between the actual, the virtual (Bergson) and the “arrow of time” as a commodity.

Spec Shoots #2

Following on from the first shot, I was intrigued by how time and movement can be represented in cinema. This piece takes inspiration from Agnès Varda and from reading around Deleuze's “crystal image”. Movement within the frame is powered physically by my feet, plus through wind that affects the mirror and reveals not only what is outside the scene but also the relationship the crystal image has with time, in terms of the clocks that are “revealed” through the motion of the mirror. This shot also examines repetition and is a result of research that has led to an examination of the “death drive”.
The driver of the mechanical nature of cinema is further represented by the train on the soundtrack which reverses to denote the end of the shot.

Spec Shoots #3

This shot examines how camera movement can reveal the scene and place emphasis on both time and the crystal image in the frame. Clocks once again play an important role in the shot, and the secondary action of the lights represents a driver that pushes the shot in a new direction; from the camera movement as driver to electricity as a driver.
The rhythm of the on/off of the lights tends to pre-empt the emergence of narrative and explores a retrospective analysis of cinema as being “always” narrative (Cubitt).

Spec Shoots #4

This shot again follows on with concepts around time, with the mirror and the clocks. Cinema as a spectacle is also explored from without the scene. Here the ideas around repetition, resetting and the need to return to a “before-ness” are highlighted through the repetition of applying lipstick. Although the lipstick is applied and re-applied, there is always a subtle difference, as explored in Deleuze's thinking around the time image, especially in relation to difference and repetition. There is also an interesting testimony from the actor here, where she explores that concept of repetition and her experience in the scene (to follow).

Spec Shoots #5

This last piece again aims to explore the “divided cinema” and makes reference to the work of Peter Kubelka and the “flicker film”. The projector sound is within the scene and the images themselves aim to provoke thinking around light and shadow. The projection texture (the wall and ivy) is deliberately out of focus in order to encourage the viewer to concentrate on the light or daytime images being projected onto the dark or night-time wall. The interplay and counterpoint of light and dark, day and night, sun and shadow are an attempt to explore the insight that the moving image can give us into concepts of time and memory with a deliberate effort to focus attention on the projected treated images.

Bibliography:

Carroll, N. (1996) Theorizing the Moving Image. Cambridge Mass: Press Syndicate of the University of Cambridge

Cubitt, S. (2004) The Cinema Effect. Cambridge, MA: MIT Press

Deleuze, G. (2005) Cinema 2: The Time Image, trans. H. Tomlinson and R. Galeta. London: Continuum

Mulvey, L. (2006) Death 24x a second. London: Reaktion

Wollen, P. (1998) Signs and Meaning in the Cinema. Bury: St Edmundsbury Press

Sinha, S. (no date) Remembrance of Images Past: Cinema, Memory and the Social Construction of the Concept of Time. Available at http://silhouette-mag.wikidot.com/article-cat:vol3-cover-pg3 (Accessed May 2011)

Vitale, C. Networkologies. Available at http://networkologies.wordpress.com/2011/04/04/the-deleuzian-notion-of-the-image-a-slice-of-the-world-or-cinema-beyond-the-human/ (Accessed: May 2011)



Speculative Shoots Ideas: “Playing” the Image

Initial Ideas:

Falling out of this was the idea of “memory triggers” and whether we could “play” our memories, emotions and ideas just as we would play a piano, interacting with the visual forms triggered in real time as we might do with a musical instrument. By “playing” the pre-constructed memories as shots we can create a montage of musical memory that uses sound to affect the colour, time, motion and movement of the images presented.

Developing the concept:

In a more detailed analysis, I'm thinking about the nature of the content as well as the interactive element. Here's my thinking around the nature of the content and how the initial treatment addresses key concepts of the moving image.

1. Memories created using images (found or filmed/ still and moving) and voice over

-exploration of the relationship between still and moving image

-exploration of colour and memory

2. Animated memories created in a virtual environment using particle systems, emitters, boids etc.

-how does the integration of still and moving image in a virtual (“un-natural”) environment affect our relationship with the image?

-opportunity for generative artificial intelligence (via boids systems) to introduce an organic development of shots

Set up

In this set-up I'm using 3 MIDI triggers for both visuals and audio, which are composited and re-introduced into the system. Sound (generative audio and voice tracks) is layered both in the A/V mixer and in the synth modules, allowing for the interactive generation of an A/V montage based on the user's (me!) interpretation of, and reaction to, both sound and image.

The A/V mixer can be set up in a number of different ways, with different triggers affecting different parameters.

In this way the 5 shots that can be generated would be an exploration using the audio itself to trigger parameters in the A/V mixer that relate to the following (a rough sketch of the first two mappings follows the list):

1. Colour - volume, velocity (how hard a key is pressed) and pitch can be used to directly affect the colour balance of a clip or A/V mix.

2. Time - similarly, triggers can be used to affect the start and end point of a clip, or the speed and direction of the playhead.

3. Movement - created directly through the order in which the clips are played, so as to create an A/V montage.

4. Sound - this shot would be related directly to how a piece of music can create a specific shot.

5. Interaction - the key to this is the relationship between sound, image and user, in no particular order. Having run a few tests (see below) with some clips, it is really interesting to explore how the image encourages a certain key press and how the sound itself leads the user on to more experimentation: what I mean by that is it's a very moreish toy! :)
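Purely as an illustration of the first two mappings (colour from velocity, time from pitch), and not the actual A/V mixer set-up described above, here is roughly how the triggers could be wired up in Processing with The MidiBus and the video library; the clip name and the device indices are placeholders:

import themidibus.*;
import processing.video.*;

MidiBus midi;
Movie clip;
float tintHue = 127;

void setup() {
  size(640, 360);
  colorMode(HSB, 127);                      // MIDI values run 0-127, so match the colour range
  midi = new MidiBus(this, 0, 1);           // placeholder device indices: first input, first output
  clip = new Movie(this, "memory.mov");     // placeholder clip name
  clip.loop();
}

void noteOn(int channel, int pitch, int velocity) {
  tintHue = velocity;                                    // 1. colour: how hard the key is hit sets the tint
  clip.jump(map(pitch, 0, 127, 0, clip.duration()));     // 2. time: which key is hit scrubs the playhead
}

void movieEvent(Movie m) { m.read(); }

void draw() {
  tint(tintHue, 127, 127);
  image(clip, 0, 0, width, height);
}

Movement and sound (shots 3 and 4) would follow the same pattern, just routed to the clip order and to the generative audio rather than to tint and playhead.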

Augmented Reality and Physical Computing… some thoughts

“Augmented Reality (AR) is a technology that blurs the line between what’s real and what’s computer-generated by overlaying digital images, video and other information onto real world content.” (http://www.augmentedreality.co.uk/)

Augmented Reality in Reality

Although Augmented Reality is not a new phenomenon (its roots can be traced back as far as Ivan Sutherland's work in 1968), the last few years have seen a dramatic rise in its application and use in everyday life.

With the rise of ever more complex and faster processing capabilities in computers, phones and other hand-held devices, augmented reality applications are becoming commonplace as tools that can help users understand and interpret the data around them in new ways; from applications in engineering and medicine that allow professionals to more readily visualise potential solutions, to personal use that might help a user find the nearest geo-tagged restaurant.

The “overlay” nature of AR separates it significantly from the immersive nature of Virtual Reality (VR), and it is this “mixed” reality that has appeal within both the professional and personal fields; data no longer has to be a string of numbers interpreted by experts, and can now be designed and represented in a manner more fit for human consumption.

For example, a “layers” application, now readily available for the vast majority of GPS- and internet-enabled devices, sends and receives data based on position and correlates that data with a multitude of social media data (from Twitter, Facebook, Flickr etc.) in order to allow the user to view the latest and most local data, for example photographs on Flickr, from within their phone's camera interface without any need for complicated data management; the photographs are “overlaid” as icons on the phone's screen.

(Screenshot of “layers” application Wikitude)

It is this “intelligent” data visualisation that can help AR applications convey more meaning and personal relevance to the user, and create new means and forms of communication and interaction beyond the “data layer”. In this scenario the data handling is managed mostly through an Application Programming Interface (API), a protocol that allows programs and applications (e.g. Twitter, Facebook etc.) to communicate with each other and with the user with a minimum of programming intervention, giving designers more opportunity to concentrate on the nature of the visualisation as opposed to how the data is captured and handled.

AR, the Giant Finger and Design

While current developments around the use and application of AR might encourage us to rethink our relationship with technology, there are other implications that suggest a much more interactive and exploratory investigation of our relationship with technology and how that relationship might develop.

Since the inception of the computer as we know it, the means of interaction has mainly remained focussed on productivity and efficiency through the mouse/keyboard/monitor.

It is this focus that emergent thinking around the nature and application of physical computing is currently challenging.

The “traditional”, productivity-based model has little in common with natural human interaction. While it has prompted cultural shifts in the way we interact with and understand technology (keyboards and mice being almost ubiquitous), as far as the technology itself is concerned the user is no more than a giant finger, which inevitably means there are a limited number of ways in which to interact.

This shift from “How do we react to technology?” to “How does technology react to us?” has opened an intriguing new line of thought among many designers and artists across the web, and a focus on how technology can interact more “humanly” with users has become a rapidly evolving field.

From a physical design point of view, sensory data such as motion and gesture detection and face recognition as well as API based data from open online data sources can now be captured, analysed and interpreted by artists and designers using open-source Integrated Development Environments (IDEs) that have a focus on design and visualisation. Thus dialogue and experimentation may begin to challenge productivity and efficiency.

This experimental design can manifest itself in intriguing new ways; varied data sources can act as triggers within a physical space as well as physical interaction creating data visualisation.

The combination of both of these approaches creates its own data set that in turn can be visualised or act as secondary triggers, creating an organic and evolving interactive space that can allow us to investigate the nature of interaction, design and our environment.
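A minimal example of that loop, sketched in Processing and assuming nothing more than a webcam: the amount of movement sensed in the room becomes a single data value, and that value is immediately visualised back into the space, where it could just as well serve as a secondary trigger.

import processing.video.*;

Capture cam;
PImage prev;
float motion = 0;                              // a running measure of physical activity in the frame

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    if (prev != null) {
      cam.loadPixels();
      prev.loadPixels();
      float sum = 0;
      for (int i = 0; i < cam.pixels.length; i += 50) {    // sample every 50th pixel
        sum += abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
      }
      motion = lerp(motion, sum / (cam.pixels.length / 50.0), 0.1);
    }
    prev = cam.get();                          // keep this frame for the next comparison
  }
  background(0);
  fill(255);
  ellipse(width / 2, height / 2, motion * 20, motion * 20);   // the sensed data becomes the visualisation
}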

(Data flow of organic physical computing setup)

As the advent of Web 2.0 technologies encouraged the use and production of new and experimental forms of media, so intelligently designed social applications, physical computing and data visualisation can be seen as essential elements in helping us to understand and produce new media that can, in turn, help us understand and develop new ways and forms of communication and interaction.

These new media that connect, that are interactive and that are social have the potential to allow us to reflect on our relationship with technology, with the environment in which we exist and the people around us.