Memory maps in interactive dance environments

Simon Biggs and Sue Hawksley

First published in the International Journal of Performance Arts and Digital Media, Vol. 2, No. 2, 2006

Abstract

This essay explores current research by the authors into dance in interactive mediated spaces. It seeks to articulate the thematics underlying the creative thinking in the work, together with the technical and methodological aspects of how it is being developed and realised. The research develops out of previous collaborations between Sue Hawksley and Simon Biggs involving dance performance and interactive visual art, as well as other collaborations between Biggs and, variously, Sarah Rubidge, Stuart Jones and Stephen Petronio. The work discussed here is part of a longer-term project developing a range of specific interactive realtime authoring systems for use in performance and interactive installation works.

Introduction

The authors of this text are artists who most often work independently from one another but who, from time to time, choose to work together on jointly authored projects or, as is more often the case, with one responding to the other as primary author. This text discusses work that is jointly authored and that draws on previous work that has been developed collaboratively and independently.

For some years, Biggs has been exploring the use of interactive systems and computation to create installations and artworks in related forms that focus on the manner in which the viewer negotiates their "being" within the space and in their relations with others. The self is posited as something unfixed and fluid, a function of dynamic relational existence in a specific space-time moment (Biggs 2003).

Much of Hawksley's work with dance and movement has concentrated on the exploration of a range of somatic techniques focusing on structure and emphasising awareness of movement patterns to inform and facilitate effective articulation. Her creative practice often addresses inner dis-ease and tension and the representation of these factors through the body and performance.

Previous collaborative works by Hawksley and Biggs include the interactive installation "Waiting Room" (in collaboration with Stuart Jones), which employed a large database of dance material derived from Tango-inspired choreography, recombined into new dance material through the interaction of viewers within the installation space. "I am I was (a dying swan)" employed a live multi-channel digitally delayed video projection system, in combination with structured lighting, to constrain and enable a solo dance performance in which the dancer, along with the audience, was confronted by their own image shifted out of immediate time and space.

"<bodytext>" is the first stage of a further collaboration between Hawksley and Biggs investigating the somatic sensation of inner turmoil, as addressed in a number of previous works by Hawksley . In this new work memories, in the form of motion capture data, are seen to be fracturing, multiplying, colliding, dividing and mutating as visual and other datasets are physically attached to abstracted sequences of movement. Another work, "Blow-Up", continues this theme, employing live video imagery of the dancer which is manipulated and recombined by motion capture acquired movement data generated by the performer. These pieces investigate how memories are embodied and signified, patterning movement behaviour and addressing how media can be used to record complex data sets and thus function to preserve or disturb a sense of self.

Initial thinking in this work involved notions of memory, identity, data and recording systems. Memory was addressed in reference to Frances Yates' "The Art of Memory" (Yates 1992), a seminal text tracing the practice of various mnemonic systems through European history since the pre-Socratic Sophists. One practice of particular relevance here is that of locative memory, or what is sometimes called "memory theatre". This was a technique practised by the early rhetoricians, who sought to remember detailed and exhaustive arguments for public presentation and debate. The technique involved psychologically investing specific places and/or objects with particular memories, which could then be reconstructed as an argument as the rhetorician moved about the lecture space "performing" their lecture. There are implicit parallels here with theatre, as well as with the non-linear data structures we are all familiar with from computing and the internet.

Similarly, in today's highly mediated and mediatised society, the issue of identity has been expanded to include not only the subjective sense of self (whether that of the self or the other) but also its objective representations in a cultural economy seemingly obsessed with quantitatively describing everything and attaching that data to the subject itself as value. Thus identity can be found not only in our personal sense of self, or in how others might perceive us, but also in our passports, credit cards, social security numbers and the emergent bio-technological datasets which function to socially prescribe us. All this data can be tagged with the relevant social signifiers (citizen, criminal, immigrant, deviant, consumer) and employed in institutional databases to measure the status we have accrued, whether in the form of "Nectar Reward Points", "Driver Penalty Points" or "Right of Abode Status", determining how we are penalised or rewarded.

A previous major performance work that, to our minds, seems to foretell this state of affairs is Samuel Beckett's "Krapp's Last Tape".

"Just been listening to an old year, passages at random. I did not check the book, but it must have been at least ten or twelve years ago" (Beckett, 1960).

In "Krapps Last Tape" a life preserved by recordings on audio tape spools is revisited, reviewed and rewound by its subject. The character performs alone with only a tape recorder and audio tapes for company, listening to and commenting upon these fragmentary "memories" of his life. Through this device Beckett explores how the subject can be configured relative to its own objective histories and, through a process of re-imagined narrative, formed through the fragmentary access the subject gains to the audio tapes and the partial and decontextualised narratives they contain. In this manner Krapp can be seen to weave new fictions to describe his being and to come into conflict with aspects of a self preferably forgotten, thus forging a proto-schizoid disassociation between memory and self-perception.

In the research detailed here we are seeking similarly to "un-spool" memories from the body and to rewrite them into a digital record, relocating them away from the corporeal self and without recourse to the linearity of traditional recording systems. The computer, as a random access memory device, allows us to "embody" this data in a non-linear format, enabling not only a hypertextual approach to how that data can be retrieved but also the facility to compute novel recombinations, resulting in the generation of new data that was never actually recorded. Employing established motion capture techniques to record dance movement, we are exploring techniques to embed the resulting data into the visual image and to use it as a control system relative to previously recorded or live datasets, thus re-embodying the image with vestiges of corporeal movement. These datasets can be formed from multiple data types, including not only visual material but also audio, movement and the more abstract data types found in biometrics and similar identity systems. In this respect the software systems developed for this work function not only as a means to manipulate various media with movement data but also to structurally combine diverse media into coherent multimedia artefacts.

The choreographic material for this work has been developed through an iterative action research methodology. The choreographer has approached the body as if it were an archaeological site, a living, non-linear and modifiable history that can be excavated and comparatively interpreted. Alongside this subjective approach, which will ultimately be used to translate people's stories into movement material, more abstract and objective personal identification data, such as passport codes, postcodes, account numbers, Social Security numbers and PINs, have also been used to generate movement material. Using a more or less arbitrary alphanumeric codifying system, such personal identification data has been translated into movement phrases. The resultant choreographic material seeks to question the arbitrary nature of how these codes function to signify the self, as well as to isolate and identify patterns of tension that define movement behaviour and contain information indicative of our identities and histories. This codifying process is an explicit evocation of the computational processes occurring within the software systems that underpin the visual component of the work.
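To make this concrete, the short Python sketch below illustrates one possible codifying scheme of the kind described. It is hypothetical: the movement vocabulary, the modulo indexing rule and the example postcode are all invented for illustration and are not the scheme actually used in the studio.

MOVEMENT_VOCABULARY = [
    "reach", "contract", "turn", "drop", "rise",
    "fold", "extend", "twist", "shift", "suspend",
]

def codify(identity_data: str) -> list[str]:
    """Translate each alphanumeric character of an identity code
    into a movement unit via an arbitrary indexing rule."""
    phrase = []
    for char in identity_data.upper():
        if char.isalnum():
            # Arbitrary rule: index the vocabulary by character code.
            phrase.append(MOVEMENT_VOCABULARY[ord(char) % len(MOVEMENT_VOCABULARY)])
    return phrase

# A hypothetical postcode becomes a short movement phrase.
print(codify("NG7 2RD"))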

That the physical nature of current motion capture systems functions to constrain the performer in many ways, often profoundly diminishing the movements they can safely perform, is also an important choreographic consideration. The systems of constraint generated by these high-frequency monitoring technologies are also consciously employed as conditioning elements in determining choreographic material. The movement phrases thus generated function to contain the frustration of being physically and socio-semiotically constrained.

An example of this approach was evidenced in a group movement exercise during the Digital Cultures Laboratory, directed by Hawksley with the Australian choreographer and co-director of Company in Space, Hellen Sky. The exercise involved setting a series of simple tasks aimed at exploring the complexity of giving and receiving movement information and the resulting frustration of being constrained. Performers were asked to visually copy movement, follow oral instructions and respond to touch information, separately at first and then overlaid. This quickly led towards an overload of information and attendant tension and stress. We observed that people tended to move away from the constraints of the tasks toward freer and more expansive improvisation, using the task as a starting point but allowing themselves the freedom to interpret the information. This exercise was carried out, in part, in preparation for the multi-sensory demands of using motion capture systems, particularly in the context we are exploring. It can be seen both as a developmental step in the choreography for the works discussed here and as an example of how that process is undertaken in the studio.

<bodytext>


Fig.1 Screen grab showing the interface where the designer can assign various details from selected media to the motion capture nodes.

"<bodytext>" is both a software environment for the development of multimedia dance works and a dance work. The "<bodytext>" software environment allows for a number of approaches to how media can be prepared and then mapped onto a motion capture dataset. The user of the software, who might be a visual artist, a scenographer or a choreographer, is able to employ the software's interactive visual interface (fig.1) to select any suitably prepared still image, text document or QuickTime file, such as video or audio, and then attach it to a node on a skeletal model of a human figure. In the case of still images the user can also determine any section of an image to be attached to a specific node. By this latter method the user is able to determine the mapping of any image to the skeletal frame, which allows for novel re-mappings of bodies, or any other imagery, to motion capture data. Once the user has finalised the mapping of media and resources to the skeletal nodes they can then input a motion capture data file that is compatible with the 30 node skeletal model. This can then be played, with the user-defined media attached to and animated by the data (fig.2).


Fig.2 A matrix of nine screen grabs showing frames from an animation sequence generated by the "<bodytext>" software.
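Although the implementation of "<bodytext>" is not published here, the mapping logic the interface exposes can be sketched as a simple table from skeletal nodes to media resources. The following Python sketch uses invented names (MediaAttachment, draw and so on) purely to illustrate the structure.

from dataclasses import dataclass

NODE_COUNT = 30   # the skeletal model assumed by the software

@dataclass
class MediaAttachment:
    path: str             # still image, text document or QuickTime file
    region: tuple = None  # optional (x, y, w, h) crop for still images

# The designer's choices in the interface reduce to a node-to-media table.
mapping = {}

def attach(node: int, media: MediaAttachment) -> None:
    """Bind a prepared media resource to one node of the skeletal model."""
    if not 0 <= node < NODE_COUNT:
        raise ValueError(f"node {node} outside the {NODE_COUNT}-node model")
    mapping[node] = media

def draw(media: MediaAttachment, x: float, y: float, z: float) -> None:
    """Stand-in for the actual compositor."""
    print(f"{media.path} at ({x:.2f}, {y:.2f}, {z:.2f})")

def render_frame(frame: list) -> None:
    """Draw each attached medium at its node's position for one frame
    of motion capture data (a list of 30 (x, y, z) tuples)."""
    for node, (x, y, z) in enumerate(frame):
        if node in mapping:
            draw(mapping[node], x, y, z)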

Initial R&D for "<bodytext>" employed a series of still 2D photographic images of a performer's body, photographed from various angles. More recently, QuickTime-based media have been added to the capabilities of the software. This has allowed, for example, the motion capture animation of multiple video sequences within the resulting image, such that a performer can physically determine the location of video sequences relative to other sequences. The z dimension of the motion capture data is employed to control the scale of the still image or video sequence, such that the performer can manipulate what is effectively a zoom function that determines the spatial location of each element.

Audio files have been treated in a similar manner. In this case y data has been employed to control the volume of the sound, whilst the multiple mapping of sound files has allowed for motion capture controlled audio mixing. A near-term development objective is the capability to use x and z data from motion capture datasets to pan sounds in space, such that a performer would have control of the spatial characteristics of the sound.
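The coordinate-to-parameter mappings described above reduce to simple normalising functions. The sketch below assumes invented value ranges; the actual calibration of the system is not documented here.

def z_to_scale(z: float, z_min: float = 0.0, z_max: float = 5.0) -> float:
    """Map depth to image scale, so that moving toward the camera
    enlarges the attached image or video (an effective zoom)."""
    t = max(0.0, min(1.0, (z - z_min) / (z_max - z_min)))
    return 0.25 + (1.0 - t) * 1.75   # scale factor between 0.25x and 2.0x

def y_to_volume(y: float, y_max: float = 2.5) -> float:
    """Map node height to audio gain in the range 0.0 to 1.0."""
    return max(0.0, min(1.0, y / y_max))

def x_to_pan(x: float, x_min: float = -3.0, x_max: float = 3.0) -> float:
    """Sketch of the planned panning control: stage position becomes
    a stereo pan position between -1.0 (left) and 1.0 (right)."""
    t = max(0.0, min(1.0, (x - x_min) / (x_max - x_min)))
    return t * 2.0 - 1.0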

Whilst "<bodytext>" currently works only with recorded motion capture files the intention is to develop it as a live performance system. A key consideration here is which motion capture system to employ in conjunction with the software system. To date extensive experiments have been carried out using acquired data from the ReActor system, with significant success. Work has also been undertaken with recorded data from the Gypsy 4 exoskeleton system, but with less success, partly due to the innate complexity of the data formats employed by the Gypsy system. Whilst the ReActor system outputs data as simple Cartesian coordinates in 3D space, with each frame composed as a list of sensor nodes, the Gypsy system employs a hierarchical polar coordinate system. This means that each node is output as a set of figures describing its attitude and dynamic behaviour in relation to the sensor node further up the hierarchical structure. Thus a hand depends on a wrist depends on an elbow, etc. Whilst this makes good conceptual sense it does lead to very large and complex proprietary datasets which are, in the first instance, difficult to develop parsers for and secondly challenge even high performance computers in realtime situations.

Due to the problems inherent in all currently available motion capture systems, their regular and frequent use as live performance systems is still some way off. Therefore, whilst it is possible to take this research forward in laboratory conditions, completing work of clear developmental value to its longer-term aims, the question now is which type of system is most likely to be suitable for live use. Will it be a lightweight, easy-to-install, medium-resolution system iterated from a system like ReActor, or a versatile, mobile and hopefully less difficult to work with system developed from systems similar to the Gypsy? As the answer to this question is as yet undetermined, care has been taken with the design of "<bodytext>" to ensure that it is a fully modular and object-oriented piece of software, facilitating the relatively easy re-authoring of the parser and mapping components to take into account unknown future systems and data formats.

In his previous work Biggs has employed unencumbered approaches to technology and interaction, and the profound mediation of the body by technology required for motion capture data acquisition stands in marked contrast to this aesthetic. This interest in keeping the technologies of interaction hidden is brought into question in this collaborative work.

The question here is whether, in this current work, the approach will be to allow the technological systems, whether motion capture rigs or exoskeletons, to be visible or whether to hide them. This is more than just an aesthetic consideration on the level of costuming or a wash of light for, as observed above, the technology mediates and conditions what is choreographically possible. Our response to date has been to allow the technology its visible place in the work and to work with the constraints it imposes in a not dissimilar manner to how many artists employ external constraints as generative systems for authoring or conditioning material.

Experiments at the Digital Cultures Laboratory

During the Digital Cultures Laboratory, held in Nottingham in 2005, Hawksley and Biggs explored techniques for using motion capture systems for realtime control of other datasets. The initial objective had been to build a software interface between a ReActor motion capture system, installed by Essex Dance for the duration of the workshop week, and the "<bodytext>" software system.

After resolving a number of technical problems with the installation of the motion capture system, only two days remained in which to attempt this. Nevertheless, the attempt was followed through: a fully functioning parser had already been written and exhaustively tested, so the technical issues that remained were focused on the software interface between the systems. Given the short period available for research and development, it was decided that rather than author such an interface from scratch we would construct it from various existing software and hardware elements. Working with the Manchester-based independent software developer Guy Hilton, and with the valuable input of Mark Coniglio, New York-based author of the Isadora performance software and co-director of Troika Ranch, we managed to take the realtime output of the ReActor system and patch it into a copy of Isadora running on a "bridge" computer. This data was then converted by Isadora into MIDI data and output onto a MIDI network.
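The essential constraint of this final hop is that each coordinate must be squeezed into MIDI's 7-bit data bytes. The conversion was in practice configured within Isadora, but its principle can be sketched as follows; the controller numbers and coordinate ranges here are assumptions.

def coord_to_midi(value: float, lo: float, hi: float) -> int:
    """Quantise a coordinate in metres to a 7-bit MIDI value (0..127)."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return round(t * 127)

def node_to_messages(channel: int, x: float, y: float, z: float):
    """Emit three control change messages, one per axis, for one node."""
    status = 0xB0 | (channel & 0x0F)  # control change on the given channel
    return [
        (status, 20, coord_to_midi(x, -3.0, 3.0)),
        (status, 21, coord_to_midi(y, 0.0, 3.0)),
        (status, 22, coord_to_midi(z, -3.0, 3.0)),
    ]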

This approach was chosen because Biggs had previously developed MIDI software interfaces for the acquisition of realtime control data for use in interactive installations and other systems, and was thus able to quickly and easily re-engineer this work as an input interface for the "<bodytext>" system. This was achieved, and proof of concept was established when a live performer within the ReActor system was able to send realtime motion capture data to the "<bodytext>" system and physically manipulate a nearby large-scale projected image of the human form and a live video stream of the performer.

However, whilst proof of concept was achieved, it was clear that as a technical solution the approach we had taken was of limited value. Even though the ReActor system is fairly low-fidelity compared to systems such as the Vicon, the amount of data it generates is still high. Whilst the "<bodytext>" software can process data at such bandwidths, the MIDI protocol, designed for the recording and transmission of music-related data, is simply too slow and of too low a bandwidth to be reliable or effective as an interface component in motion capture work. To ensure that the MIDI system was not overloaded with data we needed to slow the capture rate of the system from its optimal 30 frames per second to 4 frames per second. This level of temporal resolution is obviously entirely inadequate for live performance or for the realistic recording of the subtle detail present in human movement.
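A back-of-envelope calculation makes the bottleneck plain. Using the standard MIDI figures (31,250 baud, ten bits per byte on the wire, three bytes per control change message) and assuming one message per axis per node:

bytes_per_second = 31250 // 10   # roughly 3,125 bytes/s on a MIDI cable
messages_per_second = bytes_per_second // 3   # roughly 1,040 messages/s

nodes, axes = 30, 3
for fps in (30, 4):
    required = nodes * axes * fps
    verdict = "over" if required > messages_per_second else "within"
    print(f"{fps:2d} fps requires {required} messages/s ({verdict} capacity)")

# 30 fps requires 2700 messages/s, well over capacity;
# 4 fps requires 360 messages/s, which MIDI can just about sustain.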

Another fundamental limitation of MIDI is the number of channels on which it can readily send and receive data, usually limited to 16. So as to avoid longer development times and arrive at a "quick fix", we decided to map the motion capture sensor nodes of the performer to these 16 MIDI channels. However, there are 30 sensor nodes worn by the performer within the motion capture system, and these nodes are organised in a hierarchically hard-wired structure, meaning that one cannot simply pick and choose which nodes to employ. Thus we arrived at a solution whereby the performer was able to output data from only one side of their body.
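The compromise can be sketched as a routing table, reusing node_to_messages from the earlier sketch. The node indices below are invented; the point is simply that only 16 of the 30 nodes could be assigned a MIDI channel.

MAPPED_NODES = list(range(16))   # e.g. head, spine and one side's limbs

channel_of = {node: channel for channel, node in enumerate(MAPPED_NODES)}

def route(node: int, x: float, y: float, z: float):
    """Forward a node's coordinates on its MIDI channel, or drop it."""
    channel = channel_of.get(node)
    if channel is None:
        return []   # the remaining 14 nodes are discarded
    return node_to_messages(channel, x, y, z)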

Whilst the positive results of this workshop were a proof of concept, together with the not insignificant addition of MIDI compatibility to the "<bodytext>" system, it is clear that future research into live systems will not employ this approach and that other protocols and formats will have to be used. Nevertheless, given the development time available, it was valuable to have achieved the outputs we did.

An unexpected outcome of these experiments was the realtime manipulation of facial expression by a live dancer in the motion capture system. Arthur Elsenaar, currently undertaking his PhD at Sheffield Hallam University's Art and Design Research Centre, is researching the computer control of the human face as a performative instrument. To achieve this Elsenaar wires his face with electro-stimulators, which in turn are connected to a computer that regulates the current delivered to the various facial muscles. This allows the face to be controlled through a software interface he has developed, which exists in a number of variants that allow other people, whether locally or remotely (over the internet, for example), to alter the facial expression of the subject/performer (Elsenaar and Scha 2002).

Once we had achieved the realtime interface from the ReActor system, through Isadora and MIDI, to the "<bodytext>" software, it was then possible to patch Elsenaar's existing software systems into this network. The result enabled the Portuguese dancer Joao Costa, working within the motion capture system, to take control of Elsenaar's face, with, for example, an arm mapped to the left side of the face and a leg to the right. By this method Costa was able to manipulate, with his own body and in realtime, the face of another person. Elsenaar's face was video projected onto a wall facing Costa, within the motion capture system, achieving a rough and ready, but quite effective, laboratory-based scenography (fig.3 and fig.4).


Fig.3 Joao Costa performing within the motion capture system. On the wall opposite can be seen two video projections, the one on the right showing the user interface for the motion capture system and the one on the left being of Arthur Elsenaar's face being manipulated by Costa's movement.


Fig.4 Arthur Elsenaar (on the right) wired up and reacting to Costa's movement. Guy Hilton is in the centre of the picture.
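The routing that produced this effect can be sketched in outline. The region names, node names and intensity scaling below are hypothetical; the actual patch ran over MIDI into Elsenaar's own control software.

# e.g. the left arm drives the left side of the face, a leg the right.
body_to_face = {
    "left_wrist": "left_brow",
    "left_elbow": "left_cheek",
    "right_knee": "right_brow",
    "right_ankle": "right_jaw",
}

def stimulate(face_region: str, intensity: float) -> None:
    """Stand-in for the call that sets the current to one muscle group."""
    print(f"{face_region}: {intensity:.2f}")

def on_node_update(node: str, height: float, height_max: float = 2.5) -> None:
    """Translate the height of a tracked body node into the drive
    level for its mapped facial region."""
    region = body_to_face.get(node)
    if region is not None:
        stimulate(region, max(0.0, min(1.0, height / height_max)))

on_node_update("left_wrist", 1.9)   # raising the arm raises the brow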

Although the control and response of the system was relatively basic and unsophisticated (for example, raising an arm caused the raising of an eyebrow), it was sufficient to demonstrate the potential for a fascinating interface between performers in very different physical modes. There was poetry in the moment where one person's emotional visage was animated by the movement patterns characteristic of another's body. Although the resulting interactivity was effectively basic puppetry, the array of technologies separating the two performers created a curious distantiation between actor and emotion.

Blowup

Whilst "<bodytext>" is both a dance work and a software environment, for authoring or assisting live performance works, "Blowup" is a dance piece utilising custom written software. Nevertheless, "<bodytext>" and "Blowup" are closely related works in progress, arising contemporaneously from the same creative processes and utilising related software techniques.

"Blowup" consists of live performed choreography and the live acquisition of motion capture data from that performance which is employed in realtime to computationally modify the live video stream of the performance being video projected as part of the staging of the work.

This work can be seen in part as related to the earlier work "I am I was (a dying swan)" in its integration of live processed video and dance performance. In "I am I was" the dancer performed on stage observed by three video cameras, each utilising varied focal and delay techniques to create a progressive manipulation of the video stream. To the rear of the stage, three video projections of the solo dancer appeared at two or three times life size. In effect the performer was surrounded by their own giant image, shifted slightly out of time and space. The objective here was not to augment the choreography or create an interactive immersive system but rather to amplify the physical actuality of the performance and thus enhance audience perception of the dance.

With similar intent, but employing more sophisticated software and technological systems, "Blowup" works by acquiring live motion capture data from the performer and using this data to manipulate the related live video stream of the performance. The degree of manipulation permitted is high, with the performer being able to control many aspects of the representation in the projection, including the positions of various sections of the image, and thus of their own body parts, in relation to one another (fig.5). Other characteristics under performer control include the relative size of sections of the image and even the capability to live-mix previously recorded data with contemporary material. The title "Blowup" refers to the facility given to the live performer to manipulate and even violently distort their live projected image, and references Michelangelo Antonioni's film of the same title, although the last section of his later film "Zabriskie Point" possibly functions as a closer visual analogue of the effect achieved in the piece.

Research and development on this piece is at an advanced stage within a studio environment. However, due to issues with the performance readiness of the technology, the work is still far from realisation in a full performative context. Current research has been undertaken employing pre-recorded motion capture data to manipulate streamed video of live dance. Any other video material or source can also be similarly processed, suggesting that the software could also be employed as a video post-production tool in potentially diverse circumstances.

The research and development work undertaken into live motion capture data acquisition during the Digital Cultures workshops in Nottingham, as described previously, was, whilst not entirely successful, nevertheless a critically important step towards the realisation of this work. As with "<bodytext>", "Blowup" will be further developed employing larger-scale motion capture systems such as the Vicon. However, the intent is to do this in such a manner that the software can be scaled appropriately to future systems more suited to live performance in theatres and other performance environments.

An important element of "Blowup", shared with "I am I was (a dying swan)", concerns how the choreographic material that forms the final work is developed. "I am I was" was conceived as an exploration of the loss of identity and the vulnerability felt by a person immersed in processes of decay. Initial discussions of this concept led to Biggs designing the interactive system so as to provide the environment within which the movement could be choreographed. The system consisted of three digital video cameras, each with an independently programmed digital delay function, and three video projectors. Some cameras were oriented towards points in the performance space occupied by the performer, whilst others were arranged so as to video the projections from the other cameras. Each camera in the sequence was also progressively de-focused. This approach created the effect of a decaying series of video projections, clearly live and contemporaneous with the performer, but slightly and progressively shifted out of congruence by the time delay and focal characteristics of the projections.
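The per-camera delay at the heart of this system amounts to a frame buffer of fixed length. The sketch below assumes a fixed frame rate and invented delay values; the three instances stand for the three independently programmed cameras.

from collections import deque

class DelayLine:
    """Hold incoming frames for a fixed number of ticks before release."""
    def __init__(self, delay_frames: int):
        self.delay = delay_frames
        self.buffer = deque()

    def push(self, frame):
        """Accept the newest frame; return the delayed one, or None
        until the buffer has filled to the programmed delay."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None

# Three independently programmed delays, e.g. 0.5s, 1s and 2s at 25 fps.
cameras = [DelayLine(12), DelayLine(25), DelayLine(50)]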

This video capture and projection system was installed in a rehearsal studio for two weeks, during which time the performer/choreographer (Hawksley) was able to fully explore its implications. This approach was arrived at in the awareness that many stage scenarios, even when the effects are essential to the dance, are typically conceived and realised after the choreographic process is more or less complete. In "I am I was" the scenario was first developed as a systems-based concept, realised as technology, that functioned analogously to the underlying theme of the work (the decay and loss of memory and identity); the choreographer/performer then undertook to develop the choreography within this system over an extended period of time. Thus the style and form of the movement vocabulary, primarily derived from the time spent immersed in the system during rehearsal, was dictated by the visual, spatial and temporal characteristics of the camera/projection installation.

In the early stages of rehearsal there was a great deal of improvisation and play to explore the possibilities and permutations of point of view and, at one point in the final performance, the space is indeed used in a freer, improvised way, producing a tumbling set of images in various stages of decay that exploit this knowledge acquired by the performer. However, a decision was reached early on that the most poetic use of the system was to tightly choreograph the majority of the movement. The images projected on each of the three screens were a function of the positioning of the dancer in the space and of each other. To precisely frame a specific image in one frame, and then define the resultant image in the second or third elements of the sequence, demanded minute attention to detail of placing. By altering the spatial permutations of the image the performer could control whether one, two or three screens were 'live' and how the performer's body was oriented to the audience in each projection.

As a function of this, the choreography took on a quality of tautness through an obsessive attention to detail in gesture and placement. The experience, as performer, was of a sense of tension and entrapment within a Panopticon-like device, rendering the self-image to the scrutiny of the surveillance apparatus. Because of the magnification of detail in the projections, the performer was confronted by the exaggerated consequences of any gesture, intended or otherwise. This can be understood in relation to Todd's notion of 'structural hygiene' (Todd 1937): "Small strains and tensions then assume an importance quite incommensurate with their initial dimensions" (ibid.), the body subjected to a sensation of the self-image being placed under both spotlight and microscope.

A further challenge was to allow the development of the work to be led by the images without the performer becoming self-absorbed in them. The situation emphasised the difficulty for the artist who "is called upon to be completely involved while distanced - detached without detachment" (Brook 1968).

Daily videoing of rehearsals, together with input from Biggs, gave Hawksley an external view of the evolution of the work. It was apparent that the audience's gaze would tend to be drawn to the projections because of the dominating quality of the huge images. A device emerged whereby the performer faced upstage throughout almost the entire performance, avoiding revealing the face or making eye contact with audience or image. It was hoped that this technique would contribute to a situation in which the viewer would become aware of the emerging problematic between the real and the virtual.

In "Blowup" this same process of choreographic development will also be employed, the choreographer/performer spending many hours immersed within the operating interactive system. However, an intention in "Blowup" is to explore the separation of the roles of performer and choreographer, with a separate dancer or dancers working within the environment.

Conclusion (deferred)

This text has sought to describe and analyse an ongoing body of works employing dance and technology intended to problematise the awareness of memory and identity. The work uses surveillance and data acquisition technologies so as to facilitate the remapping of data across typologies and media, from objective information to the subjective sense of memory. The software being researched and developed will allow the integration of visual, auditory and temporal media and the arbitrary assignment of data-sets across these media in order to establish new and unexpected relationships between phenomena and our perception of them.

As the work is still in progress it is not yet possible to evaluate its outcomes; that will have to wait until at least interim public performances are possible. Given the current trajectory of research and development it is hoped that this will be a medium-term objective. Whatever the outcome, we will seek to ensure the work remains open and experimental in intent, allowing for further iteration in its development beyond the laboratory and studio. This in part reflects the character of the work itself, but is also evidence of the project's partial origins in interactive installation, where it is the viewer, not the performer, who is the primary interactor.

This last issue appears to be key when dealing with interactive media in live performance. In previous live works the authors have observed that much of the value for the viewer in experiencing interactive artworks, the sense of immediate engagement in the emergent phenomena that is an artwork and the implicit ontologies that are thus embodied through physical interaction, is deferred when the viewer is required to be a member of an audience passively regarding the spectacle of the performer interacting with the work on their behalf. This process of deferment will be foregrounded in the performance works in order to enhance audience awareness of the relationship between the performer and their subjective experience of interaction within the installation space of the work.

The key issue, with the work described here, is the role of memory (that of the performer or viewer, whichever is the subject) in relation to the recording and manipulation capabilities of current information technology. Whilst musique concrète showed that it is possible to manipulate audio tape, and other analogue media, to produce apparently new material from pre-existing recordings, the more recent convergence of recording technologies and the computer has led to a situation where material can not only be manipulated or post-processed but also re-computed, creating new instances of data. This permits a condition in which the recorded self can be represented in highly novel ways, questioning what the self, and its representations, might become, especially where that recorded self is "re-computed" and "re-played" in realtime. The ontological implications of this will very likely continue to occupy the authors for some time yet.


Fig.5 A "film-strip" of screen grabs showing images from a video sequence of a dancer being graphically manipulated in realtime by equivalent motion capture data.

References:

Beckett, Samuel (1960), Krapp's Last Tape and Other Dramatic Pieces, 4th edition, New York, Grove Press
Biggs, Simon (2005), "Multiple Perspectives/Multiple Readings", originally presented at the User_Mode symposium, Tate Modern, London, 2003; published in Distributed Aesthetics, Fibreculture Journal, Sydney, Australia, December 2005
Brook, Peter (1968, reprinted 1990), The Empty Space, London, Penguin Books, p.131
Elsenaar, Arthur and Remko Scha (2002), "On Electric Performance Art and its History", Leonardo Music Journal, volume 12, USA
Todd, Mabel (1937, reprinted 1997), The Thinking Body: A Study of the Balancing Forces of Dynamic Man, London, Dance Books Ltd, p.43
Yates, Frances (reprinted 1992), The Art of Memory, London, Pimlico