With the recent proliferation of machine learning algorithms, there have been increasing attempts to analyse human movement using computational models, to create tools for visualising motion, and even to build generative systems that output choreographed dances learned from observation. But these efforts have mainly been aimed at professional dancers and choreographers, requiring computer scientists to build specialised tools from scratch. In such solution-focused endeavours, playful exploration of motion and digital interaction is limited.
The aim of this project is not to solve the open problems of building computational models that identify personal movement variation, nor to advance current machine learning techniques. Rather, it is to explore human movement, and what constitutes it, through a series of interactions and interpretations, while inviting the audience to take part.
Through a three-dimensional capture camera, the audience can interact and play in the ‘particle field’, where individual shapes interact, propelled by the velocities of the movements. The movements are captured as flow fields, indicating the relative motion in space. These flow fields are then used as training data for a probabilistic prediction algorithm that renders a sequence of movement, which, in the end, is interpreted as a series of visuals. Three types of movement are represented as different colours and shapes. The ‘core’ motion, emanating from the centre of the body, is associated with darker colours, involving a large portion of the body to produce big and slow movements. These give way to lighter movements represented as brighter colours, and finally, swift and large motions are represented as the lightest colours. Each shape produced is unique, yet all are inextricably linked to each other, representing all the encountered movements as one.
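The particle-field idea described above can be sketched in a few lines: particles sample a velocity from a flow-field grid and are advected by it each frame. This is a minimal illustration, not the installation's actual code; the grid size, particle count, and the stand-in rotational field are all hypothetical, since in the piece the velocities come from the depth camera.

```python
import numpy as np

GRID = 64   # flow-field resolution (hypothetical)
N = 200     # number of particles (hypothetical)
DT = 0.1    # integration time step

rng = np.random.default_rng(0)

# Example flow field: a simple rotation around the grid centre,
# standing in for the motion captured from the camera.
ys, xs = np.mgrid[0:GRID, 0:GRID].astype(float)
cx = cy = GRID / 2
u = -(ys - cy)   # x-velocity at each grid cell
v = (xs - cx)    # y-velocity at each grid cell

# Particles start at random positions inside the grid.
pos = rng.uniform(0, GRID - 1, size=(N, 2))

def step(pos):
    """Advance each particle by sampling its nearest flow cell."""
    ix = np.clip(pos[:, 0].astype(int), 0, GRID - 1)
    iy = np.clip(pos[:, 1].astype(int), 0, GRID - 1)
    vel = np.stack([u[iy, ix], v[iy, ix]], axis=1)
    return np.clip(pos + DT * vel, 0, GRID - 1)

for _ in range(100):
    pos = step(pos)
```

Real installations typically smooth the sampled velocities over time and interpolate between grid cells, but nearest-cell sampling keeps the sketch short.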
Experimentation & Tests - Selected Frames