OpenPTrack: Robust Open-Source Person Tracking For Creative Coding – Booths 29 and 30, Demonstration
Randy Illum, University of California, Los Angeles, USA
OpenPTrack was developed to enable educators, artists, and cultural institutions to incorporate collaborative, body-based interfaces into interactive experiences at low cost. This motivated our goal of providing scalable, low-latency, occlusion-resistant tracking via networks of off-the-shelf 3D imagers (such as the Microsoft Kinect). OpenPTrack can track many people in large spaces, allowing for group interaction and collaboration, and its data is easily integrated into "creative coding" platforms.
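As a rough sketch of that integration path, OpenPTrack can stream track data as JSON over UDP to creative-coding clients. The message schema below (a "tracks" array with "id", "x", and "y" fields) is an assumption about a typical configuration, not a verified specification; a client would parse each datagram into per-person positions:

```python
import json

def parse_tracks(message: str):
    """Parse one JSON track message into (id, x, y) tuples.
    The field names used here ("tracks", "id", "x", "y") are
    assumptions about OpenPTrack's UDP JSON output, not a
    verified schema -- check the deployed version's docs."""
    data = json.loads(message)
    return [(t["id"], t["x"], t["y"]) for t in data.get("tracks", [])]

# Hypothetical datagram resembling a tracker update for one person.
sample = '{"header": {"seq": 1}, "tracks": [{"id": 7, "x": 1.2, "y": -0.4, "height": 1.7}]}'
print(parse_tracks(sample))
```

In practice the client would bind a UDP socket, call `recvfrom` in a loop, and feed each datagram through a parser like this before driving visuals in a creative-coding environment.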
The use of interactive technologies in museums and informal learning spaces to engage visitors with exhibit content is becoming standard practice, owing to the proliferation and declining cost of sensing and computing hardware. Even as interactive systems become more widely deployed, current technologies have limitations: they constrain collaboration between participants, and interactive spaces are often small due to intrinsic limitations of the underlying hardware. OpenPTrack addresses these issues with open-source person-tracking technology that allows many people to collaborate with interactive content in large spaces. Its development was led by the UCLA Center for Research in Engineering, Media and Performance (REMAP) to provide robust positional tracking of people using networked 3D imagers.
For this demonstration, we will install a multi-imager OpenPTrack system and present experimental applications on a large display: 1) the CyberMural, an interactive digital mural whose content is explored through individual or group movement; and 2) the Molecule Game, a mixed-reality interactive science game in which participants become molecules and create solid, liquid, or gas states by moving at different speeds in conjunction with other participants.
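The state-change mechanic of the Molecule Game can be sketched as a simple speed classifier driven by tracked positions. The thresholds below (0.3 and 1.0 m/s) are illustrative guesses for this sketch, not values from the actual game:

```python
def classify_state(speed_m_s: float) -> str:
    """Map a tracked participant's movement speed to a matter state.
    The thresholds (0.3 and 1.0 m/s) are illustrative assumptions,
    not tuned values from the Molecule Game itself."""
    if speed_m_s < 0.3:
        return "solid"   # nearly stationary participants cluster as a solid
    elif speed_m_s < 1.0:
        return "liquid"  # walking pace flows like a liquid
    return "gas"         # fast movement disperses like a gas
```

A game loop would compute each participant's speed from successive OpenPTrack position updates and render the group's aggregate state accordingly.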
Munaro, M., Horn, A., Illum, R., Burke, J., and Rusu, R. OpenPTrack: People Tracking for Heterogeneous Networks of Color-Depth Cameras. In IAS-13 Workshop Proceedings: 1st Intl. Workshop on 3D Robot Perception with Point Cloud Library, pp. 235–247, Padova, Italy, 2014.