Researchers at Lancaster University in the UK, Stanford University in California, and the FX Palo Alto Laboratory research centre in the US are developing a playback system that slows down videos in response to the speed of its viewers. The system, called Reactive Video, is a vision-based tool that adapts video playback to support users learning or practising a physical skill.
The system, as reported by New Atlas, uses a Microsoft Kinect device and bespoke software to track the positions of the viewer's elbows, knees, hands, arms, hips and legs. This data is used to create a real-time animated model of the viewer.
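The project's tracking code is not reproduced in the article; purely as an illustration, the Python sketch below shows how skeletal joint positions from a Kinect-style sensor might be gathered into a simple real-time pose model. The joint names and frame format are assumptions, not the authors' actual software.

```python
# Hypothetical sketch: collecting Kinect-style joint positions into a pose model.
# Joint names and the frame format are assumptions, not the authors' code.
from dataclasses import dataclass, field
from typing import Dict, Tuple

JOINTS = ("elbow_l", "elbow_r", "knee_l", "knee_r",
          "hand_l", "hand_r", "hip_l", "hip_r")

@dataclass
class Pose:
    """3D positions (x, y, z) of tracked joints at one time step."""
    joints: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)

    def update(self, frame: Dict[str, Tuple[float, float, float]]) -> None:
        """Overwrite joint positions with the latest sensor frame."""
        for name in JOINTS:
            if name in frame:
                self.joints[name] = frame[name]

# Example: feed one simulated sensor frame into the viewer's pose model.
viewer = Pose()
viewer.update({"hand_r": (0.41, 1.02, 2.30), "elbow_r": (0.35, 1.20, 2.28)})
print(viewer.joints)
```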
Once the session gets underway, the system compares the recorded movements of the instructor's model to the corresponding real-time movements of the viewer's model.
As the instructor is performing a certain action, probabilistic algorithms determine approximately how long it will take the viewer to do that same thing. The video is then automatically slowed down, in order to keep the instructor from getting too far ahead of the viewer.
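The article does not spell out how the comparison drives the playback rate; as a rough sketch under assumed thresholds and speed values, the logic might compare corresponding joints of the two models and slow the video as the viewer's lag grows. The function names and constants below are illustrative, not the published method.

```python
# Hypothetical sketch: slow playback when the viewer lags behind the instructor.
# The distance threshold and speed floor are illustrative assumptions.
import math

def pose_distance(instructor: dict, viewer: dict) -> float:
    """Mean Euclidean distance between corresponding joints of two poses."""
    shared = instructor.keys() & viewer.keys()
    if not shared:
        return float("inf")
    total = sum(math.dist(instructor[name], viewer[name]) for name in shared)
    return total / len(shared)

def playback_speed(lag: float, threshold: float = 0.25) -> float:
    """Return a playback rate in (0, 1]; slow the video as the lag grows."""
    return 1.0 if lag <= threshold else max(0.25, threshold / lag)

# Example: a viewer trailing the instructor by a wide margin slows the video.
instructor_pose = {"hand_r": (0.4, 1.5, 2.3)}
viewer_pose = {"hand_r": (0.4, 1.0, 2.3)}
lag = pose_distance(instructor_pose, viewer_pose)
print(playback_speed(lag))  # < 1.0, i.e. playback is slowed
```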
Because the system works with pre-existing videos, there is no need to create bespoke or specially authored content, and it can provide real-time guidance and feedback to better support users learning new movements.
Two approaches to adaptive video playback, one based on a discrete Bayes filter and one on a particle filter, were evaluated on a data set of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements, although reversing playback can be problematic.
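The filter implementations are not given in the article; as a minimal sketch, a discrete Bayes filter of this kind could maintain a belief over video frame indices and update it from how well the viewer's pose matches each frame's instructor pose. The motion and likelihood models below are assumptions chosen only to show the predict/update cycle.

```python
# Hypothetical sketch: a discrete Bayes filter over video frame indices.
# The belief tracks which instructor frame the viewer currently matches;
# the motion model and likelihoods are illustrative assumptions.
import numpy as np

def bayes_update(belief: np.ndarray, likelihood: np.ndarray,
                 step_prob: float = 0.8) -> np.ndarray:
    """One predict/update cycle of a discrete Bayes filter.

    belief     -- probability over video frame indices (sums to 1)
    likelihood -- pose-match score of the viewer against each frame
    step_prob  -- assumed probability the viewer advanced by one frame
    """
    # Predict: the viewer either stays on the same frame or moves one forward.
    predicted = (1.0 - step_prob) * belief
    predicted[1:] += step_prob * belief[:-1]
    # Update: weight by the pose-match likelihood and renormalise.
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Example: the belief concentrates on the frame the viewer matches best.
belief = np.full(5, 0.2)                        # uniform prior over 5 frames
likelihood = np.array([0.1, 0.2, 0.9, 0.3, 0.1])
belief = bayes_update(belief, likelihood)
print(belief.round(3), "-> most likely frame:", belief.argmax())
```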