The AI system consists of laser-based motion tracking (LiDAR), a machine learning system that predicts movement behavior, and an evaluation system that makes automated decisions and interacts with the audience via projections, moving heads, and a synthetic voice.
CODING - The nerds develop the machine learning system, the control software for lights and projections, and the geometric calculations to evaluate movements.
The machine learning system is based on the trajectory forecasting framework TrajNet++. The algorithm attempts to predict the movement behavior of people in groups. The software development can be followed on GitHub.
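TrajNet++-style models consume short, fixed-length observation windows per person rather than a raw sensor stream. A minimal sketch of that glue layer is below; the names (`on_frame`, `history`, `ready_windows`) are ours, and the 9-frame observation window at 2.5 fps is the benchmark's common default, not necessarily what runs on stage.

```python
from collections import defaultdict, deque

OBS_LEN = 9  # TrajNet++'s usual setting: observe 9 frames, predict 12

history = defaultdict(lambda: deque(maxlen=OBS_LEN))  # person ID -> recent (x, y)

def on_frame(positions):
    """positions: {person_id: (x, y)} for one tracker frame."""
    for pid, xy in positions.items():
        history[pid].append(xy)

def ready_windows():
    """Yield (person_id, [(x, y), ...]) once a full observation window exists."""
    for pid, buf in history.items():
        if len(buf) == OBS_LEN:
            yield pid, list(buf)
```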
TRACKING - The LiDAR lasers measure the exact position of each person in the room. This way, the AI knows where people are at all times, how they are moving, how fast or slowly they are walking, and how close they are getting to each other. We use six lasers distributed around the performance area so that everyone is always in view of the AI and each person is assigned a unique ID. There's no hiding from the "laser eyes."
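Keeping those IDs stable from frame to frame is a data-association problem. As an illustration only (this greedy nearest-neighbour matcher and all names in it are made up, not the production tracker), one very simple approach looks like this:

```python
import math

MAX_JUMP = 0.6  # metres a person can plausibly move between two frames

class Track:
    def __init__(self, track_id, x, y):
        self.id = track_id
        self.x, self.y = x, y

def assign_ids(tracks, detections, next_id):
    """Match new (x, y) detections to existing tracks; spawn IDs for the rest."""
    unmatched = list(detections)
    for track in tracks:
        if not unmatched:
            break
        # closest detection to this track's last known position
        best = min(unmatched,
                   key=lambda d: math.hypot(d[0] - track.x, d[1] - track.y))
        if math.hypot(best[0] - track.x, best[1] - track.y) <= MAX_JUMP:
            track.x, track.y = best
            unmatched.remove(best)
    for x, y in unmatched:  # people the tracker hasn't seen before
        tracks.append(Track(next_id, x, y))
        next_id += 1
    return tracks, next_id
```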
CALIBRATION - The laser tracking system and the ground projection are calibrated to each other. During the first tests, each tracked person's X/Y position and ID are projected at their location.
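One common way to do such a calibration is to fit a homography from a handful of reference points, for example with OpenCV. The sketch below uses invented example values for the four correspondences; in practice they would come from projecting markers and reading back the tracked positions.

```python
import numpy as np
import cv2  # OpenCV

# (x, y) in metres, as reported by the LiDAR tracker
tracker_pts = np.float32([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])
# the same four spots in projector pixels
projector_pts = np.float32([[120, 80], [1800, 95], [1790, 990], [110, 1000]])

H, _ = cv2.findHomography(tracker_pts, projector_pts)

def to_projector(x, y):
    """Map a tracked position into projector pixel coordinates."""
    pt = np.float32([[[x, y]]])
    px, py = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(px), float(py)

print(to_projector(4.0, 3.0))  # roughly the centre of the projection
```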
PREDICTION - The ML system processes the real-time tracking data and predicts each person's future movement.
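The learned model itself is not shown here, so as a stand-in, here is the constant-velocity baseline that trajectory-forecasting work (TrajNet++ included) commonly compares against: extrapolate the last observed step. The 12-frame horizon matches the benchmark's usual setting.

```python
PRED_LEN = 12  # future frames to predict

def predict_constant_velocity(window):
    """window: list of observed (x, y) points; returns PRED_LEN future points."""
    (x0, y0), (x1, y1) = window[-2], window[-1]
    vx, vy = x1 - x0, y1 - y0  # last observed velocity per frame
    return [(x1 + vx * (k + 1), y1 + vy * (k + 1)) for k in range(PRED_LEN)]
```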
MOVING BEAMS - The light beams become the arms of the AI, with which it intervenes in the action. Anyone who breaks a rule, such as getting too close to the others, is blinded.
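In code, such a rule boils down to a pairwise distance check over the tracked positions. The sketch below uses a made-up MIN_DISTANCE of 1.5 m and a placeholder point_beam_at function instead of the real moving-head (DMX) control:

```python
import itertools
import math

MIN_DISTANCE = 1.5  # metres; illustrative threshold, not the show's rule set

def point_beam_at(person_id, x, y):
    """Placeholder for aiming a moving head at a position."""
    print(f"blinding person {person_id} at ({x:.2f}, {y:.2f})")

def enforce_distance(positions):
    """positions: {person_id: (x, y)}; blind every pair that gets too close."""
    for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2):
        if math.hypot(pa[0] - pb[0], pa[1] - pb[1]) < MIN_DISTANCE:
            point_beam_at(a, *pa)
            point_beam_at(b, *pb)
```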
LIGHT NET - A mirror installation creates a network of light. Countless rays cross in space - a metaphor for the artificial neural network. For the test, two mirror objects are printed, glued, and hung up to check the angles.
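For checking the angles on paper before hanging anything, the standard reflection formula is enough: an incoming ray with direction d leaves a mirror with unit normal n as r = d - 2(d . n)n. A tiny sketch with invented values:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

beam = [1.0, -1.0, 0.0]          # incoming beam direction
mirror_normal = [0.0, 1.0, 0.0]  # mirror facing straight up
print(reflect(beam, mirror_normal))  # -> [1. 1. 0.], beam bounces upward
```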
mail: contact@artesmobiles.art
connect with us on:
Signal: signal.artesmobiles.art
Telegram: t.me/artesmobiles