Actions (2023)
Perception Engines techniques extended to video sequences. Each work is an ink print created and recognized by neural networks trained to classify dynamic behaviour from videos of human actions. The shapes and colors in this sequence were generated by Microsoft's X-CLIP models, which extend CLIP to general video-language understanding by processing a stack of sequential input frames. Each activity is drawn from DeepMind's Kinetics dataset, a standard benchmark for evaluating video understanding in machine learning. In each canvas print the motion is encoded in an op-art-inspired style that is unrolled for evaluation as a video.
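The unrolling step can be illustrated with a minimal sketch, under assumed parameters: a tall static print is cut into overlapping windows, and the crops are stacked as the frames of a short clip that a video classifier can consume. The window size, stride, and function name here are hypothetical, not the parameters actually used for these prints.

```python
import numpy as np

def unroll_canvas(canvas: np.ndarray, frame_height: int, stride: int) -> np.ndarray:
    """Slide a window down a tall canvas image of shape (H, W, C) and
    stack each crop as one frame of a (T, frame_height, W, C) clip.
    A hypothetical illustration of unrolling a static print into a video."""
    height = canvas.shape[0]
    frames = [canvas[top:top + frame_height]
              for top in range(0, height - frame_height + 1, stride)]
    return np.stack(frames)

# Example: a 256x64 "print" unrolled into 64x64 frames with stride 32
# yields a 7-frame clip ready for a video model such as X-CLIP.
canvas = np.random.rand(256, 64, 3)
clip = unroll_canvas(canvas, frame_height=64, stride=32)
```

The resulting stack of frames plays the static pattern as apparent motion, which is what the video classifier is asked to recognize as a Kinetics activity.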