Archived entries for eye movements

Toolkit for Visualizing Eye-Movements and Processing Audio/Video


Original video still (without eye-movement and heatmap overlay); copyright Dropping Knowledge Video Republic.

From 2008 to 2010, I worked on the Dynamic Images and Eye-Movements (D.I.E.M.) project, led by John Henderson, with Tim Smith and Robin Hill. Together we collected nearly 200 participants' eye movements on nearly 100 short films ranging from 30 seconds to 5 minutes in length. The database is freely available and covers a wide range of film styles, from advertisements to movie and music trailers to news clips. During my time on the project, I developed an open-source toolkit to complement D.I.E.M.: C.A.R.P.E., or Computational Algorithmic Representation and Processing of Eye-movements (Tim's idea!). I used it to visualize and process the data we collected, and to write up a journal paper describing a strong correlation between tightly clustered eye movements and the motion in a scene. We also published visualizations of our entire corpus on our Vimeo channel. When the project came to a halt, so did the visualization software. I've since picked up the ball and re-written it entirely from the ground up.

The image below shows how you can represent the movie, the motion in the scene of the movie (represented in … Continue reading...
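The correlation mentioned above between tightly clustered eye movements and scene motion can be illustrated with a toy sketch. This is synthetic data, not the DIEM corpus: per-frame motion magnitudes and viewer gaze points are invented here, with gaze spread shrinking as motion grows, just to show how such a correlation could be measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame data: a motion magnitude for each frame, and the
# (x, y) gaze points of many viewers on that frame.
n_frames, n_viewers = 200, 40
motion = rng.uniform(0.0, 1.0, n_frames)

# Synthesize gaze so that higher motion -> tighter clustering (smaller
# spread, in pixels, around a common point of interest).
spread = 50.0 * (1.2 - motion)
gaze = rng.normal(0.0, 1.0, (n_frames, n_viewers, 2)) * spread[:, None, None]

# Gaze dispersion per frame: mean distance of viewers from the frame's
# mean gaze position (lower = more tightly clustered).
centroid = gaze.mean(axis=1, keepdims=True)
dispersion = np.linalg.norm(gaze - centroid, axis=2).mean(axis=1)

# Pearson correlation between motion and gaze dispersion; negative here,
# i.e. more motion goes with more tightly clustered gaze.
r = np.corrcoef(motion, dispersion)[0, 1]
print(f"correlation(motion, gaze dispersion) = {r:.2f}")
```

Real analyses on the corpus are of course more involved; this only demonstrates the shape of the measurement.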

Facial Appearance Modeling/Tracking


I’ve been working on a method for automatic head-pose tracking, and along the way have come to model facial appearances. I start by initializing a facial bounding box using the Viola-Jones detector, a well-known and robust detector that can be trained to find objects such as faces. This lets me localize the face. Once I know where the 2D plane of the face is in an image, I can register an Active Shape Model like so:
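The core trick behind Viola-Jones is the integral image, which lets rectangular Haar-like features be evaluated in constant time. A minimal NumPy sketch (my own illustration, not the detector I actually use) of that building block:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar feature: top half minus bottom half. Responds
    strongly to horizontal intensity edges, like the dark eye region
    above the brighter cheeks."""
    half = h // 2
    return rect_sum(ii, y, x, half, w) - rect_sum(ii, y + half, x, half, w)

# Tiny example: a bright band above a dark band gives a strong response.
img = np.zeros((8, 8))
img[:4, :] = 1.0
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 8, 8))  # 32.0
```

The full detector cascades thousands of such features learned by boosting; in practice one would call a library implementation (e.g. OpenCV's Haar cascades) rather than build this by hand.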

After capturing multiple views of the possible appearance variations of my face, including slight rotations, I construct an appearance model.

The idea I am working with is to use the first components of variation of this appearance model to determine pose. Here I show the first two basis vectors and the images they reconstruct:

As you may notice, these two basis vectors very neatly encode rotation. By looking at the coefficients of a face projected onto them, you can also interpret pose.… Continue reading...
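The appearance-model machinery above boils down to PCA on registered face images. A hedged sketch with synthetic data (invented modes of variation standing in for the rotation modes; not my actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical registered face crops, flattened to vectors.
n_samples, h, w = 60, 16, 16
# Synthesize data with two dominant modes of variation (e.g. yaw/pitch).
mode1 = rng.normal(size=h * w)
mode2 = rng.normal(size=h * w)
coeffs = rng.normal(size=(n_samples, 2)) * [5.0, 3.0]
faces = coeffs @ np.vstack([mode1, mode2]) + 0.1 * rng.normal(size=(n_samples, h * w))

# PCA via SVD of the mean-centred data.
mean_face = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
basis = Vt[:2]  # first two basis vectors ("eigenfaces")

# Project a face onto the basis; if those modes encode rotation, the two
# coefficients parameterise its pose. Reconstruction from two components
# is nearly exact because the data is (by construction) close to rank 2.
proj = (faces[0] - mean_face) @ basis.T
recon = mean_face + proj @ basis
err = np.linalg.norm(faces[0] - recon) / np.linalg.norm(faces[0])
print(f"coefficients: {proj}, relative reconstruction error: {err:.3f}")
```

On real faces the leading components mix pose with identity and lighting, which is why registration before modeling matters.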

6DOF Head Tracking

The following demo works with the SeeingMachines faceAPI in openFrameworks, controlling a Mario avatar. It also has some really poor gesture recognition (and learning, though that isn't shown here); a simple threshold on the rotation DOF would have produced better results for the simple task of recognizing look up/down and left/right gestures.
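The thresholding alternative I mention could look something like this minimal sketch, assuming the tracker reports yaw and pitch in degrees as two of the six DOF (function and threshold value are my own illustration):

```python
def classify_gesture(yaw, pitch, threshold=15.0):
    """Return the dominant look direction, or None inside the dead zone.

    yaw: rotation left/right in degrees (positive = right);
    pitch: rotation up/down in degrees (positive = up).
    """
    if abs(yaw) < threshold and abs(pitch) < threshold:
        return None  # head roughly facing the camera
    if abs(yaw) >= abs(pitch):
        return "look_right" if yaw > 0 else "look_left"
    return "look_up" if pitch > 0 else "look_down"

print(classify_gesture(25.0, 4.0))    # look_right
print(classify_gesture(-3.0, -20.0))  # look_down
print(classify_gesture(2.0, 5.0))     # None
```

A dead zone plus a winner-take-all on the larger rotation is crude, but for four discrete gestures it beats a poorly trained recognizer.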

6DOF Head Tracking from pkmital on Vimeo.

Interfacing the SeeingMachines faceAPI with openFrameworks to control a 3D Mario avatar.

This is just with the non-commercial license. The full commercial license (~$3000?) gives you access to lip/mouth and eyebrow tracking, as well as much more flexibility in how you use their API with different or multiple cameras and in accessing image data.

Of course, there are other approaches to producing similar results. Mutual-information-based template trackers, for instance, seem to be state of the art. Take a look at recent work by Panin and Knoll using OpenTL:

 

I imagine a lot of people would like this technology.… Continue reading...

DIEM Website

The DIEM Project (Dynamic Images and Eye Movements) has a sleek new website which you can check out here: http://www.psy.ed.ac.uk/diem Continue reading...

CLOSE-UP 2

An event organized by the new Center for Film, Performance, and Media Arts (CFPMA) was held today at the University of Edinburgh, discussing recent topics in… film, performance, and media arts.

It was an interesting group of people who, strangely, somehow all had much in common. I had the good fortune of presenting my research as it relates to DIEM in place of Tim J. Smith.

The schedule:

Close-Up 2: Schedule for Wednesday 17th June 2009

10am coffee and tech checks (G.11, William Robertson Building)

10.30 Welcome, Annette Davison (Music, ACE and Director, Cfpma), Martine Beugnet (LLC, Convener of Film Studies)

Who is who, where is what? People and resources for which the Cfpma will provide a point of convergence.

11am Individual Presentations (MAX 10 mins each):

Andrew Lawrence (African Studies, SSPS) — Difficult satire under austerity: the films of Sissako and Amoussou

Martine Beugnet (Film, LLC) — “The Wounded Screen”

Richard Williams (Architecture, ACE) — “The Modernist City on Film”

Stephen Cairns (Architecture, ACE) — “Cultures of Legibility: Emergent Urban Landscapes in Southeast Asia”

Simon Frith and Annette Davison (Music, ACE) — “The Role of Cinemas in the History of Live Music”

Mary Fogarty (Music, ACE) — “The … Continue reading...

Eye movements during video

Part of my research entails investigating ways of visualizing eye-movement data during dynamic images.
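One common way to build the heatmap overlays used in these visualizations is to accumulate a Gaussian at each gaze point over the frame. A small NumPy sketch (an illustration of the general technique, not the exact rendering code used in the videos):

```python
import numpy as np

def gaze_heatmap(points, shape, sigma=20.0):
    """Accumulate an isotropic Gaussian at each (x, y) gaze point over a
    frame of the given (height, width), then normalise to [0, 1] so the
    result can be alpha-blended over the video frame."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape)
    for x, y in points:
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()

# Example: three viewers fixating near the same region of a 120x160 frame.
heat = gaze_heatmap([(80, 60), (85, 55), (78, 62)], (120, 160))
print(heat.shape, float(heat.max()))
```

For video, the same accumulation is repeated per frame (often with a temporal window over recent fixations) and colour-mapped before compositing.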

Some vids:

And one of President Obama’s inaugural speech:

from VisCogEdinburgh. Continue reading...


Copyright © 2010 Parag K Mital. All rights reserved. Made with WordPress.