visualization | archive.pkmital.com: computational audiovisual augmented reality research

C.A.R.P.E. version 0.1.1 release
https://archive.pkmital.com/2015/02/09/c-a-r-p-e-version-0-1-1-release/
Mon, 09 Feb 2015 20:01:12 +0000

I’ve updated C.A.R.P.E., a graphical tool for visualizing eye-movements and processing audio/video. This release adds a graphical timeline (thanks to ofxTimeline by James George/YCAM), support for audio playback and scrubbing (using pkmAudioWaveform), audio saving, and various bug fixes. It also changes some parameters of the XML file and adds others; please refer to this example XML file for how to set up your own data.

See my previous post for information on the initial release.

Please fill out the following form if you’d like to use C.A.R.P.E.:

Continue reading...

The post C.A.R.P.E. version 0.1.1 release first appeared on http://archive.pkmital.com.

Toolkit for Visualizing Eye-Movements and Processing Audio/Video
https://archive.pkmital.com/2015/02/06/toolkit-for-visualizing-eye-movements-and-processing-audio-video/
Fri, 06 Feb 2015 23:53:18 +0000

Original video still (without eye-movements and heatmap overlay) copyright Dropping Knowledge Video Republic.

From 2008 to 2010, I worked on the Dynamic Images and Eye-Movements (D.I.E.M.) project, led by John Henderson, with Tim Smith and Robin Hill. Together we collected nearly 200 participants’ eye-movements on nearly 100 short films ranging from 30 seconds to 5 minutes in length. The database is freely available and covers a wide range of film styles, from advertisements to movie and music trailers to news clips. During my time on the project, I developed an open-source toolkit to complement D.I.E.M.: C.A.R.P.E., or Computational Algorithmic Representation and Processing of Eye-movements (Tim’s idea!), for visualizing and processing the data we collected. I used it in writing up a journal paper describing a strong correlation between tightly clustered eye-movements and the motion in a scene. We also output visualizations of our entire corpus on our Vimeo channel. The project came to a halt, and so did the visualization software. I’ve since picked up the ball and rewritten it entirely from the ground up.
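As a rough illustration of that clustering-versus-motion idea (made-up data and simplified math, not the actual D.I.E.M. analysis pipeline), one can score how tightly gaze is clustered on each frame and correlate that with a per-frame motion measure:

```python
import numpy as np

# Illustrative sketch: gaze positions (x, y) per subject per frame, and a
# per-frame motion magnitude (e.g. mean optical-flow magnitude). All data
# here is random; the real study used recorded eye-movements.
rng = np.random.default_rng(1)
n_frames, n_subjects = 100, 20
gaze = rng.uniform(0.0, 1.0, size=(n_frames, n_subjects, 2))
motion = rng.uniform(0.0, 1.0, size=n_frames)

# Cluster tightness per frame: mean distance of each subject's gaze from
# that frame's centroid (smaller = more tightly clustered gaze).
centroid = gaze.mean(axis=1, keepdims=True)
dispersion = np.linalg.norm(gaze - centroid, axis=-1).mean(axis=1)

# Pearson correlation between gaze dispersion and scene motion.
r = np.corrcoef(dispersion, motion)[0, 1]
print(round(float(r), 3))
```

With real data, a strong negative correlation here would mean gaze clusters more tightly when there is more motion in the scene.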

The image below shows how you can represent the movie, the motion in the scene of the movie (represented in … Continue reading...

Real-Time Object Recognition with ofxCaffe
https://archive.pkmital.com/2015/01/04/real-time-object-recognition-with-ofxcaffe/
Sun, 04 Jan 2015 03:53:48 +0000

I’ve spent a little time with Caffe over the holiday break trying to understand how it might work in the context of real-time visualization and object recognition in more natural scenes/videos. So far, I’ve implemented the following deep convolutional networks using the 1280×720 webcam on my 2014 MacBook Pro:

The above image depicts the output of an 8×8 grid detection, with brighter regions indicating higher probabilities of the class “snorkel” (automatically selected by the network as the highest-probability class out of 1000 possible classes).
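The grid-of-probabilities idea can be sketched in a few lines (assumed shapes and random scores for illustration; ofxCaffe’s actual output format may differ):

```python
import numpy as np

# Hypothetical network output: an 8x8 spatial grid, each cell holding
# unnormalized scores for 1000 object classes.
rng = np.random.default_rng(0)
scores = rng.standard_normal((8, 8, 1000))

# Softmax over the class axis turns scores into per-cell probabilities.
exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# Pick the class with the highest probability anywhere in the grid, then
# extract its 8x8 probability map to draw as a heatmap overlay.
best_class = np.unravel_index(probs.argmax(), probs.shape)[-1]
heatmap = probs[:, :, best_class]  # brighter cells = higher probability

print(best_class, heatmap.shape)
```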

So far I have spent some time understanding how Caffe keeps each layer’s data during a forward/backward pass, and how the deeper layers could be “visualized” in a … Continue reading...
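One common way to “visualize” a deeper layer is to tile its activation maps into a single grid image; a minimal sketch, assuming plain NumPy arrays rather than Caffe’s actual blob API:

```python
import numpy as np

def tile_activations(acts, pad=1):
    """Tile (n_channels, h, w) activation maps into one 2-D grid image."""
    n, h, w = acts.shape
    grid = int(np.ceil(np.sqrt(n)))
    # Normalize each channel to [0, 1] so every tile is visible.
    lo = acts.min(axis=(1, 2), keepdims=True)
    hi = acts.max(axis=(1, 2), keepdims=True)
    norm = (acts - lo) / np.where(hi > lo, hi - lo, 1)
    out = np.zeros((grid * (h + pad), grid * (w + pad)))
    for i in range(n):
        r, c = divmod(i, grid)
        out[r * (h + pad):r * (h + pad) + h,
            c * (w + pad):c * (w + pad) + w] = norm[i]
    return out

# E.g. 96 channels of 11x11 activations tile into a 10x10 grid of
# 12x12 padded cells, i.e. a 120x120 image.
tiles = tile_activations(np.random.rand(96, 11, 11))
print(tiles.shape)
```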

Tim J Smith guest blogs for David Bordwell
https://archive.pkmital.com/2011/02/20/tim-j-smith-guest-blogs-for-david-bordwell/
Sun, 20 Feb 2011 03:43:36 +0000

Tim J Smith, expert in scene perception and film cognition and a member of The DIEM Project [1], recently starred as a guest blogger for David Bordwell, a leading film theorist with an impressive list of books and publications widely used in film cognition and film studies research [2]. In his article on David’s site, Tim expands on his research on film cognition, including continuity editing [3], attentional synchrony [4], and the project we worked on from 2008 to 2010 as part of The DIEM Project. Since Tim’s feature on David Bordwell’s blog, The DIEM Project has seen a surge of publicity, with our Vimeo video loads exceeding 200,000 in a single day and features on DVICE, Slashfilm, Gizmodo, Roger Ebert’s Facebook/Twitter, and the front page of imdb.com.

Not to mention, our tools and visualizations are finally reaching an audience with interests in film, photography, and cognition. If you haven’t yet seen some of our videos, please head over to our Vimeo page, where you can see a range of videos embedded with participants’ eye-tracking and many different visualizations of models of eye-movements using machine learning, or start by reading Tim’s post on … Continue reading...
