Latest Entries

Creative Community Spaces in India

Jaaga – Creative Common Ground
Bangalore
http://www.jaaga.in/

CEMA – Center for Experimental Media Arts at Srishti School of Art, Design and Technology
Bangalore
http://cema.srishti.ac.in/site/

Bar1 – non-profit exchange programme, by artists for artists, fostering the local, national, and international exchange of ideas and experiences through guest residencies in Bangalore
Bangalore
http://www.bar1.org

Sarai – a space for research, practice, and conversation about contemporary media and urban constellations.
New Delhi
http://www.sarai.net/

Khoj/International Artists’ Association – artist led, alternative forum for experimentation and international exchange
New Delhi
http://www.khojworkshop.org/

Periferry – a nomadic space for hybrid art practices. It is a laboratory for people engaged in cross-disciplinary practices. The project focuses on the creation of a network space for negotiating the challenge of contemporary cultural production. It is located on a ferry barge on the river Brahmaputra and is docked in Guwahati, Assam.
Narikolbari, Guwahati
http://www.periferry.in/

Point of View – non-profit organization that brings the points of view of women into community, social, cultural and public domains through media, art and culture.
Bombay
http://www.pointofview.org/

Majlis – a center for rights discourse and inter-disciplinary arts initiatives
Bombay
http://majlisbombay.org/

Camp – not an “artists collective” but a space in which ideas and energies … Continue reading...

Facial Appearance Modeling/Tracking


I’ve been working on a method for automatic head-pose tracking, and along the way have come to model facial appearance. I start by initializing a facial bounding box using the Viola-Jones detector, a well-known and robust boosted-cascade object detector. This lets me localize the face. Once I know where the 2D plane of the face is in an image, I can register an Active Shape Model like so:

After capturing multiple views of the possible appearance variations of my face, including slight rotations, I construct an appearance model.

The idea I am working with is to use the first principal components of variation of this appearance model to determine pose. Here I show the first two basis vectors and the images they reconstruct:

As you may notice, these two basis vectors very neatly encode rotation. By looking at an image’s coefficients along these basis vectors, you can also interpret pose.… Continue reading...
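The appearance model above rests on PCA over registered face images. Here is a minimal, generic sketch of extracting the first principal component with power iteration, with small Python lists standing in for flattened face images; this is an illustration of the technique, not the actual appearance-model code.

```python
import math

def mean_center(X):
    """Subtract the per-dimension mean from each sample (row) of X."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - mu[j] for j in range(d)] for row in X], mu

def first_principal_component(X, iters=200):
    """Power iteration on the d x d covariance of mean-centered X.
    Returns (unit eigenvector, eigenvalue) of the dominant component."""
    Xc, _ = mean_center(X)
    n, d = len(Xc), len(Xc[0])
    # covariance C = Xc^T Xc / n
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[a] * sum(C[a][b] * v[b] for b in range(d)) for a in range(d))
    return v, eigval
```

With real data, each row would be a vectorized, registered face image, and the per-image coefficients are obtained by projecting a mean-centered image onto the returned eigenvectors.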

Short-Time Fourier Transform using the Accelerate Framework

Using the libraries pkmFFT and pkm::Mat, you can very easily perform a highly optimized short-time Fourier transform (STFT) with direct access to a floating-point-based object.

Get the code on my github:
http://github.com/pkmital/pkmFFT
Depends also on: http://github.com/pkmital/pkmMatrix
Continue reading...
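Under the hood, an STFT is just a windowed, hopped transform over successive frames. A toy sketch of the loop pkmFFT accelerates, with a naive DFT standing in for Accelerate’s optimized FFT (the frame size, hop, and Hann window here are arbitrary choices for illustration):

```python
import cmath
import math

def hann(n):
    """Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def dft(frame):
    """Naive DFT; a stand-in for the optimized real FFT."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def stft(signal, frame_size=8, hop=4):
    """Slide a Hann-windowed frame along the signal; return magnitude
    spectra for the first frame_size//2 + 1 (real-input) bins per frame."""
    window = hann(frame_size)
    spectra = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(frame_size)]
        bins = dft(frame)[: frame_size // 2 + 1]
        spectra.append([abs(b) for b in bins])
    return spectra
```

A real implementation replaces `dft` with a radix-2 FFT and reuses buffers across frames, which is exactly the bookkeeping pkmFFT and pkm::Mat wrap up.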

Real FFT/IFFT with the Accelerate Framework

Apple’s Accelerate Framework can really speed up your code without much effort, and it will also run on an iPhone. Even still, I banged my head a few times trying to get a straightforward real FFT and IFFT working, even after consulting the Accelerate documentation (reference and source code), Stack Overflow (here and here), and an existing implementation (thanks to Chris Kiefer and Mick Grierson). The previously mentioned examples weren’t very clear: they did not handle the overlapping FFTs I needed for an STFT, they did not recover the power spectrum, or they just didn’t work for me (lots of blaring noise).

Get the code on my github:
http://github.com/pkmital/pkmFFT
Continue reading...
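For reference, this is the textbook real DFT/IDFT round trip that any Accelerate-based implementation has to reproduce (Accelerate’s packed real FFT additionally applies its own scaling, a factor of 2 on the forward transform if I recall the docs correctly, which is one source of the confusion above). This naive sketch is for checking results against, not for performance:

```python
import cmath

def dft(x):
    """Forward DFT of a real signal; returns complex bins."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT; returns the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def power_spectrum(X):
    """Squared magnitude of each complex bin."""
    return [abs(c) ** 2 for c in X]

import math  # used by dft/idft above
```

Comparing a known implementation bin-by-bin against this is how I’d track down scaling bugs (the “blaring noise” symptom usually means a missing inverse scale or mis-packed Nyquist bin).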

Augmented Sonic Reality

I recently gave two talks, one for the PhDs based in the Electronic Music Studios, and another for the PhDs in Arts and Computational Technology. I received some very valuable feedback, and having to incorporate what I’ve been working on in a somewhat presentable manner also had a lot of benefit. The talk abstract (which is very abstract) is posted below with a few references listed. Please feel free to comment and open a discussion, or post any references that may be of interest.

Abstract:
An augmented sonic reality aims to register digital sound content with an existing physical space. Perceptual mappings between an agent in such an environment and the augmented content should be both continuous and effective, meaning the intentions of an agent should be taken into consideration in any affective augmentations. How can an embedded intelligence such as an iPhone equipped with detailed sensor information such as microphone, accelerometer, gyroscope, and GPS readings infer the behaviors of its user in creating affective, realistic, and perceivable augmented sonic realities tied to their situated experiences? Further, what can this augmented domain reveal about our own ongoing sensory experience of our sonic environment?

Keywords: augmented, reality, sonic, enactive, perception, memory, Continue reading...

Tim J Smith guest blogs for David Bordwell

Tim J Smith, expert in scene perception and film cognition, and a member of The DIEM Project [1], recently starred as a guest blogger for David Bordwell, a leading film theorist with an impressive list of books and publications widely used in film cognition and film art research [2]. In his article featured on David’s site, Tim expands on his research on film cognition, including continuity editing [3], attentional synchrony [4], and the project we worked on in 2008-2010 as part of The DIEM Project. Since Tim’s feature on David Bordwell’s blog, The DIEM Project has seen a surge of publicity, with our Vimeo video loads climbing higher than 200,000 in a single day and features on Dvice, Slashfilm, Gizmodo, Roger Ebert’s Facebook/Twitter, and the front page of imdb.com.

Not to mention, our tools and visualizations are finally reaching an audience with interests in film, photography, and cognition. If you haven’t yet seen some of our videos, please head on over to our Vimeo page, where you can see a range of videos embedded with participants’ eye-tracking and many different visualizations of models of eye-movements using machine learning, or start by reading Tim’s post on Continue reading...

Responsive Ecologies Documentation

As part of a system of numerous dynamic connections and networks, we react to, and are determined by, a complex system of cause and effect. The consequence of our actions upon ourselves, the society we live in, and the broader natural world is conditioned by how we perceive our involvement. The awareness of how we have impacted a situation is often realised and processed subconsciously; the extent and scope of these actions can be far beyond our knowledge, our consideration, and, importantly, beyond our sensory reception. With this in mind, how can we associate our actions, many of which may be overlooked as customary, with, for instance, honey bee depopulation syndrome or the declining numbers of Siberian Tigers?

Responsive Ecologies is part of an ongoing collaboration with ZSL London Zoo and Musion Academy. Collectively we have been exploring innovative means of public engagement to generate an awareness and understanding of nature and the effects of climate change. All of the contained footage has come from filming sessions within the Zoological Society; this coincidentally has raised some interesting questions on the spectacle of captivity, an issue which we have tried to reflect upon in the construction and presentation of … Continue reading...

Streaming Motion Capture Data from the Kinect using OSC on Mac OSX

This guide will help get you running PrimeSense NITE’s skeleton tracking inside Xcode on Mac OS X. It will also help you stream that data in case you’d like to use it in another environment such as Max. An example Max patch is also available.

PrimeSense NITE Skeletonization and Motion Capture to Max/MSP via OSC from pkmital on Vimeo.

Prerequisites:

0.) A Microsoft Kinect or other PrimeSense device.

1.) Install Xcode and the Java Developer Package located here: https://connect.apple.com/cgi-bin/WebObjects/MemberSite.woa/wa/getSoftware?bundleID=20719 – if you require a Mac OS X developer account, just register at developer.apple.com since it is free.

2.) Install Macports: http://www.macports.org/

3.) Install libtool and libusb > 1.0.8:

$ sudo port install libusb-devel +universal

4.) Get the OpenNI Binaries for Mac OSX: http://www.openni.org/downloadfiles

5.) Install OpenNI by unzipping the file OpenNI-Bin-MacOSX (-v1.0.0.25 at the time of writing) and running,

$ sudo ./install.sh

6.) Get SensorKinect from avin2: https://github.com/avin2/SensorKinect/tree/unstable/Bin

7.) Install SensorKinect by unzipping and running

$ sudo ./install.sh

8.) Install OpenNI Compliant Middleware NITE from Primesense for Mac OSX: http://www.openni.org/downloadfiles

9.) Install NITE by unzipping and running

$ sudo ./install.sh

When prompted for a key, enter the key listed on the OpenNI website.

Getting it up and running:

1.) Download the … Continue reading...
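Once the skeleton tracker is running, each joint goes out as an OSC message of floats. A minimal sketch of hand-encoding such a message for a UDP socket (the `/joint/head` address and float layout here are hypothetical, not necessarily what the example Max patch expects):

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode('ascii') + b'\x00'
    pad = (-len(b)) % 4
    return b + b'\x00' * pad

def osc_message(address, floats):
    """Encode an OSC message whose arguments are all float32.
    Layout: padded address, padded type-tag string, big-endian floats."""
    tags = ',' + 'f' * len(floats)
    body = osc_string(address) + osc_string(tags)
    for f in floats:
        body += struct.pack('>f', f)  # OSC floats are big-endian
    return body
```

The resulting bytes can be sent to Max with a plain `socket.sendto` to the port a `[udpreceive]` object is listening on; in practice a library like python-osc or liblo does this encoding for you.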

Responsive Ecologies Exhibition

Come check out the Watermans Arts Centre from the 6th of December until the 21st of January for an immersive and interactive visual experience entitled “Responsive Ecologies”, developed in collaboration with artists captincaptin. We will also be giving a talk on the 10th of December from 7 p.m. – 9 p.m. during CINE: 3D Imaging in Art at the Watermans Arts Centre.

Responsive Ecologies is part of a wider ongoing collaboration between artists captincaptin, ZSL London Zoo, and Musion Academy. Collectively they have been exploring innovative means of public engagement to generate an awareness and understanding of nature and the effects of climate change. All of the contained footage has come from filming sessions within the Zoological Society; this coincidentally has raised some interesting questions on the spectacle of captivity, an issue which we have tried to reflect upon in the construction and presentation of this installation. The nature of interaction within Responsive Ecologies means that a visitor to the space cannot simply view the installation but must become a part of its environment. When attempting to perceive the content within the space, the visitor reshapes the installation. Everybody has a degree of impact, whether directed or incidental, and … Continue reading...

6DOF Head Tracking

The following demo works with SeeingMachines FaceAPI in openFrameworks controlling a Mario avatar. It also has some really poor gesture recognition (and learning, though it’s not shown here); a simple threshold on the rotation DOFs would have produced better results for the simple task of recognizing look up/down/left/right gestures.
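The threshold idea mentioned above can be sketched in a few lines: compare pitch and yaw against a deadzone and pick the dominant axis. The angle units and sign conventions here are my assumptions for illustration, not FaceAPI’s actual output format.

```python
def classify_gaze(pitch_deg, yaw_deg, threshold=15.0):
    """Map head rotation to a coarse gesture.
    Inside the deadzone -> 'center'; otherwise the dominant axis wins.
    Assumes positive pitch = up, positive yaw = left (a convention choice)."""
    if abs(pitch_deg) < threshold and abs(yaw_deg) < threshold:
        return 'center'
    if abs(pitch_deg) >= abs(yaw_deg):
        return 'up' if pitch_deg > 0 else 'down'
    return 'left' if yaw_deg > 0 else 'right'
```

In practice you would also hold the label for a few frames (hysteresis) so that noise around the threshold doesn’t flicker between gestures.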

6DOF Head Tracking from pkmital on Vimeo.

interfacing seeingmachines faceapi with openFrameworks to control a 3D mario avatar

This is just with the non-commercial license. The full commercial license (~$3000?) gives you access to lip/mouth tracking and eyebrows, as well as much more flexibility in how you use their API with different/multiple cameras and in accessing image data.

Of course, there are other initiatives aimed at producing similar results. Mutual-information-based template trackers, for instance, seem to be state-of-the-art. Take a look at recent work by Panin and Knoll using OpenTL:

 

I imagine a lot of people would like this technology.… Continue reading...

Keyframe based modeling

Playing with MSERs while trying to implement an algorithm for feature-based object tracking. The algorithm first finds MSERs, warps them to circles, describes them with a SIFT descriptor, and then indexes keyframes of SIFT vectors using vocabulary trees. Of course that’s a ridiculously simplified explanation, but look at what it’s capable of!:
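The last step, quantizing a descriptor through a vocabulary tree, can be sketched generically: descend the tree greedily toward the nearest child centroid until a leaf, whose id is the visual word. The tiny tree and 2-D “descriptors” below are toy stand-ins for a real hierarchical k-means tree over 128-D SIFT vectors:

```python
import math

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Node:
    def __init__(self, centroid, children=None, word_id=None):
        self.centroid = centroid
        self.children = children or []
        self.word_id = word_id  # set on leaves only

def quantize(node, descriptor):
    """Walk the tree, at each level following the nearest child centroid,
    until reaching a leaf; the leaf id is the visual word."""
    while node.children:
        node = min(node.children, key=lambda c: dist(c.centroid, descriptor))
    return node.word_id
```

Each keyframe is then summarized as a bag of these visual-word ids, so matching a new frame against thousands of keyframes reduces to cheap inverted-index lookups instead of raw descriptor comparisons.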

 Continue reading...

Microsoft Kinect

This is big. In less than a week, the Kinect has been hacked and ported to Windows, OS X, Linux, Java and Processing, Max/MSP (almost), and Flash…

Much much more to come: Continue reading...

“Memory” Video @ AVAF 2010

Please rate, share, and comment!

Memory @ AVAF 2010 from pkmital on Vimeo.

‘Memory’ is an augmented installation of a neural network by Parag K Mital & Agelos Papadakis.
hand blown glass, galvanized metal chain, projection, cameras; 1.5m x 2.5m x 3m

Ghostly images of faces appear as recorded movie clips within neuron-shaped hand-blown glass pieces. As one begins to look at the neurons, one notices the faces as one’s own, trapped as disparate memories of a neural network.

Filmed and installed for the Athens Video Art Festival in May 2010 in Technopolis, Athens, Greece. The venue is a disused gas factory converted into an art space.

Also seen at Kinetica Art Fair, Ambika P3, London, UK, 2010; Passing Through Exhibition, James Taylor Gallery, London, UK, 2009; Interact, Lauriston Castle, Edinburgh, UK, 2009.

Continue reading...

Facebook Graph API

If you are one of the 500+ million users of Facebook and you know your user id, try plugging it in here: http://zesty.ca/facebook/

This uses the Facebook Graph API to get information about Facebook users in a very accessible manner. Of course, it is only your “public” information that is accessible without authorization. But once you “allow” an application to access your information, you’re allowing access to EVERYTHING.

Generally, these items are publicly known:

{
   "id": "0123456789",
   "name": "Parag K Mital",
   "first_name": "Parag",
   "middle_name": "K",
   "last_name": "Mital",
   "locale": "en_US"
}

and also your Profile picture.

Check out a montage of the first 3600 Facebook users’ profile pictures, obtained just by using the public URL: http://graph.facebook.com/USER_ID/picture


And an image of the average of all 3600 profile images:
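The averaging itself is just a per-pixel mean over the set. A sketch with nested lists standing in for grayscale images (the real montage was of course computed from the downloaded JPEGs, not hand-built lists):

```python
def average_images(images):
    """images: list of equally sized rows x cols grayscale pixel grids.
    Returns the per-pixel mean image as floats."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    avg = [[0.0] * cols for _ in range(rows)]
    for img in images:
        for r in range(rows):
            for c in range(cols):
                avg[r][c] += img[r][c] / n
    return avg
```

Accumulating in floating point before any conversion back to 8-bit pixels avoids the clipping and rounding you would get by summing raw bytes.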

Continue reading...

X-Ray @ the Roxy Arthouse

Come to the Roxy Arthouse on June 10 for Neverzone and June 26 for Is This a Test? where I’ll be presenting my latest installation X-Ray, as well as to catch other brilliant Scotland-based artists.  More info on the flyers below.


Neverzone / Thursday June 10th 19:00-23:00


Is This a Test? / Saturday June 26th 19:00-23:00

[update] Plug for Is This a Test? on creativeboom.co.uk

[update 2] video now online:

X-RAY from pkmital on Vimeo.


Continue reading...

Dynamic Scene Perception Eye-Movement Data Videos and Analysis

Over the past 2 years, I have been working under the direction of Prof John M Henderson, together with Dr Tim J Smith and Dr Robin Hill, on the DIEM Project (Dynamic Images and Eye-Movements). Our project has focused on investigating active visual cognition by eye-tracking numerous participants watching a wide variety of short videos.

We are in the process of making all of our data freely available for research use. We have also worked on tools for analyzing eye-movements during such dynamic scenes.

CARPE, or more bombastically, Computational Algorithmic Representation and Processing of Eye-movements, allows one to visualize eye-movement data together with the video it was tracked on in a number of ways. It currently supports low-level feature visualizations, clustering of eye-movements, model selection, heat-map visualizations, blending, contour visualizations, peek-through visualizations, movie output, binocular data input, and more. The videos shown above on our Vimeo page were all created using this tool. Head over to Google Code to check out the source code or download the binary. We are still streamlining this process by creating manuals for new users and uploading more of the eye-tracking and video data so … Continue reading...
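As an example of one of these visualizations, a heat map can be built by splatting an isotropic Gaussian at each fixation point and summing. This is a generic toy sketch, not CARPE’s actual implementation, and the sigma is arbitrary:

```python
import math

def heat_map(fixations, width, height, sigma=2.0):
    """Accumulate an isotropic Gaussian at each (x, y) fixation point.
    Returns a height x width grid of attention density."""
    grid = [[0.0] * width for _ in range(height)]
    for (fx, fy) in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return grid
```

Normalizing the grid per frame and mapping it through a color ramp, then alpha-blending it over the video frame, gives the familiar heat-map overlay.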

Memory and ChaoDependant at the Athens Video Art Festival 2010

May 7-9 saw the 2010 Athens Video Art Festival, where, with collaborator Agelos Papadakis, Memory saw its latest installation. The venue, a 2,500 square meter disused gas factory called Technopolis, or more commonly referred to as Gazi (Gas), was a brilliant display of warehouse spaces littered with gas pipes and oil still dripping from the cracks.

Over 2,700 submissions were received, with 450 presenting artists and over 13,000 visitors during the weekend. Among the hundreds of video artworks, animations, and installations were a number of performances, including dance and music.


My collaborator Agelos Papadakis and I were both interviewed by ERT, loosely translated as Hellenic Radio and Television (something like the BBC). It is all in Greek, except for my interview.

Video link to the interview on ERT.

Feel free to check out pictures from the festival and my travels on my Flickr page.… Continue reading...

Neverzone, 10th June

Currently working on a video installation for an exciting gig on the 10th of June, dubbed Neverzone. Check out the insanity on this flyer and information on the just-as-insane artists at Black Lantern Music.

Continue reading...



Copyright © 2010 Parag K Mital. All rights reserved. Made with WordPress. RSS