Parag Kumar Mital

Parag K. Mital, Ph.D. (US) is an Indian American artist and interdisciplinary researcher who has worked at the intersection of computational arts and machine learning for nearly 20 years. He is currently CTO and Head of Research at HyperSurfaces, where he leads AI audio research, and Chief AI Scientist at Never Before Heard Sounds. His scientific background spans machine and deep learning, film cognition, eye-tracking studies, and EEG and fMRI research. His artistic practice combines generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora, tackling questions of identity, memory, and the nature of perception. The balance between his scientific and artistic practices allows each to reflect on the other: the science drives the theories, while the artwork redefines the questions asked within the research.

His work has been published and exhibited internationally, including at the Prix Ars Electronica, Walt Disney Concert Hall, ACM Multimedia, the Victoria & Albert Museum, London’s Science Museum, the Oberhausen Short Film Festival, and the British Film Institute, and has been featured in press including the BBC, The New York Times, Fast Company, and others. He has taught at UCLA, the University of Edinburgh, Goldsmiths, University of London, Dartmouth College, the Srishti Institute of Art, Design and Technology, and the California Institute of the Arts at both undergraduate and graduate levels, primarily in applied computational arts courses focusing on machine learning. He is also a frequent collaborator with artists and cultural institutions such as Massive Attack, Sigur Rós, David Lynch, Google, Es Devlin, and Refik Anadol Studio.

Feel free to contact me.


Scholar · GitHub · Twitter · Flickr · Vimeo

Current Work

Founder (2010-current) The Garden in the Machine, Inc., Los Angeles, CA, U.S.A.
Computational arts practice incorporating applied machine learning research and generative film, image, and sound work, in collaboration with artists and cultural institutions such as Massive Attack, Sigur Rós, David Lynch, Google, Es Devlin, Refik Anadol Studio, the LA Phil, Walt Disney Concert Hall, Boiler Room, XL Recordings, and others.

CTO/Head of Research (2017-current) HyperSurfaces, Mogees, Ltd., London, U.K.
HyperSurfaces is a low-cost, low-power, privacy-preserving technology for understanding vibrations on objects and materials of any shape or size, bringing life to passive objects around us and merging the physical and data worlds.

Chief AI Scientist (2023-current) Never Before Heard Sounds, New York, U.S.A.
AI-powered music studio in the browser. Leading the research and development of AI tools for musical content creation, composition, arrangement, and synthesis.

Adjunct Faculty (2020-current) UCLA, Los Angeles, CA, U.S.A.
Cultural Appropriation with Machine Learning is a special-topics course taught in 2020 and returning in Fall 2023, in which students of the Design and Media Arts program learn both a critical framing for approaching generative arts and how to integrate various tools into their own practice.

Previous Work Experience

Creative Technologist (2019-2020) Artists and Machine Intelligence, Google, Inc., Los Angeles, CA, U.S.A.
Worked with international artists, including Martine Syms, Anna Ridler, Allison Parrish, and Paola Torres Núñez del Prado, to help integrate machine learning techniques into their artistic practice.

Director of Machine Intelligence / Senior Research Scientist (2016-2018) Kadenze, Inc., Valencia, CA, U.S.A.
Built a bespoke ML/DL pipeline using Python, TensorFlow, Ruby on Rails, and the ELK stack for recommendation, personalization, and search across Kadenze’s media sources. Set up the backend using CloudFormation, ECS, and the ELK stack, and built frontend visualizations of trained models on Kadenze data using Kibana, Bokeh, D3.js, and Python. Developed bespoke ML/DL solutions enabling analysis and auto-grading of images, sound, and code. Built, taught, and continue to deliver Kadenze’s most successful course on deep learning, Creative Applications of Deep Learning with TensorFlow, and established collaborations with Google Brain and Nvidia (including use of their HPC cluster) to partner on the course.

Artist Fellow (2016-2018) California Institute of the Arts, Valencia, CA, U.S.A.
Mentorship and development of arts practice with BFA and MFA students in the Music Technology and Intelligent Interaction Design program. Taught “Audiovisual Signal Processing” and “Augmented Sound”.

Freelance Consultant (2017) Google and Boiler Room, London, U.K.
Developed a bespoke neural audio synthesis algorithm for the public launch of the Google Pixel 2 phone, including the backend for a website which served a pre-trained neural audio model for stylization, synthesis, and live processing of voices. Built with Node.js, Redis, TensorFlow, and Google Cloud.

Freelance Consultant (2017) Google Arts and Culture/Google Cultural Institute, Paris, FR
Ported a model to TensorFlow to reproduce the results of an Inception-based deep convolutional neural network written in another framework.

Senior Research Scientist (2015) Firef.ly Experience Ltd., London, U.K.
Machine learning and signal processing of user behavior and activity patterns from GPS and smartphone motion data.
MongoDB cluster computing; Mapbox; Python; Objective-C; Swift; machine learning; mobile signal processing.

Visiting Researcher (2015) at the Mixed Reality Lab Studios at the University of Southern California, Los Angeles, CA
Augmented reality; Unity 5; Procedural audiovisual synthesis.

Post-Doctoral Research Associate (2014-2015) at the Bregman Media Labs at Dartmouth College, Hanover, NH
Exploring feature learning in audiovisual data, fMRI coding of musical and audiovisual stimuli in experienced and imagined settings, and sound and image synthesis techniques. Designed experiments for fMRI and behavioral data, collected data using 3T fMRI and PsychoPy, wrote custom preprocessing pipelines with AFNI/SUMA/FreeSurfer on the Dartmouth Discovery supercomputing cluster, and developed univariate and multivariate analysis methods, including hyperalignment, using PyMVPA. Principal Investigator: Michael Casey

Research Assistant (2011) London Knowledge Lab, Institute of Education, London, U.K.
ECHOES is a technology-enhanced learning environment where 5-to-7-year-old children on the Autism Spectrum and their typically developing peers can explore and improve social and communicative skills through interacting and collaborating with virtual characters (agents) and digital objects. ECHOES provides developmentally appropriate goals and methods of intervention that are meaningful to the individual child, and prioritises communicative skills such as joint attention. Wrote custom computer vision code for calibrating behavioral measures of attention within a large format touchscreen television. Funded by the EPSRC. Principal Investigators: Oliver Lemon and Kaska Porayska-Pomsta

Research Assistant (2008-2010) John M. Henderson’s Visual Cognition Lab, University of Edinburgh
Investigating dynamic scene perception through computational models of eye-movements, low-level static and temporal visual features, film composition, and object and scene semantics. Wrote custom code for processing a large corpus of audiovisual data, correlated the data with behavioral measures from a large collection of human-subject eye-movements, and applied pattern recognition and signal processing techniques to infer the contribution of auditory and visual features and their interaction across different tasks and film editing styles. The DIEM Project. Funded by the Leverhulme Trust and ESRC. Principal Investigator: John M. Henderson

Education

Ph.D. (2014) Arts and Computational Technologies, Goldsmiths, University of London.
Thesis: Computational Audiovisual Scene Synthesis
This thesis attempts to open a dialogue around fundamental questions of perception such as: how do we represent our ongoing auditory or visual perception of the world using our brain; what could these representations explain and not explain; and how can these representations eventually be modeled by computers?

M.Sc. (2008) Artificial Intelligence: Intelligent Robotics, University of Edinburgh
B.Sc. (2007) Computer and Information Sciences, University of Delaware

Publications

Parag K Mital. Time Domain Neural Audio Style Transfer. Neural Information Processing Systems Conference 2017 (NIPS 2017), https://arxiv.org/abs/1711.11160, December 3–9, 2017.

[github] [arxiv]

Christian Frisson, Nicolas Riche, Antoine Coutrot, Charles-Alexandre Delestage, Stéphane Dupont, Onur Ferhat, Nathalie Guyader, Sidi Ahmed Mahmoudi, Matei Mancas, Parag K Mital, Alicia Prieto Echániz, François Rocca, Alexis Rochette, Willy Yvart. Auracle: how are salient cues situated in audiovisual content? eNTERFACE 2014, Bilbao, Spain, June 9 – July 4, 2014.

[online] [pdf]

Parag K. Mital, Jessica Thompson, Michael Casey. How Humans Hear and Imagine Musical Scales: Decoding Absolute and Relative Pitch with fMRI. CCN 2014, Dartmouth College, Hanover, NH, USA, August 25-26, 2014.

Parag Kumar Mital. Audiovisual Resynthesis in an Augmented Reality. In Proceedings of the ACM International Conference on Multimedia (MM ’14). ACM, New York, NY, USA, 695-698. 2014. DOI=10.1145/2647868.2655617 http://doi.acm.org/10.1145/2647868.2655617.

[website] [online] [pdf]

Tim J. Smith, Sam Wass, Tessa Dekker, Parag K. Mital, Irati Rodriguez, Annette Karmiloff-Smith. Optimising signal-to-noise ratios in Tots TV can create adult-like viewing behaviour in infants. 2014 International Conference on Infant Studies, Berlin, Germany, July 3-5 2014.

Parag K. Mital, Mick Grierson, and Tim J. Smith. 2013. Corpus-Based Visual Synthesis: An Approach for Artistic Stylization. In Proceedings of the 2013 ACM Symposium on Applied Perception (SAP ’13). ACM, New York, NY, USA, 51-58. DOI=10.1145/2492494.2492505
[website] [online] [pdf] [presentation]

Parag K. Mital, Mick Grierson. Mining Unlabeled Electronic Music Databases through 3D Interactive Visualization of Latent Component Relationships. In Proceedings of the 2013 New Interfaces for Musical Expression Conference, p. 77. South Korea, May 27-30, 2013.
[website] [pdf]

Parag K. Mital, Tim J. Smith, Steven Luke, John M. Henderson. Do low-level visual features have a causal influence on gaze during dynamic scene viewing? Journal of Vision, vol. 13 no. 9 article 144, July 24, 2013.
[online] [poster]

Tim J. Smith, Parag K. Mital. Attentional synchrony and the influence of viewing task on gaze behaviour in static and dynamic scenes. Journal of Vision, vol. 13 no. 8 article 16, July 17, 2013.
[online]

Tim J. Smith, Parag K. Mital. “Watching the world go by: Attentional prioritization of social motion during dynamic scene viewing”. Journal of Vision, vol. 11 no. 11 article 478, September 23, 2011.
[online]

Melissa L. Vo, Tim J. Smith, Parag K. Mital, John M. Henderson. “Do the Eyes Really Have it? Dynamic Allocation of Attention when Viewing Moving Faces”. Journal of Vision, vol. 12 no. 13 article 3, December 3, 2012.
[online]

Parag K. Mital, Tim J. Smith, Robin Hill, John M. Henderson. “Clustering of Gaze during Dynamic Scene Viewing is Predicted by Motion”. Cognitive Computation, Volume 3, Issue 1, pp. 5-24, March 2011.
[online] [pdf] [videos]

Projects

C.A.R.P.E. | Jan 2015

C.A.R.P.E. is the computational and algorithmic representation and processing of eye-movements. The original C.A.R.P.E. was built in 2008 with The DIEM Project. This software is a rewrite from the ground up, capable of visualizing a large range of eye-movements together with clustering, heatmaps across multiple conditions and subject pools, and video and audio analysis. More information is posted here.

YouTube Smash Up | Aug 2012

YouTube Smash Up attempts to generatively produce viral content using video material from the Top 10 most viewed videos on YouTube. Each week, the #1 video is resynthesized using a computational algorithm that matches its sonic and visual contents to material coming only from the remaining Top 10 videos. This other material is then re-assembled to look and sound like the #1 video. The process does not copy the file, but synthesizes it as a collage of fragments segmented from entirely different material.
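
To give a flavour of the technique, here is a minimal, audio-only, offline sketch of the matching step, assuming librosa and numpy are available. The MFCC features, hop size, and file names are illustrative assumptions; the actual Smash Up pipeline also matches visual content and uses its own segmentation.

```python
# A minimal, audio-only sketch of corpus-based resynthesis ("mosaicing").
# Assumptions: librosa/numpy available; MFCC features and a fixed hop size
# stand in for the project's own segmentation and audiovisual matching.
import numpy as np
import librosa

HOP = 1024  # analysis hop size in samples (illustrative)

def frame_features(y, sr, hop=HOP):
    """Per-frame MFCCs, shape (n_frames, n_mfcc)."""
    return librosa.feature.mfcc(y=y, sr=sr, hop_length=hop, n_mfcc=13).T

def mosaic(target, corpus, sr, hop=HOP):
    """Rebuild `target` from the closest-sounding frames of `corpus`."""
    t_feats = frame_features(target, sr, hop)
    c_feats = frame_features(corpus, sr, hop)
    out = np.zeros_like(target)
    for i, feat in enumerate(t_feats):
        # nearest corpus frame in feature space (Euclidean distance)
        j = int(np.argmin(np.linalg.norm(c_feats - feat, axis=1)))
        start, src = i * hop, j * hop
        chunk = corpus[src:src + hop]
        n = min(len(out) - start, len(chunk))
        out[start:start + n] = chunk[:n]
    return out

if __name__ == "__main__":
    # hypothetical filenames
    target, sr = librosa.load("number_one.wav", sr=None, mono=True)
    corpus, _ = librosa.load("top10_material.wav", sr=sr, mono=True)
    result = mosaic(target, corpus, sr)
```

Each target frame is simply replaced by the corpus frame closest to it in feature space; the real system works over audiovisual fragments rather than fixed-size audio frames.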

PhotoSynthesizer | Aug 2012

This (free) iOS app takes existing images in your photo library and mashes them up to resynthesize an image using a computational model based on human perception.

[View in App Store]

The Simpsons vs. Family Guy | Sep 2011

I have developed a method for resynthesizing existing videos using material from any other video(s). The process starts by learning a database of objects that appear in the set of videos to synthesize from. The target video to resynthesize is then broken into objects in a similar manner, and each target object is matched to the closest object in the database.

Harry Smith vs. Pink Elephants | Dec 2011

A perceptual model based on proto-objects is presented as a visual reconstruction of Harry Smith’s Early Abstractions. I train the model on a scene from Dumbo, “Pink Elephants”, asking it to interpret Harry Smith while having only knowledge of Dumbo. The reconstruction is surprisingly able to capture a wide variety of the abstract images and movements in Harry Smith, as well as after-images. This model is an early prototype of my PhD work on visual resynthesis.

Infected Puppets | Nov 2011

Part of the SURFACES exhibition at BAR1 in Bangalore, India, this piece organizes the thoughts and speech of numerous Indian politicians using a microphone input. Participants are invited to use a microphone in front of a 3-channel audiovisual installation where the patterns of sound coming into the microphone are matched to a large database of different speeches by Indian politicians. The resulting cut-up fragmented narration by the different politicians feels like an infected synthesis of promises, lies, and puppetry. Made in collaboration with Prayas Abhinav and 9 students from the CEMA course at the Srishti School of Art and Design, Bangalore, India.

Future Echoes | Nov 2011

Part of the SURFACES exhibition at BAR1 in Bangalore, India, this piece invites participants to enter an ambisonic audio environment where they each become a character in a post-apocalyptic cyber-punk tale of a one-dimensional fate. As they enter the 3x3x3m space, audio cues spatialized from their perspective are triggered based on a randomly chosen character. Cut-up fragments of the character’s voice are synthesized where the participant stands. As they move within the ambisonic space, their character’s narrative unfolds further, revealing more of their story. Other participants can enter the space and hear each participant’s tale synchronized to that participant’s location through the use of ambisonics. Made in collaboration with Prayas Abhinav and 9 students from the CEMA course at the Srishti School of Art and Design, Bangalore, India.

Michael Jackson vs. Chris Watson | Oct 2011

An auditory reconstruction of Michael Jackson’s “Beat It” using Memory Mosaicing. Every sound being played comes from a sample of Chris Watson’s nature recordings.

Real-time Auditory Memory Mosaicing | Jun 2011

Memory Mosaicing is a new type of Augmented Sonic Reality that resynthesizes your sonic world using recorded segments of sound from the microphone. You can also add a song from your iTunes Library to the app’s memory, creating a mashup of sounds in your sonic environment based on your favorite music, techno-fying or hiphop-i-fying your world.

[View in App Store]

Oramics | Jun 2011

This project focused on an iPhone and desktop emulator (for the Science Museum in London) which brings the sound of Daphne Oram’s “Oramics Machine” to life through the Oramics drawn-sound technique. The interactive desktop app goes live in the Science Museum of London on July 29th, with the iPhone app to be released soon after.

[http://daphneoram.org]

[View in App Store]

ECHOES | Jun 2011

ECHOES is a technology-enhanced learning environment where 5-to-7-year-old children on the Autism Spectrum and their typically developing peers can explore and improve social and communicative skills through interacting and collaborating with virtual characters (agents) and digital objects. ECHOES provides developmentally appropriate goals and methods of intervention that are meaningful to the individual child, and prioritises communicative skills such as joint attention.

[http://echoes2.org]

Real-time Source Separation | Mar 2011

Separating foreground from background can also elicit a model of auditory saliency, or a model of what is likely to be important in the auditory stream of information. First, a chunk of audio is learned as the background in real time. Next, the audio spectrum is factorized through a number of maximum-likelihood (Expectation-Maximization) iterations into three variables representing basis components of the 2D spectrum, their weights, and the impulses marking where they occur. The foreground is a reprojection of this data onto additional components. This project works in real time on an iPhone 4.
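
The sketch below illustrates the underlying idea offline and in Python, substituting multiplicative-update NMF for the EM formulation above: background bases are learned from a background-only chunk, the mixture spectrogram is then factorized with those bases held fixed plus a few free components, and the foreground is the reprojection of the free components. File names, component counts, and iteration counts are illustrative assumptions.

```python
# A rough, offline sketch of background/foreground separation by non-negative
# spectrogram factorization, in the spirit of the EM/PLCA approach described
# above (here: multiplicative-update NMF). Filenames, component counts, and
# iteration counts are illustrative assumptions.
import numpy as np
import librosa

EPS = 1e-8

def nmf(V, W=None, n_components=8, n_iter=200, n_fixed=0):
    """Factorize V ~= W @ H; the first `n_fixed` columns of W stay fixed."""
    n_bins, n_frames = V.shape
    if W is None:
        W = np.random.rand(n_bins, n_components) + EPS
    H = np.random.rand(W.shape[1], n_frames) + EPS
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + EPS)
        W_new = W * ((V @ H.T) / (W @ H @ H.T + EPS))
        W_new[:, :n_fixed] = W[:, :n_fixed]  # keep the learned background bases
        W = W_new
    return W, H

# 1. learn background bases from a chunk of background-only audio
bg, sr = librosa.load("background.wav", sr=None)           # hypothetical file
W_bg, _ = nmf(np.abs(librosa.stft(bg)), n_components=8)

# 2. factorize the mixture with the background bases fixed, plus a few free
#    components that can absorb whatever the background cannot explain
mix, _ = librosa.load("mixture.wav", sr=sr)                # hypothetical file
S = librosa.stft(mix)
V = np.abs(S)
W0 = np.concatenate([W_bg, np.random.rand(V.shape[0], 4) + EPS], axis=1)
W, H = nmf(V, W=W0, n_fixed=W_bg.shape[1])

# 3. foreground = reprojection of the free components, resynthesized with the
#    mixture phase via a soft mask
V_fg = W[:, W_bg.shape[1]:] @ H[W_bg.shape[1]:]
mask = V_fg / (W @ H + EPS)
foreground = librosa.istft(mask * S, length=len(mix))
```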

Real-time Binauralization | Feb 2011

This work extends the IRCAM Listen database for real-time cluster-based binauralization on the iPhone, allowing up to 30 sound sources to be spatialized in 3D in real time on an iPhone 4, using GPS, compass, and altitude information.
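
A minimal, offline sketch of the core operation is below, assuming the HRIR impulse responses and their measured directions have already been loaded (for example from a database such as Listen); the clustering and real-time sensor input are omitted. Each source is convolved with the left/right pair of its nearest measured direction and summed into a stereo mix.

```python
# A minimal, offline sketch of HRIR-based binauralization. Assumes the HRIR
# arrays and their measured (azimuth, elevation) directions are already loaded;
# clustering and real-time sensor input are omitted.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(sources, directions, hrir_dirs, hrir_l, hrir_r):
    """sources: list of mono signals; directions: (az, el) in degrees per source;
    hrir_dirs: (N, 2) measured directions; hrir_l/hrir_r: (N, taps) HRIRs."""
    taps = hrir_l.shape[1]
    n_out = max(len(s) for s in sources) + taps - 1
    out = np.zeros((2, n_out))
    for s, d in zip(sources, directions):
        # pick the nearest measured HRIR direction (crude angular distance)
        i = int(np.argmin(np.linalg.norm(hrir_dirs - np.asarray(d, float), axis=1)))
        n = len(s) + taps - 1
        out[0, :n] += fftconvolve(s, hrir_l[i])
        out[1, :n] += fftconvolve(s, hrir_r[i])
    return out  # stereo signal, shape (2, n_out)
```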

Bronze Format | Feb 2011

Bronze is a new non-interactive music format in which recorded material is transformed in real time, generating a unique and constantly evolving interpretation of a song on each listen.

[http://bronzeformat.com/]

[View in App Store]

Responsive Ecologies | Dec 2010 – Jan 2011

Exhibited at the Watermans between 6 December 2010 and 21 January 2011. The installation took the form of a 360-degree multi-screen projection, or CAVE (Cave Automatic Virtual Environment). The presence of people within the space was tracked and used to deconstruct and interlace the video in response to their movement. The video documentation below was taken from the installation (throughout this video the camera pans around the space in order to record all sides of the CAVE).
Read more.

http://captincaptin.co.uk

Sonic Graffiti | Nov 2010

Sounds are placed in the city as graffiti using the iPhone 4’s GPS and microphone. The result is a sonification of the graffiti around you as a spatialized orchestra in 3D sound.

Sound-seeer | Nov 2010

In collaboration with R. Beau Lotto of Lottolab Studios and Mick Grierson of Goldsmiths, University of London’s Department of Computing, this project sought to allow children to design visual search experiments investigating the relationship of sound and vision. During the November Science Museum of London LATES exhibition and the i,Scientist 2011 program, participants were blindfolded and navigated a maze using the sound from this iPod app, which converted the camera image into spatialized sounds.

[LottoLabs]

The Trial | Jun 2010

A collaboration between Christos Michalakos, Lin Zhang, and myself, ‘The Trial’ was presented as a live laptop set for the Dialogues Festival in Edinburgh’s Voodoo Rooms, as a support act for Rune Grammofon artist Humcrush.

http://www.dialogues-festival.org

http://www.runegrammofon.com/

Calibration | Apr 2010

A collaboration between Christos Michalakos and myself, ‘Calibration’ was the continuation of an audiovisual synaesthetic duo exploring raw symmetry with digitally-controlled analog aesthetics between sound and visuals.

http://christosmichalakos.com

Memory | 2009-2010

‘Memory’ is an augmented installation of a neural network employing hand-blown glass, galvanized metal chain, projection, and cameras; 1.5m x 2.5m x 3m. As one begins to look at the neurons, they notice the faces as their own, trapped as disparate memories of a neural network. Filmed and installed for the Athens Video Art Festival in May 2010 in Technopolis, Athens, Greece, a disused gas factory converted into an art space. Also seen at Kinetica Art Fair, Ambika P3, London, UK, 2010; Passing Through Exhibition, James Taylor Gallery, London, UK, 2009; Interact, Lauriston Castle, Edinburgh, UK, 2009.

http://agelospapadakis.com

Colony | Summer 2010

COLONY is a multi-faceted, networked, and interdisciplinary platform for exploring creative ideas. In this incarnation, a microscope video feed is processed for numerous tracking parameters which influence the resulting re-projected visuals. Live audio is also processed based on these same parameters. Additional performers can “plug in” by receiving the tracking information, or simply by viewing the other performers or visuals. This unadulterated (and very rough) clip was initiated at the Edinburgh HackLab on 17 Sept 2010 with Shiori Usui on live instruments, Sarah Roberts on microscope, and Parag K Mital on audio/visual processing (other clips feature Owen Green also on audio processing).

http://edinburghhacklab.com/

X-RAY | Jun 2010

X-RAY invites participants to interact with a seemingly broken television installed as part of a 1970’s living room. Television signals are affected by the surrounding audio textures in the room as well as a novel measure of attention invoked while a user views the television.

First installed as part of Neverzone on 10 June 2010 (pictures).

http://archive.pkmital.com/2010/06/07/x-ray-the-roxy-arthouse/

Polychora | Feb 2010

As an exploration of synaesthesia, the visuals are created as an audio-reactive algorithm based on brightness, panning, texture, noisiness, pitch, and their combinations. By combining the amorphous space of possible impulses and the range of sound textures, the polychoron takes a visual shape altered by the different dimensions of texture. Presented at the Soundings Festival on February 6th and 7th, 2010 (curated by Andrew Connor).

http://www.music.ed.ac.uk/soundings/

Attention | Spring 2008

Participants in a quadrophonic, multiscreen immersive installation are eye-tracked while watching a continuous 2×2 film narrative. The installation creates an entirely algorithmically edited film based on the participant’s eye-movements, producing real-time edits of sound and video: atmospheric sounds, off-screen voice-overs, and cuts between close-ups, mid-shots, and wide shots, all depending on the viewer’s original attention to the film.
Read more.

Ask Me About Iran | Sep 2009

Set in the city center of Edinburgh during the end of the largest arts festival in the world, a small group of individuals were curious to document the current public opinion about Iran. Spontaneously finding a piece of cardboard, a marker, and a stick, we drew a sign which said, “Ask me about Iran” and waited for anyone willing to start a conversation.

[Video]

Geodesic Dome Projection Mapping | May 2010

Custom built software for interactive projection mapping of a geodesic dome. Dome design by Tom Clowney for the artist Cardboard.

Photos

Dynamic Images and Eye-Movements | 2008-2010

The DIEM Project (Prof. John Henderson, Dr. Robin Hill, Dr. Tim Smith, Parag Mital) developed new visualisation tools for eye movements in dynamic images, as well as new data analysis tools and techniques based on dynamic regions of interest (DROIs) for use in film and video. We applied these methods to investigate how people see and understand the visual world as depicted in film and video, working towards a stronger theory of active visual cognition.

http://thediemproject.wordpress.com

Attention: The Experimental Film | 2008

A collaboration between Stefanie Tan, Dave Stewart, and myself, “Attention” explored how eye-tracking could be used to algorithmically edit new films using sound and video databases (supervised by Tim J. Smith). A POV 2×2 film shot in the style of Mike Figgis’s Timecode was created alongside additional wide, mid, and close-shot videos. Sound bites and narratives were also collected. A final installation of dual projection and quadrophonic audio was algorithmically edited in real time based on viewers’ eye-tracking information.

http://theexperimentalfilm.com

Interactive Light Field Renderer | 2006

As part of a post-doctoral seminar I attended at the University of Delaware, I implemented bespoke software for a Light Field Renderer with support for aperture size, synthetic focal length, and translational motion of the virtual viewing camera. This project was under the direction of Dr. Jingyi Yu.

Gradient Domain Context Enhancement Using Poisson Integration | 2006

As part of a post-doctoral seminar I attended at the University of Delaware, I built bespoke software to correct a highly saturated daytime video using video material from the night, or vice versa, to correct an overly dark nighttime video using material from the daytime. This project was under the direction of Dr. Jingyi Yu.
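
A very rough, single-frame, grayscale sketch of the gradient-domain idea follows, with hypothetical frame variables and a simple Jacobi iteration standing in for a proper Poisson solver: keep the gradients of the well-exposed frame and solve the Poisson equation so the result follows those gradients while staying anchored to the dark frame.

```python
# A rough single-frame, grayscale sketch of gradient-domain enhancement:
# take the gradient field of a well-exposed (day) frame, then solve the
# discrete Poisson equation so the result follows those gradients while its
# boundary stays anchored to the dark (night) frame. Frame variables are
# hypothetical; the iteration count is illustrative.
import numpy as np

def poisson_reconstruct(gx, gy, boundary, n_iter=2000):
    """Solve laplacian(u) = div(gx, gy) with boundary values from `boundary`."""
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]   # d(gx)/dx (backward difference)
    div[1:, :] += gy[1:, :] - gy[:-1, :]   # d(gy)/dy (backward difference)
    u = boundary.astype(float)
    for _ in range(n_iter):
        # Jacobi update of the interior; the boundary stays fixed
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - div[1:-1, 1:-1])
    return u

def enhance(night_frame, day_frame, n_iter=2000):
    """Keep the night frame's overall exposure but borrow the day frame's detail."""
    day = day_frame.astype(float)
    gx = np.zeros_like(day)
    gy = np.zeros_like(day)
    gx[:, :-1] = day[:, 1:] - day[:, :-1]   # forward differences
    gy[:-1, :] = day[1:, :] - day[:-1, :]
    return poisson_reconstruct(gx, gy, night_frame, n_iter)
```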

Teaching

Cultural Appropriation with Machine Learning

UCLA DMA 2020

This course guides students through state-of-the-art methods for generative content creation in machine learning (ML), with a special focus on developing a critical understanding surrounding its usage in creative practices. We begin by framing our understanding through the critical lens of cultural appropriation. We then extend our understanding into topics such as deep-fakes and bias. Next, we look at how machine learning methods have enabled artists to create digital media of increasingly uncanny realism aided by larger and larger magnitudes of cultural data, leading to new aesthetic practices but also new concerns and difficult questions of authorship, ownership, and ethical usage. Finally, we speculate on the future of computational practices, as machine learning becomes an increasingly predominant tool for creatives, and ask of its trajectory, “Where is it all going?”

Center for Experimental Media Art: Interim Semester

Lecturer, Center for Experimental Media Arts @ Srishti School of Art, Design, and Technology. Bangalore, India – Fall 2011

Taught during the interim semester, the course, entitled “Stories are Flowing Trees”, introduced a group of 9 students to the creative coding platform openFrameworks through practical sessions, critical discourse, and the development of 3 installation artworks that were exhibited in central Bangalore. During the first week, students were taught basic creative coding routines including blob tracking, projection mapping, and building interaction with generative sonic systems. Following the first week, students worked together to develop, fabricate, install, publicize, and exhibit 3 pieces of artwork in central Bangalore at the BAR1 artist-residency space in an exhibition entitled “SURFACE, textures in interactive new media”.

Digital Media Studio Project – Various

Supervisor, School of Arts, Culture, and Environment, University of Edinburgh
(2010) Supervised 3 MSc students on Augmented Sculpture
(2009) Supervised 6 MSc students on Incorporating Computer Vision in Interactive Installation

Various

Engineering and Sciences Research Mentor. Seminar. McNair Scholars, University of Delaware, 2007
Instructor. Web Design. McNair Scholars, University of Delaware, 2007
Teaching Assistant. Introduction to Computer Science. University of Delaware, 2006

Talks/Posters

Parag K. Mital, “Computational Audiovisual Synthesis and Smashups”. International Festival of Digital Art, Waterman’s Art Centre, 25 August 2012.
Parag K. Mital and Tim J. Smith, “Investigating Auditory Influences on Eye-movements during Figgis’s Timecode”. 2012 Society for the Cognitive Studies of the Moving Image (SCSMI), New York, NY. 13-16 June 2012.
Parag K. Mital and Tim J. Smith, “Computational Auditory Scene Analysis of Dynamic Audiovisual Scenes”. Invited Talk, Birkbeck University of London, Department of Film. London, UK. 25 January 2012.
Parag K. Mital, “Resynthesizing Perception”. Invited Talk, Queen Mary University of London, London, UK. 11 January 2012.
Parag K. Mital, “Resynthesizing Perception”. Invited Talk, Dartmouth, Department of Music. Hanover, NH, USA. 7 January 2012.
Parag K. Mital, “Resynthesizing Perception”. 2011 Bitfilm Festival, Goethe Institut, Bengaluru (Bangalore), India. 3 December 2011.
Parag K. Mital, “Resynthesizing Perception”. Thursday Club, Goldsmiths, University of London. 13 October 2011.
Parag K. Mital, “Resynthesizing audiovisual perception with augmented reality”. Invited Talk for Newcastle CULTURE Lab, Lunch Bites. 30 June 2011 [slides][online]
Hill, R.L., Henderson, J. M., Mital, P. K. & Smith, T. J. (2010) “Dynamic Images and Eye Movements”. Poster at ASCUS Art Science Collaborative, Edinburgh College of Art, 29 March 2010.
Robin Hill, John M. Henderson, Parag K. Mital, Tim J. Smith. “Through the eyes of the viewer: Capturing viewer experience of dynamic media.” Invited Poster for SICSA DEMOFest. Edinburgh, U.K. 24 November 2009
Parag K Mital, Tim J. Smith, Robin Hill, and John M. Henderson. “Dynamic Images and Eye-Movements.” Invited Talk for Centre for Film, Performance and Media Arts, Close-Up 2. Edinburgh, U.K. 2009
Parag K. Mital, Stephan Bohacek, Maria Palacas. “Realistic Mobility Models for Urban Evacuations.” 2007 National Ronald E. McNair Conference. 2007
Parag K. Mital, Stephan Bohacek, Maria Palacas. “Developing Realistic Models for Urban Evacuations.” 2006 National Ronald E. McNair Conference. 2006

Exhibitions

(2018) Peripheral Visions Film Festival, NYC, USA
(2016) Espacio Byte, Argentina
(2015) Re-Culture 4, International Visual Arts Festival, Patras, Greece
(2015) Cologne Short Film Festival, New Aesthetic, Köln (Cologne), Germany
(2015) Blackout Basel, Basel, Switzerland
(2015) Prix Ars Electronica, Linz, Austria
(2015) Oberhausen Short Film Festival, Oberhausen, Germany
(2013) Media Art Histories/ART+COMMUNICATION 2013 (SAVE AS), RIXC, Riga, Latvia
(2013) Breaking Convention, University of Greenwich, London, U.K.
(2012) Digital Design Weekend, Victoria and Albert Museum, London, U.K.
(2012) SHO-ZYG, Goldsmiths, University of London, U.K.
(2011) SURFACES, Bengaluru Artist Residency 1 (BAR1), Bengaluru (Bangalore), India (Co-Curator and Artist)
(2011) Bitfilm Festival, Goethe Institut, Bengaluru (Bangalore), India
(2011) Oramics to Electronica, Science Museum. London, U.K.
(2011) Edinburgh International Film Festival. Edinburgh, U.K.
(2011) Kinetica Art Fair 2011, Ambika P3. London, U.K.
(2010-2011) Solo Exhibition, Waterman’s Art Centre, London, UK.
(2010) onedotzero Adventures in Motion Festival, British Film Institute (BFI) Southbank, London, UK.
(2010) LATES, Science Museum, London, UK.
(2010) Athens Video Art Festival, Technopolis. Athens, Greece
(2010) Is this a test?, Roxy Arthouse, Edinburgh, UK.
(2010) Neverzone, Roxy Arthouse, Edinburgh, UK.
(2010) Dialogues Festival, Voodoo Rooms, Edinburgh, U.K.
(2010) Kinetica Art Fair 2010, Ambika P3. London, U.K.
(2010) Soundings Festival, Reid Concert Hall, Edinburgh, U.K.
(2010) Media Art: A 3-Dimensional Perspective, Online Exhibition (Add-Art)
(2009) Passing Through, James Taylor Gallery. London, U.K.
(2009) Interact, Lauriston Castle Glasshouse. Edinburgh, U.K.
(2008) Leith Short Film Festival, Edinburgh, U.K. June
(2008) Solo exhibition, Teviot, Edinburgh, U.K. April

Research/Technical Reports

Parag K. Mital, Tim J. Smith, John M. Henderson. A Framework for Interactive Labeling of Regions of Interest in Dynamic Scenes. MSc Dissertation. Aug 2008
Parag K. Mital. Interactive Video Segmentation for Dynamic Eye-Tracking Analysis. 2008
Parag K. Mital. Augmented Reality and Interactive Environments. 2007
Stephan Bohacek, Parag K. Mital. Mobility Models for Urban Evacuations. 2007
Parag K. Mital, Jingyi Yu. Light Field Interpolation via Max-Contrast Graph Cuts. 2006
Parag K. Mital, Jingyi Yu. Gradient Based Domain Video Enhancement of Night Time Video. 2006
Parag K. Mital, Jingyi Yu. Interactive Light Field Viewer. 2006
Stephan Bohacek, Parag K. Mital. OpenGL Modeling of Urban Cities and GIS Data Integration. 2005

Associated Labs

Bregman Media Labs, Dartmouth College
EAVI: Embodied Audio-Visual Interaction group initiated by Mick Grierson and Marco Gillies at Goldsmiths, University of London
The DIEM Project: Dynamic Images and Eye-Movements, initiated by John M. Henderson at the University of Edinburgh
CIRCLE: Creative Interdisciplinary Research in CoLlaborative Environments, initiated between the Edinburgh College of Art, the University of Edinburgh, and elsewhere.

Summer Schools/Workshops Attended

Michael Zbyszynski, Max/MSP Day School. UC Berkeley CNMAT 2007
Ali Momeni, Max/MSP Night School. UC Berkeley CNMAT 2007
Adrian Freed, Sensor Workshop for Performers and Artists. UC Berkeley CNMAT 2007
Andrew Benson, Jitter Night School. UC Berkeley CNMAT 2007
Perry R. Cook and Xavier Serra, Digital Signal Processing: Spectral and Physical Models. Stanford CCRMA 2007
Ivan Laptev, Cordelia Schmid, Josef Sivic, Francis Bach, Alexei Efros, David Forsyth, Zaid Harchaoui, Martial Hebert, Christoph Lampert, Aude Oliva, Jean Ponce, Deva Ramanan, Antonio Torralba, Andrew Zisserman, INRIA Computer Vision and Machine Learning. INRIA Grenoble 2012
Bob Cox and the NIH AFNI team, AFNI Bootcamp. Haskins Lab, Yale University. May 27-30, 2014.

In the News

The Space (BBC/Arts Council England)
Globalish
CreativeApplications.net
Fast Company: Co.Design
The Creators Project (Vice/Intel)
BBC News
BBC News
NY Times
CreateDigitalMotion
David Bordwell
Makematics/Kyle McDonald

Volunteering

ISMAR 2010
CVPR 2009
ICMC 2007
ICMC 2006


Copyright © 2010 Parag K Mital. All rights reserved.