Archived entries for copyright

NFT Pixels

NFTs, or non-fungible tokens, are records stored on a blockchain that certify and track transactions associated with a digital asset. Effectively, they are certificates of authenticity for digital assets traded on blockchain-based exchanges, and they have spurred an economy of scarcity around digital assets such as images, GIFs, sound clips, and videos. In the past, artists producing digital content may have considered selling their work on platforms such as Instagram, Behance, DeviantArt, or via their own websites, though it is unlikely that this content would have actually been bought in any form, likely because the content was already accessible to all. On these platforms, if we are to call it art, it could be akin to a digital version of public art in some ways, for all to see and consume. Or, more likely, we would consider it either documentation or advertising and marketing of the artist's work.

The entire point of NFTs, and the reason they are marketed as being so valuable, is that we believe there is digital scarcity in the content being traded and sold.

Prior to NFT marketplaces for digital art, let's say someone were to decide that they wanted ownership of … Continue reading...

YouTube’s “Copyright School” Smash Up

Ever wonder what happens when you’ve been accused of violating copyright multiple times on YouTube? First, you get a redirect to YouTube’s “Copyright School” whenever you visit YouTube, forcing you to watch a Happy Tree Friends cartoon in which the main character is dressed as an actual pirate:

Second, I’m guessing, your account will be banned. Third, you cry and wonder why you ever violated copyright in the first place.

In my case, I’ve disputed every one of the 4 copyright violation notices that I’ve received on grounds of Fair Use and Fair Dealing. Here’s what happens when you file a dispute using YouTube’s online form (click for high-res):

3 of the 4 have been dropped after I filed disputes, though I’m still waiting to hear the response to the above dispute. Read the dispute letter to Sony ATV and UMPG Publishing in full here.

The picture above shows a few stills from what my Smash Ups look like. The process, described in greater detail on createdigitalmotion.com, is part of my ongoing research into how existing content can be transformed into artistic styles reminiscent of analytic cubist, figurative, and futurist paintings. The process to create the videos … Continue reading...

An open letter to Sony ATV and UMPG

Dear Sony ATV Publishing, UMPG Publishing, and other concerned parties,

I ask you to please withdraw your copyright violation notice on my video, “PSY – GANGNAM STYLE (?????) M/V (YouTube SmashUp)” as I believe my use of any copyrighted material is protected under Fair Use or Fair Dealing. This video was created by an automated process as part of an art project developed during my PhD at Goldsmiths, University of London: http://archive.pkmital.com/projects/visual-smash-up/ and http://archive.pkmital.com/projects/youtube-smash-up/

The process that creates the audio and video is entirely automated, meaning the accused video is created by an algorithm. The algorithm begins by building a large database of tiny fragments of audio and video (less than 1 second of audio per fragment) from 9 videos on YouTube’s top-10 list. Within this database, the tiny fragments of video and audio are stored as unrelated pieces of information, each described only by a short series of 10-15 numbers. These numbers represent low-level features describing the texture and shape of a fragment of audio or video. These tiny fragments are then matched to the tiny fragments of audio and video detected within the target for resynthesis, in this case the number one YouTube video … Continue reading...
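The database-and-matching step described above can be sketched in a few lines. This is a hypothetical simplification, not the actual Smash Up code: the fragment names are invented, and the "features" here are coarse binned means standing in for the real 10-15 low-level texture/shape descriptors.

```python
import numpy as np

# Each fragment is summarized by a short feature vector (12 numbers
# here, standing in for the 10-15 low-level descriptors in the post).
N_FEATURES = 12

def describe(fragment: np.ndarray) -> np.ndarray:
    """Reduce a raw fragment (audio samples or pixels) to a short
    feature vector. A real system would use spectral or visual
    descriptors; binned means are a simplified stand-in."""
    bins = np.array_split(fragment.ravel().astype(float), N_FEATURES)
    return np.array([b.mean() for b in bins])

class FragmentDatabase:
    """Store fragments only as (feature vector, raw data) pairs, with
    no memory of which source video each fragment came from."""

    def __init__(self):
        self.features = []   # short descriptors, one per fragment
        self.fragments = []  # the raw fragments themselves

    def add(self, fragment: np.ndarray) -> None:
        self.features.append(describe(fragment))
        self.fragments.append(fragment)

    def match(self, target_fragment: np.ndarray) -> np.ndarray:
        """Return the stored fragment whose features are closest
        (Euclidean distance) to the target fragment's features."""
        target = describe(target_fragment)
        dists = [np.linalg.norm(f - target) for f in self.features]
        return self.fragments[int(np.argmin(dists))]
```

The target video would then be rebuilt fragment-by-fragment, each of its tiny pieces replaced by the closest match found in the database.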

Intention in Copyright

The following article is written for the LUCID Studio for Speculative Art based in India.

Introduction

My work in audiovisual resynthesis aims to create models of how humans represent and attend to audiovisual scenes. Using pattern recognition on both audio and visual material, these models build large corpora of learned audiovisual material that can be matched to ongoing streams of incoming audio or video. The way audio and visual material is stored and segmented within the model is based heavily on neurobiological and behavioral evidence (the details are saved for another post). I have called the underlying model Audiovisual Content-based Information Description/Distortion (or ACID for short).

As an example, a live stream of audio may be matched to a database of sounds learned from recordings of nature, creating a resynthesis of the present audio environment using only pre-recorded material from nature itself. These learned sounds may be fragments of a bird chirping, or the sound of footsteps. Incoming sounds of someone talking may then be synthesized using the closest-sounding material to that person talking, perhaps a bird chirp or a footstep. Instead of a live stream, one can also resynthesize a pre-recorded stream. Consider using a database … Continue reading...
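The resynthesis loop in that example can be sketched as follows. This is a minimal, hypothetical illustration, not the ACID implementation: the sample rate and window length are assumed, and fragment similarity is reduced to comparing mean signal levels where a real system would compare richer audio features.

```python
import numpy as np

SR = 8000        # assumed sample rate (Hz)
WIN = SR // 4    # 250 ms windows, well under one second per fragment

def resynthesize(incoming: np.ndarray, corpus: list[np.ndarray]) -> np.ndarray:
    """Rebuild `incoming` using only learned fragments from `corpus`
    (e.g. bird chirps, footsteps), one window at a time."""
    out = []
    for start in range(0, len(incoming) - WIN + 1, WIN):
        window = incoming[start:start + WIN]
        # Pick the corpus fragment whose mean level is closest to the
        # window's; a stand-in for matching real audio descriptors.
        dists = [abs(frag.mean() - window.mean()) for frag in corpus]
        out.append(corpus[int(np.argmin(dists))][:WIN])
    return np.concatenate(out)
```

Whether the input is a live microphone stream or a pre-recorded file, the loop is the same: segment, match, and concatenate the matched fragments into a new signal built entirely from the learned corpus.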


Copyright © 2010 Parag K Mital. All rights reserved. Made with WordPress. RSS