Category Archives: Projects

Over the winter, I interned at NEW INC, an incubator program run by the New Museum in Manhattan. My main responsibility there was to update the design and functionality of the NEW INC website while migrating it to Squarespace for better blogging integration. The old site was created when NEW INC was in its infancy and focused more on publicizing the program and establishing its tenets, so one of the main design challenges was figuring out how to integrate content created by the community more prominently on the new site.

While the Squarespace platform made most of the content migration easy, I had to use extensive CSS styling to make the site match the style guidelines outlined by NEW INC and the New Museum. I also made some JavaScript/CSS changes to the site’s template to alter the default blog entry display, reconfigure the site’s breakpoints for better responsive design, and add some aesthetic features that couldn’t be achieved through Squarespace’s CMS.
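To give a flavor of the breakpoint work, here’s a minimal sketch of mapping viewport widths to layout modes in JavaScript. The breakpoint values and names here are hypothetical, not the actual NEW INC stylesheet values:

```javascript
// Hypothetical breakpoints -- the real values came from the
// NEW INC / New Museum style guidelines, not these numbers.
const BREAKPOINTS = [
  { name: 'mobile',  maxWidth: 640 },
  { name: 'tablet',  maxWidth: 1024 },
  { name: 'desktop', maxWidth: Infinity },
];

// Pick the layout mode for a given viewport width by finding
// the first breakpoint the width fits under.
function layoutFor(width) {
  return BREAKPOINTS.find(bp => width <= bp.maxWidth).name;
}
```

In a template, a function like this would drive which CSS class gets applied to the page container on resize.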

Here are some of the areas where I implemented significant changes from the old site’s design:

Slapstream v2.0

The Github repo for this project is located here.

Late last year, I had the opportunity to present an updated version of Slapstream, one of my old projects from my first semester of ITP, at Razorfish’s NY offices as part of Future Interfaces, a one-night technology exhibition run by the NY Media Lab. You can see more details from my older post linked above, but here’s the description from that page:

“Slapstream is a video game where you have to dodge onscreen obstacles by slapping yourself in the face.

It uses a Kinect to track the position and velocity of your hands and their distance from your face. Once it detects a slap, your onscreen character will move in the direction that your face reels. A more powerful slap to the face will result in greater movement onscreen.

The idea came out of my interest in making a novel game interface, and also sort of as an experiment to see how much inconvenience people will put up with for the sake of fun or competition.”
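The slap mechanic described above can be sketched roughly as follows. All thresholds, units, and the shape of the tracking data are hypothetical; the real game read joint positions from the Kinect:

```javascript
// Hypothetical slap detector: given a hand's horizontal velocity
// and its distance from the face, decide whether a slap occurred
// and how far the onscreen character should move.
const SLAP_DISTANCE = 0.15; // meters from face (made-up threshold)
const SLAP_SPEED = 1.0;     // meters/second (made-up threshold)

function detectSlap(handVelocityX, handToFaceDistance) {
  // The hand must be near the face and moving fast enough.
  if (handToFaceDistance > SLAP_DISTANCE) return 0;
  if (Math.abs(handVelocityX) < SLAP_SPEED) return 0;
  // The character moves in the direction the face reels -- opposite
  // the hand's motion -- scaled by the strength of the slap.
  return -Math.sign(handVelocityX) * Math.abs(handVelocityX) * 50;
}
```

A harder slap (higher velocity) yields a larger displacement, matching the “more powerful slap, greater movement” rule above.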

Traditionally, video games never emphasized the means of control as a way to add to the total experience of play. At first, controllers were designed to be purely functional; as they evolved, they became more ergonomic, so that players notice them less and less as they become increasingly immersed in the game’s world. More recently, however, there has been a reaction in the opposite direction with innovations such as the Wii and Kinect, where the mode of interaction becomes physical and joins with the onscreen world as part of the experience of play. I designed Slapstream as a different take on this new mode of interaction, exploring the entertainment value that comes from doing something unexpected and shocking in order to play an otherwise simplistic game.

I had left the project in an extremely prototypical state, and it was fun to revisit an old project now that I have a little more knowledge about programming. I worked on refining the slapping mechanic, overhauling the interface, outfitting the game with retro-style graphics, and adding a high-score system to encourage competition. You can see the video I made from exhibition footage above.

Update 7/2015: Slapstream has since been shown at Tribeca Film Festival’s Interactive Playground and at NYC Resistor’s annual interactive art show.

Accelerometer Labyrinth

As a summer intern at HAVAS Worldwide’s Innovation Lab, I was tasked with creating a short, entertaining experience that people could connect to and control with their smartphones. I ended up developing an online version of the classic game Labyrinth that could be tilted and played using the accelerometers present in mobile devices. The rationale behind my choice was that, while mobile gaming is now widespread, most of it involves interacting solely with the smartphone’s screen. I wanted to create an interaction where the physicality of the phone mattered more than the display, so that users could experience their phones in an atypical and more active manner.

I learned and practiced several new technologies while creating this game: the main game was coded in JavaScript, using the three.js library for visuals and the Physijs plugin for in-game physics. I used Spacebrew to manage the websocket connections that let a smartphone connect to the game, and used JavaScript and PhoneGap to write the app that sent accelerometer data from the phone.
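The core of the tilt control is a small mapping from accelerometer readings to a clamped board rotation. This is an illustrative sketch with made-up axis conventions and limits, not the actual game code:

```javascript
// Convert a phone accelerometer reading into a clamped board tilt
// (in radians) for the physics simulation. Clamping keeps a violent
// shake from flipping the labyrinth over.
const MAX_TILT = Math.PI / 12; // 15 degrees (made-up limit)

function clamp(v, lo, hi) {
  return Math.min(hi, Math.max(lo, v));
}

// ax, ay: accelerometer readings in m/s^2 (gravity is ~9.8),
// as a phone's devicemotion events would report them.
function boardTilt(ax, ay) {
  return {
    x: clamp(ax / 9.8, -1, 1) * MAX_TILT,
    y: clamp(ay / 9.8, -1, 1) * MAX_TILT,
  };
}
```

In the real setup, readings like these arrived over the Spacebrew websocket and were applied to the Physijs board each frame.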

Although the game was built for (and functioned best with) control from mobile devices, the documentation above unfortunately only shows gameplay with keyboard controls.

[Thesis] We’re Still Here


My thesis is entitled We’re Still Here, and it is the beginning of a planned series of ordinarily mundane objects that have evolved personalities and behaviors based on their original functions. Here’s the longer blurb:

“Why should our sleek, sexy, status-symbol gadgets get all our attention?

We’re Still Here is an exploration of the ordinary objects in our lives that perform their duties day in and day out without much acknowledgement or conscious thought from their users. Each object in this collection is modified to display surprising behaviors or personality traits that are derived from how it normally operates; the series begins with a neurotic, overly needy alarm clock and a dutiful-yet-exhausted coatrack that just wants to catch a short break.

By giving personalities to these objects, I’ll playfully invoke a new way to look at and think about the myriad commonplace, “boring” tools that quietly contribute to our lives.”

I’ve been interested in the idea of personifying objects since my beginnings at ITP; my physical computing final, The Embarrassed Book, was based on the idea that a book may not always be interested in divulging the information it contains. It also fascinates me how, out of the large variety of objects we use every day, a select few are extremely predisposed to become status symbols while the vast majority have no chance to ever really capture our attention or imaginations. I want to help our largely unconsidered tools become more interesting to us.


The Alarm Clock:
The alarm clock consists of the shell of a real alarm clock fitted with a seven-segment LED display, two piezo buzzers, and capacitive touch sensing, all controlled by an Arduino Pro Micro. It will be extremely needy, desiring constant touch, and will count down to a specific time at which it will cry out for attention and express its disappointment with you. If you happen to touch it independently of its time limit, it will chirp pleasantly and display its satisfaction.

The concept for its behavior is based on the idea that a normal alarm clock is essentially already a highly neurotic object that goads you into touching it regularly at a very specific time every day. Users depend on alarm clocks to wake them up, but they never consider what the clock’s needs might be in this relationship.
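The needy countdown described above can be modeled as a tiny state machine. This is a hypothetical JavaScript sketch of the behavior; the real logic ran on the Arduino Pro Micro:

```javascript
// Hypothetical model of the needy alarm clock: it counts down to
// its demand for attention. A touch resets the countdown and earns
// a happy chirp; letting it reach zero triggers the needy alarm.
function createClock(countdownSeconds) {
  let remaining = countdownSeconds;
  return {
    // Called once per second by the main loop.
    tick() {
      remaining -= 1;
      return remaining <= 0 ? 'alarm' : 'waiting';
    },
    // Called when the capacitive sensor detects a touch.
    touch() {
      remaining = countdownSeconds; // appeased: restart the countdown
      return 'happy-chirp';
    },
  };
}
```

The inverted dependency is the whole joke: normally you rely on the clock’s countdown, but here the clock relies on yours.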

Pictures of construction:

Video of an early prototype in action – no sounds yet, but displaying positive messages:

The Coatrack:
The coatrack is interested in performing its duties as a coatrack well, but gets physically tired from holding up bulky coats all day. Thus, when it thinks you’re not around to watch it, it likes to slouch over to take a small break. The concept is based on the idea that a coatrack stands perfectly straight and bears a heavy burden for its entire existence, performing its duties admirably, and yet nobody ever really notices or appreciates that it’s doing so. It must be a thankless lifestyle.

The coatrack will consist of segments of PVC pipe with 3D-printed caps at each end that allow three cords to run through them. At the base of the rack, three high-torque servos either hold the cords taut for a straight appearance or selectively allow slack in different areas to let the segments lean in a specific direction. Photocells under the three hooks let the Arduino know when coats are hanging so it can decide which servo to slacken, and two IR rangefinders let the rack detect when people are approaching. Some concept/testing pictures:
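The decision logic above can be sketched roughly like this. Sensor thresholds and names are hypothetical; the real version runs on an Arduino reading photocells and IR rangefinders:

```javascript
// Hypothetical coatrack logic: stand straight when watched, and
// when alone, slacken a servo under a loaded hook to slouch.
const PERSON_NEAR_CM = 100; // IR rangefinder threshold (made up)
const COAT_ON_HOOK = 300;   // photocell reading below this = hook covered

function chooseAction(irDistancesCm, photocells) {
  // Stand up straight whenever anyone is close enough to watch.
  if (irDistancesCm.some(d => d < PERSON_NEAR_CM)) return 'stand';
  // Otherwise slacken the servo under the first loaded hook,
  // or servo 0 if no coats are hanging.
  const loaded = photocells.findIndex(p => p < COAT_ON_HOOK);
  return `slack-servo-${loaded >= 0 ? loaded : 0}`;
}
```

The “caught in the act” moment comes from the first branch: the instant a rangefinder reports someone nearby, the rack snaps back to attention.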

And a video of the most recent prototype with the full-size PVC segments:

Development & Documentation

The Alarm Clock:
Since the prototyping phase, I’ve refined the clock’s behavior to make it more believable and engaging as an object. Previously, it simply started with a countdown to when it wanted to be touched; now, it acts as a normal clock for a random amount of time and switches to the countdown only once it starts to get impatient with your lack of attention. Both behaviors appear in the documentation video, shortened to keep the video at a reasonable length.

You can see the basic behavior from the video, but not the variety of messages it displays. Ignoring it will always trigger a regular alarm clock beep, but the clock will choose from a pool of guilt-tripping messages including “don’t leave”, “deserter”, and “you’re cruel”. Appeasing it will trigger one of a variety of happy chirps and messages such as “again, again” and “ahhh, so good”.

The Coatrack:
The coatrack is still a work in progress, as it is more physical/mechanical in nature and thus a harder problem to solve. The idea from the prototyping phase was that when it sees you, it stands up straight, but when it thinks you’re not around, it slowly starts to slouch so it can relax a little. I have run into some mechanical issues that have slowed progress; namely, putting even a light load on the hooks inhibits its ability to stand up straight. You can see a short clip below with a sampling of its different movements, both with and without a coat on its hooks. I hope to continue working on it over the summer to improve the design and bring it to its originally planned state.

The Presentation

Here, you can view my slides and listen to the talk about my thesis that I gave for ITP Thesis Week. It’s roughly 14 minutes long and includes both of the video segments shown above.

Sculpted Ocean

Sculpted Ocean is a unique kind of globe. Instead of focusing on the features and topography of the world’s landmasses like most globes, it tries to shine a light upon the depth and composition of the world’s vast oceans.

To make the globe, Amanda Gelb and I used over 1.6 million points of worldwide oceanic depth data from NOAA, along with chlorophyll-level imaging data from NASA’s SeaWiFS satellite monitoring program.
Data was cleaned and parsed in Ocean Data View and Python, mapped and interpolated in ArcGIS (with help from the data specialist librarians at NYU Data Services), and prototyped and modeled in different phases in Processing, Rhino, and ZBrush. It was printed on the 3DSystems ZPrinter 650 powder printer at NYU’s Advanced Media Studio.
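For a sense of the kind of preprocessing involved, here’s a toy sketch of one step: binning scattered depth soundings into a regular grid by averaging. The field names and grid resolution are illustrative only; the actual cleaning and interpolation happened in Ocean Data View, Python, and ArcGIS:

```javascript
// Toy version of gridding scattered ocean-depth points:
// average all soundings that fall into each 10-degree cell.
const CELL = 10; // degrees per grid cell (illustrative only)

function gridDepths(points) {
  const sums = new Map(); // cell key -> { total, count }
  for (const { lat, lon, depth } of points) {
    const key = `${Math.floor(lat / CELL)},${Math.floor(lon / CELL)}`;
    const cell = sums.get(key) || { total: 0, count: 0 };
    cell.total += depth;
    cell.count += 1;
    sums.set(key, cell);
  }
  // Reduce each cell to its mean depth.
  const grid = new Map();
  for (const [key, { total, count }] of sums) {
    grid.set(key, total / count);
  }
  return grid;
}
```

A gridded heightmap like this is what can then be displaced into 3D geometry for modeling and printing.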



Pop-Up Window Display – Sock Monster


For our 7-week Pop-Up Windows class, we were tasked with forming teams and designing an interactive window display for a vacant building on the NYU campus. During this time, my team and I planned and installed a window exhibit in which a sock is lowered into a giant washing machine and eaten by the monster that lives there. The project involved animation, puppetry, fabrication, set design, projection, and physical computing. We built the washing machine by stretching fabric over a wooden frame with an open back so that we could project onto its front. A button was placed on the outside of the window display; when it was pushed, a stepper motor lowered the sock into the machine, and animations played of the monster interacting with, and ultimately pouncing upon, the shadow of the sock. Afterward, the lights would dim on the display and the sock would retract to its original position, ready for the next user.



The Sock Monster display next to its neighbor The Internet.


An Arduino controlled the stepper, the DMX lighting, and the button, and interfaced with Max/MSP to randomize and play the different animations at the appropriate times. My role on the project was primarily creating the monster’s animations and doing the Max/MSP programming; I also assisted with the construction of the frame and with the Arduino code.

Here’s a timelapse video showing an early stage in the construction process – our team comes in during the second half of the video to start work on the frame:


NYC Food Crawl – ITP Winter Show 2013

This post assumes knowledge of the concept behind this project. To view the project proposal, click here.
The Processing code that I wrote is posted on Github. Click here to view.


NYC Food Crawl is a physical data representation that uses diorama sets with live cockroaches, along with an accompanying screen-based visualization, to represent the frequency of vermin violations in New York City–area restaurants. I took the restaurant food inspection results dataset from NYC OpenData and brought the main database and the violation-codes database into Google Refine and Excel for cleaning. After cleaning the data and isolating the vermin-specific violation codes, I brought both datasets into Processing. From there, I was able to calculate how many restaurants fell into each inspection grade level, how many of those restaurants had had a vermin-related violation recorded, and how many of those violations had occurred within the past year.

I then used these figures to build the physical part of the project: restaurant dioramas, one for each grade, each with a representative number of live cockroaches inserted to reflect the data. I decided to keep the representation simple: the number of cockroaches in each grade’s diorama represents the percentage of restaurants in that grade that had had a vermin violation. Since my order of cockroaches (linked if you’re curious – most people at the show were) came in mixed sizes, I made cockroach size indicative of recency: the ratio of larger cockroaches to total cockroaches in each diorama matches the ratio of recent violations to total violations. Put more simply, the more detectable the presence of large cockroaches was in a box, the higher the percentage of recent violations for that grade level.
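The mapping from data to roaches boils down to two pieces of arithmetic, sketched here with illustrative numbers rather than the actual NYC figures:

```javascript
// Given counts for one grade level, compute how many cockroaches
// go in its diorama and how many of them should be the larger size.
function dioramaFor(totalRestaurants, verminViolations, recentViolations) {
  // One roach per percentage point of restaurants with a vermin violation.
  const roaches = Math.round((verminViolations / totalRestaurants) * 100);
  // Large roaches mirror the share of violations that are recent.
  const large = Math.round(roaches * (recentViolations / verminViolations));
  return { roaches, large };
}
```

So a hypothetical grade where 50 of 200 restaurants had a violation, half of them recent, gets a 25-roach diorama with about half of the roaches being the large kind.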

A concern I had from the viewer’s perspective was that, while memorable and attention-getting, the cockroach dioramas provide a very shallow representation of the data. Viewers might also get a skewed impression of the data, as the cockroaches like to hide out of sight and may not all be visible at once. To address this, I built an interactive graphical visualization in Processing that both emulates the physical display for each grade level and provides additional statistics. I also included statistics sorted by borough instead of inspection grade level, in case viewers wanted to explore the data further. You can see a video of this below:


Here are some pictures of my setup for the show, with both the dioramas and screen visualization: