Tag Archives: drawing

[sims 2014] Ribbon drawing prototype

I’m trying to finish this “ribbon drawing” tool. It is now stable, allowing users to add, reset, and erase shapes. I also added some new physics simulations (attraction, wind) and fixed others — the springs are stable now, and the oscillation can interact with the other simulations.
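The post doesn’t include the simulation code, but spring stability usually comes down to the integration scheme. A minimal sketch in plain C++ (outside openFrameworks; the struct, names, and constants here are my own):

```cpp
#include <cmath>

// One spring node pulled toward a rest position.
struct SpringNode {
    float pos;
    float vel;
};

// Semi-implicit Euler: update velocity first, then position with the
// NEW velocity. Combined with damping < 1, this keeps the oscillation
// from blowing up.
void springStep(SpringNode& n, float rest, float k, float damping, float dt) {
    float force = k * (rest - n.pos); // Hooke's law, pulls toward rest
    n.vel += force * dt;
    n.vel *= damping;                 // bleed a little energy each step
    n.pos += n.vel * dt;
}
```

With damping below 1, the node oscillates around the rest position and settles there instead of diverging.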

Finally, I added the option to play back the drawing and record everything as a PNG sequence, so people can use it to make videos.
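In openFrameworks, a PNG sequence is typically captured by saving the screen once per frame; the fiddly part is zero-padding the filenames so video tools sort the frames correctly. A small plain-C++ helper sketch (the function name is my own):

```cpp
#include <string>
#include <sstream>
#include <iomanip>

// Build "frame_00042.png"-style names; zero-padding keeps the
// sequence in order when imported into ffmpeg or a video editor.
std::string frameName(int frame, int digits = 5) {
    std::ostringstream ss;
    ss << "frame_" << std::setw(digits) << std::setfill('0')
       << frame << ".png";
    return ss.str();
}
```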

GitHub repo here.

[sims 2014] Vector Fields

1. Changing direction based on mouse movement.

2. Reading pixels from the camera and changing the direction of the vector field based on their brightness.


3. 3D vector field with Perlin noise. The sketch is pretty buggy, though. I wasn’t able to add a camera or particles; any simple change makes it crash.
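As a rough illustration of the underlying idea (a grid of directions driven by a noise function), here is a plain C++ sketch. It uses a cheap smooth stand-in instead of real Perlin noise (oF’s ofNoise would be used in practice), and the class layout is my own:

```cpp
#include <cmath>
#include <vector>

// A grid of angles: each cell stores a flow direction derived from a
// smooth function of its coordinates.
struct VectorField {
    int cols, rows;
    std::vector<float> angles; // one angle per cell, in radians

    VectorField(int c, int r) : cols(c), rows(r), angles(c * r, 0.0f) {}

    // Smooth (but repetitive) stand-in for Perlin noise, in [0, 1].
    static float smoothNoise(float x, float y) {
        return 0.5f + 0.5f * std::sin(x) * std::cos(y);
    }

    // Recompute every cell; t animates the field over time.
    void update(float t) {
        const float TWO_PI = 6.28318530718f;
        for (int j = 0; j < rows; ++j)
            for (int i = 0; i < cols; ++i)
                angles[j * cols + i] =
                    smoothNoise(i * 0.1f + t, j * 0.1f) * TWO_PI;
    }
};
```

Particles would then read the angle of the cell they are in and steer toward that direction each frame.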

GitHub repo here.

[sims 2014] Ribbon drawing prototype (midterm)


I’m trying to make a drawing tool that generates ribbons. The final goal is to let people draw some interesting things — I can only think of calligraphic shapes, but anyway. And, hopefully, they can apply some physics effects and export everything as a video.
This tool has ONE user in mind: Laura. She needs it to make some lettering.

Everything is on the class GitHub.

The Little Prince – Final

Idea
An interactive app based on the book The Little Prince. The app covers the chapters in which the prince travels through seven planets before coming to Earth.

The project draws inspiration from interactive children’s books, particularly those by Bruno Munari, which explore the interaction with paper in non-traditional forms.
The purpose then was to make something analogous using a laptop. How can a user interact with it besides using mouse and keyboard?

See previous posts for more precedents and process.

The project uses 3 different inputs: camera, light sensor, and microphone. Each planet responds to only one of them — except for the first one, which is supposed to be a tutorial for the other parts.

This was my last prototype before the final version. All interactions and navigation are working. The drawings are still in sketch form — though the final version is a sort of sketch, too!


Original post: http://codeforart.cc/fall-2013/the-little-prince-final/

The final code is here.

Download the app here.

Interactive animation – 2nd Prototype

Christoph Niemann’s app “Petting Zoo” is pretty close to the interactions and animations I’m planning. In each scene, the user’s interaction triggers different animations:

Picture by Christoph Niemann

As for the drawing style, it will look more like this John Porcellino GIF:

Main Interaction

This is my 2nd prototype for the Code for Art finals. I’ve changed the idea a little bit. The user will no longer play with all 3 inputs (sound, light, and camera) in the same scene. Instead, I’ll have 7 different planets from The Little Prince. In each one, the user will play with a single input, as in the table below:
[table mapping each planet to its input]

The first scene/planet will respond to all 3 inputs because it will also work as an introductory tutorial.

I decided to keep the navigation as a separate input, using the keyboard. I think it needs to be something more stable and less exploratory — and also clearly separate from the animation interactions.

These are my sketches for the scenes so far. The next step is to integrate them with the coding from the previous prototype.
[scene sketches]

Original post: http://codeforart.cc/fall-2013/interactive-animation-2nd-prototype/

Interactive Animation – 1st Prototype

An interactive animation that responds to non-conventional user input.
Talking about his Face Tracker in an interview, Kyle McDonald pointed out that:
“…as far as a practical applications, I could imagine it augmenting the way the computer understands us. I’ve been thinking a lot about this recently. Your computer has a microphone to listen to you, an accelerometer to know when you drop it, a camera to watch you, an ambient light sensor to know how bright the screen should be. I have to wonder if it makes sense to respond to our pose and facial expressions.”
Though I’m not using face tracking, this paragraph sums up my inspiration for the project.
My aim is to explore those different inputs as much as possible, using an interactive animation as a basis for that.

I will use 3 different inputs from the hardware available on my MacBook: ambient light sensor, mic, and camera.
See previous proposal for more details.

Input Checking

The circle’s position changes according to the average optical flow inside the central grid.
The circle’s brightness responds to the ambient light; its size, to the sound input volume.
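Each of these mappings is essentially a clamped linear remap of an input range onto a visual property’s range (the role ofMap plays in openFrameworks). A plain C++ sketch, with hypothetical example ranges:

```cpp
#include <algorithm>

// Linearly remap v from [inMin, inMax] to [outMin, outMax], clamped.
// This mirrors what ofMap(v, inMin, inMax, outMin, outMax, true) does
// in openFrameworks.
float mapClamped(float v, float inMin, float inMax,
                 float outMin, float outMax) {
    float t = (v - inMin) / (inMax - inMin);
    t = std::max(0.0f, std::min(1.0f, t));
    return outMin + t * (outMax - outMin);
}

// Hypothetical usage with made-up ranges:
//   brightness = mapClamped(ambientLight, 0, 255, 50, 255);
//   radius     = mapClamped(micVolume,    0, 1,   10, 100);
```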

1st Prototype

I want the inputs to have an intuitive connection to the elements:
– the brightness changes the daylight (sun/moon);
– the sound has a wind effect;
– the user movement rotates the planet.
Now that the technology is working, I’ll add more elements to the animation. Interactions between elements will trigger different events — two touching clouds may cause lightning, a cloud over a rose may make it rain, etc.
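A simple way to detect those element interactions is a circle-overlap test between elements’ bounding circles. A hypothetical sketch (the struct and names are my own):

```cpp
struct Element {
    float x, y, radius;
};

// Two elements "touch" when the distance between their centers is
// less than the sum of their radii. Comparing squared distances
// avoids the square root. E.g., two touching clouds would trigger
// a lightning event.
bool touching(const Element& a, const Element& b) {
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy < r * r;
}
```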

Code for the input checking example here.
Code for the first prototype here.
You’ll also need two add-ons to run the code: ofxOpenCV (already included in the oF add-ons folder) and ofxOpticalFlowBarneback.

Original post: http://codeforart.cc/fall-2013/interactive-animation-1st-prototype/