xmedia5


Making Music!

In arduino, sounds, systems on March 17, 2010 at 3:47 pm

Configuration of sensors and what they trigger

How music is played

The dotted lines representing E/D pairs can be thought of as invisible strings. Each of the vertical E/D pairs on the inside of the big arch is coupled with a range-finder.
This means that every time someone waves their hand in such a way that it cuts across the beam of an E/D pair, the musical note for that particular instrument is played (guitar tones for the E/D pairs on the left, and piano tones for the ones on the right).

How will these notes vary?

The range-finder coupled with the E/D pairs on each side will measure the height at which the beam was broken, which directly determines the pitch of the note played.

The single E/D pair on top enables play for a single person: triggering it initiates a 20-second percussion sequence that can act as a background tune while the other tones are activated.

Photo-sensors on the smaller arch (red dots) will act as drum beats whenever someone taps on them. So in effect, for the player, it would be like a set of drums.

One photo-sensor will be kept on top, next to the solar panel, for day-night detection, putting the module into power-save mode during night-time.
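
To make the trigger logic concrete, here is a minimal Arduino sketch of the idea; the pin numbers, the note table, and the piezo output are placeholder assumptions for illustration, not our final build:

    // One E/D pair plus range-finder -> pitched note. Pins and values are assumptions.
    const int edPin = 2;         // IR detector output (beam broken = LOW, assumed)
    const int rangePin = A0;     // analog range-finder mounted beside the pair
    const int spkPin = 9;        // piezo speaker for testing

    // one octave of C major, in Hz
    const int scale[8] = {262, 294, 330, 349, 392, 440, 494, 523};

    void setup() {
      pinMode(edPin, INPUT);
    }

    void loop() {
      if (digitalRead(edPin) == LOW) {      // hand cut across the beam
        int h = analogRead(rangePin);       // height at which it was cut
        int idx = map(h, 0, 1023, 0, 7);    // higher hand -> higher note
        tone(spkPin, scale[idx], 200);      // sound the note for 200 ms
        delay(250);                         // crude debounce
      }
    }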

Summary document: sound-interaction-1-page-summary

Waveshield + sensors

In Uncategorized on March 17, 2010 at 3:46 pm

Soldering the waveshield components & the amplifier

Setup of the waveshield & sensors

This week we worked on soldering the waveshield and amplifier components. Once that was done, we decided to take our waveshield for a test drive using one of our initial experiments with the range-finder. Drawing on the OLPC music library (there are tons of WAV files of different instruments there), we added some sounds to the SD card and were able to successfully play different WAV files depending on the distance recorded by the range-finder sensor. For instance, if a hand was brought very close to the range-finder, it would play a low piano sound, whereas if the hand was far from the sensor, it would play a higher note.
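
For the record, the test sketch had roughly the following shape; the file names, thresholds, and A0 wiring below are illustrative stand-ins rather than our exact code, and WaveHC is the Adafruit library for the waveshield:

    #include <WaveHC.h>
    #include <WaveUtil.h>

    SdReader card;    // SD card on the waveshield
    FatVolume vol;
    FatReader root;
    FatReader file;
    WaveHC wave;

    // stop whatever is playing and start the named WAV from the SD card
    void playFile(const char *name) {
      if (wave.isplaying) wave.stop();
      if (file.open(root, (char *)name) && wave.create(file)) {
        wave.play();
      }
    }

    void setup() {
      card.init();
      vol.init(card);
      root.openRoot(vol);
    }

    void loop() {
      int d = analogRead(A0);                       // range-finder reading
      if (d > 600)      playFile("PIANO_LO.WAV");   // hand close -> low note
      else if (d > 300) playFile("PIANO_MD.WAV");
      else              playFile("PIANO_HI.WAV");   // hand far -> higher note
      delay(300);                                   // let the note breathe
    }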

We then proceeded to add photo-sensors to our circuit and make them act like drums for the outer side of the smaller arch in our module. This was also completed successfully!
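
That drum behavior is just edge detection on a photocell. Added to the sketch above, it might look like this fragment (the pin, threshold, and file name are assumptions):

    const int drumPin = A1;     // photo-sensor on the outer side of the small arch
    bool wasCovered = false;

    void checkDrum() {          // call this from loop()
      bool covered = analogRead(drumPin) < 300;  // a tap blocks the light
      if (covered && !wasCovered) {
        playFile("DRUM1.WAV");  // playFile() from the sketch above
      }
      wasCovered = covered;     // retrigger only after the hand lifts
    }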

Experimenting with sounds & rangefinder

In arduino, MIDI, sounds on March 10, 2010 at 5:22 pm

Yesterday, we spent some time trying to use the data stream from the range-finder to drive the interval rates of certain frequencies that were then played on a piezo-speaker.
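
Reconstructed loosely, the experiment had this shape (the pin numbers and mapping range are approximations):

    const int piezoPin = 8;

    void setup() {
    }

    void loop() {
      int d = analogRead(A0);               // range-finder data stream
      int gap = map(d, 0, 1023, 40, 400);   // distance sets the gap between blips
      tone(piezoPin, 440, 30);              // short fixed-frequency blip
      delay(gap);                           // interval rate driven by the sensor
    }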

During our attempts, we concluded that we would definitely need a richer sound stream, like a MIDI file for the aural experience to be something that people can enjoy.

In order to use the Arduino board as a MIDI controller (output), we'll need a MIDI connector, which costs ~$2. A tutorial on the ITP program's Physical Computing lab page here shows the procedure, and also mentions at the end how the different tones can be driven by an analog sensor instead of a switch. The tutorial also challenges us to figure out ways to make different instruments' sounds using the same circuit setup with differing MIDI values.
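
In the spirit of that tutorial, sending MIDI from the Arduino comes down to writing 3-byte note-on messages over serial at MIDI's 31250 baud. A minimal sketch, with the sensor-to-pitch mapping being our own assumption:

    // send a MIDI note-on message (3 bytes) out the serial port
    void noteOn(byte channel, byte pitch, byte velocity) {
      Serial.write(0x90 | channel);   // note-on status byte
      Serial.write(pitch);
      Serial.write(velocity);
    }

    void setup() {
      Serial.begin(31250);            // standard MIDI baud rate
    }

    void loop() {
      int d = analogRead(A0);                  // analog sensor instead of a switch
      byte pitch = map(d, 0, 1023, 48, 72);    // two octaves, C3 to C5
      noteOn(0, pitch, 100);                   // strike the note
      delay(200);
      noteOn(0, pitch, 0);                     // velocity 0 releases it
      delay(50);
    }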

For this purpose, there are many MIDI libraries available online that we can make use of. Also, to make ‘melodious’ tones, we'll need to stick to some grounding rules of music theory, like octaves and chords. Since our experiment yesterday revealed that the range-finder's linearized data stream is noisy for slight variations in distance, we probably shouldn't correlate the tune directly to the raw distance value; instead, we should trigger a new note only when the distance changes by a set amount (say, every 10th centimeter).
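
Sketching that bucketing idea, and reusing the noteOn() helper above, the loop might become something like this (the analog-to-centimeter conversion is a placeholder that depends on the sensor's linearization):

    const byte cMajor[8] = {60, 62, 64, 65, 67, 69, 71, 72};  // MIDI notes, C4 up
    int lastBucket = -1;

    void loop() {
      int cm = analogRead(A0) / 8;             // placeholder conversion to cm
      int bucket = constrain(cm / 10, 0, 7);   // one scale degree per 10 cm
      if (bucket != lastBucket) {              // ignore jitter inside a bucket
        noteOn(0, cMajor[bucket], 100);
        lastBucket = bucket;
      }
      delay(50);
    }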

Some examples of making music with the Arduino and basic tutorials on creating melodies:

http://createdigitalmusic.com/2008/09/08/making-music-with-the-arduino-wires-solder-and-sound-round-up/

http://www.apronus.com/music/lessons/unit01.htm

http://www.tigoe.net/pcomp/code/input-output/analog-output

Updated Digital Interaction Thoughts

In elements, Meeting Notes, team progress report on March 9, 2010 at 1:46 pm

There are 2 different sizes of arches, small and big. They mix together, presumably according to the pictures in the Freedom Park proposal document. For both sizes, it is possible to make them digitally interactive in the same way and let their physical shapes change the type of interaction (sitting as opposed to climbing, etc.). We can place an IR rangefinder inside the arch, with the laser trip wire right next to it. When the laser is tripped, the input from the rangefinder will be read and a note will be played, maybe a piano note. The notes will always be in tune rather than “slide-able” in pitch, since the rangefinder was fidgety when we tested it, though accurate to a certain degree (reliable within ranges rather than at specific values; we'll post this information soon).

With one of these setups per arch, we use all our sensors. We can also put 3 photo sensors per arch; covering a photo sensor causes the sound to drop an octave, allowing for different types of interaction as more people come along, cover the sensors, and change the sound produced. Perhaps one sensor should not be interactive if it is used to sense sunlight for power purposes.
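
In code, the octave drop is a one-line adjustment wherever the note gets chosen; a hypothetical fragment (the pin and threshold are placeholders):

    // subtract 12 semitones (one octave) while the photo sensor is covered
    int withOctaveDrop(int midiNote) {
      if (analogRead(A2) < 300) {   // a covered photo sensor reads dark
        return midiNote - 12;
      }
      return midiNote;
    }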

As for using multiple IR emitter/detector pairs: music theory is (at least according to Bach's way of writing it) strongly based on the triad, where there's the root note, then the note a 3rd above it, then the note a 3rd above that one (usually in a major scale, for example C, E, and G). The bottom pair could trigger the bottom of the chord, the middle pair the middle, and the top pair the top. The root note would be varied based on the IR rangefinder.
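
That chord logic is tiny in code; a hypothetical helper of our own (the interval arithmetic is standard music theory, the mapping is ours): a major triad is the root plus 4 and 7 semitones, so each E/D pair just picks an offset above the rangefinder-derived root.

    // which: 0 = bottom pair, 1 = middle, 2 = top
    int chordTone(int rootNote, int which) {
      const int interval[3] = {0, 4, 7};   // root, major third, fifth (e.g. C, E, G)
      return rootNote + interval[which];
    }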

Redesign sketches

In designs, models, sketches on March 3, 2010 at 10:22 pm

Design 1 photos

In Uncategorized on March 3, 2010 at 10:12 pm
These are some of our photos from the first Design competition.

Poster and Model

A side view

Top View

Another side model

Universal Design Matrix – Play Structure

In design rules, universal design matrix on February 27, 2010 at 10:43 pm
Physical Design Structure (Universal Principles; each cell rated High / Medium / Low):

Equitable Use: Wheelchair accessible; gentle slopes and the sound system accommodate visually and physically impaired people.
Flexibility in Use: Some people can play with sound, others can engage physically, others can simply relax.
Simple & Intuitive Use: Symbols located with notes; function becomes apparent visually.
Perceptible Info: Symbols located with notes; function becomes apparent visually.
Tolerance for Error: No inappropriate uses, so no problems.
Low Physical Effort: Minimal to no physical effort.
Size & Space for Approach & Use: Different spaces for different sized people; handicap accessible. Experience is similar for disabled users.

Physical Design Structure (Norman's Principles; each cell rated High / Medium / Low):

Visibility: Apparent upon observation.
Feedback:
Constraints: Surface not climbable.
Mapping:
Consistency: Similar shapes communicate function.
Affordance:

Interactive Component (Norman's Principles; each cell rated High / Medium / Low):

Visibility: Form implies use, discovery on exploration.
Feedback: Audio feedback occurs.
Constraints:
Mapping: Positioned according to pitch.
Consistency: Consistent indicators for sound.
Affordance: Tactile feedback.

The Design of Future Things (D. Norman)

In Readings, Week 3 Readings on February 3, 2010 at 9:04 pm

The Design of Future Things pertains to the importance of having some sort of feedback designed into objects. The design of objects has changed over time; people often opt to create silent devices for elegance or for a quiet home or workplace. However, according to Norman, this has led to the development of objects that no longer give us the feedback that old devices used to. For example, a person may miss an elevator because its entire operation is quiet; the old noises and chimes of an arriving elevator are no longer there to cue the individual.

According to Norman, a device must provide feedback, which is essential for reassurance, progress reports and time estimates, learning, special circumstances, confirmation, and governing expectations. For example, when driving a car, the sound of the engine can provide feedback about what rpm it is running at, or whether it is breaking down.

Ultimately, Norman contends that when a user fails at using an object, the object is at fault, not the user. He illustrates this point with the Apple Newton, a 1993 device built around handwriting recognition. The system was “released with great fanfare,” but ultimately was difficult to use and failed.

Norman goes on to suggest designs that give the user feedback requiring almost no thought, or rather only natural thought. These include natural signs, such as varying the distance between two hands to signal a driver how much space is left between the car and a wall, and natural mappings, such as stove knobs laid out in the same arrangement as the burners (as opposed to in a line).

Ultimately, Norman suggests that design of “smart” machines should follow six rules:

1. Provide rich, complex, and natural signals.
2. Be predictable.
3. Provide a good conceptual model.
4. Make the output understandable.
5. Provide continual awareness, without annoyance.
6. Exploit natural mappings to make interaction understandable and effective.

In the second portion of the reading, Norman plays the part of an interviewer of the machines, and produces the following design rules for “machines to improve their interactions with people”:

1. Keep things simple.
2. Give people a conceptual model.
3. Give reasons.
4. Make people think they are in control.
5. Continually reassure.
6. Never label human behavior as “error”.
(copied from reading)

From “The Design of Future Things”, Ch 8, by D. Norman

Physical Interfaces in Electronic Arts - Bongers

In Readings, Week 2 Readings on January 27, 2010 at 5:40 pm

In Physical Interfaces in Electronic Arts, Bongers talks about a couple of different modes of interaction between electronics and the audience. He speaks more about audio than other systems, but it can be translated to all electronics. Bongers describes three different kinds of interaction in music. The first is performer-system interaction, which is a musician playing his instrument. The second is system-audience interaction, as in installation art. Lastly, we have performer-system-audience interaction, which makes it a two-way process between performer and audience, through the system.

In chapter 2, Bongers talks more about the mechanics of digital sound, different sensing technologies, and how human output is the starting point for information. Some examples of human output are muscle action, blowing, voice, blood pressure, and heart rate. The two kinds of muscle action Bongers discusses are isometric (sensed with pressure sensors and switches) and movement (displacement).

Week 2 Reading : Chapter 6

In Readings, Week 2 Readings on January 27, 2010 at 4:27 pm

Starts off talking about the difficulties of describing tasks in terms of design.
When discussing interface design, there are 2 main problems with theoretical design models:
1. They are laborious and time-consuming.
2. They are theory-laden and beyond the scope of a typical HCI designer.

Tangible computing has 3 related ways of connecting activities and the space in which they're carried out:
1. through the configurability of space
2. through the relationship of body to task
3. through physical constraints

Much of interaction involves communicating realistic data.
Additionally, users tend to actively improvise while operating within the set virtual reality.

Design changes to support improvised action by giving users control over management.
Support improvisation by making the dynamic situation visible to the user.
Common expectations and meanings generally form around how systems operate; this relates to how the mind works.
This means we need to pay attention to how users' behavior evolves around technologies over time.

Six Design Principles Explored:

Computation is a Medium
This basically means that we are modulating our behavior and that the computer translates it via computation.
The computer is not relevant unless we look at how it is computing.
We modulate based on how the computation works.
Therefore we must look at how computation affects modulation, which in turn affects meaning and definition.
Focus not on the capability of technologies per se, but on how technology embeds into a set of practices.

Meaning Arises on Multiple Levels

Objects carry meaning on multiple levels: in their own right, as signifiers of social meaning, as elements in systems of practice, etc.
Creating and managing meaning cannot simply be up to the designer.
Items have iconic and symbolic meanings.
Items also vary between objects and actions.

Users, not Designers, Create and Communicate Meaning
Users, not Designers, Manage Coupling

There is the artifact, and how the artifact is used.
Designer cannot completely control how people use an artifact.
Need to be alert to how systems may be used.
Need to be aware of how it operates within a set of existing practices.
Designers can only suggest coupling of meanings.
Ultimately the users, and their perception of how things are used, determine coupling.
The designer's stance is his conception of what he is doing.
That stance needs to change: operate as the user, not as a dictator to the user.
A different set of issues then needs to be seen.
This can be done by looking at a user and acting with and through them instead of for them.

Embodied Technologies Participate in the World they Represent

Meaning arises from engaged action.
Embodiment doesn't mean physical reality; it means participative status.
Creating a language through engaged action requires participation in existing schema.
In the roller example, it's not about the rollers themselves, nor about interaction in the abstract, but about how the rollers integrate into human interaction.
Representation and participation are important considerations in system design.

Embodied Interaction Turns Action into Meaning

Relationship between action and meaning is central to embodiment.
This is within the perspective of reality and the rest of the world.
Didn’t really understand this last section.