SIM Sketches, Generative Art, UX/UI Research
Role: Artist, Interactive Designer, Researcher, Developer
As part of my ongoing research into how sound functions in immersive environments, micro-interactions, and audio cognition, I created a gallery of experimental works called “SIM Sketches” (Sound:Interaction:Motion). These sketches investigate the intricacies of rich internet applications, interaction, and user experience design. In visualization work, sound often plays a secondary role in the design experience. Web-based audio, however, is advancing, with the Oculus Audio SDK and spatial audio, the rise of voice-based interfaces, low-latency audio SDKs, and SSML, to name a few examples. UX sound studios like CMoore are also helping to define sound UX style guides. To learn and be a part of this trend, I made this series of sketches to investigate problems and discover solutions.
To make these sketches, I code simple animated movements and use them as event targets for attaching audio with JavaScript, the HTML5 canvas, and the Web Audio API. The process has taught me about audio clipping, zero-point crossing, and best practices with AudioParam. Currently, I am working to resolve problems with motion and audio crackle. Moving forward, I plan to explore sending audio through WebGL, GLSL shaders, and three.js. Here is a simple example.
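Below is a minimal sketch of the approach, assuming a page with a `<canvas id="sketch">` element; the element id, frequency, and timing values are placeholders rather than code from the gallery pieces themselves. A dot bounces across the canvas, and each bounce becomes the event that triggers a short tone, with the gain ramped in and out so the tone starts and ends at zero amplitude.

```js
// Assumed markup: <canvas id="sketch" width="400" height="100"></canvas>
const canvas = document.getElementById('sketch');
const ctx = canvas.getContext('2d');
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Browsers keep the audio context suspended until a user gesture.
document.addEventListener('click', () => audioCtx.resume(), { once: true });

let x = 0;
let dx = 3;

function blip() {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = 440; // placeholder pitch

  // Ramp the gain in and out with AudioParam scheduling so the tone
  // starts and ends at zero, avoiding the click you get from cutting
  // a waveform mid-cycle.
  const now = audioCtx.currentTime;
  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(0.4, now + 0.01);
  gain.gain.linearRampToValueAtTime(0, now + 0.15);

  osc.connect(gain).connect(audioCtx.destination);
  osc.start(now);
  osc.stop(now + 0.2);
}

function frame() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  x += dx;
  if (x <= 0 || x >= canvas.width) {
    dx = -dx; // the bounce is the animation event...
    blip();   // ...and the sound is attached to it
  }
  ctx.beginPath();
  ctx.arc(x, canvas.height / 2, 8, 0, Math.PI * 2);
  ctx.fill();
  requestAnimationFrame(frame);
}

frame();
```

Scheduling the envelope on the GainNode’s AudioParam, rather than starting and stopping the oscillator abruptly, is what keeps the tone free of clicks at the zero-point crossing.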
This can quickly get more complicated.
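One way it grows, sketched here under the same caveats (illustrative values, not the gallery code), is mapping continuous motion to an AudioParam. Assigning `filter.frequency.value` directly on every frame produces stepped jumps that can be heard as crackle; scheduling the change with `setTargetAtTime` smooths it.

```js
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const osc = audioCtx.createOscillator();
const filter = audioCtx.createBiquadFilter();

osc.type = 'sawtooth';
filter.type = 'lowpass';
osc.connect(filter).connect(audioCtx.destination);
osc.start();

// As above, the context needs a user gesture before it will produce sound.
document.addEventListener('click', () => audioCtx.resume(), { once: true });

window.addEventListener('mousemove', (e) => {
  // Map pointer position to a cutoff frequency (range is a placeholder).
  const cutoff = 100 + (e.clientX / window.innerWidth) * 4000;

  // Avoid: filter.frequency.value = cutoff;  // stepped jumps, audible crackle
  // Instead, let the browser interpolate toward the target value.
  filter.frequency.setTargetAtTime(cutoff, audioCtx.currentTime, 0.05);
});
```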
Our life experiences are rich with natural sounds, ambiance, distortion, harmonics, audio cues, and more. Visual design is fundamental in micro-interactions and user experience, but without sound, we are only given a fraction of the story. This research has potential applications in the areas of interaction design as well as embodied music cognition, generative music, and sound-based visualizations. This project helps me gain insight into how we can creatively use sound to contribute to an ever-expanding conversation on the user’s experience.