

EMPAC’s potential discussed

International panel shares work in art, design, science

Posted 10-27-2008 at 10:06PM

Arleen Thukral
Senior Reporter

Last Friday, the Curtis R. Priem ’82 Experimental Media and Performing Arts Center hosted a diverse group of international researchers from leading institutions at a research symposium titled “Transcending Boundaries in Sciences, Arts, and Media Research.” Topics of discussion ranged from 3-D animation films to the technology of music to augmented realities. The idea of the symposium was to expose people to the interdisciplinary research being conducted around the world and to give a taste of what is to come at EMPAC. President Shirley Ann Jackson was present at the symposium and spoke of how EMPAC has gone from being “the mystery on the hill” to a “platform on the hill.”

The first symposium was moderated by James Hendler, Tetherless World Constellation senior professor. He introduced the first speaker, Rebecca Allen, director of Nokia Research Center Hollywood and founding chair of the Department of Design and Media Arts at the University of California, Los Angeles. For over thirty years, Allen has investigated techno-culture, an approach that humanizes technology while maintaining a firm presence in society. During her lecture, Allen shared some of her work on 3-D computer animation films, music videos, large-scale performance works, video games, artificial life systems, and virtual reality. The Catherine Wheel (1982), one of Allen’s projects, featured the first computer-generated animation of a character (in this case, Catherine, the dancing choreographer). Allen’s motive was to better understand human motion and bring a “human presence into technology.”

Another example of this phenomenon, according to Allen, was the Bauhaus Theatre, in which abstract costumes played with the human role with respect to space. Allen spoke of breakdancing as “an exciting new dance form” and described how she merged her skills into music videos, including a dancing-tree video that incorporates fractal, organic movements. All her experience in media and technology has led her to the Nokia Research Center, where she led the MyoPhone project to create an eyeglass display (a small projector) that uses an electromyography sensor to wirelessly display caller ID information and explores the body as an interface for communication. This makes the media experience revolutionary, as mixed realities are created through the merging of the virtual and the real.

Allen explained that the popularity of devices such as the iPod, iPhone, and Wii has little to do with the digital; it is the analog gestures incorporated into the software that create an “opportunity for media augmenting our mind and body” and provide a creative learning context.

The next speaker was Steven K. Feiner, director of the Computer Graphics and User Interfaces Laboratory and professor of computer science at Columbia University. Feiner’s expertise lies in embedding user interfaces in the real world in what he calls “augmented reality.” In Columbia’s Computer Graphics and User Interfaces Laboratory, he has experimented with the development of wrist-worn projection displays equipped with touch sensors, orientation sensors, and position trackers.

This concept has helped Feiner’s team aid botanists in identifying new and existing species, using multi-touch surface desktop interaction. The potential for this technology to be applied to video games is huge.

The second symposium on Friday was moderated by Wolf von Maltzahn, acting vice president for research and professor of biomedical engineering. He introduced the first speaker for Symposium Two, Thanassis Rikakis, director of the Arts, Media, and Engineering Program at Arizona State University. Rikakis described the abstract visual movements used in his Biofeedback project, intended for rehabilitation. The primary goal of this project is the development of a multimedia-based system that integrates task-dependent physical therapy and cognitive stimuli within an interactive, multi-modal environment. The environment provides a purposeful, engaging, visual, and auditory scene in which patients can practice functional therapeutic reaching and grasping tasks while receiving different types of simultaneous feedback, indicating measures of both performance and results. The development of a portable immersive multimodal environment will help reduce rehabilitation time, promote more extensive recovery, alleviate rehabilitation monotony, and could also be used in the home. Rikakis was also involved in the Motion project, which brought together choreographers, media artists, composers, lighting designers, and AME artists and engineers for the creation of new motion analysis systems and interactive technologies.

The next speaker was Bangalore Manjunath, director of the National Science Foundation’s Integrative Graduate Education and Research Traineeship program on Interactive Digital Multimedia and director of the Center for Bio-Image Informatics at the University of California, Santa Barbara. Research in his Vision Research Lab covers a broad spectrum of multimedia signal processing and analysis. In recent years, researchers in the lab have pioneered the development of feature extraction with application to image registration, segmentation, steganography, and information retrieval from large multimedia databases. The group’s goal is to develop, test, and deploy a unique, fully operational, distributed digital library of bio-molecular image data, accessible to researchers around the world.

Symposium Three, held the following day, was moderated by Jonas Braasch, assistant professor of Architecture. He introduced Stephen McAdams, director of the Centre for Interdisciplinary Research in Music Media and Technology at McGill University in Montreal. McAdams is interested in auditory perception and cognition in everyday listening. The primary emphasis of the research is on psychophysical techniques capable of quantifying relations between the properties of vibrating objects, acoustic signals or complex messages, and their perceptual results. The hope is that the research will answer questions such as: “How is sound created by instruments or computers? How can we analyze music with computers in real time? How can a multimodal experience be created in virtual musical reality? How can computers help composers compose? How can new instruments be created? How can two players play in real-time in two different locations?”

Some projects that have created such models are the Haydn the Orator Project, which virtually recreates historical musical environments; the Digital Orchestra Project, which extends orchestral composition through new technologies; and the Gestural Control of Spatialization Project, which analyzes the control of sound spatialization by live performers.

The next speaker was Chris Chafe, director of the Center for Computer Research in Music and Acoustics and Duca Family Professor of Music at Stanford University. Chafe’s work involves tapping into the Internet as an acoustical and musical medium, exploring the effects of latency on ensemble accuracy. He discovered that there is actually a “sweet spot” of delay, as sound travels over the network, that allows ensembles to keep tempo. In one experiment, the drums were played in Montreal while the bass and saxophone were played at Stanford. Unlike sound in air, whose speed is uniform and the same in both directions, network delay is neither uniform nor necessarily symmetric, so Internet acoustics follow slightly different rules; the musical instruments themselves are objects created in software.
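One common way to reason about network latency in ensemble playing is to treat it like the delay two musicians would experience standing some distance apart on stage. The sketch below illustrates that framing; it is not Chafe’s actual software, and the “sweet spot” bounds are purely hypothetical placeholders, not his measured values.

```python
# Illustrative sketch: treat one-way network latency as if it were the
# acoustic delay between two musicians standing apart in a room.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at room temperature


def equivalent_acoustic_distance(one_way_latency_s: float) -> float:
    """Distance at which sound in air would take this long to arrive."""
    return one_way_latency_s * SPEED_OF_SOUND_M_PER_S


def within_sweet_spot(one_way_latency_s: float,
                      low_s: float = 0.010, high_s: float = 0.025) -> bool:
    """Hypothetical 'sweet spot' window: enough delay to keep players from
    rushing, but not so much that the ensemble loses tempo.
    The bounds here are assumptions for illustration only."""
    return low_s <= one_way_latency_s <= high_s


# 20 ms of one-way latency is acoustically like standing ~6.86 m apart:
print(equivalent_acoustic_distance(0.020))
```

On this view, a transcontinental link whose latency exceeds the window behaves like musicians placed impractically far apart, which is why latency, not bandwidth, is the limiting factor for networked ensembles.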

Discussion between these two speakers disclosed an interesting phenomenon occurring in universities today. According to McAdams, the jargon of cognitive science is leaking into other disciplines, even music theory. In the realm of interdisciplinary studies, departmental divides are dissolving as concepts leak between fields. Moreover, people are beginning to see the value of humanistic proposals as equal to that of engineering proposals. McAdams sees biocomputing and computer music as the next frontiers in interdisciplinary science, as fascination increases with affective computing, that is, reading emotional expression on faces through video and biosensors.

The final symposium was moderated by Robert Linhardt, the Ann and John Broadbent Jr. ’59 Senior Constellation Chair in Biocatalysis and Metabolic Engineering. He introduced Sally Jane Norman, director of Culture Lab at Newcastle University, UK. Culture Lab has worked on interesting projects such as the Memory Kitchen, which explores the use of pervasive computing for assisted living. The kitchen is aware of how food and utensils are being used: tags integrated into food items and appliances, together with sensors integrated into the bench and cupboards, allow the locations of objects to be monitored, while a pressure-sensitive floor allows people in the kitchen to be tracked. Projectors integrated into the workbenches unobtrusively display contextual information, such as appropriate recipes and nutritional values for the food on the kitchen work surface. Other examples of personalized technology design include Bluetooth jewelry, infrared-emitting boxes, and ambient intelligent systems.

The next speaker was Atau Tanaka, chair of Digital Media at Newcastle University and founder of Sensorband. Tanaka was directly involved in BioMuse, which enabled musicians to use any bioelectric signal to control MIDI code and thus give voice to the body electric. Tanaka was the first BioMuse composer and performer; his inaugural BioMuse performance took place in 1989 in Stanford, Calif., using a pre-production unit. Tanaka’s compositions use the complex muscle signals underlying hand and arm gestures to create an entirely new musical repertoire.
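The core idea behind an instrument like BioMuse is simple to state: a continuous bioelectric signal, such as muscle activity measured by electromyography, is scaled into the discrete 0–127 range that MIDI controller messages carry. The sketch below illustrates that mapping; the function name and scaling are assumptions for illustration, not the actual BioMuse implementation.

```python
# Illustrative sketch of the BioMuse idea: map a rectified EMG amplitude
# onto a 7-bit MIDI control-change value (0-127).


def emg_to_midi_cc(emg_sample: float, full_scale: float = 1.0) -> int:
    """Map |EMG| in [0, full_scale] to a MIDI controller value in [0, 127].

    Samples beyond full_scale are clipped, since MIDI controller data
    bytes are limited to 7 bits.
    """
    level = min(abs(emg_sample), full_scale) / full_scale
    return round(level * 127)


# A stronger muscle contraction yields a larger controller value:
print(emg_to_midi_cc(0.25))   # quarter-strength contraction
print(emg_to_midi_cc(1.00))   # full-strength contraction -> 127
```

In practice such a controller value would be sent as a MIDI control-change message to drive any synthesis parameter, which is what lets arm and hand gestures shape the resulting music.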

Lastly, the keynote address was delivered by Roger Malina, chairman emeritus of the International Society for the Arts, Sciences, and Technology and president of the Observatoire Leonardo des Arts et Technosciences. Malina spoke of a 40-year framework of mediating between the “hard humanities” and the “intimate sciences.” He said that now is the time for artists to work in labs and scientists to work in studios. Interesting projects have arisen from this concept, including a recording of the sound of trees growing by artist David Dunn.




Copyright 2000-2006 The Polytechnic