Just how did we get here? How did we get from Mozart sitting at his harpsichord, playing his original compositions for a few elite nobles with the economic wherewithal and the geographic luck to witness a genius at work, to an electronically projected anime character whose lyrics and stage presence are composed by an engineer with all the musicality of the Wizard of Oz? Just don’t look behind the curtain…
Of course, a few centuries of music technology have something to do with it. Hatsune Miku is an anime avatar created in 2007 by Crypton Future Media using Yamaha Corporation’s Vocaloid 2 synthesizing technology (Wikipedia, 2010; Pata, 2010). Her voice is sampled from Japanese voice actress Saki Fujita, and her image was crafted by the manga artist Kei Garo (Tofugu, 2011). Hatsune Miku performs onstage as a projected image created through a stage trick called the “Pepper’s Ghost” illusion, and she is far from the first to use the technology for “live” performances (Meiko, 2011; Marx, 2011). Gorillaz, a virtual band who performed with an animated Madonna at the 2006 Grammy Awards, first brought the idea of an animated live performance to the West using this century-old magic trick (2006; 3:11).
Although the performances were animated and the band considered “virtual,” Gorillaz’s vocals came from “real” singers. Hatsune Miku has taken this to another level, one in which both the musician and the performer are virtual. What does this mean for the future of live musical performance? As YouTube commenter LittleSisVideos put it, “Even real singers are becoming a thing of the past” (LittleSisVideos, 2011).
Is this true? Are real singers becoming a thing of the past? I suppose that depends on how a “real” singer is defined. For me, the ideal is a singer-songwriter capable of playing their own instrument. Michael Stipe, lead singer of R.E.M., noted Vocaloid technology’s ability to preserve a musician’s voice for future use should they lose their own (Tofugu, 2011). Adele recently experienced a vocal hemorrhage and had to cancel many concerts in order to recover (Huffington Post, 2011). Should she lose her voice, could she use Vocaloid to continue her career?
But is Vocaloid the savior or the death of true, live vocalists and performers?
It’s strange to me how dehumanizing this technology is. Yet it is bringing song creation and performance to the masses, a democratizing process, especially for those not born with natural talent. Is the technology truly dehumanizing, or is it sublimation? Perhaps there is a cultural answer: sublimation for Japan, dehumanization for Western culture.
Hatsune Miku is the creation of an ideal, something Japan strives for with its technology. In the US, she can be seen as a dehumanization of songwriting and artistic performance. We frown upon lip-synching or anything that smacks of fakery in a performance. Don’t believe me? Then ask Milli Vanilli, or Ashlee Simpson, whose fall from grace came after she was caught lip-synching to the wrong song on Saturday Night Live (Gee, 2010; de Moraes, 2004).
People have different thresholds for anime and Vocaloid, and I think mine is lower than most. My dislike for Hatsune Miku reminded me of Masahiro Mori’s “uncanny valley” theory, in which humans respond positively to robots with human characteristics up to a certain threshold; once robots appear almost, but not quite, human, people become repulsed, and acceptance returns only when the robots become indistinguishable from real humans (Project Haruhi, 2009; Edwards & Newell, 2011; Sofge, 2010). While some robotics researchers believe the uncanny valley is an oversimplification, I’ll take my music live.