I had already thought of writing on this topic yesterday, but things happened, I got home late, and I lost any mood to write whatsoever. As I mentioned in one of the very first blog entries, I’m probably going to be angsty and cross every once in a while, for a long time yet.
But it was good that I ended up writing today, because in the morning I heard electronic music coming from an adjoining room, which further reinforces the experiential aspect of this entry. Electronic music really is jolly different from other kinds; it tends to become jarring after a while.
Rock music can be jarring too, I admit. An electric guitar tuned a certain way produces chords that are painful and rough, which goes well with rough voices and rough melodies. And of course you can now apply an electronic quality to the voice itself, in the form of autotune, such as the autotuned chirpy female voice in Hyadain no Kakakata Kataomoi, as well as Vocaloids. It applies to some extent to groups like Supercell, too: Supercell’s songs are carried by an actual human singing voice, but with effects applied to it, so it sounds more natural than the artificially created voices of Hyadaruko and Hatsune Miku.
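For the curious, the core trick behind that hard, robotic autotune sound is simple to state: detect the singer’s pitch and snap it to the nearest note of the equal-tempered scale. Here is a minimal sketch of just that quantisation step (the function name and the 440 Hz reference pitch are my own choices, and real autotune does far more, like smoothing and formant preservation):

```python
import math

A4 = 440.0  # reference pitch in Hz (concert A)

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a detected pitch to the nearest equal-tempered semitone.

    This is the quantisation idea behind hard autotune: every sung
    pitch is forced exactly onto the 12-tone scale, which is what
    produces that characteristic artificial, chirpy quality.
    """
    # Distance from A4 in semitones, rounded to the nearest whole note
    semitones = round(12 * math.log2(freq_hz / A4))
    # Convert back to a frequency exactly on the scale
    return A4 * 2 ** (semitones / 12)

# A slightly flat A4 (435 Hz) gets pulled back up to exactly 440 Hz.
print(snap_to_semitone(435.0))  # → 440.0
```

The harder the correction (no smoothing between notes), the more obviously electronic the voice sounds, which is exactly the effect those songs lean into.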
Yet people love them. People like electronica and go gaga over Miku’s voice. Some have tried to copy her singing, but no matter what, I feel there’s always a difference between her shrill pitch and any normal, soothing vocal. The only person I can compare to Miku is KyaryPamyuPamyu, the singer of PONPONPON. Admittedly I can’t tell whether her voice has any autotune in it. It sounds incredibly like it does, and yet she vehemently insists she’s talented and holds live concert tours worldwide. You can’t do that with autotune, can you?
Nowadays, with the concept of desktop music production, I’m quite curious whether anyone can be a musician. There is a course taught at the conservatory of music on desktop music production and the use of professional multi-track digital audio software to record, edit, process and master music. The prerequisite, though, is that students actually know music. So which is more important: knowing how to use the software, or knowing how to build music out of chords? I suppose in the modern era people are expected to have multiple skills.
When it comes to digital audio software, I automatically think of things like Audacity, Adobe Audition and GarageBand. But I believe that when people buy special Vocaloid kits, they also get an audio editor for making the Vocaloid in the kit sing whatever they want, so in a way it’s a customised music maker too. But really, how much skill does it take to use these systems well enough to create pleasant music? Does this mean that everybody can make music now, even without learning to play an instrument? I have tried such software myself, but managed only limited results. So it seems the quality of the music that comes out of these tools depends on your computer proficiency more than anything else, no?