I guess you could say this is Happy Metal Guy’s version of Angry Metal Guy’s well-known article on objectivity, mixed with Steel Druhm’s rant about the trials of a music reviewer.

In research methodology, there is a phenomenon called “carry-over effects”: the effects of a previous experimental treatment linger in the research subjects and carry over into the next experiment they participate in, which is likely to confound that experiment’s results.

This problem typically arises from the desire to stay within a certain budget while conducting experimental research. You see, some researchers try to save as much money as they can, and one way of doing so is to use the same sample of research subjects for multiple experimental treatments. This is called the “repeated-measures” design. There are two well-known examples of such a design:

I. Experimental research involving a single group of research subjects, who are administered different food products or health supplements one after another within a single sitting. The researchers involved in this type of experiment typically try to find out which food product or health supplement yields the greatest benefit to cognitive faculties such as alertness, memory, and speed of learning. Examples of such products include the energy drink Red Bull, various brands of canned coffee, and extracts of ginkgo biloba leaves (a traditional Chinese medicinal herb).

II. Experimental research involving a single group of research subjects seated in an auditorium, who are played segments of songs from different music genres, one after another within a single sitting. The researchers involved in this type of experiment are typically hired by record labels or music industry figures to find out which genre of music is the “best” or most popular with the general public, and they then make their decision based on the scores that the research subjects assign to the song segments from the various genres.

As you can probably already tell, II is most relevant to my discussion here, but I will elaborate later on how it relates to music reviewers. For now, let me explain why researchers utilize the “repeated-measures” design.

(1) Experimental research involving different groups of subjects is very common and always faces the problem of variability: slight differences between groups in whatever quantity is being measured. Such an experiment typically involves two groups of subjects: one group is given the experimental treatment while the other is given a placebo. Let’s say the people in Group A got the real health supplement/food product and display results that seem to indicate it has a positive effect on humans. For all we know, they could just happen to be having a good day and appear more alert or healthier than usual thanks to a perfect hormone balance. Meanwhile, the people in Group B, who received the placebo, could just happen to be experiencing a hormone imbalance that day and hence appear less alert or healthy. The researchers therefore cannot confidently conclude that the supplement/food product administered to Group A has real benefits for humans.

The “repeated-measures” design, however, solves this problem. By using a single group of subjects, it eliminates the need to account for variability between different groups, letting researchers spend their time productively and focus more precisely on the treatment effects.

(2) The “repeated-measures” design is also more economical: the same subjects undergo multiple treatments, which means fewer subjects are needed and money is saved.

As you can see, this particular area of research methodology is akin to the practice of reviewing music. Every active music reviewer is basically being subjected to a continual experiment based on the “repeated-measures” design. Now, allow me to explain it in the context of music reviewing.

The Black Dahlia Murder is generally considered a “melodic death metal” band, but I have always found it hard to hear the “melodic” in their brand of “melodic death metal”. This could be because I heard Finnish “melodic death metal” long before I heard the American version of it; Children of Bodom and Kalmah were among the first “melodic death metal” bands I ever heard. We all know how melodic the Finnish brand can be, thanks to the prominence of the synthesizer and expressive guitar melodies, and the carry-over effects from listening to that version must have made American “melodic death metal”, which focuses mainly on the electric guitar, sound less melodic by comparison.

If we think of each instance of reviewing a particular record as a unique experimental treatment, then whenever music reviewers review an album, they are never truly assessing it solely on its ‘own merit’. Unless you are someone who has only ever listened to and reviewed one record, you are sure to have a history with previous records, and that history becomes an extraneous variable that confounds the reviewer’s latest assessment of the album in question. For example, I could have been listening to Jay-Z before listening to the new record by Amon Amarth in preparation for reviewing the latter. Even though the two artists belong to entirely different genres, the effects of the previous treatment, listening to Jay-Z, would not have worn off by the time the treatment of listening to Amon Amarth’s new record is administered. If I happen to really enjoy rap music and nothing else at that point in time, my assessment of Amon Amarth’s new album is going to be negative. If I happen to really hate rap music and like just about any other genre, my assessment is going to be positive. Of course, this assumes I would be listening to that new Amon Amarth record immediately after listening to Jay-Z.

So you see, a carry-over effect occurs when a treatment is administered before the effects of the previous treatment have worn off. Researchers usually avoid this problem by allowing sufficient time, aka a break, between treatments. Theoretically, I could avoid it by taking a breather after listening to Jay-Z before I start listening to the new Amon Amarth record.
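For the methodologically inclined, the mechanics can be sketched as a toy simulation; the records, scores, and the size of the lingering bias below are all made-up assumptions, purely for illustration, not data from any real study:

```python
# Toy simulation of a carry-over effect in a repeated-measures listening test.
# All numbers are hypothetical, chosen purely for illustration.

def rate(true_quality, lingering_bias):
    """A listener's score: the record's 'own merit' plus whatever
    impression still lingers from the previous treatment."""
    return true_quality + lingering_bias

def average_score_for_b(n_subjects, washout):
    """Every subject hears record A (a strong 8/10) and then record B
    (a decent 6/10). Without a washout break, A's strong impression
    deflates B's score by contrast; with one, the bias has worn off."""
    lingering = 0.0 if washout else -1.5  # hypothetical contrast effect
    scores = [rate(6.0, lingering) for _ in range(n_subjects)]
    return sum(scores) / len(scores)

back_to_back = average_score_for_b(100, washout=False)  # biased: 4.5
with_break = average_score_for_b(100, washout=True)     # unbiased: 6.0
```

The point is simply that record B never gets judged on its quality of 6 when it is heard back-to-back with record A; only the washout break recovers its true score.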

But in practice, this is very unlikely to happen, especially for music reviewers who review albums either for a living or as a passionate hobby. Based on personal experience, I’d say the amount of “sufficient time” needed to avoid carry-over effects in practice is at least one week. That, however, is too much time wasted for the aforementioned reviewers. Even music reviewers who do it for fun rather than as a career are already reviewing multiple albums per week, so imagine how many albums the paid reviewers typically get through in a normal working day or week. They certainly cannot afford to take a breather for a day or a week if reviewing albums is their job.

Additionally, if you interpret the argument presented earlier strictly, you could go on to say that no amount of break-time will ever eliminate carry-over effects in music reviewing. As long as you possess memories of your listening experience with other music whenever you review a particular album, those memories will interfere with and influence your judgment of the record you endeavor to review. The effects of the previous treatments, aka listening to other records in the past, are never going to go away! Also, the latest treatment, aka whichever album you are reviewing, is instantly inducted into the Hall of Previous Treatments when you move on to the next album, and this vicious cycle of un-objectivity continues until you smash your computer keyboard into eleven pieces, adopt the philosophy of an ethical subjectivist, and point out to anyone and everyone that everything is subjective and there is no such thing as objectivity! Hah!

I express such verbal frustration because herein lies the logical consequence of carry-over effects influencing the process of music reviewing: the general population’s trust in the objectivity of paid (or unpaid-but-busy) music reviewers is misplaced. Even if I were listening to an old Kalmah album before listening to and reviewing the new Kalmah album, I would still not be judging the latter on its ‘own merit’, because I would be subconsciously comparing its qualities to the former’s. (Yeah, I’m looking at all you reviewers out there who always compare a band’s latest material to their old material.) [That’s just how it’s done, son. — Steel Druhm]

Welp, there you have it. This is basically one more of the many long-winded explanations of why objectivity doesn’t exist in the practice of reviewing music.