
Of the more than 100 new audio products that pass through my home every year, few stay longer than a couple of months. But after I tested a sample of the AKG K371 headphones borrowed from another reviewer, I immediately ordered a set from Amazon. Not only do they sound great; they’re a superb reference for any reviewer or headphone enthusiast. Probably more than any other headphones available today, they tell us what sound most listeners like. The K371s’ response closely matches the Harman curve, which was developed by running hundreds of headphones past hundreds of listeners in blind tests to find out what kind of sound most listeners prefer.
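As a rough illustration of what “closely matches” means in practice: comparisons like this typically pit a smoothed, normalized measurement against the target curve and look at the deviation in decibels. Below is a minimal sketch of one such score -- a simple RMS deviation, not the actual Harman preference model -- in which the frequency points, target values, and measurements are all hypothetical, chosen only for illustration:

# A minimal sketch of one way to quantify how closely a headphone's measured
# response tracks a target curve such as the Harman target. The frequency
# points and dB values are hypothetical; real comparisons use smoothed,
# normalized measurements across the full audio band.

import math

frequencies_hz = [100, 500, 1000, 3000, 10000]
target_db = [4.0, 0.0, 0.0, 8.0, 2.0]     # hypothetical target values
measured_db = [3.5, 0.2, -0.3, 7.1, 3.0]  # hypothetical headphone measurement

for f, t, m in zip(frequencies_hz, target_db, measured_db):
    print(f"{f} Hz: {m - t:+.1f} dB relative to target")

# Root-mean-square deviation from the target, in dB; lower means a closer match.
rms_dev = math.sqrt(sum((m - t) ** 2 for m, t in zip(measured_db, target_db)) / len(target_db))
print(f"RMS deviation from target: {rms_dev:.2f} dB")

On a comparison like this, a deviation of a fraction of a decibel would be an unusually close match, while a few decibels of error in the wrong places is clearly audible.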

AKG K371

Although I respect the science behind the K371 headphones, I don’t have to trust it, because the results from my listening panelists -- who had no idea of the K371s’ pedigree -- confirm the headphones’ excellence. Based on the many glowing reviews I’ve read of the K371s, I expect they will become an industry standard, alongside the Sony MDR-7506 headphones -- which were launched more than a decade before the Harman headphone research began but happen to come very close to the recommended target response. The science worked.

The tuning of the K371s was developed using short-term, blind comparisons of different headphone response curves -- yet this same process is derided by most high-end audio reviewers and publications as insufficient to reveal a product’s flaws. Their preference is for long-term, casual, sighted listening tests in which few, if any, controls are in place. They’ve recently come up with a sexy marketing term for this: “slow listening,” perhaps inspired by a 2018 paper titled “On Human Perceptual Bandwidth and Slow Listening” by Thomas Lund and Aki Mäkivirta.

Lund and Mäkivirta cite various scientific references to show that the amount of information the human brain can take in over a short time interval is limited (a fact that should come as a surprise to no one). They imply that short-duration listening tests are inadequate, although the only purported failure of such tests that they mention is that they “have been used to promote lossy data reduction” -- and they present no specific examples of where, exactly, short-duration tests have failed to identify flaws in lossy data reduction technologies.

AES

Although I’m an Audio Engineering Society member and an enthusiastic reader of its publications, I’d have missed this paper had it not been cited by Stereophile editor Jim Austin in a recent opinion piece that embraces the notion of “slow listening.” Now that the term has been touted in an influential audio publication, I expect its use to spread in audio reviews and forums -- and more ominously, in marketing efforts used to promote products whose performance falls short of modern standards.

Lund and Mäkivirta see the danger here, warning that “. . . repeatable procedures need to be established so ‘more testing time’ does not become a way of defending just any claim.” But “more testing time” has become the mantra of audiophiles and reviewers whose beliefs continually fail to be confirmed by short-term controlled testing. With “slow listening,” we now have a marketing phrase with an artsy, organic vibe to glamorize uncontrolled, casual listening tests in which no attempt is made to eliminate the effects of listener bias.

When you compare the results of short-duration blind tests with the results of “slow listening,” it’s clear which technique works and which doesn’t. The AKG K371 headphones are only the latest example of what short-duration blind listening tests can achieve. More importantly, short-duration blind tests gave us the speaker-design standards developed at Canada’s National Research Council.

As anyone who followed the development of speakers in the 1990s can tell you, we began that decade with speakers that varied as wildly as headphones do now. I remember in 1991 asking veteran audio reviewer Len Feldman to recommend a good, cheap speaker, and the only one he could think of was the Boston Acoustics A40. As the results of the NRC research started to spread, speaker design improved radically. By the end of the decade, I noticed that the majority of the conventional speaker designs passing through my listening room delivered good, and often great, performance. I can now easily name a dozen speakers that are better than the A40 but sell for less, even before adjusting for inflation.

Boston Acoustics

While we often read claims of audio products being developed through innumerable listening tests over the course of months -- i.e., slow listening -- I can’t think of an example of such a product that delivers demonstrably better performance than competitors developed using short-duration blind tests.

Worse, the slow-listening review process is now producing raves for products with obvious flaws that controlled, blind listening tests and measurements would quickly reveal. Examples include large single-driver loudspeakers with unavoidably narrow dispersion and severe colorations in the upper midrange and treble, and single-ended tube amplifiers with audible treble and bass roll-off, as well as output impedance high enough to make your speakers or headphones deviate substantially from their intended response.
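The output-impedance problem is simple arithmetic: the amplifier’s output impedance and the speaker’s impedance, which varies with frequency, form a voltage divider, so the level delivered to the speaker changes wherever its impedance changes. Here’s a minimal sketch of that calculation, using a hypothetical impedance curve chosen only for illustration:

# Sketch of how amplifier output impedance interacts with a loudspeaker's
# impedance curve to alter frequency response. The impedance values below
# are hypothetical, chosen only to illustrate the effect.

import math

def level_change_db(z_load_ohms: float, z_out_ohms: float) -> float:
    """Voltage-divider loss at the load, relative to an ideal (0-ohm) source."""
    return 20 * math.log10(z_load_ohms / (z_load_ohms + z_out_ohms))

# Hypothetical speaker impedance at a few frequencies (ohms)
impedance_curve = {50: 24.0, 200: 6.0, 1000: 8.0, 10000: 16.0}

for z_out in (0.1, 2.0):  # solid-state-like vs. SET-like output impedance
    levels = [level_change_db(z, z_out) for z in impedance_curve.values()]
    deviation = max(levels) - min(levels)
    print(f"Zout = {z_out} ohms -> response varies by {deviation:.1f} dB")

With this hypothetical load, a near-zero output impedance produces only about 0.1 dB of frequency-dependent level variation, while a 2-ohm output impedance produces nearly 2 dB -- enough to audibly change the speaker’s tonal balance.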

After 29 years of reviewing, and thousands of product reviews, I can certainly think of times when I uncovered some minor performance flaw in a product after long-term exposure to it. But every month I encounter new examples of short-duration comparison testing revealing flaws that I missed in casual “slow listening” sessions.

I hope someday someone will present substantial experimental data that shows multiple examples where long-term listening revealed something missed in short-duration blind tests. But until then, whenever I hear the phrase “slow listening,” I’ll get the impression someone’s trying to sell me something that can’t hold up to a serious, controlled evaluation.

. . . Brent Butterworth

Comments
  • Doug Schneider · 4 years ago
    This article came to mind as I've been reading Malcolm Gladwell's "Blink," which, in a nutshell, is about how we can make extremely fast, very accurate judgments about things. Now, does this extend to our hearing? I haven't finished the book, so I don't know if he addresses that specifically, but what he mostly talks about is how the brain can compartmentalize things and provide extremely fast processing when needed. It seems to me that this would include what we hear, too.

    Doug Schneider
    SoundStage!
  • Ryan · 4 years ago
    But have you ever had headphones that sounded good at the beginning, but later you could not stand the sound of? At first I thought I liked Etymotic earphones because there was detail; then, a long time later, I got tired of the weak bass they have. But it took a long time.
    • Brent Butterworth · 4 years ago
      That's a good point. I should have stressed that I'm talking about professional evaluation, in which it's a reviewer's or manufacturer's job to evaluate the product -- in which case they should have procedures that reveal the performance and character of the device under test fairly quickly. Presumably in the case of a consumer who bought the headphones, they don't have established, time-proven procedures to evaluate the product, so their opinion may change after a certain period of casual use.

      For example, if one were to buy a set of headphones and spend a few weeks listening to classical instrumental recordings, then get the jones for pop vocal recordings, bass-heavy hip-hop, etc., one's opinion of a set of headphones might change well into the ownership period.
      • ryan · 4 years ago
        Can I ask how long you take to review? I mean the length of time it takes you to come to a conclusion. R
        • Brent Butterworth · 4 years ago
          It depends on the headphones and the schedule, but typically I will do a couple of weeks of casual evaluation, then a few hours of focused evaluation. For the panelists, it depends on how long they want to spend and how much they like the headphones, but they tend to spend about 20 minutes per model.
  • Mauro · 4 years ago
    Hi Brent,
    Out of curiosity: when you blind test many headphones and just one is different from the others, how can you avoid being biased?
    This came to mind when I read your open-back article at the Wirecutter. There you seem to characterize the Philips Fidelio X2 as the only one with too much bass. How do you know if that bass is too much?
    Actually, they are in line with the Harman curve . . . hence the question:
    https://www.dropbox.com/s/b09edinwg6tw5vi/Philips%20Fidelio%20X2HR.pdf?dl=0

    https://thewirecutter.com/reviews/best-open-back-headphones-under-500/
    • Brent Butterworth · 4 years ago
      I don't blind test headphones. It's complicated to say the least, and in ways it's basically impossible. The listener has to adjust the headphones for the best fit, and of course has to touch the headphones, and you could attach little handles to them, but then you can affect the sound, etc., etc. Or you can measure the curve and replicate it in a reference headphone, but you're not really doing a consumer product test at that point (although it's fine for scientific research). So there's lots of bias in all my headphone reviews, and everyone else's, too. I wish there was something I could do about that, but I don't think there is.

      The best I can do to avoid bias is to use outside testers who aren't headphone enthusiasts and thus don't have a preconceived idea of what a certain brand/design sounds like (which is what I usually do), or use panelists who've heard so many headphones that they've heard good ones and bad ones from just about every company (which is what Wirecutter usually does).
      • Mauro · 4 years ago
        That makes perfect sense, even if I had imagined other testers putting the headphones on your head from behind. Too much imagination going on here on my part.
  • Mauro · 4 years ago
    Well said! As an engineer, a music addict, and a lover of better-made audio products, I am happy to see an article like yours!

    Fresh air!
