
You can’t get far into an audio forum or the comments sections of audio websites without encountering the statement “Some products that measure well sound bad, and some products that measure poorly sound good.” Depending on who said it, it’s at best uninformed and at worst a lie. And it’s a lie that sometimes sticks listeners with underperforming audio gear.

The inaccuracy of the first half of that sentence is demonstrated by scientific papers and by the absence of documented examples. The second half could be true if the words “. . . to me” were added, but to the best of my recollection, I’ve always seen it presented as a universal statement, in which case it’s false. This platitude reflects not wisdom but a rejection of science by people who, as far as I can tell, haven’t bothered to look into the science and have no measurement experience.

The Biggest Lie

One of the most glaring examples of this sentiment appeared just this month, in a review of the Tannoy Revolution XT 6 speaker by Herb Reichert in the July 2020 issue of Stereophile. The first sentence of the review reads, “I’ve been wrestling with my elders about new ways to measure loudspeakers, lobbying for methods that might collaborate [sic] more directly with a listener’s experience.” In another article, the same writer states his opinion more directly: “As a tool for evaluation, or as a predictor of user satisfaction, today’s measuring procedures are almost useless.” As we’ll see, this review clearly shows why measurements are so essential in the evaluation of audio products.

Both of the author’s statements reflect ignorance of the subject. In the case of speakers, measurement methods that have been shown to predict user satisfaction with 86% correlation were established more than 30 years ago. They were developed largely through extensive research led by Dr. Floyd Toole, conducted at Canada’s National Research Council (NRC) in Ottawa, and continued at Harman International. Countless speaker companies now use these methods as a design guideline. That’s because they know that speakers that measure well according to these principles will sound good to most listeners.

Some might point out that the model fails 14% of the time, but it’s unlikely that the 14% of speakers that measure well yet don’t win universal love from the listening panel sound “bad,” unless they have, say, high distortion -- which a different set of measurements could easily detect. Regardless, it’s absurd to proclaim an 86% success rate “almost useless.”

More recently, scientific research has produced headphone and earphone measurements that predict user satisfaction about as accurately. For example, in AES paper 9878, “A Statistical Model that Predicts Listeners’ Preference Ratings of In-Ear Headphones: Part 2 -- Development and Validation of the Model,” a Harman International research team of Dr. Sean Olive, Todd Welti, and Omid Khonsaripour reports a 91% correlation between measurements and listener preferences in an evaluation of 30 earphones using 71 listeners.

AES 9878
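
To put those figures in context, a correlation coefficient simply compares the preference score a measurement-based model predicts with the rating a blind listening panel actually gives. Here’s a minimal sketch in Python of that comparison, using made-up numbers for illustration -- it is not the Harman team’s model or data, only the statistic their papers report.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical preference ratings (0-10 scale) from a blind listening panel for
# eight products, alongside the scores a measurement-based model predicted for them.
panel_ratings   = np.array([7.8, 6.5, 4.2, 8.1, 5.0, 3.6, 6.9, 7.2])
model_predicted = np.array([7.5, 6.9, 4.8, 7.9, 4.6, 4.1, 6.2, 7.4])

r, p_value = pearsonr(model_predicted, panel_ratings)
print(f"correlation r = {r:.2f} (p = {p_value:.4f})")
# An r around 0.86, as reported for the loudspeaker model, means the measurement-
# derived score tracks panel preference closely, though not perfectly.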

I’ll agree that measurements don’t predict which amps, DACs, and other electronics people will like. But that’s not because of flaws in the measurements -- it’s because listeners rarely agree about which audio electronics they like. Blind tests seldom show clear differences between, or preferences for, certain models, brands, or types of amplifiers, for instance. Reviews of these products do not indicate preference trends among reviewers; they tend to rave about all sorts of amps and DACs. If a statistically significant number of participants in controlled listening tests don’t express affection for some audio electronics and disdain for others, there’s no way measurements or subjective reviews can predict listener preference.
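
For readers who wonder what “statistically significant” means here, below is a minimal sketch of how one common kind of blind test, an ABX discrimination trial, is usually judged; the trial counts are hypothetical. If no difference is audible, every trial is a coin flip, so we ask how unlikely the observed score would be by chance.

from scipy.stats import binomtest

# Hypothetical blind ABX result: the listener identified X correctly 14 times out of 20.
correct_trials = 14
total_trials = 20

# Under the null hypothesis of "no audible difference," each trial is a 50/50 guess.
result = binomtest(correct_trials, total_trials, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")
# 14/20 gives p of roughly 0.058 -- just short of the usual 0.05 threshold, so even a
# score that feels convincing may not qualify as statistically significant.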

What about the idea that “some products that measure poorly sound good”? A solid argument against this notion came from Stereophile technical editor (and former editor-in-chief) John Atkinson, who, in a summary of his 1997 AES presentation, stated, “. . . once the response flatness deviates above a certain level -- a frequency-weighted standard deviation between 170Hz and 17kHz of approximately 3.5dB, for example -- it’s unlikely the speaker will either sound good or be recommended.” And he’s talking here about the speakers recommended by Stereophile writers. Research shows that a panel of multiple listeners in blind tests would likely be even less forgiving of speakers that measure poorly.

AES Atkinson
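
Atkinson’s exact weighting isn’t spelled out here, but the sketch below shows the general shape of such a flatness statistic: resample a measured response onto a logarithmic frequency grid between 170Hz and 17kHz so each octave counts equally, then take the standard deviation. The response curve in the example is invented for illustration.

import numpy as np

def weighted_deviation_db(freqs_hz, response_db, f_lo=170.0, f_hi=17000.0, points=200):
    """Standard deviation (in dB) of a response curve between f_lo and f_hi,
    resampled onto a logarithmic frequency grid so each octave counts equally."""
    grid = np.geomspace(f_lo, f_hi, points)
    resampled = np.interp(np.log10(grid), np.log10(freqs_hz), response_db)
    return float(np.std(resampled))

# Invented measured response: flat except for a broad +4dB plateau in the treble,
# loosely inspired by the kind of excess presence-region energy Atkinson describes.
freqs = np.geomspace(20, 20000, 500)
response = np.where((freqs > 2500) & (freqs < 12000), 4.0, 0.0)

print(f"weighted deviation = {weighted_deviation_db(freqs, response):.2f}dB")
# Compare the result against Atkinson's rough 3.5dB threshold for speakers that are
# unlikely to sound good or be recommended.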

Of course, even a clearly flawed audio product might sound good to somebody. To find an example, look no further than the very same Tannoy review. Atkinson’s measurements show that, as he puts it, “. . . the tweeter appears to be balanced between 3dB and 5dB too high in level,” which creates an “excess of energy in the presence region, which I could hear with the MLSSA pseudorandom noise signal when I was performing the measurements.”

To get a rough idea of what this sounds like, turn the treble knob on an audio system up by 4dB. It’s far from subtle, and it’s not pleasant. I can’t look at that measurement without thinking the factory used the wrong tweeter resistor. In a blind test with multiple listeners, such as the evaluations conducted by the NRC or Harman, this speaker would almost certainly score poorly.
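
To see why one resistor value can matter that much, here’s a rough sketch of the level set by the common series padding resistor in a tweeter crossover, under the simplifying assumption of a purely resistive tweeter load. The resistor values are hypothetical, not anything known about the Tannoy’s actual crossover.

import math

def series_pad_attenuation_db(r_series_ohms, z_tweeter_ohms=6.0):
    # Attenuation of a simple series padding resistor feeding a tweeter,
    # assuming the tweeter behaves as a constant 6-ohm resistance (a simplification).
    return 20 * math.log10(z_tweeter_ohms / (r_series_ohms + z_tweeter_ohms))

intended = series_pad_attenuation_db(4.7)  # hypothetical: the pad the design called for
fitted = series_pad_attenuation_db(1.0)    # hypothetical: a too-small resistor on the line
print(f"intended pad: {intended:.1f}dB, fitted pad: {fitted:.1f}dB, "
      f"tweeter runs {fitted - intended:+.1f}dB hot")
# With these made-up values the tweeter ends up about 3.7dB too loud -- the same
# order of error as the 3dB-5dB excess Atkinson measured.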

Yet I find no mention of this flaw in the subjective review. In fact, the reviewer describes the speaker’s sound as “slightly soft,” and concludes with the words “Highly recommended.” Based on this review, at least, it seems likely that if a measurement technique could be found that reliably predicts which speakers this reviewer likes, most listeners wouldn’t like those same speakers.

Fortunately, those who read the measurements got the real story. Those who ignored the measurements because they’ve been told they’re “almost useless” may end up buying a speaker with an obvious tonal-balance error.

Don’t get me wrong -- I don’t mind if someone raves about an audio product with a huge, demonstrable flaw, just as I’d hope no one minds if I occasionally enjoy listening to Kiss’s Alive! album. I’ve read many such reviews, and rarely felt inspired to comment on them. But dismissing decades of work by some of the world’s most talented audio scientists just because it doesn’t fit your narrative is as frivolous as claiming that Gene Simmons is the greatest bass player of all time.

I would hope that audio writers would be curious about their avocation and want to learn everything they can about it, but a huge percentage of them have shut themselves off from any new information that might cast some of their beliefs in doubt. In their rejection of science, they’ve mired their readers and their industry in nonsense -- and in many cases, they’ve stuck their readers in the infinite loop of buying underperforming products and then selling those to buy other flawed products, instead of simply learning key facts about audio so they can buy good gear the first time.

Frequency response curves

I’m encouraged, though, because the headphone community isn’t burdened with an anti-science attitude. On the contrary, headphone enthusiasts are putting together measurement rigs, reading the research, and working to understand how their headphones and amps work and interact. Yet they understand that science provides only guidelines, and that they ultimately have to listen for themselves and trust their ears to make the final judgment. Most important, they are getting better reproduction of, and more enjoyment from, their music. I think and hope that this is the future of audio.

. . . Brent Butterworth

  • Dustin · 10 months ago
    @Brent Butterworth You make good points. Here is one of the flaws that Floyd Toole has pointed out about his measurements that I was thinking about when I wrote that:

    https://www.avsforum.com/forum/89-speakers/710918-revel-owners-thread-461.html

    See post # 13813
  • Brent Butterworth · 10 months ago
    @Dustin Agreed, and that's another great point. Why should I care about what emotional reaction some audio writer had to a speaker -- a reaction that's determined at least as much by the writer's mood, the setup of the speakers, the ancillary gear, the chosen music, and the writer's biases for or against the brand or technology as it is by the performance of the speakers? On what basis do they assume the reader would have a similar emotional reaction? I'll grant that some of these guys are amusing writers, but this is not a serious or useful means of testing.
  • Mark · 10 months ago
    @Brent Butterworth there are basically two failure modes in loudspeakers: thermal and mechanical. With regard to "clipping," a sure way to fail a loudspeaker is to drive it into mechanical clipping by applying excessive power.
  • Dustin · 10 months ago
    @Brent Butterworth I totally agree with all your comments.

    The point I kept trying to make, that went completely unacknowledged, was that if there is a discrepancy between a speaker’s measurement and a subjective sighted review, then it would surely be worthwhile to wonder if sighted bias influenced the review, especially since science has demonstrated this can happen. Instead, they jump to the conclusion that the sighted review is the be-all and end-all explanation of how the speaker sounds, and that the measurements must not be adequately taking into account everything that we hear. To me, the simpler explanation of what is going on here is so painfully obvious. I don’t understand how they can’t see that. This guy is apparently a trained physicist too. So to see the logic he uses disappoints me even further.

    His comments on the added value that Stereophile can bring to this discussion (which he says is somehow missing from Toole’s work) in evaluating the “emotional communicativeness” of the speakers make no sense to me at all. Let’s see one of his reviewers take a blind test between a speaker they reviewed and a reference speaker that measures well with the Spinorama method (e.g. Revel F208) and let’s see if their impressions remain the same as their sighted review did. In the case of these two particular reviews (the Totem and the Tannoy), I highly doubt they would. And they can take all the time they need. They just can’t know which speaker they are listening to. But they would never do this because they know it would throw their entire model into question.
  • Brent Butterworth · 10 months ago
    @Dustin I read that thread on their site. He's really digging in on their "telling the reader what components SOUND like" talking point -- as if a single reviewer in uncontrolled conditions, often operating in considerable ignorance of how audio and psychoacoustics work, could give us a definitive assessment of a component's sound.

    I was more appalled, though, by his assertion that he, Atkinson and Kal are all familiar with Toole's work -- which implies that most of his writers doing speaker reviews are not familiar with what is almost surely the most important research on speaker performance to date. This is like staffing an astronomy magazine with writers who aren't familiar with Edwin Hubble's work.
  • Dustin · 10 months ago
    @Dustin FYI - Quote from Floyd Toole in post #10499:

    "A correlation coefficient of 0.86 is not perfect". True, but some context helps to explain the shortfall. Those same papers explained that bass extension and smoothness accounted for about 30% of the factor weighting in the sound quality ratings. Because the 70 loudspeakers in that test included full range floor standers and small bookshelf units it was obvious that some of the variation was due simply to the differing bass performances - a good speaker with bass beats a good speaker with less bass. When a subset of bookshelf speakers was tested - having similar bass performance - the correlation coefficient was 0.995 - i.e. perfect. So, when "apples" are compared to "apples" in terms of bandwidth, the correlation is truly amazing."

    https://www.avsforum.com/forum/89-speakers/710918-revel-owners-thread-350.html

  • Dustin · 10 months ago
    @Dustin FYI - Quote from Floyd Toole in post #10499:

    "
    Olive, S.E. (2004a). “A multiple regression model for predicting loudspeaker preference using objective measurements: part 1 – listening test results”, 116th Convention, Audio Eng. Soc., Preprint 6113.
    Olive, S.E. (2004b). “A multiple regression model for predicting loudspeaker preference using objective measurements: part 2 – development of the model”, 117th Convention, Audio Eng. Soc., Preprint 6190."

    https://www.avsforum.com/forum/89-speakers/710918-revel-owners-thread-350.html

  • Brent Butterworth · 10 months ago
    @Dustin Atkinson's methods are indeed flawed -- quasi-anechoic is inherently so -- but they're more than adequate to tell you if a speaker performs well. He is honest and knowledgeable when discussing the flaws in his methods. In the article I cited, Atkinson's measurements easily found the Tannoy speaker's glaring flaw, while the reviewer completely overlooked it. The subjective reviewer -- whose knowledge is scant and whose hearing acuity is unknown -- is the problem here, not the measurements.

    I worry that if we demand our speaker measurements come from an anechoic chamber or a Klippel analyzer, we make the perfect the enemy of the good. We make it impossible for new reviewers and audio publications to even consider doing measurements, and we just encourage startup (and established!) manufacturers to "trust their ears" rather than learn the science behind their craft.
  • Brent Butterworth · 10 months ago
    @Todd Hi, Todd. I really don't know. It would sure be fun to test that, though.
  • Brent Butterworth · 10 months ago
    @Bob Johnson OK, but I have worked for at least 10 publications that review audio gear, and TMK, what you're describing has happened with only one writer I've worked with -- whom I was in the process of firing when he quit. I've lost plenty of advertising dollars when people didn't like their review or didn't get as many reviews as they liked, and I know many competing publications have, too.
  • Bob Johnson · 10 months ago
    As someone who used to spend $1,000's every month on magazine ads, everyone forgets the ONLY real purpose of a magazine is to sell advertising! Readers are there only as something to sell to advertisers and set ad rates, the amount of money coming in from subs is negligible compared to ad revenue. Magazine reviews are worthless for decision making, basically they are "payoffs" to advertisers for their advertising dollars. One example, after getting a publisher to agree to a product review, he told me to contact the reviewer to arrange it. I called the reviewer, an older, very respected member of the community, and started talking about shipping arrangements. His response was "hell kid, I don't have time for that crap, send me some pretty pictures and I'll take care of writing the review." Very enlightening.
  • Todd · 10 months ago
    Question for Brent,

    Brent, I recently heard Andrew Jones remark that it is a misnomer that the majority of speaker damage is caused by amplifier clipping. Rather, he said speaker damage is almost always caused by too much power. I was surprised, and would like to hear your view on this if you care to opine.
  • Dustin · 10 months ago
    @Joe Pop Floyd Toole has pointed out a number of times over at AVS Forum that John Atkinson's measurements are inadequate in many ways. The best way to interpret how a loudspeaker will sound through measurements is through the "Spinorama" method. Check out Audio Science Review. The founder has been measuring many speakers using this method.
  • Dustin · 10 months ago
    Great article, Brent. There was also a fair bit of back and forth on this topic in the comments section in another recent Stereophile article (Totem Skylight speakers). I tried to argue in favour of what the science has demonstrated (username: buckchester). I even got some replies from Jim Austin, the new editor. I was disappointed with his responses. It’s frustrating when so many people in this hobby are so obviously irrational.

    Floyd Toole posts quite often on AVS Forum and he has actually stated that when speakers of similar bass capability were used, the correlation actually increased from 86% to 99%. I can find you the exact quote if you’d like.
  • Brent Butterworth · 10 months ago
    @Dr. Ears If you haven't read it yet, this paper by Floyd Toole goes into depth about frequency response curve preferences for speakers. https://www.harman.com/documents/AudioScience_0.pdf
  • Dr. Ears · 10 months ago
    @Brent Butterworth My gutted and redone Green Mountain Audios sound great for two main reasons: first, they are time-aligned and phase-coherent; second, they are 4-ways using only first-order crossovers. Matching drivers is a bitch and a once-in-a-lifetime achievement.
  • Dr. Ears · 10 months ago
    The biggest lie in audio is, "I think your system sounds better than mine".
    I have been buying & selling New Old Stock audio tubes for four decades.
    Whenever I buy a decent-sized lot, I take the best- and worst-testing pairs and listen to them. I have never heard an audible difference, so I concluded long ago that whatever we are testing for cannot be heard.
    As components have gotten better with the notable exception of audio vacuum tubes, we can now reproduce a flat frequency curve better than ever.
    However, I believe that most of us find a flat frequency curve to sound harsh with listening fatigue occurring fairly quickly.
  • Brent Butterworth · 10 months ago
    @Erin Nice!

  • Brent Butterworth · 10 months ago
    @no ne Great question, and one that real speaker aficionados (e.g., people who measure, design, build, etc.) often discuss. The key here is "blind listening tests." B&Ws enjoy an esteem that almost certainly helps them in sighted testing; I suspect a desire to maintain that brand identification is the reason they stuck with Kevlar diaphragms long after most others abandoned them. But I cannot recall a situation where B&Ws excelled in a multi-listener blind test. In fact, most of the time when manufacturers have demoed their speakers for me in blind tests, a B&W model was among those they chose to demo against.

    I haven't measured or evaluated a B&W model in three or four years, but I have reviewed many, going back to the early 1990s. From what I have observed, the company's speaker lines are inconsistent. There seem to be great and mediocre models within each line. I cannot identify a common design philosophy running through them. In comparison, I have done a blind test with multiple models from the Revel Performa2 series, and they sound (and measure) almost the same.

    Speaker popularity from a sales standpoint has little to do with performance -- if memory serves, Bose was the most popular speaker brand from the late 1980s until a few years ago, when they were replaced by Amazon. Neither is revered for sound quality.
  • Brent Butterworth · 10 months ago
    @John Mayberry Hi, John. Much of what you've said is new to me. With waterfall responses, have we determined what "passable" is? Is there research that ties these to blind listening test results?

    Ditto for impedance -- my measurements demonstrate the correlation of headphone impedance curves with sensitivity to the output impedance of the source device, but I don't know of any research that ties speaker impedance curves to listening test results, other than that a <4-ohm impedance is more than a lot of amps can handle.
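
    For anyone curious, here's a minimal sketch (with made-up impedance numbers) of the voltage-divider interaction I'm describing between a source's output impedance and a headphone's frequency-dependent impedance:

    import math

    def level_shift_db(z_headphone_ohms, z_source_ohms):
        # Voltage divider formed by the source's output impedance and the headphone load.
        return 20 * math.log10(z_headphone_ohms / (z_headphone_ohms + z_source_ohms))

    # Hypothetical headphone: 32 ohms through the midrange, rising to 60 ohms at a bass resonance.
    for label, z in [("midrange (32 ohms)", 32.0), ("bass resonance (60 ohms)", 60.0)]:
        print(f"{label}: {level_shift_db(z, z_source_ohms=10.0):+.2f}dB with a 10-ohm source")
    # The roughly 1dB difference between those two points is a response error the headphone
    # itself doesn't have; a low-output-impedance source (say, 1 ohm) shrinks it to about 0.1dB.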

    I measure only about 15-20 speakers a year right now, but I did a lot more when I worked for Sound & Vision. Off the top of my head, I'd guess that a third of them measured pretty well. Maybe even half of them. Of course, those were mostly fairly mainstream products; if you measured all the speakers at a high-end audio show, I expect the percentage wouldn't be as high.
