To convince journalists about the audio quality of its new HomePod smart speaker (here’s my review), Apple did something smart: Before we were given our review units, we were required to attend a listening session. Mine was held in Apple’s New York City public-relations loft, a mockup of an apartment.
Four speakers were on a counter against a wall: Sonos One ($200), Google Home Max ($400), the HomePod ($350), and the Amazon Echo Plus ($150).
The PR person could switch playback from one speaker to another without missing a beat. They even had a halo light rigged to turn on behind whichever speaker was playing, so you’d know which was which.
There was not a shred of doubt: In this side-by-side comparison, the HomePod sounded better than its competitors.
Most of the reviews, including mine, said the same thing: that the HomePod isn’t as smart as the other smart speakers (among other problems, its voice control is limited to iTunes and Apple Music — no Spotify), but that it sounds amazing.
What Hi-Fi (a British audiophile site): “The HomePod is the best-sounding smart speaker available—and by quite a margin.”
Pocket-Lint (tech site): “The best sounding speaker of its type.”
The Verge: “It sounds far better than any other speaker in its price range.”
TechCrunch: “HomePod is easily the best sounding mainstream smart speaker ever. It’s got better separation and bass response than anything else in its size.”
Still, when I tweeted about the test, a couple of people were suspicious of the setup, which of course was entirely controlled by Apple. What was the source material? What was the wireless setup?
An Apple rep told me that the test songs were streaming from a server in the next room (a Mac). But each speaker was connected to it differently: by Bluetooth (Amazon Echo Plus), Ethernet (Sonos One), input miniplug (Google Home Max), and AirPlay (HomePod), which is Apple’s Wi-Fi-based transmission system.
Since the setup wasn’t identical, I wondered if it was a perfectly fair test. (Bluetooth, for example, may compress, and therefore degrade, the music it’s transmitting, depending on the source and the equipment.)
So I decided to set up my own test at home.
I hid the four speakers behind a curtain — a sheet of thin, sheer fabric that wouldn’t affect the sound. It took me a Sunday to figure out how to get the A/B/C/D switching to work seamlessly, but I finally managed it: All four speakers would be streaming from Spotify, all four over Wi-Fi. I’d use the Spotify app’s device switcher to hop among speakers without missing a beat.
I chose five songs, each with different styles, instrumentation, and sonic demands:
“Star Wars: Imperial March.” Full orchestra, full volume, full of brass.
“Havana” (Camila Cabello). Current pop hit. Distinct bass, drums, piano, and voice. Lots of rhythm.
Brandenburg Concerto No. 3 in G Major. All strings, full range of pitches and dynamics.
“Hallelujah” (Pentatonix). A cappella ballad, five voices, very exposed and close to the mikes.
“Helpless” (from “Hamilton”). Broadway pit band, pop sound, female harmonies.
In these kinds of tests, volume matching is incredibly important, for a couple of reasons. As Tom’s Hardware puts it: “First, if sources are at different levels, they’re easy to tell apart. From there, the test is no longer blind. Second, us humans tend to prefer (all other factors being equal) louder sources.”
For my dress rehearsal the night before, I volume-matched them as best I could by ear.
The panelists at the dress rehearsal were my wife Nicki and my friend Mike, a professional guitarist who spent years as an audio technician for big-name touring bands.
I gave each panelist a score sheet, with room for notes, and asked them to rank the four speakers, 1 through 4, after each listening test. I sat at the laptop to control the tests; I played the same section of each song for about 20 seconds on each speaker. Panelists were free to ask for replays, to hear any speaker again, or to hear two speakers back to back in a different order.
At the end of the rehearsal, I asked the listeners to choose a winner, based on how many first-place finishes they’d marked down. Both Nicki and Mike declared the HomePod to have the best sound, hands down.
The next day, the Yahoo film crew arrived. Our sound recordist, Dave, used level meters to help me volume-match the speakers more precisely.
My five panelists included Darwin, a professional violinist who spends a lot of time listening to recordings on nice gear; Julie, an entrepreneur and homeowner who is precisely the target market for these speakers; Dana and Tori, high-schoolers who haven’t yet begun to lose their ability to hear high frequencies; and Rob, a sound technician for Yahoo.
I didn’t tell them which speakers would be involved. I said only that there were four of them behind the curtain, and I’d refer to them as speakers A through D.
I handed out their score sheets and began the test. Five songs, 20 seconds each, free replays when requested. For each song, I played the speakers in a different order (A to D sometimes, D to A sometimes).
Of course, I knew what the results would be. I’d heard them myself in the Apple demo; I’d read the other reviews; and I’d done the dress rehearsal the night before. Every time, the HomePod won the match easily.
At the end of my own listening test, then, I handed out signs that said “A,” “B,” “C,” and “D,” and asked the panelists to hold up their winners’ signs on the count of three. I knew what they would say: “B,” “B,” “B,” “B,” and “B” (that was the HomePod’s letter).
That’s not what happened.
They held up their signs. Two of them ranked the Google Home Max (“D”) as the best. Three of them ranked the Sonos One (“A”) the best.
Nobody ranked the HomePod the best.
I actually have no great explanation for this outcome. Most of the panelists had ranked the HomePod (“B”) as first on some of the songs — just not most of the songs.
Rob: “For me, A, the Sonos, consistently had the most robust sound of all of them.”
Tori: “The Sonos won two of them for me. ‘B’ [HomePod] won the ‘Star Wars.’”
Dana: “‘B’ [HomePod] won one of mine. I felt like ‘A’ [Sonos], a lot of times, sounded a lot more sharp.”
Julie: “I picked between B and D [HomePod and Google Home Max] as being the two best. B and D were pretty clear. And C [the Amazon Echo] came in consistently last for me.”
Darwin: “I actually found A [the Sonos] to be the one that I hated the most. B [HomePod] did win one for me. It won ‘Havana,’ because it had a better low end. But I generally picked D [Google Home Max], because it had a clearer, nicer range. As a classical person, I definitely would go with D. But if I were listening to more pop stuff, I could see where ‘A’ [Sonos] could win.”
So what are we to make of this? Why did none of my panelists rank the HomePod a solid No. 1, when most critics do (and so do I)?
Was something wrong with my setup? Well, no, because the night before, using the same setup, Nicki and Mike both ranked the HomePod No. 1.
Here are my theories:
Different music is different. My panelists all conceded that there was some variation depending on the material. “Honestly, they were pretty on par,” Rob said. “I don’t know that one stood out that much more than the other.” “It was much different with different music,” Darwin added. “It varied a lot for me, depending on the song,” Tori agreed.
Different people are different. I said that most professional critics ranked HomePod as No. 1, but not all of them. BuzzFeed’s critic Nicole Nguyen, for example, concluded: “Ultimately, none of this is a hard science, and audio preferences are highly subjective. Reactions to its audio quality from the four people who listened to it for this review … were mixed. The HomePod outperformed other speakers in some situations and not others.” And the Wall Street Journal’s Joanna Stern wrote, “The HomePod’s bass is impressive for the size of the speaker, but in many songs, it’s far too front-and-center in the mix.”
Nobody else did blind tests. As far as I can tell, none of the other critics who declared HomePod No. 1 actually set up their own blind A/B/C/D tests. Maybe their conclusions wouldn’t have been so emphatic if they had.
Apple’s setup was different. Remember, Apple’s four speakers were each connected to the source material differently: Two wired, one over Wi-Fi, one over Bluetooth. Maybe that wasn’t an even playing field — and for sure, it wasn’t a real-world playing field. Most people, most of the time, just connect these speakers to their Wi-Fi networks and stream music from an online service.
What I can say for sure is this:
To my ears, the Apple HomePod generally sounds better than any other smart speaker—but only somewhat, and only in direct A/B/C/D tests. If you listened to the HomePod, Sonos, and Google Home an hour apart, you’d never be able to declare one a clear winner. (Everyone agrees that the Amazon Echo Plus is the loser in this roundup, but then again, it’s $150 and the size of a Pringles can; it’s not a fair fight.)
You can get two Sonos Ones for the price of a single Apple HomePod. You can use them as a stereo pair, or put them in different rooms and control them by voice. And you can have your choice of 42 music services (Spotify, Pandora, TuneIn, etc.) — not just Apple Music. And you can use all of Amazon’s Alexa voice commands (and, soon, Google’s commands and even Siri’s commands!), meaning you can control a vastly larger range of smart-home devices than the HomePod can.
Music gear and listening tests are famously contentious; they’re probably responsible for triggering more flame wars online than abortion and gun control put together. I’d love to hear your thoughts on Apple’s test and mine in the Comments!
David Pogue, tech columnist for Yahoo Finance, welcomes non-toxic comments in the Comments below. On the Web, he’s davidpogue.com. On Twitter, he’s @pogue. On email, he’s email@example.com. You can sign up to get his stuff by email, here.