Over the years, Google has produced some truly bizarre hardware products. (Remember the Nexus Q, Google’s “set-top sphere”? Me neither.)
Well, don’t look now, but here comes the company’s weirdest hardware yet: Google Clips ($250).
It’s a tiny, thin camera, about the size of two stacked Triscuits, that combines elements of a spy camera, GoPro camera, and cellphone camera.
The Clips is designed for parents (of children or pets). Of course, we all have perfectly good cameras in our phones — actually, better cameras. But using only our phones presents a few problems:
You’re never in the pictures with your kid or pet.
Babies and toddlers often stop whatever cute thing they’re doing when they see your phone come out, because it’s kind of big and intrusive.
You can’t predict when your subject is going to do something adorable; odds are pretty good that you’ll miss it.
If you film or shoot enough that you always capture the good stuff, then you’ve got endless quantities of stuff to edit.
All your photos and videos of your kid are taken from the same angle: Your height.
The Clips is just thick enough that it can balance on its edge. It also comes with a rubbery holder/case, which can act either as a kickstand or a clothespin, so you can clip it to things to get cool angles. (The name “Clips” is a pun, involving both the rubbery clip and the short videos that the camera captures. More on that in a moment.)
When something adorable starts happening, you pull out the Clips; rotate its black lens to turn it on; and set it down (or clip it) between three feet and eight feet from the action.
At this point, of course, there’s nobody pressing the shutter, and there’s no self-timer. Instead — this is the Clips’s headline feature — the camera uses artificial intelligence to decide what and when to capture. Whatever it grabs shows up on your phone, in the Clips app (iPhone or Android).
The camera supposedly learns, over time, who’s in your family, by seeing which faces appear most often. (The camera’s ability to recognize people, dogs, and cats is brought to you by the AI built into Google Photos. In fact, if you’ve used Google Photos to tag faces with names, the Clips treats those people as familiar faces, and favors them in its photography.)
There’s one button on the camera, too, which you can use to snap portraits manually, as a way of telling it, “This is one of the people I care about.”
Clips and privacy
Once you’ve turned the lens to turn on the Clips, it watches the room for three hours on a charge. An LED indicator gently blinks to tell you that the camera is watching, but you get no indication when it’s actually capturing.
Clearly, there’s a creep factor to a camera that decides on its own what to shoot and doesn’t tell you when it’s rolling. For that reason, Google has gone to extremes in trying to reassure you about privacy:
This camera isn’t connected to the internet — can’t be connected. All of the AI and learning is done right on the camera, not on some cloud servers. (Google says that that feature, building machine learning AI into something this tiny, is a big accomplishment. A camera like this could not have existed a couple of years ago — that much computing power would have eaten up the battery charge in a heartbeat.) The only connection is to your phone.
The photos are encrypted on the camera. If someone steals it, they’ll have no access to what you’ve shot.
The camera doesn’t record sound with its videos.
Man, that one hurts. No sound? So what does it record? Like so much about the Clips, this part requires some explanation.
The Clips snaps bursts of 105 photos, which it insta-stitches together into what Google calls a Motion Photo — basically, a seven-second silent video clip. One that plays a not-very-smooth 15 frames a second. (TV, for comparison, shows you 30 frames a second.)
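The numbers line up, by the way. A quick back-of-the-envelope check (plain Python, nothing Clips-specific):

```python
# Sanity check on the Motion Photo math: 105 frames played back at 15 fps.
frames = 105
clips_fps = 15   # Motion Photo playback rate
tv_fps = 30      # TV, for comparison

clip_seconds = frames / clips_fps
print(clip_seconds)        # 7.0 -- the seven-second clip
print(tv_fps / clips_fps)  # 2.0 -- TV shows twice as many frames per second
```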
What’s impressive is how fast the camera sends fresh recordings to the corresponding Clips app on your phone (it uses a private Wi-Fi Direct connection).
Here’s what else you can do in the app:
See a live preview of the camera’s view, since the camera itself has no screen.
Manually trigger a capture.
Quickly and efficiently scroll through the captures: swipe left to discard one, swipe right to save it to your phone’s camera roll. On the iPhone, it becomes what Apple calls a Live Photo — a still photo that, when hard-pressed with your finger, plays a three-second video clip. (In this case, the Live Photo has a seven-second video clip, which represents some sneaky engineering by Google.) On Android, it remains a Motion Photo.
Shorten or crop a video.
Pull out one frame of the video as a still image, although it’s common to get motion blur in these.
Use the app’s own AI to choose a subset of the captures — the “winners” — automatically.
Adjust settings so that the camera captures shots more or less frequently.
The app is really well done. The actual photos are another story.
What you get
Despite the cool idea of an AI camera, the results are disappointing.
The photos don’t look as good as your phone’s. In low light, they’re grainy; indoors, there’s often motion blur.
The camera has a fixed-focus, very wide-angle (130-degree) lens. As a result, anything closer than three feet is out of focus, and anything farther than eight feet looks really tiny. And anything near the edge of the frame gets bizarrely stretched and distorted.
But the bigger issue is that the AI doesn’t work especially well. It captures things, all right, but I’m not sure that its artificial intelligence is any match for your intelligence.
I spent a morning with my adorable five-month-old friend Cody and his mom Lauren. The Clips caught plenty of cute clips — but not always the great ones. At one point, Cody managed to flip himself from back to front. “Good job!” his mom exclaimed. “Did it record that?” she asked me.
No, it did not.
At my own house, I love tossing cat treats for Wilbur the Wonder Cat. He bounds across the slippery floors, chasing it like a cat out of hell, and then pounces on the treat, skidding hilariously three or four feet. I set up the Clips at the right spot for the landing and tossed the treat on target over and over again. The Clips couldn’t get the Wilburdive.
Then there’s the central concept of trusting the camera to decide what to capture. Yes, it’s AI, but what does that mean?
Google says that the camera is waiting for the right combination of lighting, composition, and smiling faces. But do you want photos (or silent video clips) only of the happy moments in your life? Is it possible that you might sometimes want to capture an unhappy moment — say, the tragicomic moment when your 4-year-old’s ice-cream scoop falls off its cone? Google’s AI won’t capture that. (The company says that it plans to offer preference settings for emotional tone in a future update.)
I love the idea of a camera that uses AI to capture the good stuff all by itself. And I do love the freshness of the angles and positions that the Clips’s clip permits.
I just don’t think that much of the Clips’s clips.
You’re paying $250 for a camera that can’t directly take stills and can’t capture video with sound. It doesn’t work as an “ambient camera,” like a security camera that’s rolling all the time. It doesn’t work as a GoPro-type camera, either; its super wide angle means that if it’s clipped to, for example, your body, the video is unwatchably jerky. And its AI only sort of works.
I’m glad that Google did the Clips experiment, because there are some really good ideas here, and real-world problems to be solved. I just don’t think you should buy it.
David Pogue, tech columnist for Yahoo Finance, welcomes non-toxic comments in the Comments below. On the Web, he’s davidpogue.com. On Twitter, he’s @pogue. On email, he’s firstname.lastname@example.org. You can sign up to get his stuff by email, here.