At Google’s Pier 57 offices in New York overlooking the Hudson River earlier this month, I had the future in my hands — and on my face. I wore wireless glasses with a display in one eye that could project Google Maps onto the floor in front of me, show me Uber updates, and automatically recognize and translate languages spoken aloud. I could understand a conversation in Chinese.
I tried another pair of glasses, connected by cable to a phone-like puck. This pair could run apps in front of me, just like a mixed-reality VR headset. I could connect with a PC, click on floating cubes with my hands and play 3D games. It was like a Vision Pro I could carry in my jacket pocket.
That future is upon us. You’ll be able to try out those glasses for yourself in 2026.
But those two very different stylings — one everyday and subtle, one more like a tiny AR headset — are just a glimmer of what’s coming.
My desk is covered with smart glasses. A pair of large black frames that show me a color display in one eye and that have a neural wristband I can use to relay commands. A regular-looking set of Ray-Bans that play music and take photos.
Then there’s the pair of black glasses that have lenses I can snap in, with green monochrome displays and ChatGPT integrated. And the thin glasses that have displays and a companion ring, but no speakers. And the glasses built to assist my hearing.
To watch movies or do work, sometimes I plug a completely different set of glasses that can’t work wirelessly at all into my phone or laptop with a USB cable.
Smart glasses are the biggest new product trend as we cross the halfway mark of the 2020s. Glasses with smart features may conjure up visions of Tony Stark’s eyewear, or those world-scanning glasses in the Kingsman movies, and that’s exactly what most big tech companies are aiming for.
“What we talked about originally, when we brought up the vision of this platform, was the old Iron Man movies where Tony Stark has a Jarvis that’s helping him,” Google’s head of Android, Sameer Samat, tells me. “That’s not a chatbot interface — that’s an agent that can work with you and solve a task in the space that you’re in. And I think that’s a super exciting vision.”
But it’s taken a long time to get here, and the vision is still coming into focus. Over a decade ago, Google Glass sparked debates about social acceptance, public privacy and “Glassholes.” In a review back in 2013, I wrote: “As a hands-free accessory, it can only do so much, and it doesn’t mirror everything I can see on my phone. In that sense, I currently feel the urge to go back to my phone screen.”
While the tech has advanced a lot in the last 12 years, smart glasses still face that same challenge.
At least now they’re finally becoming functional, less cumbersome and regular-looking enough to live up to their never-ending hype. They’re probably not everything you’d expect, and many have significant tradeoffs and drawbacks. But what they can do is astonishing. And a little bit scary.
The capabilities and features vary widely, but all have one thing in common. They aim to be what you want to wear, ideally every day and all day long. They could well become constant companions like your earbuds, smartwatch, fitness band and wellness ring, and as indispensable as your phone.
Are you ready for that?
So, so many smart glasses
Today’s explosion of smart glasses is reminiscent of the early 2010s, when dozens of different watches and bands were all trying to find a way onto our wrists, from the early Fitbits to the first stabs at smartwatches like the Pebble and Martian. The question back then was whether we’d really end up wearing something like this on our wrists all the time. The answer turned out to be an emphatic yes.
Now the push is to figure out computing on your face. Those in the hunt include a litany of everyday names in the consumer tech and eyewear sectors, from Meta, Google, Samsung, Amazon, Snap and TCL to EssilorLuxottica, Warby Parker and Gentle Monster.
Smart glasses are starting to find their footing. Meta’s Ray-Ban glasses went from a weird, creepy novelty when they arrived in 2021 to something I regularly take on vacations, and even wear half the time. Companies like Nuance make FDA-cleared hearing-aid glasses that are already in stores. But the biggest movers haven’t arrived — Google and Samsung are next on deck, and Apple could be announcing glasses next year, too.
What’s still lacking is a concise definition of what “smart glasses” actually are. Even Samsung and Google have subdivided the category into a number of product types, ranging from phone-tethered, sometimes-on visors to completely wireless glasses. Some smart glasses just have audio assistance, like earbuds, and others add cameras. Some have displays, but what they’re used for — and the quality of the display — can vary widely. Some show notifications from your phone. Some browse apps. Some can act as viewfinders for your on-glasses camera. Some can do live captioning.
As companies try to conjure up super-glasses that can do it all, we’re seeing a whole lot of experimentation. It’s something that will no doubt be a big theme at CES in early January. Smart glasses are also being positioned as the ultimate gadget for tapping into AI, the massively disruptive, ever-shifting technology that Big Tech can’t get enough of.
But there are still mundane, yet essential, factors that need to be addressed, like battery life, display quality, size and comfort. There are also questions of how information gets delivered from the phone, along with accessibility, privacy, function and social acceptance. And how exactly will they fit in with the phones, earbuds and watches we’re already using?
Sorting all that out is what the next 12 months are all about. Let’s dive in.
AI: The glue and the reason
I’ve spent a lot of time walking around my neighborhood wearing a large pair of glasses, looking at things around me and wiggling my fingers to interact with a band on my wrist. Meta’s Ray-Ban Display glasses are showing me answers to my questions. I’m getting pop-up text responses to things it’s taking little pictures of using the frame’s camera. It’s call and response, as Meta AI attempts to help me on the fly.
This is what most of the glasses-making big tech companies are dreaming of — smart glasses as a wearable assistant, equipped with audio, a miniature display and a handful of connected apps and AI tools.
At Meta’s Menlo Park headquarters in September, I spoke with CTO Andrew Bosworth about the company’s big, unfinished push to make true AR glasses that blend 3D imagery and advanced interfaces. A year earlier, I’d tried Orion, Meta’s prototype with full and immersive 3D displays and the ability to track both my eyes and my wrist gestures. But that product isn’t yet ready for the mainstream — or affordable. Instead, we had this year’s Ray-Ban Displays, with a single color screen, no 3D and no extra apps, though they do have that wrist-worn neural input band to interpret hand gestures like pinches and swipes.
Bosworth foresees a spectrum of different-featured glasses, not one ultimate model.
“We are seeing strata emerge where there’s going to be lots of different AI glasses, platforms, AI wearables in general. And people are gonna pick the one that fits their life, or their use case,” Bosworth says. “And they’re not always going to wear the Display [glasses], even if they have them. They might sometimes prefer just having the [screen-free] Ray-Ban Metas.”
Meta’s smart glasses have been a success story, especially for partner EssilorLuxottica, which saw a 200% increase in sales of the Ray-Ban Metas in the first half of 2025, with over 2 million pairs of glasses sold. Those numbers are nowhere near the sales of smartphones or even smartwatches, but for the first time, there are signs of growth. (That’s for Meta’s screen-free glasses, which have cameras, audio and AI. The more expensive Displays only just came out in September.)
Meta’s entire lineup of smart glasses has live AI modes that can see what I’m seeing and respond to my voice prompts. It’s a very mixed bag, though. Often, I find the suggestions unhelpful or the observations slightly off — it misidentifies a flower, guesses at a location or hallucinates things that aren’t there.
While a long-term goal for AI is to develop “world models” of what’s around you, using that to help map and understand your environs, right now AI on glasses is just doing quick spot-checks of photos you take or things it hears through microphones. Still, it’s the closest way that AI can come to really observing your life right now, which is why Meta and Google see glasses as the ultimate AI doorway, even as a variety of pins, rings and pendants compete to be the AI gadgets of choice.
The big new catchphrase to keep an eye on is “contextual AI,” which refers to the hoped-for stage when AI will be able to recognize what you’re doing and meet you more than halfway. How? By understanding where you are or what you’re looking at, similar to the way a search engine knows your browsing history and stores cookies to serve up ads, or your social media has an all too eerie sense of what you’ve been up to.
The best preview of how things could work is inside a new VR/mixed-reality headset, the Samsung Galaxy XR, which has been perched on my face for the last few months. It can see everything I’m seeing and use that to fuel Gemini, Google’s AI platform. In Galaxy XR, I can circle to search something in my space, ask Gemini what’s on my desk or get it to describe a YouTube video.
Samsung and Google are leaning on the bulky and not-very-glasses-like Galaxy XR to explore how they can bring “live AI” to actual glasses soon. Warby Parker and Gentle Monster smart glasses coming next year are going to lean on camera-aware AI just like Meta does, but with a lot more possible hook-ins to Google services and to other apps — like Google Maps and Uber — that live on phones.
“Our goal is to go beyond the world of assistance that’s on demand, and more to a world where it’s proactive, and that requires context. Your personal assistant can’t act in a proactive way without context of you and what’s going on around you,” Google’s Samat says.
Samat sees XR, or extended reality — the mix of virtual reality, augmented reality and your actual real-world environment — as fertile ground for that to take root.
“There’s a less established interface … so it’s a perfect opportunity to define something new, where the personal assistant is an integral part of the experience,” Samat says. “And the system has a perfect view into what you are seeing and hearing, so that connection of context is made easier.”
But the more advanced glasses get, the more sophisticated the ways of controlling them will need to be.
Wrists: Gestures start here
Meta’s Ray-Ban Displays have an extra that points toward the future of glasses like a big flashing arrow. A neural band on my wrist, looking like an old-school screenless Fitbit, is studded with sensors that measure electrical impulses and turn my finger gestures into controls.
But a dedicated band isn’t the only way to register hand gestures. Smartwatches could be used as glasses controls, too. Samsung and Google — both of which have their own smartwatch lines — see this as an opportunity, and not just for gestures.
“Suppose you have smart glasses without a display,” Won-joon Choi, Samsung’s COO for mobile experience, tells me. “We do have a lot of other devices, even wearable devices, that have a display so you can utilize that.”
Google’s glasses next year will work with watches, both for gestures and simple tap interactions. They’ll be optional accessories for your glasses, in a sense.
Meta’s Bosworth has similar feelings about how the neural band could evolve, saying it could be integrated into a watch strap or gain a watch-like screen in the future.
There’s precedent for a symbiotic relationship between gadgets. Apple’s AirPods and watch form a wearable pairing — as do other smartwatches and buds — and what’s especially interesting is that the Apple Watch and AirPods have gesture controls of their own. I can double-tap or flick my wrist on my watch, or nod and shake my head with AirPods. Add glasses and a few more gestures to the mix and you can see where things are going.
Or the companion for smart glasses could be on your finger. The newly released Even Realities G2 display-enabled glasses work with a separately sold R1 ring that has a touchpad to let you swipe and tap glasses functions, and that doubles as a fitness ring. Halliday glasses, which also have a display in them (over one eye), have a ring too.
Which raises a conundrum. Despite being a reviewer of advanced wearable tech, I don’t want to wear lots of extra things on me. It’s becoming more than I can keep track of, including the need for multiple charging cables. The solution feels obvious: integrate the controls into the watches we’re already wearing, rather than make something new and extra.
But that also points to an even bigger part of the glasses problem right now: our smartphones and the ecosystems, controlled by Apple and Google, that run on them.
For glasses to integrate well with smartwatches, the companies making them need to enable the connections. That’s something Google and Samsung look close to tackling in the next year. (Apple, as always, remains more of a mystery.)
Will Wang, co-founder of Even Realities, worked at Apple on wearable forms of human interfaces, including the Apple Watch. The lack of connectedness on the Apple Watch for richer third-party apps, he says, restricts Even Realities from pairing with the watch — hence the ring. Meta, which has no phone or watch of its own, is partnering with Garmin for its fitness glasses.
We’re going to need smart glasses makers to figure this out quickly, to help us better navigate apps on their displays. That’s not so easy when you’re wearing Meta Ray-Ban Displays, even with a gesture band to swipe between apps. Is eye-tracking technology the answer? Don’t count on it anytime soon, even if it does exist to some degree in the Orion glasses and on the larger Apple Vision Pro and Samsung Galaxy XR mixed-reality headsets.
I don’t need something bleeding-edge fancy. I just want something easy and effortless — a few taps of my finger to make something happen, not a lot of gestures that make me feel like I’m navigating a phone on my face. But to do that, you’d need smarter, more contextual AI.
Voice commands are an option, but they’re hardly perfect. My glasses don’t always understand my requests, and conversations take too long. Gestures can shortcut and bypass voice, and for gestures, you need either camera tracking or a thing to wear on your hand.
Don Norman, a former Apple designer and the author of The Design of Everyday Things, which reflects on the future of smart glasses, sees a challenging landscape.
“The obvious solution is to use exotic gestures or spoken commands, but how will we learn and remember them? The best solution is for there to be agreed-upon standards,” Norman writes in the 2013 update to his classic book. “But agreeing on these is a complex process, with many competing forces.”
A decade-plus later, we’re still a long way from a common interface.
Displays: How good could they get?
Every once in a while, I unfold a pair of Xreal glasses that look almost like a pair of everyday sunglasses, except for the USB cable I plug into my phone or my laptop. A big and surprisingly good virtual display floats in front of my eyes, and I can watch a movie on a flight or work on a virtual monitor.
These glasses can’t work as something I wear around all the time. Display glasses like Xreal’s, or those from competitors like Viture and TCL, use bulky lens systems to project micro OLED displays that aren’t fully transparent, and they can’t be battery-powered yet.
But I’ve had a peek at how these types of tethered glasses are evolving. Those glasses I saw in early December — Google and Xreal’s Project Aura, coming in 2026 — have a larger screen and can connect with PCs and run VR apps, just like the larger Samsung Galaxy XR headset that went on sale in October. Think of it as a portable Apple Vision Pro in glasses form.
Display-equipped smart glasses, with their transparent lenses, are more limited. Many, like those from Even Realities, Halliday and Rokid, use monochrome green micro LED display tech to show plain text. Meta’s Ray-Ban Displays have a single, smaller but high-resolution color screen that pops up in one eye, using an LCOS (liquid crystal on silicon) display projector, and a new type of lens tech called a reflective waveguide in which tiny mirrors bounce the light back.
Google’s 2026 smart glasses will have similar single-eye displays. They can play back YouTube videos, but the displays still feel small for watching anything like a movie. Right now, the viewing area’s limited to what feels like the screen of a smartwatch floating up in front of one eye.
But Schott, a Germany-based optics company that manufactures reflective waveguide-equipped lens technology, sees possibilities. Rudiger Sprengard, head of augmented reality for Schott, says a larger display area of 60 degrees is possible. That’s around the virtual screen size of what I get on Xreal’s tethered display glasses.
But even when that happens, they still might not be ready to play movies like plug-in glasses can. The concern is battery life: Meta Ray-Ban Displays only show occasional information and heads-up text, and don’t work as a way to browse and play back videos — the battery life would get chewed up fast.
“It’s not limited by the waveguide,” Sprengard says of wireless glasses’ smaller displays. “It’s limited by the overall system and the requirement to make it fashionable, small, lightweight, and the electronics and optics related to it.”
Also, smart glasses lack higher-speed wireless connections to phones to make video playback work. Smart glasses right now use Bluetooth to pair with phones. To sync bigger files, like photos and videos on Meta’s Ray-Bans, I need to connect temporarily using an awkward local Wi-Fi link. Google’s new Android XR OS for glasses and headsets is looking to bridge that gap and make glasses work more seamlessly with phones. Expect Apple to do the same.
How much smaller can they be?
Putting more features on smart glasses means creating space for them. On a pair of glasses you’re meant to wear all the time, that’s not easy. Space is severely limited, and weight limits are unforgiving. Ray-Ban Displays are passably fashionable, but even Bosworth admits Meta lucked out that chunky glasses are in right now. They’re big by necessity. Batteries, display projectors, speakers, processors, cameras — they all need to be tucked in there.
Smart glasses can be really good at being headphones, projecting audio from small speakers in the arms, or taking phone calls using an array of directional microphones. But some don’t have audio at all.
Even Realities is choosing to leave features out. The company’s G2 glasses have monochrome displays and microphones, but forgo speakers and cameras. That could be a plus for people who don’t like the idea of a camera on their face. It also helps Even Realities push for smaller sizes and better battery life. I was impressed that the G2 glasses look remarkably thin, even with two small bulges on the ends of the arms.
Nuance Audio, an assistive glasses manufacturer, takes another approach by focusing entirely on medically cleared hearing aid technology, plus long battery life. Size isn’t an issue; they look like a regular pair of glasses.
But the components could shrink further. I got a look at extremely small speakers on custom semiconductor chips made by xMems Labs that, in demos, sounded as good as everyday headphones. These smaller chips could shrink the arms of audio-equipped smart glasses, says Mike Housholder, vice president of marketing for xMems. They could also offer cooling, since these little solid-state speakers are basically tiny air pumps.
The goal for the weight of smart glasses seems to be between 25 and 50 grams, the range of what non-smart glasses weigh. Nuance Audio felt confident its 36-gram weight is in line with what a standard pair of glasses should weigh; the G2 glasses from Even Realities weigh the same. xMems quoted me a similar weight goal for smart glasses. Meta Ray-Ban Displays tip beyond this, at about 70 grams, while the display-free Ray-Ban smart glasses are around 50 grams.
Meanwhile, expectations keep increasing for what a pair of smart glasses should have in the first place.
Something like a true Tony Stark pair of augmented reality glasses would be super bulky — witness Meta’s full-featured, eye-tracking-equipped, 3D display-enabled Orion prototype — but there’s hope the tech will keep shrinking. A pair of TCL RayNeo X3 Pro glasses I just started testing feels heftier and more “techie” than most smart glasses, yet at around 80 grams is also relatively compact. And that’s with dual displays and 3D graphics, plus cameras onboard.
The stubbornest challenge for any smart glasses that want to be stylishly sleek and lightweight? Battery life. Some glasses that are light on features — Nuance, Even Realities — last a full day on a charge. Meta’s Ray-Bans have gotten to six hours or more, its more computing-intensive Ray-Ban Displays only last a couple of hours, and its live AI modes, which tap into a continuous camera connection, conk out after an hour at most. Snap’s full-AR Spectacles, a developer model for glasses expected next year, currently only last 45 minutes.
There are a lot of compromises at the moment, but a full day of use seems like the necessary goal post.
Assistive dreams and lens challenges
I’ll tell you my biggest worry: A lot of today’s VR headsets and glasses don’t work for everyone who wears prescription eyewear. I have pretty severe myopia and also need progressive lenses for reading. I’m around a -8. It turns out that’s sort of a breaking point for a lot of current smart eyewear and headsets, whose lenses tend to max out near -7.
VR headsets have started offering a wider range of prescription inserts, but smart glasses are another story. Meta’s Ray-Bans don’t officially support prescriptions beyond +6/-6, although I’ve fitted a higher-index set of lenses into mine. The more advanced Ray-Ban Displays only support a range of +4/-4, largely because the new waveguide technology can’t accommodate it yet.
But there are signs of hope. Even Realities supports a much wider range of prescriptions up to -12/+12, and so does Nuance. Other smart glasses manufacturers are leaning on inserts. I use pop-in lenses on the Xreal and Viture display glasses and TCL RayNeo X3 Pro glasses, and magnetic clip-on lenses on Rokid glasses. The result is sort of weird, but at least functional.
I’m hopeful more prescription support is around the corner.
Schott’s Sprengard tells me it’s entirely feasible to make higher-index lenses with more advanced waveguides like Meta is using. “The technical complexity to solve eye correction is rather straightforward compared to the challenges to making [our] waveguide.”
To work, though, the layering of prescription lenses onto the glass that holds the waveguide needs to be properly tested and cleared by agencies like the FDA. “It’s a logistical challenge,” Sprengard says.
Even Realities already supports a wider range of prescriptions with its display tech, seeing the lens issue as the most important problem to solve. Figuring that out is what’s needed to make smart glasses appealing for those who’ll be “wearing it for 16 hours a day,” says Wang.
Crack that problem and you’ll have a ready-made clientele.
“We think people who wear glasses daily will be the first group of people to adopt smart glasses because they’re comfortable with glasses on their face,” he says.
Glasses have always been technology designed to improve your eyesight. Some of the assistive functions of smart glasses are embryonic, but others are surprisingly advanced.
Meta’s audio-equipped Ray-Bans are already visual aids for some, including the father of one of my closest college friends. He started wearing a pair to help him read things he can’t read, or to describe things he can’t see. Those Ray-Bans use camera-aware AI to snap photos and then analyze what’s in the image, much like Gemini and other phone-based AI platforms can.
Meta is integrating its smart glasses with Be My Eyes, a Danish vision assistance app that can tap into smart glasses to help people see what’s around them, sharing the feed with a live volunteer who can help.
“The [Meta] glasses have been a game changer for me,” my friend’s father, who lost his vision to retinitis pigmentosa, says over a text message sent from his glasses. “I can look at a menu and the glasses will read it to me instead of having someone else read it to me. I can answer the phone on the fly, which means I miss fewer calls. The glasses give me more independence.”
But unexpectedly, it’s in hearing assistance more than vision assistance that smart glasses may have their first big medically cleared breakthrough. Wearables have already gotten FDA clearance as hearing aids. Apple’s AirPods Pro earbuds do it, and so do Nuance’s glasses, which are made for hearing aid functions and nothing else.
I’ve tried Nuance’s glasses, which use beam-forming microphones to enhance sounds coming from in front of whoever’s wearing the pair, filtering out noise from other parts of the room. While I can’t fully appreciate the impact for someone with significant hearing loss, I can say they did effectively isolate what I needed to hear. Even better, they just look like glasses.
Nuance has a unique perspective in that it’s providing a medical solution, says Mike Dondero, the company’s vice president of business. At the same time, it recognizes the consumer-focused imperative to make its glasses both comfortable to wear and durable, leaving out extra smart features to make that happen.
“You can imagine the tradeoff that we have to find between how many features to allow the wearer to gain the benefit of amplification for 8 hours or 6 hours in a noisy place,” he says. “It has been super hard.”
Privacy and safety questions galore
So now we get to the elephant in the room: privacy. With smart glasses as vessels for AI, there are massive questions about how companies will responsibly handle the collection of data as you move through the world, how they’ll make others aware you’re collecting it and how they’ll securely store and share it.
Meta is a prime source of concern, given its dismal track record with web and phone apps. I’ve reviewed and recommended Meta’s Quest VR devices for years, but because those mostly play games and aren’t worn all the time — and don’t process lots of real-world camera data via AI — they haven’t been as worrisome. But Meta’s increasingly capable smart glasses are made to be aware of your surroundings and help you understand them.
That raises a host of immediate concerns. Are the glasses recording things around you without anyone else knowing? Are they doing it without you knowing, too? Even worse, some people have already found ways to mod Meta’s glasses and remove the LED indicator that shows when they’re recording.
Beyond that, the generative AI feeding you information through your glasses may not be entirely trustworthy — the technology has well-documented issues with hallucinations, bias and sycophancy. It gets things wrong a lot, no matter what glasses I wear. I don’t know how much I want to rely on it.
There are basic safety issues, too, especially when you’re in motion, whether on foot, on a bike or in a car. Smart glasses with displays often throw images in front of your eyes at random times — potentially dangerous distractions. While most let you turn off the displays or switch to a driving mode, those safeguards aren’t on by default.
Also, on your phone, you can choose which AI apps to install or whether to install them at all. But with smart glasses, you’ll likely be locked into a single, unavoidable AI.
There need to be more options to let people select what AI services to add or remove, and phone controls to better manage how they’re collecting and sharing data. And it all needs to be clearer and better laid out. I currently manage smart glasses via piecemeal phone apps with hidden device settings and confusing relationships to limited phone hook-ins, like Bluetooth or location-sharing toggles. It’s squirrely, even for a seasoned tech reviewer like me.
One big problem is that phone-makers like Apple limit the ways glasses can connect with phones. Google is trying to break down those barriers with Android XR, which Even Realities’ Wang describes as a work in progress.
“All the services we’re providing still need to be run on the [phone] app, so the app always needs to be running in the background,” he says. “If you kill the app, you kill the brain of the glasses.”
If smart glasses are ever going to end up on more faces, it can’t feel this haphazard, this weird to set up and connect. Smartwatches figured it out. Glasses can, too.
“I hope, and think, that as the smart glasses industry evolves, there will be platforms or standards,” Wang says.
Where smart glasses go next
In that demo I did just a week ago, when I put Google and Xreal’s Project Aura on my face, I saw how far glasses could go. A Windows PC monitor floated to the left of me, a YouTube video on the right. I multitasked, running apps side by side, scrolling and clicking with taps of my fingers in the air. Then I loaded up Demeo, a 3D role-playing game for VR, which floated in the room in front of me as I used my hands to pick up pieces and play cards from my hand.
Project Aura is a testbed for how glasses could replace VR and mixed-reality headsets, and maybe all our big screens, too. A pair of folding glasses and a phone-sized processor can run everything. Much like Meta’s Orion prototype, they’re true augmented reality. While they can’t be worn on your face as everyday glasses all the time, and they don’t work with your phone yet, they’re another step toward that moment.
“Maybe in three to five years, you pull out your phone and then you connect your glasses with it, and you have a brand new kind of experience,” Xreal’s founder and CEO, Chi Xu, says.
That future is making its way toward us. In a kitchen at Snap’s New York headquarters this fall, I got a peek at software dreaming up how AI could start offering live instructions overlaid on our world. I saw step-by-step instructions, drawn and typed in the air over a coffee machine and a refrigerator: in-glasses generative AI assistance in live graphic form.
Bobby Murphy, Snap’s CTO, tells me he envisions blocks of swappable AI tools that could let people create on the fly, making custom mini-apps Snap calls Lenses — something beyond what today’s apps can do.
Snap, which has made smart glasses for years, is aiming for its next-gen consumer pair of AR smart glasses to go on sale next year. CEO Evan Spiegel says these glasses will be something you can wear everywhere, which is great — but the prototype developer glasses I tested still only have a 45-minute battery life.
But one thing’s clear: By the end of 2026, we’re going to see a lot more smart glasses — in the shops where we buy our everyday glasses, on the faces of fashion models and influencers, and praised by people who find them essential as assistive tools. We’ll be trying them out as portable movie theaters, vacation glasses or personal wearable cameras.
Still, as I look at the glasses scattered across my desk, I can’t help remembering the long path of smartwatches, those days of excitement over wearables made by Misfit, Jawbone, Pebble and Basis.
Many of them are gone now.
Will it be the same with smart glasses? Probably so. But the companies that survive will have figured out how to make high-tech eyewear that I’ll really want on my face all the time, that I’ll be able to wear all the time. With my prescription. Without needing constant recharging.
Pebble founder Eric Migicovsky wears Meta’s Ray-Bans as sunglasses — and takes them off when he goes inside. “Meta Ray-Bans are great,” he says, “but everything else is not even at smartwatches in 2014.”
We’re not there yet. But I think we’re getting awfully close.
Visual Design and Animation | Zain bin Awais
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Camera Operator | Numi Prasarn
Video Editor | JD Christison
Project Manager | Danielle Ramirez
Editor | Corinne Reichert
Director of Content | Jonathan Skillings
