The Future of AI Devices: From Smartphones to Wearable Companions

The CEO of Qualcomm, Cristiano Amon, recently shared some fascinating insights about the future of artificial intelligence at the World Economic Forum in Davos. He believes the market opportunity for AI devices is enormous, potentially reaching 10 billion units. That’s even larger than the current global smartphone market, which sells about 1.2 billion units a year. This prediction points to a major technological shift on the horizon.

Amon draws a parallel to the smartphone revolution. Phones started as simple communication tools but evolved into pocket-sized computers with the advent of mobile internet. He argues AI is at a similar turning point. As computers begin to truly understand our speech, images, and text, the nature of the device we interact with will change. The future is not just about carrying a computer, but about wearing one. We will move from “carryables” to “wearables” like smart glasses, rings, watches, or bracelets. The key driver for this is the AI agent—a digital assistant that needs to be with us 24/7 to be truly useful.

But why wearables? Why not just put advanced AI into our already-powerful smartphones? Amon’s answer touches on human psychology and habit. We are already accustomed to wearing items like jewelry and glasses; they are part of our identity and self-expression. Integrating AI into these familiar forms is a more natural evolution than asking people to adopt entirely new gadgets. It’s easier to add smart functions to a glasses frame we already know than to invent a completely new object.

Of course, smart glasses are just one possibility. The future might involve a combination of devices—smart glasses, desktop assistants, bedside hubs—all working together. The core requirement is that the AI service understands our context and provides timely, personalized help. The ideal interaction is seamless. Amon gives an example: if you’re walking down the street and wonder who someone is, the AI should recognize the person and tell you, rather than making you pull out your phone and search manually. Glasses have a unique advantage here, as they move with our field of view and place microphones close to our mouths, allowing the AI to see and hear the world as we do.

Other form factors like smart headphones with cameras, earrings, or necklaces could also become popular, especially for fashion-conscious users. This leads to an important balance: functionality versus aesthetics. Will people choose a powerful but clunky device or a stylish but less capable one? Amon believes the market will eventually find an equilibrium where devices are both highly functional and visually appealing.

The competitive landscape is taking shape with companies like Meta, Google, Apple, and even OpenAI exploring hardware. However, Amon suggests the real battle won’t be about who launches the first product, but who builds the most complete and effective AI ecosystem. A crucial part of this is “edge” computing—processing data on the device itself, not just in the cloud. For an AI to be truly personal and understand your specific context (say, whether you’re in a business meeting or at a family dinner in a restaurant), it needs access to real-time sensor data from your wearable device. This edge capability is key for privacy, speed, and providing relevant assistance.
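To make the edge-versus-cloud idea concrete, here is a minimal sketch of the kind of routing policy such a device might use: requests that involve raw sensor data or need an instant answer stay on the device, while everything else can fall back to a larger cloud model. The function names and the policy itself are hypothetical illustrations, not any vendor’s actual API.

```python
# Toy illustration of an edge-vs-cloud routing policy for an AI wearable.
# All names (Request, run_local_model, call_cloud_model, route) are
# hypothetical placeholders, not a real SDK.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    contains_sensor_data: bool   # e.g., camera frames or microphone audio
    needs_instant_reply: bool    # e.g., live translation or turn-by-turn guidance


def run_local_model(req: Request) -> str:
    # Placeholder for an on-device model: keeps raw sensor data on the device
    # and avoids network round-trip latency.
    return f"[on-device answer to: {req.prompt}]"


def call_cloud_model(req: Request) -> str:
    # Placeholder for a larger cloud model: more capable, but slower and
    # requires sending data off the device.
    return f"[cloud answer to: {req.prompt}]"


def route(req: Request) -> str:
    # Privacy and latency push processing to the edge; everything else
    # can go to the cloud.
    if req.contains_sensor_data or req.needs_instant_reply:
        return run_local_model(req)
    return call_cloud_model(req)


if __name__ == "__main__":
    print(route(Request("Who is the person in front of me?", True, True)))
    print(route(Request("Summarize this week's news.", False, False)))
```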

Amon acknowledges that current smart assistants (like Alexa or Gemini) haven’t yet lived up to the dream of a truly intelligent companion, partly because the technology is still maturing. He is optimistic, though, citing rapid advances in AI models. Another challenge is computational power: tasks that need an instant response, such as real-time language translation or navigation while skiing, must be handled locally on the device to be practical.

The discussion also touched on AI PCs. While they have been heavily marketed, consumer interest has been lukewarm so far. Amon notes that current successful devices are valued for longer battery life and performance first, with AI as a bonus feature. For businesses, however, the value proposition is clearer: running AI tasks locally on a PC can drastically reduce costs compared to constantly calling on cloud services, making enterprise software more economical.
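A rough back-of-the-envelope calculation shows why that enterprise argument is compelling. Every figure below is an assumption invented for illustration (API prices, usage patterns, and hardware premiums vary widely); the shape of the comparison is the point: cloud costs scale with every request, while on-device inference is a one-time hardware premium amortized over the machine’s life.

```python
# Back-of-envelope comparison of cloud vs. on-device inference costs for a
# business deploying an AI assistant. Every number is an illustrative
# assumption, not a quoted price.

CLOUD_COST_PER_1K_TOKENS = 0.01       # assumed API price in dollars
TOKENS_PER_REQUEST = 2_000            # assumed average request size
REQUESTS_PER_EMPLOYEE_PER_DAY = 50    # assumed usage
EMPLOYEES = 200
WORKDAYS_PER_YEAR = 250

AI_PC_PREMIUM = 300                   # assumed extra hardware cost per seat
PC_LIFETIME_YEARS = 3

# Cloud: pay per token, every day, forever.
yearly_cloud = (EMPLOYEES * REQUESTS_PER_EMPLOYEE_PER_DAY * WORKDAYS_PER_YEAR
                * TOKENS_PER_REQUEST / 1_000 * CLOUD_COST_PER_1K_TOKENS)

# Local: pay a hardware premium once, spread over the PC's lifetime.
yearly_local = EMPLOYEES * AI_PC_PREMIUM / PC_LIFETIME_YEARS

print(f"Cloud inference:  ${yearly_cloud:,.0f} per year")
print(f"Local inference:  ${yearly_local:,.0f} per year (amortized hardware)")
```

Under these invented numbers the cloud bill comes to $50,000 a year against roughly $20,000 for the amortized AI PC premium, and the gap widens the more the assistant is actually used.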

Qualcomm itself is making a strategic move into the AI data center server market, specifically for “inference” chips (the phase where a trained AI model answers questions or performs tasks). Amon argues that once AI models are deployed, the demand for inference processing will be massive and continuous, far outstripping the initial training phase. He believes Qualcomm’s expertise in designing low-power, efficient chips for mobile devices gives it an edge in creating energy-saving solutions for data centers.

Looking at broader applications, Amon sees AI revolutionizing fields like industrial training and robotics. An AI assistant could guide a factory worker through complex procedures in real time, reducing errors. In robotics, AI can move machines beyond pre-programmed actions to adaptive, learning systems suitable for flexible manufacturing.

On the global stage, Amon offered a balanced view of China’s role in AI development, acknowledging its significant investments and progress in applications, while noting ongoing challenges in core research and chip design. He emphasized that global cooperation, not isolation, will best advance AI technology for everyone.

In his concluding thoughts, Amon stressed that AI is a tool to augment human capability, not replace it. The goal is to enhance our lives. He acknowledges philosophical concerns about over-reliance on AI potentially dulling human skills, but advocates for a balanced approach—using AI to aid our thinking while maintaining our own critical judgment. The adoption of AI devices will be a gradual process, requiring improvements in technology, user education, and the development of compelling use cases, much like the evolution of the smartphone. The future is about creating technology that works for us, seamlessly and helpfully.

As a developer, the enterprise angle for AI PCs is spot-on. Shifting inference costs from the cloud to the local device could completely change software economics. It could make powerful AI features affordable for small businesses. That’s a tangible, near-term benefit that gets overlooked in all the consumer gadget talk.

I’m skeptical about the 10 billion device number. That’s pure corporate hype. The smartphone succeeded because it replaced a dozen other things (camera, map, music player). What exactly is the “killer app” for AI glasses that my phone can’t do? Until they answer that with something more concrete than “contextual awareness,” this is just techies dreaming.

This is incredibly exciting! Finally, someone is talking sense about the natural next step for AI. Sticking it in glasses and jewelry we already wear is genius. It feels so much more intuitive than another screen to stare at. I can’t wait for the day my glasses can translate signs in real-time or remind me of someone’s name at a party. This is the future we were promised!