
Qualcomm’s new Snapdragon 8 Gen 1 adds an always-on camera

Diane Davis

After years of work by Apple and Google to ensure that no app on your phone can turn on its cameras without you knowing it, the company behind the chipsets in most Android smartphones now wants to keep the front camera on all the time.

Qualcomm announced this “always-on camera” feature as a component of the new Snapdragon 8 Gen 1, a mobile chip platform that will ship in some smartphones by the end of the year, at its Snapdragon Tech Summit in Waimea, Hawaii. And executives with that San Diego company want you to welcome this lidless electronic eye as a privacy upgrade.

“We have a vision for the always-on camera to enhance privacy and security,” said Judd Heape, a product-management vice president, in a keynote on Tuesday afternoon. “By having a camera always on, we can make sure you’re always in front of the camera and in charge of the content.”

For example, he explained, a phone with this feature enabled could lock the screen automatically if the user’s face suddenly vanishes from view because a thief has grabbed the device. It could blank the screen if it detects a second face appearing behind you, a sign of a shoulder-surfing attempt, but only hide your notifications if a second face pops up next to you, on the assumption that you’re trying to share a photo, video, or some other morsel of content.

And if you love to cook but don’t like having your phone lock automatically once your fingertips are coated in too much flour or butter to use a fingerprint sensor, the always-on camera can keep the phone’s screen open for you as long as you glance at it often enough.
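Taken together, these examples amount to a simple decision policy sitting on top of face detection. Qualcomm hasn’t published code for any of this, so the sketch below is purely illustrative: every name in it is invented, and the real logic would run inside the chip’s Sensing Hub rather than in an app.

```python
# Purely illustrative sketch of the policy described above; the names here are
# invented and this is not Qualcomm's implementation.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NOTHING = auto()
    LOCK_SCREEN = auto()          # owner's face vanished (possible theft)
    BLANK_SCREEN = auto()         # second face behind the owner (shoulder surfing)
    HIDE_NOTIFICATIONS = auto()   # second face beside the owner (sharing the screen)
    KEEP_AWAKE = auto()           # owner still glancing at the phone (floury hands)

@dataclass
class FrameState:
    owner_face_present: bool
    owner_looking_at_screen: bool
    second_face_behind: bool
    second_face_beside: bool

def decide(state: FrameState) -> Action:
    """Map one frame's worth of face-detection results to a screen action."""
    if not state.owner_face_present:
        return Action.LOCK_SCREEN
    if state.second_face_behind:
        return Action.BLANK_SCREEN
    if state.second_face_beside:
        return Action.HIDE_NOTIFICATIONS
    if state.owner_looking_at_screen:
        return Action.KEEP_AWAKE
    return Action.NOTHING

# Example: a cook glancing at a recipe with no one else in view keeps the screen awake.
print(decide(FrameState(True, True, False, False)))   # Action.KEEP_AWAKE
```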

But early reactions, including some skeptical feedback from journalists, suggest the always-on camera instead could be a plot point in a dystopian Dave Eggers novel about Big Tech. It’s not hard to understand why–especially if you ponder how often you glance at your phone while on a toilet or just after you get out of a shower.

As PCMag’s Sascha Segan put it during a Q&A session on Wednesday: “Every non-technical person I’ve mentioned this to is completely freaked out by it.”

Qualcomm’s response so far has been to point to the locked-down nature of the Sensing Hub on the Gen 1 chipset that handles this task.

“We know that this is going to create some anxiety,” Heape said in an interview Tuesday before launching into an explanation of how Sensing Hub is isolated from the rest of the Snapdragon chipset and any other applications.

“The data never leaves this part of the chip,” he said. “There’s just a slight bit of local memory that is used while the face is processed, then that cache is basically flushed.”

Heape added that this low-powered system only does facial detection, not recognition. Unlike systems like Apple’s Face ID or Microsoft’s Windows Hello, it does not attempt detailed biometric matching. He did not, however, demo the always-on camera for me.

The always-on camera feature will represent yet another potential target for attackers, complicating a smartphone security scenario already tangled with adversaries and assets. As Saritha Sivapuram, a Qualcomm senior director of product management, acknowledged in an interview Wednesday: “The threat model itself has increased significantly.”

As Apple’s recent lawsuit against the Israeli spyware firm NSO Group spotlights, sufficiently determined and capable attackers can already take over a phone’s camera remotely. In the same interview, Sivapuram’s colleague Asaf Shen offered a caveat common among realistic security professionals: “Well-funded government organizations will always find a way.”

‘It goes down to convenience’

Two industry analysts (and Fast Company contributors) at the conference suggested that if the always-on camera actually saves people time or embarrassment, they won’t obsess over potential downsides.

“There’s been some precedent,” said Ross Rubin, founder and principal analyst with Reticle Research, who likened the technology to its audio equivalent in products such as Amazon’s Alexa-powered Echo speakers. “Many people have devices in their homes with always-on microphones.”

He also cited Samsung’s ‘Smart Stay,’ which used the camera to keep the phone on if it sensed you were looking at it—but without its own fortified hardware to keep the feature secure.

Carolina Milanesi, president and principal analyst at Creative Strategies, said phone vendors that elect to support this feature should take a no-surprises approach.

“Be transparent as to what is the camera able to see, who gets access to that information, and what you do with it,” she said. After that, she added: “I think it goes down to the convenience.”

(Heape said Tuesday that “a few” phone vendors will enable the always-on camera but did not name them; Qualcomm’s announced list of companies shipping phones based on the Snapdragon 8 Gen 1 includes Motorola, OnePlus, and Sony but not yet Android smartphone kingpin Samsung.)

Both analysts offered rebranding advice to clarify that the always-on camera doesn’t record or remember. Rubin suggested “always-active,” while Milanesi preferred “always-ready.”

Milanesi also suggested that one particular smartphone company that definitely won’t use Qualcomm’s Snapdragon 8 Gen 1 might have better luck with a feature like this.

“For Apple, it might be easier,” she said. “If they rolled something like this out, you’re already talking to a base that believes that Apple is pro-security and pro-privacy.”

But whatever the brand name on a future phone incorporating Qualcomm’s always-on camera, history suggests that a lot of people will take the opportunity to save a tiny bit of time on a regular basis as they use their phones.

As Rubin put it: “Very often, we hear about technologies that sound as if they have a lot of potential for abuse, and yet they are accepted and they become very mainstream.”

(Disclosure: Qualcomm paid for my lodging and airfare, along with the travel costs of most of the journalists and analysts covering this invitation-only event.)


This self-driving startup is using generative AI to predict traffic

Diane Davis

A diptych view of the same scene via camera and lidar.

While autonomous driving has long relied on machine learning to plan routes and detect objects, some companies and researchers are now betting that generative AI — models that take in data about their surroundings and generate predictions of what will happen next — will help bring autonomy to the next stage. Wayve, a Waabi competitor, released a model of this kind last year, trained on the video its vehicles collect.

Waabi’s model, called Copilot4D, works in a similar way to image or video generators like OpenAI’s DALL-E and Sora. It takes point clouds of lidar data, which map the car’s surroundings in 3D, and breaks them into chunks, similar to how image generators break photos into pixels. Based on its training data, Copilot4D then predicts how all the points of lidar data will move. Doing this continuously allows it to generate predictions 5 to 10 seconds into the future.
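Waabi hasn’t released Copilot4D’s code, so the sketch below is only an assumed outline of the tokenize-then-predict loop described above: lidar points get discretized into a coarse grid (the rough analogue of breaking an image into pixels), and a stand-in predictor is rolled out autoregressively where the learned model would sit.

```python
# Illustrative sketch, not Waabi's code: discretize lidar point clouds into coarse
# occupancy grids ("tokens"), then roll a predictor forward frame by frame.
import numpy as np

def voxelize(points, grid_size=32, extent=50.0):
    """Turn a lidar point cloud of shape (N, 3) into a coarse occupancy grid."""
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.uint8)
    # Map x, y, z coordinates in [-extent, extent] meters onto voxel indices.
    idx = ((points + extent) / (2 * extent) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

def predict_next(history):
    """Stand-in for the learned sequence model: given past grids, return the next.
    Here it just repeats the last frame; a real model would predict motion."""
    return history[-1].copy()

def rollout(history, steps=10):
    """Autoregressively generate future frames, feeding each prediction back in."""
    frames = list(history)
    for _ in range(steps):
        frames.append(predict_next(frames))
    return frames[len(history):]

rng = np.random.default_rng(0)
past = [voxelize(rng.uniform(-50.0, 50.0, size=(2048, 3))) for _ in range(3)]
future = rollout(past, steps=10)   # e.g. ten predicted frames ahead of the present
print(len(future), future[0].shape)
```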

Waabi is one of a handful of autonomous driving companies, including competitors Wayve and Ghost, that describe their approach as “AI-first.” To Waabi founder and CEO Raquel Urtasun, that means designing a system that learns from data rather than one that must be taught reactions to specific situations. The cohort is betting that their methods might require fewer hours of road-testing self-driving cars, a charged topic following an October 2023 accident in which a Cruise robotaxi dragged a pedestrian in San Francisco.

Waabi is different from its competitors in building a generative model for lidar, rather than cameras. 

“If you want to be a Level 4 player, lidar is a must,” says Urtasun, referring to the automation level where the car does not require the attention of a human to drive safely. Cameras do a good job of showing what the car is seeing, but they’re not as adept at measuring distances or understanding the geometry of the car’s surroundings, she says.

Though Waabi’s model can generate videos showing what a car will see through its lidar sensors, those videos are not used as training data for the driving simulator the company uses to build and test its driving model. That’s to ensure that any hallucinations arising from Copilot4D don’t get taught in the simulator.

The underlying technology is not new, says Bernard Adam Lange, a PhD student at Stanford who has built and researched similar models, but it’s the first time he’s seen a generative lidar model leave the confines of a research lab and be scaled up for commercial use. A model like this would generally help make the “brain” of any autonomous vehicle able to reason more quickly and accurately, he says.

“It is the scale that is transformative,” he says. “The hope is that these models can be utilized in downstream tasks” like detecting objects and predicting where people or things might move next.

Methane leaks in the US are worse than we thought

Diane Davis

Methane emissions are responsible for nearly a third of the total warming the planet has experienced so far. While there are natural sources of the greenhouse gas, including wetlands, human activities like agriculture and fossil-fuel production have dumped millions of metric tons of additional methane into the atmosphere. The concentration of methane has more than doubled over the past 200 years. But there are still large uncertainties about where, exactly, emissions are coming from.

Answering these questions is a challenging but crucial first step to cutting emissions and addressing climate change. To do so, researchers are using tools ranging from satellites like the recently launched MethaneSAT to ground and aerial surveys. 

The US Environmental Protection Agency estimates that roughly 1% of oil and gas produced winds up leaking into the atmosphere as methane pollution. But survey after survey has suggested that the official numbers underestimate the true extent of the methane problem.  

For the sites examined in the new study, “methane emissions appear to be higher than government estimates, on average,” says Evan Sherwin, a research scientist at Lawrence Berkeley National Laboratory, who conducted the analysis as a postdoctoral fellow at Stanford University.  

The data Sherwin used comes from one of the largest surveys of US fossil-fuel production sites to date. Starting in 2018, Kairos Aerospace and the Carbon Mapper Project mapped six major oil- and gas-producing regions, which together account for about 50% of onshore oil production and about 30% of gas production. Planes flying overhead gathered nearly 1 million measurements of well sites using spectrometers, which can detect methane using specific wavelengths of light. 

(Chart: Sherwin et al., Nature)

Here’s where things get complicated. Methane sources in oil and gas production come in all shapes and sizes. Some small wells slowly leak the gas at a rate of roughly one kilogram of methane an hour. Other sources are significantly bigger, emitting hundreds or even thousands of kilograms per hour, but these leaks may last for only a short period.

The planes used in these surveys mostly detect the largest leaks, above roughly 100 kilograms per hour (though they sometimes catch smaller ones, down to around one-tenth that size, Sherwin says). By combining measurements of these large leaks with modeling to estimate smaller sources, researchers found that the largest leaks account for an outsize share of emissions. In many cases, around 1% of well sites can make up over half the total methane emissions, Sherwin says.
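To see why such a small share of sites can dominate the total, and why a survey that mostly catches leaks above 100 kilograms per hour still captures much of it, here is a small simulation with assumed numbers (a lognormal leak-rate distribution, not the study’s actual data):

```python
# Toy simulation with assumed numbers, not the study's data: a heavy-tailed
# (lognormal) spread of leak rates lets ~1% of sites dominate total emissions.
import numpy as np

rng = np.random.default_rng(42)

n_sites = 100_000
# Median leak ~1 kg/h, but a long tail reaching hundreds or thousands of kg/h.
leak_rates = rng.lognormal(mean=0.0, sigma=2.5, size=n_sites)  # kg of methane/hour

total = leak_rates.sum()

k = n_sites // 100                              # the top 1% of sites
top_share = np.sort(leak_rates)[-k:].sum() / total

detectable = leak_rates[leak_rates >= 100.0]    # what an aerial survey mostly sees
detectable_share = detectable.sum() / total

print(f"Top 1% of sites: {top_share:.0%} of simulated emissions")
print(f"Sites leaking >=100 kg/h: {len(detectable):,} sites, "
      f"{detectable_share:.0%} of simulated emissions")
```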

But some scientists say that this and other studies are still limited by the measurement tools available. “This is an indication of the current technology limits,” says Ritesh Gautam, a lead senior scientist at the Environmental Defense Fund.

The Download: What social media can teach us about AI

Diane Davis

June 2023

Astronomy should, in principle, be a welcoming field for blind researchers. But across the board, science is full of charts, graphs, databases, and images that are designed to be seen.

So researcher Sarah Kane, who is legally blind, was thrilled three years ago when she encountered a technology known as sonification, designed to transform information into sound. Since then she’s been working with a project called Astronify, which presents astronomical information in audio form. 

For millions of blind and visually impaired people, sonification could be transformative—opening access to education, to once unimaginable careers, and even to the secrets of the universe. Read the full story.

—Corey S. Powell
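As a rough illustration of the general idea behind sonification, mapping data values to pitch so that, say, a dip in a star’s light curve becomes an audible drop in tone, here is a minimal sketch; it does not use Astronify’s API, and every name in it is made up.

```python
# Minimal sonification sketch (illustrative only, not Astronify's API): each data
# point in a light curve becomes a short tone whose pitch tracks its brightness.
import math
import wave

import numpy as np

def sonify(values, out_path="lightcurve.wav", sample_rate=44100,
           note_seconds=0.15, low_hz=220.0, high_hz=880.0):
    """Write a mono 16-bit WAV where brighter values map to higher pitches."""
    values = np.asarray(values, dtype=float)
    span = float(np.ptp(values)) or 1.0          # avoid dividing by zero
    norm = (values - values.min()) / span        # normalize to [0, 1]
    freqs = low_hz + norm * (high_hz - low_hz)   # linear map onto a pitch range

    n = int(sample_rate * note_seconds)
    t = np.arange(n) / sample_rate
    audio = np.concatenate([0.5 * np.sin(2 * math.pi * f * t) for f in freqs])

    with wave.open(out_path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes((audio * 32767).astype(np.int16).tobytes())

# A fake light curve: steady brightness with a transit-like dip in the middle.
curve = np.ones(60)
curve[25:35] = 0.7
sonify(curve)
```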

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s time to get into metal detecting (no really, it is!)
+ Meanwhile, over on Mars
+ A couple in the UK decided to get married on a moving train, because why not?
+ Even giant manta rays need a little TLC every now and again.

