
5 Amazing Google Ads Tools You Need to Use


Google Ads can be overwhelming in its scope.

It’s a versatile and competitive medium, and the wealth of tools can create two situations:

  • It feels like too much to learn, so new things are avoided.
  • The tools aren’t easy to spot.

As the platform continues to evolve, so do the tools Google provides!

Some save time, some give you insights faster, and others can help expand your Google Ads efforts in ways that are automated or backed by data.

Here are five tools in the Google Ads platform to try if you haven’t already.

Some are newer, some are oldies but goodies, and all are worth a look.

1. Ad Variations

As most PPCers will tell you, ad testing is one of the core best practices to get the best performance possible.

Once upon a time, ad testing was a far more frustrating experience. While you could create multiple versions of a text ad, many buyers felt Google would pick a winner too quickly.

It was also labor-intensive to create several versions, pull reporting at scale, and generally get insights quickly.

The ad creative types in search have evolved since that time, and so have the tools for making them.

The original text ad format changed to expanded text ads, providing more space to test copy.

Google also launched RSAs (responsive search ads) to quickly test combinations of copy automatically, and new tools devoted to ad testing started to appear.

One of these tools is the Ad Variations tool. Located in the Drafts and Experiments area, this feature allows brands to create variations on their ads faster.

Brands can choose to run ad variations for the whole account, specific campaigns, or even a custom scope. Then they specify the part of the ad they want to run the variant for.


Users can then specify the type of variation they want to run, including finding and replacing text, swapping headline order, or updating text altogether.

Once the experiment launches, results are shown and monitored in the Ad Variations area.

2. Audience Observation

For a long time, Google usually lost out to Facebook when advertisers thought about reaching users based on demographics.

Need to reach female runners in their 40s? While they are certainly searching on Google, there wasn’t an easy way to segment them or see if their searches were more valuable.

This made it difficult for advertisers to understand whether certain audience types performed better, even when searching the same terms as another demographic.

It also limited what advertisers could learn about other possible audiences they could be targeting.

The option to observe Audiences helps close these gaps!

Advertisers can add a host of Google-defined audiences to their efforts to observe their performance relative to one another.


Based on that data, bids can be adjusted specifically for the demographic.


Google will automatically suggest Audiences in the “Ideas” section.

This is a good way to jumpstart the process, and advertisers can also search based on things like affinity audiences, in-market buyers, or other demographic markers.


3. Responsive Search Ads

Responsive Search Ads (commonly called RSAs) are another time-saver in ad testing.

Unlike traditional text ads where the advertiser creates separate, distinct versions of ads, RSAs employ a mix-and-match philosophy.

The ad copy is treated as several separate assets that Google then merges together to create a cohesive ad.


Once RSAs are live, clicking the “View asset details” link that appears just below the ad copy shows results.


Users can view the individual asset stats and the results by combination of text.

This quick and easy testing setup gives insights about verbiage users respond to best, and lets advertisers test combinations faster than manually creating every version they would need.

One thing to look out for is the inputs an advertiser provides.

All the copy lines must be able to mix and match without seeming awkward when they’re put together.

Make sure to think through and vet your copy options with that in mind before you start running them.

4. Discovery Campaigns

Unlike search, Discovery campaigns are visually-rich ad units shown based on user activity.

They are a departure both in their format and in how they’re targeted.

They are display-like in how they render to the user, combining imagery and a headline.

Also, like display, they are primarily used as a top-of-funnel way to catch a user’s eye and generate interest in a product.

Discovery ads appear across the Google Discover Feed (shown on the home page of the Google app or the Google.com homepage on mobile), YouTube home feed, and also Gmail.


Targeting uses the familiar interfaces and options for specifying Audiences found in Google’s other campaign types.


Discovery Campaigns are largely automated by Google.

Advertisers can’t make the adjustments they’re used to in search for things like device adjustments, ad rotation, or frequency capping.

There are still options for exclusions to help with brand safety, like keeping advertiser content away from things like violence or profanity.

5. Explanations Feature

It’s the bane of a PPC manager’s existence: sudden, unexpected changes in performance.

Diagnosing these instances can lead to a lot of digging, pivot table-ing, and sweating!

The Explanations feature is here to help.

Currently in beta, it provides insights into Search campaigns and why their performance may have changed.

Data can be analyzed for impressions, clicks, costs, and conversions for up to a 90-day window of time.

The Explanations feature works by comparing one date range’s results to another.

Doing this typically shows you a percentage change between the two dates.

If there’s an Explanation available, that difference turns blue like a hyperlink.


Clicking the link will bring up a small window, with the option to go to a larger Explanation.

It will note the primary driver of the change, along with contributing factors.

In this example, there was an increase in impression volume.

The details reveal there was an increase in search volume, and Google outlines the terms that specifically saw the increase.


This is a far cry from looking in search history, digging through search terms manually, sorting, and crunching data.

Source: https://www.searchenginejournal.com/google-ads-tools/374962/#close


The PS5 is the weirdest thing I’ve ever seen in my life


Before I talk about the PlayStation 5, I want to talk about the time I interviewed The Big Show.

About a decade ago, in another lifetime, I interviewed The Big Show, a professional wrestler who, to this day, works with the WWE.

True to his name, The Big Show is big. Ridiculously big. He’s billed at 7 feet tall and weighs about 400 pounds. As I waited in a hotel suite for The Big Show to arrive, I tried to mentally prepare myself for the sheer scale of the human being about to blot my horizon.

The mental prep didn’t work. Not even close. The Big Show walked in. My eyes widened. I audibly gasped when he took my tiny hand and shook it with a right paw the size of a large dinner plate.

Image: The Big Show. He was big. And also really nice! (Marc Pfitzenreuter/Getty Images)

That’s sorta how I felt when I first came face to face with a PlayStation 5 in the wild. No matter how prepared I was, no matter how many photos I’d seen for scale, the size of this monstrosity of a console still took me utterly and completely by surprise.

I was at Sony’s offices in Australia when I first saw it, engaged in small talk with a Sony employee. I caught it in my peripheral vision. I started vibrating; completely lost focus.

“Is that it?”

“No… It can’t be.”

“There’s no way it’s that big.”

Despite seeing it in photos, despite preparing myself, I was honestly, sincerely shocked.

So shocked that, when I finally got a PlayStation 5 in the confines of my own home, I felt compelled to just… take pictures of it next to everyday objects. As though my primitive brain had to work through and digest its scale. By placing it in the context of a banana or a giant pot plant.

Ever since I got my PS5, I’ve been enjoying the hell out of it. Great games, great controller. But I’ve also been thinking about it. Trying to make sense of why smart Sony people (I’m assuming) decided to make the console look like this.

Because, beyond the sheer ungodly size of the thing, the PS5 is simply a strange object to look at, let alone try to understand. Consumer devices, particularly consoles, can usually be placed in the scheme of a broader design aesthetic. Maybe they look a little like the TVs they are connected to? Or the living rooms they were designed to be placed in?

Consoles tend to be connected directly to the design zeitgeist or push back against it in some creative way. The Xbox Series X, for example, is a console designed to disappear, marching in time with Microsoft’s new focus on services like Game Pass. In a future where consoles may not even exist, the Xbox Series X might just be the last step. It’s designed to look like a last step.

Image: The Nintendo GameCube. I loved this thing. (Nintendo)

The Nintendo GameCube on the other hand, released in 2001, was a console that pushed back. A playful looking toy of a device, designed in direct opposition to sleek black boxes like the Xbox and the PlayStation 2. Consoles designed to hide beneath TVs. All three devices were tethered to one another whether they liked it or not and the design cues reflected that.

The PlayStation 5 is different. The PlayStation 5 arrived untethered to anything on Earth in 2020: Other consoles, tech devices, any kind of common sense.

The PlayStation 5’s design is so confounding I can’t decide whether it’s a deliberate Lynchian parody of our basest nostalgic impulses or — way more likely — the dumbest fucking thing I’ve ever seen. In the past Sony has pushed the boundaries of console design with a sort of avant-garde, just-beyond-the-future sensibility. Its consoles have flirted with allure and mystery. This time around, they’ve created something that looks like a gigantic, geriatric ISP router or an obnoxious PC gaming laptop.

It has to be deliberate, right? Surely.

I can’t make sense of it from any possible frame of reference. The PlayStation 5 is strange to look at but doesn’t even come close to some postmodern “weird for the sake of weird” ideal. It’s undoubtedly connected to objects we’ve seen before, in our recent past. In a strange way the PS5 is almost normal. Like, bad normal. Banal normal. Like something a teenage boy would have drawn in 2007 normal.

And the PS5 isn’t a “Homer” either. It’s not a busted up, bloated object that’s clearly the result of bad taste, bad ideas and poor design squished into one ugly box. If you squint, the PS5 is sort of nice to look at if you don’t think about it too hard, but it aged decades the second I took it out of the box.

Its closest design relative is probably the Xbox 360, a console that came out in 2005 and is probably too young to evoke any sort of nostalgia. A console that started out white and was eventually stained cream by the harsh ravages of time. The PS5, I suspect, will suffer the same fate. This thing is already sort of weird and ugly. In two or three years, you’ll be putting a paper bag over its head.

It feels brittle, heavy and doesn’t really belong in my house. And at this point I’m struggling to understand how the PS5 could fit in any house.

I love the PS5. I love what it does. I love Demon’s Souls, I love Spider-Man: Miles Morales, I love Astro’s Playroom. More than anything I love its new DualSense controller with its tactile, vibrant feedback and its responsive adaptive triggers.

But the best thing I can say about the PlayStation 5 as a physical object is that it — thank the lord — fits in my TV cabinet, where it can be mercifully hidden from human eyes.

Source: https://www.cnet.com/news/the-ps5-is-the-weirdest-thing-ive-ever-seen-in-my-life-playstation-5/


Part human, part machine: is Apple turning us all into cyborgs?


At the beginning of the Covid-19 pandemic, Apple engineers embarked on a rare collaboration with Google. The goal was to build a system that could track individual interactions across an entire population, in an effort to get a head start on isolating potentially infectious carriers of a disease that, as the world was discovering, could be spread by asymptomatic patients.

Delivered at breakneck pace, the resulting exposure notification tool has yet to prove its worth. The NHS Covid-19 app uses it, as do others around the world. But lockdowns make interactions rare, limiting the tool’s usefulness, while in a country with uncontrolled spread, it isn’t powerful enough to keep the R number low. In the Goldilocks zone, when conditions are just right, it could save lives.

The NHS Covid-19 app has had its teething problems. It has come under fire for not working on older phones, and for its effect on battery life. But there’s one criticism that has failed to materialise: what happens if you leave home without your phone? Because who does that? The basic assumption that we can track the movement of people by tracking their phones is an accepted fact.

This year has been good for tech companies, and Apple is no exception. The wave of global lockdowns has left us more reliant than ever on our devices. Despite being one of the first large companies to be seriously affected by Covid, as factory shutdowns in China hit its supply chain, delaying the launch of the iPhone 12 by a month, Apple’s revenue has continued to break records. It remains the largest publicly traded company in the world by a huge margin: this year its value has grown by 50% to $2tn (£1.5tn) and it is still $400bn larger than Microsoft, the No 2.

It’s hard to think of another product that has come close to the iPhone in sheer physical proximity to our daily lives. Our spectacles, contact lenses and implanted medical devices are among the only things more personal than our phones.

Without us even noticing, Apple has turned us into organisms living symbiotically with technology: part human, part machine. We now outsource our contact books, calendars and to-do lists to devices. We no longer need to remember basic facts about the world; we can call them up on demand. But if you think that carrying around a smartphone – or wearing an Apple Watch that tracks your vitals in real time – isn’t enough to turn you into a cyborg, you may feel differently about what the company has planned next.

A pair of smartglasses, in development for a decade, could be released as soon as 2022, and would have us quite literally seeing the world through Apple’s lens – putting a digital layer between us and the world. Already, activists are worrying about the privacy concerns sparked by a camera on everyone’s face. But deeper questions, about what our relationship should be to a technology that mediates our every interaction with the world, may not even be asked until it’s too late to do anything about the answer.


The word cyborg – short for “cybernetic organism” – was coined in 1960 by Manfred E Clynes and Nathan S Kline, whose research into spaceflight prompted them to explore how incorporating mechanical components could aid in “the task of adapting man’s body to any environment he might choose”. It was a very medicalised concept: the pair imagined embedded pumps dispensing drugs automatically.

In the 1980s, genres such as cyberpunk began to express writers’ fascination with the nascent internet, and wonder how much further it could go. “It was the best we could do at the time,” laughs Bruce Sterling, a US science fiction author and futurist whose Mirrorshades anthology defined the genre for many. Ideas about putting computer chips, machine arms or chromium teeth into animals might have been very cyberpunk, Sterling says, but they didn’t really work. Such implants, he points out, aren’t “biocompatible”. Organic tissue reacts poorly, forming scar tissue, or worse, at the interface. While science fiction pursued a Matrix-style vision of metal jacks embedded in soft flesh, reality took a different path.

“If you’re looking at cyborgs in 2020,” Sterling says, “it’s in the Apple Watch. It’s already a medical monitor, it’s got all these health apps. If you really want to mess with the inside of your body, the watch lets you monitor it much better than anything else.”

The Apple Watch had a shaky start. Despite the company trying to sell it as the second coming of the iPhone, early adopters were more interested in using their new accessory as a fitness tracker than in trying to send a text message from a device far too small to fit a keyboard. So by the second iteration of the watch, Apple changed tack, leaning into the health and fitness aspect of the tech.

Now, your watch can not only measure your heart rate, but scan the electric signals in your body for evidence of arrhythmia; it can measure your blood oxygenation level, warn you if you’re in a noisy environment that could damage your hearing, and even call 999 if you fall over and don’t get up. It can also, like many consumer devices, track your running, swimming, weightlifting or dancercise activity. And, of course, it still puts your emails on your wrist, until you turn that off.

Image: Apple believes that it can succeed where Google Glass failed. (Illustration: Steven Gregor/The Guardian)

As Sterling points out, for a vast array of health services that we would once have viewed as science fiction, there’s no need for an implanted chip in our head when an expensive watch on our wrist will do just as well.

That’s not to say that the entirety of the cyberpunk vision has been left to the world of fiction. There really are people walking around with robot limbs, after all. And even there, Apple’s influence has starkly affected what that future looks like.

“Apple, I think more than any other brand, truly cares about the user experience. And they test and test and test, and iterate and iterate and iterate. And this is what we’ve taken from them,” says Samantha Payne, the chief operating officer of Bristol’s Open Bionics. The company, which she co-founded in 2014 with CEO Joel Gibbard, makes the Hero Arm, a multi-grip bionic hand. With the rapid development of 3D printer technology, Open Bionics has managed to slash the cost of such advanced prosthetics, which could have cost almost $100,000 10 years ago, to just a few thousand dollars.

Rather than focus on flesh tones and lifelike design, Open Bionics leans into the cyborg imagery. Payne quotes one user describing it as “unapologetically bionic”. “All of the other prosthetics companies give the impression that you should be trying to hide your disability, that you need to try and fit in,” she says. “We are a company that’s taking a big stance against that.”

At times, Open Bionics has been almost too successful in that goal. In November, the company launched an arm designed to look like that worn by the main character in the video game Metal Gear Solid V – red and black, shiny plastic and, yes, unapologetically bionic – and the response was unsettling. “You got loads of science fiction fans saying that they really are considering chopping off their hand,” Payne says.

Some disabled people who rely on technology to live their daily lives feel that cyberpunk imagery can exoticise the very real difficulties they face. And there are also lessons in the way that more prosaic devices can give disabled people what can only be described as superpowers. Take hearing aid users, for example: deaf iPhone owners can not only connect their hearing aids to their phones with Bluetooth, they can even set up their phone as a microphone and move it closer to the person they want to listen to, overcoming the noise of a busy restaurant or crowded lecture theatre. Bionic ears, anyone?

“There’s definitely something in the idea of everyone in the world being a cyborg today,” Payne says. “A crazy high number of people in the world have a smartphone, and so all of these people are technologically augmented. It’s definitely taking it a step further when you depend on that technology to be able to perform everyday living; when it’s adorned to your body. But we are all harnessing the vast power of the internet every single day.”


Making devices so compelling that we carry them with us everywhere we go is a mixed blessing for Apple. The iPhone earns it about $150bn a year, more than every other source of revenue combined. In creating the iOS App Store, it has assumed a gatekeeper role with the power to reshape entire industries by carefully defining its terms of service. (Ever wonder why every app is asking for a subscription these days? Because of an Apple decision in 2016. Bad luck if you prefer to pay upfront for software.) But it has also opened itself up to criticism that the company allows, or even encourages, compulsive patterns of behaviour.

Apple co-founder Steve Jobs famously likened personal computers to “bicycles for the mind”, enabling people to do more work for the same amount of effort. That was true of the Macintosh computer in 1984, but modern smartphones are many times more powerful. If we now turn to them every waking hour of the day, is that because of their usefulness, or for more pernicious reasons?

“We don’t want people using their phones all the time,” Apple’s chief executive, Tim Cook, said in 2019. “We’re not motivated to do that from a business point of view, and we’re certainly not from a values point of view.” Later that year, Cook told CBS: “We made the phone to make your life better, and everybody has to decide for his or herself what that means. For me, my simple rule is if I’m looking at the device more than I’m looking into someone’s eyes, I’m doing the wrong thing.”

Apple has introduced features, such as the Screen Time setting, that help people strike that balance: users can now track, and limit, their use of individual apps, or entire categories, as they see fit. Part of the problem is that, while Apple makes the phone, it doesn’t control what people do with it. Facebook needs users to open its app daily, and Apple can only do so much to counter that tendency. If these debates – about screen time, privacy and what companies are doing with our data, our attention – seem like a niche topic of interest now, they will become crucial once Apple’s latest plans become reality. The reason is the company’s worst-kept secret in years: a pair of smartglasses.

It filed a patent in 2006 for a rudimentary version, a headset that would let users see a “peripheral light element” for an “enhanced viewing experience”, able to display notifications in the corner of your vision. That was finally granted in 2013, at the time of Google’s own attempt to convince people about smartglasses. But Google Glass failed commercially, and Apple kept quiet about its intentions in the field.

Recently, the company has intensified its focus on “augmented reality”, technology that overlays a virtual world on the real one. It’s perhaps best known through the video game Pokémon Go, which launched in 2016, superimposing Nintendo’s cute characters on parks, offices and playgrounds. However, Apple insists, it has much greater potential than simply enhancing games. Navigation apps could overlay the directions on top of the real world; shopping services could show you what you would look like wearing the clothes you’re thinking of getting; architects could walk around inside the spaces they have designed before shovels even break ground.

Image: Smartglasses could leave us quite literally seeing the world through Apple’s lens. (Illustration: Steven Gregor/The Guardian)

With each new iPhone launch, Apple’s demonstrated new breakthroughs in the technology, such as “Lidar” support in new iPhones and iPads, a tech (think radar with lasers) that lets them accurately measure the physical space they are in. Then, at the end of 2019, it all slotted into place: a Bloomberg report suggested that the company hadn’t given up on smartglasses in the wake of Google Glass’s failure, but had spent five years honing the concept. The pandemic put paid to a target of getting hardware on the shelves in 2020, but the company is still hoping to make an announcement next year for a 2022 launch, Bloomberg suggested.

Apple’s plans cover two devices, codenamed N301 and N421. The former is designed to feature “ultra-high-resolution screens that will make it almost impossible for a user to differentiate the virtual world from the real one”, according to Bloomberg’s Mark Gurman. This is a product with an appeal far beyond the hardcore gamers who have adopted existing VR headsets: you might put it on to enjoy lifelike, immersive entertainment, or to do creative work that can make the most of the technology, but would probably take it off to have lunch, for instance.

N421 is where the real ambitions lie. Expected in 2023, it’s described only as “a lightweight pair of glasses using AR”. But, argues Mark Pesce in his book Augmented Reality, this would be the culmination of the “mirrorshades” dreamed up by the cyberpunks in the 80s, using the iPhone as the brains of the device and “keeping the displays themselves light and comfortable”. Wearing it all day, every day, the idea of a world without a digital layer between you and reality would eventually fade into memory – just as living without immediate access to the internet has for so many right now.

Apple isn’t the first to try to build such a device, says Rupantar Guha of the analysts GlobalData, who has been following the trend in smartglasses from a business standpoint for years, but it could lead the wave that makes it relevant. “The public perception of smartglasses has struggled to recover from the high-profile failure of Google Glass, but big tech still sees potential in the technology.” Guha cites the recent launch of Amazon Echo Frames – sunglasses you can talk to, because they have got the Alexa digital assistant built in – and Google’s purchase of the smartglasses maker North in June 2020. “Apple and Facebook are planning to launch consumer smartglasses over the next two years, and will expect to succeed where their predecessors could not,” Guha adds.

If Apple pulls off that launch, then the cyberpunk – and cyborg – future will have arrived. It’s not hard to imagine the concerns, as cultural questions clash with technological: should kids take off their glasses in the classroom, just as we now require them to keep phones in their lockers? Will we need to carve out lens-free time in our evenings to enjoy old-fashioned, healthy activities such as watching TV or playing video games?

“It’s a fool’s errand to imagine every use of AR before we have the hardware in our hands,” writes the developer Adrian Hon, who was called on by Google to write games for their smartglasses a decade ago. “Yet there’s one use of AR glasses that few are talking about but will be world-changing: scraping data from everything we see.” This “worldscraping” would be a big tech dream – and a privacy activist’s nightmare. A pair of smartglasses turns people into walking CCTV cameras, and the data that a canny company could gather from that is mindboggling. Every time someone browsed a supermarket, their smartglasses would be recording real-time pricing data, stock levels and browsing habits; every time they opened up a newspaper, their glasses would know which stories they read, which adverts they looked at and which celebrity beach pictures their gaze lingered on.

“We won’t be able to opt out from wearing AR glasses in 2035 any more than we can opt out of owning smartphones today,” Hon writes. “Billions have no choice but to use them for basic tasks like education, banking, communication and accessing government services. In just a few years’ time, AR glasses do the same, but faster and better.”

Apple would argue that, if any company is to control such a powerful technology, it ought to. The company declined to speak on the record for this story, but it has invested time and money in making the case that it can be trusted not to abuse its power. The company points to its comparatively simple business model: make things, and sell them for a lot of money. It isn’t Google or Facebook, trying to monetise personal data, or Amazon, trying to replace the high street – it’s just a company that happens to make a £1,000 phone that it can sell to 150 million people a year.

But whether we trust Apple might be beside the point, if we don’t yet know whether we can trust ourselves. It took eight years from the launch of the iPhone for screen time controls to follow. What will human interaction look like eight years after smartglasses become ubiquitous? Our cyborg present sneaked up on us as our phones became glued to our hands. Are we going to sleepwalk into our cyborg future in the same way?

Source: https://www.theguardian.com/technology/2020/nov/25/part-human-part-machine-is-apple-turning-us-all-into-cyborgs


Researchers hacked a robotic vacuum cleaner to record speech and music remotely


A team of researchers demonstrated that popular robotic household vacuum cleaners can be remotely hacked to act as microphones.

The researchers—including Nirupam Roy, an assistant professor in the University of Maryland’s Department of Computer Science—collected information from the laser-based navigation system in a popular vacuum robot and applied signal processing and deep learning techniques to recover speech and identify television programs playing in the same room as the device.

The research demonstrates the potential for any device that uses light detection and ranging (Lidar) technology to be manipulated for collecting sound, despite not having a microphone. This work, a collaboration with assistant professor Jun Han at the University of Singapore, was presented at the Association for Computing Machinery’s Conference on Embedded Networked Sensor Systems (SenSys 2020) on November 18, 2020.

“We welcome these devices into our homes, and we don’t think anything about it,” said Roy, who holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS). “But we have shown that even though these devices don’t have microphones, we can repurpose the systems they use for navigation to spy on conversations and potentially reveal private information.”

The Lidar navigation systems in household vacuum bots shine a laser beam around a room and sense the reflection of the laser as it bounces off nearby objects. The robot uses the reflected signals to map the room and avoid collisions as it moves through the house.

Privacy experts have suggested that the maps made by vacuum bots, which are often stored in the cloud, pose potential privacy breaches that could give advertisers access to information about such things as home size, which suggests income level, and other lifestyle-related information. Roy and his team wondered if the Lidar in these robots could also pose potential security risks as sound recording devices in users’ homes or businesses.

Sound waves cause objects to vibrate, and these vibrations cause slight variations in the light bouncing off an object. Laser microphones, used in espionage since the 1940s, are capable of converting those variations back into sound waves. But laser microphones rely on a targeted laser beam reflecting off very smooth surfaces, such as glass windows.

Image: Could your vacuum be listening to you? Deep learning algorithms were able to interpret scattered sound waves, such as those above captured by a robot vacuum, to identify numbers and musical sequences. (Credit: Sriram Sami)

A vacuum Lidar, on the other hand, scans the environment with a laser and senses the light scattered back by objects that are irregular in shape and density. The scattered signal received by the vacuum’s sensor provides only a fraction of the information needed to recover sound waves. The researchers were unsure if a vacuum bot’s Lidar system could be manipulated to function as a microphone and if the signal could be interpreted into meaningful sound signals.

First, the researchers hacked a robot vacuum to show they could control the position of the laser beam and send the sensed data to their laptops through Wi-Fi without interfering with the device’s navigation.

Next, they conducted experiments with two sound sources. One source was a human voice reciting numbers played over computer speakers and the other was audio from a variety of television shows played through a TV sound bar. Roy and his colleagues then captured the laser signal sensed by the vacuum’s navigation system as it bounced off a variety of objects placed near the sound source. Objects included a trash can, cardboard box, takeout container and polypropylene bag—items that might normally be found on a typical floor.

The researchers passed the signals they received through deep learning algorithms that were trained to either match human voices or to identify musical sequences from television shows. Their computer system, which they call LidarPhone, identified and matched spoken numbers with 90% accuracy. It also identified television shows from a minute’s worth of recording with more than 90% accuracy.

“This type of threat may be more important now than ever, when you consider that we are all ordering food over the phone and having meetings over the computer, and we are often speaking our credit card or bank information,” Roy said. “But what is even more concerning for me is that it can reveal much more personal information. This kind of information can tell you about my living style, how many hours I’m working, other things that I am doing. And what we watch on TV can reveal our political orientations. That is crucial for someone who might want to manipulate the political elections or target very specific messages to me.”

The researchers emphasize that vacuum cleaners are just one example of potential vulnerability to Lidar-based spying. Many other devices could be open to similar attacks, such as smartphone infrared sensors used for face recognition or passive infrared sensors used for motion detection.

“I believe this is significant work that will make the manufacturers aware of these possibilities and trigger the security and privacy community to come up with solutions to prevent these kinds of attacks,” Roy said.

Source: https://techxplore.com/news/2020-11-hacked-robotic-vacuum-cleaner-speech.html
