A team of researchers from Carnegie Mellon University, in collaboration with the University of Minnesota, has made a breakthrough in the field of noninvasive robotic device control. Using a noninvasive brain-computer interface (BCI), researchers have developed the first-ever successful mind-controlled robotic arm exhibiting the ability to continuously track and follow a computer cursor.
Being able to noninvasively control robotic devices using only thoughts will have broad applications, in particular benefiting the lives of paralyzed patients and those with movement disorders.
BCIs have been shown to achieve good performance for controlling robotic devices using only the signals sensed from brain implants. When robotic devices can be controlled with high precision, they can be used to complete a variety of daily tasks. Until now, however, BCIs successful in controlling robotic arms have used invasive brain implants. These implants require a substantial amount of medical and surgical expertise to correctly install and operate, not to mention cost and potential risks to subjects, and as such, their use has been limited to just a few clinical cases.
A grand challenge in BCI research is to develop less invasive or even totally noninvasive technology that would allow paralyzed patients to control their environment or robotic limbs using their own “thoughts.” Such noninvasive BCI technology, if successful, would bring much-needed assistive technology to numerous patients and even potentially to the general population.
However, BCIs that use noninvasive external sensing, rather than brain implants, receive “dirtier” signals, which currently means lower resolution and less precise control. Thus, when using only the brain to control a robotic arm, a noninvasive BCI doesn’t stand up to using implanted devices. Despite this, BCI researchers have forged ahead, their eye on the prize of a less- or non-invasive technology that could help patients everywhere on a daily basis.
Bin He, Trustee Professor and Department Head of Biomedical Engineering at Carnegie Mellon University, is achieving that goal, one key discovery at a time.
“There have been major advances in mind-controlled robotic devices using brain implants. It’s excellent science,” says He. “But noninvasive is the ultimate goal. Advances in neural decoding and the practical utility of noninvasive robotic arm control will have major implications on the eventual development of noninvasive neurorobotics.”
Using novel sensing and machine learning techniques, He and his lab have been able to access signals deep within the brain, achieving a high resolution of control over a robotic arm. With noninvasive neuroimaging and a novel continuous pursuit paradigm, He is overcoming the noise in EEG signals, significantly improving EEG-based neural decoding and facilitating real-time continuous 2-D robotic device control.
Using a noninvasive BCI to control a robotic arm that’s tracking a cursor on a computer screen, for the first time ever, He has shown in human subjects that a robotic arm can now follow the cursor continuously. Whereas robotic arms controlled by humans noninvasively had previously followed a moving cursor in jerky, discrete motions—as though the robotic arm was trying to “catch up” to the brain’s commands—now, the arm follows the cursor in a smooth, continuous path.
In a paper published in Science Robotics, the team established a new framework that addresses and improves upon the “brain” and “computer” components of BCI by increasing user engagement and training, as well as spatial resolution of noninvasive neural data through EEG source imaging.
The paper, “Noninvasive neuroimaging enhances continuous neural tracking for robotic device control,” shows that the team’s unique approach to solving this problem not only enhanced BCI learning by nearly 60% for traditional center-out tasks, it also enhanced continuous tracking of a computer cursor by over 500%.
The technology also has applications that could help a variety of people, by offering safe, noninvasive “mind control” of devices that can allow people to interact with and control their environments. The technology has, to date, been tested in 68 able-bodied human subjects (up to 10 sessions for each subject), including virtual device control and controlling of a robotic arm for continuous pursuit. The technology is directly applicable to patients, and the team plans to conduct clinical trials in the near future.
“Despite technical challenges using noninvasive signals, we are fully committed to bringing this safe and economic technology to people who can benefit from it,” says He. “This work represents an important step in noninvasive brain-computer interfaces, a technology which someday may become a pervasive assistive technology aiding everyone, like smartphones.”
How to steal photos off someone’s iPhone from across the street
Well-known Google Project Zero researcher Ian Beer has just published a blog post that is attracting a lot of media attention.
The article itself has a perfectly accurate and interesting title, namely: An iOS zero-click radio proximity exploit odyssey.
But it’s headlines like the one we’ve used above that capture the practical essence of Beer’s attack.
The exploit sequence he figured out really does allow an attacker to break into a nearby iPhone and steal personal data – using wireless connections only, and with no clicks needed by, or warnings shown to, the innocently occupied user of the device.
Indeed, Beer’s article concludes with a short video showing him automatically stealing a photo from his own phone using hacking kit set up in the next room:
- He takes a photo of a “secret document” using the iPhone in one room.
- He leaves the “user” of the phone (a giant pink teddy bear, as it happens) sitting happily watching a YouTube video.
- He goes next door and kicks off an automated over-the-air attack that exploits a kernel bug on the phone.
- The exploit sneakily uploads malware code onto the phone, grants itself access to the Photos app’s data directory, reads the “secret” photo file and invisibly uploads it to his laptop next door.
- The phone continues working normally throughout, with no warnings, pop-ups or anything that might alert the user to the hack.
That’s the bad news.
The good news is that the core vulnerability that Beer relied upon is one that he himself found many months ago, reported to Apple, and that has already been patched.
So if you have updated your iPhone in the past few months, you should be safe from this particular attack.
The other sort-of-good news is that it took Beer, by his own admission, six months of detailed and dedicated work to figure out how to exploit his own bug.
To give you an idea of just how much effort went into the 5-minute “teddy bear’s data theft picnic” video above, and as a fair warning if you are thinking of studying Beer’s excellent article in detail, bear in mind that his blog post runs to more than 30,000 words – longer than the novel Animal Farm by George Orwell, or A Christmas Carol by Charles Dickens.
You may, of course, be wondering why Beer took a bug he’d already found and reported, and then went to so much effort to weaponise it, to use the paramilitary jargon common in cybersecurity.
Well, Beer gives the answer himself, right at the start of his article:
The takeaway from this project should not be: no one will spend six months of their life just to hack my phone, I’m fine.
Instead, it should be: one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they’d come into close contact with.
To be clear: Beer, via Google, did report the original bug promptly, and as far as we know no one else had figured it out before he did, so there is no suggestion that this bug was exploited by anyone in real life.
But the point is that it is reasonable to assume that once a kernel-level buffer overflow has been discovered, even in the face of the latest and greatest exploit mitigations, a determined attacker could produce a dangerous exploit from it.
Even though security controls such as address space layout randomisation and pointer authentication codes increase our cybersecurity enormously, they’re not silver bullets on their own.
As Mozilla rather drily puts it when fixing any memory mismanagement flaws in Firefox, even apparently mild or arcane errors that the team couldn’t or didn’t figure out how to exploit themselves: “Some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code.”
In short, finding bugs is vital; patching them is critical; learning from our mistakes is important; but we must nevertheless continue to evolve our cybersecurity defences at all times.
The road to Beer’s working attack
It’s hard to do justice to Beer’s magnum opus in a brief summary like this, but here is a (perhaps recklessly oversimplified) description of just some of the hacking skills he used:
- Spotting a kernel variable name that sounded risky. The funky name that started it all was IO80211AWDLPeer::parseAwdlSyncTreeTLV, where TLV refers to type-length-value, a way of packaging complex data at one end for deconstructing (parsing) at the other, and AWDL is short for Apple Wireless Direct Link, the proprietary wireless mesh networking used for Apple features such as AirDrop. This function name implies the presence of complex kernel-level code that is directly exposed to untrusted data sent from other devices. This sort of code is often a source of dangerous programming blunders.
- Finding a bug in the TLV data handling code. Beer noticed a point at which a TLV data object that was limited to a memory buffer of just 60 bytes (10 MAC addresses at most) was incorrectly “length-checked” against a generic safety limit of 1024 bytes, instead of against the actual size of the buffer available.
- Building an AWDL network driver stack to create dodgy packets. Ironically, Beer started with an existing open source project intended to be compatible with Apple’s proprietary code, but couldn’t get it to work as he needed. So he ended up knitting his own.
- Finding a way to get buffer-busting packets past safety checks that existed elsewhere. Although the core kernel code was defective, and didn’t do its final error checking correctly, there were several partial precursor checks that made the attack much harder. By the way, as Beer points out, it’s tempting, in low-level code – especially if it is performance critical – to assume that untrusted data will have been sanitised already, and therefore to skimp on error checking code at the very point it matters most. Don’t do it, especially if that critical code is in the kernel!
- Learning how to turn the buffer overflow into a controllable heap corruption. This provided a predictable and exploitable method for using AWDL packets to force unauthorised reads from and writes into kernel memory.
- Trying out a total of 13 different Wi-Fi adapters to find a way to mount the attack. Beer wanted to be able to send poisoned AWDL packets on the 5GHz Wi-Fi channels widely used today, so he had to find a network adapter he could reconfigure to meet his needs.
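The length-check mistake at the heart of the chain is a classic bug class, and it is easy to illustrate. The following C sketch is hypothetical (the names, sizes and structure are invented for this example, not taken from Apple's actual code): a copy guarded by a generic limit rather than by the size of the destination buffer, alongside the corrected version.

```c
#include <stdint.h>
#include <string.h>

#define SYNC_TREE_BUF   60    /* actual buffer: room for 10 MAC addresses */
#define GENERIC_TLV_MAX 1024  /* generic sanity limit for any TLV payload */

struct peer_state {
    uint8_t sync_tree[SYNC_TREE_BUF];
};

/* BUGGY: validates against the generic TLV limit, not the real buffer size. */
int parse_sync_tree_tlv_buggy(struct peer_state *p,
                              const uint8_t *payload, size_t len) {
    if (len > GENERIC_TLV_MAX)           /* wrong bound: 1024, not 60 */
        return -1;
    memcpy(p->sync_tree, payload, len);  /* overflows when 60 < len <= 1024 */
    return 0;
}

/* FIXED: check against the destination buffer's actual size. */
int parse_sync_tree_tlv_fixed(struct peer_state *p,
                              const uint8_t *payload, size_t len) {
    if (len > sizeof p->sync_tree)
        return -1;
    memcpy(p->sync_tree, payload, len);
    return 0;
}
```

The buggy version passes its own check for any payload up to 1024 bytes, even though anything over 60 bytes tramples the memory after the buffer – exactly the kind of kernel heap corruption an attacker can build on.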
At this point, Beer had already reached a proof-of-concept result where most of us would have stopped in triumph.
With kernel read-write powers he could remotely force the Calc app to pop up on your phone, as long as you had AWDL networking enabled, for example while you were using the “Share” icon in the Photos app to send your own files via AirDrop.
Nevertheless, he was determined to convert this into a so-called zero-click attack, where the victim doesn’t have to be doing anything more specific than simply “using their phone” at the time.
As you can imagine, a zero-click attack is much more dangerous, because even a well-informed user wouldn’t see any tell-tale signs in advance that warned of impending trouble.
So Beer also figured out techniques for:
- Pretending to be a nearby device offering files to share via AirDrop. If your phone thinks that a nearby device might be one of your contacts, based on Bluetooth data it is transmitting, it will temporarily fire up AWDL to see who it is. If it isn’t one of your contacts, you won’t see any popup or other warning, but the exploitable AWDL bug will be exposed briefly via the automatically activated AWDL subsystem.
- Extending the attack to do more than just popping up an existing app such as Calc. Beer figured out how to use his initial exploit in a detailed attack chain that could access arbitrary files on the device and steal them.
In the video above, the attack took over an app that was already running (the teddy bear was watching YouTube, if you recall); “unsandboxed” the app from inside the kernel so it was no longer limited to viewing its own data; used the app to access the DCIM (camera) directory belonging to the Photos app; stole the latest image file; and then exfiltrated it using an innocent-looking TCP connection.
What to do?
Tip 1. Make sure you are up to date with security fixes, because the bug at the heart of Beer’s attack chain was found and disclosed by him in the first place, so it’s already been patched. Go to Settings > General > Software Update.
Tip 2. Turn off Bluetooth when you don’t need it. Beer’s attack is a good reminder that “less is more”, because he needed Bluetooth in order to turn this into a true zero-click attack.
Tip 3. Never assume that because a bug sounds “hard” it will never be exploited. Beer admits that this one was hard – very hard – to exploit, but ultimately not impossible.
Tip 4. If you are a programmer, be strict with data. It’s never a bad idea to do good error checking.
For all the coders out there: expect the best, i.e. hope that everyone who calls your code has checked for errors at least once already; but prepare for the worst, i.e. assume that they haven’t.
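In practice, that means a parser should re-validate every declared length against the bytes it actually received, even if the caller “should” have checked already. Here is a generic sketch of a strict type-length-value walker in C; the three-byte record header (one type byte, two little-endian length bytes) is invented for the example and is not AWDL’s real wire format.

```c
#include <stddef.h>
#include <stdint.h>

/* Walk a type-length-value buffer, rejecting any record whose declared
 * length would run past the end of the data actually received.
 * Returns the number of records parsed, or -1 on malformed input. */
int walk_tlv(const uint8_t *buf, size_t buf_len,
             void (*on_record)(uint8_t type, const uint8_t *val, size_t len)) {
    size_t off = 0;
    int count = 0;
    while (off < buf_len) {
        if (buf_len - off < 3)       /* need 1 type byte + 2 length bytes */
            return -1;
        uint8_t type = buf[off];
        size_t len = (size_t)buf[off + 1] | ((size_t)buf[off + 2] << 8);
        off += 3;
        if (len > buf_len - off)     /* declared length must fit what's left */
            return -1;
        on_record(type, buf + off, len);
        off += len;
        count++;
    }
    return count;
}
```

Note that the bounds checks are written as subtractions (`buf_len - off`) rather than additions (`off + len`), so a huge declared length can’t wrap around and sneak past the test.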
The PS5 is the weirdest thing I’ve ever seen in my life
Before I talk about the PS5, I want to talk about the time I interviewed The Big Show.
About a decade ago, in another lifetime, I interviewed The Big Show, a professional wrestler who, to this day, works with the WWE.
True to his name, The Big Show is big. Ridiculously big. He’s billed at 7 feet tall and weighs about 400 pounds. As I waited in a hotel suite for The Big Show to arrive, I tried to mentally prepare myself for the sheer scale of the human being about to blot my horizon.
The mental prep didn’t work. Not even close. The Big Show walked in. My eyes widened. I audibly gasped when he took my tiny hand and shook it with a right paw the size of a large dinner plate.
That’s sorta how I felt when I first came face to face with a PS5 in the wild. No matter how prepared I was, no matter how many photos I’d seen for scale, the size of this monstrosity of a console still took me utterly and completely by surprise.
I was at Sony’s offices in Australia when I first saw it, engaged in small talk with a Sony employee. I caught it in my peripheral vision. I started vibrating; completely lost focus.
“Is that it?”
“No… It can’t be.”
“There’s no way it’s that big.”
Despite seeing it in photos, despite preparing myself, I was honestly, sincerely shocked.
So shocked that, when I finally got a PlayStation 5 in the confines of my own home, I felt compelled to just… take pictures of it next to everyday objects. As though my primitive brain had to work through and digest its scale. By placing it in the context of a banana or a giant pot plant.
Ever since I got my PS5, I’ve been enjoying the hell out of it. Great games. But I’ve also been thinking about it. Trying to make sense of why smart Sony people (I’m assuming) decided to make the console look like this.
Because, beyond the sheer ungodly size of the thing, the PS5 is simply a strange object to look at, let alone try to understand. Consumer devices, particularly consoles, can usually be placed in the scheme of a broader design aesthetic. Maybe they look a little like the TVs they are connected to? Or the living rooms they were designed to be placed in?
Consoles tend to be connected directly to the design zeitgeist or push back against it in some creative way. The Xbox Series X, for example, is a console designed to disappear, marching in time with Microsoft‘s new focus on services like Game Pass. In a future where consoles may not even exist, the Xbox Series X might just be the last step. It’s designed to look like a last step.
The Nintendo GameCube on the other hand, released in 2001, was a console that pushed back. A playful looking toy of a device, designed in direct opposition to sleek black boxes like the Xbox and the PlayStation 2. Consoles designed to hide beneath TVs. All three devices were tethered to one another whether they liked it or not and the design cues reflected that.
The PlayStation 5 is different. The PlayStation 5 arrived untethered to anything on Earth in 2020: other consoles, tech devices, any kind of common sense.
The PlayStation 5’s design is so confounding I can’t decide whether it’s a deliberate Lynchian parody of our basest nostalgic impulses or — way more likely — the dumbest fucking thing I’ve ever seen. In the past Sony has pushed the boundaries of console design with a sort of avant-garde, just-beyond-the-future sensibility. Its consoles have flirted with allure and mystery. This time around, they’ve created something that looks like a gigantic, geriatric ISP router or an obnoxious PC gaming laptop.
It has to be deliberate, right? Surely.
I can’t make sense of it from any possible frame of reference. The PlayStation 5 is strange to look at but doesn’t even come close to some postmodern “weird for the sake of weird” ideal. It’s undoubtedly connected to objects we’ve seen before, in our recent past. In a strange way the PS5 is almost normal. Like, bad normal. Banal normal. Like something a teenage boy would have drawn in 2007 normal.
And the PS5 isn’t a “Homer” either. It’s not a busted up, bloated object that’s clearly the result of bad taste, bad ideas and poor design squished into one ugly box. If you squint, the PS5 is sort of nice to look at if you don’t think about it too hard, but it aged decades the second I took it out of the box.
Its closest design relative is probably the Xbox 360, a console that came out in 2005 and is probably too young to evoke any sort of nostalgia. A console that started out white and was eventually stained cream by the harsh ravages of time. The PS5, I suspect, will suffer the same fate. This thing is already sort of weird and ugly. In two or three years, you’ll be putting a paper bag over its head.
It feels brittle, heavy and doesn’t really belong in my house. And at this point I’m struggling to understand how the PS5 could fit in any house.
I love the PS5. I love what it does. I love Demon’s Souls, I love Spider-Man: Miles Morales, I love Astro’s Playroom. More than anything I love its new DualSense controller with its tactile, vibrant feedback and its responsive adaptive triggers.
But the best thing I can say about the PlayStation 5 as a physical object is that it – thank the lord – can be tucked away out of sight, where it is mercifully hidden from human eyes.
Part human, part machine: is Apple turning us all into cyborgs?
At the beginning of the Covid-19 pandemic, Apple engineers embarked on a rare collaboration with Google. The goal was to build a system that could track individual interactions across an entire population, in an effort to get a head start on isolating potentially infectious carriers of a disease that, as the world was discovering, could be spread by asymptomatic patients.
Delivered at breakneck pace, the resulting exposure notification tool has yet to prove its worth. The NHS Covid-19 app uses it, as do others around the world. But lockdowns make interactions rare, limiting the tool’s usefulness, while in a country with uncontrolled spread, it isn’t powerful enough to keep the R number low. In the Goldilocks zone, when conditions are just right, it could save lives.
The NHS Covid-19 app has had its teething problems. It has come under fire for not working on older phones, and for its effect on battery life. But there’s one criticism that has failed to materialise: what happens if you leave home without your phone? Because who does that? The basic assumption that we can track the movement of people by tracking their phones is an accepted fact.

This year has been good for tech companies, and Apple is no exception. The wave of global lockdowns has left us more reliant than ever on our devices. Despite being one of the first large companies to be seriously affected by Covid, as factory shutdowns in China hit its supply chain, delaying the launch of the iPhone 12 by a month, Apple’s revenue has continued to break records. It remains the largest publicly traded company in the world by a huge margin: this year its value has grown by 50% to $2tn (£1.5tn) and it is still $400bn larger than Microsoft, the No 2.
It’s hard to think of another product that has come close to the iPhone in sheer physical proximity to our daily lives. Our spectacles, contact lenses and implanted medical devices are among the only things more personal than our phones.
Without us even noticing, Apple has turned us into organisms living symbiotically with technology: part human, part machine. We now outsource our contact books, calendars and to-do lists to devices. We no longer need to remember basic facts about the world; we can call them up on demand. But if you think that carrying around a smartphone – or wearing an Apple Watch that tracks your vitals in real time – isn’t enough to turn you into a cyborg, you may feel differently about what the company has planned next.
A pair of smartglasses, in development for a decade, could be released as soon as 2022, and would have us quite literally seeing the world through Apple’s lens – putting a digital layer between us and the world. Already, activists are worrying about the privacy concerns sparked by a camera on everyone’s face. But deeper questions, about what our relationship should be to a technology that mediates our every interaction with the world, may not even be asked until it’s too late to do anything about the answer.
The word cyborg – short for “cybernetic organism” – was coined in 1960 by Manfred E Clynes and Nathan S Kline, whose research into spaceflight prompted them to explore how incorporating mechanical components could aid in “the task of adapting man’s body to any environment he might choose”. It was a very medicalised concept: the pair imagined embedded pumps dispensing drugs automatically.
In the 1980s, genres such as cyberpunk began to express writers’ fascination with the nascent internet, and wonder how much further it could go. “It was the best we could do at the time,” laughs Bruce Sterling, a US science fiction author and futurist whose Mirrorshades anthology defined the genre for many. Ideas about putting computer chips, machine arms or chromium teeth into animals might have been very cyberpunk, Sterling says, but they didn’t really work. Such implants, he points out, aren’t “biocompatible”. Organic tissue reacts poorly, forming scar tissue, or worse, at the interface. While science fiction pursued a Matrix-style vision of metal jacks embedded in soft flesh, reality took a different path.
“If you’re looking at cyborgs in 2020,” Sterling says, “it’s in the Apple Watch. It’s already a medical monitor, it’s got all these health apps. If you really want to mess with the inside of your body, the watch lets you monitor it much better than anything else.”
The Apple Watch had a shaky start. Despite the company trying to sell it as the second coming of the iPhone, early adopters were more interested in using their new accessory as a fitness tracker than in trying to send a text message from a device far too small to fit a keyboard. So by the second iteration of the watch, Apple changed tack, leaning into the health and fitness aspect of the tech.
Now, your watch can not only measure your heart rate, but scan the electric signals in your body for evidence of arrhythmia; it can measure your blood oxygenation level, warn you if you’re in a noisy environment that could damage your hearing, and even call 999 if you fall over and don’t get up. It can also, like many consumer devices, track your running, swimming, weightlifting or dancercise activity. And, of course, it still puts your emails on your wrist, until you turn that off.
As Sterling points out, for a vast array of health services that we would once have viewed as science fiction, there’s no need for an implanted chip in our head when an expensive watch on our wrist will do just as well.
That’s not to say that the entirety of the cyberpunk vision has been left to the world of fiction. There really are people walking around with robot limbs, after all. And even there, Apple’s influence has starkly affected what that future looks like.
“Apple, I think more than any other brand, truly cares about the user experience. And they test and test and test, and iterate and iterate and iterate. And this is what we’ve taken from them,” says Samantha Payne, the chief operating officer of Bristol’s Open Bionics. The company, which she co-founded in 2014 with CEO Joel Gibbard, makes the Hero Arm, a multi-grip bionic hand. With the rapid development of 3D printer technology, Open Bionics has managed to slash the cost of such advanced prosthetics, which could have cost almost $100,000 10 years ago, to just a few thousand dollars.
Rather than focus on flesh tones and lifelike design, Open Bionics leans into the cyborg imagery. Payne quotes one user describing it as “unapologetically bionic”. “All of the other prosthetics companies give the impression that you should be trying to hide your disability, that you need to try and fit in,” she says. “We are a company that’s taking a big stance against that.”
At times, Open Bionics has been almost too successful in that goal. In November, the company launched an arm designed to look like that worn by the main character in the video game Metal Gear Solid V – red and black, shiny plastic and, yes, unapologetically bionic – and the response was unsettling. “You got loads of science fiction fans saying that they really are considering chopping off their hand,” Payne says.
Some disabled people who rely on technology to live their daily lives feel that cyberpunk imagery can exoticise the very real difficulties they face. And there are also lessons in the way that more prosaic devices can give disabled people what can only be described as superpowers. Take hearing aid users, for example: deaf iPhone owners can not only connect their hearing aids to their phones with Bluetooth, they can even set up their phone as a microphone and move it closer to the person they want to listen to, overcoming the noise of a busy restaurant or crowded lecture theatre. Bionic ears anyone?
“There’s definitely something in the idea of everyone in the world being a cyborg today,” Payne says. “A crazy high number of people in the world have a smartphone, and so all of these people are technologically augmented. It’s definitely taking it a step further when you depend on that technology to be able to perform everyday living; when it’s adorned to your body. But we are all harnessing the vast power of the internet every single day.”
Making devices so compelling that we carry them with us everywhere we go is a mixed blessing for Apple. The iPhone earns it about $150bn a year, more than every other source of revenue combined. In creating the iOS App Store, it has assumed a gatekeeper role with the power to reshape entire industries by carefully defining its terms of service. (Ever wonder why every app is asking for a subscription these days? Because of an Apple decision in 2016. Bad luck if you prefer to pay upfront for software.) But it has also opened itself up to criticism that the company allows, or even encourages, compulsive patterns of behaviour.
Apple co-founder Steve Jobs famously likened personal computers to “bicycles for the mind”, enabling people to do more work for the same amount of effort. That was true of the Macintosh computer in 1984, but modern smartphones are many times more powerful. If we now turn to them every waking hour of the day, is that because of their usefulness, or for more pernicious reasons?
“We don’t want people using their phones all the time,” Apple’s chief executive, Tim Cook, said in 2019. “We’re not motivated to do that from a business point of view, and we’re certainly not from a values point of view.” Later that year, Cook told CBS: “We made the phone to make your life better, and everybody has to decide for his or herself what that means. For me, my simple rule is if I’m looking at the device more than I’m looking into someone’s eyes, I’m doing the wrong thing.”
Apple has introduced features, such as the Screen Time setting, that help people strike that balance: users can now track, and limit, their use of individual apps, or entire categories, as they see fit. Part of the problem is that, while Apple makes the phone, it doesn’t control what people do with it. Facebook needs users to open its app daily, and Apple can only do so much to counter that tendency. If these debates – about screen time, privacy and what companies are doing with our data, our attention – seem like a niche topic of interest now, they will become crucial once Apple’s latest plans become reality. The reason is the company’s worst-kept secret in years: a pair of smartglasses.
It filed a patent in 2006 for a rudimentary version, a headset that would let users see a “peripheral light element” for an “enhanced viewing experience”, able to display notifications in the corner of your vision. That was finally granted in 2013, at the time of Google’s own attempt to convince people about smartglasses. But Google Glass failed commercially, and Apple kept quiet about its intentions in the field.
Recently, the company has intensified its focus on “augmented reality”, technology that overlays a virtual world on the real one. It’s perhaps best known through the video game Pokémon Go, which launched in 2016, superimposing Nintendo’s cute characters on parks, offices and playgrounds. However, Apple insists, it has much greater potential than simply enhancing games. Navigation apps could overlay the directions on top of the real world; shopping services could show you what you would look like wearing the clothes you’re thinking of getting; architects could walk around inside the spaces they have designed before shovels even break ground.
With each new iPhone launch, Apple has demonstrated new breakthroughs in the technology, such as “Lidar” support in new iPhones and iPads, a tech (think radar with lasers) that lets them accurately measure the physical space they are in. Then, at the end of 2019, it all slotted into place: a Bloomberg report suggested that the company hadn’t given up on smartglasses in the wake of Google Glass’s failure, but had spent five years honing the concept. The pandemic put paid to a target of getting hardware on the shelves in 2020, but the company is still hoping to make an announcement next year for a 2022 launch, Bloomberg suggested.
Apple’s plans cover two devices, codenamed N301 and N421. The former is designed to feature “ultra-high-resolution screens that will make it almost impossible for a user to differentiate the virtual world from the real one”, according to Bloomberg’s Mark Gurman. This is a product with an appeal far beyond the hardcore gamers who have adopted existing VR headsets: you might put it on to enjoy lifelike, immersive entertainment, or to do creative work that can make the most of the technology, but would probably take it off to have lunch, for instance.
N421 is where the real ambitions lie. Expected in 2023, it’s described only as “a lightweight pair of glasses using AR”. But, argues Mark Pesce in his book Augmented Reality, this would be the culmination of the “mirrorshades” dreamed up by the cyberpunks in the 80s, using the iPhone as the brains of the device and “keeping the displays themselves light and comfortable”. Wearing it all day, every day, the idea of a world without a digital layer between you and reality would eventually fade into memory – just as living without immediate access to the internet has for so many right now.
Apple isn’t the first to try to build such a device, says Rupantar Guha of the analysts GlobalData, who has been following the trend in smartglasses from a business standpoint for years, but it could lead the wave that makes it relevant. “The public perception of smartglasses has struggled to recover from the high-profile failure of Google Glass, but big tech still sees potential in the technology.” Guha cites the recent launch of Amazon Echo Frames – sunglasses you can talk to, because they have got the Alexa digital assistant built in – and Google’s purchase of the smartglasses maker North in June 2020. “Apple and Facebook are planning to launch consumer smartglasses over the next two years, and will expect to succeed where their predecessors could not,” Guha adds.
If Apple pulls off that launch, then the cyberpunk – and cyborg – future will have arrived. It’s not hard to imagine the concerns, as cultural questions clash with technological: should kids take off their glasses in the classroom, just as we now require them to keep phones in their lockers? Will we need to carve out lens-free time in our evenings to enjoy old-fashioned, healthy activities such as watching TV or playing video games?
“It’s a fool’s errand to imagine every use of AR before we have the hardware in our hands,” writes the developer Adrian Hon, who was called on by Google to write games for their smartglasses a decade ago. “Yet there’s one use of AR glasses that few are talking about but will be world-changing: scraping data from everything we see.” This “worldscraping” would be a big tech dream – and a privacy activist’s nightmare. A pair of smartglasses turns people into walking CCTV cameras, and the data that a canny company could gather from that is mindboggling. Every time someone browsed a supermarket, their smartglasses would be recording real-time pricing data, stock levels and browsing habits; every time they opened up a newspaper, their glasses would know which stories they read, which adverts they looked at and which celebrity beach pictures their gaze lingered on.
“We won’t be able to opt out from wearing AR glasses in 2035 any more than we can opt out of owning smartphones today,” Hon writes. “Billions have no choice but to use them for basic tasks like education, banking, communication and accessing government services. In just a few years’ time, AR glasses will do the same, but faster and better.”
Apple would argue that, if any company is to control such a powerful technology, it ought to. The company declined to speak on the record for this story, but it has invested time and money in making the case that it can be trusted not to abuse its power. The company points to its comparatively simple business model: make things, and sell them for a lot of money. It isn’t Google or Facebook, trying to monetise personal data, or Amazon, trying to replace the high street – it’s just a company that happens to make a £1,000 phone that it can sell to 150 million people a year.
But whether we trust Apple might be beside the point, if we don’t yet know whether we can trust ourselves. It took eight years from the launch of the iPhone for screen time controls to follow. What will human interaction look like eight years after smartglasses become ubiquitous? Our cyborg present sneaked up on us as our phones became glued to our hands. Are we going to sleepwalk into our cyborg future in the same way?