Security

How Safari and iMessage Have Made iPhones Less Secure

The security reputation of iOS, once considered the world’s most hardened mainstream operating system, has taken a beating over the past month: Half a dozen interactionless attacks that could take over iPhones without a click were revealed at the Black Hat security conference. Another five iOS exploit chains were exposed in malicious websites that took over scores of victim devices. Zero-day exploit brokers are complaining that hackers are glutting the market with iOS attacks, reducing the prices they command.

As Apple prepares for its iPhone 11 launch on Tuesday, the recent stumbles suggest it’s time for the company to go beyond fixing the individual security flaws that have made those iPhone attacks possible, and to instead examine the deeper issues in iOS that have produced those abundant bugs. According to iOS-focused security researchers, that means taking a hard look at two key inroads into an iPhone’s internals: Safari and iMessage.

While vulnerabilities in those apps offer only an initial foothold into an iOS device—a hacker still has to find other bugs that allow them to penetrate deeper into the phone’s operating system—those surface-level flaws have nonetheless helped to make the recent spate of iOS attacks possible. Apple declined to comment on the record.

“If you want to compromise an iPhone, these are the best ways to do it,” says independent security researcher Linus Henze of the two apps. Henze gained notoriety as an Apple hacker after revealing a macOS vulnerability known as KeySteal earlier this year. He and other iOS researchers argue that when it comes to the security of both iMessage and WebKit—the browser engine that serves as the foundation not just of Safari but all iOS browsers—iOS suffers from Apple’s preference for its own code above that of other companies. “Apple trusts their own code way more than the code of others,” says Henze. “They just don’t want to accept the fact that they make bugs in their own code, too.”

Caught in a WebKit

As a prime example, Apple requires that all iOS web browsers—Chrome, Firefox, Brave, or any other—be built on the same WebKit engine that Safari uses. “Basically it’s just like running Safari with a different user interface,” Henze says. Apple demands browsers use WebKit, Henze says, because the complexity of running websites’ JavaScript requires browsers to use a technique called just-in-time (or JIT) compilation as a time-saving trick. While programs that run on an iOS device generally need to be cryptographically signed by Apple or an approved developer, a browser’s JIT speed optimization doesn’t include that safeguard.

As a result, Apple has insisted that only its own WebKit engine be allowed to handle that unsigned code. “They trust their own stuff more,” Henze says. “And if they make an exception for Chrome, they have to make an exception for everyone.”

“They should assume their own code has bugs.”

LINUS HENZE, SECURITY RESEARCHER

The problem with making WebKit mandatory, according to security researchers, is that Apple’s browser engine is in some respects less secure than Chrome’s. Amy Burnett, a founder of security firm Ret2 who leads trainings in both Chrome and WebKit exploitation, says that it’s not clear which of the two browsers has more exploitable bugs. But she argues that Chrome’s bugs are fixed faster, which she credits in part to Google’s internal efforts to find and eliminate security flaws in its own code, often through automated techniques like fuzzing.

Google also offers a bug bounty for Chrome flaws, which incentivizes hackers to find and report them, whereas Apple offers no such bounty for WebKit unless a WebKit bug is integrated into an attack technique that penetrates deeper into iOS. “You’re going to find similar bug classes in both browsers,” says Burnett. “The question is whether they can get rid of enough of the low-hanging fruit, and it seems like Google is doing a better job there.” Burnett adds that Chrome’s sandbox, which isolates the browser from the rest of the operating system, is also “notoriously” difficult to bypass—more so than WebKit’s—making any Chrome bugs that do persist less useful for gaining further access to a device.
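The fuzzing Burnett mentions can be sketched in a few lines. The toy parser and planted bug below are purely illustrative (real fuzzers like those Google runs against Chrome are coverage-guided and target vastly larger codebases), but the core loop is the same: mutate inputs at random and watch for crashes the code’s error handling doesn’t expect.

```python
import random

def parse_record(data: bytes):
    """Toy length-prefixed parser standing in for browser code under test.
    Contains a planted bug: a modulo by the attacker-controlled length field."""
    if not data:
        raise ValueError("empty")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated")
    checksum = sum(payload) % length  # planted bug: ZeroDivisionError when length == 0
    return length, payload, checksum

def fuzz(target, seeds, iterations=2000):
    """Randomly mutate seed inputs; collect any input that raises something
    other than the parser's documented ValueError."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(rng.choice(seeds))
        for _ in range(rng.randint(1, 4)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # unexpected crash: a bug worth triaging
            crashes.append((bytes(data), exc))
    return crashes

crashes = fuzz(parse_record, [b"\x03abc", b"\x00"])
print(f"found {len(crashes)} crashing inputs")
```

Even this naive mutator stumbles into the division-by-zero path; production fuzzers add coverage feedback so mutations that reach new code get mutated further.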

Shady References

Another specific element of WebKit’s architecture that can result in hackable flaws, says Luca Todesco, an independent security researcher who has released WebKit and full iOS hacking techniques, is its so-called document object model, known as WebCore, which WebKit browsers use to render websites. WebCore requires that a browser developer keep careful track of which data “object”—anything from a string of text to an array of data—references another object, a finicky process known as “reference counting.” Make a mistake, and one of those references might be left pointing at a missing object. A hacker can fill that void with an object of their choosing, like a spy who picks up someone else’s name tag at a conference registration table.

By contrast, Chrome’s own version of WebCore includes a safeguard known as a “garbage collector” that cleans up pointers to missing objects, so they can’t be mistakenly left unassigned and vulnerable to an attacker. WebKit instead uses an automated reference counting system called “smart pointers” that Todesco argues still leaves room for error. “There’s just so many things that can potentially happen, and in WebCore the browser developer has to keep track of all these possibilities,” Todesco says. “It’s impossible not to screw up.”
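The bug class Todesco describes can be simulated with a toy allocator. This sketch is not WebKit’s actual code, just an illustration of how one forgotten retain turns a stale reference into an attacker-controlled object, exactly the name-tag swap in the analogy above:

```python
class ToyHeap:
    """Toy allocator with manual reference counting, simulating the
    dangling-reference bug class in a DOM engine (illustrative only)."""
    def __init__(self):
        self.slots, self.refs, self.free_list, self.next_id = {}, {}, [], 0

    def alloc(self, obj):
        if self.free_list:               # reuse a freed slot, like a real heap
            sid = self.free_list.pop()
        else:
            sid, self.next_id = self.next_id, self.next_id + 1
        self.slots[sid], self.refs[sid] = obj, 1
        return sid

    def retain(self, sid):
        self.refs[sid] += 1

    def release(self, sid):
        self.refs[sid] -= 1
        if self.refs[sid] == 0:          # count hit zero: slot freed, reusable
            del self.slots[sid]
            self.free_list.append(sid)

    def deref(self, sid):
        return self.slots[sid]           # no liveness check, like a raw pointer

heap = ToyHeap()
node = heap.alloc("trusted <div> element")
stale_ref = node                 # a second reference is taken...
# ...but the developer forgot heap.retain(node): the counting mistake.
heap.release(node)               # refcount hits 0, slot is freed for reuse
heap.alloc("attacker-controlled object")   # attacker fills the vacated slot
print(heap.deref(stale_ref))     # stale reference now yields attacker data
```

A smart pointer would have issued the missing `retain` automatically, and a garbage collector would have kept the object alive for as long as `stale_ref` existed; the gap between those two safety nets is what the debate above is about.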

To Apple’s credit, iOS has for more than a year implemented a security mitigation called isolated heaps, or “isoheaps,” designed to make errors in reference counting impossible to exploit, as well as newer mitigations in the hardware of the iPhone XS, XS Max, and XR. But both Todesco and Burnett note that while isolated heaps significantly improved WebCore’s security and pushed many hackers towards attacking different parts of WebKit, they didn’t entirely prevent attacks on WebCore. Todesco says there have been multiple exploits of reference counting errors since isoheaps were introduced in 2018. “You can’t say they’re eliminated,” Ret2’s Burnett agrees.

Despite all those issues, and even as WebKit’s flaws have served as the entry point for one iOS attack after another, it’s debatable whether WebKit is measurably less secure than Chrome. In fact, a price chart from Zerodium, a firm which sells zero-day hacking techniques, values Chrome and Safari attacks equally. But another zero-day broker, Maor Shwartz, told WIRED by contrast that WebKit’s insecurity relative to Chrome contributed directly to top prices for an Android exploit surpassing those for iOS. “Chrome is the most secure browser today,” Shwartz says. “The prices are aligned with that.”

Getting the Message

Hackable flaws in iMessage are far rarer than those in WebKit. But they’re also far more powerful, given that they can be used as the first step in a hacking technique that takes over a target phone with no user interaction. So it was all the more surprising last month to see Natalie Silvanovich, a researcher with Google’s Project Zero team, expose an entire collection of previously unknown flaws in iMessage that could be used to enable remote, zero-click takeovers of iPhones.

More disturbing than the existence of those individual bugs was that they all stemmed from the same security issue: iMessage exposes to attackers its “unserializer,” a component that essentially unpacks different types of data sent to the device via iMessage. Patrick Wardle, a security researcher at Apple-focused security firm Jamf, describes the mistake as something like blindly opening a box sent to you full of disassembled components, and reassembling them without an initial check that they won’t add up to something dangerous. “I could put the parts of a bomb in that box,” says Wardle. “If Apple is allowing you to unserialize all these objects, that exposes a big attack surface.”
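Wardle’s bomb-in-a-box analogy has a well-known equivalent in Python’s `pickle` module, which, like iMessage’s unserializer, reassembles objects from untrusted bytes. The sketch below (not iMessage’s actual mechanism, just the same class of risk) shows how merely unpacking attacker-supplied data can execute attacker-chosen code:

```python
import pickle

class Exploit:
    """An object whose 'reassembly instructions' run code on deserialization."""
    def __reduce__(self):
        # Tells pickle how to rebuild this object: by calling print().
        # A real attacker would substitute something far worse, e.g. os.system.
        return (print, ("code executed during unpickling!",))

untrusted_bytes = pickle.dumps(Exploit())  # what an attacker would send
pickle.loads(untrusted_bytes)              # merely unpacking runs the payload
```

This is why Python’s own documentation warns never to unpickle untrusted data, and why exposing any unserializer to remote input, as iMessage did, hands attackers a large surface to probe.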

More fundamentally, iMessage has innate privileges in iOS that other messaging apps are denied. In fact, non-Apple apps are cordoned off from the rest of the operating system by rigorous sandboxes. That means that if a third-party app like WhatsApp is compromised, for instance, a hacker still has to break through its sandbox with another, distinct technique to gain deeper control of the device. But Project Zero’s Silvanovich noted in her writeup of the iMessage flaws that some of iMessage’s vulnerable components are integrated with SpringBoard, iOS’s program for managing a device’s home screen, which Silvanovich writes has no sandbox at all.

“What I personally can’t understand is why they don’t sandbox it more,” Linus Henze says of iMessage. “They should assume their own code has bugs, and make sure their code is sandboxed in the same way they sandbox the code of other developers, just as they do with WhatsApp or Signal or any other app.”

Apple, after all, built the iPhone’s sterling reputation in part by carefully restricting what apps it allowed into its App Store, and even then carefully isolating those apps within the phone’s software. But to head off these high-profile incidents, it may need to reexamine that security caste system—and ultimately, to treat its own software’s code with the same suspicion it has always cast on everyone else’s.

Source: https://www.wired.com/story/ios-security-imessage-safari/


Security

GitHub launches ‘Security Lab’ to help secure open source ecosystem

Today, at the GitHub Universe developer conference, GitHub announced the launch of a new community program called Security Lab that brings together security researchers from different organizations to hunt and help fix bugs in popular open source projects.

“GitHub Security Lab’s mission is to inspire and enable the global security research community to secure the world’s code,” the company said in a press release.

“Our team will lead by example, dedicating full-time resources to finding and reporting vulnerabilities in critical open source projects,” it said.

Founding members include security researchers from organizations like Microsoft, Google, Intel, Mozilla, Oracle, Uber, VMWare, LinkedIn, J.P. Morgan, NCC Group, IOActive, F5, Trail of Bits, and HackerOne.

GitHub says Security Lab founding members have found, reported, and helped fix more than 100 security flaws already.

Other organizations, as well as individual security researchers, can also join. A bug bounty program with rewards of up to $3,000 is also available, to compensate bug hunters for the time they put into searching for vulnerabilities in open source projects.

Bug reports must contain a CodeQL query. CodeQL is a new open source tool that GitHub released today: a semantic code analysis engine designed to find different variants of the same vulnerability across vast swaths of code. Besides GitHub, CodeQL is already being rolled out elsewhere to help with vulnerability code scans, such as at Mozilla.
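Real CodeQL queries are written in GitHub’s own QL language, but the idea of “variant analysis” (turning one known bug into a query that finds every instance of the pattern) can be roughly conveyed with Python’s standard `ast` module. The example pattern, `yaml.load` called without a `Loader` argument, is a classic unsafe idiom; the checker itself is a crude stand-in, not CodeQL:

```python
import ast

SOURCE = '''
import yaml
def safe(d):  return yaml.load(d, Loader=yaml.SafeLoader)
def risky(d): return yaml.load(d)          # variant of the unsafe pattern
def risky2(s): return yaml.load(s.read())  # another variant, different call site
'''

def find_unsafe_yaml_load(source: str):
    """Flag yaml.load(...) calls with no Loader argument: a single-pattern,
    syntax-only sketch of the semantic variant analysis CodeQL performs."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "load"
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "yaml"
                and not any(kw.arg == "Loader" for kw in node.keywords)):
            hits.append(node.lineno)
    return sorted(hits)

print(find_unsafe_yaml_load(SOURCE))  # line numbers of the unsafe variants
```

Where this sketch matches one syntactic shape, CodeQL tracks data flow semantically, so renamed imports, wrapped calls, and cross-file variants are still caught.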


GitHub’s new Security Lab project did not come out of the blue. Efforts have been going on at the company to improve the overall security state of the GitHub ecosystem for some time. Security Lab merges all these together.

For example, GitHub has been working for the past two years on rolling out security notifications that warn project maintainers about dependencies that contain security flaws.

Earlier this year, GitHub started testing a feature that would enable project authors to create “automated security updates.” When GitHub would detect a security flaw inside a project’s dependency, GitHub would automatically update the dependency and release a new project version on behalf of the project maintainer.

The feature has been in beta testing for all 2019, but starting today automated security updates are generally available and have been rolled out to every active repository with security alerts enabled. [Also see official announcement.]
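At its core, an automated security update is a version comparison against advisory data followed by a dependency bump. The sketch below uses a hypothetical package name and advisory entry (real alerts draw on GitHub’s Advisory Database, and real version comparison should use a proper semver library):

```python
def parse_version(v: str):
    """Naive dotted-version parser; real tooling should use a semver library."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory data for illustration only.
ADVISORIES = {
    "examplelib": [{"vulnerable_below": "4.17.12", "fixed_in": "4.17.12"}],
}

def check(package: str, installed: str):
    """Return the version to bump to, or None if the install is unaffected.
    A minimal sketch of the comparison behind automated security updates."""
    for advisory in ADVISORIES.get(package, []):
        if parse_version(installed) < parse_version(advisory["vulnerable_below"]):
            return advisory["fixed_in"]
    return None

print(check("examplelib", "4.17.11"))  # vulnerable: a bump is suggested
print(check("examplelib", "4.17.12"))  # patched: no action needed
```

When a check like this fires, the service opens a pull request updating the manifest to the fixed version on the maintainer’s behalf.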

[Screenshot of GitHub’s automated security fixes. Image: GitHub]

Furthermore, GitHub also recently became an authorized CVE Numbering Authority (CNA), which means it can issue CVE identifiers for vulnerabilities. GitHub didn’t apply to become a CNA for nothing.

Its CNA capability has been added to a new service feature called “security advisories.” These are special entries in a project’s Issues Tracker where security flaws are handled in private.

Once a security flaw is fixed, the project owner can publish the security advisory, and GitHub will warn the owners of any projects that use vulnerable versions of the original maintainer’s code.

But before publishing a security advisory, project owners can also request and receive a CVE number for their project’s vulnerability directly from GitHub.

Previously, many open source project owners who hosted their projects on GitHub didn’t bother requesting a CVE number due to the arduous process.

However, getting CVE identifiers is crucial, as these IDs and additional details can be integrated into many other security tools that scan source code and projects for vulnerabilities, helping companies detect flaws in open source tools that they would have normally missed. [Also see official announcement.]

[Screenshot of a GitHub CVE security advisory. Image: GitHub]

And in addition to the new GitHub Security Lab, the code-sharing platform is also launching the GitHub Advisory Database, where it will collect all security advisories found on the platform, to make it easier for everyone to keep track of security flaws found in GitHub-hosted projects. [Also see official announcement.]

And last, but not least, GitHub also updated Token Scanning, its in-house service that can scan users’ projects for API keys and tokens that have been accidentally left inside their source code.

Starting today, the service, which previously could detect API tokens from 20 services, can identify four more formats, from GoCardless, HashiCorp, Postman, and Tencent. [Also see official announcement.]
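Token scanning of this kind boils down to running provider-specific patterns over committed text. The patterns below are illustrative approximations (GitHub’s actual formats and provider list differ), but they show the shape of the technique:

```python
import re

# Illustrative patterns only; real services maintain per-provider formats.
TOKEN_PATTERNS = {
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_for_tokens(text: str):
    """Return (pattern name, matched token) pairs found in the given text:
    a sketch of regex-based secret scanning over committed source."""
    findings = []
    for name, pattern in TOKEN_PATTERNS.items():
        findings.extend((name, match) for match in pattern.findall(text))
    return findings

snippet = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"  # committed by mistake'
print(scan_for_tokens(snippet))
```

On a match, the real service notifies the provider so the leaked credential can be revoked before it is abused.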

Source: https://www.zdnet.com/article/github-launches-security-lab-to-help-secure-open-source-ecosystem/

Security

iPhone owners should delete these 17 apps now, security experts warn

APPLE has confirmed that 17 applications have been removed from the App Store after they were found to be secretly committing fraud behind users’ backs to quietly collect advertising revenue from their smartphones. Here’s which apps were called out, so you can immediately delete any that are still sitting pretty on your iOS home screen.

If you’ve got any of these apps on your iPhone, you really need to do something about it (Image: GETTY)

If you’ve got any of these 17 apps saved on your iPhone, you’d best delete them as soon as possible.

Apple has confirmed the applications have now been wiped from its App Store, but you’ll still need to manually delete them from your smartphone if you’d already downloaded and run the software. The apps, which were all created by a single developer, were maliciously collecting advertising revenue behind iPhone owners’ backs.

The warning comes just hours after Android users were cautioned to delete a number of malicious apps from Google’s rival Play Store.

Mobile security firm Wandera sniffed out the malicious software made available for iPhone owners worldwide. For users, it would be almost impossible to tell that anything was wrong, since the apps did exactly what they promised on the tin… except that they were also secretly committing fraud in the background on your iPhone.

“The objective of most clicker trojans is to generate revenue for the attacker on a pay-per-click basis by inflating website traffic. They can also be used to drain the budget of a competitor by artificially inflating the balance owed to the ad network,” the security firm explains.

Although the apps weren’t designed to cause any direct harm to users or their smartphones themselves, the nefarious behind-the-scenes activity would drain mobile data faster than usual, so if you’re not on an unlimited 4G plan, it could cost you each month. Secondly, the activity from the apps could also cost you precious battery life, as well as slow down your phone, since it’s having to process all the extra ad requests.

So, deleting the software could spare you additional monthly charges from your network provider, restore performance, and win back a few hours of battery life, which are all pretty substantial benefits.

Wandera claims these iPhone apps were able to slip past Apple’s stringent review process since the malicious code was never inside the apps themselves – therefore there was nothing for Apple to detect when scanning them before allowing them onto the App Store. Instead, the apps would receive instructions to begin their activities from a remote server hosted by the developers.

Apple says it’s now improving the app review process to stop this happening in future.

The malicious apps in question – check your iPhone for these (Image: WANDERA)

The same server was also designed to control a similar set of Android apps. Unfortunately, the weaker security on the Android operating system meant that the developer was able to go even further with these malicious apps – causing direct harm to the user.

According to the Wandera security team, “Android apps communicating with the same server were gathering private information from the user’s device, such as the make and model of the device, the user’s country of residence and various configuration details.

“One example involved users who had been fraudulently subscribed to expensive content services following the installation of an infected app.”

The full list of infected apps:

  • RTO Vehicle Information
  • EMI Calculator & Loan Planner
  • File Manager – Documents
  • Smart GPS Speedometer
  • CrickOne – Live Cricket Scores
  • Daily Fitness – Yoga Poses
  • FM Radio – Internet Radio
  • My Train Info – IRCTC & PNR (not listed under developer profile)
  • Around Me Place Finder
  • Easy Contacts Backup Manager
  • Ramadan Times 2019
  • Restaurant Finder – Find Food
  • BMI Calculator – BMR Calc
  • Dual Accounts
  • Video Editor – Mute Video
  • Islamic World – Qibla
  • Smart Video Compressor

All 17 infected apps are published on the App Stores in various countries by the same developer, India-based AppAspect Technologies Pvt. Ltd. So, if you spot the name on a listing of an app that looks good… don’t download it.

Source: https://www.express.co.uk/life-style/science-technology/1196281/iPhone-Delete-These-Apps


Security

Top Linux developer on Intel chip security problems: ‘They’re not going away.’

Greg Kroah-Hartman, the stable Linux kernel maintainer, could have prefaced his Open Source Summit Europe keynote speech, MDS, Fallout, Zombieland, and Linux, by paraphrasing Winston Churchill: I have nothing to offer but blood, sweat, and tears for dealing with Intel CPUs’ security problems.

Or as a Chinese developer told him recently about these problems: “This is a sad talk.” The sadness is that the same Intel CPU speculative execution problems, which led to Meltdown and Spectre security issues, are alive and well and causing more trouble.

The problem with how Intel designed speculative execution is that, while anticipating the next action for the CPU to take does indeed speed things up, it also exposes data along the way. That’s bad enough on your own server, but when it breaks down the barriers between virtual machines (VMs) in cloud computing environments, it’s a security nightmare.

Kroah-Hartman said, “These problems are going to be with us for a very long time, they’re not going away. They’re all CPU bugs, in some ways they’re all the same problem,” but each has to be solved in its own way. “MDS, RIDL, Fallout, ZombieLoad: They’re all variants of the same basic problem.”

And they’re all potentially deadly for your security: “RIDL and ZombieLoad, for example, can steal data across applications, virtual machines, even secure enclaves. The last is really funny, because [Intel Software Guard Extensions (SGX)] is what’s supposed to be secure inside Intel chips, but it turns out it’s really porous. You can see right through this thing.”
 
To fix each problem as it pops up, you must patch both your Linux kernel and your CPU’s BIOS and microcode. This is not a Linux problem; any operating system faces the same problem. 

OpenBSD, a BSD Unix devoted to security first and foremost, was the first to come up with what’s currently the best answer for this class of security holes, Kroah-Hartman freely admits: Turn Intel’s simultaneous multithreading (SMT) off and deal with the performance hit. Linux has adopted this method.

But it’s not enough. You must secure the operating system as each new way to exploit hyper-threading appears. For Linux, that means flushing the CPU buffers every time there’s a context switch (e.g. when the CPU stops running one VM and starts another).

You can probably guess what the trouble is. Each buffer flush takes a lot of time, and the more VMs, containers, whatever, you’re running, the more time you lose.

How bad are these delays? It depends on the job. Kroah-Hartman said he spends his days writing and answering emails. That activity only takes a 2% performance hit. That’s not bad at all. He also is always building Linux kernels. That takes a much more painful 20% performance hit. Just how bad will it be for you? The only way to know is to benchmark your workloads. 

Of course, it’s up to you, but as Kroah-Hartman said, “The bad part of this is that you now must choose: Performance or security. And that is not a good option.” It’s also, he reminded the developer-heavy crowd, which choice your cloud provider has made for you.

But wait! The bad news keeps coming. You must update your Linux kernel and patch your microcode as each Intel-related security update comes down the pike. The only way to be safe is to run the latest Canonical, Debian, Red Hat, or SUSE distros, or the newest long-term support Linux kernel. Kroah-Hartman added, “If you are not using a supported Linux distribution kernel or a stable/long term kernel, you have an insecure system.”
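You can check where your own system stands: the Linux kernel reports its mitigation status through one file per vulnerability under `/sys/devices/system/cpu/vulnerabilities`. The sysfs path is the real interface; the demo below reads a fake tree so the sketch runs on any machine, but pointing `mitigation_report()` at the default path on a Linux box shows your actual status:

```python
from pathlib import Path
import tempfile

def mitigation_report(vuln_dir="/sys/devices/system/cpu/vulnerabilities"):
    """Read the kernel's per-vulnerability status files (mds, spectre_v2, ...),
    each holding a one-line status such as 'Mitigation: ...' or 'Vulnerable'."""
    return {f.name: f.read_text().strip() for f in sorted(Path(vuln_dir).iterdir())}

# Demo against a fake sysfs tree so this runs anywhere (statuses are examples):
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "mds").write_text("Vulnerable; SMT vulnerable\n")
    (Path(d) / "spectre_v2").write_text("Mitigation: Retpolines\n")
    report = mitigation_report(d)
    for name, status in report.items():
        flag = "!!" if status.startswith("Vulnerable") else "ok"
        print(f"[{flag}] {name}: {status}")
```

Any line flagged `!!` means your kernel or microcode lacks a mitigation for that bug, which is exactly the unsupported-kernel situation Kroah-Hartman warns about.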

So, on that note, you can look forward to constantly updating your operating system and hardware until the current generation of Intel processors are in antique shops. And you’ll be stuck with poor performance if you elect to put security ahead of speed. Fun, fun, fun!

Source: https://www.zdnet.com/article/top-linux-developer-on-intel-chip-security-problems-theyre-not-going-away/
