
Security

An iPhone App That Protects Your Privacy for Real


THE DATA ECONOMY has too often betrayed its customers, whether it’s Facebook sharing data you didn’t even realize it had, or invisible trackers that follow you around the web without your knowledge. But a new app launching in the iOS App Store today wants to help you take back some control—without making your life harder.

The Guardian Firewall app runs in the background of an iOS device, and stymies data and location trackers while compiling a list of all the times your apps attempt to deploy them. It does so without breaking functionality in your apps or making them unusable. Plus, the blow-by-blow list gives you much deeper insight than you would normally have into what your phone is doing behind the scenes. Guardian Firewall also takes pains to avoid becoming another cog in the data machine itself. You don’t need to make an account to run the firewall, and the app is architected to box its developers out of user data completely.

“We don’t log IPs, because that’s toxic,” says Will Strafach, a long-time iOS jailbreaker and founder of Sudo Security Group, which develops Guardian Firewall. “To us, data is a liability, not an asset. But to think that way you’ve got to think outside the box, because it means you can’t just choose the simplest solutions to engineering problems a lot of times. But if you are willing to spend the time and resources, you can find solutions where there isn’t a privacy downside.”

Block Party

The Guardian Firewall development team, which also includes noted jailbreaker Joshua Hill, currently comprises four engineers and two security researchers, and the app translates their collective knowledge about App Store services into automatic blocking for modules within apps that are known to be potentially invasive. The service costs $10 per month, or $100 per year. You pay through an in-app purchase using your Apple ID, which means Guardian Firewall doesn’t manage the transaction or the data associated with it. The team doesn’t have immediate plans to expand to Android, because their expertise lies so specifically in iOS.

LILY HAY NEWMAN COVERS INFORMATION SECURITY, DIGITAL PRIVACY, AND HACKING FOR WIRED.

To start using Guardian Firewall, all you do is tap a big button on the main screen. It turns green and says “Protection is on.” From the user’s perspective, that’s it. Under the hood, the app establishes a virtual private network connection, and creates a random connection identity for it to keep track of people’s data without knowing who they are. If you turn Guardian Firewall protection off and then on again, the app establishes a new connection and new connection identity, meaning that there’s no way to connect the dots between your sessions.
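The unlinkable-session idea generalizes beyond this one app. Here is a minimal sketch of the pattern described above — this is an illustration, not Guardian Firewall's actual code: each session gets a fresh random identifier that encodes nothing about the user, and toggling protection discards the old identity entirely, so sessions cannot be correlated server-side.

```python
import uuid

class FirewallSession:
    """Illustrative sketch of a per-session, unlinkable connection identity."""

    def __init__(self):
        # uuid4 is random; it encodes nothing about the user or device.
        self.connection_id = str(uuid.uuid4())

    def restart(self):
        # Toggling protection off and on yields a brand-new identity;
        # nothing links the new session to the old one.
        return FirewallSession()
```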

LILY NEWMAN VIA GUARDIAN

The app uses its VPN connection to filter your data in the cloud, but the stream is fully encrypted. Guardian Firewall has automated machine learning mechanisms that evaluate how an app behaves and, particularly, whether it sends out data to third parties, like marketing analytics firms. The idea is to flag whenever an app tries to communicate beyond its own infrastructure. Guardian Firewall is also able to detect and block other types of potentially invasive behavior, like page hijackers that push mobile pop-ups.
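Reduced to its essence, that flagging idea is a first-party/third-party check on outbound destinations. The sketch below is a simplified illustration of the concept, not Guardian Firewall's actual logic, and the domain names are made up:

```python
def flag_third_party(app_domains, outbound_hosts):
    """Return the outbound hosts that don't belong to the app's own domains.

    app_domains: the domains considered the app's own infrastructure.
    outbound_hosts: hostnames the app was observed contacting.
    """
    own = {d.lower() for d in app_domains}

    def is_first_party(host):
        host = host.lower()
        # A host is first-party if it matches an app domain exactly
        # or is a subdomain of one (e.g. api.example.com -> example.com).
        return any(host == d or host.endswith("." + d) for d in own)

    return [h for h in outbound_hosts if not is_first_party(h)]
```

Anything in the returned list — say, a marketing-analytics endpoint — would be a candidate for blocking.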

Apple itself has already been working on baking similar protections directly into iOS, particularly when it comes to blocking web trackers in Safari that would otherwise fingerprint users across multiple sites. But Guardian Firewall aims to go a few steps further, and to apply across all apps.

Test Drive

I’ve been testing Guardian Firewall on and off for months, and have found it easy to leave it running in the background. The connection doesn’t seem to slow things down on my phone or eat my battery, and the list of trackers the app has blocked is constantly growing—310 location trackers, seven page hijackers, and 3,200 data trackers so far. It felt a little uncomfortable at first to have something constantly running in the background, but it was fascinating to see all the shenanigans happening on my iPhone all the time. Some beta testers have noted that they wish Guardian Firewall offered a customizable blacklisting feature, instead of only automated blocking. But I didn’t personally feel a desire to put time into customizing the app. To me the whole value is in “set it and forget it.”

“‘How can we trust you?’ is just such a valid question for users to be asking all app makers.”

WILL STRAFACH, SUDO SECURITY GROUP

Guardian Firewall has already engineered its way around at least one privacy conundrum during its limited prerelease. Someone essentially launched a denial of service attack against the service by rapidly initiating a deluge of connection requests all at once. Guardian Firewall couldn’t check what IP address or addresses the requests came from, though, because it doesn’t record IP addresses. The team could have solved the issue by altering its policy to access IP addresses during the small window when devices are establishing their connection and then delete the data. But “we determined that that would go against our values,” Strafach says.

Instead, the developers devised a workaround that uses a device check offered by Apple, but encrypts the check so Guardian Firewall itself can’t see the data that’s sent to Apple. The only thing Guardian Firewall finds out at the end of the process is whether the device is a legitimate iOS device or not.

As with any VPN, the ultimate test of Guardian Firewall’s privacy protections and approach to minimal data retention would be a subpoena that is later made public through a trial in which the service has nothing to hand over. And Strafach says that while the company will cooperate with investigators as required by law, it has taken precautions, both internally and in contracts with its infrastructure providers, to ensure that it can be transparent with users about any law enforcement requests.

“Looking over their privacy policy it looks really good,” says William Budington, a senior staff technologist at the Electronic Frontier Foundation. “You’re not logging in, and there’s radical data minimization in general. If they don’t have data stored on a server then a breach or buy-out won’t actually have that much of a negative impact. But keeping an eye on the privacy policy and news about the company is a good practice in general with VPNs, because things can slowly change.”

Not Just Another VPN

Of course, many of the same questions about trust apply to Guardian Firewall as they do to other VPNs. You’re still sending all of your data to their server. But at least Guardian Firewall uses the built-in iOS VPN application programming interface instead of trying to reinvent the wheel, and the encryption scheme protecting your data similarly draws on vetted industry standards, rather than anything proprietary. Strafach also says Guardian Firewall’s goal is to be as open and transparent about its actions as possible—and agrees that people should think carefully about whether it suits their specific needs, as they should for any app.

“People should know exactly what Guardian is doing and if it’s just a concept they don’t like, or they think we’re not the right data custodians for them then so be it, that’s cool,” he says. “‘How can we trust you?’ is just such a valid question for users to be asking all app makers.”

One thing Guardian Firewall can’t currently do is identify which specific apps trigger its tracking alerts, a feature that I found myself wishing it had. If anything, though, the absence helps solidify its privacy cred. Strafach and his team hadn’t figured out how to achieve that granularity without inadvertently creating a potentially identifiable data set of all the apps on your phone. An upcoming solution still won’t directly connect warnings to specific apps, but will instead show which apps were running at that timestamp and could have caused the alert.

“All you’ll be able to see is ‘at this time we saw this tracker and these are the apps which could be causing it,'” Strafach says. “So maybe that’s one app or maybe three, but it’s a compromise that gives more of the answer users want while it respects their privacy.”

“Clearly the biggest risk to the everyday iOS user is apps surreptitiously tracking them, which unfortunately the majority of apps do—rather massively,” says Patrick Wardle, a Mac security specialist. “Guardian generically thwarts such trackers. I love that Will and Josh, who are former jailbreakers, tackled this. I bet it wasn’t easy, but with their unique skills they are probably one of the few teams that could figure it out and make it all seamlessly work in the constrictive iOS environment.”

It’s complicated and resource-intensive to make all of these wild workarounds happen, but if Guardian Firewall can do it and be financially viable, Strafach hopes that the project will become a sort of case study that privacy pays. With so many companies in the marketplace seemingly convinced that that’s not the case, there’s a lot riding on its success.

Source: https://www.wired.com/story/guardian-firewall-ios-app/



iPhone owners should delete these 17 apps now, security experts warn


APPLE has confirmed that 17 applications have been removed from the App Store after they were found to be secretly committing fraud behind users’ backs to quietly collect advertising revenue from their smartphones. Here’s which apps were called out, so you can immediately delete any that are still sitting pretty on your iOS home screen.

If you’ve got any of these apps on your iPhone, you really need to do something about it (Image: GETTY)

If you’ve got any of these 17 apps saved on your iPhone, you’d best delete them as soon as possible.

Apple has confirmed the applications have now been wiped from its App Store, but you’ll still need to manually delete them from your smartphone if you’d already downloaded and run the software. The apps, which were all created by a single developer, were maliciously collecting advertising revenue behind iPhone owners’ backs.

The warning comes just hours after Android users were cautioned to delete a number of malicious apps from Google’s rival Play Store.

Mobile security firm Wandera sniffed out the malicious software made available to iPhone owners worldwide. For users, it would be almost impossible to tell that anything was wrong, since the apps did exactly what they promised on the tin… except that they were also secretly committing fraud in the background on your iPhone.

“The objective of most clicker trojans is to generate revenue for the attacker on a pay-per-click basis by inflating website traffic. They can also be used to drain the budget of a competitor by artificially inflating the balance owed to the ad network,” the security firm explains.

Although the apps weren’t designed to cause any direct harm to users or their smartphones, the nefarious behind-the-scenes activity would drain mobile data faster than usual, so if you’re not on an unlimited 4G plan, it could cost you each month. The activity from the apps could also cost you precious battery life, as well as slowing down your phone, since it has to process all the extra ad requests.

So, deleting the software could mean a drop in any additional monthly charges from your network provider, faster performance, and a few more hours of battery life, which are all pretty substantial benefits.

Wandera claims these iPhone apps were able to slip past Apple’s stringent review process because the malicious code was never inside the apps themselves – therefore there was nothing for Apple to detect when scanning them before allowing them onto the App Store. Instead, the apps would receive instructions to begin their activities from a remote server hosted by the developers.

Apple says it’s now improving the app review process to stop this happening in future.

The malicious apps in question – check your iPhone for these (Image: WANDERA)

The same server was also designed to control a similar set of Android apps. Unfortunately, the weaker security on the Android operating system meant that the developer was able to go even further with these malicious apps – causing direct harm to the user.

According to the Wandera security team, “Android apps communicating with the same server were gathering private information from the user’s device, such as the make and model of the device, the user’s country of residence and various configuration details.

“One example involved users who had been fraudulently subscribed to expensive content services following the installation of an infected app.”

The full list of infected apps:

  • RTO Vehicle Information
  • EMI Calculator & Loan Planner
  • File Manager – Documents
  • Smart GPS Speedometer
  • CrickOne – Live Cricket Scores
  • Daily Fitness – Yoga Poses
  • FM Radio – Internet Radio
  • My Train Info – IRCTC & PNR (not listed under developer profile)
  • Around Me Place Finder
  • Easy Contacts Backup Manager
  • Ramadan Times 2019
  • Restaurant Finder – Find Food
  • BMI Calculator – BMR Calc
  • Dual Accounts
  • Video Editor – Mute Video
  • Islamic World – Qibla
  • Smart Video Compressor

All 17 infected apps are published on the App Stores in various countries by the same developer, India-based AppAspect Technologies Pvt. Ltd. So, if you spot the name on a listing of an app that looks good… don’t download it.

Source: https://www.express.co.uk/life-style/science-technology/1196281/iPhone-Delete-These-Apps



Top Linux developer on Intel chip security problems: ‘They’re not going away.’


Greg Kroah-Hartman, the stable Linux kernel maintainer, could have prefaced his Open Source Summit Europe keynote speech, “MDS, Fallout, Zombieland, and Linux,” by paraphrasing Winston Churchill: I have nothing to offer but blood, sweat, and tears for dealing with Intel CPUs’ security problems.

Or as a Chinese developer told him recently about these problems: “This is a sad talk.” The sadness is that the same Intel CPU speculative execution problems, which led to Meltdown and Spectre security issues, are alive and well and causing more trouble.

The problem with how Intel designed speculative execution is that, while anticipating the next action for the CPU to take does indeed speed things up, it also exposes data along the way. That’s bad enough on your own server, but when it breaks down the barriers between virtual machines (VMs) in cloud computing environments, it’s a security nightmare.

Kroah-Hartman said, “These problems are going to be with us for a very long time, they’re not going away. They’re all CPU bugs, in some ways they’re all the same problem,” but each has to be solved in its own way. “MDS, RIDL, Fallout, Zombieload: They’re all variants of the same basic problem.”

And they’re all potentially deadly for your security: “RIDL and Zombieload, for example, can steal data across applications, virtual machines, even secure enclaves. The last is really funny, because [Intel Software Guard Extensions (SGX)] is what’s supposed to be secure inside Intel chips [but it turns out it’s] really porous. You can see right through this thing.”
 
To fix each problem as it pops up, you must patch both your Linux kernel and your CPU’s BIOS and microcode. This is not a Linux problem; any operating system faces the same problem. 

OpenBSD, a BSD Unix devoted to security first and foremost, was, Kroah-Hartman freely admits, the first to come up with what’s currently the best answer for this class of security holes: Turn Intel’s simultaneous multithreading (SMT) off and deal with the performance hit. Linux has adopted this method.

But it’s not enough. You must secure the operating system as each new way to exploit hyper-threading appears. For Linux, that means flushing the CPU buffers every time there’s a context switch (e.g. when the CPU stops running one VM and starts another).
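Recent Linux kernels report which of these mitigations they have applied through sysfs. A minimal sketch for reading that status follows; the `/sys/devices/system/cpu/vulnerabilities` path is standard on current kernels, though the directory may be absent on older kernels or other operating systems, so the function degrades gracefully:

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_mitigations(vuln_dir=VULN_DIR):
    """Return {vulnerability_name: kernel status string} from sysfs.

    Each file (e.g. 'mds', 'spectre_v2') contains a one-line status
    such as 'Mitigation: Clear CPU buffers; SMT disabled'.
    Returns an empty dict when the directory doesn't exist.
    """
    status = {}
    if not os.path.isdir(vuln_dir):
        return status  # non-Linux or pre-mitigation kernel
    for name in sorted(os.listdir(vuln_dir)):
        with open(os.path.join(vuln_dir, name)) as f:
            status[name] = f.read().strip()
    return status
```

If an entry reads “Vulnerable,” the kernel or microcode on that machine hasn’t been patched for that particular bug.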

You can probably guess what the trouble is. Each buffer flush takes a lot of time, and the more VMs, containers, whatever, you’re running, the more time you lose.

How bad are these delays? It depends on the job. Kroah-Hartman said he spends his days writing and answering emails. That activity only takes a 2% performance hit. That’s not bad at all. He also is always building Linux kernels. That takes a much more painful 20% performance hit. Just how bad will it be for you? The only way to know is to benchmark your workloads. 

Of course, it’s up to you, but as Kroah-Hartman said, “The bad part of this is that you now must choose: Performance or security. And that is not a good option.” It’s also, he reminded the developer-heavy crowd, a choice your cloud provider may have already made for you.

But wait! The bad news keeps coming. You must update your Linux kernel and patch your microcode as each Intel-related security update comes down the pike. The only way to be safe is to run the latest Canonical, Debian, Red Hat, or SUSE distros, or the newest long-term support Linux kernel. Kroah-Hartman added, “If you are not using a supported Linux distribution kernel or a stable/long term kernel, you have an insecure system.”

So, on that note, you can look forward to constantly updating your operating system and hardware until the current generation of Intel processors are in antique shops. And you’ll be stuck with poor performance if you elect to put security ahead of speed. Fun, fun, fun!

Source: https://www.zdnet.com/article/top-linux-developer-on-intel-chip-security-problems-theyre-not-going-away/



Hackers steal secret crypto keys for NordVPN. Here’s what we know so far


Breach happened 19 months ago. Popular VPN service is only disclosing it now.

Hackers breached a server used by popular virtual private network provider NordVPN and stole encryption keys that could be used to mount decryption attacks on segments of its customer base.

A log of the commands used in the attack suggests that the hackers had root access, meaning they had almost unfettered control over the server and could read or modify just about any data stored on it. One of three private keys leaked was used to secure a digital certificate that provided HTTPS encryption for nordvpn.com. The key wasn’t set to expire until October 2018, some seven months after the March 2018 breach. Attackers could have used the compromised certificate to impersonate the nordvpn.com website or mount man-in-the-middle attacks on people visiting the real one. Details of the breach have been circulating online since at least May 2018.

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

VikingVPN officials have yet to comment.

Serious concerns

One of those keys expired on December 31, 2018, and the other went to its grave on July 10 of the same year, a company spokeswoman told me. She didn’t say what the purpose of those keys was. A cryptography feature known as perfect forward secrecy ensured that attackers couldn’t decrypt traffic simply by capturing encrypted packets as they traveled over the Internet. The keys, however, could still have been used in active attacks, in which hackers use leaked keys on their own server to intercept and decrypt data.
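Forward secrecy works because each session derives its key from fresh ephemeral secrets that both sides discard afterward. A toy Diffie-Hellman exchange illustrates the idea; the parameters below are deliberately tiny and insecure, chosen purely for demonstration and bearing no relation to NordVPN's actual configuration:

```python
import secrets

# Toy finite-field Diffie-Hellman. The modulus is a Mersenne prime that
# is far too small for real cryptography, and the generator is unvalidated;
# this is a demonstration of the forward-secrecy pattern only.
P = 2**127 - 1
G = 3

def ephemeral_handshake():
    """One session: both sides pick fresh ephemeral secrets, derive the
    same shared key, then (in a real protocol) discard the secrets."""
    a = secrets.randbelow(P - 2) + 1  # side A's ephemeral secret
    b = secrets.randbelow(P - 2) + 1  # side B's ephemeral secret
    shared_a = pow(pow(G, b, P), a, P)  # what side A computes
    shared_b = pow(pow(G, a, P), b, P)  # what side B computes
    assert shared_a == shared_b
    return shared_a
```

Because every session derives a new key this way and the ephemeral secrets are thrown away, a stolen long-term TLS key cannot retroactively decrypt previously captured packets; it only enables new, active impersonation attacks.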

It was unclear how long the attackers remained present on the server or if they were able to use their highly privileged access to commit other serious offenses. Security experts said the severity of the server compromise—coupled with the theft of the keys and the lack of details from NordVPN—raised serious concerns.

Here is some of what Dan Guido, who is the CEO of security firm Trail of Bits, told me:

Compromised master secrets, like those stolen from NordVPN, can be used to decrypt the window between key renegotiations and impersonate their service to others… I don’t care what was leaked as much as the access that would have been required to reach it. We don’t know what happened, what further access was gained, or what abuse may have occurred. There are many possibilities once you have access to these types of master secrets and root server access.

Insecure remote management

In a statement issued to reporters, NordVPN officials characterized the damage that was done in the attack as limited.

Officials wrote:

The server itself did not contain any user activity logs… None of our applications send user-created credentials for authentication, so usernames and passwords couldn’t have been intercepted either. The exact configuration file found on the internet by security researchers ceased to exist on March 5, 2018. This was an isolated case, no other datacenter providers we use have been affected.

The breach was the result of hackers exploiting an insecure remote-management system that administrators of a Finland-based datacenter installed on a server NordVPN leased. The unnamed datacenter, the statement said, installed the vulnerable management system without ever disclosing it to NordVPN. NordVPN terminated its contract with the datacenter after the remote management system came to light a few months later.

NordVPN first disclosed the breach to reporters on Sunday following third-party reports like this one on Twitter. The statement said NordVPN officials didn’t disclose the breach to customers until they had ensured the rest of the network wasn’t vulnerable to similar attacks.

The statement went on to refer to the TLS key as expired, even though it was valid for seven months following the breach. Company officials wrote:

The expired TLS key was taken at the same time the datacenter was exploited. However, the key couldn’t possibly have been used to decrypt the VPN traffic of any other server. On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated MiTM attack to intercept a single connection that tried to access nordvpn.com.

Not as hard as claimed

The suggestion that active man-in-the-middle attacks are complicated or impractical to carry out is problematic. Such attacks can be carried out on public networks or by employees of Internet services. They are precisely the type of attacks that VPNs are supposed to protect against.

“Intercepting TLS traffic isn’t as hard as they make it seem,” said a security consultant who uses the handle hexdefined and has spent the past 36 hours analyzing the data exposed in the breach. “There are tools to do it, and I was able to set up a Web server using their TLS key with two lines of configuration. The attacker would need to be able to intercept the victim’s traffic (e.g. on public Wi-Fi).”

A cryptographically impersonated site using NordVPN’s stolen TLS key. (Image: hexdefined)

Note also that the statement says only that the expired TLS key couldn’t have been used to decrypt VPN traffic of any other server. The statement makes no mention of the other two keys and what type of access they allowed. The compromise of a private certificate authority could be especially severe because it might allow the attackers to compromise multiple keys that are generated by the CA.

Putting all your eggs in one basket

VPNs put all of a computer’s Internet traffic into a single encrypted tunnel that’s only decrypted and sent to its final destination after it reaches one of the provider’s servers. That puts the VPN provider in the position of seeing huge amounts of its customers’ online habits and metadata, including server IP addresses, SNI information, and any traffic that isn’t encrypted.

The VPN provider has received recommendations and favorable reviews from CNET, TechRadar, and PCMag. But not everyone has been so sanguine. Kenneth White, a senior network engineer specializing in VPNs, has long listed NordVPN and TorGuard as two of the VPNs to reject because, among other things, they post pre-shared keys online.

Until more information is available, it’s hard to say precisely how people who use NordVPN should respond. At a minimum, users should press NordVPN to provide many more details about the breach and the keys and any other data that were leaked. Kenneth White, meanwhile, suggested people move off the service altogether.

“I have recommended against most consumer VPN services for years, including NordVPN,” he told me. “[The services’] incident response and attempted PR spin here has only reinforced that opinion. They have recklessly put activists’ lives at risk in the process. They are downplaying the seriousness of an incident they didn’t even detect, in which attackers had unfettered admin LXC ‘god mode’ access. And they only notified customers when reporters reached out to them for comment.”

Source: https://arstechnica.com/information-technology/2019/10/hackers-steal-secret-crypto-keys-for-nordvpn-heres-what-we-know-so-far/
