Another set of regulations means another round of discussions between attorneys and clients, this time requiring detailed insight into what the cybersecurity engineering world can actually deliver. The goal, of course, is to make the response practical, effective and valuable. Read this blog to the end, and I will show you that the goal is readily attainable.

New cybersecurity regulations first introduced by the New York State Department of Financial Services (“NYDFS”) in September 2016 and taking effect in their final form on March 1 represent the dawn of a new era of cybersecurity regulation. Formally titled “Cybersecurity Requirements for Financial Services Companies” (the “NY Regulations”), these rules are the first foray by a state into the realm of cybersecurity regulation. (The full NY Regulations can be found here and a NYDFS summary here.) They leave behind the tried but not-particularly-true approaches of voluntary risk evaluation (e.g., the NIST Cybersecurity Framework) and post-breach remedial action (such as those regularly required by the Federal Trade Commission) and instead create a comprehensive system, based on periodic mandated risk assessment, designed to result in robust cybersecurity systems capable of preventing cyber incidents, rather than merely evaluating cyber maturity or reacting to data security breaches.

In introducing the draft Regulations, New York Governor Andrew Cuomo asserted that:

New York, the financial capital of the world, is leading the nation in taking decisive action to protect consumers and our financial system from serious economic harm that is often perpetrated by state-sponsored organizations, global terrorist networks, and other criminal enterprises.

Clearly, Mr. Cuomo had no illusions about the potential reach or significance of the Regulations, and neither should practitioners, wherever located, with clients in, or even proximate to, the financial services industry. The Regulations contemplate a new, holistic approach to cybersecurity, apply to a broad universe of industries operating in the State of New York, and are likely to affect regulation far beyond that state’s borders.

Who must comply with the Regulations?

The reach of the Regulations appears to be extraordinarily broad, as might be expected of regulators in “the financial capital of the world.” They apply to “Covered Entities,” defined as “any Person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization [from the NYDFS] under the Banking Law, the Insurance Law or the Financial Services Law,” but exempt certain very small Entities: those with (1) fewer than 10 employees or independent contractors; (2) less than $5 million in gross annual revenue in each of the past three fiscal years; or (3) less than $10 million in year-end total assets, calculated under GAAP and including the assets of affiliates. (Note that these small concerns are still considered “Covered Entities” and so must still comply with certain portions of the NY Regulations.) It is important to read and consider that definition carefully, because it covers a much larger universe than one might expect. Besides banks and other obvious financial institutions, the NYDFS also regulates insurance companies (including health insurers), mortgage lenders, mortgage brokers and any other businesses covered by any of the New York Banking, Insurance or Financial Services Laws. And, because the touchstone of the Regulations is authorization from the NYDFS, the Regulations by their terms apply to national and international concerns with headquarters, and even substantially all operations, outside of New York, so long as they operate within the State of New York under NYDFS authorization and do not fall within the de minimis exceptions provided in the Regulations.

When do the Regulations take effect?

The Regulations become effective March 1, 2017, and, with certain exceptions, are subject to a 180-day transition period. Covered Entities must file their first annual certifications with the NYDFS no later than February 15, 2018.

What do the Regulations require?

The Regulations are intended to create an expansive, integrated, risk-based system to ensure that regulated entities develop and maintain robust cybersecurity capabilities and, therefore, are able to properly safeguard sensitive nonpublic information in their possession. Not surprisingly, with such a lofty goal, they have a large number of largely interconnected “moving parts,” which must fit, and work, together seamlessly. The following are some of the most critical elements of the Regulations.

  • Cybersecurity Program. Each Covered Entity must develop, implement and maintain a Cybersecurity Program, based on its Risk Assessment (discussed below), that performs these core functions:
    • Identify and assess internal and external cyber risks to the security or integrity of information stored on the Entity’s information systems;
    • Create infrastructure and implement policies and procedures to prevent unauthorized access to the Entity’s information systems and use of nonpublic information on such systems;
    • Detect cybersecurity events, respond to such events to mitigate adverse effects and recover and restore normal operations and services; and
    • Meet regulatory reporting obligations.
  • Cybersecurity Policy. Each Covered Entity must adopt a written Cybersecurity Policy, made up of policies and procedures for the protection of its information systems and of nonpublic information stored on those systems. The Cybersecurity Policy must be based on the Entity’s Risk Assessment (discussed below), approved by a senior officer (as defined) or the Entity’s board of directors and must address the following areas to the extent applicable:
    • Information security;
    • Data governance and classification;
    • Asset inventory and device management;
    • Access controls and identity management;
    • Business continuity and disaster recovery planning and resources;
    • Systems operations and availability concerns;
    • Systems and network security;
    • Systems and network monitoring;
    • Systems and application development and quality assurance;
    • Physical security and environmental controls;
    • Customer data privacy;
    • Vendor and third-party service provider management;
    • Risk assessment; and
    • Incident response.
  • Monitoring, Penetration and Vulnerability Testing. The Cybersecurity Program for each Covered Entity (other than those exempt under the de minimis standard) must include a program of ongoing monitoring and testing, developed in accordance with the Entity’s Risk Assessment (discussed below), to assess the effectiveness of the Entity’s Cybersecurity Program. This monitoring and testing regime must include either (1) continuous monitoring or (2) periodic penetration testing (in which the assessors “attempt to circumvent or defeat the security features of an information system”) and vulnerability assessments. In the absence of continuous monitoring, penetration testing must be performed at least annually to identify vulnerabilities in the Covered Entity’s network security systems, and vulnerability assessments, including systematic scans or reviews of information systems to identify known vulnerabilities, must be undertaken at least twice annually.
  • Risk Assessment. Each Covered Entity must undertake a periodic Risk Assessment to reassess the cybersecurity risks inherent in its business operations, including its information systems and the nonpublic information it collects and stores. Compliance with a number of other requirements is, under the Regulations, explicitly dependent on the Risk Assessments. These requirements include the Cybersecurity Program, Cybersecurity Policy, Penetration Testing and Vulnerability Assessment and Third-Party Service Provider Security Policy (all discussed herein), as well as Multi-Factor Authentication, Encryption of Non-Public Information and Training and Monitoring. While the original proposal for the Regulations called for the Risk Assessment to be performed annually, the final Regulations remove the “annual” requirement. Instead, the Regulations indicate that the Risk Assessment must be “sufficient to inform the design” of the required Cybersecurity Program. In other words, Covered Entities must undertake Risk Assessments with sufficient frequency to ensure that other provisions of their Cybersecurity Plans remain in compliance with the Regulations.

Other notable requirements under the Regulations include:

  • Chief Information Security Officer. Each Covered Entity (other than those exempt under the de minimis standard) must designate a Chief Information Security Officer (CISO) responsible for overseeing and implementing the institution’s Cybersecurity Program and enforcing its Cybersecurity Policy. The CISO must report to the Entity’s Board of Directors, at least twice annually, on a list of prescribed matters.
  • Third-Party Service Provider Security Policy. Each Covered Entity must have in place policies and procedures designed to ensure the security of information systems and nonpublic information accessible to, or held by, third parties.
  • Reporting Requirements. Covered Entities are required to report to the DFS as follows:
    • Within 72 hours after a determination that a “Cybersecurity Event” has occurred. A Cybersecurity Event is an event “that has a reasonable likelihood of materially harming any material part of the normal operation(s) of the Covered Entity.”
    • No later than February 15 of each year, each Covered Entity must certify that it is in compliance with the requirements of the Regulations.

Each Cybersecurity Program also must include:

  • Implementation and maintenance of an audit trail system to reconstruct transactions and log access privileges;
  • Limitations and periodic reviews of access privileges;
  • Written application security procedures, guidelines and standards that are reviewed and updated by the CISO at least annually;
  • Employment and training of cybersecurity personnel;
  • Multi-factor authentication for individuals with privileged access to internal systems, and for support functions including remote access;
  • Timely destruction of nonpublic information that is no longer necessary;
  • Monitoring of authorized users and cybersecurity awareness training for all personnel;
  • Encryption of all nonpublic information held or transmitted; and
  • Written incident response plan to respond to, and recover from, any cybersecurity event.

What should a Covered Entity do now?

It is clear that the Regulations are here to stay and that compliance will require many Covered Entities to develop and implement, or revise and upgrade, their processes and procedures. With an initial phase-in period of only six months, they had better act fast. The key is finding a trusted cybersecurity advisor. While law firms and accounting firms may wish to fill this need, the fact is that many of the requirements are best addressed by genuine cybersecurity engineers.

As noted above, the foundation of the Regulations is the Risk Assessment. Everything from the Cybersecurity Program, Cybersecurity Policy, Penetration Testing and Vulnerability Assessment and Third-Party Service Provider Security Policy, to Multi-Factor Authentication and Encryption of Non-Public Information Policies and Training and Monitoring requirements, depends on the Risk Assessment’s results. So the logical—and necessary—first step is for the Entity to undergo a thorough, state-of-the-art and unassailable Risk Assessment.

Assured Enterprises, Inc. (“Assured”), through its TripleHelixSM and AssuredScanDKV® tools, delivers a state-of-the-art, up-to-the-minute assessment of a Covered Entity’s cyber risk profile based on (1) criteria for the evaluation and categorization of identified cybersecurity risks and threats facing the entity, (2) criteria for the assessment of the confidentiality, integrity, security and availability of the Entity’s information systems and nonpublic information, including the adequacy of existing controls in the context of identified risks, and (3) recommendations for how identified risks should be mitigated or accepted. Uncannily, the NY Regulations are perfectly geared for these tools from Assured.

Assured’s deep scanning tool, AssuredScanDKV®, provides a critical, and heretofore unobtainable, deliverable for an effective Risk Assessment: an inventory of all known cybersecurity vulnerabilities hidden within the Covered Entity’s software applications. AssuredScanDKV® searches within bundled binary executable files, libraries and DLLs throughout the Entity’s enterprise network, detecting all known vulnerabilities residing in the software. The AssuredScanDKV® output report provides a prioritized list of the identified vulnerabilities, along with the remediation pathway for each. An AssuredScanDKV® scan is an invaluable element of a Risk Assessment, as it illuminates the previously dark and inaccessible corners of the Entity’s cybersecurity infrastructure and provides the Entity with the inputs necessary to construct an effective—and compliant—Cybersecurity Program.

Assured’s proprietary approach enables a thorough understanding of a Covered Entity’s security profile and provides a comprehensive roadmap for mitigating risk and improving its security posture. TripleHelixSM evaluates three strands of essential information:

  • Cyber Maturity Report. Identifies existing vulnerabilities, gaps and weaknesses.
  • Threat Report. Heightens understanding of potential risks by identifying bad actors, state-sponsored hackers, “hacktivists,” organized crime, commercial espionage experts, insider threats and more.
  • Impact Report. Aids in prioritizing mitigation and resource allocation by quantifying the impact a successful data breach could inflict on the Entity.

And, in short order, the Covered Entity receives three deliverables:

An Actionable Roadmap

The Roadmap takes into account the cost-effectiveness and workflow issues facing the Covered Entity and makes concrete recommendations to improve its cyber health. It considers hardware, software, policies, procedures, training, network connections and much more.

A CyberScore®

Just like a FICO® score, Assured’s CyberScore® provides a 3-digit representation of the cyber health of the Covered Entity. The CyberScore® serves as a benchmark of where a Covered Entity stands today and informs the Roadmap of what can be done to improve. Then, by refreshing the CyberScore® every six months, just as the NY Regulations call for, the board of directors and senior management can measure improvements and make additional, informed decisions armed with accurate information. The CyberScore® is backed by actuarial data and is based on the thousands of data points evaluated by TripleHelixSM.

The Covered Entity’s own Regulatory Compliance Dossier

This Dossier is populated with virtually all of the regulatory, compliance and best-practice reports a Covered Entity may need. PCI, HIPAA, SOX, GLBA, SEC, FFIEC, NCUA, ISO 27001/01, COBIT 5 and many more are available and will all be delivered in proper form, with certifications, as part of the deliverables. Naturally, the NY Regulations are already incorporated into TripleHelixSM. This is truly one-stop shopping, which reduces the impact on the Covered Entity’s workflow and provides consistency and accuracy in evaluation.

Assured has already built the precise tools, TripleHelixSM and AssuredScanDKV®, to fully satisfy the NY Regulations and much more. The company was founded by top-notch cybersecurity engineers with extensive experience within the US DoD and Intelligence Community, supported by a team of professional leaders that is nothing short of first rate.

If you want the best, most comprehensive solution, not only for the NY Regulations but for your broader cybersecurity needs, consider Assured Enterprises’ solutions. For more information, contact us or schedule a demo today.







Yes, You Should Probably Have A TLS Certificate

This entry was posted in General Security, WordPress Security on September 18, 2018 by Mikey Veenstra.

Last week’s article covering the decision to distrust Symantec-issued TLS certificates generated a great response from our readers. One common question we received, and one that pops up just about any time SSL/TLS comes up, is how to determine when a site does and does not need such a certificate. Spoiler: Your site should probably have a TLS certificate.

A subject of some discussion in the web community surrounds the use of TLS certificates and the implementation of HTTPS that these certificates allow. While their use is critical on sites where sensitive data from visitors may be involved, like payment data or other personally identifiable information (PII), the debate concerns the use of HTTPS in cases where users aren’t providing sensitive input. In today’s post, we’ll take a practical look at the difference between HTTP and HTTPS traffic, and discuss the benefits of being issued a certificate regardless of the way users interact with your site.

What’s TLS? Is It Different From SSL?

Before we really dig in, let’s clear up some terminology for anyone who might be unfamiliar.

HTTPS (short for Hypertext Transfer Protocol Secure) allows for the secure transmission of data, especially in the case of traffic to and from websites on the internet. The security afforded by HTTPS comes from the implementation of two concepts: encryption and authentication. Encryption is a well-known concept, referring to the use of cryptography to communicate data in a way that only the intended recipient can read. Authentication can mean different things based on context, but in terms of HTTPS it means verification is performed to ensure the server you’re connecting to is the one the domain’s owner intended you to reach. The authentication portion of the transaction relies on a number of trusted sources, called Certificate Authorities (CA for short). When a certificate is requested for a domain name, the issuing CA is responsible for validating the requestor’s ownership of that domain. The combination of validation and encryption provides the site’s visitors with assurance that their traffic is privately reaching its intended destination, not being intercepted midway and inspected or altered.

TLS, or Transport Layer Security, is the open standard used across the internet to facilitate HTTPS communications. It’s the successor to SSL, or Secure Sockets Layer, although the name “SSL” has notoriously picked up common usage as an interchangeable term for TLS despite it being a deprecated technology. In general when someone brings up SSL certificates, outside of the off chance they’re literally referring to the older standard, they’re probably talking about TLS. It’s a seemingly minor distinction, but it’s one we hope will gain stronger adoption in the future.

I Shouldn’t Use TLS Unless I Really Need To, Right?

There’s no shortage of conflicting advice across the web regarding when to implement TLS and when to leave a site insecure, so it’s no surprise that a lot of strong opinions develop on both sides of the issue. Outside of cut-and-dry cases like PCI compliance, where payment transactions need to be secure to avoid a policy violation, you’ll find plenty of arguments suggesting cases where the use of TLS is unnecessary or even harmful to a website. Common arguments against the wide use of TLS tend to fall into two general categories: implementation and performance.

Concerns about implementation difficulties with TLS, like the cost of purchasing a certificate, difficulty in setting up proper HTTPS redirects, and compatibility in general are common, but are entirely manageable. In fact, TLS has never been more accessible. Let’s Encrypt, a free certificate issuer which launched in early 2016, has issued just under two-thirds of the active TLS certificates on the internet at the time of this writing. Following the flood of free certificates into the marketplace, many popular web hosting companies have begun allowing Let’s Encrypt certificates to be installed on their hosted sites, or are at least including their own certificates for free with their hosting. After all, site owners are more security-conscious now than ever, and many will happily leave a host if TLS is a cost-prohibitive endeavor.

Other pain points in the implementation of HTTPS, like compatibility with a site’s existing application stack, are no different than the pain points you’d see following other security best practices. Put simply, avoiding the use of HTTPS because your site will break is the same as avoiding security updates because your site will break. It’s understandable that you might delay it for a period of time so you can fix the underlying issue, but you still need to fix that issue.

The other arguments against widespread TLS are those of performance concerns. There’s certainly overhead in play, considering the initial key exchange and the processing necessary to encrypt and decrypt traffic on the fly. However, the efficiency of any system is going to depend heavily on implementation. In the case of most sites, the differences in performance are going to be negligible. For the rest, there’s a wealth of information available on how to fine-tune an environment to perform optimally under TLS. As a starting point, I recommend visiting Is TLS Fast Yet? to learn more about the particulars of this overhead and how best to mitigate it.

My Site Doesn’t Take Payments, So Why Bother?

Each debate ultimately hinges on whether the site owner sees value in HTTPS in the first place. A lot of the uncertainty in this regard can be traced to unfamiliarity with the data stored in HTTP requests, as well as the route that these requests travel to reach their destination. To illustrate this, let’s take a look at the contents of a typical WordPress login request.
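The original post illustrated this with a screenshot of a captured request. As a stand-in, here is a minimal sketch of what such a plain-HTTP WordPress login request looks like on the wire; the hostname, credentials, cookie and header values are all hypothetical:

```python
# A sketch of the plaintext HTTP POST an eavesdropper on the network path
# could read verbatim. All values (host, credentials, cookie) are hypothetical.
login_request = (
    "POST /wp-login.php HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Firefox/62.0\r\n"
    "Referer: http://example.com/wp-login.php\r\n"
    "Cookie: wordpress_test_cookie=WP+Cookie+check\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 38\r\n"
    "\r\n"
    "log=admin&pwd=hunter2&wp-submit=Log+In"  # credentials travel in the clear
)
print(login_request)
```

Every line of that message, including the POST body carrying the username and password, crosses the network unencrypted.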

The request contains a number of interesting pieces of information:

  • The full URL of the destination, including domain and file path
  • User-Agent details, which describe my browser and operating system
  • My referer, which reveals the page I visited prior to this one
  • Any cookies my browser has stored for this site
  • The POST body, which contains the username and password I’m attempting to log in with

The implications of this request falling into the wrong hands should be immediately recognizable in the fact that my username and password are plainly visible. Anyone intercepting this traffic can now establish administrative access to my site.

Contrast this with the same request submitted via HTTPS. In an HTTPS request, the only notable information left unencrypted is the destination hostname, to allow the request to get where it needs to go. As far as any third party is concerned, I’m sending this request instead:
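The original post showed a second capture here. Sketched as data rather than a screenshot, roughly all an on-path observer can glean from the HTTPS version of the same request is connection metadata (the values below are hypothetical):

```python
# What a network eavesdropper sees for the same login submitted over HTTPS.
# Only connection metadata is exposed; the values here are hypothetical.
visible_to_observer = {
    "destination_ip": "203.0.113.10",
    "destination_port": 443,
    "sni_hostname": "example.com",  # sent in the clear during the TLS handshake
    "approx_bytes": 517,            # traffic sizes and timing still leak
}
# The path (/wp-login.php), headers, cookies and POST body are all encrypted.
hidden_from_observer = ["URL path", "User-Agent", "Referer", "Cookies", "POST body"]
```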

Outside of examples as obvious as login security, the thing to keep in mind above all is the value of privacy. If a site’s owner hasn’t installed a TLS certificate, even though the site is purely informational and takes no user input, any traffic to that site can be inspected by the user’s ISP, or even the administrator of the network they’re connected to. This is notably problematic in certain cases, like when someone might be researching private medical or legal matters, but at the end of the day the content of a site is irrelevant. Granted, my hat probably contains a bit more tinfoil than most, but there’s no denying this is an era where browsing habits are tracked wherever possible. Real examples exist of ISPs injecting advertising into unencrypted traffic, and the world has a nonzero number of governments happy to inspect whatever traffic they can get their hands on. Using HTTPS by default shows your site’s users that their privacy is important to you, regardless of whether your site contains anything you might consider private.


The internet at large is rapidly adopting improved security standards, and the majority of web traffic is now being delivered via HTTPS. It’s more important than ever to make sure you’re providing your users with the assurance that their traffic is private, especially with HTTP pages being flagged as “Not Secure” by popular browsers. Secure-by-default is a great mindset to have, and while many of your users may never notice, the ones who do will appreciate it.

Interested in learning more about secure networking as it pertains to WordPress? Check out our in-depth lesson, Networking For WordPress Administrators. It’s totally free, you don’t even need to give us an email address for it. Just be sure to share the wealth and help spread the knowledge with your peers, either by sharing this post or giving them the breakdown yourself. As always, thanks for reading!











The malware is currently being distributed through the RIG exploit kit.

The RIG exploit kit, which at its peak infected an average of 27,000 machines per day, has been grafted with a new tool designed to hijack browsing sessions. The malware in question, a rootkit called CEIDPageLock, has been distributed through the exploit kit in recent weeks.

According to researchers from Check Point, the rootkit was first discovered in the wild several months ago.

CEIDPageLock was detected when it attempted to tamper with a victim’s browser. The malware was attempting to replace the victim’s homepage with a legitimate Chinese directory for weather forecasts, TV listings, and more.

The researchers say that CEIDPageLock is sophisticated for a browser hijacker, and its new incarnation as a bolt-on for RIG has received “noticeable” improvements.

Among the new additions is functionality that permits user browsing activity to be monitored, alongside the power to replace a number of websites with fake home pages.

The malware targets Microsoft Windows systems. The dropper extracts a 32-bit kernel-mode driver, which is saved in the Windows temporary directory with the name “houzi.sys.” While the driver is signed, its certificate has since been revoked by the issuer.

When the driver executes, hidden amongst standard drivers during setup, the dropper sends the victim PC’s MAC address and user ID to a malicious domain serving as a command-and-control (C&C) server. This information is then used when the victim begins browsing to download the desired malicious homepage configuration.

If victims are redirected from legitimate services to fraudulent ones, threat actors can obtain account credentials, deliver malicious payloads and gather data without consent.


“They then either use the information themselves to target their ad campaigns or sell it to other companies that use the data to focus their marketing content,” the team says.

The latest version of the rootkit is also packed with VMProtect, which Check Point says makes analysis of the malware more difficult. In addition, the malware prevents browsers from accessing antivirus solutions’ files.

CEIDPageLock appears to focus on Chinese victims. Infection rates number in the thousands for the country, and while Check Point has recorded 40 infections in the United States, the spread of the malware is considered “negligible” outside of China.

“At first glance, writing a rootkit that functions as a browser hijacker and employing sophisticated protections such as VMProtect, might seem like overkill,” Check Point says. “CEIDPageLock might seem merely bothersome and hardly dangerous, [but] the ability to execute code on an infected device while operating from the kernel, coupled with the persistence of the malware, makes it a potentially perfect backdoor.”

According to Trend Micro, exploit kits are still making inroads in the security landscape. RIG remains the most active, followed by GrandSoft and Magnitude.









A Google engineer found that he was able to hack the supposedly secure doors at the search giant’s Sunnyvale offices. He was able to unlock doors without the RFID key, and even lock out employees who did have their key.


Forbes reports that David Tomaschik found what turned out to be a completely inexcusable vulnerability in the Software House devices used to secure the site.

Last summer, when Tomaschik looked at the encrypted messages the Software House devices (called iStar Ultra and IP-ACM) were sending across the Google network, he discovered they were non-random; encrypted messages should always look random if they’re properly protected.
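That randomness check can be approximated with a simple entropy measurement. The sketch below is not Tomaschik's actual tooling, just an illustration of why repetitive, improperly encrypted messages stand out from properly encrypted ones:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; well-encrypted data approaches 8.0."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Structured, repetitive traffic (a stand-in for the flawed door messages)
# scores far below the ~8.0 bits/byte expected of properly encrypted output.
repetitive = shannon_entropy(b"UNLOCK DOOR 42 " * 64)
random_like = shannon_entropy(os.urandom(4096))
```

A low score on supposedly encrypted traffic is exactly the kind of red flag that prompted the deeper investigation described next.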

He was intrigued and, digging deeper, discovered a “hardcoded” encryption key was used by all Software House devices. That meant he could effectively replicate the key and forge commands, such as those asking a door to unlock. Or he could simply replay legitimate unlocking commands, which had much the same effect […] And he could prevent legitimate Google employees from opening doors.

Worse, the hack left no trace in the security logs, so there would be no evidence of whether or not the exploit had ever been used.

The same Software House tech is widely used by other companies, meaning that any number of businesses could be left vulnerable.

Google has been forced to segment its network to prevent exploitation of the flaw, and while Software House has now come up with a solution, that will require new hardware. Software House said only that ‘this issue was addressed with our customers.’

