Posted in Cybersecurity

U.S. Financial Regulators Propose Sweeping New Cybersecurity Regulations

Written by Sydney White

The U.S. Board of Governors of the Federal Reserve System, the U.S. Office of the Comptroller of the Currency (OCC), and the U.S. Federal Deposit Insurance Corporation (the “Agencies”) released an Advance Notice of Proposed Rulemaking (“ANPR”) on October 20, 2016, requesting comments by January 17, 2017, on enhanced cybersecurity risk management rules for the financial sector, particularly companies that are interconnected with other industries.

The ANPR proposes to apply new and enhanced cybersecurity standards to a broad swath of financial services companies and service providers. The proposal is targeted at U.S. financial sector companies with $50 billion or more in assets and at the U.S. operations of foreign banking organizations whose total U.S. assets are $50 billion or more. The ANPR is also targeted at companies whose interconnectedness could result in systemic risk to the financial sector or risk of cybersecurity exposure to external stakeholders. These larger companies would be subject to stringent “sector-critical standards.” In addition, the ANPR would apply not only to large banks but also to regional banks, credit card companies offering checking or savings accounts, large insurers, transaction clearinghouses and non-bank financial companies (referred to as “covered entities”), and indirectly to third-party vendors and other service providers.

The ANPR deviates from the voluntary and flexible nature of the National Institute of Standards and Technology (NIST) Cybersecurity Framework (“Cybersecurity Framework”), developed as required under Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” issued in February 2013 (“Cybersecurity EO”), and the bipartisan Cybersecurity Enhancement Act of 2014, P.L. 113-274. It ignores the Cybersecurity Framework’s explicit policy of allowing companies to adopt security practices appropriate to their own circumstances.

The ANPR also seems to ignore another fundamental goal of the Cybersecurity EO and the Cybersecurity Framework: eliminating conflicting and duplicative cybersecurity regulations rather than creating more of them. The ANPR proposes to turn several financial agency guidelines, including the U.S. Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool, into mandatory standards. U.S. financial regulators have already come under fire for increasing the cybersecurity regulatory burden on the sector beyond what is required under the Cybersecurity Framework, and the ANPR goes even further. The ANPR also mandates many practices already followed by the financial sector (e.g., adoption of a cyber resilience and incident response program).

The Agencies plan to issue a formal proposal in the spring, on which stakeholders will have another opportunity to comment before a final rule is adopted. One wild card in the process is the election of Donald J. Trump as President, which may create an interesting dynamic as the proposal moves forward.

Posted in Asia Privacy Privacy and Data Security

CHINA: Significant changes to data and cybersecurity practices under PRC Cybersecurity Law

Written by Carolyn Bigg

After a third deliberation, the Chinese government passed the new PRC Cybersecurity Law on 7 November 2016. The new law will come into force on 1 June 2017 and has significant implications for the data privacy and cybersecurity practices of both Chinese companies and international organisations doing business in China.

The new PRC Cybersecurity Law intends to combat online fraud and protect China against Internet security risks. In short, it imposes new security and data protection obligations on “network operators”; puts restrictions on transfers of data outside China by “key information infrastructure operators”; and introduces new restrictions on critical network and cybersecurity products.

The new law has been widely reported in both the local and international press. While Chinese officials maintain that China is not closing the door on foreign companies with the introduction of this new law, there has been widespread international unease since the first reading. Commentators have expressed concern that competition will be stifled; about the handover of intellectual property, source code and security keys to the Chinese government; about perceived increased surveillance and controls over the Internet in China; and about the data localisation requirements. Other new obligations, including increased personal data protections, have been less controversial, but they are a clear indicator of the Chinese authorities’ increased focus on data protection and could signal a change to the data protection enforcement environment in China.

Some of the key provisions of the final law (which contains some changes to earlier drafts) include:

  • Chinese citizens’ personal information and “important data” gathered and produced by “key information infrastructure operators” (“KIIO”) during operations in China must be kept within the borders of the PRC. If it is “necessary” for the KIIO to transfer such data outside of China, a security assessment must be conducted pursuant to the measures jointly formulated by the National Cyberspace Administration and State Council, unless other PRC laws permit the overseas transfer. While the final version of the law provides some guidance as to the industry fields that will be ascribed greater protection, such as public communications and information services, energy, transportation, water conservancy, finance, public services and e-government, the definition of KIIO remains vague and could potentially be interpreted to cover a broader range of companies and industry sectors. “Personal information” is defined as including all kinds of information, recorded electronically or through other means, that taken alone or together with other information is sufficient to identify a natural person, including, but not limited to, full names, birth dates, identification numbers, personal biometric information, addresses, telephone numbers, and so forth. However, the types of information that might constitute “important data” are currently unclear. In any case, these data localisation rules are likely to create practical issues for international businesses operating in China.
  • A range of new obligations apply to organisations that are “network operators” (i.e., network owners, network administrators and network service providers). A “network” means any system comprising computers or other information terminals and related equipment for collection, storage, transmission, exchange and processing of information. Some commentators are suggesting that these broad definitions could catch any business that owns and operates IT networks/infrastructure or even just websites in China.
    • In terms of data protection, network operators must make publicly available data privacy notices (explicitly stating the purposes, means and scope of personal information to be collected and used) and obtain individuals’ consent when collecting, using and disclosing their personal information. Network operators must adopt technical measures to ensure the security of personal information against loss, destruction or leaks, and in the event of a data security breach must take immediate remedial action and promptly notify users and the relevant authorities. They must also comply with the principles of legality, propriety and necessity in their data handling, must not collect data excessively, must not provide an individual’s personal information to others without the individual’s consent, and must not illegally sell an individual’s personal data to others. The rules do not apply to truly anonymised data. There are also general obligations to keep user information confidential and to establish and maintain data protection systems. Data subjects are given rights to correction of their data, as well as a right to request deletion of data in the event of a data breach. While an earlier draft specifically protected the personal information of “citizens”, the final law does not make this distinction, and so seemingly offers broader protection to all personal information. These requirements formalise as binding legal obligations some data protection safeguards that were previously only perceived as best practice guidance in China.
    • As regards network security, network operators must fulfil certain tiered security obligations according to the requirements of the classified protection system for cybersecurity, which include (amongst other things): formulating internal security management systems and operating instructions; appointing dedicated cybersecurity personnel; taking technological measures to prevent computer viruses and other similar threats and attacks, and formulating plans to monitor and respond to network security incidents; retaining network logs for at least six months; undertaking prescribed data classification, backup, encryption and similar activities; complying with national and mandatory security standards; reporting incidents to users and the authorities; and establishing complaints systems.
    • Network operators must also provide technical support and assistance to state security bodies safeguarding national security and investigating crimes, and will be subject to government and public supervision. The form and extent of such co-operation is not currently clear, and international businesses have expressed concerns over the extent to which this may require them to disclose their IP, proprietary and confidential information to the Chinese authorities.
    • More general conditions on network operators carrying out business and service activities include: obeying all laws and regulations, mandatory and industry national standards, social mores and commercial ethics; being honest and credible; and bearing social responsibility. There are also requirements on network operators to block, delete and report to the authorities prohibited information and malicious programmes published or installed by users.
    • Network operators handling “network access and domain registration services” for users, including mobile phone and instant message service providers, are required to comply with “real identity” rules when signing up or providing service confirmation to users, or else may not provide the service.
  • Additional security safeguards apply to KIIOs, including: security background checks on key managers; staff training obligations; disaster recovery backups; emergency response planning; and annual inspections and assessments. Further, strict procurement procedures will apply to KIIOs buying network products and services.
  • Providers of “network products and services” must comply with national and mandatory standards; must ensure their products and services contain no malicious programs; must take remedial action against security issues and report them to users and the relevant authorities; and must provide security maintenance for their products and services, which cannot be terminated within the contract term agreed with customers. These new conditions will require providers of technology products and services to review and update their product and related maintenance offerings and, in particular, the contractual terms on which they are offered to customers.
  • Critical network equipment and specialised cybersecurity products must obtain government certification or meet prescribed safety inspection requirements before being sold or provided. This potentially catches a wide range of software, hardware and other technologies being sold – or proposed to be sold – by international companies in China, since the definitions used in the law are drafted very broadly. Further guidance, by way of a catalogue of key network products, is expected in due course. There are concerns that this may create barriers to international businesses looking to enter the Chinese market.
  • Each individual and organisation shall be responsible for its own use of websites, and may not set up websites or communication groups for the purpose of committing fraud, imparting criminal methods, producing or selling prohibited items, or engaging in other unlawful activities. Again, there is scope for this to be interpreted and applied broadly.
  • Institutions, organisations and individuals outside China that cause serious consequences by attacking, interfering with or destroying China’s key information infrastructure shall be responsible for any damage, and the relevant public security department of the State Council may freeze assets and impose other sanctions against them. While these provisions would appear to have extra-territorial effect, and could be interpreted very broadly, it is unclear what sanctions could in practice be enforced against organisations without a presence in China.
  • Other new rules relate to: network/online protections for minors; the establishment of schemes for network security monitoring, early warning and breach notification to relevant authorities and the public, as well as rights for individuals and organisations to report conduct endangering network security; opening of public data resources; and prohibitions on hacking and supporting activities.

While criminal sanctions, administrative penalties and civil liabilities potentially await those who violate the new law (both organisations and, in some circumstances, individual employees and officers), significant uncertainty remains as to how the new legislation will be enforced, who exactly is caught by the various new rules, and the precise steps that organisations must take to comply with them. It is hoped that the Chinese authorities will publish more detailed, practical guidance in the coming months. In the meantime, organisations are strongly advised to review their data privacy and cybersecurity practices in China to ensure compliance with the new law before it comes into force on 1 June 2017, and to keep these under review as further guidance becomes available.

Posted in EU Data Protection

FRENCH LAW FOR A DIGITAL REPUBLIC ADOPTED – Part III: Significant Changes are in Store for Online Platforms, Telecom Operators and Online Communication Providers

Written by Carol A.F. Umhoefer and Caroline Chancé

As reported earlier here and here, France’s Law for a Digital Republic (“Law”) introduces important amendments to French data protection law. But once implementing decrees are adopted (expected later this year and in March 2017), the Law will also bring significant changes to online platform operators, telecom operators and online communication providers, as described below.

New consent requirement to ensure confidentiality of electronic correspondence

The Law amends the Postal and Electronic Communications Code to require telecom operators and online public communication service providers (defined as any person making available content, services or applications that constitute online communication to the public) to maintain the confidentiality of user correspondence, which includes the content of the message, the correspondents’ identity and, where applicable, the subject line and attachments. The automatic analysis of such correspondence for advertising, statistical or service improvement purposes is prohibited, except with the user’s express, specific and time-limited consent. The period of validity of such consent (which cannot be longer than one year) will be specified by an implementing decree expected by the end of 2016.

However, electronic correspondence can still be automatically analyzed without users’ express, specific and time-limited consent whenever the analysis is for purposes of displaying, sorting or dispatching messages, or detecting unsolicited content or computer malware.

Telecom operators and online public communication service providers will be required to inform their employees of the new confidentiality obligations.

New definition of “online platform operators”

The Law introduces in the French Consumer Code a new definition of online platform operators: Any individual or legal entity offering, on a professional basis, whether for free or for consideration, an online public communication service consisting of either (i) ranking or referencing content, goods or services offered or uploaded by third parties, by using computerized algorithms (e.g., online price comparison tools); or (ii) bringing together several parties (intermediation) for the sale of a good, the provision of a service or the exchange or sharing of content, a good or a service (i.e., marketplaces).

Enhanced transparency and fairness obligations vis-à-vis consumers

Under the Law, online platform operators are required to provide fair, clear and transparent information regarding (i) the general terms of use of any intermediation service, (ii) the referencing, ranking and dereferencing criteria for content, goods and services offered or uploaded, (iii) the existence of any contractual relationship, capitalistic relation or direct remuneration for the operator’s benefit that influences the classification or referencing of the content, goods or services offered or uploaded, (iv) any person acting as an advertiser and (v) when consumers are put in contact with professionals or non-professionals, the rights and obligations of each party under civil and tax laws. Implementing decrees are expected by March 2017 to specify these obligations.

In addition, online platform operators whose activity generates connections above a certain threshold (to be defined by implementing decree by March 2017) must establish and make available to consumers good practices guidelines aimed at strengthening the obligations of clarity, transparency and fairness mentioned above.

Marketplaces will be required to provide professionals with a space that allows them to comply with their own information obligations vis-à-vis consumers. The implementing decree specifying requirements for this space is expected in March 2017.

The regulator is empowered to conduct audits of platform operators’ business practices. The regulator will publish the results of these evaluations and a list of platform operators that do not comply with the information obligations.

Finally, websites that collect, moderate or disseminate consumer reviews or opinions will be required to provide a host of new information in a fair, clear and transparent manner regarding the conditions for publishing and processing these reviews or opinions. Here again, an implementing decree will specify the requirements for providing this information.

Posted in International Privacy Privacy and Data Security

CASL made clearer: CRTC releases its first compliance and enforcement decision under Canada’s Anti-Spam Law

Written by Kelly Friedman, Tamara Hunter and Jim Halpert

The Canadian Radio-Television and Telecommunications Commission (CRTC) has issued its first Compliance and Enforcement Decision for violation of Canada’s anti-spam legislation (CASL).

Until now, CRTC CASL enforcement actions have taken the form of settlements reached in confidential negotiations between the Enforcement Branch and the company. But this decision, released on October 26, 2016, is significant because it is the first CASL enforcement decision to provide guidance on compliance.

The decision contains several important lessons about regulation of commercial electronic messages (CEMs) in Canada before class action enforcement opens on July 1, 2017.

Background to CASL and CEMs

CASL came into force on July 1, 2014 and created a comprehensive regime of offences, enforcement mechanisms and penalties prohibiting the sending of unsolicited or misleading CEMs. A CEM is defined broadly: it means any electronic message (not just an email) one of whose purposes is to encourage participation in commercial activity. CASL has rigid requirements regarding CEMs: the recipient must have provided consent (express or implied) to receive the CEM, and the CEM must comply with specific formalities set out in the legislation, including disclosure of contact information and an effective unsubscribe mechanism. Violation of CASL’s CEM rules can result in an administrative monetary penalty (AMP) of up to CA$10 million per violation, civil liability (through a private right of action that comes into force on July 1, 2017), and vicarious liability for employers, directors and officers who are unable to prove that they exercised due diligence to prevent the violation.

The CRTC’s decision

In its decision, the CRTC found that Blackstone Learning Corp., an Ontario company providing educational and training programs and services, committed nine violations of CASL in the summer of 2014 by sending 385,668 CEMs in nine messaging campaigns to employee electronic addresses at twenty-five government organizations without the consent of the recipients. An AMP of CA$50,000 was imposed against Blackstone.

The CEMs at issue came to the attention of the CRTC because they triggered approximately sixty complaints to its Spam Reporting Centre. An investigation was launched, and on January 30, 2015, the company was issued a notice of violation under CASL. The notice of violation set out an AMP of CA$640,000.[1]

Under CASL, when a company refuses to pay the AMP in the notice of violation and challenges the decision, the CRTC must decide, on a balance of probabilities, whether the company committed the violations alleged in the notice of violation and, if so, whether to impose the AMP set out in the notice of violation. The CRTC has the power to vary the amount of the AMP, suspend it subject to conditions, or waive it altogether.

Blackstone exercised its right to challenge the notice of violation issued against it. Blackstone’s primary arguments were that it had implied consent to send the CEMs and that the amount of the AMP was unreasonably high.

Blackstone conceded that it had sent the CEMs in question, but argued that it had obtained implied consent to send them because the email addresses to which it sent the CEMs were publicly available; that is, Blackstone tried to rely on what has come to be known as the “conspicuous publication exemption” in CASL.[2] The CRTC rejected this position. The CRTC stated that, if the recipient had no prior or existing business or non-business relationship with the company, implied consent to send unsolicited messages to conspicuously published contact information could arise only with respect to messages relevant to that person’s employment role. The CRTC stated that “the Act does not provide persons sending commercial electronic messages with a broad licence to contact any electronic address they find online; rather, it provides for circumstances in which consent can be implied by such publication, to be evaluated on a case-by-case basis”.[3]

In the result, as Blackstone was unable to provide proof that it had obtained express or implied consent to send the CEMs, the CRTC determined, on a balance of probabilities, that the company committed the nine violations of CASL as set out in the original notice of violation.

In assessing the reasonableness of the CA$640,000 AMP set out in the notice of violation, the CRTC considered, among other factors, the company’s ability to pay the penalty. Blackstone’s ability to pay could not be assessed initially because the company did not provide financial information as required by the notice to produce; however, along with its submissions, the company did submit unaudited financial statements for the preceding two years. The CRTC accepted that Blackstone is a small business and that the AMP would represent several years’ worth of the company’s revenues.

Blackstone’s lack of cooperation, demonstrated by its refusal to respond to the notice to produce documents, was taken into account. However, the CRTC noted that Blackstone had made some attempt to self-correct, because the company had made enquiries to the Department of Industry before CASL came into force and to the investigator in response to the notice to produce. Further, the decision noted that, in the summer of 2014, Blackstone did not have the benefit of more recent guidance published on the topic of implied consent to send CEMs[4] and that this may have contributed to the company’s erroneous belief that it had adequately obtained consent. The CRTC also noted that the company had no history of violations under CASL or other related statutes.

Taking these and other factors into consideration, the CRTC concluded that a much lower AMP of CA$50,000 was proportionate to the circumstances of the case and reasonable to promote Blackstone’s compliance with the Act.

Important CASL lessons

This decision has five key takeaways:

  1. It can be extremely valuable for a company to exercise its right to contest an AMP imposed by notice of violation. In Blackstone’s case, the CRTC reduced Blackstone’s AMP from CA$640,000 to CA$50,000.
  2. The messages need not mention any specific commercial terms to qualify as CEMs under CASL. In Blackstone’s case, the CRTC found that the nature of the language used in the emails, including references to discounts and group rates, conveyed that services were available for purchase from Blackstone.
  3. For the purpose of relying on the conspicuous publication exemption, the public availability of an email address is insufficient. To properly rely on the exemption, there must be conspicuous publication of the email address by or on behalf of the recipient, no accompanying statement refusing CEMs, and the messages in question must be relevant to the recipient’s business, role, functions or duties.
  4. General assertions of implied consent will never suffice. Implied consent must be supported with evidence as to where and when the company discovered the recipients’ addresses and how the company determined that the messages it was sending were relevant to the roles or functions of the recipients.
  5. Factors considered when determining the appropriate amount of the AMP include the following:
  • The object of the penalty is to promote changes in behaviour, not to destroy the business. The CRTC concluded that Blackstone’s limited ability to pay suggested a lower penalty than the one set out in the notice of violation.
  • Unaudited financial statements can be admitted as evidence of ability to pay an AMP.
  • The number of complaints received about the CEMs will be relevant to enforcement decisions. In Blackstone’s case, the 60 or more unique submissions to the Spam Reporting Centre, as well as correspondence with some of the complainants, reflected that the messages were unwelcome and caused nuisance and frustration.
  • The time period over which the CEMs were sent is relevant. In Blackstone’s case, the two-month duration of the violations was considered relatively short and suggested a lower penalty than set out in the notice of violation.
  • The extent of cooperation during an investigation will affect the amount of the AMP. The CRTC stated that Blackstone’s failure to cooperate with the investigation increased the need for a penalty.
  • Demonstrating a good faith intention to comply with CASL and self-correct in the event of a violation will suggest a lower AMP.

Find out more about addressing CASL issues, including CASL compliance audits and disputes with regulators, by contacting any of the authors.


[1] Blackstone received a notice to produce documents on November 7, 2014. Blackstone requested a review of the notice on December 4, after the deadline for production had passed. The CRTC denied this request on January 22, 2015, and the notice of violation was issued the following week.

[2] Section 10(9)(b) of CASL provides that consent can be implied when publication of contact information is conspicuous, the publication is not accompanied by a statement that the person does not want to receive unsolicited CEMs, and the CEM is relevant to the recipient’s business, role, functions or duties in a business or official capacity.

[3] Para. 28.

[4] Such as the CRTC’s Guidance on Implied Consent, published on September 4, 2015, available here and discussed here.

Posted in EU Data Protection Privacy and Data Security Uncategorized

Managing third parties under the Privacy Shield needs care

Written by Rena Mears, Ryan Sulkin, Eric Roth and Jim Halpert

Controllers need to negotiate contract terms with third-party controllers and processors that are consistent with the controller’s obligations under the Shield.

The Privacy Shield’s heightened infrastructure, regulatory, and documentation requirements present participating companies with new compliance requirements when transferring EU personal data to data controllers or data processors in the US. These conditions come at a time when many sectors use increasingly complex vendor “ecosystems” that process personal data and the US Federal Trade Commission (FTC) is signaling that companies have greater oversight obligations over vendors that process personal data.

Traditional supply chain models envisioned a limited number of third-party entities, each providing well-documented services directly to a data controller, resulting in an end product that was ultimately unaltered from its original design. However, today’s technology-driven ecosystems have made the traditional supply chain methodology virtually obsolete. With the explosive increase in data-driven services, all certifying companies, both small and large, must now understand how the Privacy Shield will affect their existing vendor management processes.

An effective vendor risk management program integrates strong contract provisions alongside a comprehensive operational vendor risk assessment methodology. This article shows how to adapt such a methodology to entities that certify under the Privacy Shield.

DRAFTING AND NEGOTIATING CONTRACTS WITH THIRD PARTIES

The EU-US Privacy Shield requires first party controllers who certify under the program to enter into a contract with third-party controllers (including affiliates within the same corporate group) that provides that:

  • Privacy Shield personal data may be processed only for limited and specified purposes consistent with the notice provided to the individual (and the consent obtained if consent is obtained); and
  • The recipient will provide the same level of protection as the EU-US Privacy Shield Principles for the Privacy Shield personal data, will notify the controller if it determines that it can no longer meet this obligation and, if so, will cease processing or take other reasonable and appropriate remedial steps.

To transfer Privacy Shield personal data to a third party acting as a processor, organizations must:

  • Transfer such data only for limited and specified purposes;
  • Ascertain that the agent is obligated to provide at least the same level of privacy protection as is required by the Principles;
  • Take reasonable and appropriate steps to ensure that the agent effectively processes the personal information transferred in a manner consistent with the organization’s obligations under the Principles;
  • Upon receipt of notice from the processor that it can no longer meet Privacy Shield requirements, take reasonable and appropriate steps to stop and remediate unauthorized processing; and
  • Provide a summary or a representative copy of the relevant privacy provisions of its contract with that processor to the Department of Commerce upon request.

Accordingly, when negotiating contracts with third party controllers and processors, first party data controllers should consider the following (note: beyond negotiating contracts that address the bulleted points above, the steps described below are not specifically required by the Privacy Shield):

  1. Conduct a reasonable amount of due diligence on the third party controller, to confirm that the third party controller is capable of providing the level of security required by the EU-US Privacy Shield — namely “reasonable and appropriate measures to protect it from loss, misuse and unauthorized access, disclosure, alteration and destruction, taking into due account the risks involved in the processing and the nature of the personal data.” While the EU-US Privacy Shield does not specify the exact type or amount of due diligence required, sound due diligence may include requiring that the vendor complete a security assessment questionnaire, requesting supporting documentation and, if warranted, performing follow-up interviews and on-site information gathering tailored to the risks posed by the circumstances at hand.
  2. Determine and notify the third party of the purposes for which the vendor is permitted to process the data pursuant to notices provided and consents obtained by the first party data controller.
  3. Negotiate contract provisions consistent with the controller’s obligations under the EU-US Privacy Shield (as described in the bullet points above in this Section) and leverage contract language, to the greatest extent possible, that uses the same words found in the EU-US Privacy Shield (again, as stated in the bulleted points above in this Section). Further, the third party should be prohibited from engaging any downstream processors or controllers without the first party controller’s prior consent and any first party controller-permitted additional controllers or processors that are engaged by the third party must be bound to terms that are consistent with the EU-US Privacy Shield and the agreement between the first party controller and the third party. The third party should be responsible to the first party controller for any downstream controller or processor failures.
  4. Seek contractual terms that:

(i) list specific security protocols with which the third party controller must comply. By way of example, request comprehensive information security policies, robust encryption at all times, strong password policies, limited user access and strong access controls, vulnerability testing and remediation, logging and monitoring, independent third party assessments based on appropriate criteria and compliance with established standards issued by organizations such as NIST or ISO;

(ii) provide audit rights allowing the controller periodically to review and assess third party controller compliance. The audit rights should include a review of the specific physical, technical, administrative and organizational security measures in place, both at the document/policy level and at the operational level (e.g., visits to data centers and the like);

(iii) create indemnification rights in favor of the controller in the event that the third party controller violates its commitments under the agreement; and

(iv) create meaningful liability exposure for the third party controller in the event that it violates the EU-US Privacy Shield-related provisions of its agreement with the controller.

In negotiating agreements with third party vendors, certifying organizations should be mindful of the dynamics of the negotiation. Third party vendors will each have their own position on what constitutes appropriate security and the appropriate amount of liability for the vendor to absorb in the event of a data breach or contract violation. Controllers should formulate a minimum bar for security, applied to all third party vendors, grounded in defensible industry best practices in light of the sensitivity of the personal data at issue. Some third party vendors will not have all of the security measures sought by the controller, but the minimum bar set by the controller should always be in place. Further, if a vendor seeks to limit its liability vis-a-vis the controller, the controller should weigh the request against the commercial need for the contract being negotiated; the risk and magnitude of the potential loss to the controller in the event of a breach of that contract by the vendor (consider fines, penalties, regulatory scrutiny, reputational harm, costs and expenses); the sensitivity of the data being placed with the vendor; and how the contract at issue fits into the overall portfolio of risk the controller carries across its other third party controllers and processors.

RISK MANAGEMENT EXTENDS BEYOND CONTRACT LANGUAGE

Companies with under-developed privacy and security programs typically rely upon contractual agreements in lieu of conducting security and privacy assessments of third parties. However, in the case of EU data that would give rise to a data breach notification obligation (bearing in mind that the GDPR will create those obligations across Europe, starting in May 2018), relying solely on contract provisions is wishful thinking when it comes to safeguarding confidential data.[1] Under the new Privacy Shield framework, supplementing contract language with rational risk-based methodologies and operational controls is now important to safeguarding first party controller data that has been entrusted to others. While the text of the Privacy Shield does not explicitly call for an audit or assessment of third party vendors, the FTC has traditionally required some form of due diligence by the data controller over the vendor to confirm that contract language is actually being enforced. Therefore, organizations subject to FTC enforcement are expected to adopt operational practices governing third party data management that extend beyond policy and contract language.

OPERATIONAL METHODOLOGY FOR VENDOR RISK MANAGEMENT

Global businesses are quickly becoming modular enterprises that outsource core components of their products and infrastructure to specialized vendors. Relying on third party platforms for cloud-based solutions, user access management, and other multi-tenant back-end functions, while necessary to remain competitive and enable cost-effective growth, obscures an organization’s ability to understand and control sensitive data that flows through an expanding data ecosystem. This fragmentation requires an increased level of awareness and oversight by the data controller. Third party software is so deeply integrated into fundamental business processes that all organizations inevitably have direct and indirect relationships with third, fourth, and Nth party vendors, whether they are aware of these vendors or not.

Outsourcing functions and operations inherently increases the risk of regulatory violations. However, by conducting a detailed supply chain analysis and adopting a rational operational methodology, companies can better understand and mitigate the challenges of protecting sensitive or confidential information shared with external entities.

Step One – Evaluating Risk: Before approaching an assessment under the Privacy Shield framework, develop and communicate your organization’s approach to managing data risks created through outsourcing. Evaluate the risk that potential or current vendors pose to the data of your company. A partial list of factors to consider includes privacy implications, security standards, regulatory restrictions (both at home and abroad), and potential downstream processors.

Step Two – Identify Vendors and Services: Conduct a comprehensive vendor inventory to identify all relevant third parties and applications. This can be achieved by scheduling interviews with department managers to identify and document all in-scope services and systems. Be sure to include company affiliates in this process, as they may be considered separate legal entities under EU data protection law.

Step Three – Identify Trans-Atlantic Data Flows and Applicable Processes: Identify the individual processes or activities that require providing access to sensitive or confidential information. Then, identify the outsourcing relationships responsible for processing personal data under the scope of the Privacy Shield certification by focusing on data flows coming from the EU to the US. To effectively account for all data, create data flow diagrams to provide visibility into potential outside vendors that could have access to personal or confidential information. These data maps will assist your organization in identifying when to provide appropriate notice to the data subject. For example, the notice principle under Privacy Shield requires the data controller to notify the data subject before the personal data is first disclosed to a third party.[2]
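The EU-to-US scoping exercise in Step Three can be sketched as a simple inventory filter. The `DataFlow` schema, field names, and example vendors below are hypothetical illustrations for record-keeping, not anything prescribed by the Privacy Shield text:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One documented data flow in the vendor inventory (hypothetical schema)."""
    vendor: str
    application: str
    origin: str          # region the data originates from, e.g. "EU"
    destination: str     # region where processing occurs, e.g. "US"
    data_elements: tuple

def privacy_shield_in_scope(flow: DataFlow) -> bool:
    """A flow is in scope for Privacy Shield review when EU personal
    data is transferred to the US."""
    return flow.origin == "EU" and flow.destination == "US"

# Illustrative inventory entries (fictional vendors)
flows = [
    DataFlow("PayrollCo", "HR portal", "EU", "US", ("name", "salary")),
    DataFlow("AdTechCo", "analytics", "US", "US", ("device id",)),
]
in_scope = [f for f in flows if privacy_shield_in_scope(f)]
```

Keeping the inventory in a structured form like this makes it straightforward to regenerate the in-scope list as vendors and data flows change.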

Step Four – Establish Rational Criteria to Evaluate Risk: Develop criteria to benchmark a vendor’s privacy and security practices. Each company will have unique criteria tailored to its own business practices. Factors that may be considered include the vendor’s reputation, the level of access and system integration granted, the sensitivity of accessible data, the number of previous data breaches reported by the vendor, and the vendor’s existing privacy and security programs.

Step Five – Identify Methods to Evaluate Vendors: Create a logical scoring mechanism to evaluate a vendor’s infrastructure and business practices. Always baseline your assessment against authoritative literature. In this case, the Privacy Shield Principles would be the primary source of guidance, but organizations can also supplement with ISO or NIST standards.

Step Six – Determine Risk Threshold and Risk Ranking: Each organization has a different tolerance for risk depending on the nature of the products or services and the types of personal data it collects. Based on the criteria established above, categorize each vendor as a Low, Moderate or High risk.
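Steps Four through Six can be sketched as a weighted scoring model. The criteria, weights, and Low/Moderate/High thresholds below are purely illustrative assumptions; each organization would set its own based on its risk tolerance:

```python
def vendor_risk_score(criteria: dict) -> int:
    """Sum weighted criterion scores (each criterion scored 1-5 by the assessor)."""
    weights = {  # illustrative weights; not prescribed by any framework
        "data_sensitivity": 3,
        "access_level": 2,
        "breach_history": 2,
        "program_maturity": 1,
    }
    return sum(weights[k] * v for k, v in criteria.items())

def risk_rank(score: int) -> str:
    """Map a total score onto Low / Moderate / High bands (thresholds illustrative)."""
    if score >= 30:
        return "High"
    if score >= 18:
        return "Moderate"
    return "Low"

# Example: a hypothetical HR-data vendor with broad system access
score = vendor_risk_score({
    "data_sensitivity": 5,   # e.g. salary data, Social Security numbers
    "access_level": 4,
    "breach_history": 2,
    "program_maturity": 3,
})
rank = risk_rank(score)  # 3*5 + 2*4 + 2*2 + 1*3 = 30 -> "High"
```

The point of a mechanism like this is consistency: every vendor is scored against the same criteria, and the resulting ranking drives the depth of assessment in Step Seven.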

Step Seven – Conduct a Risk Assessment Based on Vendor Risk Level: When conducting a third party vendor risk assessment, companies may use a single method or a combination of methods to evaluate risk and assess compliance. These approaches include, but are not limited to: 1) self-assessment and reporting by the vendor, 2) leveraging existing internal company resources (such as Internal Audit personnel), or 3) reporting provided by external independent assessors.

The assessment should provide assurance to the company sharing the data that the receiving entities have adequate safeguards to ensure the data is processed only for limited and specified purposes consistent with the notice provided, and that they will provide the same level of protection as required by the EU-US Privacy Shield Principles. Additionally, companies should ensure that the data processor has a procedure in place to notify the data controller if it can no longer meet its obligation to provide the Privacy Shield level of protection.

Step Eight – Report and Track Assessment: Schedule a timeline for all vendor assessment activities and include deadlines for reports. Each stage of the process should be clearly documented and easily accessible for senior management.

This will help streamline the vendor risk management process for future engagements. Most importantly, detailed and organized documentation of the program and assessment process provides clear evidence of a serious, formal process in the event of an investigation by a regulatory body.

Step Nine – Build an Exception Process: Depending upon an organization’s operations, some or many vendors and applications may fall outside the scope of the Privacy Shield, so an exception process should be developed and documented. The purposes for each specific exception should be justified, documented, and reported to management. For example, the Privacy Shield carves out specific exceptions for journalistic, audit and investment banking activities.[3]

Step Ten – Work with Management to Build Remediation Activities: Establish clear reporting and communication channels within the company. This includes creating a process for reporting vendor risks or discrepancies to senior management or the board of directors and promotes transparency within the organization.

Step Eleven – Maintain Program and Conduct Periodic Reassessments: Assign designated personnel and resources to conduct ongoing monitoring of the program, and continue to evaluate the vendor risk management program on an annual or bi-annual basis. This role could be filled by a chief privacy officer or an employee within the privacy, compliance, or legal department, depending on the size of the organization.

USE CASES: APPLICATION OF METHODOLOGY

US Based Supplier with EU Presence:

As an example, consider a US Based Supplier (“Supplier”) that has subsidiary offices in the EU as well as a heavy marketing presence in the EU. Supplier will be subject to the jurisdiction of the GDPR come May 2018, and Supplier self-certifies under the Privacy Shield. Supplier must first identify all flows of EU data entering the United States.

Then Supplier must identify and inventory all third party vendors with access to EU customer or employee data and identify the applications and data elements incorporated within each data flow. Next, Supplier must ensure that all contracts with these vendors are updated to contain the provisions required for compliance under the Privacy Shield. This process may not be as straightforward as it sounds. Third party vendors typically push back on certain provisions regarding liability, stall and delay during the negotiation process, or, in the case of larger vendors with greater bargaining power, adopt a take-it-or-leave-it approach and refuse to amend any contract language altogether.

After the contracting phase, Supplier must implement an internal methodology to confirm that each vendor maintains adequate security standards for safeguarding EU data as stated in the contract. Depending upon the sensitivity of the data, this may include conducting or contracting for periodic on-site assessments of the vendor’s facilities.

Cloud-Based Global Software Company: Diving deeper into the Privacy Shield, consider the example of a cloud-based software company (“Cloud Company”) offering Human Resources (“HR”) software and services for employee time and expense management. The company maintains sensitive data elements such as employee name, social security number, salary and benefits information, and employee performance data.

Cloud Company manages and stores this data on behalf of other organizations from around the world. Assume Cloud Company has existing contract language in place for managing and restricting trans-border data flows, with certain exceptions built in for specific client needs. However, under the new Privacy Shield framework, to the extent that it is relying on the Privacy Shield to transfer EU personal data to the US, Cloud Company must reassess its internal processes and vendor agreements to comply with the Privacy Shield’s supplemental principles for HR data.[4] This means that if Cloud Company engages another data storage company in the United States (for scenarios such as data overload, disaster management, or data backup) that is not Privacy Shield certified, additional provisions will be required to safeguard EU data, such as de-identification or pseudonymization of all EU data. As a best practice, Cloud Company should conduct or contract for routine privacy and security assessments to ensure that the vendor’s operations align with the contract provisions and the Privacy Shield principles.

USING EXTERNAL PROFESSIONAL SERVICES

Determine whether your organization has adequate resources to manage the risks of outsourcing EU data; if not, consider engaging a third party to assist with the process. Engaging a qualified external legal and consulting practice can help an organization overcome shortcomings typically overlooked during an internal self-assessment. In addition, experienced professionals can provide step-by-step guidance during the process, yielding higher quality assessments and mitigating the potential risks of regulatory sanctions and reputational damage.

 

[1] See: www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/

[2] See 1.b Notice

[3] See 2. Journalistic Exceptions and 4. Performing Due Diligence and Conducting Audits

[4] See Principle 9. Human Resources Data; EU-US Privacy Shield: www.commerce.gov/page/eu-us-privacy-shield

Posted in Privacy and Data Security US Federal Law

FCC Adopts Broadband Privacy Rules

Written by Sydney White

Today the Federal Communications Commission (FCC) approved new privacy rules for mobile and fixed broadband ISPs by a vote of 3-2. The rules seek to harmonize the requirements for ISPs with current FCC CPNI rules that restrict usage of customer data by telecommunications carriers.

The rules are broader than FTC privacy standards. In particular, they expand the current categories of information considered to be sensitive to include routine web browsing and app usage data, as well as content of communications. The rules require customer opt-in consent prior to the use by ISPs of these new categories of sensitive information for advertising or marketing. This creates different requirements for ISPs than the regime that applies for the rest of the Internet ecosystem, where web browsing and app usage information is subject to implied consent or opt-out consent.

The FCC otherwise applies sensitive data categories that are very similar to the FTC sensitive data categories set out in the FTC’s  2012 Privacy Report. The FTC privacy framework defines sensitive to include health information, children’s information, precise geolocation information, financial account data and Social Security Numbers. The FTC has declined to include within the definition of sensitive, web browsing or app usage information that does not itself include health, children’s, geolocation and financial account information or Social Security Numbers.

The rules also require immediate and persistent notification to customers about the ISP’s information collection practices, its use and sharing of the information, and with whom the ISP shares the information.

The FCC also created a new requirement for Commission case-by-case approval of an ISP’s offers of financial incentives, such as discounts, in exchange for customers’ consent to the use and sharing of their customer information. This will likely chill some pricing and specialized service offerings.

Finally, the rules include new security requirements for customer information in the form of “guidelines” on reasonable data security practices. There is also a requirement for ISPs to notify customers of data breaches within 30 days after determination of a breach.

In sum, the rules provide disparate treatment for the same online data depending upon which entity is collecting and using it, and may be challenged in court. If the FTC follows the FCC’s lead as to categories of information that are sensitive and should require opt-in consent for use in marketing and advertising, it would produce a sea-change in the U.S. privacy framework and severely restrict Internet advertising.

The notice and opt-in requirements go into effect 6 months after publication in the Federal Register (although small providers will have 18 months to comply), the data security requirements go into effect 90 days after publication, and the data breach notifications go into effect 6 months after publication.
