The FinTech revolution is led by PSD2, with banks as a platform facing legal risks due to open APIs and new strong customer authentication obligations.
Written by Sydney White and Jim Halpert
This week the US House of Representatives passed a Congressional Review Act (CRA) resolution of disapproval of the US Federal Communications Commission (FCC) broadband privacy rules, which were approved by the FCC in a party-line vote at the end of the Obama Administration but have not yet taken effect. The Senate passed an identical resolution last week. President Trump has signaled that he will sign the resolution, which means that the CRA will prohibit the FCC from imposing “substantially similar” privacy regulations on broadband ISPs in the future.
The broadband privacy rules would have imposed an opt-in consent requirement for ISPs’ use of web browsing activity for marketing or advertising and a seven-day breach notification deadline. They would have applied only to ISPs and not to any other businesses in the Internet ecosystem. Interestingly, the rules were opposed not only by ISPs but also by a wide range of other Internet and advertising companies, and were criticized for imposing confusing, disparate regulation on select companies based upon siloed regulatory classifications. As a result of the CRA resolution, privacy regulation of broadband ISPs, which was set to become more demanding than regulation of other Internet companies, will again be similar.
Although current FCC Chairman Ajit Pai opposes the broadband privacy rules and could have chosen to forbear from enforcing them or to amend them through the rulemaking process, a future FCC could have reversed that decision. Congressional passage of the CRA resolution provides long-term certainty.
Although selective, more aggressive privacy requirements for ISPs are now foreclosed, the CRA essentially puts the US privacy framework back where it was before November 2016; it does not create a regulatory loophole for ISPs to sell customer information, as some advocates have charged. The FCC will continue to have authority over the privacy of telecom usage information, as well as authority to enforce against unreasonable broadband privacy and security practices.
In January, more than 15 ISPs announced that they would adhere to a voluntary set of privacy and data security principles that are consistent with the more flexible US Federal Trade Commission (FTC) framework, which applies to the rest of the Internet. The principles include specific policies on transparency, consumer choice, security and data breach notification.
- The transparency principle confirms that ISPs will continue to provide customers with comprehensive, accurate, and continuously available notice of collection, use, and sharing of customer information.
- Under the choice principle, ISPs will continue to give customers choice over use or disclosure of their data consistent with the FTC’s framework. Choice will vary depending upon the sensitivity of the information: sharing of sensitive information will require opt-in choice; non-sensitive information will require opt-out choice; and uses such as fraud prevention, product development, market research, network management and security, compliance with law, and marketing by the ISP will be subject to implied consent.
- Under the data security principle, ISPs will continue to protect customer information collected by the ISP using reasonable security measures, taking into account the nature of the ISP’s activities, the sensitivity of the data, the size of the ISP, and technical feasibility.
- The data breach principle provides that ISPs will continue to notify customers of data breaches where there is unauthorized acquisition of customers’ sensitive personal information.
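The tiered choice model in the principles above — opt-in for sensitive data, opt-out for non-sensitive data, and implied consent for certain operational uses — can be sketched as a simple lookup. This is an illustrative paraphrase of the principles, not an official taxonomy; the use-category strings are our own labels:

```python
# Consent tiers under the ISPs' voluntary privacy principles (paraphrased
# as an illustration; these category names are not an official taxonomy).
IMPLIED_CONSENT_USES = {
    "fraud prevention",
    "product development",
    "market research",
    "network management and security",
    "compliance with law",
    "marketing by the ISP",
}

def required_choice(use, sensitive):
    """Return the consent tier for a given use of customer data."""
    if use in IMPLIED_CONSENT_USES:
        return "implied consent"
    return "opt-in" if sensitive else "opt-out"

print(required_choice("third-party ad targeting", sensitive=True))   # opt-in
print(required_choice("third-party ad targeting", sensitive=False))  # opt-out
print(required_choice("fraud prevention", sensitive=False))          # implied consent
```

The point of the sketch is that the tier turns first on the use and only then on sensitivity, which mirrors how the choice principle is worded.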
State Attorneys General can enforce these ISP privacy and security commitments in addition to existing state privacy, data security, and data breach laws that have protected and will continue to protect consumers.
Both FCC and FTC privacy enforcement authority over ISPs could change if the FCC or Congress overturns the FCC’s reclassification under the Open Internet Order of broadband providers as common carriers under Title II of the Communications Act. Congressional or FCC action to overturn that order would restore FTC authority over ISP privacy and security practices. While both Chairman Pai and leaders of the Congressional committees with jurisdiction over the FCC and FTC are on record as supporting this change, this reversal of the underpinnings of broadband regulation is a longer term and more complicated policy objective.
One immediate effect of the CRA is likely to be legislative activity in several states to impose opt-in consent requirements at the state level. Already, legislators have added a written opt-in consent requirement for information collection by ISPs to Minnesota’s budget bill. The long term effect is likely to be to focus more attention on giving the US FTC clearer authority over privacy and security practices of businesses in many sectors in order to create clear and uniform privacy and security requirements across those sectors.
Written by Michelle Anderson and Anne Kierig
New York Attorney General Eric Schneiderman announced that his office received a record number (1,300) of data breach notices in 2016. In the press release, Attorney General Schneiderman also provided a list of recommendations for how organizations can help protect sensitive personal data—a list that could be used as a benchmark against which the Attorney General’s office could evaluate whether a company has implemented reasonable security measures.
Many of the recommendations overlap with those made by other regulators (e.g., minimizing data collection practices is in the FTC’s Start with Security: A Guide for Business), but they reiterate that the New York Attorney General considers both encryption and offering post-breach services like credit monitoring to be more of an expectation than an option. They also highlight the importance of having a written information security policy, rather than undocumented procedures. The Attorney General’s recommendations update recommendations made in a 2014 report titled Information Exposed: Historical Examination of Data Security in New York State and are as follows:
- Understand where your business stands by understanding the types of information your business has collected and needs, how long it is stored, and what steps have been taken to secure it. Data mapping would be a ready way to meet this recommendation.
- Identify and minimize data collection practices by collecting only the information that you need, storing it only for the minimum time necessary, and using data minimization tactics when possible (e.g., don’t store credit card expiration dates with credit card numbers).
- Create an information security plan that includes encryption and other technical standards, as well as training, awareness, and “detailed procedural steps in the event of data breaches.” This recommendation reiterates a recommendation from the 2014 report, in which the Attorney General said that effective technical safeguards include “[r]equir[ing] encryption of all stored sensitive personal information—including on databases, hard drives, laptops, and portable devices.”
- Implement an information security plan and conduct regular reviews to ensure that the plan aligns with ever-changing best practices.
- Take immediate action in the event of a breach by investigating immediately and thoroughly and notifying consumers, law enforcement, regulators, credit bureaus, and other businesses as required. This is the only new recommendation from the 2014 report, which mentioned the importance of immediate breach response in the context of implementing an information security plan. Now, however, it’s a separate recommendation.
- Offer mitigation products in the event of a breach, including credit monitoring.
The announcement also revealed that the number of data breaches reported to the New York Attorney General’s office in 2016 represented a 60% increase over prior years. It reported that the most common causes of data breaches were external (i.e., hacking) and employee negligence (consisting of inadvertent exposure of records, insider wrongdoing, and the loss of a device or media). These causes accounted for, respectively, 40% and 37% of the reported breaches, showing a rise in employee negligence as a breach source. The information exposed consisted overwhelmingly of Social Security numbers (46%) and financial account information (35%).
This announcement also comes on the heels of final cybersecurity rules for the financial sector from the New York Department of Financial Services (NYDFS). The NYDFS requirements went into effect on March 1, 2017, and are designed to keep both “nonpublic information” and “information systems” secure. More information about these requirements can be found in DLA Piper’s Cybersecurity Law Alert NYDFS announces final cybersecurity rules for financial services sector: key takeaways.
Written by Carol Umhoefer and Caroline Chancé
On March 15, 2017, the CNIL published a six-step methodology for companies that want to prepare for the changes that will apply from May 25, 2018 under the EU General Data Protection Regulation (“GDPR”).
The abolition under the GDPR of registrations and filings with data protection authorities will represent a fundamental shift in the data protection compliance framework in France, which has been heavily reliant on declarations to the CNIL and authorizations from the CNIL for certain types of personal data processing. In place of declarations, the CNIL underscores the importance of “accountability” and “transparency,” core principles that underlie the GDPR requirements. These principles necessitate taking privacy risk into account throughout the process of designing a new product or service (privacy by design and by default), implementing proper information governance, and adopting internal measures and tools to ensure optimal protection of data subjects.
In order to help organizations get ready for the GDPR, the CNIL has published the following six-step methodology:
Step 1: Appoint a data protection officer (“DPO”) to “pilot” the organization’s GDPR compliance program
Pursuant to Article 37 of the GDPR, appointing a DPO will be required if the organization is a public entity, if its core activities require the regular and systematic monitoring of data subjects on a large scale, or if those core activities consist of the processing of sensitive data on a large scale. The CNIL recommends appointing a DPO before the GDPR applies in May 2018.
Even when a DPO is not required, the CNIL strongly recommends appointing a person responsible for managing GDPR compliance in order to facilitate comprehension of and compliance with the GDPR, cooperation with authorities, and mitigation of litigation risks.
Step 1 will be considered completed once the organization has appointed a DPO and provided him/her with the human and financial resources needed to carry out his/her duties.
Step 2: Undertake data mapping to measure the impact of the GDPR on existing data processing
Pursuant to Article 30 of the GDPR, controllers and processors will be required to maintain a record of their processing activities. In order to measure the impact of the GDPR on existing data processing and to maintain such a record, the CNIL advises organizations to identify their data processing activities, the categories of personal data processed, the purposes of each processing activity, the persons who process the data (including data processors), and data flows, in particular data transfers outside the EU.
To adequately map data, the CNIL recommends asking:
- Who? (identity of the data controller, the persons in charge of the processing operations and the data processors)
- What? (categories of data processed, sensitive data)
- Why? (purposes of the processing)
- Where? (storage location, data transfers)
- Until when? (data retention period)
- How? (security measures in place)
Step 2 will be considered completed once the organization has identified the stakeholders for processing, established a list of all processing by purposes and categories of data processed, and identified the data processors, to whom and where the data is transferred, where the data is stored and for how long it is retained.
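For organizations building such an inventory, the six questions above map naturally onto a structured record per processing activity. The sketch below is illustrative only — the field names and example values are our own, not prescribed by Article 30 or the CNIL:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """Illustrative record of one processing activity (cf. GDPR Article 30)."""
    controller: str                                          # Who? (data controller)
    processors: list = field(default_factory=list)           # Who? (data processors)
    data_categories: list = field(default_factory=list)      # What? (incl. sensitive data)
    purposes: list = field(default_factory=list)             # Why?
    storage_locations: list = field(default_factory=list)    # Where? (incl. transfers)
    retention_period: str = ""                               # Until when?
    security_measures: list = field(default_factory=list)    # How?

    def has_non_eu_transfer(self, eu_locations):
        """Flag records whose data leaves the EU, for transfer-safeguard review."""
        return any(loc not in eu_locations for loc in self.storage_locations)

# Hypothetical example entry
record = ProcessingRecord(
    controller="Example SA",
    processors=["CloudHost Inc. (US)"],
    data_categories=["contact details", "billing data"],
    purposes=["customer relationship management"],
    storage_locations=["FR", "US"],
    retention_period="3 years after end of contract",
    security_measures=["encryption at rest", "role-based access"],
)
print(record.has_non_eu_transfer({"FR", "DE", "IE"}))  # True: US storage needs review
```

A register built from such records makes Step 3 easier, since high-risk items (sensitive data, non-EU transfers) can be filtered out programmatically for prioritization.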
Step 3: Based on the results of data mapping, identify key compliance actions and prioritize them depending on the risks to individuals
In order to prioritize the tasks to be performed, the CNIL recommends:
- Ensuring that only data strictly necessary for the purposes is collected and processed;
- Identifying the legal basis for the processing;
- Revising privacy notices to make them compliant with the GDPR;
- Ensuring that data processors know their new obligations and responsibilities and that data processing agreements contain the appropriate provisions in respect of security, confidentiality and protection of personal data;
- Deciding how data subjects will be able to exercise their rights;
- Verifying security measures in place.
In addition, the CNIL recommends particular caution when the organization processes data such as sensitive data, criminal records and data regarding minors, when the processing presents certain risks to data subjects (massive surveillance and profiling), or when data is transferred outside the EU.
Step 3 will be considered completed once the organization has implemented the first measures to protect data subjects and has identified high risk processing.
Step 4: Conduct a privacy impact assessment for any data processing that presents high privacy risks to data subjects due to the nature or scope of the processing operations
Conducting a privacy impact assessment (“PIA”) is essential to assess the impact of a processing on data subjects’ privacy and to demonstrate that the fundamental principles of the GDPR have been complied with.
The CNIL recommends conducting a PIA before collecting data and starting processing, and whenever processing is likely to present high privacy risks to data subjects. A PIA contains a description of the processing and its purposes, an assessment of the necessity and proportionality of the processing, an assessment of the risks to data subjects, and the measures contemplated to mitigate those risks and comply with the GDPR.
Step 4 will be considered completed once the organization has implemented measures to respond to the principal risks and threats to data subjects’ privacy.
Step 5: Implement internal procedures to ensure a high level of protection for personal data
According to the CNIL, implementing compliant internal procedures implies adopting a privacy by design approach, increasing awareness, facilitating information reporting within the organization, responding to data subject requests, and anticipating data breach incidents.
Step 5 will be considered completed once the organization has adopted good practices in respect of data protection and knows what to do and who to go to in case of incident.
Step 6: Document everything to be able to prove compliance with the GDPR
In order to be able to demonstrate compliance, the CNIL recommends that organizations retain documents regarding the processing of personal data, such as: records of processing activities, PIAs and documents regarding data transfers outside the EU; transparency documents such as privacy notices, consent forms, procedures for exercising data subject rights; and agreements defining the roles and responsibilities of each stakeholder, including data processing agreements, internal procedures in case of data breach, and proof of consent when the processing is based on the data subject’s consent.
Step 6 will be considered completed once the organization’s documentation shows that it complies with all the GDPR requirements.
The CNIL’s methodology includes several useful tools (template records, guidelines, template contract clauses, etc.) and will be supplemented over time to take into account the WP29’s guidelines and the CNIL’s responses to frequently asked questions.
Consumer Reports (CR) announced on March 6, 2017, that it is developing a new standard—The Digital Standard—for safeguarding consumers’ security and privacy. The eventual goal is for CR to use the Standard to evaluate and rate consumer products. By scoring products based on certain Standard criteria, CR aims to help consumers make informed purchasing decisions based on how products protect their privacy and security.
The Standard is currently divided into 35 general testing categories, each of which is (or will be) further divided into (i) test criteria, (ii) indicators of those criteria, and (iii) procedures for evaluating them. For example, under the “Data control” testing category, CR first asks whether a consumer can “see and control everything the company knows about” him or her. Indicators of consumer data control include, among other things, whether users can control the collection of their information, delete their information, and control how their information is used to target advertising. In order to evaluate whether a product gives consumers control over their data, the evaluator would investigate and analyze “publicly available documentation to determine what the company clearly discloses.”
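The Standard’s three-level structure — category, criteria, indicators, evaluation procedure — can be pictured as a nested record. The sketch below is our own illustration of that structure, loosely based on the “Data control” example; the exact wording is paraphrased, not quoted from the Standard:

```python
# Illustrative model of the Digital Standard's hierarchy: each testing
# category holds criteria; each criterion has indicators and a procedure.
data_control = {
    "category": "Data control",
    "criteria": [
        {
            "question": "Can consumers see and control everything "
                        "the company knows about them?",
            "indicators": [
                "users can control collection of their information",
                "users can delete their information",
                "users can control use of their information for ad targeting",
            ],
            "procedure": "review publicly available documentation "
                         "for clear disclosures",
        },
    ],
}

def count_indicators(category):
    """Total indicators across all criteria in a testing category."""
    return sum(len(c["indicators"]) for c in category["criteria"])

print(count_indicators(data_control))  # 3
```

Modeling the hierarchy this way makes clear that a product score would roll up from indicators, through criteria, to category-level ratings.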
Some of the criteria are in line with guidelines from other sources. For example, both the Standard and the FTC’s Start with Security guide discuss having passwords that are unique and complex. On other issues, however, some companies may find that the Standard stretches beyond existing guidance or market practices. For example, the “Ownership” testing category appears to touch on issues related to the First Sale Doctrine: it has as a testing criterion whether a consumer “own[s] every part” of the product, and the indicator of that criterion is that “[t]he company does not retain any control or ownership over the operation, use, inputs, or outputs of the product after it has been purchased by the consumer.”
Consumer Reports developed the standard in collaboration with a number of partners, primarily Disconnect, Ranking Digital Rights, and the Cyber Independent Testing Lab (CITL). It is currently a first draft, but CR and its collaborators welcome feedback and suggestions. To provide input, see the Contribute tab on the Standard’s website.
As we noted in our January blog post Swiss-US Privacy Shield Adopted, Aligns with EU-US Privacy Shield, the Department of Commerce will begin accepting self-certifications to the Swiss-US Privacy Shield on April 12, 2017.
In response to frequently asked questions, Commerce provides guidance on how to self-certify:
- Companies already certified under the EU-US Privacy Shield: Companies that have already self-certified to the EU-US Privacy Shield Framework can log in to their existing Privacy Shield accounts and click on “self-certify.” Companies will have to pay a separate annual fee, which will be similar in tier structure to the EU-US Privacy Shield fee structure. If a company is approved under both frameworks, the re-certification date will be one year from the date of the first of the two certifications.
- Companies not yet certified under the EU-US Privacy Shield: If a company is not yet certified under the EU-US Privacy Shield, then it will be able to select the “Self-Certify” link on the Privacy Shield website to certify for one or both of the frameworks.
Regardless of whether a company is certified under the EU-US Privacy Shield, any company applying for certification under the Swiss-US Privacy Shield framework will have to update its privacy notice to align with Privacy Shield requirements.
A €3+ million sanction was issued against a multinational company by the Italian Competition Authority for unlawfully conducted prize promotions.
My colleague Emil Odling, lead partner for IP and Technology in Stockholm, has written the piece below discussing a decision this week by the Swedish courts suspending the Swedish regulator’s decision that would have required Telia to stop certain practices on the basis that they infringe the net neutrality rules. Note that although the offers concerned are zero-rated, it appears that the PTA’s (now suspended) decision looked at traffic management more generally and did not consider zero-rating specifically.
On 8 March 2017, the Swedish Administrative Court of Appeal ruled to suspend the Swedish Post and Telecom Authority’s (“PTA”) decision prohibiting partially state-owned telecom and mobile network operator Telia Company AB (“Telia”) from distributing two services which, according to the PTA, constituted a breach of the so-called Open Internet Regulation.
Written by Roxanne Chow
No one wants to start a new supplier relationship by discussing what happens if a service contract terminates. Everyone’s busy getting the service off the ground, and no one wants to upset goodwill by talking about breaking up. However, lawyers specialize in “what ifs”, so this comment will look at how customers should tackle exit and termination assistance issues at the outset.
Before the provision of services begins, it’s difficult for a customer and supplier to know exactly how an exit will work, but it’s best practice to have agreed in the contract the high level principles as to what the termination assistance services will need to include in the future, for example:
- The process for developing an exit plan
- The date by which the supplier is expected to provide a draft exit plan (typically this is within weeks or months of contract signature)
- The escalation process if the parties cannot agree the draft exit plan, including timescales for resolution
- The process for finalisation of the draft exit plan when exit is actually triggered
- The content of the exit plan
- The preparatory activities to be carried out by the supplier during the term of the contract and prior to exit being triggered (such as maintaining the asset and IP registers, updating and maintaining the draft exit plan, supplier cooperation with the customer’s tendering process for a replacement supplier, etc.)
- The activities to be carried out by the supplier once exit is triggered (such as the supplier’s provision of its operational and personnel information related to the services, procuring the assignment or novation of any third party contracts, knowledge transfer by the supplier to the customer or a replacement supplier including job shadowing, service migration activities)
- The customer’s responsibilities which will enable the supplier to carry out termination assistance
- The management of termination assistance; for example, will there need to be a dedicated supplier team? Who will be the main contacts for each party during exit?
- After-care, such as the continued provision of information and assistance to the customer or replacement supplier after services have been migrated
- Setting out a process for agreeing the exit plan on exit and expediting termination assistance in an emergency situation
- The anticipated duration of the termination assistance activities, and what happens if termination assistance extends beyond the termination date of the contract
- How termination assistance activities will be charged
- Whether the charges for termination assistance activities are built into the charges for the services during the term, or whether there is a separate charge for termination assistance activities
- Whether there are any activities that can be carried out within a fixed fee
- Which activities will be payable on a time and materials basis
- Whether payment for termination assistance is linked to any milestones in the exit plan
If you have a rough outline for exit from the start, it will be easier for the parties to figure out what each needs to do during the term of the contract with respect to preparing for exit and activating the exit plan, and provide more certainty as to what costs and charges for termination assistance services may be incurred in future.
Written by Giangiacomo Olivi
Connected devices that exchange substantial volumes of data come with some obvious data protection concerns. Such concerns increase when dealing with artificial intelligence or other devices/robots that autonomously collect large amounts of information and learn through experience.
Although there are not (yet) specific regulations on data protection and artificial intelligence (AI), certain legal trends can be identified, particularly in light of the new European General Data Protection Regulation (GDPR).
The GDPR requires data controllers to demonstrate compliance, including obligations to carry out, at an initial stage, a data protection impact assessment for each risky process or product and to implement data protection by design and by default.
This implies an obligation for software developers and other parties that intervene in the creation and management of AI to integrate the data governance process with appropriate safeguards including, for instance, data minimization and data portability (which should cover both the data provided knowingly by the data subject and the data generated by their activity).
Furthermore the GDPR requires security measures that are “appropriate” to the risk, taking into account the evolving technological progress. This is particularly relevant when dealing with the potential risks of AI.
The application of the above principles will be key for all parties involved to limit their responsibility, or at least to obtain insurance cover for the data protection (and related data breach) risks. In this respect, the adherence to industry codes of conduct or other data protection adequacy certifications will also help.
Informed consent from the data subject is another key principle for the GDPR, as was already the case for most European jurisdictions. Such consent may not be easy to achieve within an AI scenario, particularly when it is not possible to rely upon predefined sets of instructions.
This is even more relevant if we consider that updated consent may not be easy to achieve for “enriched data”, certain non-personal data that have become personal data (i.e., associated with an individual) through processing combined with other data gathered by the AI from the environment or other sources.
This may lead to a substantial increase in requests for consent (through simplified, yet explicit forms), even when personal data are not being used. Such an increase may not necessarily entail an equivalent increase in awareness of data protection – as was seen with the application of cookie regulations in certain European jurisdictions.
When dealing with AI, it may be that under certain circumstances parties involved will opt for a request of “enhanced consent”, as is applied in Italy for certain contracts that impose clauses that are particularly unfavorable for the consumer. Such consent, however, will not per se exclude responsibility for the entity ultimately responsible for the data processing.
The GDPR provides that individuals shall have the right not to be subject to a decision based solely on automated processing, including profiling, unless such decision is provided by the law (e.g., fraud prevention systems) or is necessary to enter into or perform a contract, or is based on the data subject’s explicit consent.
In the latter two instances, the GDPR also provides that the data subject shall have the right to receive an explanation by a “natural” person. The data subject will accordingly have the right to express their opinion, and this may lead to increasing transparency as to the usage of AI, with specific explanation processes to be embedded in software architectures and governance models.
However, it will remain very difficult to determine how certain decisions are made, particularly when decisions are based on enormous data combinations. In such cases, any explanation may likely cover the processes, elements or categories of data that have been taken into account when making a decision.
It is likely that data governance models will increasingly go into detail on how certain decisions are taken by the AI, so as to facilitate explanations when required. Whether this will lead to rights similar to the principle of “software decompiling” rights in certain civil law jurisdictions is yet to be determined.
Undoubtedly, data protection awareness will become increasingly relevant for all AI practitioners. More sophisticated organizations will set up specific governance guidelines when dealing with AI, with such guidelines to address not only the overall technical and data feeding processes, but also a number of legal and ethical issues.