Posted in Cybersecurity Privacy and Data Security

New York AG Announces Record Year for Data Breaches in New York – and Updates Guidance on Reasonable Security Measures

Written by Michelle Anderson and Anne Kierig

New York Attorney General Eric Schneiderman announced that his office received a record number (1,300) of data breach notices in 2016. In the press release, Attorney General Schneiderman also provided a list of recommendations for how organizations can help protect sensitive personal data—a list that could be used as a benchmark against which the Attorney General’s office could evaluate whether a company has implemented reasonable security measures.

Many of the recommendations overlap with those made by other regulators (e.g., minimizing data collection appears in the FTC’s Start with Security: A Guide for Business), but they reiterate that the New York Attorney General considers both encryption and offering post-breach services like credit monitoring to be more of an expectation than an option. They also highlight the importance of having a written information security policy, rather than undocumented procedures. The Attorney General’s recommendations update those made in a 2014 report titled Information Exposed: Historical Examination of Data Security in New York State and are as follows:

  • Understand where your business stands by understanding the types of information your business has collected and needs, how long it is stored, and what steps have been taken to secure it. Data mapping would be a ready way to meet this recommendation.
  • Identify and minimize data collection practices by collecting only the information that you need, storing it only for the minimum time necessary, and using data minimization tactics when possible (e.g., don’t store credit card expiration dates with credit card numbers).
  • Create an information security plan that includes encryption and other technical standards, as well as training, awareness, and “detailed procedural steps in the event of data breaches.” This recommendation reiterates a recommendation from the 2014 report, in which the Attorney General said that effective technical safeguards include “[r]equir[ing] encryption of all stored sensitive personal information—including on databases, hard drives, laptops, and portable devices.”
  • Implement an information security plan and conduct regular reviews to ensure that the plan aligns with ever-changing best practices.
  • Take immediate action in the event of a breach by investigating immediately and thoroughly and notifying consumers, law enforcement, regulators, credit bureaus, and other businesses as required. This is the only recommendation that is new since the 2014 report, which mentioned the importance of immediate breach response only in the context of implementing an information security plan. Now, however, it’s a separate recommendation.
  • Offer mitigation products in the event of a breach, including credit monitoring.
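The data minimization recommendation above (e.g., not storing expiration dates with card numbers) can be sketched as an allow-list filter applied before a record is stored. This is an illustrative sketch only; the field names and the `minimize` helper are hypothetical, not anything prescribed in the Attorney General’s guidance.

```python
# Hypothetical allow-list of the fields the business actually needs;
# everything else is dropped before the record is stored.
REQUIRED_FIELDS = {"card_number_token", "cardholder_name"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only allow-listed fields."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {
    "card_number_token": "tok_4242",   # hypothetical tokenized card number
    "cardholder_name": "Jane Doe",
    "expiration_date": "12/19",        # should not be stored with the number
    "cvv": "123",                      # should never be stored post-authorization
}
stored = minimize(raw)  # only the two allow-listed fields survive
```

The same allow-list approach also limits how long unneeded data lingers: anything not captured at intake never has to be retained, secured, or reported if breached.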

The announcement also revealed that the number of data breaches reported to the New York Attorney General’s office in 2016 represented a 60% increase over prior years. It reported that the most common causes of data breaches were external (i.e., hacking) and employee negligence (consisting of inadvertent exposure of records, insider wrongdoing, and the loss of a device or media). These causes accounted for, respectively, 40% and 37% of the reported breaches, showing a rise in employee negligence as a breach source. The information exposed consisted overwhelmingly of Social Security numbers (46%) and financial account information (35%).

This announcement also comes on the heels of final cybersecurity rules for the financial sector from the New York Department of Financial Services (NYDFS). The NYDFS requirements went into effect on March 1, 2017, and are designed to keep both “nonpublic information” and “information systems” secure. More information about these requirements can be found in DLA Piper’s Cybersecurity Law Alert NYDFS announces final cybersecurity rules for financial services sector: key takeaways.

 

Posted in Cybersecurity EU Data Protection International Privacy Privacy and Data Security

FRANCE: The French Data Protection Authority (CNIL) Publishes 6-Step Methodology For Compliance With GDPR

Written by Carol Umhoefer and Caroline Chancé 

On March 15, 2017, the CNIL published a 6-step methodology for companies that want to prepare for the changes that will apply as from May 25, 2018 under the EU General Data Protection Regulation (“GDPR”).

The abolishment under the GDPR of registrations and filings with data protection authorities will represent a fundamental shift in the data protection compliance framework in France, which has been heavily reliant on declarations to the CNIL and authorizations from the CNIL for certain types of personal data processing. In place of declarations, the CNIL underscores the importance of “accountability” and “transparency”, core principles that underlie the GDPR requirements. These principles necessitate taking privacy risk into account throughout the process of designing a new product or service (privacy by design and by default), implementing proper information governance, and adopting internal measures and tools to ensure optimal protection of data subjects.

In order to help organizations get ready for the GDPR, the CNIL has published the following 6-step methodology:

Step 1: Appoint a data protection officer (“DPO”) to “pilot” the organization’s GDPR compliance program

Pursuant to Article 37 of the GDPR, appointing a DPO will be required if the organization is a public entity, if the core activities of the organization require the regular and systematic monitoring of data subjects on a large scale, or if such activities consist of the processing of sensitive data on a large scale. The CNIL recommends appointing a DPO before the GDPR applies in May 2018.

Even when a DPO is not required, the CNIL strongly recommends appointing a person responsible for managing GDPR compliance in order to facilitate comprehension of and compliance with the GDPR, cooperation with authorities, and mitigation of litigation risk.

Step 1 will be considered completed once the organization has appointed a DPO and provided him/her with the human and financial resources needed to carry out his/her duties.

Step 2: Undertake data mapping to measure the impact of the GDPR on existing data processing

Pursuant to Article 30 of the GDPR, controllers and processors will be required to maintain a record of their processing activities. In order to measure the impact of the GDPR on existing data processing and maintain a record, the CNIL advises organizations to identify data processing, the categories of personal data processed, the purposes of each processing, the persons who process the data (including data processors), and data flows, in particular data transfers outside the EU.

To adequately map data, the CNIL recommends asking:

  • Who? (identity of the data controller, the persons in charge of the processing operations and the data processors)
  • What? (categories of data processed, sensitive data)
  • Why? (purposes of the processing)
  • Where? (storage location, data transfers)
  • Until when? (data retention period)
  • How? (security measures in place)

Step 2 will be considered completed once the organization has identified the stakeholders for processing, established a list of all processing by purposes and categories of data processed, and identified the data processors, to whom and where the data is transferred, where the data is stored and for how long it is retained.
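One way to operationalize the six questions above is a single record per processing activity. The structure below is a hedged sketch: the field names and example values are illustrative assumptions, not a format prescribed by the CNIL or by Article 30 of the GDPR.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    controller: str              # Who? - the data controller
    processors: list             # Who? - data processors involved
    data_categories: list        # What? - categories of data, incl. sensitive data
    purpose: str                 # Why? - purpose of the processing
    storage_location: str        # Where? - where the data is stored
    transfers_outside_eu: bool   # Where? - any transfers outside the EU
    retention_period: str        # Until when? - how long the data is kept
    security_measures: list      # How? - safeguards in place

# Illustrative entry for a hypothetical customer-support database.
record = ProcessingRecord(
    controller="Example SA (hypothetical)",
    processors=["CloudHost Ltd (hypothetical)"],
    data_categories=["name", "email address"],
    purpose="customer support",
    storage_location="EU data center",
    transfers_outside_eu=False,
    retention_period="3 years after last contact",
    security_measures=["encryption at rest", "role-based access controls"],
)
```

A full Article 30 record would carry more detail (for example, the legal basis identified in Step 3 and the recipients of the data), but even this minimal shape answers each of the CNIL’s six questions for one processing activity.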

Step 3: Based on the results of data mapping, identify key compliance actions and prioritize them depending on the risks to individuals

In order to prioritize the tasks to be performed, the CNIL recommends:

  • Ensuring that only data strictly necessary for the purposes is collected and processed;
  • Identifying the legal basis for the processing;
  • Revising privacy notices to make them compliant with the GDPR;
  • Ensuring that data processors know their new obligations and responsibilities and that data processing agreements contain the appropriate provisions in respect of security, confidentiality and protection of personal data;
  • Deciding how data subjects will be able to exercise their rights;
  • Verifying security measures in place.

In addition, the CNIL recommends particular caution when the organization processes data such as sensitive data, criminal records and data regarding minors, when the processing presents certain risks to data subjects (massive surveillance and profiling), or when data is transferred outside the EU.

Step 3 will be considered completed once the organization has implemented the first measures to protect data subjects and has identified high risk processing.

Step 4: Conduct a privacy impact assessment for any data processing that presents high privacy risks to data subjects due to the nature or scope of the processing operations

Conducting a privacy impact assessment (“PIA”) is essential to assess the impact of a processing on data subjects’ privacy and to demonstrate that the fundamental principles of the GDPR have been complied with.

The CNIL recommends conducting a PIA before collecting data and starting processing, and any time processing is likely to present high privacy risks to data subjects. A PIA contains a description of the processing and its purposes, an assessment of the necessity and proportionality of the processing, an assessment of the risks to data subjects, and measures contemplated to mitigate the risks and comply with the GDPR.

The CNIL has published guidelines in 3 volumes to help organizations conduct PIAs (see here, here and here).

Step 4 will be considered completed once the organization has implemented measures to respond to the principal risks and threats to data subjects’ privacy.

Step 5: Implement internal procedures to ensure a high level of protection for personal data

According to the CNIL, implementing compliant internal procedures implies adopting a privacy by design approach, increasing awareness, facilitating information reporting within the organization, responding to data subject requests, and anticipating data breach incidents.

Step 5 will be considered completed once the organization has adopted good practices in respect of data protection and knows what to do and who to go to in case of incident.

Step 6: Document everything to be able to prove compliance with the GDPR

In order to be able to demonstrate compliance, the CNIL recommends that organizations retain documents regarding the processing of personal data, such as: records of processing activities, PIAs and documents regarding data transfers outside the EU; transparency documents such as privacy notices, consent forms, procedures for exercising data subject rights; and agreements defining the roles and responsibilities of each stakeholder, including data processing agreements, internal procedures in case of data breach, and proof of consent when the processing is based on the data subject’s consent.

Step 6 will be considered completed once the organization’s documentation shows that it complies with all the GDPR requirements.

The CNIL’s methodology includes several useful tools (template records, guidelines, template contract clauses, etc.) and will be supplemented over time to take into account the WP29’s guidelines and the CNIL’s responses to frequently asked questions.

Posted in Cybersecurity

Consumer Reports Begins Cybersecurity Evaluations

Consumer Reports (CR) announced on March 6, 2017, that it is developing a new standard—The Digital Standard—for safeguarding consumers’ security and privacy. The eventual goal is for CR to use the Standard to evaluate and rate consumer products. By scoring products based on certain Standard criteria, CR aims to help consumers make informed purchasing decisions based on how products protect their privacy and security.

The Standard is currently divided into 35 general testing categories, each of which is (or will be) further divided into (i) test criteria, (ii) indicators of those criteria, and (iii) procedures for evaluating them. For example, under the “Data control” testing category, CR first asks whether a consumer can “see and control everything the company knows about” him/her. Indicators of consumer data control include, among other things, whether users can control the collection of their information, delete their information, and control how their information is used to target advertising. In order to evaluate whether a product gives consumers control over their data, the evaluator would investigate and analyze “publicly available documentation to determine what the company clearly discloses.”

Some of the criteria are in line with guidelines from other sources. For example, both the Standard and the FTC’s Start with Security guide discuss having passwords that are unique and complex. On other issues, however, some companies may find that the Standard stretches beyond existing guidance or market practices. For example, the “Ownership” testing category appears to touch on issues related to the First Sale Doctrine: it has as a testing criterion whether a consumer “own[s] every part” of the product, and the indicator of that criterion is that “[t]he company does not retain any control or ownership over the operation, use, inputs, or outputs of the product after it has been purchased by the consumer.”

Consumer Reports developed the standard in collaboration with a number of partners, primarily Disconnect, Ranking Digital Rights, and the Cyber Independent Testing Lab (CITL). It is currently a first draft, but CR and its collaborators welcome feedback and suggestions. To provide input, see the Contribute tab on the Standard’s website.

Posted in International Privacy Privacy and Data Security

Commerce to Begin Accepting Swiss-US Privacy Shield Applications in a Month

As we noted in our January blog post Swiss-US Privacy Shield Adopted, Aligns with EU-US Privacy Shield, the Department of Commerce will begin accepting self-certifications to the Swiss-US Privacy Shield on April 12, 2017.

In response to frequently asked questions, Commerce provides guidance on how to self-certify:

  • Companies already certified under the EU-US Privacy Shield: Companies that have already self-certified to the EU-US Privacy Shield Framework can log in to their existing Privacy Shield accounts and click on “self-certify.” Companies will have to pay a separate annual fee, which will be similar in tier structure to the EU-US Privacy Shield fee structure. If a company is approved under both frameworks, the re-certification date will be one year from the date of the first of the two certifications.
  • Companies not yet certified under the EU-US Privacy Shield: If a company is not yet certified under the EU-US Privacy Shield, then it will be able to select the “Self-Certify” link on the Privacy Shield website to certify for one or both of the frameworks.

Regardless of whether a company is certified under the EU-US Privacy Shield, any company applying for certification under the Swiss-US Privacy Shield framework will have to update its privacy notice to align with Privacy Shield requirements.
Posted in Technology and Commercial Telecoms

Net Neutrality in Sweden – PTA decision suspended

My colleague Emil Odling, lead partner for IP and Technology in Stockholm, has written the piece below discussing a decision this week by the Swedish courts suspending the decision of the Swedish regulator that would have required Telia to stop certain practices on the basis that they infringe the net neutrality rules. Note that although the offers concerned are zero-rated, it appears that the PTA’s (now suspended) decision looked at traffic management more generally and did not consider zero-rating specifically.

>>>>

On 8 March 2017, the Swedish Administrative Court of Appeal ruled to suspend the Swedish Post and Telecom Authority’s (“PTA”) decision prohibiting partially state-owned telecom and mobile network operator Telia Company AB (“Telia”) from distributing two services which, according to the PTA, constituted a breach of the so-called Open Internet Regulation.

Continue Reading

Posted in Commercial Contracting Strategic Sourcing Uncategorized

Exit: Having a Plan

Written by Roxanne Chow

No one wants to start a new supplier relationship by discussing what happens if a service contract terminates. Everyone’s busy getting the service off the ground, and no one wants to upset goodwill by talking about breaking up. However, lawyers specialize in “what ifs”, so this comment will look at how customers should tackle exit and termination assistance issues at the outset.

Before the provision of services begins, it’s difficult for a customer and supplier to know exactly how an exit will work, but it’s best practice to have agreed in the contract the high-level principles as to what the termination assistance services will need to include in the future, for example:

  • The process for developing an exit plan
    • The date by which the supplier is expected to provide a draft exit plan (typically this is within weeks or months of contract signature)
    • The escalation process if the parties cannot agree the draft exit plan, including timescales for resolution
    • The process for finalisation of the draft exit plan when exit is actually triggered
  • The content of the exit plan
    • The preparatory activities to be carried out by the supplier during the term of the contract and prior to exit being triggered (such as maintaining the asset and IP registers, updating and maintaining the draft exit plan, supplier cooperation with the customer’s tendering process for a replacement supplier, etc.)
    • The activities to be carried out by the supplier once exit is triggered (such as the supplier’s provision of its operational and personnel information related to the services, procuring the assignment or novation of any third party contracts, knowledge transfer by the supplier to the customer or a replacement supplier including job shadowing, service migration activities)
    • The customer’s responsibilities which will enable the supplier to carry out termination assistance
    • The management of termination assistance; for example, will there need to be a dedicated supplier team? Who will be the main contacts for each party during exit?
    • After-care, such as the continued provision of information and assistance to the customer or replacement supplier after services have been migrated
    • Setting out a process for agreeing the exit plan on exit and expediting termination assistance in an emergency situation
    • The anticipated duration of the termination assistance activities, and what happens if termination assistance extends beyond the termination date of the contract
  • How termination assistance activities will be charged
    • Whether the charges for termination assistance activities are built into the charges for the services during the term, or whether there is a separate charge for termination assistance activities
    • Whether there are any activities that can be carried out within a fixed fee
    • Which activities will be payable on a time and materials basis
    • Whether payment for termination assistance is linked to any milestones in the exit plan

If you have a rough outline for exit from the start, it will be easier for the parties to figure out what each needs to do during the term of the contract with respect to preparing for exit and activating the exit plan, and provide more certainty as to what costs and charges for termination assistance services may be incurred in future.

 

Posted in EU Data Protection International Privacy Privacy and Data Security

Data protection laws and AI: What can we learn from the GDPR?

Written by Giangiacomo Olivi 

Connected devices that exchange substantial volumes of data come with some obvious data protection concerns. Such concerns increase when dealing with artificial intelligence or other devices/robots that autonomously collect large amounts of information and learn through experience.

Although there are not (yet) specific regulations on data protection and artificial intelligence (AI), certain legal trends can be identified, also taking into account the new European General Data Protection Regulation (GDPR).

Accountability

The GDPR requires data controllers to demonstrate compliance, including obligations to carry out at an initial stage a data protection impact assessment for each risky process/product and to implement data protection by design and by default.

This implies an obligation for software developers and other parties that intervene in the creation and management of AI to integrate the data governance process with appropriate safeguards including, for instance, data minimization and data portability (which should cover both the data provided knowingly by the data subject and the data generated by their activity).

Furthermore the GDPR requires security measures that are “appropriate” to the risk, taking into account the evolving technological progress. This is particularly relevant when dealing with the potential risks of AI.

The application of the above principles will be key for all parties involved to limit their responsibility, or at least to obtain insurance cover for the data protection (and related data breach) risks. In this respect, the adherence to industry codes of conduct or other data protection adequacy certifications will also help.

Informed Consent

Informed consent from the data subject is another key principle for the GDPR, as was already the case for most European jurisdictions. Such consent may not be easy to achieve within an AI scenario, particularly when it is not possible to rely upon predefined sets of instructions.

This is even more relevant if we consider that updated consent may not be easy to achieve for “enriched data”, certain non-personal data that have become personal data (i.e., associated with an individual) through processing combined with other data gathered by the AI from the environment or other sources.

This may lead to a substantial increase in requests for consent (through simplified, yet explicit forms), even when personal data are not being used. Such an increase may not necessarily entail an equivalent increase in awareness of data protection – as was seen with the application of cookie regulations in certain European jurisdictions.

When dealing with AI, it may be that under certain circumstances parties involved will opt for a request of “enhanced consent”, as is applied in Italy for certain contracts that impose clauses that are particularly unfavorable for the consumer. Such consent, however, will not per se exclude responsibility for the entity ultimately responsible for the data processing.

Explanations

The GDPR provides that individuals shall have the right not to be subject to a decision based solely on automated processing, including profiling, unless such decision is authorized by law (e.g., fraud prevention systems), is necessary to enter into or perform a contract, or is based on the data subject’s explicit consent.

In the two latter instances, the GDPR also provides that the data subject shall have the right to receive an explanation by a “natural” person. The data subject will accordingly have the right to express their opinion, and this may lead to increasing transparency as to the usage of AI, with specific explanation processes to be embedded in software architectures and governance models.

However, it will remain very difficult to determine how certain decisions are made, particularly when decisions are based on enormous data combinations. In such cases, any explanation may likely cover the processes, elements or categories of data that have been taken into account when making a decision.

It is likely that data governance models will increasingly go into detail on how certain decisions are taken by the AI, so as to facilitate explanations when required. Whether this will lead to rights similar to the principle of “software decompiling” rights in certain civil law jurisdictions is yet to be determined.

Undoubtedly, data protection awareness will become increasingly relevant for all AI practitioners. More sophisticated organizations will set up specific governance guidelines when dealing with AI, with such guidelines to address not only the overall technical and data feeding processes, but also a number of legal and ethical issues.

@giangiolivi

Posted in Telecoms

A new era for the General Conditions?

By Peter Elliott and Mike Conradi, DLA Piper

By many accounts, the UK’s framework for regulating communications services is amongst the world’s most dynamic and successful. Leaving in its wake a telecommunications licensing regime, in 2003 the UK Government influenced and then implemented new EU Directives which took a different approach to regulating telecoms: general authorisation. In short, this meant that, subject to certain exceptions (such as in respect of the ever-so-valuable radio spectrum), companies were given a general right to provide communications services or networks provided they complied with a set of rules, namely the General Conditions of Entitlement (or ‘General Conditions’ or ‘GCs’ for short). In the UK, unlike other EU countries, there was not even an obligation to notify Ofcom (the UK’s telecoms regulator) about the provision of communications services!

This fits in with Ofcom’s commitment towards ‘reducing regulation and minimising administrative burdens on its stakeholders‘ and its ‘bias against regulatory intervention‘. However, the General Conditions have increased in length and number since their inception; indeed, three new conditions and 63 pages have been added since 2003. Some of this is understandable; the communications market has changed significantly over the past 14 years, and Ofcom has had to respond to UK and global market developments in addition to implementing new EU Directives and regulations.

However, it is easier to build than deconstruct, and the General Conditions now often fail to meet Ofcom’s goal of seeking to ‘ensure that regulation does not involve…the maintenance of burdens which have become unnecessary‘. Navigating the unwieldy and confusing structure of the General Conditions is a burden that defeats many.

It is for this reason that Ofcom began a consultation with industry stakeholders in August 2016 to ‘produce a coherent set of regulatory conditions which are clearer and more practical, easier to comply with and simpler to enforce‘. While this may seem sensible, the stakeholders who have responded are nearly unanimous in celebrating the purpose of this exercise whilst criticising many of Ofcom’s actual proposals.

The consultation has been split into two parts. The first part, which ended in October 2016, concerned the General Conditions relating to network functioning and numbering, and Ofcom’s focus was primarily on shortening and simplifying these requirements; the second part (which is due to conclude on 14 March 2017), relates to consumer protection, and Ofcom’s proposals frequently would extend the scope of these General Conditions in order to take account of changes in technology and consumer behaviour. The proposed changes include (with our comments in italics in brackets):

Consolidating definitions: consolidating the definitions by placing them into a single section. (This is long overdue! More time and energy is often dispensed trying to discern the different ways in which the same terms – such as “Communications Provider” – are defined differently across the various General Conditions than it is actually reading the requirements themselves. The current structure is confusing and contrary to Ofcom’s goal of achieving coherency);

Consolidating overlapping Conditions: consolidating those General Conditions which address associated issues, namely by (i) combining those covering emergency services and emergency situations (GC 3 and GC 4), (ii) combining those covering directory information (GCs 8 and 19), and (iii) placing into a single condition all of the information publication requirements across the General Conditions (whilst also simplifying these, where possible). (Again, this was overdue, particularly as GCs 8 and 19 do not consequentially follow from each other, and the drafting under GC 19 always seemed unnecessarily long given the simplicity of the obligation);

Removing unnecessary Conditions: removing those requirements which are covered under other UK laws, which are no longer needed due to regulatory and market developments, or which are unnecessary because Ofcom has the right to exercise the relevant rights in any event – e.g. removing (i) the obligation on communications providers to share confidential information with Ofcom (GC 1.3), (ii) the prohibition on imposing unreasonable restrictions of network access (GC 3.2), (iii) the rules relating to directory enquiry services (GCs 6.1(b), 8.1(b) and 8.4), and (iv) some requirements on VoIP providers to provide information about service reliability amongst other things, and to ensure emergency calls can be made (Annex 3 to GC 14). (Some of these are welcome – for example, for many new market entrants, the concept of directory enquiry services seemed to hark back to a byzantine era. Similarly, with VoIP increasingly becoming the standard means of making voice calls amongst many enterprises and consumers, it is unsurprising that Ofcom have focussed on clarifying regulations in this area. However, these changes relating to VoIP have been called into question by several stakeholders; for example, Microsoft do not believe it is necessary to ‘create a discrete definition of potential communications services using a specific technology or network architecture’ and Vodafone ‘finds it curious that Ofcom continues to regulate on a technology-centric basis, with specific requirements placed on VoIP call services’. We expect more jockeying in this area as, arguably, the future of VoIP (and data) is the near-future of telecoms);

Extending billing requirements: increasing the scope of the rules on billing accuracy, debt collection and disconnection procedures for non-payment of bills so that, in addition to voice call services, they apply to data services. (This is unsurprising given the uptake in data-related services in recent years. In respect of billing accuracy, Ofcom appears to be targeting the largest players in the market as it also proposes increasing the turnover threshold for triggering these obligations from £40m to £55m; this should help support competition from the smaller players, although this is likely to be contested by the larger communications providers); and

Establishing a new code for disputes and complaints handling: creating a new code containing, for example, a requirement (i) to inform a customer proactively about how and when a complaint will be handled, and (ii) to provide certain information to customers who have made a complaint (e.g. the latest date following the closure/resolution of a complaint by which the customer can revert to the communications provider stating they remain unhappy). (Whilst the intention behind these changes is understandable, how readily they will operate in practice is questionable as different complaints may merit different responses that, in turn, may require different levels of resourcing which could be difficult for a communications provider to determine in advance. Again, communications providers are likely to resist some of these proposals).

All in all, whilst not a complete overhaul of the General Conditions of Entitlement, these changes are likely to represent a significant and – largely – much-needed makeover. It will be interesting to see if and how Ofcom takes into account the responses it receives from industry stakeholders.
Either way, Ofcom intends to publish a final statement on its proposals, in addition to the revised versions of the General Conditions, in the Spring of 2017.

Posted in Asia Privacy Cybersecurity International Privacy New Privacy Laws Privacy and Data Security

CHINA DATA PROTECTION UPDATE (JANUARY 2017)

Guidance on who is a “key information infrastructure operator” under the PRC Cybersecurity Law, and draft regulations on handling minors’ data

In the rapidly evolving data protection compliance environment in the People’s Republic of China, this month has seen some helpful clarification around two areas of uncertainty – namely:

  • further indications as to who will be deemed a “KIIO” (and so subject to the data localization rules under the PRC Cybersecurity Law); and
  • clarification of the additional safeguards required when handling the personal data of minors,

but unfortunately in both regards significant uncertainties remain.

New Cybersecurity Strategy gives first guidance on application of PRC Cybersecurity Law

Following the recent enactment of the PRC Cybersecurity Law, China’s Internet regulator published the country’s first National Cyberspace Security Strategy (the “Strategy“) on December 27, 2016. The Strategy offers few fresh initiatives but summarizes goals within the PRC Cybersecurity Law and other regulations passed over the past year. A guiding concept is “Internet sovereignty”, which the Strategy defines as China’s right to police the Internet within its borders and participate in managing international cyberspace. In particular, the Strategy emphasizes the strategic need to safeguard key information infrastructure operators (“KIIOs“).

Most importantly, the Strategy seeks to clarify the definition of a KIIO, by providing guidance on the industries which the Chinese Government will prioritize with respect to cybersecurity.

A KIIO is defined in the Strategy as an operator of “information facilities that have an immediate bearing on national security, the national economy or people’s livelihoods such that, in the event of a data leakage, damage, or loss of functionality, national security and public interest would be jeopardized“. This aligns with the definition in the PRC Cybersecurity Law, and indicates that the potential impact of a security breach is a key factor in determining who will be considered a KIIO.

In addition, the expanded definition put forward in the Strategy includes clarification on the industries that the Chinese authorities consider to be operating key information infrastructure. The PRC Cybersecurity Law listed “public communications and information service, energy, transportation, hydropower, finance, public service, e-government and other critical information infrastructure”, and the Strategy clarifies this by:

  • listing “basic telecommunications networks that provide public communications, radio and television transmission and other such services” to expand on the definition of “public communications” operators;
  • noting that important information systems in sectors and State bodies in the additional fields of “education“, “scientific research“, “industry and manufacturing“, “medicine and health” and “social security” will also be caught; and
  • identifying that “important Internet application systems” will be deemed to be KIIOs as well. Unofficial reports suggest that this is intended to catch popular apps such as Taobao and WeChat which have millions of daily users in China who would be affected by a security breach.

Organizations within these newly-highlighted sectors are now also advised to pay attention to the additional cybersecurity and data protection obligations imposed on KIIOs in the PRC Cybersecurity Law and consider updating their compliance programs accordingly. For our summary of the key features of the PRC Cybersecurity Law click here.

Unfortunately, this additional guidance is far from definitive: it remains unclear whether all organizations within the specified industries will automatically be deemed KIIOs if they operate any networks (potentially even just a website) in the People’s Republic of China. Further, other key uncertainties under the PRC Cybersecurity Law – including the definitions of “network operator” and “important business data” – remain. The ongoing uncertainty is extremely unhelpful for local and international organizations trying to determine whether they need to update their China compliance programs before 1 June 2017, when the PRC Cybersecurity Law becomes effective, and we hope that further guidance will be published shortly.

Draft Regulations on the protection of the use of the Internet by minors published for comment

The State Council published for public consultation the draft Regulations on the Protection of the Use of Internet by Minors (the “Draft Regulations“) on January 7, 2017 to provide additional protection to minors (i.e., Chinese citizens under the age of eighteen) when they are online. In particular, the Draft Regulations propose additional data protection obligations with which “network information service providers” (i.e., organizations and individuals using networks to provide users with information technology, information services, and information products, including online platform service providers and providers of online content and products) would need to comply. The definition of a “network information service provider” appears to catch any individual or business that operates websites or processes online data in China.

Some of the key provisions of the Draft Regulations include:

  • Network information service providers must conduct reviews of the information published on their platform. If any content is deemed unsuitable for minors, a warning must be placed prominently before the content is displayed. The Draft Regulations recognize the need for relevant authorities to publish policies to offer guidance to organizations on how to manage information unsuitable for minors.
  • “Minors’ personal information” is given a wide definition, and would capture all kinds of information, whether recorded electronically or through other means, that when alone or taken together with other information is sufficient to identify a minor’s identity, including but not limited to a minor’s full name, location, residential address, date of birth, contact information, account name, identification number, personal biometric information, and photographs.
  • Individuals or organizations collecting and using minors’ personal information online must clearly notify (for example, by way of a website privacy policy) the purposes, means and scope of collecting or using such personal information and obtain the consent of the minor or their parent/guardian. The Draft Regulations would require “specific privacy policies” to be formulated for such collection and use to enhance protection of minors’ personal information, although it is unclear whether the authorities would require a separate privacy policy specifically aimed at minors and their parent/guardian to be published on websites. Amid the uncertainties, if the Draft Regulations are passed, individuals or organizations collecting and using minors’ personal information online, especially on websites that are targeted at minors, are urged to review their existing privacy policies to ensure that as a minimum the required consent is obtained and that their privacy policy at least clearly addresses collection of data from or about minors.
  • Network information service providers that offer search functions on their platforms would not be allowed to display search results that contain minors’ personal information. If a minor or his/her parent/guardian requests that a network information service provider delete or block the minor’s personal information that is available online, the network information service provider would also be required to do so.

Consultation on the Draft Regulations closes on 6 February 2017. It is hoped that some of the uncertainties in the Draft Regulations will be clarified before the Draft Regulations are finalized and come into force. In the meantime, organizations – particularly those whose websites are aimed at young people – are warned that, if passed, the Draft Regulations would require a pro-active review and update of their Chinese websites and privacy policies, and data collection/retention policies and procedures, to address these new safeguards.

DLA Piper’s Data Protection and Privacy practice delivers topical legal and regulatory updates and analysis from across the globe. To learn more please click here.
