A €3+ million sanction was issued against a multinational company by the Italian Competition Authority for unlawfully conducted prize promotions.
My colleague Emil Odling, lead partner for IP and Technology in Stockholm, has written the piece below discussing a decision this week by the Swedish courts suspending the Swedish regulator’s decision that would have required Telia to stop certain practices on the basis that they infringe the net neutrality rules. Note that although the offers concerned are zero-rated, it appears that the PTA’s (now suspended) decision looked at traffic management more generally and did not consider zero-rating specifically.
On 8 March 2017, the Swedish Administrative Court of Appeal ruled to suspend the Swedish Post and Telecom Authority’s (“PTA”) decision prohibiting partially state-owned telecom and mobile network operator Telia Company AB (“Telia”) from distributing two services which, according to the PTA, constituted a breach of the so-called Open Internet Regulation.
Written by Roxanne Chow
No one wants to start a new supplier relationship by discussing what happens if a service contract terminates. Everyone’s busy getting the service off the ground, and no one wants to upset goodwill by talking about breaking up. However, lawyers specialize in “what ifs”, so this comment will look at how customers should tackle exit and termination assistance issues at the outset.
Before the provision of services begins, it’s difficult for a customer and supplier to know exactly how an exit will work, but it’s best practice to agree in the contract the high-level principles as to what the termination assistance services will need to include, for example:
- The process for developing an exit plan
- The date by which the supplier is expected to provide a draft exit plan (typically this is within weeks or months of contract signature)
- The escalation process if the parties cannot agree the draft exit plan, including timescales for resolution
- The process for finalisation of the draft exit plan when exit is actually triggered
- The content of the exit plan
- The preparatory activities to be carried out by the supplier during the term of the contract and prior to exit being triggered (such as maintaining the asset and IP registers, updating and maintaining the draft exit plan, supplier cooperation with the customer’s tendering process for a replacement supplier, etc.)
- The activities to be carried out by the supplier once exit is triggered (such as the supplier’s provision of its operational and personnel information related to the services, procuring the assignment or novation of any third party contracts, knowledge transfer by the supplier to the customer or a replacement supplier including job shadowing, service migration activities)
- The customer’s responsibilities which will enable the supplier to carry out termination assistance
- The management of termination assistance; for example, will there need to be a dedicated supplier team? Who will be the main contacts for each party during exit?
- After-care, such as the continued provision of information and assistance to the customer or replacement supplier after services have been migrated
- Setting out a process for agreeing the exit plan on exit and expediting termination assistance in an emergency situation
- The anticipated duration of the termination assistance activities, and what happens if termination assistance extends beyond the termination date of the contract
- How termination assistance activities will be charged
- Whether the charges for termination assistance activities are built into the charges for the services during the term, or whether there is a separate charge for termination assistance activities
- Whether there are any activities that can be carried out within a fixed fee
- Which activities will be payable on a time and materials basis
- Whether payment for termination assistance is linked to any milestones in the exit plan
Having a rough outline for exit from the start makes it easier for the parties to work out what each needs to do during the term of the contract to prepare for exit and to activate the exit plan, and provides more certainty as to what costs and charges for termination assistance services may be incurred in future.
Written by Giangiacomo Olivi
Connected devices that exchange substantial volumes of data come with some obvious data protection concerns. Such concerns increase when dealing with artificial intelligence or other devices/robots that autonomously collect large amounts of information and learn through experience.
Although there are not (yet) specific regulations on data protection and artificial intelligence (AI), certain legal trends can be identified, also taking into account the new European General Data Protection Regulation (GDPR).
The GDPR requires data controllers to demonstrate compliance, including obligations to carry out, at an initial stage, a data protection impact assessment for each high-risk process or product and to implement data protection by design and by default.
This implies an obligation for software developers and other parties that intervene in the creation and management of AI to integrate the data governance process with appropriate safeguards including, for instance, data minimization and data portability (which should cover both the data provided knowingly by the data subject and the data generated by their activity).
Furthermore, the GDPR requires security measures that are “appropriate” to the risk, taking into account evolving technological progress. This is particularly relevant when dealing with the potential risks of AI.
The application of the above principles will be key for all parties involved to limit their responsibility, or at least to obtain insurance cover for the data protection (and related data breach) risks. In this respect, the adherence to industry codes of conduct or other data protection adequacy certifications will also help.
Informed consent from the data subject is another key principle for the GDPR, as was already the case for most European jurisdictions. Such consent may not be easy to achieve within an AI scenario, particularly when it is not possible to rely upon predefined sets of instructions.
This is even more relevant if we consider that updated consent may not be easy to obtain for “enriched data”: non-personal data that have become personal data (i.e., associated with an individual) through processing combined with other data gathered by the AI from the environment or other sources.
This may lead to a substantial increase in requests for consent (through simplified, yet explicit forms), even when personal data are not being used. Such an increase may not necessarily entail an equivalent increase in awareness of data protection – as was seen with the application of cookie regulations in certain European jurisdictions.
When dealing with AI, it may be that under certain circumstances parties involved will opt for a request of “enhanced consent”, as is applied in Italy for certain contracts that impose clauses that are particularly unfavorable for the consumer. Such consent, however, will not per se exclude responsibility for the entity ultimately responsible for the data processing.
The GDPR provides that individuals shall have the right not to be subject to a decision based solely on automated processing, including profiling, unless such decision is provided by the law (e.g., fraud prevention systems) or is necessary to enter into or perform a contract, or is based on the data subject’s explicit consent.
In the two latter instances, the GDPR also provides that the data subject shall have the right to receive an explanation by a “natural” person. The data subject will accordingly have the right to express their opinion, and this may lead to increasing transparency as to the usage of AI, with specific explanation processes to be embedded in software architectures and governance models.
However, it will remain very difficult to determine how certain decisions are made, particularly when decisions are based on enormous data combinations. In such cases, any explanation may likely cover the processes, elements or categories of data that have been taken into account when making a decision.
It is likely that data governance models will increasingly go into detail on how certain decisions are taken by the AI, so as to facilitate explanations when required. Whether this will lead to rights similar to the principle of “software decompiling” rights in certain civil law jurisdictions is yet to be determined.
Undoubtedly, data protection awareness will become increasingly relevant for all AI practitioners. More sophisticated organizations will set up specific governance guidelines when dealing with AI, with such guidelines to address not only the overall technical and data feeding processes, but also a number of legal and ethical issues.
By Peter Elliott and Mike Conradi, DLA Piper
By many accounts, the UK’s framework for regulating communications services is amongst the world’s most dynamic and successful. In 2003, leaving behind its telecommunications licensing regime, the UK Government influenced and then implemented new EU Directives which took a different approach to regulating telecoms: general authorisation. In short, this meant that, subject to certain exceptions (such as in respect of the ever-so-valuable radio spectrum), companies were given a general right to provide communications services or networks provided they complied with a set of rules, namely the General Conditions of Entitlement (or ‘General Conditions’ or ‘GCs’ for short). In the UK, unlike other EU countries, there was not even an obligation to notify Ofcom (the UK’s telecoms regulator) about the provision of communications services!
This fits in with Ofcom’s commitment towards ‘reducing regulation and minimising administrative burdens on its stakeholders’ and its ‘bias against regulatory intervention’. However, the General Conditions have increased in length and number since their inception; indeed, three new conditions and 63 pages have been added since 2003. Some of this is understandable; the communications market has changed significantly over the past 14 years, and Ofcom has had to respond to UK and global market developments in addition to implementing new EU Directives and regulations.
However, it is easier to build than deconstruct, and the General Conditions now often fail to meet Ofcom’s goal of seeking to ‘ensure that regulation does not involve…the maintenance of burdens which have become unnecessary’. Navigating the unwieldy and confusing structure of the General Conditions is a task that defeats many.
It is for this reason that Ofcom began a consultation with industry stakeholders in August 2016 to ‘produce a coherent set of regulatory conditions which are clearer and more practical, easier to comply with and simpler to enforce’. While this may seem sensible, the stakeholders who have responded are nearly unanimous in welcoming the purpose of this exercise whilst criticising many of Ofcom’s actual proposals.
The consultation has been split into two parts. The first part, which ended in October 2016, concerned the General Conditions relating to network functioning and numbering, and Ofcom’s focus was primarily on shortening and simplifying these requirements; the second part (which is due to conclude on 14 March 2017), relates to consumer protection, and Ofcom’s proposals frequently would extend the scope of these General Conditions in order to take account of changes in technology and consumer behaviour. The proposed changes include (with our comments in italics in brackets):
• Consolidating definitions: consolidating the definitions by placing them into a single section. (This is long overdue! More time and energy is often expended trying to discern the different ways in which the same terms – such as “Communications Provider” – are defined across the various General Conditions than is spent actually reading the requirements themselves. The current structure is confusing and contrary to Ofcom’s goal of achieving coherence);
• Consolidating overlapping Conditions: consolidating those General Conditions which address associated issues, namely by (i) combining those covering emergency services and emergency situations (GC 3 and GC 4), (ii) combining those covering directory information (GCs 8 and 19), and (iii) placing into a single condition all of the information publication requirements across the General Conditions (whilst also simplifying these, where possible). (Again, this was overdue, particularly as GCs 8 and 19 do not consequentially follow from each other, and the drafting under GC 19 always seemed unnecessarily long given the simplicity of the obligation);
• Removing unnecessary Conditions: removing those requirements which are covered under other UK laws, which are no longer needed due to regulatory and market developments, or which are unnecessary because Ofcom has the right to exercise the relevant rights in any event – e.g. removing (i) the obligation on communications providers to share confidential information with Ofcom (GC 1.3), (ii) the prohibition on imposing unreasonable restrictions on network access (GC 3.2), (iii) the rules relating to directory enquiry services (GCs 6.1(b), 8.1(b) and 8.4), and (iv) some requirements on VoIP providers to provide information about service reliability amongst other things, and to ensure emergency calls can be made (Annex 3 to GC 14). (Some of these are welcome – for example, for many new market entrants, the concept of directory enquiry services seemed to hark back to a bygone era. Similarly, with VoIP increasingly becoming the standard means of making voice calls amongst many enterprises and consumers, it is unsurprising that Ofcom have focussed on clarifying regulations in this area. However, these changes relating to VoIP have been called into question by several stakeholders; for example, Microsoft do not believe it is necessary to ‘create a discrete definition of potential communications services using a specific technology or network architecture’ and Vodafone ‘finds it curious that Ofcom continues to regulate on a technology-centric basis, with specific requirements placed on VoIP call services’. We expect more jockeying in this area as, arguably, the future of VoIP (and data) is the near-future of telecoms);
• Extending billing requirements: increasing the scope of the rules on billing accuracy, debt collection and disconnection procedures for non-payment of bills so that, in addition to voice call services, they apply to data services. (This is unsurprising given the uptake in data-related services in recent years. In respect of billing accuracy, Ofcom appears to be targeting the largest players in the market as it also proposes increasing the turnover threshold for triggering these obligations from £40m to £55m; this should help support competition from the smaller players, although this is likely to be contested by the larger communications providers); and
• Establishing a new code for disputes and complaints handling: creating a new code containing, for example, a requirement (i) to inform a customer proactively about how and when a complaint will be handled, and (ii) to provide certain information to customers who have made a complaint (e.g. the latest date following the closure/resolution of a complaint by which the customer can revert to the communications provider stating they remain unhappy). (Whilst the intention behind these changes is understandable, how readily they will operate in practice is questionable as different complaints may merit different responses that, in turn, may require different levels of resourcing which could be difficult for a communications provider to determine in advance. Again, communications providers are likely to resist some of these proposals).
All in all, whilst not a complete overhaul of the General Conditions of Entitlement, these changes are likely to represent a significant and – largely – much-needed makeover. It will be interesting to see if and how Ofcom takes into account the responses it receives from industry stakeholders.
Either way, Ofcom intends to publish a final statement on its proposals, in addition to the revised versions of the General Conditions, in spring 2017.
Guidance on who is a “key information infrastructure operator” under the PRC Cybersecurity Law, and draft regulations on handling minors’ data
In the rapidly evolving data protection compliance environment in the People’s Republic of China, this month has seen some helpful clarification around two areas of uncertainty – namely:
- some further indications as to who will be deemed a “KIIO” (and so subject to the data localization rules under the PRC Cybersecurity Law); and
- on the additional safeguards required when handling personal data of minors,
but unfortunately in both regards significant uncertainties remain.
New Cybersecurity Strategy gives first guidance on application of PRC Cybersecurity Law
Following the recent enactment of the PRC Cybersecurity Law, China’s Internet regulator published the country’s first National Cyberspace Security Strategy (the “Strategy“) on December 27, 2016. The Strategy offers few fresh initiatives but summarizes goals within the PRC Cybersecurity Law and other regulations passed over the past year. A guiding concept is “Internet sovereignty”, which the Strategy defines as China’s right to police the Internet within its borders and participate in managing international cyberspace. In particular, the Strategy emphasizes the strategic need to safeguard key information infrastructure operators (“KIIOs“).
Most importantly, the Strategy seeks to clarify the definition of a KIIO, by providing guidance on the industries which the Chinese Government will prioritize with respect to cybersecurity.
A KIIO is defined in the Strategy as an operator of “information facilities that have an immediate bearing on national security, the national economy or people’s livelihoods such that, in the event of a data leakage, damage, or loss of functionality, national security and public interest would be jeopardized“. This aligns with the definition in the PRC Cybersecurity Law, and indicates the potential impact of a security breach is a key factor in determining who will be considered a KIIO.
In addition, the expanded definition put forward in the Strategy includes clarification on the industries that the Chinese authorities consider to be operating key information infrastructure. The PRC Cybersecurity Law listed “public communications and information service, energy, transportation, hydropower, finance, public service, e-government and other critical information infrastructure”, and the Strategy clarifies this by:
- listing “basic telecommunications networks that provide public communications, radio and television transmission and other such services” to expand on the definition of “public communications” operators;
- noting that important information systems in sectors and State bodies in the additional fields of “education“, “scientific research“, “industry and manufacturing“, “medicine and health” and “social security” will also be caught; and
- identifying that “important Internet application systems” will be deemed to be KIIOs as well. Unofficial reports suggest that this is intended to catch popular apps such as Taobao and WeChat which have millions of daily users in China who would be affected by a security breach.
Organizations within these newly-highlighted sectors are now also advised to pay attention to the additional cybersecurity and data protection obligations imposed on KIIOs in the PRC Cybersecurity Law and consider updating their compliance programs accordingly. For our summary of the key features of the PRC Cybersecurity Law click here.
Unfortunately this additional guidance is far from definitive, in that it remains unclear whether all organizations within the specified industries that are encompassed by the definition of a KIIO will automatically be KIIOs if they operate any networks (and potentially even just a website) in the People’s Republic of China. Further, other key uncertainties under the PRC Cybersecurity Law – including the definition of “network operator” and “important business data” – remain. The ongoing uncertainty is extremely unhelpful for local and international organizations trying to identify whether they need to update their China compliance programs in advance of 1 June 2017 when the PRC Cybersecurity Law becomes effective, and we hope that further guidance will be published shortly.
Draft Regulations on the protection of the use of Internet by minors published for comments
The State Council published for public consultation the draft Regulations on the Protection of the Use of Internet by Minors (the “Draft Regulations“) on January 7, 2017 to provide additional protection to minors (i.e., Chinese citizens under the age of eighteen) when they are online. In particular, the Draft Regulations propose additional data protection obligations, with which “network information service providers” (i.e., organizations and individuals using networks to provide users with information technology, information services, information products, including online platform service providers, and providers of online content and products) would need to comply. The definition of a “network information service provider” appears to catch any individual or business that operates websites or processes online data in China.
Some of the key provisions of the Draft Regulations include:
- Network information service providers must conduct reviews of the information published on their platform. If any content is deemed unsuitable for minors, a warning must be placed prominently before the content is displayed. The Draft Regulations recognize the need for relevant authorities to publish policies to offer guidance to organizations on how to manage information unsuitable for minors.
- “Minors’ personal information” is given a wide definition, and would capture all kinds of information, whether recorded electronically or through other means, that when alone or taken together with other information is sufficient to identify a minor’s identity, including but not limited to a minor’s full name, location, residential address, date of birth, contact information, account name, identification number, personal biometric information, and photographs.
- Network information service providers that offer search functions on their platforms would not be allowed to display search results that comprise minors’ personal information. If a minor or his/her parent/guardian requests a network information service provider to delete or block the minor’s personal information that is available online, the network information service provider would also be required to do so.
Consultation on the Draft Regulations closes on 6 February 2017. It is hoped that some of the uncertainties in the Draft Regulations will be clarified before the Draft Regulations are finalized and come into force. In the meantime, organizations – particularly those whose websites are aimed at young people – are warned that, if passed, the Draft Regulations would require a pro-active review and update of their Chinese websites and privacy policies, and data collection/retention policies and procedures, to address these new safeguards.
DLA Piper’s Data Protection and Privacy practice delivers topical legal and regulatory updates and analysis from across the globe. To learn more please click here.
Artificial intelligence is a massive opportunity, but it triggers risks that cannot be addressed through over-regulation that might damage the market.
The draft EU ePrivacy Regulation might have a considerable impact on privacy compliance obligations relating to new technologies.
The Internet of Things (IoT) is getting regulated through the draft European ePrivacy privacy regulation and the revised database and product liability directives, but is this good news?
Written by Jim Halpert and Michelle Anderson
The National Institute of Standards and Technology (NIST) released proposed revisions (draft Version 1.1) to its Framework for Improving Critical Infrastructure Cybersecurity (“Cybersecurity Framework”) on January 10, 2017. The latest draft is intended to “refine, clarify, and enhance” Version 1.0, released in February 2014 in response to Executive Order 13636 – Improving Critical Infrastructure Cybersecurity.
Notable changes in draft Version 1.1 include:
- Additional information on mitigating supply-chain risks. NIST expanded Section 3.3 (“Communicating Cybersecurity Requirements with Stakeholders”) to address the importance of communicating and verifying cybersecurity requirements among stakeholders as part of cyber supply chain risk management (SCRM). In addition, NIST added SCRM as a property of the Implementation Tiers (Section 2.2) and to the Framework Core under the Identify Function.
- A new section (Section 4.0) on cybersecurity measures and metrics. NIST notes that by using metrics and measurements the Cybersecurity Framework can be used as the basis for assessing an organization’s cybersecurity posture. According to the draft, “metrics” help “facilitate decision making and improve performance and accountability” while “measurements” are “quantifiable, observable, objective data supporting metrics.” For example, organizations can measure system uptime—and this measurement can be used as a metric against which an individual responsible for developing and implementing appropriate safeguards to ensure delivery under the framework’s Protect Function can be held accountable.
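The measurement/metric distinction above can be illustrated with a short sketch. This is purely an illustration (it does not come from the NIST draft, and the function names and the 99.9% target are our own assumptions): a downtime figure yields a quantifiable uptime measurement, which is then assessed against an availability metric for which someone can be held accountable.

```python
from datetime import timedelta

def uptime_percentage(total_period: timedelta, downtime: timedelta) -> float:
    """Measurement: quantifiable, observable data (uptime as a percentage)."""
    up = total_period - downtime
    return 100.0 * up.total_seconds() / total_period.total_seconds()

def meets_availability_metric(measured_uptime: float, target: float = 99.9) -> bool:
    """Metric: a target against which the measurement supports accountability."""
    return measured_uptime >= target

# Illustration: a 30-day period with 43 minutes of recorded downtime
measured = uptime_percentage(timedelta(days=30), timedelta(minutes=43))
print(round(measured, 3), meets_availability_metric(measured))
```

In this sketch, the uptime percentage is the measurement; the pass/fail test against the 99.9% target is the metric that an individual responsible for the Protect Function could be held to.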
NIST invites comments on draft Version 1.1. Comments are due by April 10, 2017, and can be sent to firstname.lastname@example.org. After reviewing these comments and convening a workshop, NIST intends to publish a final Framework Version 1.1 in Fall 2017.
NIST reiterates that “[a]s with Version 1.0, use of the Version 1.1 is voluntary,” and says that users of Version 1.1 may “customize the Framework to maximize organizational value.”
That said, NIST’s encouragement of using cybersecurity measures and metrics for internal organizational accountability could lead to the creation of metrics that can also be used by third parties (e.g., regulators) to hold organizations accountable under the framework. While it remains to be seen what the Federal Trade Commission (FTC) will do under the incoming Trump administration, the FTC (and other regulators) could use such metrics as the bases for enforcement actions. Indeed, there is significant overlap between what the FTC considers to be “reasonable” security and the Cybersecurity Framework. According to the FTC’s blog post on The NIST Cybersecurity Framework and the FTC, “The types of things the Framework calls for organizations to evaluate are the types of things the FTC has been evaluating for years in its Section 5 enforcement to determine whether a company’s data security and its processes are reasonable. By identifying different risk management practices and defining different levels of implementation, the NIST Framework takes a similar approach to the FTC’s long-standing Section 5 enforcement.”
According to NIST, this latest draft incorporates feedback to Version 1.0, responses to its December 2015 request for information, and comments from NIST’s April 2016 Cybersecurity Framework Workshop.