Technology's Legal Edge

A Technology, Privacy, and Sourcing Blog

Unresolved Issues In TCPA Litigation: Direct And Vicarious Liability

Posted in US Federal Law

Written by Elliot Katz and Monica D. Scott

The Federal Communications Commission this summer released its much anticipated omnibus Declaratory Ruling and Order on the Telephone Consumer Protection Act. Although the Ruling covers the agency’s position on many important TCPA issues, such as the definition of an autodialer and the treatment of reassigned cellular telephone numbers, one key issue affecting TCPA litigation remains unaddressed: the disparate application of direct and vicarious liability. Currently, courts use different standards in the phone and fax contexts to determine which theory of liability applies. Because so many companies rely on third parties for marketing solutions, resolving the applicable liability theory will become an increasingly important issue in TCPA litigation.

Vicarious Liability for Robocalls

The plain language of the TCPA assigns civil liability to the party that “makes” a robocall[1] without consent. The statute is, however, silent as to the issue of vicarious liability. In this vacuum, courts have applied traditional federal common law principles of vicarious liability, including alter ego and agency doctrines, when analyzing whether a business is liable for robocalls made on its behalf.[2] Because vicarious liability applies, the limitations on vicarious liability that arise out of agency relationships also apply. In other words, businesses that hire third parties to make robocalls on their behalf may be held vicariously liable for violations of the TCPA only if they have an agency relationship with the third party and only for actions within the scope of authority granted to the third party. The application of federal common law principles to vicarious liability in the context of robocalls was affirmed by the FCC in a 2013 Declaratory Ruling.[3]

Currently, courts generally apply federal common law principles of agency and examine a number of factors when determining whether there is an agency relationship between a business and a third party, such as: (1) whether the business reviewed the content of the telemarketing call, (2) whether the business provided the third party with its trademarks, (3) whether the business knew that the third party was violating the TCPA, and (4) whether the business engaged the third party specifically for telemarketing. Accordingly, businesses that hire third parties to make robocalls on their behalf can use these federal common law principles of agency to protect themselves against TCPA liability.

This strategy has been particularly successful in situations where businesses have an attenuated connection through a chain of different marketing contractors to the third party directly violating the TCPA. For example, in Grant Keating v. Peterson’s Nelnet, 2015 WL 4430355 (6th Cir. July 21, 2015), the Sixth Circuit found that a defendant retailer was not vicariously liable where the text messages at issue were actually sent by a subcontractor whom the defendant did not know about, and where the text messages were sent despite the express ban on text messages in the defendant retailer’s agreement with its marketing contractor. The Sixth Circuit specifically noted (1) the care the defendant retailer took to comply with the TCPA, and (2) the defendant retailer’s immediate and appropriate steps to rectify breaches once it was informed of them.

The ability to use federal common law principles of agency in this context is extremely important for defendants who may find themselves without many defenses to TCPA lawsuits, particularly after the FCC’s 2015 Ruling.

Direct Liability for Faxes

In contrast, in an amicus brief in Palm Beach Golf Center-Boca, Inc. v. Sarris, the FCC stated that, with respect to faxes, a business can be held directly liable, as opposed to merely vicariously liable, if a third party advertises the business’s products or services via a fax that violates the TCPA. In short, direct liability attaches even if there is no agency relationship between the business and the third party under federal common law principles. The FCC made this liability distinction between robocalls and faxes on the basis of the statutory language in 47 U.S.C. § 227(b)(1)(C), which differentiates between “initiat[ing]” calls and “send[ing]” faxes. From this interpretation, however, it also follows that the third party that actually sent the faxes may not be directly liable. In its brief in Sarris, the FCC stated: “By its plain terms, 47 C.F.R. § 64.1200(f)(10) defines the direct liability-incurring ‘sender’ not as the party that physically transmits the fax, but as ‘the person or entity on whose behalf a facsimile unsolicited advertisement is sent or whose goods or services are advertised or promoted in the unsolicited advertisement.’”

The Eleventh Circuit followed the FCC’s position in Palm Beach Golf Center-Boca, Inc. v. Sarris, holding that “a person whose services are advertised in an unsolicited fax transmission, and on whose behalf the fax is transmitted, may be held liable directly” under the TCPA. Palm Beach Golf Center-Boca, Inc. v. Sarris, 781 F.3d 1245 (11th Cir. March 9, 2015). In Sarris, the owner of a dental practice gave a marketing firm free rein over the promotion of his practice. The marketing firm, in turn, hired a subcontractor, which sent out blast faxes advertising the practice. Although the owner of the dental practice did not control the content of the fax sent by the subcontractor, because the advertisement was sent on his behalf, the Eleventh Circuit held that the owner could be directly liable for the TCPA violation. At least one district court ruling in the Eleventh Circuit, Physicians Healthsource, Inc. v. Doctor Diabetic Supply, LLC, 2015 WL 1257983 (S.D. Fla. Mar. 18, 2015), has followed the holding in Sarris regarding TCPA fax cases. Sarris has also been found persuasive by district courts outside the Eleventh Circuit.[4]

Some district courts outside the Eleventh Circuit, however, have used federal common law agency principles to impose vicarious liability in the fax broadcaster context, i.e., where a third party facilitates the sending of multiple faxes to numerous recipients at one time.[5] That said, courts that have examined the discrepancy between Sarris and the application of federal common law principles on the issues of direct and vicarious liability in robocall and fax cases have sided with Sarris in fax cases.[6]

Two Different Standards Create Issues For TCPA Defendants

The uncertain and murky legal doctrine surrounding direct and vicarious liability in the TCPA context presents thorny issues for TCPA defendants trying to move forward. Particularly in blast fax cases, where, according to the FCC and some courts, plaintiffs do not need to show an agency relationship between the actual sender of the faxes and the business being promoted in the faxes, businesses may unwillingly and without notice be exposed to litigation risk. For instance, in Helping Hand Caregivers, Ltd. v. Darden Restaurants, Inc., 2015 WL 2330197 (N.D. Ill. May 14, 2015), the court, citing Sarris, refused to dismiss a TCPA case involving faxes sent with the defendant restaurant’s name and logo by a third party, even after the restaurant sent a cease-and-desist letter to the third party regarding its use of the restaurant’s name and logo. As cases such as Helping Hand Caregivers starkly illustrate, businesses can be subject to TCPA lawsuits even if they in no way authorized the sending of the offending faxes in the first place and, at times, even if they tried to stop them.

The distinction between liability theories for telemarketing faxes and robocalls is justified neither by the statute nor by policy. Imposing liability on the party that did not physically transmit the robocall or fax only if there is an agency relationship between the parties provides a clear standard that helps defendants structure their contractor relationships and manage litigation risk. Federal common law agency principles on vicarious liability should apply in both the robocall and fax contexts. As more cases dealing with these issues are filed, it remains to be seen whether courts will continue to follow the FCC’s conflicting guidance or will apply federal common law agency principles, as they arguably should.

The authors gratefully acknowledge the assistance of Olga Slobodyanyuk, a former DLA Piper summer associate, now a third-year law student at Harvard Law School.

[1]  “Robocalls” and “calls” refer to both telephone calls and text messages made by an autodialer or with a prerecorded voice in violation of any provision of the TCPA. Generally, for telemarketing robocalls, businesses must obtain the prior express written consent of the call recipient. For other types of calls, prior express consent is generally sufficient.

[2]  A business that makes its own telemarketing robocalls could of course be directly liable under the TCPA for any violations.

[3]  28 FCC Rcd. 6574 (May 9, 2013).

[4]  See, e.g., Bais Yaakov v. Varitronics, LLC, 2015 WL 1529279 (D. Minn. Apr. 3, 2015); Helping Hand Caregivers, Ltd. v. Darden Restaurants, Inc., 2015 WL 2330197 (N.D. Ill. May 14, 2015); Imhoff Inv., L.L.C. v. Alfoccino, Inc., 792 F.3d 627 (6th Cir. 2015).

[5]  See, e.g., Savanna Grp., Inc. v. Trynex, Inc., 2013 WL 4734004 (N.D. Ill. 2013); Siding & Insulation Co. v. Alco Vending, Inc., 2015 WL 1858935 (N.D. Ohio Apr. 22, 2015).

[6]  See City Select Auto Sales, Inc. v. David Randall Associates, Inc., 2014 WL 4755487 (D.N.J. 2014); Bridgeview Health Care Ctr. v. Clark, 2015 WL 1598115 (N.D. Ill. Apr. 8, 2015).

The One-Size-Fits-All Non-Disclosure Agreement is (often) an ill fit

Posted in Privacy and Data Security, Strategic Sourcing

Written by Isabel DeObaldia

Let’s start with the fact that there is no such thing as a one-size-fits-all Non-Disclosure Agreement (“OSFA NDA”). The same way that a one-size-fits-all dress looks dreadful on most women (with the exception of Gisele Bündchen, but we all know she’s no mere mortal), an OSFA NDA may end up looking dreadful on your final deal. On a weekly basis I advise clients on boilerplate NDAs that seem innocuous until I ask: “What is the flow of information?” (meaning, who is more likely to disclose the bulk of the information?). More often than not I find that the discloser, who up to that point felt fairly comfortable with the long definition of confidential information, is either ignoring a residual clause or not realizing that the contract does not protect its trade secrets. The baffling aspect of this scenario is that, more often than not, the OSFA NDA was procured with the intent of using it as both an inbound and an outbound document, under the perception that it can be used in the same way by either a discloser or a recipient of information, with equally good results.

This is not true. NDAs are like dresses: you need to find one that fits you depending on the occasion. For example, an NDA used for the exchange of technical information should not be the same as one used for evaluating a business acquisition. Moreover, at the core of each NDA, the issue for the discloser is to protect the information, while for the recipient the issue is to make sure that its confidentiality obligations do not survive longer than necessary. Trade secrets are the most common casualty in this tug-of-war. Trade secret protection is a black-and-white proposition: either the secret is protected in perpetuity or there is no secret at all. Including “trade secrets” among the types of confidential information in an NDA does not grant automatic trade-secret protection; on the contrary, it may end up hurting the owner of the information in the long run. A trade secret, as defined by the World Intellectual Property Organization (WIPO), is any non-public information that derives independent economic value, actual or potential, as reasonably determined by the disclosing party, by virtue of remaining confidential. This means a trade secret is not labeled as such by a third party, like a Patent Office or the Library of Congress; a trade secret is labeled as such by its owner, who determines its value and the need to keep it secret. The owner of a valuable secret has only one way to protect it: keep it secret.

What usually happens with an OSFA NDA is that, because the document aims to fit most cases of information exchange, it is usually set to expire after a certain amount of time. Any secret disclosed under a non-perpetual NDA technically stops being a secret the day the contract expires. Once that happens, we are in ‘un-ringing the proverbial bell’ territory: you can try, but you are unlikely to succeed in claiming that the secret is still protected. Note that the expiration of the NDA is not the issue per se; the issue is the lack of language making the confidentiality obligations for trade secrets survive even if the contract terminates.

In the same vein, but less subtle, is the residual-clause problem. A residual clause allows the recipient to retain and use any information that is “intangible” or kept in “unaided” memory. It takes different forms, but the gist is the same: the recipient can use your information so long as they remember it without writing it down. Residual clauses unequivocally benefit the recipient. Yet it is not uncommon to encounter clients that use the same OSFA NDA (their ‘form’) regardless of whether they are in the discloser or recipient role. This means they are using their own residual clause against themselves.

In either case, the issue with an OSFA NDA remains the same: the false sense of security it provides. Like carrying a moth-eaten umbrella, clients do not realize the damage until it starts raining. Clients should let go of the idea that one document can be used for all relationships and have, at minimum, two flavors of NDA: one used when they will be disclosing the bulk of the information, and another used when they will be receiving it. In an ideal world, clients would also have NDAs tailored to the purpose (one for pure commercial exchanges, another for technical engagements or joint technical developments, and so forth). The moral of the story: a mutual NDA is never truly mutual, because the risk each party runs under it is different.

Germany: Impact of Safe Harbor case on German data transfers?

Posted in EU Data Protection, Privacy and Data Security

By Jan Pohle

In an earlier post, my colleagues reported on the Safe Harbor case currently being dealt with at the European Court of Justice.

For the time being, a Safe Harbor certification has generally justified a data transfer from Germany to the United States. Nevertheless, since 2013 the Safe Harbor Agreement has come under increasing criticism in the German marketplace, in particular from German federal and federal state data protection commissioners. Already in July 2013, they more or less suspended application of the Safe Harbor Agreement.

Despite the absence of an explicit suspension of the EU Commission decision, according to which a Safe Harbor certification is sufficient proof of a reasonable data protection level, the German data protection authorities no longer deemed themselves strictly bound by that decision. From 2014 onwards, some federal data protection commissioners initiated precedent-setting administrative proceedings against at least two international operations relying on the Safe Harbor Agreement, seeking to impose a formal prohibition on future intra-group transfers of personal data to, and processing of personal data in, the U.S.

Although this practice of the federal data protection commissioners could be criticized, since national data protection commissioners are bound by decisions of the European Commission, it may soon be supported by the European Court of Justice in its ruling in Case C-362/14, Schrems v. Data Protection Commissioner. As announced in a press release of the European Court of Justice of 23 September 2015, the Advocate General is of the opinion that the existence of a European Commission decision does not prevent a national supervisory authority from investigating a complaint alleging that a third country does not ensure an adequate level of protection of the personal data transferred and, where appropriate, from suspending the transfer of that data. Although the European Court of Justice does not necessarily follow the Advocate General’s opinion, this statement alone will encourage the German data protection authorities to move ahead with, or even expand, their proceedings against international operations relying on Safe Harbor for their data transfers to the U.S. Against this background, the Advocate General’s second and indeed more important statement, as summarized in the press release (that the European Commission’s decision on the adequacy of the protection provided by the Safe Harbor privacy principles is invalid), is of no specific relevance here, as it matches the long-standing legal view of the German data protection commissioners anyway.

The practical consequence: no operation processing data in Germany should any longer rely on the Safe Harbor Agreement currently in force for transfers of personal data from Germany to the U.S.

The main milestones and facts regarding the status of the ongoing discussion in Germany:

  • The fact that Safe Harbor is a “self-certification procedure,” under which participating companies merely declare to the FTC by informal letter that they acknowledge and observe the Safe Harbor principles, constituted one of the core elements of the criticism among German data protection authorities and legal experts, especially since no verification is carried out, either by the FTC or by any other institution, as to whether the respective companies actually meet the requirements of the Safe Harbor certification.
  • As a consequence of the extensive surveillance by foreign secret services, in particular by the US National Security Agency (NSA), criticism of Safe Harbor as an instrument for guaranteeing a reasonable data protection level further increased. On July 24, 2013, the data protection commissioners of the German federal and federal state governments stated in a press release that they will not issue new authorizations for data transfers to third countries and that they will investigate and assess whether such data transfers on the basis of the Safe Harbor Agreement (as well as the European Union standard contractual clauses) are to be suspended (cf. press release).
  • Furthermore, as a result of their conference on July 24, 2013, the data protection authorities issued a plea to the European Commission to suspend, for the time being, its decisions on Safe Harbor in light of the excessive surveillance activities of foreign secret services. Consistent with this statement, the data protection officers of the federal and federal state governments stressed again during their conference on March 18/19, 2015 that the Safe Harbor decision of the European Commission does not provide sufficient protection of the fundamental right to data protection in the transfer of personal data to the US (cf. press release). Indeed, federal state data protection authorities have opened formal investigations of international operations regarding data transfers to the US on the basis of Safe Harbor; such proceedings are ongoing.
  • Additionally, the Safe Harbor Agreement has been subject to considerable criticism at the European level. In a communication to the European Parliament and the Council dated November 27, 2013, the European Commission strongly criticized the Safe Harbor Agreement (COM (2013) 847 final). Subsequently, in March 2014, the members of the European Parliament voted for the suspension of the Safe Harbor Program (cf. press release).
  • The European Court of Justice (ECJ) will shortly rule on the legality of the Safe Harbor Agreement (Case C-362/14), against the background of a lawsuit by the Austrian data protection activist Max Schrems against the Irish Data Protection Commissioner, who takes the view that the Safe Harbor Agreement allows European companies to transfer personal data to the USA. The ECJ will decide on the compatibility of the Safe Harbor Agreement with the European Charter in the upcoming months.
  • Against this background, negotiations are currently under way between the EU Commission and the US Government regarding amendment of the Safe Harbor Agreement. At the meeting of the Commissioners for Home Affairs and Justice of the EU and the USA on May 28, 2015 in Riga, Justice Commissioner Jourová proposed a new draft Safe Harbor certification that would provide, in particular, abuse prevention and effective liability provisions. The negotiations have not yet been successfully concluded.

For further information, please contact Jan.Pohle@dlapiper.com

ITALY – New “Digital Tax” in Italy for e-commerce and web operators!

Posted in E-Commerce and Social Media

Written by Giangiacomo Olivi

Some members of the Italian parliament have published a proposal for a new law that may dramatically change the landscape for e-commerce and web operators doing business in Italy.

The proposal (referred to by some commentators as a “Digital Tax” or “Web Tax”) is substantially a combination of domestic rules affecting the taxation of the digital economy. In line with the OECD “Action Plan on Base Erosion and Profit Shifting” (BEPS), the proposal includes:

  • the introduction of a new concept of permanent establishment in Italy (which should also cover a “virtual permanent establishment,” in line with the OECD approach);
  • the application of a 25% withholding tax, applied by the financial institution processing the payment, to payments made to a foreign e-commerce provider by an Italian customer (B2C transactions);
  • the application of a 25% withholding tax, where a hidden “virtual permanent establishment” is identified, to payments made by an Italian company to a foreign e-commerce provider (B2B transactions). In this case, too, the withholding tax is applied directly by the financial institution processing the payment.

The new concept of a “virtual permanent establishment” would allow taxation in the State where the e-commerce player sells its products/services whenever there is a “continuative presence of on-line activities connected to the non-resident entity for a period of not less than 6 months which grants a payments flow to the non-resident entity not lower than Euro 5 Million”.
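As a hypothetical illustration only, the proposed two-part threshold and the 25% withholding described above can be sketched in a few lines of Python. The function names, structure, and example figures are this sketch’s own inventions, not part of the proposal; only the thresholds (6 months of continuous online activity, a payments flow of at least EUR 5 million) and the 25% rate come from the text.

```python
# Hypothetical sketch of the proposed "virtual permanent establishment" test
# and the 25% withholding the processing financial institution would apply.
# Function names and example figures are invented for illustration.

def has_virtual_pe(months_of_online_activity: int, payments_flow_eur: float) -> bool:
    """Both limbs of the draft test must be met: at least 6 months of
    continuous online activity AND a payments flow of at least EUR 5 million."""
    return months_of_online_activity >= 6 and payments_flow_eur >= 5_000_000

def withholding_eur(payment_eur: float, rate: float = 0.25) -> float:
    """25% withholding applied to an individual payment under the proposal."""
    return payment_eur * rate

# A non-resident provider with 8 months of online activity and EUR 7 million
# in payments from Italian customers would meet the test; a EUR 100 purchase
# would then attract EUR 25 of withholding.
if has_virtual_pe(8, 7_000_000):
    print(withholding_eur(100.0))
```

Note that under the proposal the withholding mechanics differ slightly between B2C payments (always withheld by the bank) and B2B payments (withheld only where a hidden virtual permanent establishment is identified); this sketch does not model that distinction.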

The main objective of the proposal is to curb tax evasion schemes in e-commerce transactions (which, according to a study by the Politecnico di Milano, a Milan university, generated 11 billion Euro in revenues in 2013 and are bound to increase). The proposal establishes a new concept of taxable presence based on “digital residency,” whereby revenues from digital services would be subject to the taxes of the State in which the services or products are provided (a “destination-based tax“), rather than the taxes of the digital operator’s official residency. According to the proposal, any double taxation resulting from the application of these withholding taxes would be offset through the recognition of a tax credit.

In any case, the existing provisions set forth by the double tax treaties signed by Italy should continue to prevail over the domestic rules.

This is not the first attempt to introduce a web or digital tax in Italy. According to some commentators, this new regulation may succeed because it relates to a local tax (IRES) rather than VAT (which is defined at the EU level), and accordingly the approval process may be easier.

Whether this law proposal will be enacted without substantial changes is yet to be determined, although it cannot be excluded that it will be effective by January 1, 2017 should the EU decide not to tackle the taxation of the digital economy centrally with similar provisions. Under the proposal, a pivotal role will be played by the financial institutions (banks) involved in the payment process for acquisitions of goods and services (e.g., applying the withholding taxes on B2C payments).

That said, this initiative once again confirms the belief of certain EU Member States that new, broad rules on the digital economy must be promptly issued, taking into account that revenues should be taxed where the customers are established. Such rules will, however, be effective only if they are harmonized (at least throughout the European Union). This will no doubt ignite the broader debate over a global digital level playing field.

Please contact our team if you want to further discuss this issue.

giangiacomo.olivi@dlapiper.com  antonio.tomassini@dlapiper.com giovanni.iaselli@dlapiper.com

Getting foggy in the Safe Harbor – Privacy agreement between EU and US at stake?

Posted in EU Data Protection, International Privacy, Privacy and Data Security

By Patrick Van Eecke & Loretta Marschall

Today an important statement was issued endangering the free flow of personal data from the European Union to the United States. Advocate General Bot issued his opinion to the Court of Justice of the European Union (CJEU) in the Facebook case on whether or not a national supervisory authority has the right to prohibit transfers of personal data to the United States, even if the recipient is Safe Harbor certified. Safe Harbor is a framework, endorsed by the European Commission fifteen years ago, allowing for the transfer of personal data from the EU to undertakings in the US that adhere to its principles. The Advocate General also advises the CJEU to declare the Safe Harbor scheme invalid. If followed by the judges, this opinion may cause global organisations to rethink their cross-border data transfers.

The Opinion

In the Opinion delivered today, 23 September 2015, Advocate General Bot concluded that:

  1. the existence of a European Commission Decision does not have the effect of preventing a national supervisory authority from investigating a complaint alleging that a third country does not ensure an adequate level of protection of the personal data transferred, and, where appropriate, from suspending the transfer of that data; and
  2. the Commission’s Decision on the adequacy of the protection provided by the Safe Harbor privacy principles is invalid.


The Opinion stems from a complaint to the data protection authority in Ireland alleging that Facebook Ltd keeps its subscribers’ personal data on servers located in the US where, it was argued, the law and practices do not offer adequate protection against State surveillance. The Irish Commissioner considered that there was no requirement to investigate the complaint because of Decision 2000/520, in which the Commission found that under Safe Harbor the US ensured an adequate level of protection of the personal data transferred.

The proceedings were brought before the High Court for judicial review, which concluded that once personal data is transferred to the US, the NSA and similar agencies are able to access it in the course of mass surveillance and interception. This reasoning was driven by the June 2013 revelations based on documents leaked by Edward Snowden, which confirmed that US authorities can access, on a mass basis, the personal data of individuals living in the EU. The High Court invited the Court of Justice of the European Union (CJEU) to clarify the landscape.

In summary, Advocate General Bot concluded that national supervisory authorities must be able to investigate where they receive a complaint that calls the level of protection ensured by a third country into question, even where the Commission has carried out an assessment and decided that an adequate level of protection is provided. The reasoning continues that not only is the Commission aware that its finding in Decision 2000/520 is subject to criticism, but it is also itself entering negotiations to remedy the situation. Ultimately, the Advocate General’s view is that a national supervisory authority must enjoy independence, allowing it to form its own opinion free from external influence.

Impact for businesses?

What is the impact of this opinion for businesses engaged in data sharing overseas? We must keep in mind that this is a non-binding Opinion of the Advocate General to the Court; however, if the CJEU follows the same reasoning, the practical impact for business could be quite significant.

  1. If a national supervisory authority has the power to investigate and suspend the transfer of the personal data in question to the United States, irrespective of the Safe Harbor framework that has been endorsed by the Commission, there is a new and potentially substantial obstacle for US companies to overcome in order to gain access to EU data. For instance, US companies may need separate consent arrangements or transfer agreements before EU citizens and companies feel comfortable sharing their personal data.
  2. Further, we may find ourselves in a highly fragmented environment, given the potential for challenges from 28 national supervisory authorities whenever a transfer is considered not to provide adequate protection for European citizens.

The Opinion surely raises more questions than it answers, at a time when Europe is trying to clarify how data should be protected by establishing a common set of rules in the draft EU General Data Protection Regulation. Watch this space!

You can contact the authors at Patrick.VanEecke@dlapiper.com and Loretta.Marshall@dlapiper.com

FRANCE: New Law on Intelligence Adopted

Posted in International Privacy, Privacy and Data Security, Telecoms

By Florence Guthfreund-Roland and Mathilde Hallé

On June 24, 2015, a new bill regarding intelligence was adopted by the French Parliament by a vote of 438 to 86 (the “Intelligence Law”).

The new legal framework set forth by the Intelligence Law

The necessity of reforming the legal framework surrounding intelligence-gathering had been highlighted as a recurrent issue in several reports and white papers. The Intelligence Law aims to set the coherent legal framework for intelligence-gathering that had been lacking, while ensuring the protection of citizens’ rights and freedoms.

The Intelligence Law provides that French intelligence services may use surveillance techniques for the protection or pursuit of any of the following objectives: (i) national independence, territorial integrity and national defense; (ii) foreign policy interests and the prevention of any form of foreign interference; (iii) France’s major economic, industrial and scientific interests; (iv) the prevention of terrorism; (v) the prevention of attacks on republican institutions, of collective violence likely to affect national security, and of the reformation or continuation of groups previously dissolved; (vi) the prevention of crime and organized delinquency; and (vii) the prevention of the proliferation of weapons of mass destruction.

Subject to specific exceptions, any surveillance measure must receive the prior authorization of the French Prime Minister. Data collected through a surveillance measure may be retained for between thirty days and four years, depending on the type of data. The Council of State (Conseil d’Etat) has full jurisdiction over claims challenging the lawfulness of a given surveillance measure.

In addition, the Intelligence Law authorizes new methods of surveillance using IT equipment and networks including the following:

  • Real-time collection, on electronic communications networks, of information and documents relating to individuals identified as presenting a threat;
  • Implementation by electronic communications service providers of automatic processing aimed at detecting connections likely to reveal a terrorist threat;
  • Real-time localization of any individual, vehicle, or object;
  • Collection of login information;
  • Interceptions of electronic correspondences likely to reveal intelligence information;
  • Interception of correspondences for a forty-eight-hour period;
  • Reception, transmission and recording of words pronounced privately or in confidence, or pictures taken in a private area; and
  • Access to computer data.

Finally, the Intelligence Law creates a new regulatory agency, the National Commission for the Control of Intelligence Techniques (Commission Nationale de Contrôle des Techniques de Renseignement, the “Commission”). Any request to the French Prime Minister aimed at monitoring individuals must be reviewed beforehand by the Commission, to ensure that the contemplated measures are proportionate, justified, and compliant with applicable law. However, the Commission’s opinions are not binding, and the Prime Minister may still authorize the measures even where the Commission disapproves.

A controversial bill whose compliance with the French Constitution has been challenged

While the Intelligence Law was adopted by a wide margin in the French Parliament, criticism has mounted since the early stages of the enactment process.

Prior to the adoption of the law, the French Data Protection Authority (the “CNIL”) had underlined the risks relating to the monitoring of data collected through surveillance measures and raised concerns about the data retention periods. On that basis, the CNIL recommended that further guarantees be provided within the new law to protect the right to privacy and ensure data protection.

At the same time, several professional groups, unions, lawyers, human rights activists, and civil liberties supporters sent the French Constitutional Council (Conseil Constitutionnel) observations about the potential risks that such a law could pose to the protection of rights and freedoms. In this context, the chairman of the Paris Bar Association notably warned that the bill constitutes “a serious threat to public liberties” that would put French citizens under “general surveillance”. Amnesty International and the Human Rights League have also pointed to the unclear drafting of the Intelligence Law and argued that its surveillance techniques would infringe citizens’ right to privacy by being disproportionate to the objectives pursued. Lastly, according to the National Digital Council (Conseil National du Numérique), the Intelligence Law will lead to a form of “mass surveillance” of French society.

On June 25th, 2015, following these criticisms (largely echoed in the media), the Constitutional Council was seized by the President of the Republic, the President of the Senate, and more than 60 deputies to verify the compliance of the Intelligence Law with the French Constitution. On July 23rd, 2015, the Constitutional Council issued its decision and censored two provisions of the Intelligence Law regarding:

  • A new “operational emergency” procedure permitting, in certain cases, the implementation of surveillance measures without the Prime Minister’s prior authorization. The Constitutional Council considered such a procedure a disproportionate interference with the right to privacy and with the principle of secrecy of correspondence.
  • International surveillance measures, on the grounds that their scope of application was not defined precisely enough and that the Intelligence Law did not provide sufficient guarantees to safeguard citizens’ rights and freedoms.

In the meantime, the French Data Network Association requested a ruling from the Constitutional Council on the compliance of the Intelligence Law with the French Constitution (Question Prioritaire de Constitutionnalité). However, this application was rejected by the Constitutional Council on July 24th, 2015, as it considered the definitions in the Intelligence Law to be sufficiently clear.

Despite the remaining criticisms, the Intelligence Law entered into force on July 27th, 2015, except for the provisions censored by the Constitutional Council.


For further information, please contact Florence Guthfreund-Roland (Florence.Guthfreund-Roland@dlapiper.com) or Mathilde Hallé (Mathilde.Halle@dlapiper.com).


Posted in Asia Privacy, Cloud Computing

Written by Scott Thiel, Edward Chatterton and Louise Crawford

The Hong Kong Privacy Commissioner for Personal Data (“PCPD“) recently published an information leaflet outlining the application of the Personal Data (Privacy) Ordinance (the “PDPO“) for data users looking to engage cloud providers. The information leaflet outlines the data protection principles (“DPPs“) which apply in the context of cloud services, and highlights the particular characteristics of cloud computing that give rise to risks from a privacy perspective.


While there is no universally accepted definition of cloud computing, the PCPD refers to it as “a pool of on-demand, shared and configurable computing resources that can be rapidly provided to customers with minimal management efforts or service provider interaction”. In essence, it involves the storing and processing of data on computers in multiple locations, which are accessed over the internet. This differs from outsourcing which usually involves the customer’s infrastructure being managed by a third party, and is also a departure from traditional software licensing or purchase of “on-premises” hardware.

The main benefit of cloud computing is that customers can avoid making the significant investment in IT infrastructure which would otherwise be needed in order to host large volumes of data. All they need is an internet connection, and this permits them to access their data from anywhere in the world. Cloud computing may also enable organisations to exploit other technologies that can give them a competitive advantage, such as big data analytics, which would otherwise be unmanageable given the magnitude and diversity of data involved.


Cloud solutions can be used to process all kinds of data, but where that data is “personal data” (that is, it can be used to ascertain the identity of an individual), then the PDPO applies, and the interests of the following parties are engaged:

  • Data User:  The entity or organisation that controls the collection and use of the personal data, and that chooses to adopt cloud services as part of its data management strategy.
  • Data Subject:  Any individual whose personal data is being processed via the cloud services, e.g. an organisation’s customer or employee.
  • Data Processor:  The entity that provides cloud services.

Under Hong Kong law (and indeed in many other legal systems), responsibility to comply with privacy law rests with the data user, regardless of the action or inaction taken by the data processor. Accordingly, when engaging a cloud service provider, the data user should be mindful that responsibility for any breach of the PDPO lies with the data user, even if the breach is caused by the cloud service provider.

As a corollary of this, data users should select their cloud providers carefully, impose robust obligations upon them in relation to processing personal data, and obtain contractual indemnities in relation to any breaches. Taking these steps is not only important from a risk management perspective, but it also meets a statutory obligation under the PDPO: when engaging data processors, data users are required to use “contractual or other means” to ensure that:

  1. Personal data is not retained by the data processor for longer than is necessary (sometimes referred to as the “Retention Requirement”). This requires the data processor to comply with the data user’s retention policy and to return (or destroy) personal data in its possession upon termination of the services; and
  2. Personal data is protected against unauthorised or accidental access, processing, erasure, loss or use (sometimes referred to as the “Security Requirement”). The security measures necessary to meet the Security Requirement are not prescribed; however, measures such as encryption, anti-virus software, firewalls and physical security measures are considered best practice. The PCPD makes reference to the ISO 27018 Code of practice for personally identifiable information (PII) protection in public clouds acting as PII processors, which provides specific guidance for cloud providers, and may assist data users in selecting their cloud provider. However, as the PCPD makes clear, compliance with this standard is neither mandated by law, nor guaranteed to achieve compliance with the law.


Aside from the loss of control over the processing and storage of personal data, there are other factors which render cloud services “higher risk” from a privacy perspective. This does not mean that cloud services should not be used (and indeed, some cloud offerings could offer organisations enhanced protection compared with the measures that would otherwise be available in-house) but it does mean that appropriate steps should be taken to address these risks. The PCPD highlights the following unique “cloud” characteristics of which data users should be aware:

1. Rapid Transborder Data Flow

Cloud services are often provided from data centres located in multiple jurisdictions. This enables cloud providers to optimize storage capacity and speed of services. However, levels of physical and technical security may vary from country to country, and in some countries, the law may regulate levels of encryption, and possibly permit governments or regulators to mandate access to data. Accordingly, data users should ask cloud providers to disclose the location of their data centres, and cloud providers should only be engaged where they can demonstrate that data processed in overseas data centres will receive similar protection as if the data were in Hong Kong.

Section 33 of the PDPO is not yet in force. However, when this provision becomes effective (expected in the near future), data users will be restricted from transferring personal data outside Hong Kong unless a specific exception applies (e.g. where the data subject has consented in writing). Data users should carefully review their cloud arrangements to prepare for this section coming into force.

2. Loose outsourcing arrangements

Cloud services are often sub-contracted, and sometimes further sub-contracted again. The result is that data users have little practical visibility of where personal data is being processed, by whom, and what measures are being taken to protect it.

Cloud service agreements should ensure control over sub-contracting. This means requiring the cloud provider to:

  • give notice of sub-contracting (and in some circumstances, require the data user’s approval);
  • monitor and exert appropriate oversight over sub-contractors;
  • permit auditing in respect of sub-contractors where this is required by the data user; and
  • assume responsibility for any defaults of sub-contractors.

3. Standard services and contracts

Cloud services are often provided on standard form contracts, and in some cases these are said to be “non-negotiable”. The result is that cloud service contracts are often executed despite lacking key obligations which are required to ensure adequate protection of personal data.

As a minimum, data users must ensure that undertakings are given in order to meet the Retention Requirement and the Security Requirement referred to above. In addition, the agreement should restrict sub-contracting and contain undertakings that will enable data users to comply with their regulatory requirements, for example, granting audit rights to comply with the data user’s obligations in any regulatory investigation.

In addition to scrutinizing the contract, due diligence should be conducted on the selected cloud service provider to ensure that the service provider has a good track record in terms of reputation and technical security. Moreover, some regulated institutions (e.g. banks and insurance companies) will be bound by industry regulations which impose additional risk management measures to be taken in relation to cloud service arrangements.

4. Services and deployment model

Certain cloud services are higher risk than others, depending on the type of service and deployment model. Broadly speaking, there are four types of clouds:

  • Public clouds: Infrastructure, platform and software are provided through services accessible via online terms of use and paid for based on actual usage.
  • Private clouds: Dedicated cloud computing resources are made available to the customer through negotiated service agreements. Because the resources are dedicated, capital investment may be greater.
  • Hybrid clouds: This model may be used by a customer who desires the ease of use of a public cloud, but also wants some level of dedicated resources afforded by a private cloud.
  • Managed clouds: This model is similar to outsourcing, but rather than having the customer own the infrastructure and outsource its management to a third party, the customer owns the cloud computing capability and outsources management to a third party.

Each of these methods can encompass the three basic cloud computing business models, including Infrastructure as a Service (IaaS) – where customers receive access to IT infrastructure often shared with others; Platform as a Service (PaaS) – where customers can develop and operate applications by accessing a computing platform; and Software as a Service (SaaS) – where customers receive access to a suite of software applications remotely and on-demand.

Privacy risks tend to be higher where software is provided by the cloud provider (SaaS), particularly where software is being operated by the cloud provider (since software provides the tools to facilitate data processing requirements). The risks are also elevated where a public cloud is used, since data users have reduced control over the service. Data users should consider the deployment model to ensure that the service being provided is appropriate to their business, and that privacy risks are being managed.


Privacy is a key consideration when engaging cloud services, but there are other issues to consider too. Will this service meet business needs? Does this service provider have adequate capacity? How serious are the business consequences if there are service interruptions? The service level a customer receives from a cloud provider is contained either in the cloud service agreement itself or in a separate service level agreement incorporated by reference. Some considerations in developing service level agreements include:

Level of effort:  Customers should consider whether they require performance under the agreement to be absolute or subject to a less than absolute standard, such as “commercially reasonable efforts.” The level of effort on offer will vary from provider to provider.

Nature of obligations:  Most service level agreements focus on service availability, but service providers should also be prepared to respond to requests for specific commitments on performance, such as response times and bandwidth.

Definition of uptime:  Service level agreements should clearly define variables such as how uptime will be measured; what constitutes downtime; the nature of permitted downtime; and circumstances that do not constitute downtime.

Ability to suspend services:  A cloud service provider may at times need to suspend services, such as if a customer’s use of the services creates a security risk. While it may be reasonable for the provider to retain this right, it will be important for the customer to ensure that adequate notice is given.

Service credits:  The service level agreement should detail the amount of service credits available to customers, whether customers are automatically entitled to credits and whether there are circumstances under which the supplier is required to provide an actual refund.


  • Check existing contracts with your cloud providers and consider whether these arrangements comply with the law (and whether they will they continue to comply with the law when section 33 comes into force).
  • Compile and regularly update a list of the names of cloud service providers and their sub-contractors, locations where cloud services are provided, and applications provided as part of the services. This will assist you with effective monitoring.
  • Establish a negotiation strategy for selecting and engaging cloud service providers. Depending on the nature of your organization, some cloud service offerings may be inappropriate.
  • Review privacy policies and personal information collection statements to ensure that appropriate notifications are given to data subjects in relation to the engagement of cloud providers.
  • Consider whether a consent-based approach will need to be adopted for overseas transfers, in advance of section 33 coming into force.

If you have any queries or concerns about data privacy laws in Hong Kong or elsewhere, our data privacy team, comprising 130 data protection lawyers around the globe, would be pleased to hear from you.

Scott Thiel
Partner, Hong Kong
T: +852 2103 0519

Edward Chatterton
Partner, Hong Kong
T: +852 2103 0504

Louise Crawford
Legal Officer, Hong Kong
T: +852 2103 0752


How Can A Month Have A Million Minutes? Measuring “Availability” for Your Data Center or Cloud Solution SLAs

Posted in Cloud Computing, Strategic Sourcing

Written by Joanna Sykes-Saavedra and Greg Manter

In complex sourcing transactions, all sides are quick on the draw with calculators at the ready when it comes to determining formulas for charges or caps on limitations of liability. However, another point of negotiation that similarly requires a sharp pencil is the calculation of “availability” for service levels – especially in a large environment. Drafters and business teams must pay attention to the relationship between a proposed SLA percentage and the definition that drives its calculation across a population of multiple systems or devices.

For example, a service provider may agree that a particular system or device will have 99.9% availability. Said another way, that system will be operational 99.9% of the time, with 0.1% permitted downtime before SLA credits or any other negotiated remedies apply. Over a month, this means that the system will be available to the customer the entire month, less approximately 44 minutes of downtime (assuming for purposes of these calculations that a month has 43,800 minutes). That sounds reasonable and may make business sense for the customer – so long as this calculation applies to one system or device.

However, in a large multiple-device environment, those “three nines” of comfort may be diluted by a definition that multiplies the number of available minutes by the number of measured devices. Availability definitions commonly begin the calculation by multiplying the total number of minutes by the number of devices being measured. Where there are 10 devices, the calculation therefore starts with a total pool of approximately 438,000 minutes, and 99.9% availability could leave a customer facing up to 438 minutes of downtime (or 7.3 hours) over the course of a month.

Taking the example further, consider an environment of 100 devices – approximately 4.3 million minutes, so 99.9% availability could give the service provider 73 hours of allowable downtime while still being in full compliance with the service levels! In extreme examples of global deals with thousands of devices, it’s easy to see how an entire country or region could go down for entire days or weeks – all the while, SLA dashboards are showing green, with no credits or commitments to resume service.
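The arithmetic above can be checked with a short script. This is an illustrative sketch only, not language from any actual SLA: it assumes the 43,800-minute month used in this post and a definition that multiplies the minute pool by the number of measured devices.

```python
# Illustrative sketch: how a multiplied-minutes "availability" definition
# scales permitted downtime. Assumes the 43,800-minute month used above.

def allowable_downtime_minutes(sla: float, devices: int,
                               minutes_per_month: int = 43_800) -> float:
    """Downtime permitted when the minute pool is multiplied by device count."""
    pool = minutes_per_month * devices
    return pool * (1 - sla)

for n in (1, 10, 100):
    mins = allowable_downtime_minutes(0.999, n)
    print(f"{n:>3} device(s): ~{mins:,.0f} minutes (~{mins / 60:.1f} hours) of permitted downtime")
```

Under this definition, raising the target to “five nines” (0.99999) would shrink the 100-device pool’s permitted downtime from roughly 73 hours to roughly 44 minutes, which is why negotiating a higher percentage is one way to restore the intended protection.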

Making matters worse, the “outage” definition commonly doesn’t get the same bountiful treatment. Outages typically count the minutes when the total system is down, without re-multiplying for all devices. In extreme cases, outages only count if they affect the whole system, so partial outages fail to register at all, even against the minutes those “down” devices contributed to the million-minute month.

To avoid giving a service provider this kind of unintended cushion on performance standards, customers should closely review the language of the underlying availability calculation to prevent this sort of availability devaluation.

Examples of proactive ways to address this problem include the following approaches:

  • Accept the language that permits a multiplier resulting in a larger pool of available minutes, but insist on a higher percentage rate – go beyond the “three nines” and negotiate for five or six “nines”;
  • Calculate the available minutes based on regions or areas – availability would be determined by each region or area (e.g., by country), which would reduce the pool of available minutes; or
  • Draft a compound service level that does not permit the multiplication of the pool of available minutes, along with a definition of “unavailable” that goes beyond the typical “number of minutes the entire system is down” and instead identifies elements or segments of the system that render the system “unavailable” if down.

Ultimately, availability methodology is not a one-size-fits-all provision – drafters and business teams must carefully consider the number of systems or devices in scope and their impact upon the availability percentage in order to provide a customer with meaningful and undiluted SLAs.


Posted in Asia Privacy, Technology and Commercial

Written by:  Heng Loong Cheong, Joyce Chan, Samuel Yang, Louise Crawford


Only a few months ago, the State Council’s announcement of the Decision on the Implementation of Market Access Administration in relation to Bank Card Clearing Institutions (the “Decision“) marked the opening of China’s domestic bank card clearing market. This followed a high level policy statement issued by the State Council in October 2014, that in principle overseas enterprises would be permitted to file applications for setting up bank-card clearing institutions in China (see here for more on the Statement).

The Decision, which was issued on 9 April 2015 and became effective on 1 June 2015, requires all bank card clearing institutions in China to obtain a Bank Card Clearing Business Licence (“Licence“). Significantly, the Decision enables foreign companies, for the first time, to compete in this market, by one of two routes: (i) by setting up a foreign invested enterprise (“FIE“), or (ii) by acquiring a domestic bank card clearing institution (subject to a security review). The Decision is expected to bring about changes in the structure of the bank card clearing market, enhancing competition and ending the 12-year monopoly of China UnionPay in the domestic market. The attraction for foreign companies to enter this market is obvious: last year, the volume of bank card transactions in China exceeded RMB 42 trillion (around US$7 trillion).

While the Decision marks a clear step forward, and presents a significant opportunity to international credit card clearing businesses like Visa and MasterCard, we expect there will remain practical challenges for foreign players to become eligible. Moreover, even without regulatory barriers, foreign players will need to go through the process of persuading banks to issue their cards and merchants to accept them. The emergence of new entrants and enhanced competition is therefore likely to be incremental, and we are not likely to witness significant changes until the end of 2016.

Highlights of the Decision

Pre-conditions for eligibility:  The Decision provides a set of pre-conditions for applicants to become eligible bank card clearing institutions. Among other things, the applicant must be incorporated under PRC Company Law with registered capital of at least RMB 1 billion and must operate a bank card clearing system that meets national and industry standards. Other pre-conditions include requirements on the shareholding structure and financial stability of the institution, qualifications of senior management personnel, and prudential requirements such as appropriate internal controls and risk management.

Procedures of application:  The applicant is required to submit a preliminary application to the People’s Bank of China (PBOC). The PBOC, after obtaining consent from the China Banking Regulatory Commission (CBRC), will inform the applicant of its decision within 90 days of receipt of the preliminary application, and within one year of preliminary approval, the applicant must complete certain preparatory work. A second application is then submitted which, if approved, will result in issuance of a Licence.

Business operational requirements:  The Decision outlines the principal requirements for the operation of bank card clearing businesses. For example: the bank card clearing system should be owned by the bank card clearing institution itself; the infrastructure should be safe, efficient and stable; and the information acquired in the course of bank card clearing operations should remain confidential. The requirement for onshore processing infrastructure and PRC-compliant systems (without any grace period to enable systems to be adapted) could pose a practical challenge for foreign applicants.

Special rules on foreign-funded institutions:  Foreign applicants should submit their applications to the PBOC through their FIEs. Note that foreign companies providing foreign currency clearing services for cross-border transactions are not, in principle, required to obtain a Licence, but they should report to the PBOC and CBRC and comply with the relevant business administration rules.


While the bank card clearing market opens its doors, legal changes in other areas of the payments industry, namely online payment solutions, are expected to challenge new entrants. Anticipated new regulations could potentially slow down the online payments industry, which has, until now, been one of the country’s fastest developing industries.

The Administrative Measures for Internet Payment Services of Non-Banking Payment Institutions (Draft) (the “Draft Measures”) were published by the People’s Bank of China (“PBOC”) on 31 July 2015. The consultation period ended on 28 August 2015, and we await news of when the final Measures will be published. The Draft Measures indicate a clear shift towards a more stringent legal regime for the internet payments industry, which has until now been subject to relatively light regulation.

Highlights of the Draft Measures

The Draft Measures intend to impose strict obligations on non-banking payment institutions that operate internet payment businesses. Perhaps most notably, they will introduce:

  • specific technical requirements in relation to customer identity verification;
  • an annual limit and daily cap for online payments;
  • restrictions on funds remitted between bank accounts; and
  • a ban on the provision of certain financial services.

Requirements of identity verification:  When opening a Payment Account for customers, the payment institution is required to register and verify the customer’s identity through no fewer than 3 external channels (e.g. by way of the public security bureau, tax office and credit reporting agencies).

Setting up a “comprehensive account” and “consumption account”:  The Draft Measures provide two types of Payment Accounts that the payment institution may open for customers, depending on the degree of identity verification. A “comprehensive account” may be set up if either (i) the identity verification is conducted face-to-face; or (ii) the identity verification is completed through no fewer than 5 external channels. If the customer’s identity is verified through 3 or 4 external channels, the payment institution is only allowed to open a “consumption account” for the customer.

Functions and annual online payment limit of the “comprehensive account” and “consumption account”:  The “comprehensive account” and “consumption account” differ both in terms of their functions and their annual online payment limit. The “comprehensive account” can be used for transactions, remittances and purchases of investment or financial products, while the “consumption account” can only be used for transactions and remittances. The annual online payment limit for all “comprehensive accounts” owned by a particular customer cannot exceed RMB 200,000, while the limit for all “consumption accounts” owned by a particular customer is RMB 100,000. Any sums in excess of the payment limit should be processed through the customer’s bank account.
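As a rough illustration of the tiering just described, the account-type rules can be sketched as a simple decision function. This is hypothetical code: the function name and return shape are our own, and it is an informal reading of the Draft Measures, not legal advice.

```python
# Hypothetical sketch of the Draft Measures' account tiers described above.

def classify_payment_account(face_to_face: bool, external_channels: int):
    """Return (account_type, annual_online_payment_limit_rmb)."""
    if face_to_face or external_channels >= 5:
        # Transactions, remittances and investment/financial products.
        return "comprehensive", 200_000
    if external_channels >= 3:
        # Transactions and remittances only.
        return "consumption", 100_000
    # Identity insufficiently verified; no Payment Account may be opened.
    return None, 0

print(classify_payment_account(False, 5))  # ('comprehensive', 200000)
print(classify_payment_account(False, 3))  # ('consumption', 100000)
```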

Daily cap on online payments:  There are also daily caps for online transactions, depending on the security measures used to verify a customer’s payment instructions. Specifically:

  1. if payment instructions are verified through 2 or more means, one of which is a digital certificate or electronic signature, the daily cap can be agreed by the customer and payment institution;
  2. if payment instructions are verified through 2 or more means not including a digital certificate or electronic signature, the daily cap for all Payment Accounts owned by the customer is RMB 5,000; and
  3. if payment instructions are verified through fewer than 2 means, the daily cap for all Payment Accounts of the customer will be RMB 1,000, and the payment institution is fully liable for the risks and losses associated with the relevant transaction.

Note, however, that the transfer of funds from a customer’s Payment Account to his/her own bank account is not restricted by the above rules.
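The tiered daily caps above can likewise be sketched in code. Again, this is a hypothetical reading of the draft rules, where “strong” stands for verification by digital certificate or electronic signature.

```python
# Hypothetical sketch of the Draft Measures' daily caps described above.
# "Strong" verification = digital certificate or electronic signature.

def daily_cap_rmb(verification_means: int, includes_strong_means: bool):
    """Return the daily cap in RMB, or None where the cap is freely agreed."""
    if verification_means >= 2 and includes_strong_means:
        return None      # cap agreed between customer and payment institution
    if verification_means >= 2:
        return 5_000     # across all Payment Accounts owned by the customer
    # Fewer than 2 means: the institution is fully liable for the risks
    # and losses associated with the relevant transaction.
    return 1_000
```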

Restrictions on remittance:  The Draft Measures impose restrictions on remittances between a customer’s Payment Account and bank accounts. Customers are only permitted to transfer funds from their own personal savings or current account into their own Payment Account, and vice versa.

Ban on provision of financial services:  The Draft Measures make clear that payment institutions are not financial institutions, and are banned from providing financial services in any form. “Financial services” include cash deposits and withdrawals, money lending, funding, wealth management, guarantee services and currency exchange.

Payment institutions are also required to report to PBOC before providing any “innovative” internet payment products or services; there are certain requirements to protect customers’ personal data, for example, payment institutions must not store customers’ sensitive information (such as bank account passwords and verification codes).

While regulation of the payments industry is necessary to protect consumers and enhance confidence in innovative payments systems, the new regulations, if implemented, could close the market to potential new players due to the significant costs involved in operating multiple verification measures. The fairly low transaction caps are also likely to impact overall revenue for online payment providers.


We will continue to monitor developments in this regard and provide our clients with updates. In the meantime, if you would like any further information on payment regulations in China, our Technology team would be pleased to hear from you.

Heng Loong Cheong
Partner, Hong Kong
T:+852 2103 0610 hengloong.cheong@dlapiper.com
Joyce Chan
Partner, Hong Kong
T:+852 2103 0473
Samuel Yang
Senior Associate, Beijing
T: +86 10 8520 0667
Louise Crawford
Legal Officer, Hong Kong
T: +852 2103 0752