Posted in Uncategorized

Caps on Liability – How to Avoid Them Backfiring

Written by Jeff Aronson

Imagine that you’re counsel for a software company. You draft a software license agreement’s limitation-of-liability clause to limit each party’s liability under the agreement to $5,000, notwithstanding anything to the contrary. What would you expect the result to be if the customer refuses to pay the $20,000 in license fees and argues that its liability is capped at $5,000?

A recent (2014) case out of Texas presented exactly this scenario. The customer, a security alarm installation and monitoring service provider, had licensed software from a vendor for $20,000. The software license agreement contained the following sentence: “NOTWITHSTANDING ANYTHING TO THE CONTRARY, THE TOTAL DOLLAR LIABILITY OF EITHER PARTY UNDER THIS AGREEMENT OR OTHERWISE SHALL BE LIMITED TO U.S. $5,000.” The software supposedly did not function as promised. The customer refused to pay. When the vendor sued, the customer asserted its liability was capped at $5,000. The Texas Court of Appeals ultimately read the cap on liability to apply only to damages caused by the “improper functioning or failure of software” and found that the contract’s “notwithstanding anything to the contrary” language did not operate to limit the customer’s liability in the event it breached the agreement by refusing to make payment.

The software vendor got lucky in this situation – a different court could just as easily have concluded the other way, given the breadth of the “notwithstanding anything to the contrary” language. What could have been done differently to avoid this situation? The vendor’s counsel could simply have carved out the customer’s express payment obligations from the cap on liability. That would have saved a lot of legal fees and wrangling over the payment.

See IHR Security, LLC v. Innovative Business Software, Inc., 441 S.W.3d 474 (Tex. App. 2014).

Posted in Uncategorized

Cloud Transformation – How to Steer Clear of the Storms

Written by Annabel Ashby

Cloud computing is no longer new (and indeed can be seen as itself being rebadged from earlier forms of remote hosted/ASP style offerings). However, what is apparent is the continuing growth in size and sophistication of cloud deals, both in the private and public cloud space. With this comes an increased focus on the contractual and commercial terms, and a continuing challenge for perceived norms on both the customer and supplier side of the fence.

The Developing Market

For a long time, deals appeared to mostly fall into two categories:

  1. Public cloud offerings on a “one-to-many” basis, where suppliers showed little appetite to depart from standard terms of business, whether because of the need to have the contract terms reflect the common delivery model or simply by reason of a reluctance to take on additional contract risk; or
  2. Private cloud arrangements which were bespoke/specific to one (usually larger) client, where the contract terms would be fully negotiated in much the same way as a managed service/outsource style contract would be, and frequently based on the customer’s proposed contract terms.

Both of these models do, of course, still exist and have increasingly replaced more traditional forms of on-premises licence and support arrangements. However, we have also seen variants which can create more of a challenge when it comes to the negotiation of contract terms. These include:

  1. “ERP in the cloud” style deals, where far more business-critical systems are proposed to be run remotely, and where the relevant software licensor is taking on both implementation and hosting services/responsibilities for the first time; and
  2. Private cloud services which – whilst “dedicated” to the relevant customer in the sense that there will be discrete instances of the relevant software and very often segregated servers/racks as well – will nonetheless be provided from shared datacentres and using shared networks, which can have an impact upon the supplier’s ability to meet the specific service requirements of the customer, whilst also necessitating a number of supplier-mandated requirements.

Key Contractual Challenges

In the light of these developments, what forms of contractual challenges are we seeing?

  • Warranty Protections

In the context of standard contract terms sent out by vendors of public cloud solutions (e.g., the likes of AWS, Microsoft Azure, etc.), it would be common to see provisions whereby the service provider would warrant only to use “reasonable endeavours” to correct any non-compliance with specifications, and on the basis that if such non-compliances could not be readily resolved, then the sole remedy of the customer would be to terminate that aspect of the cloud service and receive a pro rata refund. A similar approach is often applied to alleged IP infringements.

Whilst such an approach might pass muster for a “take it or leave it” commodity-style cloud offering, it would clearly not be acceptable for a customer who is seeking to use the cloud for more business-critical applications and services. Accordingly, we now commonly see the acceptance of an obligation upon the service provider to correct warranty defects as being independent of any separate right to claim compensation for related losses.

  • Compliance with Laws

As the customer will be dependent upon the cloud services provider to ensure that the relevant services are provided in such a way as to enable the customer to comply with its legal obligations, it is not surprising that the contract terms related to compliance with law and regulation are increasingly subject to negotiation (despite many cloud service providers initially arguing that all they offered was “a service” and that it would then be for the client to do the mapping against its relevant obligations).

We have seen a considerable deviation in approach in this regard, depending on the nature of the cloud services. For private cloud services (where bespoke modifications can be more readily accommodated), the service provider can commit to maintain compliance with laws, and the key area of debate is instead centred on who has to monitor and interpret new requirements, and thereafter bear the cost of any resulting changes. For public cloud services – and even for “private” cloud services which are based on distinct instances of a particular product that the service provider wants to keep in line with its “core” product – the willingness of the service provider to commit to compliance may be limited to those laws that make specific reference to it as a service provider, or potentially to the laws of a particular sector or function on which the cloud services are particularly focused. The customer may then face a difficult challenge if sector-specific requirements develop in a direction potentially at odds with the cloud solution it has commissioned.

  • GDPR/data protection compliance

With data being so central to cloud-based services, data protection-related provisions receive a significant degree of attention when it comes to negotiating the associated contract terms.

Many of the cloud services standard terms we have seen are very carefully crafted by the service provider so as to emphasise its role as data processor rather than controller, and thus to emphasise the fact that the majority of statutory obligations remain with the client (at least until the advent of GDPR). However, many customers are wise to this and will at least push back to force the service provider to take on the relevant security obligations vis-à-vis appropriate technical and organisational measures to safeguard data from unauthorised access or alteration.

Service providers can point out, in this regard, that they will not necessarily know what data the customer is asking them to host or process via the relevant cloud service, and so they are in no position to judge what measures should be taken. Indeed, it is possible that the advent of GDPR may add a new twist here, with service providers instead asking the customer to warrant that the security arrangements are adequate for the customer’s purposes, and to indemnify the service provider against any future claims if they are ruled not to be.

  • Decommissioning and Disposal Costs

In the past, a customer organisation would have had to deal with the costs of decommissioning and disposing of redundant hardware. In the context of a cloud transformation programme, therefore, it may be tempting to leave the associated effort and cost out of the equation when calculating the business case.

However, the reality is that the pace of decommissioning and disposal was likely to have been relatively steady in the past (and indeed many organisations will have delayed decommissioning and “sweated their assets” for longer than might be ideal). If, therefore, there is to be a “spike” in decommissioning as a cloud solution is implemented as part of a more root-and-branch transformation programme, the costs associated with that will have to be carefully considered, and budgeted for accordingly. For example, might the supplier incur the costs initially, but on the basis of amortising them across the lifetime of the contract thereafter to mitigate the customer’s cashflow hit?

  • Supplier Mandated Changes

For truly public cloud services, it is usually the case that the contract terms reserve for the service provider the right to make changes to the service descriptions. Where the services are delivered communally to multiple clients and need to evolve to remain competitive, this makes eminent sense. However, we have increasingly seen service providers willing to modify/mitigate this approach, e.g., by committing to only improve the services offered, or at least to not make any “material” reductions in what has gone before, and/or to offer termination rights for the customer if it does not agree with the changes being made.

Even in deals thought to be “private” cloud-based, there may still be elements of this requirement. For example, if a service provider is servicing multiple clients from a single datacentre, then any changes to that datacentre’s generic infrastructure or networks would need to be adhered to by all of the relevant customers.

On some larger deals, we have also seen SaaS providers willing to contractually commit to maintain set functionality for minimum periods, and/or to continue with an agreed upgrade/development path.

  • Suspension and Termination Rights

In many forms of standard cloud services deals promulgated by service providers, we have seen both termination and suspension rights drafted so as to set the bar at a very low level; so much so that in some cases, even the slightest mis-step may lead to a loss of service for the customer. However, as the size and reach of cloud service offerings have expanded, customers have increasingly focused on the breadth of such rights and sought to narrow them. It has become the norm in outsourcing contracts for the service provider’s sole right of termination to be for non-payment of fees, and it could be argued that the customer’s degree of reliance upon a cloud-based service may actually be greater than that of a “traditionally” outsourced one (where there may be assets, contracts and staff who can be more readily transferred back to the customer or a replacement supplier).

  • Liability Exclusions

Liability provisions have been another key area of the contract where standard cloud services terms seem to have dialled back the extent of risk that service providers are willing to assume. Aside from attaching the liability limits to the fees paid for only those services that proved defective (rather than the charges as a whole), we have seen blanket exclusions of data-related losses, application of the cap to (traditionally unlimited) IP and confidentiality-related claims, and even exclusions of losses relating to the costs of procuring replacement services. A typical justification for this approach from a public cloud service provider might be that it would not be able to take on more significant degrees of liability if something went “wrong” with its solution: such a malfunction would likely affect multiple customers, if not all of them, and the provider might then be exposed to multiple related claims which, by reason of a single default, could ultimately lead to bankruptcy.

  • Prime Contractor/Supply Chain Challenges

We have seen a number of deals which previously might have been approached as outsource-style contracts with a service-orientated prime contractor, who could then, in turn, subcontract or sublicense elements of the overall solution. However, many SaaS providers are either reluctant to accept the flow-down of contract risk from the prime contract, or seek to position themselves as the first point of contact with the customer. The former scenario is problematic for the prime contractor, who may then feel trapped between the contract expectations of the customer and what it can pass down to the cloud service provider (e.g., in terms of warranty or service level commitments, liability caps, etc.). The second scenario may pose a similar challenge for the customer, who may be unpleasantly surprised when comparing the proposed SaaS contract terms against those typically used for procuring similar, non-cloud-based services. For example, we recently advised a major client who was considering a cloud services deal potentially valued in the hundreds of millions of pounds. This client was shocked to find that potential suppliers refused to respond to its draft contract terms. Instead, the client was directed to a web link containing the supplier’s standard cloud services terms, which were available to the world at large and under which, for example, the sole remedy for the customer in the event that the SaaS solution did not work (after the customer had spent tens of millions of pounds on implementation, etc.) would be a refund of pre-paid fees.

  • Audit Rights

An organisation that entrusts its core functions and data to the cloud will be understandably concerned to ensure that all of the contractually promised safeguards are being implemented. Having an effective audit right is key to accomplishing this.

In a private cloud environment which is dedicated to a single client, this is easy to accommodate. However, the moment one ventures into environments used to support multiple clients, difficulties can emerge. The service provider would then want to ensure that any audit undertaken by Client A does not impact upon the services it provides to Clients B, C and D from the same datacentre. For public cloud services, it is unlikely that any customer-specific audit right will be granted at all. However, customers can still expect the service provider to undertake regular generic audits using an independent third party, such that the resulting audit report can then be made available to all customers. We have seen this approach taken on occasion, although many service providers are still resistant to it.

This is sensible and reasonable. However, recent regulatory guidance in the UK (e.g., the FCA’s FG 16/5 note on cloud contracts) suggested that “unrestricted” audit access might be required, and it remains to be seen how literally this requirement/guidance will be interpreted.

Conclusion

The business benefits of cloud solutions are so manifest and substantial that there is ever greater momentum in favour of their implementation, and for more and more substantial and business-critical functions. At the same time, however, we are also seeing a rapid development of the contracting models for such solutions, particularly for private and/or hybrid cloud models, and we can expect this state of flux to persist for some time to come.

Posted in New Privacy Laws, Privacy and Data Security

Congress Rolls Back FCC Broadband Privacy Rules: What Does It Mean?

Written by Sydney White and Jim Halpert

This week the US House of Representatives passed a Congressional Review Act (CRA) resolution of disapproval of the US Federal Communications Commission (FCC) broadband privacy rules that were approved by the FCC in a straight partisan vote at the end of the Obama Administration, but have not yet taken effect.  The Senate passed an identical resolution last week.  President Trump has signaled that he will sign the resolution, which means that the FCC is prohibited by the CRA from imposing “substantially similar” privacy regulations on broadband ISPs in the future.

The broadband privacy rules would have imposed an opt-in consent requirement for ISPs’ use of web browsing activity for marketing or advertising, and a seven-day breach notification deadline. They would have applied only to ISPs and not to any other businesses in the Internet ecosystem. Interestingly, the rules were opposed not only by ISPs but also by a wide range of other Internet and advertising companies, and were criticized for imposing confusing, disparate regulation on select companies based upon siloed regulatory classifications. As a result of the CRA resolution, privacy regulation of broadband ISPs, which was to be more demanding than regulation of other Internet companies, will become similar again.

Although the current FCC Chairman Ajit Pai opposes the broadband privacy rules and could have chosen to forbear from enforcing them, or could have amended them through the rulemaking process, a future FCC could have reversed that decision. Congressional passage of the CRA resolution provides long-term certainty.

Although selective, more aggressive privacy requirements for ISPs will be foreclosed, the CRA essentially puts the US privacy framework back where it was before November 2016, and does not create a regulatory loophole for ISPs to sell customer information, as some advocates have charged. The FCC will continue to have authority over the privacy of telecom usage information, as well as the ability to take enforcement action against unreasonable broadband privacy and security practices.

In January, more than 15 ISPs announced that they would adhere to a voluntary set of privacy and data security principles that are consistent with the more flexible US Federal Trade Commission (FTC) framework, which applies to the rest of the Internet.  The principles include specific policies on transparency, consumer choice, security and data breach notification.

  • The transparency principle confirms that ISPs will continue to provide customers with comprehensive, accurate, and continuously available notice of collection, use, and sharing of customer information.
  • Under the choice principle, ISPs will continue to give customers choice over use or disclosure of their data consistent with the FTC’s framework.  Choice will vary depending upon the sensitivity of the information.  Sharing of sensitive information will require opt-in choice, non-sensitive information will require opt-out choice, and uses such as fraud prevention, product development, market research, network management and security, compliance with law, and marketing by the ISP will be subject to implied consent.
  • Under the data security principle, ISPs will continue to protect customer information collected by the ISP using reasonable security measures taking into account the nature of the ISPs activities, sensitivity of data, size of the ISP, and technical feasibility.
  • The data breach principle provides that ISPs will continue to notify customers of data breaches where there is unauthorized acquisition of customers’ sensitive personal information.

State Attorneys General can enforce these ISP privacy and security commitments in addition to existing state privacy, data security, and data breach laws that have protected and will continue to protect consumers.

Both FCC and FTC privacy enforcement authority over ISPs could change if the FCC or Congress overturns the FCC’s reclassification under the Open Internet Order of broadband providers as common carriers under Title II of the Communications Act. Congressional or FCC action to overturn that order would restore FTC authority over ISP privacy and security practices.  While both Chairman Pai and leaders of the Congressional committees with jurisdiction over the FCC and FTC are on record as supporting this change, this reversal of the underpinnings of broadband regulation is a longer term and more complicated policy objective.

One immediate effect of the CRA is likely to be legislative activity in several states to impose opt-in consent requirements at the state level. Already, legislators have added a written opt-in consent requirement for information collection by ISPs to Minnesota’s budget bill. The long term effect is likely to be to focus more attention on giving the US FTC clearer authority over privacy and security practices of businesses in many sectors in order to create clear and uniform privacy and security requirements across those sectors.

Posted in Cybersecurity, Privacy and Data Security

New York AG Announces Record Year for Data Breaches in New York – and Updates Guidance on Reasonable Security Measures

Written by Michelle Anderson and Anne Kierig

New York Attorney General Eric Schneiderman announced that his office received a record number (1,300) of data breach notices in 2016. In the press release, Attorney General Schneiderman also provided a list of recommendations for how organizations can help protect sensitive personal data—a list that could be used as a benchmark against which the Attorney General’s office could evaluate whether a company has implemented reasonable security measures.

Many of the recommendations overlap with those made by other regulators (e.g., minimizing data collection practices is in the FTC’s Start with Security: A Guide for Business), but they reiterate that the New York Attorney General considers both encryption and offering post-breach services like credit monitoring to be more of an expectation than an option. They also highlight the importance of having a written information security policy, rather than undocumented procedures. The Attorney General’s recommendations update recommendations made in a 2014 report titled Information Exposed: Historical Examination of Data Security in New York State and are as follows:

  • Understand where your business stands by understanding the types of information your business has collected and needs, how long it is stored, and what steps have been taken to secure it. Data mapping would be a ready way to meet this recommendation.
  • Identify and minimize data collection practices by collecting only the information that you need, storing it only for the minimum time necessary, and using data minimization tactics when possible (e.g., don’t store credit card expiration dates with credit card numbers).
  • Create an information security plan that includes encryption and other technical standards, as well as training, awareness, and “detailed procedural steps in the event of data breaches.” This recommendation reiterates a recommendation from the 2014 report, in which the Attorney General said that effective technical safeguards include “[r]equir[ing] encryption of all stored sensitive personal information—including on databases, hard drives, laptops, and portable devices.”
  • Implement an information security plan and conduct regular reviews to ensure that the plan aligns with ever-changing best practices.
  • Take immediate action in the event of a breach by investigating immediately and thoroughly and notifying consumers, law enforcement, regulators, credit bureaus, and other businesses as required. This is the only recommendation not found in the 2014 report, which mentioned the importance of immediate breach response only in the context of implementing an information security plan. Now, however, it stands as a separate recommendation.
  • Offer mitigation products in the event of a breach, including credit monitoring.

The announcement also revealed that the number of data breaches reported to the New York Attorney General’s office in 2016 represented a 60% increase over prior years. It reported that the most common causes of data breaches were external (i.e., hacking) and employee negligence (consisting of inadvertent exposure of records, insider wrongdoing, and the loss of a device or media). These causes accounted for, respectively, 40% and 37% of the reported breaches, showing a rise in employee negligence as a breach source. The information exposed consisted overwhelmingly of Social Security numbers (46%) and financial account information (35%).

This announcement also comes on the heels of final cybersecurity rules for the financial sector from the New York Department of Financial Services (NYDFS). The NYDFS requirements went into effect on March 1, 2017, and are designed to keep both “nonpublic information” and “information systems” secure. More information about these requirements can be found in DLA Piper’s Cybersecurity Law Alert NYDFS announces final cybersecurity rules for financial services sector: key takeaways.

 

Posted in Cybersecurity, EU Data Protection, International Privacy, Privacy and Data Security

FRANCE: The French Data Protection Authority (CNIL) Publishes 6-Step Methodology For Compliance With GDPR

Written by Carol Umhoefer and Caroline Chancé 

On March 15, 2017, the CNIL published a 6-step methodology for companies that want to prepare for the changes that will apply as from May 25, 2018 under the EU General Data Protection Regulation (“GDPR”).

The abolishment under the GDPR of registrations and filings with data protection authorities will represent a fundamental shift in the data protection compliance framework in France, which has been heavily reliant on declarations to the CNIL and authorizations from the CNIL for certain types of personal data processing. In place of declarations, the CNIL underscores the importance of “accountability” and “transparency”, core principles that underlie the GDPR requirements. These principles necessitate taking privacy risk into account throughout the process of designing a new product or service (privacy by design and by default), implementing proper information governance, and adopting internal measures and tools to ensure optimal protection of data subjects.

In order to help organizations get ready for the GDPR, the CNIL has published the following 6-step methodology:

Step 1: Appoint a data protection officer (“DPO”) to “pilot” the organization’s GDPR compliance program

Pursuant to Article 37 of the GDPR, appointing a DPO will be required if the organization is a public entity, if the core activities of the organization require the regular and systematic monitoring of data subjects on a large scale, or if such activities consist of the processing of sensitive data on a large scale. The CNIL recommends appointing a DPO before the GDPR applies in May 2018.

Even when a DPO is not required, the CNIL strongly recommends appointing a person responsible for managing GDPR compliance, in order to facilitate comprehension of and compliance with the GDPR, cooperation with authorities, and mitigation of litigation risk.

Step 1 will be considered completed once the organization has appointed a DPO and provided him/her with the human and financial resources needed to carry out his/her duties.

Step 2: Undertake data mapping to measure the impact of the GDPR on existing data processing

Pursuant to Article 30 of the GDPR, controllers and processors will be required to maintain a record of their processing activities. In order to measure the impact of the GDPR on existing data processing and to maintain such a record, the CNIL advises organizations to identify their data processing activities, the categories of personal data processed, the purposes of each processing operation, the persons who process the data (including data processors), and data flows, in particular data transfers outside the EU.

To adequately map data, the CNIL recommends asking:

  • Who? (identity of the data controller, the persons in charge of the processing operations and the data processors)
  • What? (categories of data processed, sensitive data)
  • Why? (purposes of the processing)
  • Where? (storage location, data transfers)
  • Until when? (data retention period)
  • How? (security measures in place)

Step 2 will be considered completed once the organization has identified the stakeholders for processing, established a list of all processing by purposes and categories of data processed, and identified the data processors, to whom and where the data is transferred, where the data is stored and for how long it is retained.
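The CNIL’s six questions map naturally onto a simple record structure. As a purely illustrative sketch (the class and field names below are our own, not the CNIL’s or the GDPR’s, and the high-risk check is a deliberately crude stand-in for the Step 3 prioritization exercise), an Article 30-style entry in a processing register might be modelled as:

```python
from dataclasses import dataclass

# Illustrative sketch of an Article 30-style record of processing activities.
# Field names are hypothetical; they simply mirror the CNIL's six questions.
@dataclass
class ProcessingRecord:
    controller: str               # Who? - identity of the data controller
    processors: list[str]         # Who? - data processors involved
    data_categories: list[str]    # What? - categories of data processed
    purposes: list[str]           # Why? - purposes of the processing
    storage_location: str         # Where? - storage location
    transfers_outside_eu: bool    # Where? - data transfers outside the EU
    retention_period: str         # Until when? - data retention period
    security_measures: list[str]  # How? - security measures in place

    def is_high_risk(self) -> bool:
        """Crude flag for records needing closer review (e.g. a PIA)."""
        sensitive = {"health", "criminal records", "minors"}
        return self.transfers_outside_eu or bool(
            sensitive.intersection(c.lower() for c in self.data_categories)
        )

# Example entry in the register
record = ProcessingRecord(
    controller="Example SA",
    processors=["CloudHost Ltd"],
    data_categories=["contact details", "health"],
    purposes=["patient appointment scheduling"],
    storage_location="Paris datacentre",
    transfers_outside_eu=False,
    retention_period="3 years after last contact",
    security_measures=["encryption at rest", "role-based access"],
)
print(record.is_high_risk())  # health data, so flagged for closer review
```

Answering each question per processing activity, as above, produces exactly the inventory the CNIL asks for at the end of Step 2, and the risk flag illustrates how the same register can feed the Step 3 prioritization.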

Step 3: Based on the results of data mapping, identify key compliance actions and prioritize them depending on the risks to individuals

In order to prioritize the tasks to be performed, the CNIL recommends:

  • Ensuring that only data strictly necessary for the purposes is collected and processed;
  • Identifying the legal basis for the processing;
  • Revising privacy notices to make them compliant with the GDPR;
  • Ensuring that data processors know their new obligations and responsibilities and that data processing agreements contain the appropriate provisions in respect of security, confidentiality and protection of personal data;
  • Deciding how data subjects will be able to exercise their rights;
  • Verifying security measures in place.

In addition, the CNIL recommends particular caution when the organization processes data such as sensitive data, criminal records and data regarding minors, when the processing presents certain risks to data subjects (massive surveillance and profiling), or when data is transferred outside the EU.

Step 3 will be considered completed once the organization has implemented the first measures to protect data subjects and has identified high risk processing.

Step 4: Conduct a privacy impact assessment for any data processing that presents high privacy risks to data subjects due to the nature or scope of the processing operations

Conducting a privacy impact assessment (“PIA”) is essential to assess the impact of a processing on data subjects’ privacy and to demonstrate that the fundamental principles of the GDPR have been complied with.

The CNIL recommends conducting a PIA before collecting data and starting processing, and any time processing is likely to present high privacy risks to data subjects. A PIA contains a description of the processing and its purposes, an assessment of the necessity and proportionality of the processing, an assessment of the risks to data subjects, and the measures contemplated to mitigate those risks and comply with the GDPR.

The CNIL has published guidelines in 3 volumes to help organizations conduct PIAs.

Step 4 will be considered completed once the organization has implemented measures to respond to the principal risks and threats to data subjects’ privacy.

Step 5: Implement internal procedures to ensure a high level of protection for personal data

According to the CNIL, implementing compliant internal procedures implies adopting a privacy by design approach, increasing awareness, facilitating information reporting within the organization, responding to data subject requests, and anticipating data breach incidents.

Step 5 will be considered completed once the organization has adopted good practices in respect of data protection and knows what to do and who to go to in case of incident.

Step 6: Document everything to be able to prove compliance to the GDPR

In order to be able to demonstrate compliance, the CNIL recommends that organizations retain documents regarding the processing of personal data, such as: records of processing activities, PIAs and documents regarding data transfers outside the EU; transparency documents such as privacy notices, consent forms, procedures for exercising data subject rights; and agreements defining the roles and responsibilities of each stakeholder, including data processing agreements, internal procedures in case of data breach, and proof of consent when the processing is based on the data subject’s consent.

Step 6 will be considered completed once the organization’s documentation shows that it complies with all the GDPR requirements.

The CNIL’s methodology includes several useful tools (template records, guidelines, template contract clauses, etc.) and will be supplemented over time to take into account the WP29’s guidelines and the CNIL’s responses to frequently asked questions.

Posted in Cybersecurity

Consumer Reports Begins Cybersecurity Evaluations

Consumer Reports (CR) announced on March 6, 2017, that it is developing a new standard—The Digital Standard—for safeguarding consumers’ security and privacy. The eventual goal is for CR to use the Standard to evaluate and rate consumer products. By scoring products based on certain Standard criteria, CR aims to help consumers make informed purchasing decisions based on how products protect their privacy and security.

The Standard is currently divided into 35 general testing categories, each of which is (or will be) further divided into (i) test criteria, (ii) indicators of those criteria, and (iii) procedures for evaluating those criteria. For example, under the “Data control” testing category, CR first asks whether a consumer can “see and control everything the company knows about” him/her. Indicators of consumer data control include, among other things, whether users can control the collection of their information, delete their information, and control how their information is used to target advertising. To evaluate whether a product gives consumers control over their data, the evaluator would investigate and analyze “publicly available documentation to determine what the company clearly discloses.”

Some of the criteria are in line with guidelines from other sources. For example, both the Standard and the FTC’s Start with Security guide discuss having passwords that are unique and complex. On other issues, however, some companies may find that the Standard stretches beyond existing guidance or market practices. For example, the “Ownership” testing category appears to touch on issues related to the First Sale Doctrine: it lists as a testing criterion whether a consumer “own[s] every part” of the product, and the indicator of that criterion is that “[t]he company does not retain any control or ownership over the operation, use, inputs, or outputs of the product after it has been purchased by the consumer.”

Consumer Reports developed the Standard in collaboration with a number of partners, primarily Disconnect, Ranking Digital Rights, and the Cyber Independent Testing Lab (CITL). It is currently a first draft, but CR and its collaborators welcome feedback and suggestions. To provide input, see the Contribute tab on the Standard’s website.

Posted in International Privacy, Privacy and Data Security

Commerce to Begin Accepting Swiss-US Privacy Shield Applications in a Month

As we noted in our January blog post Swiss-US Privacy Shield Adopted, Aligns with EU-US Privacy Shield, the Department of Commerce will begin accepting self-certifications to the Swiss-US Privacy Shield on April 12, 2017.

In response to frequently asked questions, Commerce provides guidance on how to self-certify:

  • Companies already certified under the EU-US Privacy Shield: Companies that have already self-certified to the EU-US Privacy Shield Framework can log in to their existing Privacy Shield accounts and click on “self-certify.” Companies will have to pay a separate annual fee, which will be similar in tier structure to the EU-US Privacy Shield fee structure. If a company is approved under both frameworks, the re-certification date will be one year from the date of the first of the two certifications.
  • Companies not yet certified under the EU-US Privacy Shield: If a company is not yet certified under the EU-US Privacy Shield, then it will be able to select the “Self-Certify” link on the Privacy Shield website to certify for one or both of the frameworks.

Regardless of whether a company is certified under the EU-US Privacy Shield, any company applying for certification under the Swiss-US Privacy Shield framework will have to update its privacy notice to align with Privacy Shield requirements.
