By Victoria Lee and Andrew W. Grant

On September 12, 2019, the Task Force on Artificial Intelligence, a task force within the House Financial Services Committee (FSC), held a hearing titled “The Future of Identity in Financial Services: Threats, Challenges, and Opportunities.” As the FSC wrote before the hearing, the way customers identify themselves and have their identities verified in the financial marketplace has evolved over the years, especially as transactions move online. As digital identity evolves, financial institutions are turning to artificial intelligence to enhance identity verification, but artificial intelligence can also be put to nefarious uses, such as committing fraud. Congressman Bill Foster (D-IL), who chaired the hearing, noted a recent example of possible AI-enabled fraud (which we covered here) in his opening remarks: AI appears to have been used to mimic an executive’s voice, prompting an employee to initiate an unauthorized transfer of nearly US$250,000.

This hearing addressed a diverse range of topics tied to one fundamental question, succinctly summarized by one of the participants: in an online environment, how can the entity providing a service trust the digital identity of the person seeking it? Below is a summary of most of the participants’ presentations.

  • Anne L. Washington: Ms. Washington, an Assistant Professor of Data Policy at the NYU Steinhardt School, concentrated her testimony on the need for institutions to have a process for resolving disputes that arise when AI makes the wrong identity decision. Financial institutions use AI both to validate a person’s identity and to authenticate that person afterward. However, AI is not infallible, and when it operates at scale, even low error rates will negatively affect many people: a 0.1 percent error rate applied across 100 million transactions, for instance, still produces 100,000 mistakes. An AI must, for example, be able to recognize naming practices from different cultures or religious traditions; a system that cannot may wrongly reject legitimate customers. Entities therefore need to think about how to handle disputes over AI errors so that people can “assert the authority of their lived experience over the authority of the numbers.”
  • Jeremy Grant: Mr. Grant, the Coordinator of the Better Identity Coalition, noted that attackers have caught up with many of the “first-generation tools” that entities have relied on to protect and verify identity. He stressed that industry alone cannot solve the problem of digital identity and needs government to play a more prominent role in addressing critical vulnerabilities. In his prepared remarks, Mr. Grant observed, “Authentication is getting easier, but identity proofing is getting harder.” Authentication is getting easier in part because AI tools allow financial institutions to better determine that the rightful account owner is accessing his or her account. Identity proofing, by contrast, is getting harder in the online environment, so Mr. Grant recommended that the government take an active role in working with industry to create next-generation ID systems. Specifically, paper-based systems should be modernized so that customers can ask the system that created a credential to validate it for the digital world.
  • Amy Walraven: Ms. Walraven, president and founder of Turnkey Risk Solutions, discussed the problem of synthetic identity. A synthetic identity combines real and fake information, such as an actual Social Security number paired with a fake name, address, and date of birth. Unlike traditional identity theft, synthetic identity fraud creates a new identity rather than stealing that of an existing person. Once the new identity has been created, the perpetrator can use it like any other identity, including to open bank accounts, establish a presence on social media, or purchase a cell phone. Ms. Walraven noted that industry and government need to work together to address synthetic identity fraud.
  • Andre Boysen: Mr. Boysen, Chief Identity Officer at SecureKey Technologies, spoke about how trust in the current system has eroded because of the complexity of identification methods combined with the pervasiveness of fraud. He said that solving the challenge of digital identity requires combining multiple factors: things we know, things we have, and things we are (a sketch illustrating this combination appears after this list). This, he stated, requires the involvement of all transaction participants, including the consumer. Mr. Boysen worked on developing such a model in Canada and described it as a public-private partnership among companies such as banks and phone companies, the government, and other trusted parties.
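
For readers who want a concrete picture of the multi-factor approach Mr. Boysen described, below is a minimal Python sketch of a service that trusts a digital identity only when checks across all three factor categories agree. It is purely illustrative: the function names, thresholds, and checks are our own assumptions, not any system presented at the hearing.

    import hashlib
    import hmac

    # Illustrative only: combine "something you know" (a password),
    # "something you have" (a one-time code from a registered device),
    # and "something you are" (a biometric match score).

    def check_knowledge(password: str, stored_hash: str) -> bool:
        # Something you know: compare a (demo-salted) hash of the password.
        candidate = hashlib.sha256(("demo-salt" + password).encode()).hexdigest()
        return hmac.compare_digest(candidate, stored_hash)

    def check_possession(submitted_code: str, expected_code: str) -> bool:
        # Something you have: a one-time code sent to a registered device.
        return hmac.compare_digest(submitted_code, expected_code)

    def check_inherence(match_score: float, threshold: float = 0.95) -> bool:
        # Something you are: a match score from a biometric comparison.
        return match_score >= threshold

    def authenticate(password: str, stored_hash: str,
                     code: str, expected_code: str,
                     match_score: float) -> bool:
        # Trust the identity only when every factor check passes.
        return (check_knowledge(password, stored_hash)
                and check_possession(code, expected_code)
                and check_inherence(match_score))

The design point, consistent with Mr. Boysen’s testimony, is that no single factor is trusted on its own: a stolen password or a spoofed device by itself should not be enough to impersonate someone.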

Committee members then asked the panelists a range of questions, including about the use of blockchain to help solve digital identity problems and about whether the new California privacy law, by limiting the use of consumer data, would hamper fraud prevention. Overall, the hearing provided an insightful look into the challenges industry faces both when it first identifies a person online and when it authenticates that person throughout the lifetime of the relationship. Because AI is used by industry and criminals alike, questions about how best to create and maintain the security of a person’s digital identity will continue.

Finally, this was the second hearing the Task Force on Artificial Intelligence has held in the past four months. You can watch the full hearing here. Follow us here as we continue to provide information on this task force’s efforts, as well as other AI-related news.