
New AI privacy, security regulations likely coming with pending federal, state bills

News Analysis
Dec 08, 2020 | 6 mins
Compliance | Data Privacy | Privacy

CISOs should prepare for new requirements to protect data collected for and generated by artificial intelligence algorithms.

Credit: Iaremenko / Getty Images

Regulation surrounding artificial intelligence technologies will likely have a growing impact on how companies store, secure, and share data in the years ahead. The ethics of artificial intelligence (AI), particularly the use of facial recognition by law enforcement authorities, has received a lot of attention. Still, the US is just at the beginning of what will likely be a surge in federal and state legislation governing what companies can and cannot do with algorithmically derived information.

“It’s really the wild west right now in terms of regulation of artificial intelligence,” Peter Stockburger, partner in the Data, Privacy, and Cybersecurity practice at global law firm Dentons, tells CSO. Much like the California Consumer Privacy Act (CCPA), which spelled out notice requirements that companies must send to consumers regarding their privacy protections, “a lot of people think that’s where the AI legislation is going to go, that you should be giving users notification that there’s automated decision making happening and getting their consent.”

AI encompasses a wide range of technical activities, from the creation of deepfakes to automated decision-making regarding credit scores, rental applications, job worthiness, and much more. On a day-to-day basis, many, if not most, companies now use formulas for business decision-making that could fall into the category of artificial intelligence.

“For example, when you’re interacting with financial institutions, they’ll ask you a question like your age or your zip code, and then the products they offer after that are based on automated decision-makers,” Stockburger says.

Europe is ahead of the curve in considering the privacy and security implications of AI with the European Parliament’s resolution on Civil Law Rules on Robotics, passed in 2017, the European Commission’s Ethical Guidelines for Trustworthy AI, adopted in April 2019, and the Organisation for Economic Co-operation and Development’s (OECD’s) Council Recommendation on Artificial Intelligence, approved in May 2019.

US legislative and policy initiatives affect AI data use

Many US legislative and policy initiatives have been introduced that affect how AI data is used and protected. Most prominently, US Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), along with Rep. Yvette D. Clarke (D-NY), introduced the Algorithmic Accountability Act in April 2019. That bill would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans. It would also require “entities that use, store or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

The legislation shares many aspects with the EU’s General Data Protection Regulation (GDPR) in terms of having to conduct impact assessments for high-risk automated decision systems and information systems, according to Yoon Chae, an intellectual property attorney at Baker McKenzie LLP. The draft bill requires independent auditors to conduct these assessments.

The Algorithmic Accountability Act defines an automated decision system as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.” It was referred to the Committee on Commerce, Science, and Transportation, where it will die with the current Congress’s end.

Another bill that mirrors many aspects of the GDPR, the Commercial Facial Recognition Privacy Act, was introduced in the Senate in March 2019. It “prohibits entities from collecting, processing, storing, or controlling facial recognition data unless such entities (1) provide documentation that explains the capabilities and limitations of facial-recognition technology, and (2) obtain explicit affirmative consent from end-users to use such technology after providing notice about the reasonably foreseeable uses of the collected facial-recognition data. Facial-recognition data includes attributes or features of the face that permit facial-recognition technology to uniquely and consistently identify a specific individual.”

Controllers of facial recognition data are barred from using the data to discriminate against end-users or for purposes not foreseeable by the end-user. Controllers are also prohibited from sharing the data without the user’s consent and cannot condition a product’s use on the end-user providing affirmative consent. Like the Algorithmic Accountability Act, the bill has been referred to the Committee on Commerce, Science, and Transportation.

Another federal initiative that touches on AI-related security requirements is a Trump Administration executive order known as the American AI Initiative, issued on February 11, 2019. That order directed the National Institute of Standards and Technology (NIST) to create a plan for federal engagement in AI standards within 180 days. NIST released its plan on August 2, 2019, focusing on nine areas for AI standards, including safety, risk management, and trustworthiness.

Shortly after the executive order was signed, the House passed its own resolution, “Supporting the development of guidelines for artificial intelligence’s ethical development.” The resolution features ten aims that seek to balance AI’s potential with the need for “safe, responsible and democratic” technology development.

State- and city-level bills address AI technologies

Many AI-related bills have been introduced at the state and even city level that touch on biometric privacy, overlapping with facial recognition notice, consent, and privacy requirements. Illinois, Texas, and Washington have specific biometric laws that regulate the collection, retention, and use of biometric data. The landmark CCPA, which went into effect on January 1, 2020, also includes provisions dealing with biometric data use, storage, and protection.

These laws differ in terms of how they define biometric data and differ substantially from the proposed federal laws, with more states likely to follow suit starting in 2021. Before the COVID-19 crisis, Arizona, Florida, and Massachusetts contemplated their own proposed biometric data laws.

Even with all this activity, experts think even more efforts will emerge at both the federal and state levels as legislatures begin to focus on artificial intelligence as the COVID crisis subsides. “There has been a shift in public opinion that not all technology is good. Now people are more aware of some of the dangers,” Chae says.

Chae tells CSO that no bills dealing with artificial intelligence were introduced in California during the 2015-to-2016 legislative session. “That increased to five bills from 2017 to 2018 and 13 bills for the 2019 to 2020 session, and that was only during half the session. At the federal level, there were only two bills from 2015 to 2016, which increased to 42 bills from 2017 to 2018. Halfway through 2019 to 2020, 37 bills were already introduced.”

Chae says that CISOs now “need to start thinking about setting best practices for auditing and managing different algorithms” to get ahead of the curve on the new requirements ahead. From a broader perspective, technology developers can help ensure that legislation keeps up with the technology by incorporating privacy and security at the outset, Dentons’ Stockburger believes. “You should develop the technology in a way that’s privacy focused, privacy by design, security by design,” he says.