Byte Back AI: December 23, 2024
Welcome to the first edition of Byte Back AI, a weekly newsletter providing updates on proposed state AI bills and regulations, an AI bill tracker chart, summaries of important AI hearings, and special features. The first two editions of Byte Back AI will be released for free on Byte Back. Starting January 6, 2025, Byte Back AI will be available only to paid subscribers. For more information on subscriptions, please click here. If you would like to be added to the Byte Back AI waitlist, please click here. Waitlist members will be contacted during the week of December 30 with more information on how to subscribe to Byte Back AI.
As always, the contents provided below are time-sensitive and subject to change.
Table of Contents
- What’s New
- State AI Bill Tracker Chart
- Summaries of Notable AI Hearings
- Special Features
- What’s New
On December 9, New York lawmakers delivered the Legislative Oversight of Automated Decision-making in Government (or LOADinG) Act (SB 7543/A 9430) to Governor Hochul. The bill passed the legislature in June but was only delivered to the governor last week. The governor has until December 24 to sign or veto the bill; otherwise, it will become law. The bill regulates the use of automated decisionmaking technologies by state agencies.
Meanwhile, three members of a multistate AI working group are moving forward with AI bills. Over the past few months, Connecticut Senator James Maroney has been leading the group, which includes more than two hundred bipartisan state lawmakers from more than forty states. Over sixty members of the working group recently signed an open letter published by the IAPP on “why now is the time to act on US state AI legislation.”
Last year, Senator Maroney’s Senate Bill 2 passed the Connecticut Senate but failed to move in the House after the governor threatened a veto. Connecticut Senate Democrats already announced that an AI bill will be a caucus priority in 2025. Senator Maroney has been actively preparing a revised version of his bill to introduce when the legislature opens January 8.
In Texas, Representative Giovanni Capriglione (author of Texas’ data privacy law) previously circulated a draft of the Texas Responsible AI Governance Act. That bill is one of many that Texas lawmakers will consider when the legislature opens January 14. State lawmakers have already prefiled fifteen bills dealing with AI on a wide range of topics, including the use of AI in mental health services, classrooms, and elections. Of note, Senator Hughes introduced a bill (SB 668) that requires a narrow set of entities to disclose their use of AI in limited circumstances.
In Virginia, Delegate Michelle Maldonado’s Artificial Intelligence Developer Act (HB 747) is one of a handful of AI-related bills that will carry over from the 2024 session. Virginia has a very short 2025 legislative session, opening January 8 and closing February 22.
Meanwhile, Colorado policymakers have been actively engaged in a task force to consider potential amendments to last year’s Colorado AI Act (SB 205). We provide a summary of the latest task force meeting in section 3 below.
Lawmakers in other states are also actively pursuing AI legislation on a wide variety of topics.
Arkansas lawmakers prefiled two bills: HB 1041 (prohibiting deepfakes in elections) and HB 1071 (providing protections for an individual whose photograph, voice, or likeness is reproduced through AI and used commercially). The Arkansas legislature opens January 13.
Two placeholder bills have been filed in California: SB 11 and SB 7. As currently drafted, SB 11 provides, in part, that “any person or entity that sells or provides access to any artificial intelligence technology that is designed to create any synthetic content shall provide a consumer warning that misuse of the technology may result in civil or criminal liability for the user.” SB 7 currently does not have text. The California legislature reconvenes January 6.
Of note, California policymakers are also engaged in rulemaking on two AI-related topics. The California Privacy Protection Agency initiated formal rulemaking on proposed automated decisionmaking technology regulations in November, with comments due January 14. We provide a summary of the Agency’s December 18 Board meeting below, and you can find a webinar on the proposed regulations here. In addition, the California Civil Rights Council is engaged in rulemaking on proposed modifications to employment regulations regarding automated-decision systems. Our special feature section, below, provides a summary comparison of the two proposed regulations.
In Illinois, Representative Morgan prefiled HB 5918, which creates the Artificial Intelligence Systems Use in Health Insurance Act. The Illinois legislature opens January 8.
Missouri lawmakers prefiled three bills in advance of the legislature’s January 8 open date: SB 509 (use of AI in elections), HB 362 (disclosure of intimate digital depictions), and SB 85 (use of AI in property assessments).
Lawmakers in three other states prefiled bills dealing with the use of AI in elections: Montana (SB 25), Nevada (AB 73), and South Carolina (H 3517). Those legislatures open January 6, February 3, and January 14, respectively. Of note, the Montana legislature’s website shows fourteen bill drafting requests dealing with AI.
Finally, the New Jersey legislature is currently engaged in a two-year legislative cycle, with bills filed in 2024 carrying over to 2025. The legislature is considering numerous AI-related bills. The ones we are currently tracking most closely are: AB 4030 / AB 3854 / SB 1588 (use of automated decision tools); SB 3015 / AB 3911 (use of AI in video interviews for hiring); and SB 2964 / AB 3855 (independent bias auditing for automated employment decision tools).
- State AI Bill Tracker Chart
Click here to see our latest AI state bill tracker chart.
- Summaries of Notable AI Hearings
In this edition of Byte Back AI, we provide summaries of the latest Colorado AI Impact Task Force meeting held on December 20 and the California Privacy Protection Agency’s Board meeting held on December 18.
Colorado AI Task Force
On Friday, December 20, the Colorado AI Impact Task Force held its third discussion panel. Previous panelists included representatives from small and large businesses, as well as members from Colorado’s Office of Economic Development and International Trade, the Governor’s Office of Information Technology, the Center for Democracy and Technology (CDT), and the Colorado Technology Association (CTA). This hearing had panelists from the latter two organizations. Their perspectives on the Colorado AI Act (SB 205) could not have been more different.
Colorado Technology Association
The first panelist, from the CTA, indicated that the association largely shares the goals of SB 205, which aims to reduce algorithmic bias, build public trust, and ensure fair treatment of consumers in the use of AI. However, the CTA proposed several changes to refine the law:
- Clarify key definitions
- Algorithmic discrimination: The CTA suggested referencing existing legal definitions on discrimination in case law and in other states to avoid conflicting interpretations.
- Consequential decisions: The CTA argued that the current definition is vague, making it difficult for companies to determine whether they are subject to the law. The CTA suggested clarifying which companies must comply and providing more detail in the subcategories.
- High-risk AI systems: The CTA argued that more clarity is needed to distinguish between AI used to make critical decisions that impact people’s lives and AI used for relatively novel functions that merely improve efficiency.
- Redirect consumer appeals
- Rather than having consumers appeal to companies, the CTA suggested that consumers report their concerns about adverse AI decisions directly to the Attorney General.
- This approach would avoid unnecessary compliance costs, allowing companies to focus on patterns of bias identified by the Attorney General rather than isolated cases that may not provide sufficient evidence of discrimination.
- Restore the opportunity to cure
- A right to cure was part of the original bill, and the CTA asked that it be added back. The CTA argued that a cure period would promote compliance by giving companies that are trying to adhere to the law an opportunity to correct missteps.
- The CTA argued that this would not excuse discriminatory practices, as the law establishes best practices to reduce the risk of discrimination, and that there are other existing laws that forbid and enforce against discrimination.
- Adjust disclosure requirements
- The CTA advocated for an enforcement model similar to traditional law enforcement, where misconduct is investigated after the fact.
- This would allow the Attorney General’s office to focus its resources on instances where there is an established cause for concern, rather than requiring ongoing self-reporting.
The suggestions received mixed reviews from the committee. While members appreciated the level of detail in the suggestions, they were concerned by the CTA’s proposed changes to the disclosure requirements and their potential impact on transparency. A member from CDT pointed out that the proposed disclosure and self-reporting exemptions were pulled from the EU AI Act. Another member took issue with the CTA’s suggestions, arguing that the existing disclosure requirements will not be demanding because most companies will use AI to comply, and that some will eventually develop AI compliance departments.
Center for Democracy and Technology
CDT was once again featured on the panel and strongly advocated for more restrictions. Before getting into those, they highlighted the following strengths and weaknesses of SB 205.
Strengths:
- Broad applicability to entities
- Mandates impact assessments and a right to explanation for consumers
- Provides the Attorney General with authority to interpret and clarify the law
Weaknesses:
- Transparency provisions are too narrow and do not require explanations for all uses
- Impact assessment requirements are not strong enough
- Loopholes and exemptions undermine the bill’s intent
- Enforcement authority rests solely with the Attorney General, without oversight from local district attorneys
With their self-proclaimed “ironclad stance,” the CDT panelists went on to describe their proposed changes. They pointed to a recent opinion poll showing that consumers prefer that AI tools not be used to make decisions about them, and argued that companies will take advantage of any loophole they can to avoid disclosures. Among other things, most public interest groups prefer a broader definition of “consequential decision,” such as a “substantial factor” standard rather than a “controlling factor” standard. They also oppose the trade secret exemption, claiming that companies routinely declare that regular business practices are trade secrets.
During the meeting, other individuals argued that the law will kill innovation and harm Colorado residents and businesses. Still others countered that these were scare tactics similar to those used when states considered passing privacy laws, and that none of those predictions came true.
CPPA Hearings
The California Privacy Protection Agency has held six board meetings over the past year to discuss potential updates to existing CCPA regulations. This past November, the Agency’s Board met to discuss initiating rulemaking on proposed regulations covering insurance, cybersecurity audits, risk assessments, and automated decisionmaking technology (ADMT).
During the meetings, members of the public offered their opinions on the proposed regulations. Numerous commenters argued that the CPPA should not move forward with the ADMT regulations and should instead allow the legislature and the governor to lead the charge on legislation. Aside from the broad language of the Agency’s proposed opt-out requirements for ADMT, commenters were especially concerned about increased compliance costs, worrying that businesses would be forced to divert efforts from research and innovation toward compliance. They argued that some businesses may be forced to leave the state and that small businesses would be especially affected. Nonetheless, the Board voted to move forward with initiating formal rulemaking. The public has until January 14, 2025, to provide comments.
The Board met again on December 18 to highlight its partnerships with other agencies and to provide an update on state and federal privacy bills. The Agency highlighted its engagement with other privacy authorities around the world, including the Global Privacy Enforcement Network, the Berlin Group, the Asia Pacific Privacy Authorities, and the multistate AI working group. Representatives emphasized the importance of harmonizing with other jurisdictions to ease compliance burdens, and they also gave an update on federal and California legislation.
The next Board meeting will take place on January 14, which will be the public’s opportunity to provide oral comments on the Agency’s proposed regulations. You can find details of the Agency’s previous hearings here.
- Special Features
For our first Byte Back AI special feature, we are providing a comparison chart of two sets of California regulations seeking to cover automated decisionmaking technologies. The first is the draft CCPA regulation on automated decisionmaking technology. As noted, formal rulemaking began on the draft CCPA regulations in late November, and the comment period is set to end on January 14, 2025. The second is the California Civil Rights Council’s Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems. The chart was prepared by Carol Barrera.
| | Privacy Regulations (CCPA) | Civil Rights Regulations (ADS) |
| --- | --- | --- |
| Focus | Employee data rights, transparency, and control over personal information | Anti-discrimination in AI usage, fairness in decision-making processes |
| Key Requirements | Notices about data collection and use, opt-outs for automated decisions, access and correction rights | Bias audits to ensure fairness, detailed record-keeping for system operations, fairness testing and accommodations |
| Tools Covered | Automated decisionmaking technology (ADMT), including resume screening tools and performance evaluation systems | Automated-decision systems (ADS), such as hiring platforms and performance review algorithms |
| Employer Actions | Update privacy notices, provide opt-out mechanisms, establish data correction workflows, conduct annual risk assessments | Monitor AI tools for bias, conduct fairness tests, maintain records for at least four years, provide appeals for ADS-driven decisions |
| Primary Employee Protections | Transparency in data collection, prevention of unauthorized access, and control over sensitive information like health or biometric data | Equity in employment opportunities, avoidance of decisions that disadvantage protected classes |
| Compliance Challenges | Implementing clear opt-out processes, maintaining compliance with sensitive data handling requirements, and avoiding security breaches | Regularly auditing ADS for bias, justifying system use as job-related, and addressing potential discrimination claims |