Our latest briefing unpacks important developments at the Consumer Financial Protection Bureau (CFPB) and the National Institute of Standards and Technology (NIST), along with other AI-related news at the state and industry level.
Regulatory and Legislative Developments
- The CFPB released a statement on March 16 taking aim at unfair discrimination in consumer finance. The CFPB said it will "closely examine financial institutions' decision-making in advertising, pricing and other areas to ensure that companies are appropriately testing for and eliminating illegal discrimination." To that end, the Bureau also said that "CFPB examiners will require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups." This new point of emphasis should be top of mind for financial institutions, and it is another signal that automated decision-making processes are likely to face increased scrutiny from the CFPB.
- NIST has released a draft of its AI Risk Management Framework. The draft framework identifies risks related to AI systems and offers a process for managing those risks. In addition, NIST has published a paper, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which explores how AI bias can result from human biases and from systemic, institutional biases. NIST is hosting a workshop on March 29-31 to further its development of the risk management framework.
- The New York Department of Financial Services (NYDFS) is seeking information on insurers' use of personal credit information. The NYDFS has asked that insurers writing private passenger auto, commercial auto and homeowners insurance provide information regarding their use of credit and insurance scores. Among other things, NYDFS wants to know whether companies have performed "any independent or other analysis to ensure that the use of credit and insurance scores for initial tier placement is not a proxy for any other prohibited variables, such as income." In other news, New York Superintendent Adrienne A. Harris will serve as one of the co-chairs of the NAIC's Big Data and Artificial Intelligence Working Group.
- Digital health stakeholders convened at the inaugural ViVe conference, billed as a "new health information technology event focused on the business of health care systems," and discussed key themes around the use of data and data science in health care. The conference, held March 6-9 in Miami, brought together health care, digital health, and technology executives and leaders, along with health startups and government officials, to discuss innovations and developments in digital health. Programming grappled with topics such as using technology to increase access to care; leveraging data, algorithms and AI to improve patient outcomes and reduce clinician burnout; data security; interoperability; policy and regulatory developments; and "techquity." Among the many issues discussed, one key takeaway loomed large over all sessions: data, algorithm-based solutions, and artificial intelligence are destined to become a fixture in the future of health care. Over the next decade and beyond, industry leaders, care providers and regulators will be tasked with finding ways to encourage technological innovation and deploy health technology solutions while ensuring patient safety, data security and privacy.
- Insurance industry regulators and standard-setters weighed in on artificial intelligence at the Geneva Association's Programme on Regulation and Supervision (PROGRES) Seminar. The seminar included a panel on AI featuring Petra Hielkema (Chair of EIOPA) and Maryland Insurance Commissioner Kathleen Birrane (Chair of the NAIC Innovation, Cybersecurity, and Technology (H) Committee). Hielkema outlined the EU Artificial Intelligence Act, which would establish harmonized rules on AI throughout the EU, including assigning applications of AI to three risk categories. Although we won't see finalized language until the end of 2023 at the earliest, this bears watching, as it will likely influence thinking in the U.S. Commissioner Birrane laid out the charges of the H Committee, focusing on the work plans that the H Committee will roll out in April for each of its working groups. She reported that the Big Data and Artificial Intelligence Working Group will continue in full force, fleshing out the AI principles and developing a more precise framework around those guidelines. This will include looking at third-party vendors. The H Committee also will establish a single group to identify implicit bias in algorithms. That work is currently dispersed among several groups, and she intends to bring it together into one collaborative group.