On July 11, 2024, after considering comments from insurers, trade associations, advisory firms, universities, and other stakeholders, the New York State Department of Financial Services (NYSDFS) issued its Final Circular Letter regarding the “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing” (“Final Letter”).

By way of background, NYSDFS published its Proposed Circular Letter (“Proposed Letter”) on the subject in January 2024. As we noted in our February blog, the Proposed Letter called on insurers and others in the state of New York that use external consumer data and information sources (“ECDIS”) and artificial intelligence systems (“AIS”) to assess and mitigate bias, inequality, and discriminatory decision making, along with other adverse effects, in the underwriting and pricing of insurance policies. While NYSDFS recognized the value of ECDIS and AI in simplifying and expediting the insurance underwriting process, the agency—following current trends—wanted to mitigate the potential for harm.

And if the opening section of the Final Letter is any indication, the agency did not back down. It continued to insist, for example, that senior management and boards of directors “have a responsibility for the overall outcomes of the use of ECDIS and AIS”; and that insurers should conduct “appropriate due diligence and oversight” with respect to third-party vendors. NYSDFS declined to define “unfair discrimination” or “unlawful discrimination,” noting that those definitions may be found in various state and federal laws dealing with insurance and insurers.

The Themes

Definitions. The Final Letter retains the original definition of ECDIS (Point 6, former Point 5) as including “data or information used—in whole or in part—to supplement traditional medical, property or casualty underwriting or pricing, or to establish ‘lifestyle indicators’ that may contribute to an underwriting or pricing assessment of an applicant for insurance coverage.” Similarly, AIS is still defined (Point 5, former Point 4) as “any machine-based system designed to perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement,” used for the same purpose.

Proxy Assessments. The Final Letter clarifies the proxy-assessment requirement in Point 12 (former Point 11), which mandates that insurers demonstrate that ECDIS do not serve as a “proxy” for any protected class in a way that results in unfair or unlawful discrimination. Under the requirement, insurers must demonstrate that the ECDIS they employ for underwriting and pricing are not prohibited by law, and they must evaluate the extent to which ECDIS are correlated with (“proxy for”) status in any protected class in a way that may result in unfair or unlawful discrimination. The Final Letter adds a sentence providing that whether ECDIS correlate with a protected class may be determined using data available to the insurer or may be reasonably inferred using accepted statistical methodologies. If such correlations are found, the Final Letter mandates that insurers next consider whether the use of the ECDIS is required by a legitimate business necessity.

Quantitative Assessments. The Final Letter clarifies the requirement of quantitative assessments in Point 18 (former Point 17). NYSDFS rejected commenters’ arguments that they cannot perform these assessments because they do not collect data regarding protected classes. The agency specified that the quantitative assessments apply only to protected classes for which data are available or may be reasonably imputed using statistical methodologies; insurers are not expected to collect additional data. The additional requirement of a qualitative assessment of unfair and unlawful discrimination in Point 19 (former Point 18) remains largely unchanged. A qualitative assessment “includes being able to explain, at all times, how the insurer’s AIS operates and to articulate a logical relationship between ECDIS and other model variables with an insured or potential insured individual’s risk.”
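To give a concrete sense of what a quantitative assessment of disproportionate effect might look like (purely illustrative; the Final Letter does not mandate any particular metric), one widely used measure is the adverse impact ratio borrowed from employment-law practice:

```python
def adverse_impact_ratio(rate_protected, rate_reference):
    """Ratio of favorable-outcome rates (e.g., approval rates) between a
    protected class and a reference group. Values well below 1.0 suggest
    disparate impact. The 0.8 'four-fifths' benchmark often cited with
    this metric comes from employment-law practice and is illustrative
    here, not an NYSDFS standard."""
    return rate_protected / rate_reference

# Hypothetical approval rates for illustration only
ratio = adverse_impact_ratio(0.60, 0.90)
print(round(ratio, 3))  # → 0.667, below the illustrative 0.8 benchmark
```

An insurer would compute such rates only for protected classes whose membership is available in, or reasonably imputable from, its own data—consistent with the agency’s clarification above.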

What data is “available,” however? The issue of data scarcity—defined by Expert.ai as “the lack of data that could possibly satisfy the need of the system to increase the accuracy of predictive analytics”—is growing. Indeed, as aptly noted by AI observers, “[W]hile the internet generates enormous amounts of data daily, quantity doesn’t necessarily translate to quality when it comes to training AI models. Researchers need diverse, unbiased and accurately labeled data—a combination that is becoming increasingly scarce.” It remains to be seen how NYSDFS’s limitation to “available data” will play out in practice.

Governance and Risk Management. The Final Letter keeps “the expectation that insurers take an appropriate, risk-based approach to utilizing ECDIS and AIS,” but NYSDFS declined to provide additional detail on thresholds for sufficiency regarding risk management procedures. “It is up to insurers,” the NYSDFS stated, “to determine the appropriate sufficiency thresholds and standards of proof based on the product and the particular use of ECDIS or AIS.” The topic—covered in Point 20 (former Point 19)—thus remains unchanged, meaning that New York law requires insurers to have a corporate governance framework that is appropriate for the nature, scale, and complexity of the insurer; and that the framework should provide appropriate oversight of the insurer’s use of ECDIS and AIS to ensure compliance with insurance laws and regulations.

Board and Senior Management Oversight. NYSDFS reiterated that senior management and boards of directors “have a responsibility for the overall outcomes of the use of ECDIS and AIS.” While the agency indicated in the “themes” section that this responsibility does not include “the day to day implementation,” both the Proposed Letter (Point 23) and the Final Letter (Point 22) make clear that senior management, at least, “is responsible for day to day implementation of the insurer’s development and management of ECDIS and AIS, consistent with the board’s or other governing body’s strategic vision and risk appetite.” The sections on Governance and Risk Management remain largely unchanged, except for minor wording changes under Part B, Policies, Procedures and Documentation.

Third-Party Vendors. The requirement in Points 35 and 36 (former Points 34 and 35) that insurers maintain appropriate oversight of third-party vendors, with written standards, policies, procedures, and protocols for the acquisition of, use of, or reliance on ECDIS and AIS developed or deployed by a third-party vendor, remains unchanged. Insurers retain responsibility for understanding any tools used in underwriting and pricing that are developed and deployed by third-party vendors, and for ensuring that those tools comply with all applicable laws and regulations. The Final Letter contains a new Point 37 specifying that “where appropriate and available,” insurers should include terms in contracts with third-party vendors that i) provide audit rights or entitle the insurer to receive audit reports by qualified auditing entities; and ii) require the third-party vendor to cooperate with the insurer regarding regulatory inquiries and investigations related to the insurer’s use of the third-party vendor’s product or services.

Pertinent Provisions

The NYSDFS Final Letter reiterates:

  • Insurers’ use of emerging technologies, such as AIS and ECDIS, must comply with all applicable federal and state laws and regulations. (Point 3)
  • An insurer should not use ECDIS or AIS in underwriting and pricing unless the insurer has determined that the ECDIS or AIS does not collect or use criteria that would constitute unfair or unlawful discrimination or an unfair trade practice. (Point 13)
  • The responsibility to comply with antidiscrimination laws remains with the insurer at all times. (Point 14)
  • Insurers should not use ECDIS or AIS for underwriting or pricing purposes unless they can establish that the data source or model is not based on any class protected pursuant to Insurance Law Article 26 and that such use would not result in or permit any unfair discrimination or otherwise violate the Insurance Law or applicable regulations. (Point 10)
  • Insurers should be able to demonstrate that ECDIS are supported by generally accepted actuarial standards of practice and are based on actual or reasonably anticipated experience. (Point 11)
  • Comprehensive assessment: Insurers should assess, as a first step, whether the use of ECDIS or AIS produces disproportionate adverse effects in underwriting or pricing for similarly situated insureds or insureds of a protected class (using data available to the insurer or reasonably inferred using accepted statistical methodologies).
    • If it does, they should then assess whether there is a legitimate, lawful, and fair explanation (Step 2).
    • If there is a legitimate, lawful, and fair explanation, they should conduct and appropriately document a search and analysis for a less discriminatory alternative variable or methodology that would reasonably meet the insurer’s legitimate business needs (Step 3).
    • If there is no legitimate, lawful, and fair explanation, or if there is one but a less discriminatory alternative exists, they should modify their use of ECDIS or AIS and reevaluate.
    • If no less discriminatory alternative exists, the insurer should conduct ongoing risk management consistent with Section III (Point 15).
  • The requirements for both qualitative and quantitative assessments of whether the use of ECDIS/AIS produces unfair or unlawful discrimination, along with the frequency of testing and documentation requirements, remain. (Points 16-19)
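The multi-step comprehensive assessment in Point 15 can be read as a decision flow. The sketch below is a hypothetical illustration: the inputs represent the insurer’s own determinations at each step, and the function names and return strings are ours, not NYSDFS’s.

```python
def comprehensive_assessment(has_disproportionate_effect,
                             has_lawful_fair_explanation,
                             less_discriminatory_alt_exists):
    """Illustrative decision flow for the Point 15 assessment steps."""
    # Step 1: no disproportionate adverse effect found
    if not has_disproportionate_effect:
        return "proceed"
    # Step 2: no legitimate, lawful, and fair explanation
    if not has_lawful_fair_explanation:
        return "modify use of ECDIS/AIS and reevaluate"
    # Step 3: explanation exists, but a documented search found a
    # less discriminatory alternative that meets business needs
    if less_discriminatory_alt_exists:
        return "modify use of ECDIS/AIS and reevaluate"
    # No less discriminatory alternative exists
    return "proceed with ongoing risk management (Section III)"

print(comprehensive_assessment(True, True, False))
```

The branching makes explicit that an insurer reaches ongoing risk management only after documenting both a legitimate explanation and the absence of a less discriminatory alternative.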

Takeaways

A recent Digital Insurance article indicates that the insurance industry is welcoming AI “to accelerate underwriting and claims processing by analyzing historical data to evaluate risk.” The article notes that the effective use of AI can improve claims accuracy by up to 99 percent and increase efficiency by around 60 percent (an insurer’s editorial board reported in May that 79 percent of principal agents have adopted, or plan to adopt, AI over the following six-month period).

Additionally, state as well as federal regulators are looking closely at the potential discriminatory effect of AI in all areas—and the regulations are not necessarily industry-specific. A recent Colorado law, for example, requires developers and deployers of high-risk AI to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. And within the insurance industry specifically, other regulators of insurers will be sure to follow NYSDFS’s lead.

As we noted in our previous blog, the Proposed Letter of January 17 followed a Model Bulletin of the National Association of Insurance Commissioners (NAIC) on the use of artificial intelligence systems in insurance, adopted in December 2023. Nearly a dozen states have adopted it; more will certainly follow. NAIC reminds insurers that those using AI systems must comply with all applicable federal and state insurance laws and regulations, including those addressing unfair discrimination. And with the Federal Trade Commission gearing up to examine how AI and other technological tools are being used in the realm of personalized pricing, one of the most recent AI developments, and issuing subpoenas to financial companies and consultants, it appears that those using these systems in any industry will need to proceed thoughtfully. Interested parties, with the help of counsel, should be carefully looking at what data they have and assessing how they can use it to achieve the best possible outcomes.


Epstein Becker Green Staff Attorney Ann W. Parks assisted with the preparation of this post.
