Posts by Frances M. Green, Of Counsel
Drawing upon decades of experience as a trial lawyer and trusted counselor, Fran Green counsels global clients on navigating the complexities of workforce management, cybersecurity, and data privacy laws, as well as the ...
The rise of workplace wearable technology has opened new possibilities for employee efficiency, safety, and health monitoring. Collecting health-related workplace data, however, may subject employers to liability under nondiscrimination laws.
Yesterday, the Equal Employment Opportunity Commission (“EEOC”) published a fact sheet addressing potential concerns and pitfalls employers may run into when gathering health-related information and making employment-related decisions based on that information.
Understanding Workplace Wearables
Wearable technologies, or “wearables,” are digital devices worn on the body that can track movement, collect biometric data, and monitor location. Employers have implemented these tools for a multitude of reasons, including tracking and predicting how long certain tasks take employees in order to promote efficiency. Wearables may also be programmed to recognize signs of fatigue, like head or body slumps, and to detect improper lifting form, which can be critical for workplace health and safety.
On September 24, 2024, the U.S. Department of Labor (“DOL”), collaborating with the Partnership on Employment & Accessible Technology (“PEAT”), a non-governmental organization the DOL funds and supports, announced the publication of the “AI & Inclusive Hiring Framework” (the “DOL’s Framework”). The DOL’s Framework, created in response to the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, helps employers create and maintain non-discriminatory artificial intelligence (“AI”) hiring procedures for job seekers with disabilities. (For more information on the Biden-Harris Executive Order, see our Workforce Bulletin.)
Establishing these procedures has become a top priority for employers as nearly 1 in 4 organizations have implemented AI tools in human resource departments, according to new research from SHRM.
AI-powered recruitment and selection tools can streamline the hiring process by identifying potential candidates or screening applicant resumes, but employers must ensure their AI hiring tools do not intentionally or unintentionally perpetuate discriminatory practices or create barriers for job seekers with disabilities. Employers may rely on the DOL’s Framework as a useful starting point when implementing AI hiring tools. Employers that have already implemented such tools should review the DOL’s Framework to ensure their practices do not create unwanted liability.
As more organizations across industry sectors store personal data with cloud storage vendors— including the three largest vendors in the world, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform—federal regulatory agencies are increasing their scrutiny of data control efforts and vetting the data privacy and security protocols of third-party vendors. AT&T’s recent settlement with the Federal Communications Commission (FCC) serves as a cautionary tale.
What Is the Cloud?
In case your cloud knowledge is, well, nebulous, cloud data storage allows user organizations to store data on remote servers that are maintained by a third party and are located off site. Users then access the data via the internet. This enables seamless collaboration and accessibility by users in disparate locations, without the burden of physical infrastructure.
According to Precedence Research, the cloud computing market will continue to grow, with the global market predicted to surpass $1 trillion by 2028. A 2023 survey of hospital and health system leaders conducted by Global Healthcare Exchange (GHX) found that “cloud-based solutions are quickly becoming a new standard within hospitals and health systems and impact nearly every domain, including supply chain, clinical, finance, and HR teams.” The survey revealed that nearly 70 percent of all hospitals and health systems are likely to adopt a cloud-based approach by 2026.
The benefits of cloud storage include scalability, cost efficiencies, increased user accessibility, and improved operational resiliency. Cloud technology can even lead to improved cybersecurity. Yet the GHX study still emphasizes the importance of selecting the “right cloud partner” to achieve the best outcome and stronger data security.
On July 11, 2024, after considering comments from insurers, trade associations, advisory firms, universities, and other stakeholders, the New York State Department of Financial Services (NYSDFS) issued its Final Circular Letter regarding the “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing” (“Final Letter”). By way of background, NYSDFS published its Proposed Circular Letter (“Proposed Letter”) on the subject in January 2024. As we noted in our February blog, the Proposed Letter called on insurers and others in the State of New York using external consumer data and information sources (“ECDIS”) and artificial intelligence systems (“AIS”) to assess and mitigate bias, inequality, and discriminatory decision making or other adverse effects in the underwriting and pricing of insurance policies. While NYSDFS recognized the value of ECDIS and AI in simplifying and expediting the insurance underwriting process, the agency—following current trends—wanted to mitigate the potential for harm.
And if the opening section of the Final Letter is any indication, the agency did not back down. It continued to insist, for example, that senior management and boards of directors “have a responsibility for the overall outcomes of the use of ECDIS and AIS”; and that insurers should conduct “appropriate due diligence and oversight” with respect to third-party vendors. NYSDFS declined to define “unfair discrimination” or “unlawful discrimination,” noting that those definitions may be found in various state and federal laws dealing with insurance and insurers.
On July 12, 2024, in a keenly awaited decision, the U.S. District Court for the Northern District of California determined that Workday, Inc. (“Workday”), a provider of AI-infused human resources (HR) software, can be held liable under Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act of 1967 (ADEA), and the Americans with Disabilities Act (ADA) (collectively the “Anti-Discrimination Laws”) as an agent of the corporate clients that hire Workday to screen and source candidates for employment by utilizing its AI-infused decision-making tools. In noting that “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era,” the court underscored the EEOC’s admonition, which we discussed in our previous post, that employers delegating their hiring protocols to AI must do so cognizant of the potential discriminatory impacts of such use. See Opinion at 10. Thus, the court allowed plaintiff Derek Mobley’s disparate impact claim to proceed, finding that Mobley’s allegations supported a plausible inference that Workday’s screening algorithms automatically rejected his applications based on protected characteristics rather than his qualifications.
Prior Proceedings
Mobley filed his initial complaint as a putative class action on February 21, 2023, alleging claims against Workday as an “employment agency” for disparate impact and intentional discrimination under the Anti-Discrimination Laws. His complaint centered on his allegation that he applied for “at least 80-100 positions that upon information and belief use Workday, Inc. as a screening tool for talent acquisition and/or hiring” and “has been denied employment each and every time.” Complaint at 10.
The past several years have witnessed a notable uptick in legislation and agency enforcement attention focused on workplace artificial intelligence, specifically the infusion of AI and so-called automated decision-making tools. Colorado’s new Artificial Intelligence Act, for example, designates employment as a “high-risk” sector of AI applications and has heightened the concerns of lawmakers and corporate executives. Lawsuits such as Mobley v. Workday and Moffatt v. Air Canada underscore concerns about AI used in candidate screening, recruitment, and conversational applications. Most recently, the U.S. Equal Employment Opportunity Commission issued a Determination finding cause to believe an employer violated the Older Workers Benefit Protection Act by using AI in a reduction in force that adversely impacted older workers. A complaint in the Southern District of New York against IBM and its spinoff technology company, Kyndryl, promptly followed.
Perhaps not surprisingly, over the past few years, the State of New York (“NYS”), following the lead of New York City, has introduced several bills that would regulate the use of AI-infused decision-making tools. One such bill, the New York Workforce Stabilization Act (“NYWFSA”), was introduced in May 2024 by Senators Michelle Hinchey and Kristen Gonzalez. They will likely reintroduce the NYWFSA during the upcoming January 2025 legislative session, intending to “stabilize” New York’s labor market at a time when the deployment of AI may fundamentally alter the New York industrial landscape.
The Department of Labor's (DOL) May 16, 2024 guidance, Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers, published in response to the mandates of Executive Order 14110 (EO 14110) (Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), weighs the benefits and risks of an AI-augmented workplace and establishes Principles that endeavor to ensure the responsible and transparent use of AI. The DOL’s publication of these Principles follows in the footsteps of the EEOC’s and the OFCCP’s recent guidance on AI in the workplace and mirrors, in significant respects, the letter and spirit of their pronouncements.
While not “exhaustive,” the Principles “should be considered during the whole lifecycle of AI,” from “design to development, testing, training, deployment and use, oversight, and auditing.” Although the DOL intends the Principles to apply to all business sectors, the guidance notes that not all Principles will apply to the same extent in every industry or workplace, and thus they should be reviewed and customized based on organizational context and input from workers.
While not defined in the Principles, EO 14110 defines artificial intelligence as set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
In line with the mandates of President Biden’s Executive Order 14110, entitled “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and its call for a coordinated U.S. government approach to ensure responsible and safe development and use of artificial intelligence (AI) systems, the Office of Federal Contract Compliance Programs (OFCCP) has published a Guide addressing federal contractors’ use of AI in the context of Equal Employment Opportunity (EEO).
As discussed below, the Guide comprises a set of common questions and answers about the intersection of AI and EEO, as well as so-called “promising practices” federal contractors should consider implementing in the development and deployment of AI in the EEO context. In addition, the new OFCCP “landing page” in which the new Guide appears includes a Joint Statement signed by nine other federal agencies and the OFCCP articulating their joint commitment to protect the public from unlawful bias in the use of AI and automated systems.
In response to President Biden’s Executive Order 14110 calling for a coordinated U.S. government approach to ensuring the responsible and safe development and use of AI, the U.S. Department of Labor Wage and Hour Division (WHD) issued Field Assistance Bulletin No. 2024-1 (the “Bulletin”). This Bulletin, published on April 29, 2024, provides guidance on the application of the Fair Labor Standards Act (FLSA) and other federal labor standards in the context of increasing use of artificial intelligence (AI) and automated systems in the workplace.
Importantly, reinforcing the DOL’s position expressed in the Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, the WHD confirms that the historical federal laws enforced by the WHD will continue to apply to new technological innovations, such as workplace AI. The WHD also notes that, although AI and automated systems may streamline tasks for employers, improve workplace efficiency and safety, and enhance workforce accountability, implementation of such tools without responsible human oversight may pose potential compliance challenges.
The Bulletin discusses multiple ways in which AI interacts with the Fair Labor Standards Act (“FLSA”), the Family and Medical Leave Act (“FMLA”), the Providing Urgent Maternal Protections for Nursing Mothers Act (“PUMP Act”), and the Employee Polygraph Protection Act (“EPPA”). The Bulletin makes the following pronouncements regarding the potential compliance issues that may arise due to the use of AI to perform wage-and-hour tasks:
Since the dawn of digitalization, the collection and retention of personal and other business confidential data by employers has implicated security and privacy challenges—by amassing a treasure trove of data for bad actors (or unwitting/unauthorized employees) and drawing a roadmap for those seeking to breach the system. Adding artificial intelligence (AI) into the mix creates further areas of concern. A recent survey of more than 2,000 human resources professionals undertaken by the Society for Human Resource Management indicates that AI is being utilized by the majority of ...
As the implementation and integration of artificial intelligence and machine learning tools (AI) continue to affect nearly every industry, concerns over AI’s potentially discriminatory effects in the use of these tools continue to grow. The need for ethical, trustworthy, explainable, and transparent AI systems is gaining momentum and recognition among state and local regulatory agencies—and the insurance industry has not escaped their notice.
On January 17, 2024, the New York State Department of Financial Services (“NYSDFS”) took a further step towards imposing ...
Almost a decade ago, in September 2014, California was the first state in the nation to enact legislation prohibiting non-disparagement clauses that aimed to prevent consumers from writing negative reviews of a business. Popularly referred to as the “Yelp Bill,” AB 2365 was codified at California Civil Code Section 1670.8, which prohibits businesses from threatening or otherwise requiring consumers, in a contract or proposed contract for sale or lease of consumer goods, to waive their right to make any statement—positive or negative—regarding the business or ...
While recent public attention has largely focused on generative artificial intelligence (AI), the use of AI for recruitment and promotion screening in the employment context is already widespread. It can help HR professionals make sense of data as the job posting and application process is increasingly conducted online. According to a survey conducted by the Society for Human Resource Management (SHRM),[1] nearly one in four organizations use automation and/or AI to support HR-related activities, such as recruitment, hiring, and promotion decisions, and that number is poised ...
The five-member Board of the California Privacy Protection Agency (the “CPPA”) held a public meeting on September 8, 2023, to discuss a range of topics, most notably, draft regulations relating to risk assessments and cybersecurity audits. Once the regulations are finalized and approved after a formal rulemaking process, they will impose additional obligations on many businesses covered by the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The Board’s discussion of these draft regulations is instructive for ...
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for use of automated decision-making systems in the employment context. Robert Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopic science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written, and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees. Per the fact sheet, the bill would:
California businesses, including employers, that have not already complied with their statutory data privacy obligations under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), including as to employee and job applicant personal information, should be taking all necessary steps to do so. See No More Exceptions: What to Do When the California Privacy Exemptions for Employee, Applicant and B2B Data Expire on January 1, 2023. As background, a covered business is one that “does business” in California and either has annual gross revenues over $25 million, annually buys, sells, or shares the personal information of 100,000 or more consumers or households, or derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information. The law also applies, in certain circumstances, to entities that control or are controlled by a covered business, and to joint ventures. Covered businesses may be exempt from obligations under certain enumerated entity-level or information-level carve-outs.
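For illustration only (and not as legal advice), the alternative coverage thresholds described above can be sketched as a simple check. The following Python snippet is a minimal, non-authoritative example; the data structure, field names, and function are hypothetical, and the entity-level and information-level exemptions are not modeled.

```python
from dataclasses import dataclass


@dataclass
class BusinessProfile:
    """Hypothetical inputs mirroring the CCPA coverage thresholds discussed above."""
    does_business_in_california: bool
    annual_gross_revenue_usd: float
    consumers_or_households_bought_sold_shared: int
    share_of_revenue_from_selling_or_sharing_pi: float  # expressed as 0.0 to 1.0


def is_likely_covered_business(p: BusinessProfile) -> bool:
    """Rough sketch of the three alternative thresholds; meeting ANY one suffices.

    1. Annual gross revenues over $25 million;
    2. Annually buys, sells, or shares personal information of 100,000 or more
       consumers or households;
    3. Derives 50 percent or more of annual revenue from selling or sharing
       consumers' personal information.
    """
    if not p.does_business_in_california:
        return False
    return (
        p.annual_gross_revenue_usd > 25_000_000
        or p.consumers_or_households_bought_sold_shared >= 100_000
        or p.share_of_revenue_from_selling_or_sharing_pi >= 0.50
    )


# Example: a $30M-revenue business operating in California meets threshold 1.
print(is_likely_covered_business(BusinessProfile(True, 30_000_000, 20_000, 0.10)))  # True
```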
Since late October 2021, when the Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence (AI) and Algorithmic Fairness, the agency has taken several steps to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces, including Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Among other things, the EEOC has hosted disability-focused listening and educational sessions, published technical assistance regarding the ADA and the use of AI and other technologies, and held a public hearing to examine the use of automated systems in employment decisions.
A recent WSJ article by Laura Cooper about a private equity firm using AI to source investment opportunities presages a larger challenge facing employees and employers: AI tools do “the work of ‘several dozen humans’” “with greater accuracy and at lower cost.” In the competitive and employee-dense financial services sector, AI tools can provide a competitive advantage.
Ms. Cooper cites San Francisco-based Pilot Growth Equity Partners, one of a growing number of equity investment firms to utilize AI. Pilot Growth has developed “NavPod,” a cloud-based ...