Navigating the Maze of Biometric Privacy: A New Frontier in Employment Law

As 2024 unfolds, the worlds of work and privacy are undergoing dramatic transformations, primarily driven by the rapid advancements in artificial intelligence. This evolution spotlights the complex and often contentious relationship between biometric privacy and workplace dynamics.

Understanding Biometrics in the Workplace

Biometrics, at its core, involves two key processes: identification (establishing who an individual is) and authentication (verifying the claimed identity). This is achieved through unique physiological, biological, or behavioral characteristics, such as facial geometry, fingerprints, voice prints, retina or iris scans, vein patterns, and even gait. A mathematical representation, or biometric key, is derived from these identifiers and encrypted for security. Importantly, this key is generally non-reversible, preventing direct access to the original biometric data, thus enhancing security against potential hacking.
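To make the idea of a non-reversible biometric key concrete, here is a minimal Python sketch. It is a toy illustration only: the function name and the feature-vector placeholder are hypothetical, and real biometric systems use specialized template-protection schemes that tolerate sensor noise rather than a plain cryptographic hash.

```python
import hashlib
import os

def derive_biometric_key(template: bytes, salt: bytes) -> str:
    """Derive a non-reversible 'biometric key' from a biometric template.

    A plain salted SHA-256 hash is used here purely to illustrate
    non-reversibility; production systems use fuzzy-matching template
    protection, not an exact hash.
    """
    return hashlib.sha256(salt + template).hexdigest()

# Hypothetical enrollment: only the salt and derived key are stored,
# never the raw biometric identifier itself.
salt = os.urandom(16)
enrolled_template = b"facial-geometry-feature-vector"  # placeholder data
stored_key = derive_biometric_key(enrolled_template, salt)

# Authentication: the same identifier reproduces the same key...
assert derive_biometric_key(b"facial-geometry-feature-vector", salt) == stored_key
# ...while the stored key cannot be reversed to recover the original data.
```

The design point the article makes is visible here: even if the stored key is stolen, it does not expose the underlying biometric identifier, which reduces (though does not eliminate) the harm of a breach.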

Employers utilize biometrics for various purposes, including during pre-employment processes to authenticate candidates, for secure access to facilities or IT infrastructure, to track work hours accurately, and to prevent fraudulent timekeeping practices. The benefits of biometrics are clear: increased efficiency, enhanced security, error minimization, insights into working patterns, cost reduction, and reduced risks associated with lost or stolen access devices. Notably, biometric data, due to its inherent complexity, offers a degree of protection against hacking and data breaches.

The Rise of Biometric Surveillance and Its Perils

Joy Buolamwini, President of The Algorithmic Justice League, offers profound insights in Wired about the growing use of biometric surveillance systems. AI-powered facial recognition in public spaces and for accessing government services represents both an advancement in security and a potential for significant privacy breaches, including biometric identity theft and the emergence of anti-surveillance technologies.

A critical issue highlighted by Dr. Buolamwini is the growing threat of biometric identity theft, exacerbated by generative AI tools and the vast amount of biometric data available online. This emerging concern is particularly relevant for employers who rely on biometric data for security purposes. As individuals become increasingly adept at using AI to replicate or steal biometric identifiers, businesses must enhance their strategies to safeguard sensitive biometric data.

Beyond biometric identity theft lies a more basic question: the accuracy of biometric data in the first place. Take, for example, facial recognition technology. While this seemingly benign technology offers convenience in personal devices and social media, it also has far-reaching implications beyond these uses, as Alex Najibi outlines in the Harvard University Graduate School of Arts and Sciences' Science in the News blog.

For example, facial recognition software is employed in critical areas like law enforcement surveillance, airport screening, and even in making employment and housing decisions. Despite its widespread adoption, this technology faces bans in several cities due to its inaccuracy and privacy concerns.

In fact, facial recognition, among various biometric technologies, is notably the least accurate. This lack of precision is compounded by privacy concerns, especially given its use by law enforcement agencies. Alarmingly, a significant portion of the U.S. adult population is unknowingly included in facial recognition networks used by the police. This inclusion often occurs without consent or awareness, exacerbated by a lack of regulatory oversight.

The most troubling aspect of facial recognition technology is its inherent racial bias, particularly against Black Americans. Studies, including the pivotal 2018 “Gender Shades” project, have revealed that these technologies show divergent error rates across demographic groups, with the lowest accuracy observed in darker-skinned females. Despite high overall accuracy claims, these algorithms exhibit significant disparities, leading to concerns about equity in face recognition.

The use of facial recognition in law enforcement amplifies existing racial biases in the criminal justice system. Black Americans, already disproportionately affected by biased policing practices, are further victimized by these technologies. They are more likely to be arrested and incarcerated for minor crimes, leading to an overrepresentation in mugshot databases used for facial recognition. This creates a harmful cycle in which biased policing strategies lead to more surveillance and more misidentification of Black individuals. These concerns have fueled the rise of "excoded" communities.

The Rise of the Faceless and Anti-Surveillance Fashion

In response to privacy invasions, we're witnessing the rise of “excoded” communities—individuals whose lives have been negatively impacted by AI systems. Anti-surveillance fashion, including face coverings, is emerging as both a cultural statement and a practical response to these challenges, reflecting diverse biometric regimes across the globe.

With increasing awareness of these concerns, employers may encounter employees who wish to opt out of biometric systems at work. If reliance on biometric data for employment decisions disproportionately affects certain individuals, employers must thoughtfully consider those circumstances, including how to respond to employees who ask to be excluded from biometric data collection.

Legal Protections for Employee Biometric Privacy: A Patchwork Quilt

But what about specific employment-related regulations relating to biometric privacy?

The legal framework for employee biometric privacy in the U.S. is inconsistent and complex. Notably, Illinois' Biometric Information Privacy Act (BIPA) stands out as the most stringent. Enacted in 2008, BIPA prohibits the collection or storage of biometric data without providing notice, obtaining written consent, and making specific disclosures. Furthermore, it requires covered entities to develop a written policy for biometric data retention and destruction.

This patchwork of state laws presents a significant challenge for HR professionals and employment lawyers navigating the varied regulations regarding biometric data protection in the workplace.

And while states like California have made strides with laws like the California Consumer Privacy Act, many states are far behind, without any specific biometric protections at the state level. 

Conclusion: The Need for Proactive Legal Frameworks

The need for comprehensive and proactive legal frameworks to protect biometric privacy in the workplace has never been more pressing. This issue transcends technology and law, touching upon fundamental human rights. Policymakers, legal professionals, and technologists must collaborate to ensure that the future of work is humane and equitable in our increasingly AI-driven world.

Questions about biometrics and compliance with employment laws? Contact Wagner Legal for guidance today!
