Artificial intelligence tools saw a meteoric rise in 2022. With chatbots, AI-generated artwork, increasingly sophisticated deepfake technology, and a host of other innovations, there was no shortage of new use cases leveraging these capabilities. At the same time, these innovations raise the prospect of both intentional misuse and unforeseen consequences that can have a lasting impact on society and industry. This is why an ethical framework for the development and deployment of AI technologies is more important than ever.

A key aspect of any such framework governing an organization’s work with AI is a set of principles that align with institutional core values. At Paravision, we established our own AI principles that guide all aspects of our work and business. The computer vision AI models we develop, including models for face recognition, are among the most powerful and accurate in the world. Like most technology, AI models can be used for their intended positive purposes, or they can be used to do harm. We seek to protect and amplify appropriate uses of our technology while ensuring our models are not used for objectionable purposes or by bad actors. To that end, we are committed to AI that is both ethically trained and conscientiously sold.

When we say ethically trained, we focus on three key factors: using a sufficient quantity and diversity of data to create fair models, obtaining the necessary data rights, and investing heavily in benchmarking. These factors are critical to ensuring we train and benchmark our computer vision models with the diverse, properly annotated data needed to mitigate potential bias and to protect and uphold privacy rights.

Just as critical to our work is deciding with whom we engage in business opportunities and under what circumstances. This is what we mean by building products that are conscientiously sold. As we have stated in the past, “We fully vet every potential partner or customer with whom we do business. Our leadership team understands and approves of the use of the technology and the level of thoughtfulness around issues of privacy, security, bias and human rights.” But what does this truly mean, and how do we achieve that goal?

First, we only sell AI models that meet our standards of quality. We ensure our models remain among the industry’s best through rigorous benchmarking and constant improvement. Next, we limit distribution of our products geographically. We maintain, and make available on request, a list of countries in which we will not do business at all. This list is informed by the United States Department of State as well as human rights organizations. Further, we will not sell our technology for law enforcement, defense, or intelligence applications outside of the United States, its close allies, and other democracies. Given the power and capabilities of our products, it is imperative that we do all we can to prevent their use by actors whose values and goals don’t align with our own or, more broadly, with democratic society.

There is, however, an additional internal step Paravision takes that sets us apart from many businesses and organizations. We maintain a rigorous use case review process as another means of identifying business opportunities that align with our company’s core values, thereby limiting distribution of this technology.

For our purposes, a use case is defined as a specific transaction in which our products would be licensed for use. Our internal process involves discussions by senior leaders from across the company to determine which use cases are acceptable given our principles and values. If we cannot agree that our products will be used ethically based on these criteria, the use case is deemed unacceptable and we decline the opportunity. Since our use case review was implemented in late 2020, we’ve declined many opportunities where we felt our technology might be misused, or where there weren’t sufficient regulatory, policy, process, or technical safeguards in place to protect against misuse.

We will not work with law enforcement, defense, and intelligence agencies unless we believe sufficient legislation or processes exist to govern the technology’s use. Additionally, we will never sell the technology for use cases intended to discriminate against any individual or group based on protected characteristics. We believe vision AI, and specifically face recognition, can be a powerful tool for law enforcement to solve and prevent crime and for defense and intelligence agencies to protect our nations, provided the face recognition solution used: 1) complies with applicable laws and is used for properly defined purposes; 2) requires rigorous training for users on its legitimate and lawful use; and 3) operationally includes thorough human review and analysis before any consequential decision is made or action is taken. Additionally, in no event will we permit the use of our technology for lethal autonomous weapons systems (LAWS).

Recent accounts describe how face recognition evidence has been wrongly used as the basis for arrests, and Georgetown Law’s Center on Privacy and Technology has released a report on the use of face recognition in criminal investigations, its misuse, and its unintended consequences. These examples demonstrate the ramifications this technology can have on individual lives and why human analysis and strict processes to protect rights must be part of any use case.

Given the importance and far-reaching implications of our product capabilities, it is incumbent on us, as a responsible steward within the AI industry, to ensure our products are not just sold and used legally, but that we take a more in-depth and thoughtful approach to their deployment.