The computer vision AI models we develop, including our face recognition models, are among the most powerful and accurate in the world. Like most technology, AI models can be used for their intended positive purposes or can be used to do harm. Computer vision can play an instrumental role in identity, security, and safety, including enterprise security, the convergence of physical access control and video analytics, frictionless payments, frictionless travel, identity programs, and other use cases that provide notice to, and require consent from, those interacting with the technology. We seek to protect and amplify appropriate uses of our technology while ensuring our models are not used for objectionable purposes or by bad actors.
We believe in AI that is both Ethically Trained and Conscientiously Sold.
- Ethically Trained: Paravision trains and benchmarks its computer vision models, including its face recognition models, on large amounts of diverse, annotated data. To that end, we:
- Ensure sufficient quantity and diversity of data is used to create fair models. We aim to create AI models that treat all individuals and groups fairly and equitably, regardless of their demographic profiles. Our models are only as good as our ability to reduce bias. To accomplish this objective, we seek to professionally source and annotate data from around the world, ensuring representation across gender, age, ethnicity, and other protected characteristics. This work is never done.
- Obtain all necessary rights in data. How data is sourced should inform how we collect it. We will use widely adopted, well-recognized public datasets only after reviewing their corresponding licenses. Beyond public datasets, we will ensure that we have obtained all necessary consents, including appropriate releases, before collecting data for training purposes, and we will work with data providers who follow proper practices.
- Invest heavily in benchmarking. Training on high-quality, annotated data is only one piece of the puzzle. We have made and will continue to make significant investments in internal benchmarking data and tools that help us discern where our models are deficient or have performance gaps, so we can methodically and purposefully close these gaps.
- Conscientiously Sold: We fully vet every potential partner and customer with whom we do business. Our leadership team reviews and approves each use of the technology and the thoughtfulness applied to issues of privacy, security, bias, and human rights. To help ensure our technology is always conscientiously sold, we:
- Only sell AI models that meet our standards of quality. We make AI models commercially available only when they meet our exacting standards of quality, as measured by metrics such as, by way of example, whether the model: 1) has been thoroughly and openly tested by an independent third party such as NIST; 2) has been found to meet the highest standards of biometric accuracy; or 3) has shown minimal differential performance across demographics.
- Limit distribution geographically. We maintain, and make available on request, a list of countries in which we will not do business at all. This list is informed by the United States Department of State as well as human rights organizations. Further, we will not sell our technology for law enforcement, defense, or intelligence applications outside of the United States, its close allies, and other democracies.
- Limit distribution by use case. We will sell to law enforcement, defense, and intelligence agencies only when we believe sufficient legislation or process is in place to govern the technology's use, and we will never sell the technology for use cases intended to discriminate against any individual or group based on protected characteristics. We believe computer vision, and specifically face recognition, can be a powerful tool for law enforcement to solve and prevent crime and for defense and intelligence agencies to protect our nations, provided the face recognition solution used: 1) complies with applicable laws and is used for properly defined purposes; 2) requires rigorous training for users on its legitimate and lawful use; and 3) operationally includes thorough human review and analysis before any consequential decision is made or action is taken. In no event will we permit the use of our technology for lethal autonomous weapons systems (LAWS).
- Ensure a baseline level of accuracy for use cases. We collaborate closely with our partners and customers on integration to ensure that our AI models achieve a baseline level of accuracy in a development or test pilot environment before they are used in a production environment.
We ask each of our employees to abide by these principles and to keep them as a “guiding light” when fulfilling their daily responsibilities. In addition to having each employee acknowledge these principles, we foster an environment where employees can identify and address these considerations in their work. For example, we review all use cases as part of our sales review process to confirm alignment with these principles. We also provide forums, such as company-wide meetings, departmental reviews, and quarterly OKRs, that serve as guideposts and opportunities for healthy discourse, debate, and critique. Each quarter, the management team shall review and assess the company’s compliance with these principles.