SAN FRANCISCO, California – Paravision, the U.S.-based leader in mission-critical computer vision, today announced that it has appointed Elizabeth M. Adams as Chief AI Ethics Advisor, and has published a set of AI Principles to guide the ethical development and appropriate use of face recognition and related technologies.
Adams is currently a Race & Technology Fellow at Stanford University’s Center for Comparative Studies in Race and Ethnicity, in partnership with the Institute for Human-Centered Artificial Intelligence and the Digital Civil Society Lab. Her Civic Tech leadership and influence in the city of Minneapolis are driving improvements in governance and policy by embedding ethics, accountability, oversight, and transparency in AI-enabled systems.
Over the last 20 years, Adams has spearheaded several technology initiatives in the Washington, D.C. metro area for Fortune 100 companies as well as the Department of Defense and the Defense Intelligence Agency. She teaches, speaks, and writes on critical subjects within Diversity and Inclusion in Artificial Intelligence, such as racial bias in facial recognition technology, video and data surveillance, predictive analytics, and children’s rights.
Face recognition technology has the potential to improve our lives in profound ways, but it must be developed and deployed with the right intentions and safeguards. Elizabeth’s leadership, her expertise in addressing AI racial bias and her vision for realizing a better future with AI will be an invaluable resource for Paravision.
Doug Aley, Paravision CEO
While Paravision is already a leader in developing face recognition with an eye toward reducing bias, Adams will help further sensitize the entire Paravision workforce to ethical issues and spearhead the next phase of tech inclusion for the company. In concert with her sensitization process, called Ethical Tech Design, Paravision will deeply integrate this ethical workflow into its product development process as well as its partner engagement and solution deployment approach.
Elizabeth has been at the forefront of thinking about accountable AI, and any organization would be fortunate to have her lead efforts to craft a better future for computer vision.
Daniel E. Ho, Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI) and William Benjamin Scott and Luna M. Scott Professor of Law at Stanford University
Adams’ appointment coincides with Paravision formally publishing its AI Principles, which can be found on the company’s website. Paravision’s AI Principles are built on two major premises: that AI-based technology should be ethically trained and conscientiously sold.
- Ethical training centers on the concepts of ensuring sufficient quantity and diversity of data, obtaining proper data rights, and benchmarking aggressively to define and close performance gaps.
- Conscientious selling focuses on limits of distribution by use case and geography as well as ensuring proper levels of accuracy and quality for what is sold.
Paravision has shown a deep desire to pursue the ethical development and use of world-class face recognition and AI-based computer vision technologies. I’m excited to collaborate with the team and guide them to leadership in taking an ethical, inclusive, and thoughtful approach for this critical and challenging technology.
Elizabeth M. Adams, Chief AI Ethics Advisor