The use of facial recognition by law enforcement to identify offenders is a case where regulation lags far behind the technology, and there is an urgent need for policy that limits the potential harm to society. The technology has serious inherent problems, such as misidentification and a poor correspondence between the training datasets and the photos encountered in actual use.
According to a recent report in TechCrunch (1), the Home Minister of India admitted that a model trained on the Aadhaar database was used to identify individuals involved in a recent riot. The software was originally procured for a worthy cause, identifying missing children, although according to the same report (1) it failed even to distinguish between boys and girls. Recent advances in AI have made it possible to build recognition models by training on huge datasets; unfortunately, these models are effectively black boxes. Beyond accuracy, there are open questions about how well such systems comply with privacy laws and about their inherent cybersecurity risks.
To address these issues, we should open a discourse on these topics with experts. Rather than the Western approach of banning the technology outright (2), we should try to find a middle ground. Ideally, policy should require thorough documentation of how a model was trained and of the dataset used in the process. Once a model is trained and put into production, there should be a mandatory requirement to disclose its uses, along with its false negatives, in a standardized manner (a sketch of such a metric follows).
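To make that disclosure concrete, here is a minimal sketch, in Python, of the kind of metric an agency could be required to report: the false negative rate computed from audited match decisions. The function and the sample data are illustrative assumptions, not drawn from the cited report.

```python
def false_negative_rate(predictions, ground_truth):
    """Share of genuine matches the system missed (false negatives).

    predictions and ground_truth are parallel lists of booleans,
    where True means "this probe photo matches the enrolled person".
    """
    missed = sum(1 for pred, actual in zip(predictions, ground_truth)
                 if actual and not pred)
    actual_positives = sum(ground_truth)
    return missed / actual_positives if actual_positives else 0.0

# Hypothetical audit sample: 4 genuine matches, of which the system caught 3.
preds = [True, False, True, True, False]
truth = [True, True, True, True, False]
print(f"False negative rate: {false_negative_rate(preds, truth):.0%}")  # 25%
```

A published figure like this, alongside the documentation of the training data, would give reviewers a concrete basis for judging whether the system is fit for the use it is being put to.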
Using this information, there should be regular reviews of the model's efficacy. A mechanism should be devised so that a person identified by the system can appeal for manual review by an expert. A person should also be able to opt out of being incriminated by the use of this technology. Such mechanisms, built into the policy, could address privacy concerns as well.