Facial recognition technology is not the big story in the linked article. Where this company obtained its data is a far bigger concern, IMHO.
Breaking the law in order to support those enforcing the law is not a great business model. I am not saying any laws, or even privacy policies, were broken in this case.
What I am saying is that if you're a technology startup, an established business, or a government employee, stay the f*** away from questionable data sources, dumbass.
I say that in relation to this story because it was the first thing that came to my mind. At the highest level, where did they get the images they're comparing unknown images to? Scraping them from publicly available sources on the internet? Social network APIs? The story simply mentions Facebook and YouTube.
Do you recall authorizing them to use your data that way? Did you know that since 2014, most social networks have explicitly prohibited access to and use of their platforms' data for third-party law enforcement tools like this?
I did, because I designed a digital evidence solution that leveraged AI and social platforms in 2013. My concept, however, was based on data voluntarily submitted by the public for the specific purpose of aiding law enforcement.
Stay Out of the Gray Area
I say that to say this: always remember scope. Never use data with questionable provenance, and no, it's not okay to pass the buck. The person or company that brought it to you is either a custodian or a thief.
If you want to leverage a solution like this, get off your fat ass and do some research on the data's origin first. It would suck to have every case where your agency used a particular solution come back through the courts because of your laziness. Don't be that guy.
Where did you get your data?