But following the surface of a sphere causes you to constantly change direction
Uppies for all of you!
Put your foot down everywhere then – it’s a fallacy to think it’s not worth resisting data harvesting just because it already gets collected “everywhere” anyway. Take it one step at a time and make it harder and harder; opting out of this is just one step.
Isn’t reducing the size of the dataset worth it? I’d rather them have a picture from three years ago than a new scan every month or two.
It’s not such a binary thing as winning or losing; it’s a constantly shifting process. The only way to actually lose is by giving up – instead, think of it as making it as hard as possible for your privacy to be infringed upon. Sometimes it’s more inconvenient, but what makes us such a farmable populace is our reluctance to be inconvenienced. Be good at being uncomfortable.
I refused, it went fine. I had to repeat myself because it was unexpected and dudebro wasn’t prepared, and they had to turn on the other machine and wait for it to start up, but it only delayed me like 2 minutes. The more people ask, the easier it gets.
I fucking love beans
What would be extremely rock and roll (punk rock, even) is donating all of the proceeds from that show to pro-union efforts.
#DonateItDave, or something
The mishandling is indeed what I’m concerned about most. I now understand far better where you’re coming from, sincere thanks for taking the time to explain. Cheers
Thanks for the response! It sounds like you had access to a higher quality system than the worst, to be sure. Based on your comments I feel that you’re projecting your confidence in that system onto the broader topic of facial recognition in general; you’re looking at a good example while people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn’t we treat all facial recognition implementations as unacceptable if only the best – and presumably most expensive – ones are acceptable?
A rhetorical question, as an aside: is determining one’s identity an application where anything below the unachievable success rate of 100% is acceptable?
Can you please start linking studies? I think that might actually turn the conversation in your favor. I found a NIST study (pdf link), on page 32, in the discussion portion of 4.2 “False match rates under demographic pairing”:
The results above show that false match rates for imposter pairings in likely real-world scenarios are much higher than those measured when imposters are paired with zero-effort.
This seems to say that the false match rate gets higher and higher as the subjects are more demographically similar; the highest error rate on the heat map below that is roughly 0.02.
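To put that 0.02 in perspective, here’s a rough back-of-the-envelope sketch. The FMR is the heat-map figure mentioned above; the comparison counts are made-up numbers just for illustration, not from the study:

```python
# Rough sketch: expected false matches at a given false match rate (FMR).
# fmr = 0.02 is the heat-map figure mentioned above; the comparison counts
# below are illustrative assumptions, not numbers from the NIST study.
fmr = 0.02  # ~1 in 50 imposter pairs incorrectly matched

for imposter_comparisons in (1_000, 100_000, 10_000_000):
    expected_false_matches = fmr * imposter_comparisons
    print(f"{imposter_comparisons:>12,} imposter comparisons "
          f"-> ~{expected_false_matches:,.0f} expected false matches")
```

Even at a modest number of lookups, roughly 1-in-50 wrong matches between demographically similar people adds up fast.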
Something else no one here has talked about yet – no one is actively trying to get identified as someone else by facial recognition algorithms. This study was done on public mugshots, so no effort was made to fool the algorithm, and the error rates between similar demographics are still atrocious.
And my opinion: Entities using facial recognition are going to choose the lowest bidder for their system unless there’s a higher security need than, say, a grocery store. So, we have to look at the weakest performing algorithms.
What is a question of analogues?
The italics are a nice hint. Good Poe’s Law submission.