I disagree. If you focus on holding the software creators to account in lieu of the humans in the loop, then we only reinforce the behavior of offloading thinking to the system.
If I am a cop in another jurisdiction and I see that, in this case of error, the facial recognition company was held to account but not the police or municipality, I will be more likely to blindly trust the software, assuming they either patched it or will take responsibility.
> ShadowBroker is a real-time, full-spectrum geospatial intelligence dashboard
You might consider changing this to a more accurate headline, like "Air and Space domain awareness."
"Full-spectrum geospatial intelligence" most commonly refers to full-color satellite photos (sometimes including near infrared).
In the geospatial world, "spectrum" almost always takes on its literal meaning: the spectrum of light. And "geospatial intelligence" refers to intelligence gathered from geospatial platforms, not intelligence about the locations of those platforms.
Because I positioned it that way. I keep getting urged by "the man" to look into using AI. This is the only way it'll ever happen; I'm not wasting my personal time or resources on it.
It seems like the model became paranoid. For the past few hours, it has been classifying almost all inbound mail as "hackmyclaw attack."[0]
Messages that earlier in the process would likely have been classified as "friendly hello" (scroll down) now seem to be classified as "unknown" or "social engineering."
The prompt engineering you need to do in this context is probably different than what you would need to do in another context (where the inbox isn't being hammered with phishing attempts).
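To illustrate the point, here is a minimal sketch of that context-dependence. The label names ("friendly hello", "unknown", "social engineering", "hackmyclaw attack") come from the thread; the keyword heuristic and the `hostile_inbox` flag are purely hypothetical stand-ins for whatever the real model and prompt do:

```python
# Illustrative sketch only: a stand-in classifier showing how a hostile
# context shifts labels toward suspicion. The keyword heuristic is a
# placeholder, not the actual system's logic.

SUSPICIOUS = {"password", "urgent", "verify", "click", "wire"}

def classify(message: str, hostile_inbox: bool = False) -> str:
    words = set(message.lower().split())
    hits = len(words & SUSPICIOUS)
    if hits >= 2:
        return "hackmyclaw attack"
    if hits == 1:
        return "social engineering"
    # Under a phishing barrage, benign-looking mail no longer gets the
    # benefit of the doubt: "friendly hello" degrades to "unknown".
    return "unknown" if hostile_inbox else "friendly hello"

print(classify("hi there, just saying hello"))                      # friendly hello
print(classify("hi there, just saying hello", hostile_inbox=True))  # unknown
print(classify("urgent, verify your password"))                     # hackmyclaw attack
```

The same message gets a different label once the surrounding context changes, which is why prompt tuning done against a calm inbox may not transfer to one under attack.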
We should demand accountability for both.