Part 4: Glassholes & Privacy
Digital devices don’t have an inherent understanding of where they are in the world. GPS can only tell you your rough vicinity, and it degrades or fails entirely indoors and in dense urban environments.
To solve the positioning problem, everyone from Tesla to Apple, and from ByteDance to Snap, has turned to the camera. By comparing your device’s camera feed against centralized databases of what the world looks like, these systems can calculate precisely where the device is.
The technology is incredibly impressive: companies like Google and Niantic can localize your device, from its camera feed alone, to a precision of mere centimeters in many mapped public spaces.
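Under the hood, this is classical computer vision: extract features from the live frame, match them against a database of geo-referenced landmarks, and solve a perspective-n-point problem for the camera pose. Here is a minimal sketch of that pipeline using OpenCV; the map format and function names are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal visual-positioning sketch: match ORB features from a camera frame
# against a pre-built map of geo-referenced 3D landmarks, then solve PnP.
# The map format and this function are illustrative, not a real vendor API.
import cv2
import numpy as np

def localize(frame_gray, map_descriptors, map_points_3d, K):
    """Estimate the camera's world position from one grayscale frame.

    map_descriptors : ORB descriptors of known landmarks (N x 32, uint8)
    map_points_3d   : their 3D positions in world coordinates (N x 3)
    K               : 3x3 camera intrinsics matrix
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None

    # Match the frame's features against the landmark database.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:
        return None  # too few correspondences for a reliable pose

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    world_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # Perspective-n-Point with RANSAC rejects outlier matches and recovers
    # the camera's rotation and translation in world coordinates.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        world_pts, image_pts, K, distCoeffs=None,
        reprojectionError=3.0, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    R, _ = cv2.Rodrigues(rvec)
    return -R.T @ tvec  # camera center in the world frame
```

The hard part, and the reason these systems are centralized, is building and hosting the landmark map itself, which is far too large to ship to every device.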
Their models have been trained on billions of Google Street View photos and user-generated images. Niantic, for example, allowed Pokémon GO players to contribute data to its world map while playing.
And therein lies the rub. Is this data collection always handled ethically, with informed user consent? A careful read of the terms of service for some of these positioning services reveals that they often push the job of obtaining user consent onto the third-party developers who build apps on top of them.
It may not seem like a pressing issue now, while our devices are handheld. But consider a world of ubiquitous AR glasses and camera-equipped robots whose sensors are always on.
Big tech will soon be looking through your eyes, literally, because that’s how their positioning technology works. The APIs have already been built; you can use them today! Soon the cameras will move out of our pockets and hands and onto our faces.
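To make the privacy implication concrete, here is a hypothetical sketch of what a query to a cloud positioning service looks like from the client side; the endpoint and field names are invented for illustration. The essential point is architectural: to get a precise pose back, your device uploads the frame its camera just captured (or features derived from it) to someone else’s server.

```python
# Hypothetical client flow for a cloud visual positioning service.
# The endpoint and field names below are invented for illustration;
# the architecture (frame up, pose down) is the point.
import requests

def query_vps(jpeg_bytes: bytes, coarse_lat: float, coarse_lng: float) -> dict:
    response = requests.post(
        "https://vps.example.com/v1/localize",  # hypothetical endpoint
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"lat": coarse_lat, "lng": coarse_lng},  # GPS prior narrows the search
        timeout=5,
    )
    response.raise_for_status()
    # The server matches the frame against its world model and returns a
    # centimeter-level pose. It has also, necessarily, seen your frame.
    return response.json()  # e.g. {"position": [...], "orientation": [...]}
```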
That’s their plan. You can read the inspired and chilling short story “End User” by Alastair Reynolds to see why this is a future you should be deeply worried about.
The prevailing internet business model of data collection, and our growing complacency toward this centralization of data, pose a serious threat to our cognitive liberty. Allowing a handful of companies to literally see the world through our eyes is one of the most perversely dystopian outcomes imaginable, yet it is the outcome we are hurtling toward at breakneck speed.
If visual positioning truly is the future of spatial computing, as it well may be, how can we embrace the powerful technologies that it will enable while preserving privacy in a world where the camera has to be on all the time?
A great irony is that we were incredibly suspicious of face-worn cameras in the bygone era of Google Glass, labeling early adopters “glassholes”; some were even physically assaulted. Today, our culture has grown far more numb to infringements on our privacy and cognitive liberty. When Meta launched its latest smart glasses, Mark Zuckerberg stood in front of a visually stunning presentation with the words “Capture” and “Listen” prominently describing the glasses’ new AI-interpretation features.
This time, the glasshole was hailed with applause.