Weekly Musings 084

Welcome to this edition of Weekly Musings, where each week I share some thoughts about what’s caught my interest in the last seven days.

If this week’s essay seems a bit disjointed and rushed, my apologies. Over the last few days, I’ve had my head down, wrapping up the writing, editing, and publishing of a new ebook. That’s done, and the book’s been published. If you’re curious, you can read the release announcement.

Back to the letter … What’s below the (virtual) fold is yet another one of those ideas that’s been rattling around in my noggin for a while. What prompted me to finally write this essay was a seemingly innocuous comment made by an acquaintance. He’s going to be email newsletter famous and doesn’t realize it …

With that out of the way, let’s get to this week’s musing.

On Social Media and Data

Back when I had a Netflix account, one series that I always looked forward to was Black Mirror. Admittedly, I have a bit of a thing for dystopian fiction. Especially if that dystopian fiction packs a sense of wry humour and more than a bit of a bite.

Like all good fiction, Charlie Brooker’s series holds a reflective surface up to a corner of our world and shows us what’s going wrong. Or what could easily go wrong. An episode from series five really brought that home.

The episode, titled “Smithereens”, illustrates not just the pervasiveness of social media and our addiction to it, but also what social media companies could be doing with our data. The story is simple: an angry, vengeful individual kidnaps a low-level functionary at a social networking company that, for the purposes of the plot, is a mashup of Facebook and Twitter. The character’s plan is to force the company to admit it’s culpable in his wife’s death.

Of course, there is the usual standoff with the law. As the police struggle to find out who the kidnapper is and what his motivations are, engineers at the company move into action. Using the data collected on their platform and sophisticated data mining tools, they quickly learn who the kidnapper is, what his online habits are, and what his history is. They're even able to passively listen in on the kidnapper's phone while he's on hold. All this while the hapless and technologically challenged cops gawk in disbelief and awe.

All that seems a tad fanciful, doesn't it? You can argue that social media companies can't do all of that. At least not yet. As with just about everything in an episode of Black Mirror, there's a grain or seven of truth in there.

That episode, combined with everything that hits the news, should give you pause to consider what big tech companies — not just Twitter and Facebook, but also companies like Google and Amazon, Uber and Netflix — know about us. To consider what data they hold about us. To consider how they’re using that data and why.

On the surface, that data seems innocuous. What you’ve posted. What you’ve liked or voted down. What you’ve viewed and commented on. Who you’ve interacted with. Where you’ve been. All that’s more than a simple trail of breadcrumbs. It’s a veritable roadmap of how you use certain services and, often, how you use the wider web.

A bunch of years ago, the idea of the social graph was all the rage — you heard about it whenever people talked about social media. It’s not mentioned all that much now, but it serves as the basis for how social media and other tech companies keep an eye on you.

So just what is the social graph? According to Brad Fitzpatrick, the social graph is the global mapping of everybody and how they're related. The graph can show who you're connected with, how you're connected, how you're interacting, and where online you're interacting. That's a lot of information about you and the folks you encounter when strolling the avenues and boulevards of the internet. More to the point, there are a lot of potential threads that algorithms can follow and from which they can generate connections and assumptions.
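
To make that a bit more concrete, here's a minimal sketch, in Python, of what a sliver of a social graph might look like. The names, fields, and structure are my own invention, not how any real platform stores its data, but the idea is the same: people are nodes, and the edges record who is connected to whom, how, and where.

    from collections import defaultdict

    class SocialGraph:
        """A toy social graph: people are nodes, edges record how they interact.

        Purely illustrative. The fields and structure here are assumptions,
        not a description of any real platform's data model.
        """

        def __init__(self):
            # person -> list of (other_person, relationship, platform, interaction_count)
            self.edges = defaultdict(list)

        def connect(self, a, b, relationship, platform, interactions=1):
            """Record a connection between two people, in both directions."""
            self.edges[a].append((b, relationship, platform, interactions))
            self.edges[b].append((a, relationship, platform, interactions))

        def connections_of(self, person):
            """Who is this person connected to, how, and where?"""
            return self.edges[person]

    # A tiny, entirely fictional slice of a graph.
    graph = SocialGraph()
    graph.connect("alice", "bob", "follows", "twitter", interactions=42)
    graph.connect("alice", "carol", "friend", "facebook", interactions=7)
    graph.connect("bob", "carol", "commented_on", "instagram", interactions=3)

    for other, relationship, platform, count in graph.connections_of("alice"):
        print(f"alice -> {other}: {relationship} on {platform} ({count} interactions)")

Even a toy like that hints at how much can be read off the edges alone, before anyone looks at what you actually posted.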

For me, the biggest problem with the social graph is that it’s a mapping in the hands of people whose motivations aren’t clear. You don’t know how they’re going to use that information to target you and the people with whom you’re connected. If that doesn’t give you the willies, think about how tech companies (or at least their algorithms) make connections and recommendations based on all of that.

Those connections enable tech companies to show us what they think we want to see. Or just what they, or their partners and customers believe we should see. You can joke, for example, about how off Netflix’s recommendations are — suggesting you watch a rom-com after viewing a handful of cheap action films or a serious documentary or three. But there’s a flip side to that. Algorithms target us with words, news, photos, and more that supposedly appeal to us. That tug at our emotions. That raise our collective gorges. That affirm what we believe and think we know. That whip us into irrational collective frenzies.
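To see what I mean by algorithms following threads through the graph, here's a deliberately crude, hypothetical recommendation sketch in Python. It just counts how often something was liked by people one or two hops away from you. The data and names are made up, and real recommendation systems are vastly more sophisticated, but the principle of mining connections is the same.

    from collections import Counter

    # Entirely made-up data: who follows whom, and what each person has liked.
    follows = {
        "alice": ["bob", "carol"],
        "bob": ["carol", "dave"],
        "carol": ["dave"],
        "dave": [],
    }
    likes = {
        "alice": {"cheap_action_film"},
        "bob": {"rom_com", "documentary"},
        "carol": {"rom_com", "true_crime"},
        "dave": {"documentary"},
    }

    def recommend(person, follows, likes, top_n=3):
        """Suggest items liked by people one or two hops away in the graph,
        weighted by how many of those connections liked them."""
        already_seen = likes.get(person, set())
        # Gather everyone within two hops of this person.
        neighbours = set(follows.get(person, []))
        for n in list(neighbours):
            neighbours.update(follows.get(n, []))
        neighbours.discard(person)
        # Score each item by how many neighbours liked it.
        scores = Counter()
        for n in neighbours:
            for item in likes.get(n, set()):
                if item not in already_seen:
                    scores[item] += 1
        return scores.most_common(top_n)

    print(recommend("alice", follows, likes))
    # Something like: [('rom_com', 2), ('documentary', 2), ('true_crime', 1)]

Swap in words, news stories, or photos for films and you have the skeleton of the targeting I'm talking about.
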

Consider all of the information (good and bad), all of the disinformation, and all of the outright lies that were pushed out during the 2016 US presidential election, during the Brexit referendum, and during the COVID-19 pandemic. Think about how all of that was so successfully and effectively targeted. The connections that algorithms make are shaping opinion. They’re shaping thought. They’re undermining some of the key aspects of our society. And many people don’t realize that it’s happening.

What can any of us do to fight back? If you’re not using social media, or the services offered by other technology firms, don’t start. Find alternatives — use DuckDuckGo instead of Google, for example. Think about limiting the amount of information about you that’s floating around on the internet.

If you are using social media or other services, consider quitting them. I dumped Twitter, Mastodon, and Google a couple of years ago and haven’t looked back. Not everyone is me, though. For many, ditching social media and other services can be difficult. As Todd Weaver, CEO of Purism, told an interviewer:

You have to go out of your way and inconvenience yourself to avoid these tech giants that are enslaving people’s data

The convenience that tech giants offer is what makes quitting so difficult.

At the top of this letter, I mentioned that an off-hand remark by an acquaintance nudged me into writing this musing. He recently told me that he couldn’t live without Twitter and Facebook. He’s a year younger than me, and only started using those services in 2013. He wasn’t amused when I mentioned that he lived the first 45 years of his life without social media and seemed to be doing OK …

Even if you decide to delete your account, some or all of the data about you might linger on a server somewhere. Late last year, a friend decided to drop her Instagram account. Before she did that, she downloaded a backup of all of her posts. To her surprise and shock, there was a bit more in there than she expected. She told me that a number of photos that she clearly remembered deleting were in the archive. Photos she assumed had been flushed from Instagram's servers.

She’s not the only one who ran into that problem. In fact, a security researcher:

[F]ound that his ostensibly deleted data from more than a year ago was still stored on Instagram’s servers, and could be downloaded using the company’s data download tool.

Supposedly, all of that was the result of a bug. A bug that Instagram fixed in August 2020. Can any of us be absolutely certain that's the case? Can we take Instagram's claims at face value? Call me paranoid and skeptical, but I trust most tech companies as far as I can comfortably hurl a grand piano.

I don’t know about you, but when I think of the word delete, my impression is that whatever I’ve deleted is gone. For good. Never to return in any shape or form.

There’s a lot of talk about the right to be forgotten. Some countries and regions have even drafted legislation around that idea. Our right to be forgotten, though, clashes with tech companies retaining one of their most important assets and revenue sources — our data. An asset and a revenue source that they’ll fiercely protect. What self-respecting company willingly throws away something that will make them money?

To be honest, I don’t think any of us really knows what data tech firms are holding about us. Or how they’re using it, who they’re sharing it with, and who they’re selling it to. All we can do is limit our exposure to them, thus limiting the amount of data that those firms store about us.

So, turn off location tracking. Be parsimonious about what you share, what you like or dislike, what you reply to or repost. Try not to use closed platforms for personal or confidential communication. Graze on sites like Twitter and Facebook, and limit your interactions. And look beyond those platforms for news and information.

It might be too late for many people, though. Their data is already out there, in databases locked in bunker-like data centres, ready to be mined and used — for someone else’s benefit or against you. But rarely for you.

Scott Nesbitt