Surveillance capitalism with a human face lol

The right to privacy is the right to not be ~self-conscious~ 24/7


Thanks for reading Sintext III. If you haven’t had a chance to read the first two issues, here are links to Sintext II (about KONY 2012) and Sintext I (about Uncanny Valley). If you want to know what’s going on ’round these parts more generally, here’s my intro post.

If you happen to also be curious about emailing an author the words you wrote about her book, it turns out she might answer? And say she enjoyed reading what you wrote? And tweet out your self-professed small-potatoes newsletter, and help your words reach well over 1,000 pairs (one presumes pairs) of eyeballs?

To the folks who subscribed as a result of that act of immense generosity: what’s up! Write me whenever. You can just reply to the email you receive, or leave a comment below the post.

If only Twitter were so intuitive. I forgot to do the .@handle thing when I quote-retweeted.

By the time I realized my goof, I was too embarrassed to try again (about which I’m now embarrassed). My tweet ended up “impressing” a grand total of 34 people, to the tune of six “engagements” and zero likes. Not quite the upward failure I was going for!

Table of contents

  1. The preamble you’re already done with, which primed you for

  2. Social media/facial recognition technology scandals in the news

  3. Which matter because privacy… matters? Why does privacy matter again?

  4. Because Internet, by linguist Gretchen McCulloch

  5. Who, I think, would have a refreshing take on an AI trained to read your facial expressions


In the last week or so — between when I decided this issue should at least try to substantiate my anti-tech screeds (“Making a better world seem attainable? Well, that’s [Silicon Valley’s] emptiest promise”), and when I began writing — the news cycled through several features of our s’media problem.

First, Facebook announced it would be “increasing transparency” rather than limiting paid political speech, under whose auspices lies continue to fit.

Then Facebook took its thumb from its forehead, rummaged some change from its pocket, and paid somebody who works at Teen Vogue to run a disclaimer-less fluff piece (sub in #SponCon or propaganda?), as if readers would see this article and think, Why yes, of course, I wholeheartedly trust Facebook to safeguard my democracy this time around.

The “article” in question had been scrubbed off the web by the time journalists finished flogging Condé Nast.

This is all mere months after another giant, the “microblogging platform,” announced it’d be blocking “political ads.” Their decision was more about “paying for reach” than “free expression.” Twitter was soon stumbling over itself to carve exceptions for non-targeted “issue ads” related to climate change, gun control, and abortion rights.

Clearly, both companies have it under control. Meanwhile Instagram finally started siccing its fact-checking filters on… memes.

What a shame. That’s the kick in the butt we needed to transcend our ungainly forms and finally prove Dostoyevsky wrong: man is much more than just “a biped, ungrateful.”


Amid the muck, the California Consumer Privacy Act (hereafter thrillingly shortened to CCPA), passed in June 2018 and amended in September 2019, finally came into effect on the first day of the year. (Though the state’s attorney general still can’t enforce it ’til July lol.) I was looking forward to getting cozy with its text.

Then Kashmir Hill, a leader on the privacy beat, published this bomb on Saturday. The article details a superlatively shady facial recognition technology company called Clearview AI, which has been peddling its wares to law enforcement bodies around the country. (Thanks for the tip, Dad!)

And the rest of my weekend became a vortex of wasted research. I say wasted because I still don’t know the answer to what should be a straightforward question: Can the CCPA help protect my privacy from a company like Clearview???

Clearview’s “innovation” is its database of over 3 billion images: on top of government records, they scraped photographs of people’s faces from “public” parts of social media sites — including Facebook, Venmo, YouTube, and wherever else the company didn’t cop to scraping from. (I’m overusing scare quotes. So much of what’s “public” is public only technically, courtesy of user agreements that are impossible for laypeople to understand, even when we read them. Clearview still violates terms of service by scraping user images, but this doesn’t seem to matter!)

According to the Electronic Frontier Foundation’s summary of the CCPA (if you’re looking for something comprehensive yet readable, that’s the one), the new law affords Californians three basic rights: to know what data companies have on you, to delete that data, and to opt out of companies selling your data.

My face is considered biometric information — that’s protected data. So, in theory, I can contact Clearview and demand they delete all images of my face from their database. If they don’t comply, they should (eventually) get fined.

But there are exceptions! The CCPA applies only to businesses that clear at least one of three thresholds: grossing over $25 million in annual revenue, handling the personal information of 50,000 or more consumers, or deriving a majority of revenue from “selling consumers’ personal information.” Clearview might argue it misses all three — don’t they sell a face-matching algorithm rather than the information itself? (I think. Idk. I shouldn’t have to go to law school to understand my rights!!!)
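For what it’s worth, the statute’s applicability test is a plain either/or: meet any one threshold and you’re covered. A toy sketch of that logic (the thresholds are the statute’s; the company figures below are made up for illustration):

```python
# Toy sketch of the CCPA applicability test: a business is covered
# if it meets ANY ONE of three statutory thresholds.
# The example company figures are hypothetical.

def ccpa_applies(annual_revenue: float,
                 consumers_with_data: int,
                 revenue_share_from_selling_data: float) -> bool:
    """Return True if any one of the three thresholds is met."""
    return (annual_revenue > 25_000_000            # > $25M gross revenue
            or consumers_with_data >= 50_000       # data on 50,000+ consumers
            or revenue_share_from_selling_data >= 0.5)  # 50%+ revenue from selling data

# A Clearview-like company: even with modest revenue, a database built
# from billions of people's photos trips the 50,000-consumer threshold.
print(ccpa_applies(annual_revenue=10_000_000,
                   consumers_with_data=3_000_000_000,
                   revenue_share_from_selling_data=0.2))  # True
```

Which is the maddening part: whether the test is met turns on private facts — revenue, data inventories — that you and I can’t see.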

There are two quotes in Hill’s article that together evoke how thin we’re stretched on profit-motivated privacy erosion. First, from a VC whose firm was Clearview’s first patron (apart from noted privacy activist Peter Thiel):

I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy… Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.

(Whoever put that last soundbite in the subheading — good call! The Times did something right this week!)

The second is from Woodrow Hartzog (talk about a name I’d follow into battle), law/computer science professor and proponent of banning facial recognition technology outright:

We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table… I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it.

If the thought of your face in a perpetual police lineup isn’t unpleasant enough, Hill reports that Clearview has geared its code towards compatibility with AR glasses. In a separate NYT op-ed, Hartzog argues the ban must happen “in both public and private sectors, before we grow so dependent on it that we accept its inevitable harms as necessary for ‘progress.’” (Guess the lone candidate supporting a total ban?)

Coring the rotten apple of Progress has been Sintext’s main intellectual agenda so far. (This has been a surprise for me, too.) Privacy is another credo that has long seemed brittle to me.

So that’s why you’re reading about social media snafus, data expropriation, and facial recognition technology all at the same time: they threaten the same thing.


Spend enough time considering why you personally care about privacy, and I expect you’d arrive at a conclusion similar to mine. Apart from the more pressing questions of racial justice tied to state surveillance — the sort of questions this country is patently terrible at answering — I intuit that, at a not-far-away point, my privacy and personhood converge. Privacy matters because it enables individuality.

I’ve had occasion to contemplate privacy thanks to a sequence of courses concerned with the sublunary intersection of law, technology, and culture. (The syllabi from the program are all available here.)

In those classes, our professor gave us space to articulate outrageous ideas before he brought us back to earth. I flirted with one theory that went like this: Real privacy exists only between you and people you know; we should separate privacy from related concerns, like our rights to person/liberty/property, which are already protected by the Constitution. (This was bad, and very Originalist of me.) My instinct, as a Millennial/Gen Z cusper inured to exchanging data for “free” services, was once to insist that my sense of self isn’t warped by constant surveillance. My privacy would be violated only if, say, T-Mobile forwarded all my texts to my parents.

What I was ignoring was the first lesson our professor drilled into my frosh skull: that technology is governed by a combination of law, code, and norms. (Law refers to legislation and case law; code refers to the workings of the tech itself; norms refer to social standards.) In an ideal world, norms are translated into law, which in turn regulates code. That’s the point of the Katz test, a key piece of Fourth Amendment jurisprudence. Under Katz, government searches/seizures are illegal when they violate our “reasonable expectation” of privacy. But once a court rules it’s kosher for police to intercept your phone records and see who you’ve been calling, we spend forty years slipping down the slope, and, next thing you know, it’s no longer reasonable to expect the NSA doesn’t know how you like your bagel.

A facial recognition scandal can be a good thing. Such a visceral affront to our instinctual conception of privacy can shake us out of complacency, force us to think hard about what privacy actually means. If it’s just having control over information about oneself — a legacy of this country’s earliest treatise on the subject; a formulation the CCPA seems to support in order to strike at “the heart of the digital economy”; a principle Clearview turns on its head — then privacy is now nothing more than a wistful notion.

Defining privacy quickly becomes a fool’s errand. Collectively articulating what privacy does, instead, is the necessary step if we want to legislate it back into existence.


Because Internet: Understanding the New Rules of Language, by Gretchen McCulloch; Riverhead Books; 336 pp.


As I again studied how scary things are already, I was also reading a book that reminded me how technology lets us affirm our individuality.

From highlighter-yellow cover to highlighter-yellow cover, Gretchen McCulloch’s Because Internet, published last July, is a charming, illuminating examination of online language. McCulloch, resident linguist at WIRED and co-host of the Lingthusiasm podcast, is passionate about “the subconscious patterns behind the language we produce every day.” This is why internet writing is her ideal object of inquiry: there’s no better trove of “unedited,” “unfiltered” writing so “beautifully mundane.”

Beautifully mundane also characterizes most of the popular history McCulloch builds into her chapters. The story of internet language is a story of happy accidents and arbitrary decisions immortalized through spurts of cleverness. The “simplified smiling face,” for example, was first proposed on a Carnegie Mellon message board, after an elevator joke misfired. Keysmash — strings of gibberish like “askjdfsaljkfhal” — owes its characteristic home-row pattern to touch-typing. The sarcasm marker one often sees on reddit, /s, is a witty riff on HTML closing tags.

The moral of the story of internet language: Ingenuity within constraints can preserve, and even accentuate, what makes us human.

McCulloch’s own ingenuity is understated. She’s adroit at capturing the internet’s colloquialisms in (semi-)formal writing. She’s adept at parsing the significance of things we write without a second thought. And, most importantly, she’s attuned to the commonalities between the different generations of internet users. This book is relevant to everyone alive.

It can be hard to see, but when it comes to language, we all want the same thing: to express ourselves as well as possible. Boomers have certain phrases that reflect their particular nostalgia, and younger folks have the facetious register I’ve resorted to in this issue. The flat “lol” and ~bracketing tildes~ in my title are how my generation makes clear *exactly* what we mean online. The paradox of irony, McCulloch explains, is that it “creates space for sincerity. If you and I can have the same web of complex attitudes towards one thing, then maybe we can also share more straightforward attitudes towards others.”

With this shared interest in mind, McCulloch asks us to understand emoticons and emoji as indicators of this sort of knotty meaning, rather than verisimilar emotion — they’re gestures, not tiny replicas of the face we’re making.

If we say instead that people are consciously using them to guide their readers to the correct interpretation of their words, then emoticons become a positive, helpful, social behavior, a way of saying, “I want to clarify my true intentions for you.”

(Isn’t that beautiful? Remember that next time someone sends you a chain of smiling poop emojis.)

There’s obviously a catch: We can’t express ourselves the same way to everyone we know. Not that we’d want to. McCulloch cites danah boyd’s concept of “context collapse,” which occurs on social media “when people from all your overlapping friend groups see all your shared posts from different aspects of your life.” This feeling is disorienting and uncomfortable and hard to articulate.

She also brings in the idea of third places as precedent for how social media can “foster the kinds of repeated, unplanned interactions that sociologists have identified as crucial for the formation of deeper relationships.” When we’re young, we need moments free from supervision to bump into our peers and figure out who we are; when we’re older, we still need moments free from judgment to be sure we are who we think we are. All this starts to crumble when we remember that our informal writing isn’t ours alone.

Here’s how it looks when you download your Facebook data:

I expected most of what I found. Included in the corpus of my ten-year-old online self was every photograph and video I’d ever uploaded, every status update and post I’d ever shared. Each glimpse of 12-year-old Marc made me squirm, of course. There was an obscenely long list of mostly unfamiliar “Advertisers Who Uploaded a Contact List with My Information.” But what bothered me most were the transcripts of my messages.

McCulloch identifies chat — which includes texting and direct messaging — as “informal writing in its purest form.” Whereas the knowledge of an audience causes us to moderate our own posts, chat is subject only to our interlocutor’s scrutiny. It’s through this channel that we can most be ourselves; this is where we write most intimately. (McCulloch’s eminently relatable example is texting for someone while they drive — we need them to specify how to capitalize and punctuate the message, lest we make their text not sound like them.)

Most of us find that it’s worth trading away some privacy for the sake of having a life. Instead of embracing hermit-hood, we seek a balance: one study found that people rated information about their hobbies or favorite TV shows as less intimate, and therefore more likely to be shared in a post, than their fears, concerns, and personal feelings, which they preferred to share in a private message, if at all.

Luckily, the words we speak aloud don’t reverberate in our ears until the end of time; unfortunately, internet writing lingers, no matter how emotional, no matter how “private.” In that “messages” folder was another folder containing every sticker I’d ever sent. All of me is data; seeing myself doesn’t make me feel less queasy.

By now it’s commonplace to say social media algorithms push us inward, and feed the entropy of our self-consciousness. What about surveillance? It’ll push us further and further under the surface.

McCulloch endorses “[p]rivacy through obscurity” as “a versatile tool for many social situations.” If information is hard enough to access, or fully understand, then, the argument goes, that information is as good as private. In action, this principle looks like subtweeting, coded lyrics and quotes tweens use to evade parental detection, and this half-paragraph, which stopped me in my tracks.

Chinese internet dissidents are especially famous for using puns. For example, they might write 河蟹 héxiè, “river crab,” which sounds like 和谐, héxié, the Mandarin word for “harmony,” but with different tones. “Harmony” itself is a Chinese euphemism for “censorship,” derived from the purported goal of a 2004 internet censorship law to create a “Harmonious Society.”

China represents our dystopian tomorrow. I understand why McCulloch includes this as an example of our communicative creativity. I don’t understand how a path towards obscurity — towards obliqueness as a permanent way of being — isn’t a path away from the project of self-expression Because Internet celebrates.

“Language,” McCulloch concludes, “is the ultimate participatory democracy. To put it in technological terms, language is humanity’s most spectacular open source project.” There will always be constraints. It will always be in our nature to try and shake loose.


Last thing. There’s a company called Affectiva. The video above was buried at the bottom of this article, itself linked in General Hartzog’s op-ed.

I opened the video in a new tab, and YouTube suggested I next click a TED Talk by Affectiva’s founder and CEO, Rana el Kaliouby. Her Talk’s title: “This app knows how you feel — from the look on your face.”

How does that make you feel? Intrigued or disgusted? More likely to watch or less? Is it good that your phone/computer/tablet/smart fridge/streaming service of choice/local PD might already know the answers, because it’s been analyzing your face in real time? Or is it bad that an AI has been trained to distinguish smiles of genuine joy from those that mask inner pain? (Can’t decide? No sweat, there’s a New Yorker article for that.)

Affectiva began as a research project. Housed by MIT. Funded by the government. To develop an AI to help folks with autism decipher emotional cues. They’ve since pivoted to more sizable markets: testing TV pilots, predicting product sales, and detecting distracted drivers. (Jury’s still out on the social impact of self-driving cars.)

Affectiva is a surveillance company. Their technology depersonalizes intimate information and impinges on our ability to experience emotion privately, all while claiming to be doing us a favor. They promise they can study “any face, any place, any interface, in real time.” Rest easy, their logo is a smiley face.

The end.

My heartfelt thanks for reading. If you enjoyed, please share it with someone else you think would enjoy, too! And click the heart button at the bottom. I wouldn’t mind the dopamine. Or people seeing Sintext on the Substack front page.

If you haven’t had enough, I didn’t manage to fit my favorite phrase and fact from Because Internet into the body.

  • Phrase: McCulloch calls Google “the oracle of the contemporary human id”

  • Fact: In grounding emojis within the rich tradition of typography, McCulloch harks back to the first English printers, who imported their presses from European countries that didn’t use the letter þ [apparently known as thorn?]. Rather than carve a costly new letter into every metal plate, printers tried improvising a way to represent þ. Some chose /th/. Others tried a single letter that looked similar: /y/.


Sintextually yours, but only in private, until the week after next,

Ye Olde Marc