Categories
design facial recognition marketing you know, for kids

No cam. No mic. We found other ways to surveil your children.

Projection is not only a defense mechanism, where we rationalize the world by assuming other people’s behavior is motivated by what motivates us, say trauma or abuse. It’s also how marketing works. When you do it deliberately, it’s called “advertising” or “business development” or “advertainment” or whatever the tech news calls itself these days.

However, even when it’s done deliberately, the mechanism that fuels the intention and the enthusiasm for an idea still comes from somewhere in your brain that is not easily understood, and is desperately hungry, all the time. Your id breaks through and tells us what’s really going on, and you don’t notice, because you think you’re using your rational brain – you know, to make an ad campaign for a smart speaker for children that supposedly avoids the problems of surveillance capitalism by having no mic, no camera, etc. – and so you don’t know you’re telling on yourself.

The Yoto smart speaker is a device that connects to the cloud to deliver content to pre-verbal children. “No cam. No mic. No funny business” is an interesting claim, if you believe they’re projecting what they believe when they’re asking you to believe something about them. What funny business do you mean? Are you saying it’s a completely offline device that delivers new content without having to purchase cartridges or tapes or CDs? Because that’s awesome.

In fact, I had one myself and I loved it. It trained me to handle and fetishize my parents’ objects so I could learn to consume them, but that’s cool. I like music.

No, Yoto just wants to collect, store and monitor your child’s behavioral data, just like everyone else. “Parents can also upload content they select (say, songs from a playlist, or a certain audio book) to blank cards using a parent app; the cards work using NFC technology, like a contactless credit card, that link to content stored on Yoto’s servers.”
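
To be concrete about the mechanism that quote describes: the card doesn’t hold the audio, it holds an identifier, and the identifier gets resolved against the company’s servers. That round trip is where the behavioral record comes from. Here’s a minimal sketch of how such a lookup plausibly works – the endpoint, field names, and everything else here are my assumptions for illustration, not Yoto’s actual API:

```python
# Hypothetical sketch of an NFC-card-to-cloud content lookup.
# Endpoint and field names are assumptions, not Yoto's real API.
import requests

API_BASE = "https://api.example-speaker.com"  # hypothetical

def resolve_card(card_id: str, device_id: str) -> str:
    """Exchange an NFC tag's ID for a streamable content URL.

    Note what the server necessarily learns on every tap:
    which card, which device, and when.
    """
    resp = requests.post(
        f"{API_BASE}/v1/cards/resolve",
        json={
            "card_id": card_id,      # which content
            "device_id": device_id,  # which household
            # the timestamp is implicit in the request itself
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["content_url"]

# Every tap is a server round trip, so every tap is a data point.
```

The card is a pointer; the listening history lives server-side.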

Probably sell it too, since many companies that do the former also do the latter; some only do the former to enable the latter. But we haven’t even looked up the founders of the company yet.

Elizabeth Bodiford has a nice way of describing this kind of behavior in her poem, We Tell On Ourselves:

We tell on ourselves by the way that we walk.

Even by the things of which we talk.

Categories
government

When someone shows you who they are, believe them the first time.

Today at work in a What Did You Do This Weekend conversation, my friend was telling me about making calls for a candidate in the upcoming primary.  Talking about how satisfying it is to hear someone go from disenfranchised to sounding eager to register to vote, she was like, “If only I could get my mom to vote.”

Her mother was born during World War II, in an Eastern European country that doesn’t exist anymore, and told her that she’s never voted anywhere she’s lived because it’s not safe to vote for candidates that are truly liberal and propose policies that are against the status quo.  “She gets so happy hearing me tell her about calling to register people to vote, or tell them about Bernie, living vicariously through me,” she explained.

“So I’m like, ‘Ma, you can do it too.  Let’s go and register you to vote this weekend,’ and she’s like (my friend makes a swatting motion, as in total dismissal), ‘Are you crazy?  It’s risky to vote in primaries.’ (making a sour face and another swatting motion) ‘The whole point of a primary is that the government is tracking who you would vote for before you vote for real. No way.’”

The funny thing about this 80-year-old saying voting is risky is that she has lived experience that informs it.  Born while Hitler was in charge.  Grew up in a puppet regime that Stalin was in charge of.  Worked and traveled most of her adult life in other countries where the government was obviously installed by another government or the military, or was controlled by a tiny elite of power brokers and oligarchs.

Even though it’s been dialed back a bit in the past couple of decades, what she’s saying is, “Yeah, I get you live in the United States, and you grew up when this sort of thing was on the wane where we lived and where we live now, but do you think the fascists stop fascisting just because you like funky sweaters and protest marches?”

Sorry, that’s flippant and this is real.  What she’s saying is, “Have you ever even seen a fascist regime?  I have.  It looks like this,” spreading her hands like a game show host showing you the fabulous prizes all around you.

The title of this post is a well-known quote from Maya Angelou. The someone I mean is the military-industrial complex.

I thought it was ridiculous to consider that the world could ever be any way other than what I could see at the time I was seeing it. Even though I grew up with people who weren’t citizens of any country in 1945: after the defeat of the totalitarian regime that had held them prisoner for six years, another totalitarian regime annexed their country of origin.

This is Poland we’re talking about. I was born with citizenship where I was because the largest mining company in the world did things like sponsor people for citizenship and pay their emigration fees in exchange for a lifetime of working in a mine.  Even though my family’s firsthand lived experience was not much like mine growing up, and even though they had lots of evidence suggesting things seemed okay, my grandparents thought it foolish to be anything but vigilant and skeptical. I thought this was ridiculous, then.

75 years ago Auschwitz was liberated.

50 years ago the Rev. Dr. Martin Luther King was murdered.

20 years ago we started a war because of “weapons of mass destruction” that didn’t exist.

12 years ago we held up signs that said CHANGE, and graffiti artists immortalized the first president to order over 500 drone strikes, killing over 500 civilians (according to the Council on Foreign Relations).

20 months ago the president who used to be a game show host revoked the requirement that the government report the number of civilians it killed when it was trying to kill people it thought weren’t civilians.

My grandmother was 12 when WWII started, living in a tiny remote village in a country annexed by the Nazis.  She thought it was ridiculous how kids like me violated her social norms with our hairstyles, our noisy music, and the minor lawbreaking that’s easy for privileged white kids to get away with. Not because she was offended, but because that would get you killed where she grew up.

I even knew about and believed things like MK Ultra and the US support of the Khmer Rouge. I was the kid who could tell you that Coca-Cola, Ford, and GM profited enormously from supplying Nazi Germany with soda pop, trucks, and planes, among other things.

I thought it was ridiculous.

What an idiot.

Google, Microsoft, Facebook, and Amazon want to help the government decide how to regulate AI.

Google, Microsoft, Facebook, and Amazon sell the government law-enforcement, military, infrastructure, and marketing software tools powered by AI.

If you think those conflicts of interest are cause for concern, or that there’s potential danger to the liberty and safety of people as a result of these agreements, that’s okay.

You’re not ridiculous.

You’re not an idiot.

Prajna is a journey, not a step.

Categories
marketing

Shoshana Zuboff’s piece in Sunday’s NYT

I have lots to say about Shoshana Zuboff’s piece in this week’s New York Times, but it’s late and I thought I’d give you a chance to look it over before I say anything about it. Enjoy. It’s a great primer for her weighty The Age of Surveillance Capitalism (and a lot more accessible).

If you need a laugh, even if your sense of humor tends toward the acid, I don’t recommend reading this thing I stumbled on last week: Microsoft’s book-length ad for buying AI from them. It is, though, an interesting companion piece to contrast with Zuboff.

It’s a real barrel of laughs. No kidding, I bought it just because it will be funny, in just a few years, that it was published as a paperback. I’m laughing all the way to Satya and Jeff not even needing a device to track me down, because I bought a physical book from an online retailer – tracked from first click to the photo of it taken at my door to prove it was delivered.

Categories
design

Chinese Finger Trap

I was in a hallway conversation where a designer peer who has been around the block asked for my take on a problem.  It was about changing the surface representation of a given digital experience – let’s call it a “skin,” though we were talking about a speech interaction.

Applying that skin takes a few steps, for procedural reasons: identification, authentication, purchasing, and changing default settings.  The way this system is architected, when you get to the end you get a confirmation saying, “Hey, I applied that skin like you said, and that’s how it is now.”

The question was, “What do you do if the user says, ‘oh hell no change that back!'”

So I asked:  Are we sure the user wanted to do the thing?

Well, this is the fourth step of a flow, so it would be fairly difficult to get all the way there with false accepts at every step. So yes, let’s assume so.

The next thing I asked was:  The assumption here is that the thing is both a purchase and a pretty specific choice – you don’t buy an Andrew Dice Clay comedy album or a Scientology text by accident. So is this more like that?  Or more like ordering in a restaurant: “Actually, could I have the salad instead?”

Definitely Andrew Dice Clay.  You picked this on purpose and did work to get it.

So what is the concern?

Well, some of the people we talked through the design with were like, “How do I cancel out if I suddenly realize I did the wrong thing?”

My final question I kept to myself.  We talked through ways that you could identify and capture ranges of intents and do daily or hourly log queries, and some general tech capabilities we might be able to apply here, and that was that.  Back to work.

My final question was, “Why did we build a Chinese finger trap?”

Ignoring the ways you might implement a system to use progressive disclosure, familiar words, legalese tick boxes, and numerous steps to ensure that a user would never even end up in the situation where this problem was possible, my primary concern was that the people who had driven the product from the beginning had built it as a trap.

Get the revenue!  Get the impressions!  Get the clicks!  Get the engagement!  Roach motel it!

(Obviously these folks would not say “roach motel it” – Except for people who straightforwardly adopt the most coercive tactics of Hooked and other manipulation textbooks, most people in my experience solving this problem this way are merely doing what their boss said.  If they were a role playing game character they’d be “Unprincipled” or “Neutral”.)

The real problem they were trying to solve: once they saw the implementation-level experience, they realized the user would see it was a trap and back out.  The problem was that it wasn’t deceptive enough.

But it was too late to build it differently, so now we could only bolt things on at the end and say they represented safety and choice.

Like seat belts in 1968.

Build a velvet rope and an exit door to the roach motel if people might decide they don’t want to stay.
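
For what it’s worth, the exit door itself is cheap to build. Here’s a sketch of the bolt-on we talked through – capture a range of revert intents at the confirmation step and log them – where the intent phrases, function names, and logging are all hypothetical stand-ins, not any real speech platform’s SDK:

```python
# Hedged sketch of the bolted-on "exit door": catch revert-type
# intents right after the confirmation instead of trapping the user.
# All names here are hypothetical illustrations.
import logging

logger = logging.getLogger("skin_flow")

# The "range of intents" we discussed capturing -- users say many
# different things when they mean "put it back the way it was."
REVERT_UTTERANCES = {
    "change that back", "undo that", "no, go back",
    "cancel that", "i didn't want that",
}

def handle_post_confirmation(utterance: str, previous_skin: str,
                             current_skin: str) -> str:
    """After "I applied that skin like you said," keep listening."""
    if utterance.strip().lower() in REVERT_UTTERANCES:
        # Log it so the daily/hourly queries can count how often
        # people reach for the exit door -- that's the design signal.
        logger.info("revert requested: %s -> %s",
                    current_skin, previous_skin)
        return previous_skin  # restore what they had
    return current_skin       # they meant it; leave it applied
```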

Categories
facial recognition

You Sure As Shit Can Ban Technology

In this week’s Sunday New York Times there’s an article about Clearview AI, a company taking advantage of a lack of regulation of personal and personally identifying data to market a facial recognition application to law enforcement agencies.   The basic function of their product: you input a picture of anyone, and it outputs potential identity matches, including photos, confidence scores, and links to the matched data.

The  article quotes David Scalzo, founder of a venture capital firm that was an early investor:

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

AI is particularly ripe for this kind of ploy – the type that master manipulators who throw up their hands and say, “We never broke the law!” depend on to give their arguments something soft to land on.  Even if it’s bullshit, it’s softer than the cold hard ground.

More importantly, it’s a long-term strategy, and that something soft is this: if you say it often enough in public, and other people are saying it too, it has the potential to become true. Our human brains have a hard time dismissing information that wasn’t in focus when we took it in. We absorb background information just the same as anything else, we just don’t act on it the same way.

Like mercury in fish, it accumulates as familiarity in journalism’s fatty tissues while you’re not paying too much attention, and only when the accumulation reaches a critical level is there any discussion – after it’s too late. Polluters, bankers, human traffickers, fossil fuel purveyors, that kind of character (Scalzo runs a private equity firm).  The argument goes something like this:

  1. State as fact a state of the world that has to be true for your product to be acceptable.
  2. State as fact that this state of the world is inevitable.
  3. State that while there are always bad apples, it’s for the courts to decide, later, maybe.
  4. Restate 1 and 2 as a final coda.

In the case of this statement, it’s demonstrably bullshit.  Clearview AI claims to have scraped over 3 billion photos for their database.  Yes, with a b.  As widely reported in 2019, the question around scraping sites for training data is not whether it’s legal – it isn’t, except in certain circumstances involving Creative Commons licenses. (And even Creative Commons is speaking up to say, “Whoa, that’s not what CC is for, Yahoo and IBM.”)

There’s nothing about information a priori that creates a privacy threat for individuals, only the value of individuals’ personal data to commercial and institutional interests.  It doesn’t matter what Scalzo’s opinion is anyway, since he has a vested interest.  

Importantly, though, you can absolutely ban technology.  You can ban it all day, and it’s done all the time.  Nobody is allowed to own a pistol silencer, a cruise missile, or a radar detector, or to set up a microphone, X-ray, and camera array around your house to monitor your movements.  The thing that bans all this is laws.  The law states “this is not allowed,” and almost everyone who would have done some anti-social thing just because it was allowed will then not do it, because if they got caught they would get in trouble.

You can’t ban the idea of a cruise missile, but that’s fine.  The idea might be used for all kinds of things.  The idea of matching a picture to a database of other pictures as a form of search already has all kinds of miraculous uses, and in essence it isn’t that different from searching for text, whether you do it online or open a phone book to P to find the number of the pizza place you always call and only know by the ad on the page.
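
Mechanically, that kind of search really is mundane: turn the query picture into a vector, rank stored vectors by similarity, return the best matches with scores and links. A toy sketch of the idea – the embed() function and the index contents are stand-ins I made up, not anyone’s actual product:

```python
# Toy sketch of picture-in, matches-out search. The embedding
# function and index are fabricated stand-ins for illustration.
import numpy as np

def embed(image_bytes: bytes) -> np.ndarray:
    """Stand-in for a face-embedding model: image -> unit vector."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# A pre-computed index of embeddings for scraped photos, plus links
# back to where each photo was found.
index_vectors = np.stack([embed(b"photo-%d" % i) for i in range(1000)])
index_links = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(query: bytes, top_k: int = 5) -> list:
    """Return (confidence score, link) for the top_k nearest photos."""
    q = embed(query)
    scores = index_vectors @ q  # cosine similarity on unit vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), index_links[i]) for i in best]
```

The mechanics aren’t the issue; what’s in the index and who gets to query it are.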

Is it difficult to ban the nefarious and anti-social use of software for the purposes of making money?  Probably. (I’d say ask China, but I don’t want you to.)  But the first step is doing it, and then it’s not okay to do it anymore.  David Scalzo and Clearview AI want it to be okay.

It’s not, and it never will be.  But we have to pay attention and talk about these issues, and demand that our politicians and law enforcement agencies be principled and thoughtful and push back against any technology that infringes on any individual’s personal liberty or safety, even where we haven’t codified laws yet.  Laws codify what people like us think, and when we say we think this kind of behavior is not okay, laws can ban technology just fine, thank you.

Categories
musical chairs

The Horn Bearer, Part 2

Apple Music has an automatically generated playlist called For You that (from what we can understand by using it) is based on two things:

  • Artists whose music you’ve added to your library
  • Artists you’ve listened to as a result of search or some other non-Apple-curated function

Interestingly, it doesn’t matter whether you’ve ever listened to the music by those artists. If you add an album to listen to later, but never actually play it, this album will figure into your For You playlist at least until you remove it.

More interestingly, it doesn’t matter whether the digital entity it adds to your playlist is a song.

In my case, For You included a track called The Horn Bearer Part 2.

Melvins released the CD and digital editions of their album The Maggot with every song split into two tracks.  Played in its entirety, the album sounds continuous, and the listener hears the whole songs as intended.

Apple put a song by them from that album that I like very much – The Horn Bearer – into my playlist. But they didn’t put the whole song in.

Just the fragment called “Track 12 The Horn Bearer Part 2.”

That’s not a song, but Apple Music would never know the difference. But to the listener it’s a song that starts in the middle. An error. A glitch.

By design, because the customer is not me, and the purpose isn’t listening. The purpose is encouraging consumption, and the customer is the catalog provider and Apple.
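
If you wanted to reproduce the behavior as inferred from the outside, it would look something like this sketch – the two signals and the track-level granularity come from the observations above, but the data shapes and names are mine, not Apple’s:

```python
# Sketch of For You as inferred from the outside: candidates come
# from library adds and non-curated listens, play history is
# irrelevant, and the unit of selection is the track, not the song.
# Data shapes and names are my assumptions, not Apple's.
import random

library_adds = ["Melvins"]   # signal 1: added, possibly never played
searched_plays = ["Some Other Artist"]  # signal 2: played via search
play_counts = {}             # empty: adds count even when unplayed

# The catalog knows tracks, not songs. "The Horn Bearer" is two rows.
catalog = [
    {"artist": "Melvins", "track": "The Horn Bearer, Part 1"},
    {"artist": "Melvins", "track": "The Horn Bearer, Part 2"},
    {"artist": "Some Other Artist", "track": "Some Track"},
]

def for_you(n: int = 2) -> list:
    eligible = set(library_adds) | set(searched_plays)
    pool = [t for t in catalog if t["artist"] in eligible]
    picks = random.sample(pool, k=min(n, len(pool)))
    # Nothing here knows that Part 2 only makes sense after Part 1.
    return [f'{t["artist"]} – {t["track"]}' for t in picks]

print(for_you())  # can happily serve "The Horn Bearer, Part 2" alone
```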