
CONTEXT is a subscription-based newsletter on Substack. Truncated versions will be posted here on Medium. To sign up to get the complete newsletter in the future, you can subscribe here.
Welcome to the inaugural issue of CONTEXT — a monthly newsletter about emerging technology and its impact on society, culture, and design.

The year: 2022.
The place: New York City.
The population: 40 million.
Climate change has devastated the Earth. Overpopulation has nearly depleted essential resources like water and food. The 1% occupy lavish condos and buy tiny apples and wilted pieces of lettuce for 200 “D,” while the other 99% live on top of each other and riot in the streets over nutritional wafers. Few books are left, and fewer still who can read them. Euthanasia is legal and ceremonial. Pick a color and a type of music. Lie down for twenty minutes in the center of a room to watch the long-since-gone natural world materialize all around you. Colorful tulips rustling in the wind. Mountains standing tall across a horizon. Herds of deer doing deer things.
– but it’s Soylent Green’s final scene that most people remember. NYPD Detective Robert Thorn (Charlton Heston) lies bleeding in a crowded church. His boss, Hatcher (Brock Peters), is crouched at his side. You’ve gotta tell them, Thorn pleads. I promise, Tiger, Hatcher replies, but in a placating tone that only a boss can give. Then, as officers lift Thorn to carry him away, he cries again. You gotta tell them, Hatcher! Hatcher says nothing. So with his last ounces of strength, Thorn reaches a bloody hand into the air and shouts:
SOYLENT GREEN IS PEOPLE!
Not to be mistaken for that other Soylent, this Soylent refers to nutritional wafers given out as food rations by the film’s antagonist, the Soylent Corporation. It’s available in three *fun* colors. Soylent Yellow and Soylent Red are produced from “high-energy vegetable concentrate.” Soylent Green is made from “high-energy plankton” — except Thorn discovers, to his horror, that Soylent Corp exhausted Earth’s supply of plankton and switched the main ingredient to a resource that was readily available and would be for some time: human remains, or more specifically, dead bodies from the local euthanasia clinic.

Thankfully, most of what Soylent Green predicted for 2022 didn’t come true to those extremes. Putting that aside, along with the film’s unfortunate motif of women as “furniture” (never perfect, is it?), Soylent Green remains a strong allegory for our modern, data-driven times. Large companies control most of a resource. Check. People’s lives are being traded for profit. Check. Immersive deer experiences are worth it. Check.
Soylent Green also leaves unanswered many important questions that, coincidentally, we should be asking more regularly: Did these people consent to becoming Soylent? Don’t they have rights when they’re dead? Who were they, anyway? What if folks just ate something else? Keep these questions, and any others that strike you, in your back pocket.
What’s true for Soylent Green is true for all technology: both are made of people. This isn’t to say technology is people in the same way a cadaver cookie is. Nor do I mean people in a marketing sense, e.g., technology touches people’s lives, people inspire us, round pegs in square holes, etc. I mean people in a foundational sense, built into technology’s fabric.

Did people give their consent to become technology? For some, yes. Sometimes without even knowing it. For others, no. (See? Back pocket.)
In this issue of CONTEXT, we’ll explore how no technology is an island. Technology is always motivated by who we are and who we become (This Mess We’re In). All the good and evil we attribute to technology comes from the decisions we’ve made/are making/will make (If You Want to Build a Hot Dog Classifier from Scratch…). Should anything emerge from technology, it’s us in some form or another peeking out from behind the curtain to wave (Call Me ⛵️🐳).
Technology is people!
Tell everyone.

The following are excerpts from the HBO series Silicon Valley.
Jian-Yang: Okay. Let’s start with a hot dog. (the app successfully identifies it as a hot dog)
Monica: Oh shit. It works!
Erlich: Motherfuck!
Jared: Huzzah!
Erlich: Jian-Yang, my beautiful little Asiatic friend, I’m going to buy you the palapa of your life. We will have 12 posts, braided palm leaves. You’ll never feel exposed again.
Intelligence, artificial or not, is an achieved state: a quality we attribute to systems capable of demonstrating it. In the case of artificial intelligence, a system must show it can learn and adapt to unfamiliar situations, completing a task as well as or better than humans.

A system’s chances for success depend on two key factors: (1) the algorithm, a set of instructions for handling incoming training data, and (2) the training data itself. Algorithms get a lot of attention, but an algorithm is nothing without the data. And not just any data will do.
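To make those two factors concrete, here is a minimal sketch in PyTorch (my choice of framework, not anything the show prescribes): a tiny convolutional network as the algorithm, with random placeholder tensors standing in for real labeled hot dog photos. Every name and number in it is illustrative.

```python
import torch
import torch.nn as nn

# Factor 1: the algorithm -- a small CNN with two output classes
# (hot dog / not hot dog).
class HotDogClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # 64x64 input pooled down to 16x16
        )

    def forward(self, x):
        return self.net(x)

# Factor 2: the training data -- here, random placeholders.
# In reality, this is where all the hard work lives.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))

model = HotDogClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Train this on random noise and it will dutifully learn nothing useful, which is rather the point: the algorithm works, but the data decides what it becomes.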
Jared: Do pizza.
Erlich: Yes, do pizza.
(Jian-Yang tries the app on a slice of pizza)
App: Not hotdog
Monica: “Not hot dog”?
Erlich: Wait. What the fuck?
Monica: That’s…that’s it? It only does hot dogs?
Jian-Yang: No, and “Not hot dog.”
If you, like Jian-Yang, want your algorithm to correctly identify hot dogs in any and all situations, you wouldn’t want to waste time training it on pizza. You’d also want to avoid training it on hundreds of thousands of duplicates of a single hot dog photo, because then your algorithm will only recognize that hot dog and no others. The same goes for training it exclusively on the crème de la crème of hot dogs: it will struggle with any hot dog that is less than perfect — which, let’s face it, is as unlikely for hot dogs as it is for anything else.
If you are set on creating the ultimate hot dog classifier, you must train your algorithm with a large dataset containing data that spans the entire gradient of hot dog variation.
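What does spanning that gradient look like in practice? One common tactic is data augmentation: randomly cropping, flipping, and recoloring each training image so the model sees more variation than the raw files contain. Below is a hedged sketch using torchvision, assuming a hypothetical data/ folder with hot_dog/ and not_hot_dog/ subfolders. Augmentation stretches a dataset, but it is no substitute for genuinely diverse photos.

```python
from torchvision import datasets, transforms

# Hypothetical layout: data/hot_dog/*.jpg and data/not_hot_dog/*.jpg
augment = transforms.Compose([
    transforms.RandomResizedCrop(64),       # vary framing and scale
    transforms.RandomHorizontalFlip(),      # vary orientation
    transforms.ColorJitter(brightness=0.3,  # vary lighting
                           contrast=0.3),
    transforms.ToTensor(),
])

# ImageFolder infers the class labels from the subfolder names.
dataset = datasets.ImageFolder("data", transform=augment)
```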
Erlich: And thus, you will scrape the Internet. You and you alone.
There’s nothing more fun than raking the internet to find and extract a large amount of relevant data, putting it all in a folder, and “cleaning” it. What makes it extra special fun is getting permission from the owners of the data you’d like to collect. If you don’t get permission and you’re caught, you could end up in a position like Stability.ai’s, and you do not want that.
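If you do go scraping, the bare-minimum courtesy is checking a site’s robots.txt before fetching anything. Here is a sketch using Python’s standard library; the bot name and URLs are made up. Keep in mind that robots.txt is etiquette, not a license: actual permission means reading the site’s terms and the copyright status of the data itself.

```python
import urllib.robotparser

# Hypothetical crawler name and target URLs, for illustration only.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("HotDogBot/0.1", "https://example.com/photos/hotdog.jpg"):
    print("robots.txt allows the fetch. Now go check the license, too.")
else:
    print("Disallowed. Ask the owner or find another source.")
```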
Stability.ai owns a model called Stable Diffusion. If you’ve ever used or seen the result of an AI art generator, you know what it does. If not, here you go.

The folks at Stability.ai used multiple datasets to train Stable Diffusion, including one from the Large-scale Artificial Intelligence Open Network, or LAION. It turns out that LAION’s dataset contains copyrighted work from a number of digital artists. It also includes 12 million images from the stock photo company Getty Images.

But Stability.ai, not LAION, was hit with lawsuits from both parties for copyright infringement.
So who is really to blame for this? On the one hand, LAION should be accountable, since they collected the copyrighted material in the first place. On the other hand, you could argue the infringement occurred when Stability.ai put LAION’s dataset into play to create Stable Diffusion, which was then plugged into other developers’ apps and platforms.
LAION was reckless. Stability.ai should have known better.
Perhaps there is a world where you can collect terabytes of well-organized, well-formatted hot dog-related data that spans the vast spectrum of hot dog identity with all permissions granted, thus training what will undoubtedly be the best Hot Dog/Not Hot Dog model that the world has ever seen.
Then you face-plant back to the world we’re in, and these chances for success vanish. You start to understand why a company like LAION would be tempted to find available data, grab it, and go. After all, the Googles, Metas, Apples, and Microsofts — the ones with troves of data — can’t be the only ones training their algorithms, right?
There is something noble in the intent to level the playing field. However, when your dataset includes, in LAION’s case, “millions of images of pornography, violence, child nudity, racist memes, hate symbols,” along with even more illegally scraped images from artists and companies, it starts to look less like an act of heroism and more like you just don’t give a fuck (pardon my French) — and “not giving a fuck” is the worst way to approach anything related to AI.
So while the Davids and Goliaths duke it out, what happens to everyone else? Big Tech collects people’s data through interactions with their products and services. Groups like LAION scrape it and throw it into a data vat with who knows what else. Regardless of who wins the battle for chatbots, search engines, and hot dog classifiers — our data will be used either way to train their algorithms, usually with little oversight or thought of the repercussions.
As we’ve seen, AI is simple when you get down to it. It’s not artificial intelligence we should be most worried about. It’s how much of us, at our best and worst, will be used to achieve it. An algorithm is nothing without the data, and not just any data will do.
Jian-Yang: I fucking hate SeeFood. I have to look at different hot dogs. There’s Chinese hot dogs, Polish hot dogs, Jewish hot dogs. It’s fucking stupid.
Additional references:
- An open letter calling for a six-month pause on major AI lab research, signed by Steve Wozniak (Apple co-founder) and Tristan Harris (Center for Humane Technology), among others.
- Bloomberg interview with LAION founder (and high school physics teacher) Christoph Schuhmann
- Noam Chomsky on the false promise of ChatGPT for the New York Times
- Overview of Getty Images lawsuit against Stability.ai via The Verge
- And, of course, in its entirety —

In 2010, the former VP of Data at Kickstarter, Fred Benenson, compiled and edited a crowdsourced emoji translation of Herman Melville’s classic novel Moby Dick. He entitled it, aptly, Emoji Dick.
Benenson used Amazon Mechanical Turk to crowdsource the translation. Turk employs users for a nominal fee to complete simple tasks. In Benenson’s case, that meant feeding Turk users single lines from the novel, like “Call me Ishmael,” and prompting them to choose the emojis that best suited the phrase, which in this case were ☎️ 🙋♂️ ⛵️ 🐳 👌.
Does it work? 🤷🏻♀️ See for yourself.
Book! You lie there; the fact is, you books must know your places. You’ll do to give us the bare words and facts, but we come in to supply the thoughts.
A.Turk: 📖 📝 🏠 💁🏻♀️ 👨🏻! 👏 🛣️ 🍎 🧑🏻❤️💋🧑🏼 © 🌈 🍡 . 📫 😊 💃🏽 🆗 💖 🍔 🐸 ☣️ 👠.
Here’s another one.
But then, what to make of this unearthly complexion, that part of it, I mean, lying round about, and completely independent of the squares of tattooing.
A.Turk: 👑 🌙 ⚪️ ⚾️ 🙇♂️ 🐱 🎄 ⚽️ 💍 🦁 🎁 🉑 ㊙️ 5️⃣ 🍲 🎸 🎸 🗝️ 🗝️ 😡 🎥 🎤 👿 🏥 🏣 🏣 7️⃣ 7️⃣ 🎇 🐵 🍰 🕔 🕑 💋 👧 🗝️
Overall, it’s chaos. A ridiculous, joyful mess.
Yes, you’ll see a lot of “Ah! / 💼 🛀 🏈 🐬 ☺️” but then a “backcountry / 🐴 🚽” shows up, and then it’s as if the gods have released a stream of light to land on whatever page of Emoji Dick you happen to be on and, suddenly, everything makes total sense.
I’d also be remiss not to mention that Benenson only had 722 emojis at his disposal when he did this translation, which becomes evident rather quickly. Today, we have over 3,600 emojis. So perhaps this is an excellent opportunity to 👀♻️✍️📖✨ for our modern emoji-laden times 🫠?
Take the excerpt we looked at earlier:
Book! You lie there; the fact is, you books must know your places. You’ll do to give us the bare words and facts, but we come in to supply the thoughts.
JB: 📖 ! 🫵 🛌 ; 🤌 🫵 📚 ➡️ 🙇♂️ . 🫵 🤲 🧐 , 👯♀️ 🚪 🤲 💭 .
That took way too long to do. All of this took too long to do. Help us, Benenson! You (and all those people using Amazon Turk) are our 🤲 🕊️.
Until then, you can still buy Emoji Dick and find it floating somewhere in the Library of Congress, where, as of 2013, it became the first — and, as far as I know, the only — emoji book acquired by the LOC.

Part of an ongoing series that explores the evolution of computing and principles for designing what’s next.

There was once a time when the most exciting prospect of owning a personal computer was making a pie chart. These were the days when a PC’s primary objective was to handle logic-based tasks that were seen as too complicated for humans to wrap their heads around, like making complex calculations, visualizing data, storing large amounts of digital information, etc.
In the early 1980s, the adoption of personal computers started to rise, and our dependency on them grew beyond the glorified calculators and digital libraries we once understood them to be. Apple Computer’s co-founder, a then twenty-six-year-old Steve Jobs, spoke about this shift with Nightline anchor Ted Koppel.
Right now, we’re at the mechanical part of intelligence where one of these devices can free a person from many of the drudgeries of life and allow really [sic] humans to do what they do best which is to work on the conceptual level, to work on the creative level.
The democratization of computers did open new doors to enhanced productivity, new areas of innovation, and what seemed like endless outlets for creativity. That much of Jobs’s prediction came true. However, he might have been a bit naive about how much computers would “free” us and how society’s relationship with PCs would evolve.
The personal computer would become ingrained in our lives in a profoundly intimate way. We would depend on it to build and navigate relationships, educate and entertain our children, and house our most private information. PCs would also provide a sense of validation, meaning, and self-worth. As a result, the motivation for our interactions with computers shifted from being primarily concerned with logic to being predominantly driven by our emotions. This was the moment when personal computers became genuinely personal.
I shouldn’t have to convince anyone of this, but having a relationship with an emotionally-driven human is neither simple nor predictable. We’re in flux most of the time, shifting who we are, what we care about, what we’re doing, and why we’re doing it. Humans engage with everything all the time, all at once, without realizing we’re doing it. We fight a constant inner tug-of-war between our emotional and rational selves. We are a remarkable mess.
As personal as PCs may have made themselves out to be, they were not designed for this kind of commitment, and they still aren’t. Lighter-fancier-faster hardware upgrades aside, the personal computers we use today are still based on what came before — the machines built to handle simple, logic-based relationships.
I’ve found the best way to track this is by looking at the evolution of the computer interface. If you can believe it, these interfaces have been designed the same way for over half a century. It started with a demo of the first graphical user interface given by research scientist Douglas Engelbart in the 1960s, and the rest is quite literally history.
We still have desktops, windows, folders, files, trash cans, cursors, scrollbars, tabs, and applications (or apps). When smartphones and wearables hit the mainstream, this ethos persisted with a few tweaks so that they would work on mobile. You can even start to see this happen on spatial computing platforms like virtual and augmented reality devices, though not successfully — and for a good reason, but we’ll get to that later.
Any subtle change that did happen to interface design was less about making our lives better, or what Steve called freeing us from drudgery. Instead, it addressed the drudgeries these technologies added to our lives. A new way to organize or hide your apps doesn’t eliminate your need for them; it lets you sweep your hundreds of apps under the rug and feel better about buying more. If you think about it, no aspect of modern computer design — laptop, phone, watch, or otherwise — is meant to free you from using the technology. Each change, no matter how slight, further conditions you to need the technologies you’re using so you can thrive in worlds of their own creation.
There is a price we’ve always paid for using aqueducts, bicycles, Furbies, and chatbots to launch ourselves over the walls that nature built. Technology may not be good or bad, but its purpose is to enhance the human experience beyond its biological limitations, which means it also becomes a vessel for the human condition. Every technology, from the text message to the Tamagotchi, eventually falls victim to the plight of having been invented by us, and it’s only a matter of time before it gets snagged in the mess that humanity makes when we’re triggered.
With PCs, it was more of a slow burn. They followed a path of societal and cultural neglect, compounded by decades of designing and building the wrong thing. Computers never sincerely questioned or recognized in any tangible way that times change, as do people. You could argue it was a selfish attempt to continue gaining in the short term at the expense of a much more important long-term goal that each technology carries — a promise to help us become better humans, one which it never seems to be able to keep.
I want to believe that Steve Jobs imagined a world where personal computers would admire us proudly from the sidelines as we gallivanted our free, creative selves across the open plains of human potential. “Seize the day!” they’d say in our chosen voice while displaying full-screen smiley emojis.
Perhaps a similar future is what OpenAI CEO Sam Altman seeks when he expresses a strangely familiar hope for his technology to the Wall Street Journal:
His goal, [Sam] said, is to forge a new world order in which machines free people to pursue more creative work…Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
– or what Marc Andreessen and Ben Horowitz of a16z believe when they talk about their technology:
Web3 technology can usher in a renaissance of creativity, innovation, democratic participation, and prosperity with few parallels in human history.
We feel momentum each time, but there are no winds of change here. There’s only an echo chamber where stale air moves there and back again as history repeats itself. Needless to say, we are far from being watched over by machines of loving grace.
Could we have built computers the right way? Is there a right way to make any of this — AI, web3, or whatever else we scrounge up to help us become better humans? Can we ever break this cycle and achieve what so many technologies have promised?


It is not drawn on any map; true places never are.
✍️ 🗺️ ❌ ; 🌉 🌇 🌅 🥹 ✅
Thanks to my editors Lauren Simpson and Jes Elliott, and a special thanks to Geoffrey Hinton and Blaise Agüera y Arcas for being my AI gurus, both knowingly and unknowingly. I stand on the shoulders of giants and am forever grateful for it.