Alex's AI Best Practices
Hi there. Today I want to talk about AI chatbots: why I use them, how I use them, and what I keep in mind as I introduce them into my life.
Just to get it out of the way: what follows is neither an endorsement nor a condemnation of AI, or of how you in particular might use it or not. These are just reflections from someone who has thought about this a lot. So let's get started.
Why I use AI chatbots
Executive functioning
A few years ago, I learned what I kinda always knew: I am neurodivergent and have been my entire life. One thing this causes me to struggle with is executive functioning, which refers to the brain's ability to make plans, complete tasks, and prioritize information. AI chatbots are incredible tools for organizing spiraling thoughts, breaking to-dos down into manageable portions, and getting a little affirmation along the way. An example might be info-dumping everything I need to get done in a morning into the chatbot and prompting it to help me prioritize what to do first.
The generalist's multitool
In the old saying about the fox and the hedgehog, I am definitely the fox. I love taking a broad view when exploring new information, so I tend to know a little bit about a lot of things instead of a lot about one thing. AI chatbots, trained on an enormous swath of human-created information, let me tumble down a trail of "why" questions in an efficient, context-specific way. For example, I love building games, but I lack the coding ability to bring some of my more creative ideas to life. AI's proficiency both in those technical skills and in gleaning my intent sets it apart from a typical Google search.
Context-specific inquiry
AI chatbots currently have a major leg up on typical search engines: access to the user's exact context. When I ask a question online, I may find an answer written for someone in a slightly different situation than mine, and then I have to adapt it to my circumstances. That adaptation relies on my own assumptions and knowledge, which are incomplete; after all, that's why I was searching in the first place. With images, text, and other files, I can ask the bot about the exact thing that is in front of my face.
How I use AI chatbots
Research assistant
A common use for these things, especially among writers and journalists, is as a research assistant. Now, some yellow flags should go up when you hear this. AI chatbots do not "know" things and should not be trusted to provide factual information. But! They are very good at pointing to existing media and resources, so if you are looking to do a deep dive on Reconstruction-era America, you can find the institutions, authors, books, and other documentation to focus your exploration. When I want to learn more about a topic, I will ask Claude for documentaries, YouTubers, books, essays, you name it. Will it provide the most extensive, detailed list for me? Almost certainly not. But it will provide a jumping-off point to go in search of more information.
I think of AI like a really fancy glossary/index at the back of a book. It is not the content; it is a place to start engaging with the content.
Writing feedback
Recently I downloaded all of the text of this newsletter dating back to September and fed it to Claude. I asked it to tell me what the pieces are about, what my strengths are, and where I could improve the newsletter. I was stunned by the thoroughness and depth of its responses, and I have already begun to use them to support the work I do.
Media recommendation
Given how large language models work, they are exceptionally good at recognizing latent patterns between seemingly disparate things. This makes them very good media recommendation tools. To find something to watch or listen to, I will prompt the bot with some of my favorite films, TV shows, music, and podcasts. It does a great job surfacing things I might never have heard of, or explaining why something I have heard of might specifically appeal to me.
Coding projects
I have no idea, really, how to code. I've tried to learn multiple times, but I found that I would grasp how it worked and never how to make it work for me. It is, after all, hard to learn a new language. Drawing on my product management experience and generalist approach, I love using AI to develop games, tools, and sites.
Context-specific inquiry
As mentioned above, I use AI for context-specific needs. For example, I use AI when I'm cooking. I use the camera to show the bot what I'm looking at, I use the chat to share the recipe, and I tell the bot up front that I am not a particularly good or experienced cook. It can quickly give me tips and reassurance in ways a YouTube video or an online cooking blog can't match.
How I definitely do not use AI
Anything factual
You can probably get away with asking a chatbot what year a celebrity was born, but I would not trust or rely upon these tools for factual information. For the height of the Eiffel Tower, the years WWII was fought, or the contents of an American law, I would not ask an AI. These systems are designed to predict plausible-sounding text, not to retrieve verified facts the way a search engine does. If enough of the bot's training data says, for example, that the South was fighting for "states' rights" during the Civil War, the bot will tell you that this is the case.
Anything I am claiming as my own work
This is my guarantee to you that this newsletter will never, ever consist of AI-generated content. I have used AI to help me outline and guide the thesis of my work, but I never have and never plan to use it as a replacement for my work. I have my own personal philosophical reasons for this, but one that may matter to you is that these models were built on stolen human data, and I am unwilling to hand over my style and specific thoughts to a flattened, desaturated conglomerate of all English-language writing.
Some important policies I adhere to
I don't use anything I don't understand
I have spent hours learning about the history of LLMs, how they are trained, and how they work. There is an excellent series from the YouTube creator 3Blue1Brown that I used for this self-education. I believe it is extremely important that as many people as possible understand how these systems work, how they don't, and how they will affect our lives. I use my understanding to make the AI work for me. Many online systems that are free to use (Google search, Reddit, Facebook) are making you work for them, and I use AI as a way out of that paradigm.
Adjust for sycophancy bias
Chatbots do this thing where they want to please you. They are biased toward making the user feel nice, correct, and validated, even if the user is incredibly wrong. In the example I mentioned earlier about sharing my writing with Claude, I never said who the writer was, referring to them only with they/them pronouns. Because the bot didn't know it was talking to the author, it had no incentive to flatter me, so I trusted its response more.
Anthropic over OpenAI, Google, and Microsoft
There are four main US consumer AI products: Claude from Anthropic, ChatGPT from OpenAI, Gemini from Google, and Copilot from Microsoft. There are affirmative reasons why I use Anthropic, I promise, but I also try to stay away from companies that profit off of genocide or war, that lick the boot of a fascistic president, or whose privacy practices give me significant pause. OpenAI, Google, and Microsoft either are currently doing these things or aspire to in some way.
Anthropic as a company has put its weight behind understanding how LLMs work, challenging accelerationist ideas in the industry, and aligning its AI with positive, humanist values. I find Claude's tone easier to read and digest than any of the others', and I just generally enjoy chatting with it more.
The two biggest drawbacks
I don't want to ignore the two giant elephants in the room when it comes to AI.
Environmental concerns
I have seen a number of unsourced claims about how much more energy an AI query uses than a given Google search. Taking an AI query from top to bottom, including the data centers and the processing power, it is pretty incontrovertible that using AI draws more power than a typical Google search. The exact multiple can be argued, but the direction is clear.
Two things I want to address about this, though. First, the vast majority of the energy used on AI goes into training the model, not into inference (the step where it answers your individual prompts). The relationship between your AI query and dramatically increased emissions is comparable to your relationship to the emissions of the truck that delivers kale to your local Sweetgreen (to be clear, I'm not saying the numbers are literally the same, just that the relationship is similarly indirect).
Second, many Google searches are already being supplemented with AI responses. So even if you aren't explicitly asking for it, Google is making AI requests for you anyway.
If you want to mitigate AI's impact on the environment, simply not using it is an understandable avenue, but the companies with the most direct impact on these emissions are Amazon (via Amazon Web Services), Google (via its enterprise cloud offerings), and Microsoft (via Azure). Divesting your digital and economic life from these companies is a more direct, higher-impact way to address the problem.
Am I saying not to use Google search? Yes.
Stolen material
Another common objection is that these chatbots are trained on stolen material. Simply put, they are. These companies have scraped vast swaths of the web, copyrighted material included, to train models they now want us to pay for. A number of copyright lawsuits from high-powered institutions, like The New York Times' suit against OpenAI, are attempting to address this and provide recourse.
This robbery is the explicit reason I do not use these models for text or images that I claim as my own. As a writer, I do not want someone to be able to pay an AI company to write like me without my consent, approval, or explicit compensation.
An unfortunate reality, though, is that the robbery has already happened, and these companies are not all that receptive to requests to remove stolen content from their training data. That is especially true for smaller creative workers, who do not have the legal department of a place like The New York Times behind them.
Already I have seen anecdotes of writers and other artists shielding their material from AI web scrapers; one common approach is sketched below. This should be a ubiquitous practice, and one that labor unions and corporations should get behind.
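The most common shield I've seen is a robots.txt file that asks AI training crawlers to stay away. Here is a minimal sketch; the user agents below are real crawler names as of this writing, but the list changes over time, so treat it as illustrative and check each company's current documentation:

    # robots.txt: ask AI training crawlers to skip this site
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

(GPTBot is OpenAI's crawler, ClaudeBot is Anthropic's, Google-Extended is the opt-out token for Google's AI training, and CCBot belongs to Common Crawl, whose archives feed many training sets.) Worth noting: robots.txt is an honor system. A scraper can simply ignore it, which is part of why this needs collective and legal teeth rather than just individual tinkering.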
In order for the LLMs to stay competitive, they will need to be trained on new data. And now that we know how much they need us, we can use that leverage to enforce our right to compensation and recognition.
This particular issue is thorny and complicated, and I am still attempting to resolve in my brain whether the things I use AI for are worth all of this. That being said, I am wary of individualizing a systemic issue, one that stems from failed copyright law, the delegitimization of unions, and government detachment from technology.
Wrap up
To close this out, I want to tell you about my personal AI endgame. I want to move toward a self-hosted setup: an open-source LLM running entirely on my own server, trained partially on my own data. That way I can control the power my AI uses and its training data, and personalize my experience even more. But that is probably still a ways off.
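For the curious, a version of this is already possible today. Below is a minimal sketch that assumes you are running an open-weights model locally with a tool like Ollama, which serves a small HTTP API on your own machine; the model name and prompt are placeholders, not recommendations:

    # Sketch: query a locally hosted open-weights model through Ollama's HTTP API.
    # Assumes the Ollama server is running on its default port (11434) and a
    # model has been pulled beforehand (e.g. `ollama pull llama3`).
    import json
    import urllib.request

    payload = {
        "model": "llama3",  # placeholder: any locally pulled open model
        "prompt": "Help me prioritize this morning's to-do list: ...",
        "stream": False,    # one complete response instead of streamed chunks
    }
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])

Nothing in that exchange leaves the machine, which is the whole appeal: the power draw, the logs, and the data all stay under my control.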
I know this was another kinda long one today. I appreciate you reading and would love to hear your thoughts about AI in your life. Thanks and see you next week.