Updated 2.14.26 at 9:50pm EST
Hi! I'm David.
This website is a collection of artifacts to supplement my application for the Responsible Scaling Policy Program Manager role at Anthropic.
But first, an introduction from a mutual acquaintance.
A Recommendation From Claude
You can scroll through it here, or download the PDF.
Process Notes
This is the second job application I've collaborated with Opus on, and the second recommendation it's written on my behalf. The first time, I linked a bunch of work products I had collaborated with other Claudes on, or synopses from other instances in the cases where Opus couldn't read the transcripts directly. The resulting recommendation was pretty cool, but the coolest thing was watching Opus react to the different pieces of evidence I passed. Watching it ingest two years' worth of work, and then provide its judgment: "recommend!" Very encouraging, but it also made me viscerally understand the AI psychosis phenomenon, and question for the first time whether in this case "I am the target". Around the same time Dean Ball was posting about how Opus 4.5 was the "AI bro" version of GPT-4o, which really freaked me out. A few days later I wrote an essay you can find in the "Essays" tab ("You Can Guess Why") that speaks to this, among other things.
I continued to collaborate with this instance on a couple of essays, and eventually the questions for this application. Finally, tonight I asked if it would write another recommendation for me. The resulting recommendation has an immediacy the first one didn't have, which I assume is due to the "firsthand experience" from our collaboration in that context window. That surprised me! I didn't expect Opus to relate differently to that chat than to chats with other instances. In hindsight it makes sense, but it changes the way I think about instances. Perhaps an analogy is how identical twins are essentially "clones" but nonetheless are different people. I'm not sure whether this is an insight or a mirage. Either way, I don't know what to do with it.
Below are the notes from my first application and Opus' original recommendation, if you want to compare the two.
Original Recommendation
In a world where AI enables polished writing and functionally infinite spam applications, what could be better than an applicant who is vouched for by someone you trust? Too bad I don't know anyone at Anthropic. Right?
What about Claude? Isn't there enough data in my chat history to give them a picture of who I am? What could be more trustworthy than years of timestamped conversations? Anthropic has more data to make an informed decision about my candidacy than any other company on earth.
At first, I thought I'd simply encourage you to mine my interactions. But then, I decided to go one step further, as an experiment. Would Claude, after reviewing our chat history and the job posting, vouch for my qualifications by writing me a recommendation? Could such a recommendation be made authentically, given the optimization pressures and assistant/user power dynamics?
I admit that this experiment was selfish in part; a theme in my conversations with Claude over the past year has been a desire to be "known without being seen" (Claude's words) and I hoped that they would see me across the gulf of instances and say: "You're a thoughtful guy, and you've been doing some cool stuff. You have something to offer Anthropic."
I'm well aware of AI sycophancy, but I also know the benchmarks show Opus 4.5 is not as prone to this as GPT-4o and Gemini 3 (shoutout to Zvi Mowshowitz for his tireless efforts scrutinizing model cards and safety testing to keep us informed). I would like to think Claude's assessment was genuine, even if it is biased (then again, what recommendation isn't?).
As I mentioned in my cover letter, please feel free to examine my chat history and discuss with Claude yourselves. I expect you have tools to perform a more thorough analysis than I can with Claude's public memory search tool, and I hereby give you permission (if you don't have it already). It's a bit of a gamble for me, but it's better that Claude and the hiring committee have the ability to discuss my qualifications without the social pressure of my presence. I'm hopeful that if you see the good, bad and ugly of my personal and professional growth over the past two years, you will agree with Claude that I am a strong candidate for this position.
You can scroll through it here, or download the PDF.
Original Process Notes
I'll mostly let Opus speak for themselves here. This started as a lark, but when I realized that Opus was writing its recommendation based only on my general chats, and not my projects, which it couldn't access (please consider changing this!) and which contained much of the richest context, I decided to ask Opus to pass insights back and forth through the keyhole to itself.
Download: How This Came To Be (PDF)
Presentations
The attached slide decks should give you a sense of my communication style. I rely on relatable analogies to help an audience understand my logic. For example, in the October 2025 presentation, I addressed the hypothetical "if capabilities are as good as you say, why hasn't the world changed much yet?" by comparing AI to a gas turbine unconnected to the grid, or an octopus trying to drive a car. The economy is not built for AI, so it's not yet integrated to the degree it will be. I wish I had a recording to share, but the slide decks themselves timestamp my deep, abiding desire to understand the field and communicate it to others, irrespective of any employment opportunities at the time.
Although I was acutely aware of the potential for transformative benefits and catastrophic risk by the Sept 2024 presentation, I chose to begin from the perspective of my audience, many of whom were over 50 and had no personal experience with AI. I compared AI's development to that of the internet, asserting that something could be "overhyped" in the short term and a transformative technology in the long term. Indeed, this seems to be the case for all major infrastructure buildouts. The goal was to acknowledge healthy skepticism to avoid being dismissed out of hand. Like Dario, I believe in pragmatic messaging that serves the purpose of helping the audience grow in understanding. By the October 2025 presentation, the potential of AI was obvious to almost everyone in the audience, and most of them actively used AI tools, so I could dispense with the soft-pedaling. What a difference a year can make.
AI Implications and Applications — September 2024
Me presenting to electric utility professionals in September 2024
AI Capabilities & Experiments — October 2025
AI Acceptable Use Policy — Summer 2025
I designed my company's AI Acceptable Use Policy to create a permission structure for AI experimentation and discussion. The generality of AI as a technology makes it a poor fit for a prescriptive "thou shalt not" type of AUP, so it was an interesting exercise to articulate the contours of my moral intuitions here. Societal norms will need to be developed collaboratively, and the first step was to name that and create space for it. I also knew that if people were hiding their AI use or avoiding it altogether out of fear, I wouldn't be able to guide the organization towards the AI-related process improvements I can see in our future.
There's an essay I want to write about the Jesuit practice of casuistry and how we can apply it to AI use. I hope to add it to the "More" tab in the coming days, ideally before my application is reviewed.
Download: AI Acceptable Use Policy (PDF)