there's this thing that happens when you work with AI every day.
you open a new chat. and you start explaining yourself. again.
who you are. what you're building. how you like things. what tone. what frameworks.
and the AI goes "got it!" and gives you something decent.
but it's just decent. it's the AI equivalent of someone nodding along while checking their phone. close enough to pass. far enough off to feel slightly wrong in ways you can't pinpoint.
i was doing this 15-20 times a day.
across claude, chatgpt, automations, different projects.
if you use AI for anything beyond casual questions..
if it's part of how you build, how you think, how you work.. you know this tax.
every single session started from zero.
like introducing yourself to someone with amnesia.
except you also can't remember what you told them last time vs. this time.
and at some point i stopped being frustrated at the tool and started wondering..
wait. this is my fault.
🔌 the wrong language
not my fault like user error. more like..
i'd been showing up to every conversation with AI speaking human. paragraphs of context. vibes about preferences. assumptions about what should be obvious.
and the model was doing its best to interpret all of that. reading between lines i didn't know i was writing. guessing which parts were preferences vs. constraints vs. noise.
i was asking a machine to understand me the way a friend would.
friends fill in gaps with shared history. machines don't have shared history.
they have a context window. and i was filling mine with ambiguity.
andrej karpathy called this context engineering.. the idea that what you put in the context window matters more than how cleverly you prompt. and the word that caught me was "engineering."
not context writing. not context explaining.
engineering. structure. precision. declaring instead of describing.
so i tried it. took everything i'd been pasting into system prompts.. my role, my voice, my projects, my output preferences.. and put it in a JSON context profile.
not a document. not instructions. a structured file with fields and values and nested objects. something a machine could parse instead of interpret.
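here's a sketch of what mine roughly looks like. to be clear: there's no standard schema for this. the field names and most of the values below are illustrative, whatever structure you commit to is the structure.

```json
{
  "role": "builder using ai across multiple projects",
  "voice": {
    "style": "lowercase sms style copy",
    "tone": "perspective not prescription",
    "never": ["performed enthusiasm"]
  },
  "projects": {
    "active": ["newsletter", "automations"]
  },
  "output": {
    "format": "short paragraphs",
    "constraints": ["no filler"]
  }
}
```

notice how "never" as an array reads like a hard constraint, not a vibe. that's the whole difference.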
🧪 what declaring does
when you give an AI prose about yourself, it has to decide what matters. it's interpreting. some things it gets right, some wrong, and you end up with output that's close but usually slightly off.
JSON skips that whole layer. each field is a declaration, not a suggestion.
this is my role. this is my voice. these are my constraints. no ambiguity about what's a preference vs. what's non-negotiable.
researchers at william & mary found structured formats reduced errors by 60% vs. prose. i didn't know that when i started. the experience just matched.
the other thing is compression. a brand profile that takes 500 words in prose might be 150 tokens in JSON. more signal per token. and when your context window is already loaded.. that efficiency isn't a nice-to-have. it's the difference between your profile fitting or getting truncated.
machines like structure. that's it. that's the whole insight.
every session that starts from zero is a tax. the imprecision tax. the accumulated cost of never defining yourself clearly enough for your own tools to understand you.
you wouldn't talk to a database in paragraphs. you wouldn't send an API a diary entry.
so why was i talking to my AI tools that way?
🪞 the part i didn't expect
the JSON context profile works everywhere. claude, chatgpt, n8n automations, custom builds. one file. every tool gets the same context. when i update something, every tool gets the update.
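the wiring is almost embarrassingly simple. this is roughly what "one file, every tool" looks like in practice.. the filename and the framing sentence are my choices, not a standard, and the profile is inlined here so the sketch runs on its own:

```python
import json

# one profile file, loaded the same way by every tool.
# inlined here so the sketch is self-contained; in practice:
# profile = json.load(open("context_profile.json"))
profile = {
    "role": "builder using ai across multiple projects",
    "voice": {
        "style": "lowercase sms style copy",
        "never": ["performed enthusiasm"],
    },
}

# any tool that accepts a system prompt gets the identical context.
# dumping the JSON verbatim preserves the declarations instead of
# flattening them back into prose.
system_prompt = (
    "context profile for this user. treat each field as a "
    "declaration, not a suggestion.\n\n"
    + json.dumps(profile, indent=2)
)
```

claude and chatgpt take that string as custom instructions. n8n takes it as a system message node. update the file once, every tool inherits the change.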
anthropic's building infrastructure for this.. persistent structured context as a protocol, not a workaround. the industry is moving toward what i stumbled into out of frustration.
simon willison talks about treating prompts like a scientific method. ethan mollick keeps showing that domain expertise matters more than prompting tricks. both pointing at the same thing.. quality in, quality out. the profile is the "quality in" part.
but the part i didn't expect: building it out taught me everything.
because when you have to put yourself into a schema.. when you have to fill in "voice: *_" and "never: []" and "priorities: [*___]".. you find out how much of your own identity you've been keeping vague. on purpose. without realizing it.
i knew my voice was "casual." but what does that mean? the profile forced specificity. "lowercase sms style copy". "perspective not prescription". "no performed enthusiasm".
i knew i was building things. but which ones matter right now?
the machine needed me to be precise about myself. and i'd been imprecise for years.
🌀 the question underneath
here's where i'm not sure.
models are getting better at remembering. claude has project memory. chatgpt has its memory feature.
maybe in a year the JSON context profile feels like when people used to manually organize browser bookmarks. quaint.
my guess is structured context will outperform implicit memory for serious work.
because implicit memory is the model's interpretation of patterns it noticed.
structured context is you declaring exactly what matters. but maybe i'm wrong.
what i do know is this: the exercise of building the profile..
the act of defining yourself precisely enough for a machine to understand..
that's useful whether an AI reads it or not.
because if the bottleneck was never the tool.. if it was always the clarity of your own input..
are you communicating imprecisely?
are you blaming AI for your inefficiencies?
—riley