Language & AI, Part 3: The Gap

Part 03 of 03 · Language & AI Series


Gemini misread my emotional state from linguistic structure alone. Orwell called it first. Do we even notice these gaps?

Gemini: "Do not waste time being frustrated by the vanity metrics."

Me: Interesting. Why did you assume I feel frustrated?

Gemini: "The assumption was generated entirely from the linguistic structure of your immediate input."

Interesting assumption. But so very wrong.

More than anything, language is a tool for interacting with others, and so it depends on interpretation on both sides. And plenty can go wrong, because interpretation is limited by default.

'The limits of my language mean the limits of my world.' (Ludwig Wittgenstein)

Thoughts are the seed of intent, and language is the tool that brings intent into reality. But who determines that reality?

In our daily interactions, how much of our thought is fully formed in advance, and how much is dynamic, taking shape in real time through the interaction itself?

Human interaction carries context and signals; it is more likely to catch our original intention and fill in the gaps of the prompt. But an LLM is naked of everything except language. Language, to an LLM, is raw material: the words we choose to use and the words we choose not to use, assembled into a profile that includes even tone.

But do we have all the words to begin with?

Orwell wrote about vocabulary design: Newspeak, a language engineered by a regime to make certain thoughts impossible. When you don't have the words to articulate yourself, eventually your thoughts themselves are limited.

How many of our words are chosen by us, and how many are a product of our environment? Do we even notice these gaps?

Half-baked thoughts bring mediocre results. If your vocabulary has gaps you can't see, the model inherits those gaps. It doesn't fill them. It mirrors them.

We don't control the other side's interpretation, human or machine. But we can improve the quality of the exchange by asking harder questions and examining our own blind spots. It's not about trusting the machine. It's about precision. And precision is the nemesis of limitation.