One of the things that all programmers learn is the value of precision — computers reward precise thinking.
They do exactly what we tell them to do, and that forces us to be very precise about what we want.
When I first started investigating the use of LLMs with code, I was terribly precise. When I asked a question, I always ended it with a question mark. I often still do. Somewhere in the back of my mind I must think that precision is always necessary, or that it will give me better results, but if I am honest with myself, it is mostly habit.
How far can we go with imprecision?
Let’s try something.
Type a noun phrase into ChatGPT. It treats it as a question:
We can do the same thing with code:
Here the LLM isn’t showing me the code for TaskList or describing it. It is giving me changes to the TaskList code that it generated earlier in the session.
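The original screenshots aren’t reproduced here, so to make this concrete, here is a minimal sketch of the kind of TaskList a session like that might have been building. The language, the fields, and the method names are my assumptions, not the code from the session:

```python
# Hypothetical sketch: the sort of TaskList an LLM session might produce.
# Names and behavior are assumptions for illustration, not the original code.
class TaskList:
    def __init__(self):
        self.tasks = []          # each task is a plain string
        self.cursor = 0          # index of the "current" task

    def add(self, description):
        self.tasks.append(description)

    def remove(self, index):
        if 0 <= index < len(self.tasks):
            del self.tasks[index]

    def current(self):
        return self.tasks[self.cursor] if self.tasks else None
```

Once something like this exists in the session, typing just “TaskList” is enough for the model to propose the next round of changes to it.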
All of our prompts are interpreted in context. Not just the context that the LLM acquired during training but also the context of what we’ve done in the session.
For programmers, this can be unnerving.
We are so used to being precise.
The conclusion I’ve reached is that when we program with LLMs we need to unprogram ourselves a bit: we need to learn how to vary our level of precision when it suits us.
Here I wanted to do something more elaborate, so I tried to be very precise. Despite the typo (“from from cursor” instead of “find from cursor”), it understood me and gave me the changes I wanted:
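Again, the screenshot isn’t reproduced here, but the change was along these lines; a hedged sketch, assuming the hypothetical TaskList above and a search method I’m calling find_from_cursor:

```python
# Hypothetical sketch of the change I asked for: the TaskList from the
# earlier sketch, with a method that searches forward from the cursor.
class TaskList:
    def __init__(self):
        self.tasks = []          # each task is a plain string
        self.cursor = 0          # index of the "current" task

    def add(self, description):
        self.tasks.append(description)

    def find_from_cursor(self, text):
        # Return the index of the first task at or after the cursor
        # whose description contains `text`, or None if there is none.
        for index in range(self.cursor, len(self.tasks)):
            if text in self.tasks[index]:
                return index
        return None
```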
So, unprogramming, what is it?
To me, it is not about being sloppy. It is about understanding the context that you are building with an LLM and leveraging it.
A lot of this happens naturally.
You are tired of typing, so you decide to be a little vague, and you still get what you want. If you don’t, you just tighten up your precision.
The next level of unprogramming is learning how to use LLMs as collaborators rather than as a vague programming language.
A good way to get yourself into this mode is by asking them to change something they’ve produced rather than just giving them the same prompt as before with greater precision.
To be fair, there is an aspect of programming that we still have to apply often. Today’s LLMs often have to be told to approach problem-solving procedurally. This doesn’t mean that the solutions they generate will be procedural, but they do need some help at times.
Remember, you are in control.
As always, breaking big problems into smaller problems helps — so does adding precision when necessary.