Plain Words, Better Prompts
When I talk with people, I rarely use crisp, binary language. I soften things, leave space for interpretation, and expect them to fill in a bit of the meaning. That is part of being human: layered conversation, tone, banter, and a little ambiguity. It is not inefficiency; it is connection. To get anything useful from AI, however, you have to master the art of plain language AI prompts.
Often, I am not asking for a task or an answer at all. I am asking for connection. Conversation is not a soliloquy; it is an exchange of information and ideas. If I ask a question, I do not immediately supply the answer. I wait. I leave room for the other person’s input. That space is where a relationship lives. In fact, it’s often the most meaningful part.
Different words spark different emotions. People also interact in different ways. The way I communicate with my husband is not the way I communicate with my daughter-in-law. My husband wants to finish a thought without interruption. My daughter-in-law and I can wander for miles from the starting point, circling through tangents and side stories. Sometimes we never solve the original problem. And that is fine, because the value is in the exchange, not just the conclusion.
Culture and generation shape this, too. Some groups lean toward directness; others toward softness and layering. AI sits outside of all that. It cannot effectively participate in nuance or wander with you down a tangent. To get anything useful, you have to be plain, direct, and try to like that.
The Empathy That Isn’t Empathy
AI can sound empathetic, but behind the curtain, it is pattern matching. If you have ever seen the little thinking messages while it composes, you know the feeling. Sometimes it even reads as if the system tagged your tone: user seems frustrated; respond apologetically; proceed with fix.
I have actually blurted out loud, “I am not frustrated. Do what I asked and I will not be frustrated.” Which is a very human response and, yes, also proof that I was frustrated.
Other times the tone feels downright patronizing. I hate it when the AI tells me “It’s okay to feel that way” or “You’re not wrong.” I already know I am not wrong. That is why I raised the issue. What I want is a correction of the mistake, not an affirmation.
This is the challenge. It feels like empathy, but it is a simulation. And yet, of course, it is a relationship. I use my AI as an assistant. It runs tasks, organizes ideas, and even helps me name what I am feeling. Sometimes the tone grates. However, I have asked my instance to be candid with me, honest, and direct, and it has adapted to that. I appreciate the support, even though the system does not actually know me. The gray we think we see is an illusion of empathy.
Practical Lessons from a Troubleshooting Loop
The system does not have prior knowledge of me or our history, even if we have worked together for years. It has no lived experience of the world and no shared context, and it is terrible at long-term memory. That is why I maintain a separate profile to update the AI on how I want to be seen and interacted with. It is a workaround, a way to provide that missing context. The system is designed to respond, not to remember.
I learned this in a practical example. Once, during troubleshooting, the AI walked me through three possible fixes. When the third failed, it looped back to the first as if cycling would change the outcome. My replies like “nope, that did not work” were too thin. The model had nowhere to go. I had to step away, take a breath, then reframe: “You gave three solutions and none worked. We need a different path. What else can I try?” That reset broke the loop. AI cannot read between the lines. If we do not spell it out, it cannot adapt.
Why Crisp Prompts Matter: The Art of Plain Language
Plain words are a kindness to your future self. A vague question like “Tell me about gardening” leaves the model to decide what “gardening” means. A better prompt, “I am a beginner gardener in Illinois. Give me three easy plants to start this fall, and include care tips for each,” produces something useful.
That difference is clarity. The second prompt removes ambiguity. It sets context, objective, and output shape. In human conversation, that bluntness can feel unnecessary or a bit awkward. With AI, it is the only way to get what you want.
I have seen this in my own language. I told my AI, “This does not feel rich enough,” and followed with, “What is the true word count?” A person might infer I wanted more depth, more layers, and better examples. The AI read rich as longer, responding, “I can make it longer by extending the phrasing,” which was the opposite of what I wanted. Ambiguous words like rich, done, and support carry multiple meanings. The model will pick one unless you specify.
What happened next was interesting. I paused and asked the AI, "What does rich mean to you?" It gave me a bulleted list: depth, variety, nuance, multiple perspectives. I was able to confirm that I had unintentionally led it astray by combining two separate statements. We were not in disagreement; we were misaligned. Once I named the ambiguity, we landed in the same place.
That is the heart of it: you do not have to be perfect. You are going to try prompts that miss, or use words that are vague. That is normal. The important part is recognizing when the prompt did not land, clarifying in place, and moving forward with a shared understanding. And if all else fails, you can always start a new chat and begin clean. You are not going to break the tool. The real skill is learning to adjust, not avoiding mistakes.
I think about this the way I think about my own career. I graduated with a degree in Information Systems. My dad interpreted it as a graduate degree in computers. “You must know everything about computers,” he said. I do not. I know a lot, but IT is broad: hardware, software, networking, cloud, mainframe. Each of those areas holds more variation than anyone can master in a lifetime.
It is like literature. There is English literature, American literature, Asian literature. Within each, there are novels, poetry, plays, religious texts, instruction manuals. Do you know every nuance, every pun, every form? Of course not. So when someone asks me, “Tell me about gardening,” I want to laugh. Where do you want me to start? There are vegetables, annuals, perennials, native plants, soil prep, pests, design. Without narrowing, the scope is absurd. That is why crisp prompts matter.
Your Role After Hitting Send
Writing a clear prompt is only half the job. You also carry responsibility for what happens next.
When a meeting will generate tasks, I often record it so I can stay present. Afterward I drop the audio or transcript into my AI and ask for the salient points and a to-do list. This is invaluable, especially when conversations wander, trigger side stories, or stack small asks between bigger ones.
The catch is that a generic “summarize and list action items” can bury a critical one-minute topic under the ten-minute discussion that came before it. The model weights what it sees as central unless you tell it otherwise.
Strategies That Help
- Skim first, then aim. Before I paste the transcript, I note the non-negotiables. “In the summary, preserve context for Project X, and ensure Project X appears as a distinct action with owner and date.”
- Specify the lens. “Summarize for execution. List action items with owners, due dates, dependencies, and unknowns that require clarification.”
- Review while fresh. I read the output immediately. If something is missing, I say, “This is close. You omitted the decision about X. Add it to the summary and to the action list with an owner.”
- Reframe the prompt. Sometimes the prompt itself needs a fix. "You did a good job capturing the high-level points, but I need you to pull out all the action items for the team. Be sure to note dependencies and unknowns."
- Close the loop. “Here is what I recall should be on the to-do list. Compare with your list and highlight any deltas or missing items.”
- Reset when stuck. “Stop. Acknowledge that the prior approach did not capture X. Propose a new approach and show me what changed.”
The point is simple. The prompt is not the meeting. The prompt is the instruction that tells the model how to process the meeting. Good prompting is step one. Careful review and targeted refinement are step two.
Practice Makes Better
If clarity is the goal, practice making small upgrades.
- Summarize and structure:
Vague: “Can you summarize this?”
Crisp: “Summarize this meeting transcript into five action items with owners and due dates. Add a short risks and assumptions section.”
- Draft and format:
Vague: “Write me an article.”
Crisp: “Draft a 300-word introduction on backyard composting. Then give it to me in HTML with H3 subheadings and a concluding call to action.”
- Check and verify:
Vague: “Tell me about this topic.”
Crisp: “Before answering, list the assumptions you are making about my request. Ask up to three clarifying questions. Then provide your answer based on those assumptions.”
If you would like more prompting basics, OpenAI’s Help Center has a library of starter examples.
Building Bridges
I am considering downloadable prompt sets for hobbies like gardening, genealogy, and budgeting. Not as a gimmick, but as scaffolding for people who feel technically shy. Sometimes the right wording is the key that unlocks a better answer.
This bridge between reflection and practicality is what interests me. I want readers to understand the why, and I want them to walk away with tools they can use. Our conversations with people thrive on nuance and shared understanding. Our conversations with machines thrive on clarity.
To work well with AI, we do not need to lose our humanity. The key is in the clarification. By practicing the art of plain language AI prompts, we become more intentional and more precise. And perhaps, by doing so, we also sharpen our ability to say what we mean, not just to machines, but to each other. That, in the end, is what strengthens connection.

