Think With AI: Preserving Human Intelligence
- Scott Madenburg

- Mar 17
- 11 min read
The "Pass/Fail" Illusion: An Introduction
Think of a modern classroom. A student sits at their laptop, staring at a blank screen. They have to turn in an essay by midnight. Instead of doing research, they open an AI chatbot, type in the prompt, and in a matter of seconds, a well-structured essay appears.

The school, however, has AI detection software, which is a problem. They will get caught if the student turns in the essay as is. So, they take the text that the AI wrote and put it into another AI tool that is meant to "humanize" the writing by adding just the right amount of mistakes and different sentence structures to trick the detector. The student hits "submit," the software says it's okay, and the teacher gives it a passing grade.
This looks like a success story on paper. The student finished the assignment and solved the problem. But in reality, a quiet tragedy just happened: the student didn't learn anything at all. They skipped the hard part of research. They didn't have to do the hard work of putting their thoughts in order.
I just talked to a graduate student who went through this exact thing. She said that this AI-bypass method was common when she was in college. Most of the students would easily get good grades on their homework with the help of AI. The big problem? Those same students barely passed their midterms and finals. The students never really learned the material because the AI did all the work.
But now that she's in graduate school, everything has changed. Her professors still want her to use AI on her homework, but in the opposite way. She can't just let the AI do the work anymore; she has to be the "human in the loop." Her job now is to generate the AI output, then critically evaluate it, verify the research, and fix any gaps it may have. The graduate program makes her check facts and argue with the AI, which teaches her the core material and the tool at the same time.
From the Classroom to the Meeting Room
Now, let's leave the classroom and go to work.
How often do we act like those college students? We are busy, stressed out, and facing tight deadlines. It is very tempting to use AI to quickly make a market report, write an important email, or summarize a long contract. We copy, paste, hit send, and check the task off our list. We go through the "detector" at work.
But when we rely on AI instead of using it as a tool, our brains start to lose their strength. We might lose the human skills that make us valuable in the first place, like our intuition, our healthy skepticism, and our ability to connect dots that don't seem to be related. AI hallucinations get through the cracks, generic strategies take the place of creative thinking, and we slowly go from being the pilots of our careers to just passengers.
The challenge of this decade for managers, business owners, investors, and everyday professionals isn't just figuring out how to use AI to get more done. It's learning how to move from an "undergraduate" way of thinking to a "graduate" way of thinking: using AI responsibly while keeping our critical thinking and cognitive skills sharper than ever.
We don't have to get rid of the machine. We need to elevate the human. Here is how we do it.
Part 1: The "AI Crutch" vs. the "AI Co-Pilot"
Are we losing our mental map because of GPS?
To get a sense of what happens when we over-rely on AI, think about the last time you drove somewhere new. You probably typed the address into your phone's GPS, put your car in drive, and followed the voice commands. You probably made it there safely, but could you draw a map of the route you just took if someone asked you to? Not likely.
This is what psychologists call "cognitive offloading." When we let the GPS do the navigating, our brains stop being aware of where we are in space. We stop paying attention to landmarks, lose our sense of direction, and if the GPS suddenly goes wrong and tells us to turn left into a lake, a lot of us will start steering toward the water before we realize what we've done wrong.
Using AI at work can cause the same thing to happen. It's very easy to let an AI do the work when we ask it to write a quarterly report, put together a competitor analysis, or write a complicated email to a client. We let the AI do the hard work and our brains go on autopilot. We are just passengers in our own jobs.
The Risk of Autopilot: Hallucinations and the Self-Assured Liar
The issue with autopilot is that AI, despite its apparent intelligence, is essentially a pattern-matching machine. It doesn't know what the words it generates mean; it predicts them. This is why AI can lie with total confidence. It will "hallucinate" numbers, invent legal precedents, and state things that are flatly wrong with the utmost authority.
You will blindly believe these hallucinations if your cognitive skills have gone dormant and you can't read and question what you see anymore. You are the one who makes the left turn into the lake, taking your team, clients, and business with you.
The Co-Pilot Mindset: Why Your Judgment Is Your Real Value
So, what else can we do? The answer is a mindset shift: from crutch to co-pilot.
This is something every professional who uses AI today needs to keep in mind: Your ability to write is not what makes you valuable to your company. Your judgment is what makes you valuable. Imagine AI as a smart, quick, and very naive intern. This intern will bring you a well-organized 10-page report on a market trend in three seconds if you ask them to. But that intern doesn't know how your company works. They don't remember the tone your CEO likes in executive summaries. They don't know that the person you're emailing had a bad experience three years ago and needs to be handled with care.
You are the only one who has that lived experience, gut feeling, and emotional intelligence.

You don't take AI's first draft as the final answer when you use it as a co-pilot. You use it as a strong starting point. You take the raw material it gives you and question it, shape it, check the facts, and add the subtlety that only a person can. You keep your mind sharp by actively engaging with the information instead of just passing it along.
If you let a machine do your thinking for you, you also outsource your value. But if you combine the speed of AI with a human mind's capacity for critical thought, nothing can stop you.
Part 2: The New Cognitive Skillset
How to Use AI to Think
Okay, we agree that we need to be co-pilots. But what does that actually look like on a Tuesday afternoon, when you have three deadlines and an overflowing inbox? How can we use AI without letting our minds get dull?
We need to deliberately change how we work. We need to stop treating AI as an "answer machine" and start treating it as a "reasoning engine." It should make us think harder, not less. To do that, professionals need to develop three specific cognitive skills.
1. Reading Like a Journalist: The Art of the Interrogation
When you ask AI a question, it will give you an answer that sounds smart, clear, and completely correct. Your first thought will be to quickly read it, nod, and copy and paste it. Don't. You need to take on the role of a skeptical investigative journalist instead. Don't think of what the AI gives you as a finished manuscript; think of it as a tip from an informant. You should still check it out, even if it sounds like a good idea.
Find the logic gaps: Does the AI's conclusion make sense based on the information it gave?
Look for the "confident lie": Did it make up a number or misread a law?
Play Devil's Advocate: Tell the AI to make a case against the point it just made.
When you actively question the text, you are doing mental weightlifting. You are using your analytical reasoning, your ability to check facts, and your healthy skepticism. You aren't just reading; you're thinking about what you're reading.
2. Strategic Prompting as a Way to Define a Problem
A lot of people think that getting good results from AI is about knowing the right "magic words" or secret cheat codes. It isn't. Good prompting is just critical thinking.
It's like being a director of a movie. If a director just tells an actor to "act sad," the performance will be flat and boring. But if the director tells the actor about the character's past, the tension in the room, and the small emotional change they want the audience to feel, the actor can give a great performance.
You need to do a lot of thinking before you can give AI a good prompt. You need to be clear about what problem you are trying to solve. You need to figure out who your target audience is, what the limits are, and what variables are missing. If you don't know what you're talking about, your prompt will be vague, and the AI's output will be useless. One of the best ways to keep your problem-solving skills sharp is to come up with a very specific, strategic prompt.
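The movie-director analogy can be made concrete. Below is a minimal Python sketch of treating a prompt as a problem definition; the `build_prompt` helper and its fields (task, audience, constraints, missing information) are illustrative assumptions, not a standard template or API.

```python
# A sketch of "prompting as problem definition": the prompt is assembled
# from explicit answers to the questions you should ask yourself first.

def build_prompt(task, audience, constraints, missing_info):
    """Assemble a strategic prompt from explicit problem-definition parts."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append("Before answering, list any information you still need:")
    lines += [f"- {m}" for m in missing_info]
    return "\n".join(lines)

# A vague prompt leaves every decision to the AI.
vague = "Write a market report."

# A strategic prompt forces you to define the problem yourself.
strategic = build_prompt(
    task="Summarize Q3 trends in the mid-market CRM segment",
    audience="A risk-averse executive team",
    constraints=["One page", "Cite every figure", "Flag low-confidence claims"],
    missing_info=["Which competitors matter most", "Internal sales data"],
)
print(strategic)
```

Comparing the vague prompt with the strategic one makes the point: the second forces you to articulate the problem, the audience, and the gaps before the AI ever sees it.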
3. The "Human Polish" (Putting It All Together and Putting It in Context)
AI is great at giving you raw data, making sense of complicated information, and coming up with lists of ideas. But until a person gives it meaning, raw data is just noise.
You could use AI to make a summary of a 50-page analysis of a competitor. The AI can tell you the main things your competitor is doing. But the AI can't tell you how that information will affect your Q3 strategy in particular. It doesn't know that your investors are very risk-averse right now or that your marketing team is short-staffed.
Using the "Human Polish" means taking the AI's generic output and fitting it into the messy, complicated world of your business. You need emotional intelligence, the ability to see how things fit together, and the ability to connect dots that don't seem to be related. The highest level of human intelligence is the ability to take separate facts and turn them into useful, context-aware knowledge. No algorithm can copy it.
Part 3: The Manager's Playbook: How to Coach the Human in the Loop
You are at a unique crossroads if you are in charge of a team. You are under a lot of pressure to get more done and be more productive, which makes AI a very appealing tool. But you also know that if your team stops thinking, the quality of your product, service, or strategy will eventually go down.
So, how do you lead a team in the age of AI? How do you get them to use these tools to be more productive without losing the critical thinking skills that make them valuable?
It begins with a complete overhaul of how we evaluate and measure work.
The "Show Your Work" Era: From Policing to Guiding
Do you remember the student we talked about at the beginning? Teachers who rely on AI detection software are playing a game of whack-a-mole they can't win. Managers who keep asking "Did AI write this?" are making the same mistake. It creates a culture of hiding, where workers secretly use AI (and use it badly) just to get the job done faster.
Progressive managers should not police the tool; they should guide the process.
Remember math class in middle school? Your teacher wanted you to show your work, not just give the answer. They knew that if you could explain how you got to the answer, you really understood the idea. This is the exact way that managers should think. When an employee gives you a report, a marketing plan, or a block of code, ask them about the process instead of the result:
"This is a great summary. What questions did you ask the AI to get it to focus on these metrics?"
"I see that the AI suggested this strategy. How did you check the data it used to make this decision?"
"What did the AI leave out of its first draft that you had to go in and fix?"
When you ask these questions, you put the employee's brain back in the driver's seat. You are saying, "I want you to use the tool, but I want you to be its master, not the other way around."
Creating a Culture of Productive Conflict
AI is built to remove friction. It cuts three hours of hard research down to three seconds. That sounds great for productivity, but psychologists and educators will let you in on a secret: people learn best when they struggle. We build cognitive muscle by wrestling with a problem, hitting a wall, and then finding a way around it.
When AI removes all the friction, the brain checks out. As a manager, it's your job to deliberately reintroduce constructive friction into the workflow.
Think of yourself as a personal trainer for your team's brains. You wouldn't hand a client an empty plastic bar and call it a workout; you give them resistance so they get stronger. In the workplace, that means encouraging debate about AI-generated results.
Host "Tear-Down" Sessions: Have your team come up with a plan using AI, and then spend 20 minutes in a meeting actively tearing it apart. What are its weak points? What would make this not work in real life?
Reward the Catch: Publicly praise employees who spot an AI hallucination or fix a logical leap the software made. Celebrate skepticism.
Ask for the "Why": Don't just believe what an AI says. Have your team defend the AI's reasoning as if it were their own.
If you create a culture that values how people think as much as what they produce, your employees won't lean on AI as a crutch. They will use it as a powerful engine, but they will always keep their hands on the wheel.
Sharpening the Saw
There is a well-known story about a lumberjack who has been cutting down trees for days with a dull axe. He is tired, sweaty, and not making much progress. Someone walking by sees him having trouble and asks, "Why don't you take a break and sharpen your axe?" The lumberjack barely stops to catch his breath and says, "I can't stop! I'm too busy chopping!"
Knowledge workers have been that lumberjack for a long time. We've been stuck because we have so many boring tasks to do, like reading endless reports, formatting emails, organizing data, and writing standard communications. We have been using a dull axe to keep up.
Generative AI is like someone walking up and handing us a brand-new chainsaw. It moves at an incredible speed. It strips away the tiring, boring friction of our daily work.
But here's the problem: a powerful tool in the hands of someone who isn't paying attention is a liability. If you don't know which tree to cut down, why you're cutting it, or how to use the chainsaw safely, it won't help you build a house. It will just make things messier faster than before.
The arrival of AI at work doesn't mean we no longer have to think. It means our thinking gets promoted to higher-value work.
We can't say we're "too busy chopping" anymore because AI is doing the hard work of collecting data and writing first drafts. We now have the time and energy to sharpen the saw. We can finally use our brains to do the high-level, uniquely human tasks we were hired to do. We can focus on making tough decisions, building relationships, and having a lot of empathy.
The mandate is the same for everyone, whether you're a manager making company policy or a daily user trying to find your way around your inbox. Don't use AI to skip the learning process. Don't let it make you a passenger in your own career. Use it to test your logic by sparring with it. Use it as a co-pilot to speed up your work. But always, always keep your hands on the wheel.
The future of work has no place for people who blindly copy and paste from a chatbot. It belongs to those who use AI to amplify their own intelligence, the kind that is very real and irreplaceable.
Let's get started.




