“Human in the loop” (HITL) is the most overused phrase in AI today. It’s also the most misunderstood.
In many workflows, the human has become a mechanical bottleneck – a “rubber stamp” meant to click “Approve” or “Next” without truly engaging. This isn’t just a waste of talent; it’s a recipe for mediocrity.
In 2026, we don’t just need a human in the loop. We need an Expert Human in the Loop.
The difference?
• HITL (Mechanical): Checking for typos or formatting. Approving output because it “looks right.”
• EHITL (Expert): Challenging the AI’s logic. Applying domain-specific nuance. Spotting the subtle hallucinations that only a pro with 10+ years of experience can see.
AI can give us the 80% in seconds. But that final 20% – the part that actually moves the needle – requires us to apply our expertise to the AI, not just after it.
Don’t just check the AI’s homework. Teach it how to think.
It is completely normal to feel overwhelmed by the sheer velocity of AI.
Every week brings a new model, a new feature, or a headline declaring that everything is about to change. When the landscape shifts this fast, figuring out where to begin can feel paralyzing.
Billionaire investor Howard Marks famously wrote a memo in 2001 titled: “𝗬𝗼𝘂 𝗖𝗮𝗻’𝘁 𝗣𝗿𝗲𝗱𝗶𝗰𝘁. 𝗬𝗼𝘂 𝗖𝗮𝗻 𝗣𝗿𝗲𝗽𝗮𝗿𝗲.” While he was talking about financial markets, this philosophy is the ultimate playbook for navigating the AI revolution. Here is how to apply that mindset to get yourself ready for what is next:
𝗬𝗼𝘂 𝗖𝗮𝗻𝗻𝗼𝘁 𝗣𝗿𝗲𝗱𝗶𝗰𝘁
If you try to predict exactly where AI will be in three years, you will exhaust yourself. Will AI replace software engineers or make them 10x more productive? Which AI company will dominate? What specific jobs will disappear?
The truth is, no one knows. If you tie your career strategy to a specific prediction, you are building your house on sand.
𝗬𝗼𝘂 𝗖𝗮𝗻 𝗣𝗿𝗲𝗽𝗮𝗿𝗲
Preparation doesn’t mean knowing the future; it means building the resilience and adaptability to thrive no matter what the future looks like.
This brings us to the most crucial shift in how you should approach your AI education: 𝗠𝗮𝘀𝘁𝗲𝗿 𝘁𝗵𝗲 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝗻𝗼𝘁 𝘁𝗵𝗲 𝘁𝗼𝗼𝗹.
Tools are fleeting. Their interfaces change, and eventually, they get replaced. Capabilities – knowing 𝘩𝘰𝘸 and 𝘸𝘩𝘺 to apply technology to solve a problem – last a lifetime.
Think of it like photography. Mastering a tool means memorizing the menus on a specific 2026 high-tech camera. Mastering the capability means understanding lighting, composition, and human emotion. The camera will be obsolete in three years; the eye for a great photograph lasts a lifetime.
Don’t just memorize where to click. Instead, master the underlying skills that make AI useful:
Problem Decomposition: AI struggles with massive, vague goals but excels at small, defined tasks. Learn to break big projects into bite-sized pieces an AI can actually execute.
Critical Evaluation (Taste): AI generates infinite content. The premium skill is no longer creation; it is editing – spotting errors, biases, and mediocrity.
Context Building: Models only know what you tell them. Master the ability to clearly articulate the specific constraints and goals of your problem.
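The three skills above can be sketched in code. This is a minimal, illustrative sketch only: `call_model` is a hypothetical stand-in for whatever LLM API you use, and the tasks, context, and “review” rule are invented examples, not a prescribed method.

```python
def call_model(prompt: str) -> str:
    """Hypothetical LLM call; stubbed here so the sketch runs."""
    return f"[draft for: {prompt[:40]}...]"

# 1. Problem decomposition: one vague goal -> small, defined tasks.
vague_goal = "Launch a newsletter"
subtasks = [
    "List 5 newsletter topics for mid-career data engineers",
    "Draft a 3-sentence welcome email in a friendly tone",
    "Write 10 subject-line options under 45 characters",
]

# 2. Context building: every task carries explicit constraints and goals.
context = "Audience: mid-career data engineers. Voice: practical, no hype."
drafts = [call_model(f"{context}\nTask: {task}") for task in subtasks]

# 3. Critical evaluation: a human (or at least a checklist) reviews
#    every draft before anything ships. The banned-word filter is a
#    crude placeholder for real editorial judgment.
def passes_review(draft: str) -> bool:
    banned = ["revolutionary", "game-changing"]
    return not any(word in draft.lower() for word in banned)

approved = [d for d in drafts if passes_review(d)]
print(f"{len(approved)}/{len(drafts)} drafts passed review")
```

The point of the sketch is the shape, not the specifics: decompose first, attach context to every task, and keep a review gate between the model and the final output.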
𝗛𝗼𝘄 𝘁𝗼 𝗦𝘁𝗮𝗿𝘁 𝗧𝗼𝗱𝗮𝘆
Pick one friction point: Don’t try to automate your whole life. Pick one annoying, repetitive weekly task to experiment with.
Experiment with curiosity: Treat AI like a brilliant but naive intern. When it fails, figure out why, adjust your instructions, and try again.
Double down on human skills: Empathy, strategic vision, and relationship-building are things AI cannot replicate.
You cannot predict what the AI landscape will look like tomorrow, but by focusing on timeless skills and daily experimentation, you can ensure you are ready for it.
I’d love to hear from you: What is ONE repetitive task you are trying to use AI for this week? Let me know in the comments!
We are incredibly lucky to have two New Years. The second usually serves as a grace period – a chance to review the resolutions we set on January 1st and tweak them if they aren’t working out by the time the Lunar New Year arrives.
Usually, it’s just a status update. A minor pivot.
𝗕𝘂𝘁 𝟮𝟬𝟮𝟲 𝗶𝘀 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁.
We are just 6 weeks into the year, and the landscape hasn’t just shifted; it has completely transformed. This isn’t the time for an “update.” It is time for a total 𝗿𝗲𝘄𝗿𝗶𝘁𝗲.
The pace of Agentic AI advancement in these last few weeks has been unprecedented:
• 𝗖𝗼𝗱𝗶𝗻𝗴 & 𝗗𝗲𝘃: We now have multiple autonomous models handling complex architecture.
• 𝗖𝗿𝗲𝗮𝘁𝗶𝘃𝗲 𝗦𝘂𝗶𝘁𝗲: Video and music generation have leaped forward in fidelity and control.
• 𝗧𝗵𝗲 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗘𝘅𝗽𝗹𝗼𝘀𝗶𝗼𝗻: Powerful agentic platforms are appearing seemingly out of nowhere, automating workflows we thought required human oversight just two months ago.
If your strategy relies on how things worked in December 2025, you are already behind.
𝗠𝘆 𝘀𝘂𝗴𝗴𝗲𝘀𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝘆𝗼𝘂𝗿 𝗟𝘂𝗻𝗮𝗿 𝗡𝗲𝘄 𝗬𝗲𝗮𝗿 𝗥𝗲𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻:
Stop trying to fit AI into your old boxes. Don’t just “add AI” to your current Standard Operating Procedures (SOPs).
𝗥𝗲𝘁𝗵𝗶𝗻𝗸 𝘁𝗵𝗲 𝘄𝗮𝘆 𝘆𝗼𝘂 𝘄𝗼𝗿𝗸 𝗲𝗻𝘁𝗶𝗿𝗲𝗹𝘆.
Look at your processes and redesign them from the ground up with an “AI-First” mindset. Ask yourself: If I were building this workflow today, with today’s agents, what would it look like?
The window for adaptation is shrinking.
𝗔𝗜 𝘄𝗮𝗶𝘁𝘀 𝗳𝗼𝗿 𝗻𝗼 𝗼𝗻𝗲.
#AI #AgenticAI #FutureOfWork #LunarNewYear #2026Trends #Innovation #TechResolutions
Open LinkedIn right now, and it feels like a firehose of information. New tools, new models, new agents, new frameworks. It seems like AI is moving at breakneck speed, and if you blink, you’re already behind.
I’ll be honest: sometimes it feels impossible to catch up, let alone “master” it all.
But then I remembered a quote by E.L. Doctorow (originally about writing, but perfect for the AI era):
“It’s like driving a car at night. You can see only as far as your headlights, but you can make the whole trip that way.”
You don’t need a map of the entire territory. You don’t need to understand every layer of the neural net or every new tool released this morning.
Stop trying to master everything. Start by mastering something.
You just need to see as far as your headlights:
Learn one new prompt technique today.
Test one new tool this week.
Read one paper that interests you.
You can navigate the entire AI revolution just by focusing on the few meters of road right in front of you. Just keep driving.
What is the “one thing” you are focusing on learning this week?
We are rapidly hitting the natural ceiling of the “chatbot era.”
For the last three years, the dominant AI strategy has been refining human-to-machine conversation. We’ve built better models and smoother interfaces, essentially trying to make software “talk” like a human. We have automated the retrieval and summarization of information.
But as we look toward 2026, simply making AI talk better won’t generate competitive advantage. To understand the necessary strategic pivot, consider this insight from Paulo Coelho:
For centuries, humans failed to fly because we were obsessed with flapping wings. We thought flight meant mimicking nature. We only succeeded when we stopped imitating birds and built fixed wings and engines – mechanisms that looked different but achieved the outcome of flight far more effectively.
Right now, many enterprises are still flapping their wings.
We are forcing complex business processes into chat interfaces. While useful for triage, the chat box is a bottleneck for true cognitive work. It mimics human interaction, rather than leveraging machine speed and scale.
𝟐𝟎𝟐𝟔: 𝐓𝐡𝐞 𝐄𝐫𝐚 𝐨𝐟 𝐭𝐡𝐞 𝐀𝐠𝐞𝐧𝐭
If the current phase is imitation, 2026 is about flight. The strategic advantage will shift from Conversational Bots to Autonomous AI Agents.
The difference is the move from imitation to agency:
🔹 𝐓𝐡𝐞 𝟐𝟎𝟐𝟒 𝐂𝐡𝐚𝐭𝐛𝐨𝐭 tells you the steps required to fix a supply chain disruption. It chats with you about the problem.
🔹 𝐓𝐡𝐞 𝟐𝟎𝟐𝟔 𝐀𝐠𝐞𝐧𝐭 detects the delay, re-routes the shipment, updates the ERP, and simply asks for your final sign-off.
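The 2026 agent’s loop above – detect, act, then ask for sign-off – can be sketched as a few lines of code. Everything here is a hypothetical placeholder (there is no real ERP or logistics API behind these function names); it only illustrates the pattern of the agent executing the work and reserving the final decision for a human.

```python
# Detect: find shipments delayed past a threshold.
def detect_disruption(shipments):
    return [s for s in shipments if s["delay_hours"] > 24]

# Act: re-route the shipment (placeholder logic).
def reroute(shipment):
    shipment["route"] = "alternate"
    return shipment

# Act: record the change in the (stand-in) ERP system.
def update_erp(shipment, log):
    log.append(f"ERP updated: {shipment['id']} -> {shipment['route']}")

# The agent loop: execute the fix, then queue it for human sign-off
# instead of merely describing the steps in a chat window.
def run_agent(shipments):
    log, pending_signoff = [], []
    for s in detect_disruption(shipments):
        update_erp(reroute(s), log)
        pending_signoff.append(s["id"])
    return log, pending_signoff

shipments = [
    {"id": "SH-01", "delay_hours": 36, "route": "primary"},
    {"id": "SH-02", "delay_hours": 2, "route": "primary"},
]
log, signoffs = run_agent(shipments)
print(signoffs)  # only the delayed shipment reaches human sign-off
```

Note where the human sits: not inside the loop doing the work, but at the end, approving outcomes – the same shift from conversation to orchestration described above.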
The future isn’t a better chat interface; it is cognitive orchestration.
𝐓𝐡𝐞 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐜 𝐌𝐚𝐧𝐝𝐚𝐭𝐞
To prepare for 2026, stop asking, “How can we add a chat interface to this workflow?”
Start asking, “If we stop imitating human conversation, what outcomes can we empower an agent to execute autonomously?”