Two Years With AI: How It Changed My Work, My Thinking—and What Comes Next
Edward Wong is the Manager of Research and Community at Ibbaka.
See his LinkedIn profile.
It’s hard to believe that it’s been roughly two years since ChatGPT went mainstream. In that time, AI has changed how I work, how I think, and how I make decisions. It also carries real risks: small mistakes made by models are easy to miss and can lead to larger problems down the line if not caught. This blog post is a personal reflection on working with AI every day, what I’ve learned, and why thoughtful adoption is now a strategic necessity.
How AI Changed My Daily Work
When LLMs first arrived, I used them for almost everything: questions I’d normally search on the web, drafts I’d refine, problems I’d troubleshoot. The appeal was simple: complex topics suddenly felt clearer, research felt less tedious, and blank pages turned into structured outlines. The timeframe for repetitive tasks shrank from hours to minutes.
Here’s how my approach to everyday tasks shifted:
Search to summary: Instead of starting with web searches and clicking through multiple links, I begin by asking AI for a short summary to frame the topic. Then I verify the important points and sources.
Drafting to iterating: Instead of writing from scratch, I start by giving the AI context, then ask for an outline of how to approach the task or even a full draft, and then I refine: add my judgment, insert additional context and data, and tailor the tone for the audience.
Analysis to dialogue: Instead of analyzing alone, I treat analysis as a back-and-forth. I ask the model to challenge its own answers, surface counterarguments or clarifying follow-up questions, and show alternative options for me to choose from.
The time saved is huge, but the larger gain is being able to focus my effort elsewhere: more of my energy goes to defining the problem and providing detailed context, less to busywork, and more to the tasks that can’t yet be handled by AI.
The Dangers: When AI Makes You Feel Smart (But Isn’t)
Two things can go wrong:
Confident Errors (Hallucinations): AI can produce answers that look polished and read as confident, but are simply wrong. This can include invented sources, misquoted research, or details stitched together that “sound right” but aren’t real. These are referred to as “hallucinations.” Full confidence in an AI is the trap: it can make you feel right even when you are not.
Losing judgment over time: Relying on AI too heavily can dull good habits, such as checking sources and comparing alternatives. Small errors begin to slip through, stack up, and can distort the direction of your decisions.
Here are a few guardrails that I like to build into my processes:
Ask: “How confident are you in this output, and what reasoning supports it?”
Ask the model to refute itself or provide counter-examples. If the model can’t surface its assumptions, I don’t trust the answer.
Validate claims with primary sources.
Separate “shaping” from “deciding.” Meaning, use AI to shape and explore options, but not to settle on a concrete direction (that should require human input).
If the model can’t explain its reasoning or show what assumptions it made, be cautious and don’t take its answers at face value. Even when the model does explain itself, remember that the information might still not be completely accurate, so always double-check and use your own judgment before relying on the output. A small sketch of how I fold these checks into a prompt follows below.
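Here is a minimal sketch of these guardrails in code. It assumes a hypothetical call_llm helper that stands in for whatever chat-completion client you use; the structure of the prompts, not the specific API, is the point.

```python
# Hypothetical helper standing in for whatever chat-completion client you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM client of choice.")


def verified_answer(question: str) -> str:
    """Ask for an answer, then ask the model to audit that answer."""
    # First pass: ask for the answer plus confidence, assumptions, and sources.
    draft = call_llm(
        f"{question}\n\n"
        "State your confidence (low/medium/high), list the assumptions you are "
        "making, and name the sources a human should verify."
    )
    # Second pass: ask the model to argue against its own draft.
    critique = call_llm(
        "Act as a critic of the following answer. Surface counter-arguments, "
        "flag unsupported claims, and propose alternative options.\n\n"
        f"Answer to critique:\n{draft}"
    )
    # The human still decides: the draft and critique are inputs, not a verdict.
    return f"DRAFT:\n{draft}\n\nCRITIQUE:\n{critique}"
```

The design choice is simply that the critique step is built into the call itself, so self-refutation can’t be skipped when I’m in a hurry.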
How AI Changed My Thinking
AI changed not just what I do, but how I approach problems. I don’t just ask quick questions anymore—I build prompts with a clear goal, audience, and limits in mind. Now, every time I use AI, I treat it like setting up a plan: What am I trying to solve? Who is this for? What counts as success?
From Tools to Agents: Where This Is Going
At Ibbaka we are moving from “AI as tool” to “AI as teammate”—agents with goals and the ability to act across systems. In pricing and customer value work, that looks like:
Customer-specific agents that surface value drivers and economic value (EVE) by account
Collaborative workflows where agents propose options and humans select trade-offs
The shift isn’t about replacing experts. It’s about freeing experts from drudgery so they can spend more time designing value, telling the value story, and aligning stakeholders.
Will Adopting AI Be Crucial?
Yes—with one condition. AI only delivers results when it’s guided by clear goals and tied directly to your customer’s real needs.
From my experience and what we’ve seen at Ibbaka and valueIQ, teams get real benefit by following a few simple steps:
Start with a value model—not just price points—so every decision links back to what customers actually care about.
Build AI agents that focus on specific problems and individual customers, not just broad industry categories.
Keep critical decisions in human hands: strategy, ethics, and storytelling require judgment.
Track and learn: measure outcomes, improve processes, and close the loop with every deal.
These patterns don’t just help pricing teams—they’re relevant to anyone using AI to improve work, clarify decisions, or deliver value.
Practical Ways to Use AI Today (Without Losing the Plot)
Use LLMs (Large Language Models) to draft, then verify with primary data, especially for high-stakes content.
Treat prompts like briefs: purpose, audience, constraints, evidence required.
Build “devil’s advocate” habits: ask the model to critique its own answer and propose alternatives.
Bake governance into the workflow: define where agents act, where humans approve, and where data must be cited and reasoning explained (a small sketch of the brief and approval habits follows this list).
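As a small illustration of the “prompt as brief” and governance points above, here is a sketch in Python. The field names, the approval rule, and the example brief are my own assumptions rather than a prescribed format.

```python
from dataclasses import dataclass, field


@dataclass
class PromptBrief:
    """Treat a prompt like a brief: purpose, audience, constraints, evidence."""
    purpose: str
    audience: str
    constraints: list[str] = field(default_factory=list)
    evidence_required: bool = True
    high_stakes: bool = False  # governance flag: high-stakes output needs sign-off

    def to_prompt(self) -> str:
        parts = [
            f"Purpose: {self.purpose}",
            f"Audience: {self.audience}",
            "Constraints: " + "; ".join(self.constraints or ["none"]),
        ]
        if self.evidence_required:
            parts.append(
                "Cite the sources behind every factual claim and state your assumptions."
            )
        return "\n".join(parts)


def needs_human_approval(brief: PromptBrief) -> bool:
    """A simple governance gate: a human approves anything high-stakes or uncited."""
    return brief.high_stakes or not brief.evidence_required


# Example: customer-facing pricing copy is high-stakes, so it is routed to a
# human reviewer before anything ships.
brief = PromptBrief(
    purpose="Draft an FAQ entry explaining our usage-based pricing tiers",
    audience="Prospective customers evaluating plans",
    constraints=["under 150 words", "no unreleased features"],
    high_stakes=True,
)
print(brief.to_prompt())
print("Route to human reviewer:", needs_human_approval(brief))
```

The point of the brief is to force purpose, audience, constraints, and evidence to be stated up front; the approval gate makes the human checkpoint explicit rather than optional.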
A Personal Closing Thought
If you’re thinking about using AI, be thoughtful: design workflows so agents handle the bulk of busywork, and let human judgment own the decisions that shape strategy and relationships. That’s how you get the most out of this technology.
Adopting AI is no longer optional, but doing it with care and intention is essential. Find just one key workflow where an agent could help with the repetitive 60%, and decide where you need to step in personally. That’s the partnership worth building.