I can tell the future
Most outcomes are predictable. The hard part is being honest about what you see.
I don't mean crystal balls or visions. And it's not because I'm special.
I mean something both less dramatic and far more useful: I can walk into a situation, look at how things actually stand right now, and predict the outcome.
The trick is simple: define what success requires, then compare it to what’s actually true.
If you lead, architect, or deliver complex change, this is the skill that keeps “reasonable plans” from turning into expensive surprises.
It's a skill I've developed over years of building systems, untangling messy projects, and watching teams repeat the same patterns. And it's something I use daily in my role as a Technical Architect, where the real work isn't writing code—it's planning and shaping decisions early enough that the code has a chance to matter.
The future is usually baked in
Most project failures don't come out of nowhere. They're usually locked in early, hiding in plain sight:
- The timeline assumes everything goes right.
- The scope is fuzzy, but everyone is already "aligned."
- The data quality problem is "phase two."
- The integration is "straightforward."
- The business process will "adapt."
- The team will "figure it out as we go."
That last one is my favorite, because it’s sometimes true. But when it’s not, it becomes the most expensive sentence in the entire project.
When you hear enough of these signals, you don't need to guess the ending. You can see it.
What I actually do when I "predict the future"
When I say I can tell the future, what I’m really saying is:
I can model success as a set of requirements, then compare that model to reality.
That’s it. No magic. Just disciplined pattern recognition.
Here’s what it looks like in practice (a real-shaped example):
- Goal: “AI-powered insights” in Q3.
- Success requires: governed definitions, clean event + account data, and a feedback loop with the business.
- Current reality: definitions vary by team, data owners aren’t clear, and exceptions are handled ad hoc.
- The gap predicts: late surprises, rushed data patches, and a dashboard that people don’t trust.
- The intervention: lock definitions + owners first, narrow the first insight to one use case, and set a decision cadence.
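The comparison above is mechanical enough to sketch in code. This is a toy illustration, not a real tool: the requirement names are made up, and `readiness_gap` is just a set difference. The point is that the prediction falls out of listing what must be true and subtracting what already is.

```python
# Toy sketch of "compare success requirements to reality".
# All requirement names are illustrative, not from any real project.

SUCCESS_REQUIRES = {
    "governed definitions",
    "clean event and account data",
    "business feedback loop",
}

CURRENT_REALITY = {
    "business feedback loop",  # the only prerequisite already in place
}

def readiness_gap(required: set[str], actual: set[str]) -> set[str]:
    """Return the prerequisites that are not yet true."""
    return required - actual

if __name__ == "__main__":
    for item in sorted(readiness_gap(SUCCESS_REQUIRES, CURRENT_REALITY)):
        print(f"At risk until true: {item}")
```

Everything still unmet when delivery starts is where the "late surprises" come from, so the intervention is simply to shrink that set before writing code.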
In practice, my brain runs a quick checklist:
What does success require?
Not "what do we want," but what must be true for this to work?
Success has dependencies. It has constraints. It has invisible prerequisites.
For example:
- If the goal is "AI-powered insights," success requires clean, consistent, well-governed data.
- If the goal is "single customer view," success requires identity strategy, matching rules, and a plan for exceptions.
- If the goal is "fast delivery," success requires strong engineering processes, clear roles and responsibilities, well-documented scope boundaries, empowered decision-makers, and a team that can focus without constantly shifting priorities.
What's true right now?
This is where untested optimism gets exposed.
Not because people are dishonest, but because teams naturally describe their environment the way they wish it worked.
So I look for the real indicators:
- Is everyone aligned on the priorities?
- How are decisions made?
- How many systems are involved?
- Who owns data definitions?
- What happens when requirements change?
- How much of the work depends on "one person who knows the thing"?
- How often do stakeholders disagree—and what happens when they do?
Where's the gap?
Now we’re in prediction territory.
When the success requirements and the current reality don't match, the future isn't mysterious. The gap becomes the story.
If the gap is small, the project will probably be fine.
If the gap is large, the project will still "progress," but it will do so in a predictable direction:
- Delays become inevitable.
- Quality erodes.
- Teams start making irreversible shortcuts.
- People start blaming tools.
- Leadership starts asking for a new plan every two weeks.
- Eventually, everyone ends up tired, confused, and frustrated—while the core issue remains untouched.
That's the future I can see.
Why architecture is really "early problem detection"
People sometimes think technical architecture is about choosing technologies, drawing diagrams, or enforcing standards.
Those things matter. But architecture is really about reducing regret.
It's about noticing the mismatch between ambition and readiness while there's still time to fix it cheaply.
Because once you're six months in, "fix it" doesn't mean "adjust." It means rework, change management, data cleanup, migrations, political negotiations, and a lot of meetings where everyone insists this was not what they agreed to.
The earlier you spot the gap, the more options you have.
And options are what keep projects from becoming traps.
The signs I look for (the future tells on itself)
Here are a few patterns that reliably predict trouble—regardless of platform, team, or industry:
Success is defined as a feature list, not an outcome. If nobody can describe how the business will operate differently, the project will drift until it hits a deadline.
Everyone agrees too quickly. Real alignment takes time. Instant agreement often means people are avoiding conflict—or haven't understood the implications yet.
Data is treated as an implementation detail. It's not. Data is the product, whether you admit it or not.
"Phase two" contains all the hard parts. If the critical dependencies are deferred, they don't disappear. They come back later with interest.
There's no plan for adoption. If you build something and nobody changes how they work, the system becomes shelfware with a login page.
The organization can't make decisions at the speed the project requires. A fast-moving delivery team inside a slow-moving decision structure will always look "inefficient," even when they're doing great work.
If you've seen enough of these, you start to realize: outcomes repeat.
So what do I do with this ability?
The point isn't to be right. The point is to intervene.
Prediction is only useful if it leads to better choices.
When I see the likely future, I try to do three things:
Name the gap clearly. Not in vague terms. Not in blame language. In plain reality.
Make the hidden costs visible. People will keep choosing the default path until they understand what it will cost them.
Offer options that change the trajectory. Not perfection—just a better direction. A safer architecture. A clearer scope boundary. A real data strategy. A decision-making mechanism that works.
Small changes early beat heroic fixes later.
The real advantage
If there’s an advantage here, it’s not prediction.
It's the ability to look at a situation without comforting stories, and still stay optimistic—because once you can see the path you're on, you can choose a different one.
Most teams don't fail because they're bad. They fail because nobody slows down long enough to compare their ambition to their reality.
That's what I do.
I look at what it would take to be successful. I look at where we are. And I can usually tell you what happens next.
Not because I can see the future.
Because in most projects, the future is just the present—continued.