January 7th, 2026

The Realities of Application Modernization with Agentic AI (Early 2026)

jkordick
Software Global Black Belt

How to read this article

This article is a reflection based on hands-on experience and is written for engineers and technical leaders who are facing a new application modernization effort and want to build a realistic mental model before reaching for tools. If you are new to application modernization, I recommend reading the article end to end. The early sections focus on why modernization is hard in practice and which foundations matter before any technical decisions are made.

If you are already familiar with the app modernization space and mainly interested in the role of agentic AI, you can skip the introductory sections and jump directly to the section on agentic AI further down.

This article is intentionally dense. The goal is not to provide a checklist, but to surface the complexity, trade-offs, and constraints that tend to get ignored. Take your time, skim if needed, and come back to sections as your modernization effort evolves.

 

Over the past several years, I’ve seen hundreds of application modernization efforts. I’ve modernized software myself, re-architected and re-designed systems, and worked with customers trying to break apart decades of technical and organizational decisions. From that experience, two things are consistently true.

First, every application modernization hides complexity that you need to uncover as early as possible. The problem is that you rarely know what you are looking for when you start.

Second, converting code from one version to another is usually the easiest part.

Most legacy applications suffer from a similar set of issues.

They lack sufficient documentation, which means critical business logic is buried deep inside a highly coupled, hardware-bound, years-outdated monolithic application. These systems are often full of dependency vulnerabilities and implicit assumptions no one remembers making.

At the same time, they are deeply and inconsistently integrated into the surrounding application landscape. This tight coupling is what makes them mission critical for the organization and extremely risky to change.

 

In this post, I’m trying to pull together what I (and we) have learned over the past months and years about application modernization with agentic AI. I’ll stay intentionally abstract to make the ideas transferable. If you’re looking for concrete checklists or step-by-step guidance, I’ll link to those at the end and go into more (technical) detail in follow-up articles.

Two disclaimers upfront.

First, not every application is “modernizable” in the way people usually mean it.

Some systems are bound to specific hardware and environments. Think mainframes, OT devices, or tightly coupled appliances. In those cases, you can often preserve the business logic, but the actual modernization effort becomes re-architecture, re-design, and usually a re-implementation. That is a valid modernization outcome, but it’s not a typical migration project anymore.

Migration, at least in the way many teams approach it, usually tries to preserve not only business logic but also significant parts of the implementation and operating model: code, interfaces, workflows, and the way teams work with the system today.

Second, agentic AI can absolutely reduce the time modernization projects consume, but there is no silver bullet. No “one click, modernization done” solution. If that existed, someone would be very rich and none of us would still be debugging integration tests.

Be critical when vendors promise “80% accuracy” as if that’s the whole story. This is still generative AI in early 2026. Treat claims as marketing until you’ve seen working results in your own codebase, with your own constraints and your own risk profile.

Only believe what you can validate.

Why application modernization is rarely just a technical problem

Before talking about tools or AI, it’s important to be honest about why modernization is hard in the first place.

Common reasons include:

  • High operational costs: You pay a significant amount every year for specialized hardware, expensive licenses, or a very small pool of developers with niche knowledge required to keep the system running.
  • Security and compliance pressure: Known vulnerabilities exist in the codebase, but fixing them feels risky because even small changes might break business critical behavior.
  • Limited scalability, performance, and resilience: The application cannot reliably scale, recover, or meet today’s performance expectations.
  • Innovation bottlenecks: You need to change or extend business logic or add new capabilities, but every modification feels dangerous because the system is fragile and poorly understood.
  • Poor developer experience: Outdated or proprietary languages and tooling prevent teams from using modern development workflows, reducing productivity and putting the organization at a disadvantage compared to competitors.
  • Hard vendor lock-in: The application is tightly coupled to a specific platform, product, or provider, making change expensive and slow.
  • Limited data accessibility: Data is effectively trapped inside a tightly coupled application and database combination and cannot be easily reused elsewhere.

Once you’re clear on the why, the next step is to be explicit about what you are actually trying to do.

Migration is the act of moving an application or workload from one environment to another (for example, from on-premises to the cloud) with minimal changes to its architecture or behavior.

Modernization, on the other hand, is about improving an application’s architecture, code, and operating model to increase agility, scalability, security, and maintainability. This often involves refactoring or re-architecting rather than just moving the system. In some cases, this can even mean a near greenfield rebuild, as described earlier.

How to think about a legacy application before changing it

Once the problem space is clear, the next step is not action, but structured thinking.

In practice, the decision about what to do with a legacy application is not always obvious from the start. Sometimes it only becomes clear after an initial analysis. Sometimes the decision is constrained by technical reality, organizational context, or external dependencies rather than personal preference.

Instead of looking for the “right” answer immediately, it helps to assess the system first. The goal is not to classify the application perfectly, but to reduce uncertainty enough to make informed decisions.

Over time, a few recurring dimensions have proven useful when evaluating legacy applications.

Start with the business context

Before touching architecture or code, understand why the application exists and why it matters. 

Ask yourself what business capabilities it supports, how critical it is to daily operations, and which future needs it cannot currently meet. If the business value or relevance is unclear, any modernization effort will struggle to justify itself, no matter how elegant the technical solution is.

Understand what you are dealing with today

Next, build a rough mental model of the current system.

You don’t need perfect documentation, but you do need to understand the dominant architectural style, the age and health of the codebase, and where the application is tightly coupled to frameworks, runtimes, infrastructure, or external systems. This early understanding often determines which modernization paths are realistic and which ones are not.

Be honest about sustainability

Modernization is not only about moving systems, but about whether they can be maintained and evolved over time.

Signals like high code complexity, missing tests, manual build or deployment steps, and a growing backlog of defects are indicators that change will remain expensive unless something fundamental improves.

Look at security and risk early

Security and compliance constraints are often the real drivers behind modernization.

Legacy authentication models, hardcoded secrets, missing encryption, or unpatched dependencies can quickly turn into business risks. These factors should shape modernization decisions early, not be discovered late in the process.

Pay attention to data and integrations

Data and integrations are where modernization efforts often become fragile.

Highly coupled databases, complex schemas, undocumented reporting dependencies, or a dense integration landscape can limit how far and how fast an application can change. Understanding these dependencies early helps avoid surprises later.

Don’t ignore delivery and operations

How software is built, deployed, and operated matters as much as how it is written.

Limited automation, manual releases, or outdated tooling slow teams down and increase risk. Improving delivery capabilities is often a key motivation for modernization, even if it is not the original driver.

Ground decisions in reality, including cost

Finally, modernization decisions need to be grounded in reality.

This includes operational cost, licensing constraints, availability of skills, and the feasibility of running the application in a different environment. A clear understanding of today’s cost and future trade-offs helps create a credible modernization path.

The point is not perfection

The goal of this assessment mindset is not to produce a perfect classification or a one-size-fits-all answer.

It is to make complexity visible, surface constraints early, and create enough shared understanding that teams can make deliberate, defensible decisions about migration, modernization, or re-architecture.

The organizational reality we like to ignore

Even with perfect technical insight, application modernization rarely fails because of code alone. It fails because ownership is unclear, teams are fragmented, incentives favor stability over improvement, and touching a legacy system is often perceived as a career risk rather than a contribution.

Agentic AI does not fix organizational misalignment. It does not resolve political boundaries, unclear responsibilities, or the constant tension between “keep the system running” and “make it better.” What it can do is reduce the cost of understanding and experimentation enough that modernization becomes possible within these constraints. The decision to act, accept risk, and prioritize change, however, remains a human and organizational one.

And yes, I haven’t talked much about agentic AI yet

And that is intentional.

The use of agentic AI does not eliminate the need for the foundational work of a migration or modernization project. As described earlier, modernization is still about understanding systems, making trade-offs, and managing risk. Agentic AI does not replace that work. It accelerates parts of it.

Before looking at concrete examples of where generative and agentic AI help or don’t help, it’s worth aligning on terminology.

Generative AI typically refers to models that generate content in response to prompts, such as code suggestions, explanations, or documentation. They are reactive and stateless, operating within the context of a single interaction.

Agentic AI builds on generative models but adds orchestration, memory, and goal-oriented behavior. Agents can plan multi-step tasks, invoke tools, iterate over results, and operate across longer-lived workflows.

In the context of application modernization, this distinction matters. Generative AI supports individual tasks. Agentic AI supports processes.
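
To make the distinction tangible, here is a minimal structural sketch in Python. It is not tied to any vendor SDK; `generate`, `agent_run`, and the tool names are purely illustrative stand-ins for a single stateless model call on the one hand, and an orchestration loop with memory and tool access on the other.

```python
# Minimal structural sketch of "generative vs. agentic", not tied to any SDK.
# `generate` stands in for one model call; the agent loop adds planning,
# tool use, and memory around it. All names here are illustrative.

def generate(prompt: str) -> str:
    """Stateless, single-shot generation: one prompt in, one answer out."""
    return f"<model output for: {prompt}>"  # placeholder for a real model call

def agent_run(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    """Goal-oriented loop: plan, act via tools, observe, and iterate."""
    memory: list[str] = []  # longer-lived working context across steps
    for _ in range(max_steps):
        plan = generate(f"Goal: {goal}\nKnown so far: {memory}\nNext action?")
        memory.append(plan)
        # A real agent would pick one tool and its arguments from the plan;
        # for illustration we simply invoke every registered tool.
        for name, tool in tools.items():
            memory.append(f"{name} -> {tool(plan)}")
        if "done" in plan.lower():  # termination would normally be model-decided
            break
    return memory

if __name__ == "__main__":
    # Generative use: one reactive answer inside a single interaction.
    print(generate("Explain what this COBOL paragraph does."))

    # Agentic use: the same model, wrapped in orchestration and tool access.
    trace = agent_run(
        goal="Map the batch jobs that write to the CUSTOMER table",
        tools={"search_code": lambda q: "3 candidate modules found"},
    )
    print(trace)
```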

How agentic AI helps in modernization (and where it doesn’t)

Agentic AI does not magically modernize applications. What it changes is where effort goes and how knowledge is reconstructed when the original system understanding is gone.

Modernization is rarely a linear transformation problem. In legacy systems, the harder challenge is not changing code, but understanding what the system actually does, why it behaves the way it does, and where change is safe. Agentic approaches treat modernization as a continuous process of discovery, validation, and incremental change rather than a one-time rewrite.

Discovery before change

In early phases, agentic AI is most useful as an exploration layer.

For example:

  • An agent can traverse a large legacy codebase and build a map of execution paths, data access patterns, and implicit dependencies across modules.
  • It can correlate database access, batch jobs, and integration code to surface where business rules are actually implemented, not where documentation claims they are.
  • In systems written in older languages, agents can generate explanations of code behavior and reconstruct missing documentation to give teams a shared starting point.

At this stage, the output is not “truth”. It is a set of hypotheses that make further investigation faster and more structured.

No production code is changed yet.
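
As a concrete (and deliberately tiny) illustration of what read-only discovery can look like, the sketch below walks a codebase and records which modules import which others. A real agent would combine many such probes across call graphs, database access, and job schedules; the point here is only the “map, don’t modify” posture. The `./legacy_app` path is an assumption for illustration.

```python
# A read-only discovery pass: build a map of module dependencies without
# touching any production code. Intentionally simple; an agent would layer
# many richer probes on top of this idea.

import ast
from collections import defaultdict
from pathlib import Path

def import_map(root: str) -> dict[str, set[str]]:
    """Return {module_file: {imported module names}} for all Python files under root."""
    deps: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8", errors="ignore"))
        except SyntaxError:
            continue  # legacy files that no longer parse still get skipped, not edited
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps[str(path)].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps[str(path)].add(node.module)
    return deps

if __name__ == "__main__":
    # "./legacy_app" is a placeholder path for the codebase under analysis.
    for module, imports in sorted(import_map("./legacy_app").items()):
        print(module, "->", sorted(imports))
```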

Making behavior explicit

Once a baseline understanding exists, agentic AI helps externalize behavior that previously lived only in people’s heads or fragile code paths.

Typical examples include:

  • Proposing executable specifications or tests based on observed behavior in the code.
  • Identifying inconsistencies between similar-looking logic implemented in different parts of the system.
  • Helping teams define clearer interfaces and boundaries by surfacing what data and behavior actually cross them.

The value here is speed and coverage. Agents can iterate across areas of the system that humans would not have the time or patience to inspect manually.

The goal is not correctness on the first try, but fast feedback loops that reduce uncertainty.
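
One common way to make behavior explicit is a characterization test: pin down what the system currently does, captured as data, before anyone changes it. The sketch below shows the idea; `calculate_discount` and the observed cases are hypothetical stand-ins for behavior an agent might surface from a real codebase.

```python
# A minimal characterization test: assert what the legacy code *currently*
# does, not what anyone thinks it should do. The function and cases below
# are hypothetical; in practice the routine would be imported, and the
# expectations would come from observed production inputs and outputs.

import pytest

def calculate_discount(customer_type: str, order_total: float) -> float:
    """Stand-in for the legacy routine under test (normally imported, not redefined)."""
    if customer_type == "wholesale" and order_total > 1000:
        return round(order_total * 0.12, 2)
    return 0.0

# Observed behavior, captured as data rather than as assumptions in people's heads.
OBSERVED_CASES = [
    ("wholesale", 1500.00, 180.00),
    ("wholesale", 900.00, 0.00),
    ("retail", 1500.00, 0.00),
]

@pytest.mark.parametrize("customer_type,total,expected", OBSERVED_CASES)
def test_discount_matches_observed_behavior(customer_type, total, expected):
    assert calculate_discount(customer_type, total) == expected
```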

Incremental change instead of big rewrites

Agentic AI becomes most effective once change is constrained by observable behavior.

In practice, this can look like:

  • Proposing small, reviewable refactorings behind stable contracts.
  • Assisting with framework, runtime, or language upgrades where the transformation pattern is known.
  • Generating repetitive changes across many modules while preserving agreed-upon behavior.

Here, the agent acts less like an autonomous developer and more like a multiplier for experienced engineers. It accelerates repetitive work and explores alternatives, while humans decide which changes are acceptable.

This is also where misuse becomes tempting: skipping validation or trusting large automated changes too early almost always backfires.
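
A small sketch of what “change behind a stable contract” can look like: the legacy implementation and its agent-proposed replacement must both reproduce the same behavioral contract, expressed as data, before the swap is accepted. The class names and values are illustrative, not taken from a real system.

```python
# Both implementations must satisfy the same contract before the new one
# replaces the old. Everything here is a hypothetical, simplified example.

from typing import Protocol

class TaxCalculator(Protocol):
    def vat(self, net_amount: float) -> float: ...

class LegacyTaxCalculator:
    def vat(self, net_amount: float) -> float:
        return net_amount * 19 / 100          # original arithmetic, kept verbatim

class RefactoredTaxCalculator:
    VAT_RATE = 0.19
    def vat(self, net_amount: float) -> float:
        return net_amount * self.VAT_RATE     # agent-proposed cleanup of the same rule

def satisfies_contract(impl: TaxCalculator, cases: list[tuple[float, float]]) -> bool:
    """The contract is expressed as data; any implementation must reproduce it."""
    return all(abs(impl.vat(net) - expected) < 1e-9 for net, expected in cases)

CONTRACT = [(100.0, 19.0), (0.0, 0.0), (249.99, 47.4981)]

if __name__ == "__main__":
    for impl in (LegacyTaxCalculator(), RefactoredTaxCalculator()):
        print(type(impl).__name__, "passes contract:", satisfies_contract(impl, CONTRACT))
```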

Continuous validation in a living system

Legacy systems are rarely static, and neither is modernization.

Agentic approaches help by:

  • Continuously validating behavior as changes are introduced.
  • Detecting regressions across integration boundaries.
  • Updating specifications and tests as understanding improves over time.

This shifts modernization from a one-off project to a capability that can be applied incrementally, even while the system remains in production.
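
One pattern that supports this kind of continuous validation is a parallel (shadow) run: the legacy path stays authoritative, the modernized path runs alongside it, and divergences are logged instead of affecting users. The sketch below shows the shape of that idea; all handler names are hypothetical.

```python
# Shadow comparison: route real requests to the legacy path, run the
# modernized path in parallel, and record any divergence. Handlers are
# placeholders for whatever sits at the integration boundary.

import logging

logger = logging.getLogger("shadow-validation")

def legacy_handler(order_id: str) -> dict:
    return {"order_id": order_id, "status": "SHIPPED"}   # stand-in for the old system

def modernized_handler(order_id: str) -> dict:
    return {"order_id": order_id, "status": "SHIPPED"}   # stand-in for the new path

def handle_request(order_id: str) -> dict:
    result = legacy_handler(order_id)                     # legacy stays authoritative
    try:
        candidate = modernized_handler(order_id)          # shadow call, no user impact
        if candidate != result:
            logger.warning("divergence for %s: %s vs %s", order_id, result, candidate)
    except Exception:                                     # shadow failures must never propagate
        logger.exception("shadow path failed for %s", order_id)
    return result

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    print(handle_request("A-1042"))
```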

The reality check

Agentic AI does not remove complexity.

It makes complexity visible, more navigable, and cheaper to reason about. It accelerates understanding and execution, but it does not eliminate the need for architectural judgment, domain knowledge, or human responsibility.

Where agentic approaches break down

Agentic AI struggles when:

  • Architectural decisions require deep domain understanding and trade-offs.
  • Business rules are ambiguous, contradictory, or historically grown.
  • Systems are so customized that generalized patterns no longer apply.

In these cases, agents can confidently propose changes that are locally correct but globally wrong. Without strong guardrails and human review, automation amplifies errors faster than it delivers value.

The underlying pattern

Across all of these examples, one pattern repeats:

Agentic AI is most effective when it accelerates understanding, validation, and repetition.

It is least effective when asked to replace architectural judgment, domain expertise, or responsibility.

This is why human-in-the-loop is not optional: it is the design

In successful modernization efforts, agentic AI does not replace engineers. It changes how they spend their time.

Agents explore, correlate, and propose. Humans decide, review, and take responsibility. The control plane stays human, especially in mission-critical systems where correctness, compliance, and trust matter more than speed.

This is not a limitation of the technology. It is the reason it works. Agentic AI is most effective when treated as a force multiplier for experienced teams, not as an autonomous modernization solution.

Key takeaways 

If you made it this far, whether line by line or by strategic scrolling, these are the core ideas to take away from this article.

  • Application modernization is rarely blocked by code alone. Missing knowledge, organizational constraints, and risk tolerance are often the harder problems.
  • Migration and modernization are not the same thing. Being explicit about which one you are pursuing changes both scope and expectations.
  • Foundational work matters. Understanding architecture, data, integrations, and delivery constraints is not optional, even when using AI.
  • Agentic AI does not replace this work. It accelerates discovery, validation, and repeatable change once the foundations are in place.
  • Human judgment remains the control plane. Successful modernization keeps responsibility, architectural decisions, and risk ownership with people.
  • Agentic AI makes complexity more visible and cheaper to reason about. It does not remove it.

If these perspectives resonate, I’ll follow up with more articles over the coming days that go deeper into the technical side of application migration and modernization with agentic AI.

In those posts, I’ll walk through concrete, hands-on examples of what this looks like in practice, including spec-driven discovery and GitHub Copilot-powered upgrade workflows.

