Introduction
When I started writing software professionally in 2015, the tech landscape looked completely different. Tools and frameworks have kept evolving since then, but the process of building software barely moved for a decade. That changed in the last couple of months. Here’s what has changed in how the Power Pages team builds and ships software – and what hasn’t.
Development in the age of AI
Before the arrival of coding agents like GitHub Copilot, Claude Code, etc., software development was role-focused: the jobs of the product manager, designer, engineer, and tester were clearly defined. In fact, you can still see it in job postings. With AI and tools like GitHub Copilot, however, the boundary between these roles is getting blurred. If we focus on the code-writing part of software development, something that used to take two weeks can now be done in two hours. What does the engineer do with this additional time? This is something everyone is trying to answer, and it is where the fear around AI comes from. As I see it, though, an engineer is not just an engineer anymore. With AI, they also have working knowledge of other disciplines, which they can use to improve the product. An engineer who always wanted to design can now complement the design team; a PM who wanted to code can do it now. In fact, I have already started seeing this within Power Pages, where product managers are also generating and contributing code. And with more time on our hands, we can take on things that were earlier put off due to capacity constraints – automating manual workflows and processes, fixing paper cuts in our product, and so on.
How Power Pages is leveraging AI
With the arrival of new AI tools and agents, a lot changed. The tools not only helped us write code but also reduced the time it took to write it. All of that came with a new learning curve: we had to learn to use these tools and, more importantly, to use them effectively. Since then, the models and tools have matured. The learning curve for the tools themselves is lower, but the learning curve for working with them is not – it asks us to rethink how we build software. For each part of software development, we started asking how AI could help. A lot of this also comes from our leadership, which encourages us to use AI and provides us with the best available tools.
Today, we use AI in almost all the phases of software development and the tools change based on the task at hand and individual preferences. Given the right context, these tools help us to create artifacts like design documents, slides, and obviously code.
Here’s a closer look at how we’re using AI across different parts of our workflow:
1. Plan/Design Document
With the current set of tools, small changes and pull requests can be created with a simple prompt and some context. For complex features, though, we rely on design documents and the plan mode that comes with these agents. We do design reviews religiously for large and complex features. AI acts as a peer for brainstorming design decisions and writing the doc before we invite the broader team to review. The approved design is then used as context for the agent to implement the code. The code itself is sometimes spread across repositories, so the implementer keeps the agent honest – small, reviewable diffs over sprawling ones.
2. Custom Agent Skills
Agent Skills is a new standard that agents like Claude Code and GitHub Copilot have adopted to streamline repeatable workflows. We use a lot of custom agent skills that automate different parts of our workflows. These skills help with end-to-end feature development as well as day-to-day tasks like creating documents, slides, reports, etc. Some of the most-used skills are:
- `/tdd` – This skill pushes the agent to follow test-driven development: write tests first, then write only enough code to make them pass while staying within the codebase guidelines. We use this skill to implement much of the server-side logic in the Power Pages runtime. It also uses a lot of sub-skills to accomplish atomic tasks like building the solution, running unit tests, etc.
- `/fix-a11y-bug` – Given an Azure DevOps (ADO) work item ID, this skill fetches the bug details from ADO and implements the fix. It also uses the Playwright MCP server to test the changes and ensure the fix is correct.
- `/skill-creator` – This skill is provided by Anthropic and is part of the `anthropics/skills` marketplace. It helps us create new skills. We are using it to develop the recently released Power Pages plugin for Claude Code and GitHub Copilot CLI. Our own `AGENTS.md` guidance gives the agent the local rules and conventions it needs to produce something the team can actually use.
Whenever someone notices they are doing something repetitive, they create a skill for it and share it with others if it can help everyone.
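For readers curious what a skill looks like on disk: under the Agent Skills standard, a skill is a folder containing a `SKILL.md` file with YAML frontmatter followed by instructions for the agent. The sketch below is a hypothetical, simplified version of what a `/tdd`-style skill might contain – not our actual skill definition.

```markdown
---
name: tdd
description: Implement a change using test-driven development.
---

# TDD workflow

1. Read the work item and identify the behavior to implement.
2. Write failing tests first, then run them to confirm they fail.
3. Write only enough code to make the tests pass.
4. Refactor while keeping tests green, following codebase guidelines.
5. Build the solution and run the full unit test suite before finishing.
```

The frontmatter tells the agent when the skill applies; the body is the procedure it should follow once invoked.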
3. Code Reviews
When more code can be generated quickly, code review becomes even more important. We still keep humans in the loop for pull request review and approval, but we also use GitHub Copilot CLI to review changes with multiple models at once. That gives us a useful first pass on the diff and highlights places where the models disagree. Those disagreements are often where a human reviewer should look first. This does not replace human judgment but gives reviewers a better starting point. The review still needs a person to decide whether the code is correct, whether it fits the architecture, and whether it is something the team will be happy to maintain.
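To make the idea of "models disagreeing" concrete, here is a minimal sketch of how one might triage comments from several review passes. The `triage` function and its data shape are assumptions for illustration, not the actual Copilot CLI workflow: it simply counts which (file, line) locations each model flagged and surfaces the contested ones first.

```python
from collections import defaultdict

# Hypothetical sketch: each model's review pass yields comments keyed by
# (file, line). Locations flagged by only some models are the interesting
# disagreements, so they sort ahead of the unanimous ones.
def triage(reviews: dict[str, dict[tuple[str, int], str]]) -> list[tuple[str, int]]:
    """Return flagged locations, most-contested first."""
    votes: defaultdict[tuple[str, int], int] = defaultdict(int)
    for comments in reviews.values():
        for location in comments:
            votes[location] += 1
    n_models = len(reviews)
    # False sorts before True, so partial agreement comes first.
    return sorted(votes, key=lambda loc: (votes[loc] == n_models, loc))

reviews = {
    "model-a": {("auth.ts", 42): "possible null deref", ("api.ts", 7): "naming"},
    "model-b": {("auth.ts", 42): "missing null check"},
}
print(triage(reviews))  # [('api.ts', 7), ('auth.ts', 42)]
```

A human reviewer would then start at the top of that list rather than reading the diff cold.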
4. Quality Gates
Faster code generation only helps if the product quality holds up. That is why quality gates and automated tests matter more now than they did before. If the team can produce changes more quickly, validation has to keep pace. Otherwise you get a short burst of speed followed by regressions and rework. For us, that means investing in tests, automation, and the checks that keep standards from slipping as throughput rises. Those gates are what let the team move faster without lowering the quality bar. We have started adding more automated validations for pull requests and release-candidate builds, which means less manual verification and more speed.
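In spirit, a quality gate is just a set of thresholds a candidate build must clear before it moves forward. The sketch below is a hypothetical illustration of that idea – the check names and thresholds are made up, not our actual pipeline configuration.

```python
# Hypothetical sketch of a release-candidate quality gate: the build is
# promoted only if every automated check clears its minimum threshold.
def gate(metrics: dict[str, float], thresholds: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (passed, failures) for a candidate build."""
    failures = [
        f"{name}: {metrics.get(name, 0.0)} < {minimum}"
        for name, minimum in thresholds.items()
        if metrics.get(name, 0.0) < minimum
    ]
    return (not failures, failures)

# Illustrative thresholds, not real pipeline values.
thresholds = {"unit_test_pass_rate": 1.0, "e2e_pass_rate": 0.98, "coverage": 0.80}
passed, failures = gate(
    {"unit_test_pass_rate": 1.0, "e2e_pass_rate": 0.95, "coverage": 0.85},
    thresholds,
)
print(passed, failures)  # False ['e2e_pass_rate: 0.95 < 0.98']
```

The point of encoding the gate rather than checking by hand is exactly the one above: as throughput rises, the bar stays where it was.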
What doesn’t work
While there are plenty of examples of AI making everyone more productive, it is not all sunshine and rainbows. There are still places where AI falls short. In the Power Platform context, the entire frontend code of all applications across Power Platform lives in a single repo – think Power Apps Maker Portal, Power Pages Studio, Power Automate Designer, Admin Center, etc. Working in this codebase with AI is still hit or miss. We have also noticed that agents are not particularly keen on code reuse; they tend to generate everything from scratch. We have tried to address that with custom instructions in AGENTS.md, but sometimes the agent simply ignores them.
Wrapping Up
This is the shift we see in Power Pages today. AI helps write code, but the bigger value comes from using it to tighten the whole loop around development. None of this feels settled, and that is probably the point. The models, tools and workflows are changing rapidly. What works well for us now will not be the last version of how we build. So the job is not to lock in one process and call it done. It is to keep re-evaluating. We look at where the tools help, where they still fall short, and where the team needs better structure around them. As the industry moves, we expect our approach to move with it. That is the part that matters most to us right now. We are not trying to chase every shift in the AI landscape, but we are trying to stay honest about what is working, what is not, and what needs to change next.