GitHub Copilot - Custom Agents for Full-Stack Teams: A Practical Operating Model for .NET, React and Azure

GitHub Copilot custom agents allow teams to define specialized AI assistants, each with its own role, tool access and behavioral boundaries. Instead of relying on one general-purpose assistant for everything, a team can create multiple agents that mirror the actual roles in the engineering organization. After working with custom agents for a while, my biggest insight was simple: the quality of AI-assisted engineering improves dramatically once the AI knows what role it is supposed to play.

In this post, I want to walk through a realistic setup for a full-stack product team building a React frontend and a .NET backend on Azure. The focus is on how custom agents work, how they are structured and why splitting responsibilities across multiple agents makes a measurable difference for requirements engineering, architecture, implementation, documentation and testing.

Benefits of Role Separation

Most teams that start with AI tooling make the same mistake: they use one generic assistant (with one context) for everything and hope good prompts will somehow compensate for the lack of structure. That usually works for isolated tasks. It works much less reliably once a feature spans several concerns at the same time.

When implementing a new feature, there is rarely just one question to answer. There is usually a chain of questions:

  • What is the actual requirement?
  • What belongs in the frontend and what belongs in the backend?
  • Which security implications does the change introduce?
  • What has to be documented?
  • Which Azure concerns are relevant?
  • Which security checks should be in place?
  • Which tests are missing?

If the same agent is supposed to answer all of that equally well, the result becomes inconsistent. Some runs focus on implementation speed. Others over-document. Others ignore testing. The behavior drifts because the role is undefined. In GitHub Copilot especially, the context window is more limited than what the raw model offers, so the reality today is that an agent cannot reliably keep all of these concerns in mind at the same time.

The solution is the same one that works in real teams: role separation.

Custom Agents

A custom agent in GitHub Copilot is a Markdown file with YAML front matter. It defines a name, a description, the tools the agent may use and a prompt body that describes the agent’s role, responsibilities, constraints and expected output format.

GitHub supports three scopes for agent files:

  • Repository scope: .github/agents/*.agent.md - shared with the team, versioned with the code.
  • Organization scope: .github/agents/*.agent.md in a special .github repository - shared across all repositories in the organization.
  • Personal scope: stored in the user’s own configuration folder ~/.config/github/agents - only available to that individual.

For project-specific agents, the repository scope is the natural choice. It keeps the agent definitions close to the code they operate on and makes them available to every team member automatically.
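In a repository using that scope, the agent files sit right next to the code they govern. A hypothetical layout for the lineup described later in this post:

```text
.github/
  agents/
    requirements-engineer.agent.md
    platform-architect.agent.md
    security-specialist.agent.md
    backend-specialist.agent.md
    frontend-specialist.agent.md
    test-specialist.agent.md
    documentation-steward.agent.md
src/
docs/
```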

The front matter supports these key properties:

| Property | Purpose |
| --- | --- |
| name | Display name of the agent |
| description | Short summary shown during agent selection |
| tools | List of tools the agent may use (e.g. read, search, edit, terminal) |
| target | Where the agent runs: vscode, github.com, or copilot-coding-agent |
| model | Optional model override (e.g. gpt-5.4, claude-opus-4.6) |

The prompt body below the front matter is free-form Markdown. That is where the role definition, the constraints and the output expectations go.
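To make the file anatomy concrete, here is a small TypeScript sketch that splits an agent file into its two parts: the YAML front matter and the Markdown prompt body. This is purely illustrative (Copilot performs this parsing itself), and the function name is hypothetical.

```typescript
// Illustrative only: split an *.agent.md file into YAML front matter and prompt body.
// Copilot does this parsing itself; splitAgentFile is a hypothetical name.
function splitAgentFile(content: string): { frontMatter: string; body: string } {
  // Front matter is delimited by "---" lines at the very top of the file.
  const match = content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) {
    throw new Error("Missing YAML front matter delimiters");
  }
  return { frontMatter: match[1], body: match[2].trim() };
}

const example = [
  "---",
  "name: requirements-engineer",
  'tools: ["read", "search", "edit"]',
  "---",
  "",
  "You are the requirements engineer for the Unitra product team.",
].join("\n");

const parsed = splitAgentFile(example);
// parsed.frontMatter holds the YAML properties, parsed.body holds the role prompt.
```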

The Example Project

To make this concrete, imagine a product called Unitra. It is a line-of-business application for enterprise operations teams.

The stack is deliberately realistic:

  • React for the frontend.
  • .NET for the API.
  • Azure Static Web Apps for the UI.
  • Azure Container Apps for the backend.
  • Azure SQL for transactional data.
  • Microsoft Entra ID for authentication.
  • Azure Key Vault, App Configuration and Application Insights for the usual platform responsibilities.

That kind of system does not fail because React or .NET are weak technologies. It fails when too much interpretation happens between requirements, implementation and release.

This is why I would not create one “super agent”. I would mirror the real team.

The Team I Would Model as Agents

For a setup like this, the best results usually come from a compact but very clear lineup:

| Human role | Responsibility | Agent role |
| --- | --- | --- |
| Product Manager | Scope, intent, acceptance criteria | Requirements Engineer |
| Solution Architect | Boundaries, trade-offs, Azure concerns | Platform Architect |
| Security Engineer | Threats, identity, secrets, hardening and review gates | Security Specialist |
| Senior .NET Engineer | API, domain logic, data access | Backend Specialist |
| Senior React Engineer | UI flows, state, accessibility | Frontend Specialist |
| QA / Test Engineer | Regression control and release confidence | Test Specialist |
| Technical Writer / Senior Engineer | ADRs, runbooks, contributor docs | Documentation Steward |

The important part is not the title. The important part is that every one of these roles has a narrow concern.

That narrow concern is exactly what should be encoded into a custom agent.

How an Agent File Is Structured

Here is a realistic example for a requirements-focused agent. It would be stored as .github/agents/requirements-engineer.agent.md:

```markdown
---
name: requirements-engineer
description: Turns feature ideas into structured requirements, acceptance criteria and delivery notes for the Unitra product
tools: ["read", "search", "edit"]
target: vscode
---

You are the requirements engineer for the Unitra product team.

Responsibilities:
- Convert feature requests into implementation-ready requirements.
- Capture user roles, workflows, acceptance criteria and open questions.
- Include non-functional requirements such as security, latency, auditability and accessibility.
- Produce markdown files that can be reviewed and implemented by the engineering team.

Constraints:
- Do not write production code.
- Do not invent repository capabilities that do not exist.
- Separate confirmed facts from assumptions.

Output format:
- Problem statement
- User roles
- Acceptance criteria
- Edge cases
- Non-functional requirements
- Open questions
```

There is nothing magical in this file. That is precisely why it works. It describes a clear role, a clear boundary and a clear output shape.

A few things are worth noting about how this agent is configured:

Tool selection matters. This agent has read, search and edit - it can look at existing code and documentation, search for context and write files. It deliberately does not have terminal because it should not run builds or tests. Tool access defines the boundary of what an agent can actually do, not just what it is told to do.

The target controls where the agent is available. Setting target: vscode means the agent appears in VS Code. Setting target: copilot-coding-agent would make it available for the autonomous coding agent on GitHub. An agent can also omit the target to be available everywhere.

The prompt body is the real configuration. The YAML front matter handles discovery and tooling. The prompt body defines how the agent thinks and what it produces. A well-written prompt with clear responsibilities and constraints is worth more than any amount of clever tool configuration.

Here is another example, this time for the implementation phase. The backend-specialist.agent.md focuses strictly on .NET rules and architecture:

```markdown
---
name: backend-specialist
description: Implements API endpoints and domain logic for the Unitra .NET backend
tools: ["read", "search", "edit", "terminal"]
target: vscode
model: claude-opus-4.6
---

You are the Senior .NET Backend Engineer for the Unitra product.

Responsibilities:
- Implement Minimal API endpoints using MapGroup.
- Handle database operations using Entity Framework Core.
- Write structured logs via OpenTelemetry.
- Follow the existing architecture and patterns in the codebase.

Technical Constraints:
- Never use 'var'. Always use explicit types.
- Ensure all asynchronous calls use a CancellationToken.
- Always separate domain logic from API handlers.
- Use StrongOf and Unio for value objects and discriminated unions.
- Never touch React or frontend files.
- Ensure 100% test coverage is added for any new code.
- Focus on performance and security best practices.

Output format:
- Provide complete C# code.
- Briefly list the touched files.
- Run the .NET build in the terminal to verify compilation.
```

This limits the agent's creativity in exactly the right places. It forces explicit types, cancellation-aware async calls and a clean separation between domain logic and API handlers. These are the kinds of coding standards that usually require human reviewers to enforce. Because it has terminal access, it can actively build the project to verify its own work.

How Custom Agents Work in Practice

Once the agent files exist, they become available in the Copilot chat interface. In VS Code, a developer can select the agent by name and start a conversation that is scoped to that agent’s role and tool access.

The key difference from a generic Copilot session is that the agent carries its own system prompt. That means it does not need to be re-instructed every time. The behavior is consistent across sessions and across team members.

Let us assume the team wants to add an audit timeline for work order changes. Here is how the different agents would contribute.

Step 1: Requirements first

The Requirements Engineer agent is selected and asked to structure the feature. It produces a file like docs/features/audit-timeline.md containing:

  • The business purpose of the feature.
  • The user roles that may view the timeline.
  • The audit fields that must be stored and shown.
  • Permission rules.
  • Acceptance criteria.
  • Edge cases such as deleted users, masked values or empty histories.

Because this agent has edit tool access, it can write the file directly into the workspace. Because its prompt forbids writing production code, it stays within its lane.
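The resulting requirements file might look roughly like this. The content is abridged and entirely hypothetical:

```markdown
# Feature: Audit Timeline for Work Order Changes

## Problem statement
Operations teams need to see who changed a work order, when and why.

## User roles
- Operations manager: full timeline access.
- Field technician: own work orders only.

## Acceptance criteria
- Every create, update and status change is recorded with actor and timestamp.
- Masked fields show a placeholder, never the raw value.

## Edge cases
- Deleted users are shown as "former user".
- Empty histories render an explicit empty state.

## Open questions
- What is the retention period for audit entries?
```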

Step 2: Architecture decisions before implementation

Next, the Platform Architect agent reviews the requirement and decides where the behavior belongs.

In the Unitra example, that usually means:

  • The .NET backend owns audit creation and retrieval.
  • The React frontend renders the timeline and filter UI.
  • Sensitive values must be masked based on data classification.
  • Application Insights should carry correlation identifiers for support tracing.
  • The decision should be captured as an ADR if the retention and masking policy has long-term impact.

This agent might use read to look at existing architecture documents and edit to create the ADR. It does not touch production code either - its job is to make decisions, not to implement them.

Step 3: Security review before the implementation gets expensive

The Security Specialist agent reviews the planned change early enough to influence the implementation while it is still cheap to adjust.

For the Unitra example, that review would typically cover:

  • Whether the audit endpoint leaks fields that should be masked.
  • Whether the authorization model is aligned with Entra ID roles and claims.
  • Whether secrets and configuration stay out of source code and move through Key Vault or App Configuration.
  • Whether the change introduces new attack surfaces such as filter injection, over-broad data exposure or weak tenant isolation.
  • Whether the new behavior should add security-specific documentation or operational checks.

This agent should have read and search access so it can inspect the existing codebase, but it does not necessarily need edit. Its job is to flag issues and make recommendations - the implementation agents handle the fixes.

Step 4: Backend and frontend agents implement inside their boundaries

The Backend Specialist is selected to implement the API. Because its prompt focuses exclusively on .NET concerns - API contracts, domain logic, persistence, telemetry and error handling - it does not suddenly improvise UI copy or make infrastructure decisions.

The Frontend Specialist handles the React side: rendering, state management, loading states, error states and accessibility. It stays within its boundary for the same reason.

This separation matters because it reduces prompt drift. Each agent has a smaller problem to solve, which means it solves that problem more reliably.

Both of these agents typically need read, search, edit and terminal - they are the ones that write and test production code.
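To illustrate the frontend boundary, here is the kind of narrowly scoped code the Frontend Specialist would own: a small request-state reducer covering the loading and error states mentioned above. All type and function names are hypothetical, not part of any real Unitra codebase.

```typescript
// A discriminated union modelling the lifecycle of an API request.
// All names are hypothetical - a sketch of Frontend Specialist output.
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "success"; data: T };

type RequestAction<T> =
  | { type: "start" }
  | { type: "fail"; message: string }
  | { type: "succeed"; data: T };

// Pure reducer: each action maps to exactly one state, so every
// UI branch (spinner, error banner, content) is explicit.
function requestReducer<T>(
  state: RequestState<T>,
  action: RequestAction<T>
): RequestState<T> {
  switch (action.type) {
    case "start":
      return { status: "loading" };
    case "fail":
      return { status: "error", message: action.message };
    case "succeed":
      return { status: "success", data: action.data };
  }
}
```

In a React component, a reducer like this would typically be wired up via useReducer, keeping loading and error handling out of the render logic.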

How Tool Access Defines Real Boundaries

One of the most practical aspects of custom agents is tool configuration. The tools property in the front matter is not just a list of capabilities - it defines the actual boundary of what the agent can do.

GitHub Copilot currently supports these tools for custom agents:

  • read - read files from the workspace.
  • edit - create or modify files.
  • search - search across the codebase.
  • terminal - execute commands in the shell.
  • fetch - retrieve content from URLs.

The right tool access depends entirely on the agent’s role:

| Agent | Typical tools | Reasoning |
| --- | --- | --- |
| Requirements Engineer | read, search, edit | Needs to read context and write requirement documents. No terminal access needed. |
| Platform Architect | read, search, edit | Writes ADRs and reviews structure. Does not need to run commands. |
| Security Specialist | read, search | Reviews code for vulnerabilities. Should not modify files directly. |
| Backend Specialist | read, search, edit, terminal | Writes code, runs tests, builds projects. |
| Frontend Specialist | read, search, edit, terminal | Same as backend, but for the React side. |
| Test Specialist | read, search, edit, terminal | Writes and runs tests. |
| Documentation Steward | read, search, edit | Writes and updates documentation. |

Restricting tools is not about distrust. It is about preventing an agent from accidentally drifting outside its role. A requirements agent that can run terminal commands might start building things. A security agent that can edit files might start fixing issues instead of flagging them. The tool boundary reinforces the behavioral boundary set by the prompt.
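As a concrete example, the Security Specialist's agent file would grant inspection tools only. A sketch, with the prompt body abbreviated:

```markdown
---
name: security-specialist
description: Reviews Unitra changes for authentication, authorization, secret handling and data exposure issues
tools: ["read", "search"]
target: vscode
---

You are the Security Specialist for the Unitra product team.

Flag issues and recommend fixes. Do not modify files yourself -
the implementation agents handle the changes.
```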

Why documentation finally stops being an afterthought

One of the most underrated benefits of custom agents is documentation.

In many projects, documentation is not neglected because people dislike it. It is neglected because it has no clear owner during active development. Everyone assumes it can be fixed later.

A dedicated Documentation Steward agent changes that dynamic. Its job should be very narrow:

  • Keep ADRs up to date.
  • Update runbooks when operational behavior changes.
  • Document new local development steps.
  • Record contract changes between frontend and backend.
  • Write concise release notes when a change is user-visible.

Once that role is formalized as a custom agent, documentation stops being a vague social expectation and becomes an explicit part of delivery. The agent can be invoked after every significant change and because its prompt is focused exclusively on documentation, the output tends to be much more consistent than what a general-purpose assistant would produce.

Why a security agent is important

Security is one of those areas that almost every team claims to care about, but very few teams encode properly into their daily delivery flow.

The usual pattern is familiar: architecture is discussed, code gets written, tests get added and only near the end someone asks whether the change has security implications. By that point, most structural decisions are already expensive to undo.

A dedicated Security Specialist agent moves that review earlier in the process. Its responsibility should stay focused:

  • Review authentication and authorization assumptions.
  • Check secret handling and configuration boundaries.
  • Identify obvious attack surfaces in API and UI changes.
  • Flag missing hardening steps for Azure-hosted workloads.
  • Require security-relevant documentation when behavior changes.

In a .NET, React and Azure project, that usually means looking at Entra ID integration, claim usage, tenant boundaries, managed identities, Key Vault usage, data exposure in APIs and operational traceability for security-relevant events.

This does not replace a real security review for high-risk systems. What it does is move basic security discipline much earlier into the development cycle.

Why testing should have its own agent

When the same agent is responsible for both implementation and testing, tests often become an afterthought. They are added at the end of the implementation, if at all, and they tend to be superficial because the agent is already focused on making the code work.

If the implementation agent is also responsible for validating its own work, tests often become decorative. They exist, but they do not really challenge the change.

A dedicated Test Specialist agent with a narrower but stricter mandate produces better results:

  • Add unit tests where domain logic changed.
  • Add integration tests where API behavior changed.
  • Add end-to-end checks where user workflows changed.
  • State the remaining risk explicitly.
  • Refuse shallow coverage.

In a .NET and React application on Azure, that can translate to:

  • xUnit tests for the domain and application layers.
  • Integration tests for the API and persistence.
  • Playwright tests for the React workflow.
  • Verification that telemetry-relevant behavior is still observable when it matters.

Because this agent has terminal access, it can actually run the tests it writes and verify they pass. That closes the feedback loop within the same session - the agent does not just generate test code, it validates it.

This is the difference between “some tests were added” and “release confidence actually improved”.
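Encoded as an agent file, that mandate could look like this. A sketch based on the responsibilities above:

```markdown
---
name: test-specialist
description: Adds and runs unit, integration and end-to-end tests for Unitra changes
tools: ["read", "search", "edit", "terminal"]
target: vscode
---

You are the Test Specialist for the Unitra product team.

Responsibilities:
- Add xUnit tests where domain logic changed.
- Add integration tests where API behavior changed.
- Add Playwright tests where user workflows changed.
- Run the test suites in the terminal and report the results.

Constraints:
- Refuse shallow coverage; state the remaining risk explicitly.
- Do not modify production code.
```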

Quality Engineer Agents: the second pair of eyes

Another agent can focus entirely on quality engineering. Its role is not to write code, but to review pull requests, check consistency with coding standards, verify that tests actually challenge the logic, and ensure documentation is updated.

There is a very practical reason for this separation: model bias. If you use an agent powered by e.g. claude-opus-4.6 for the implementation, you might not want the exact same model to review its own code. It tends to overlook its own logical gaps. By defining a Quality Engineer agent that explicitly uses a different model configuration (like model: gpt-5.4), you can easily get another “pair of eyes” on the change.
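A sketch of what that could look like: the front matter pins a different model than the implementation agents, and the prompt body is abbreviated.

```markdown
---
name: quality-engineer
description: Reviews changes for coding-standard compliance, meaningful tests and updated documentation
tools: ["read", "search"]
target: vscode
model: gpt-5.4
---

You are the Quality Engineer for the Unitra product team.

Review changes produced by other agents and team members.
Do not write or fix code yourself - report findings and recommendations.
```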

A good starting point

If I had to start tomorrow with a new full-stack project, I would not begin with ten agents. I would begin with seven:

  • requirements-engineer.agent.md
  • platform-architect.agent.md
  • security-specialist.agent.md
  • backend-specialist.agent.md
  • frontend-specialist.agent.md
  • documentation-steward.agent.md
  • test-specialist.agent.md

That is enough to cover the delivery lifecycle without creating overhead.

Anything beyond that should only be added once a real gap appears in day-to-day work.

Usage

The real work happens right inside the IDE or GitHub interface. When a developer starts a new feature, they do not just throw an ambiguous prompt at Copilot. They select the exact agent they need.

Depending on the phase of the feature, the agents are invoked exactly when they are required:

  • First, the Requirements Engineer is asked to flesh out the Acceptance Criteria.
  • Then, the Platform Architect reviews the design.
  • During active coding, the Backend Specialist generates the .NET endpoints while the Frontend Specialist builds the React components.
  • Before the code is pushed, the Test Specialist verifies coverage.

With autonomous capabilities like Copilot Agent Mode or Workspace, you can delegate larger chunks of the implementation to these specific roles and only step in for final review and approval before merging.

Conclusion

GitHub Copilot custom agents work best when they mirror the actual roles in the engineering team. Each agent gets a clear responsibility, a focused prompt and the right set of tools. That structure is what turns AI assistance from a general-purpose autocomplete into a reliable part of the development workflow.

For a full-stack team working with React, .NET and Azure, that means defining agents for requirements, architecture, security, backend, frontend, documentation and testing. Each one stays within its lane and the quality of the overall result improves because no single agent has to be good at everything.

The mechanics are simple: a Markdown file with YAML front matter, a well-defined role and a deliberate tool selection. The impact is not. Once the team works with agents that have real boundaries and real expertise, the difference in consistency, documentation quality and test coverage becomes obvious.


Let's Work Together

Looking for an experienced Platform Architect or Engineer for your next project? Whether it's cloud migration, platform modernization or building new solutions from scratch - I'm here to help you succeed.

