
GitHub Copilot - Unit Tests

Unit Testing Tools and Environment


Overview


GitHub Copilot helps developers generate, improve, and maintain unit tests.

It can:

  • Generate tests

  • Suggest assertions

  • Identify edge cases

  • Fix failing tests

  • Maintain consistent testing patterns


However:

Manual review and testing are still required.


How GitHub Copilot Helps With Unit Testing

  • Setup testing frameworks: Suggests frameworks and extensions

  • Generate test code: Creates unit, integration, and end-to-end tests

  • Handle edge cases: Identifies boundary conditions

  • Fix failing tests: Suggests fixes for broken tests

  • Maintain consistency: Follows project testing conventions
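
To make the "Handle edge cases" capability concrete, here is a minimal sketch of the kind of boundary-condition tests Copilot might suggest. The `divide` function and the test names are hypothetical, and Python's built-in unittest is used for illustration:

```python
import unittest

# Hypothetical function under test (not from any real project).
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        # Boundary condition a generated suite should cover.
        with self.assertRaises(ValueError):
            divide(1, 0)

    def test_negative_operands(self):
        self.assertEqual(divide(-9, 3), -3)
```

Run with `python -m unittest` from the project root.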


Important Limitation


Generated tests do not guarantee full coverage.

Developers must still:

  • Review tests

  • Add missing cases

  • Validate correctness


Setting Up a Testing Framework


GitHub Copilot can help configure testing using the /setupTests command.

Steps:

  1. Open Copilot Chat

  2. Enter:

/setupTests
  3. Follow Copilot’s instructions


It will recommend:

  • Testing framework

  • Required packages

  • VS Code extensions


Methods to Generate Unit Tests

  • Chat View: Generate tests for a project, class, or method

  • Inline Chat: Generate tests for selected code

  • Smart Actions: Use the “Generate Tests” action

  • Code completion: Suggest additional test cases


Fixing Failing Tests


GitHub Copilot integrates with VS Code Test Explorer.

Ways to fix failing tests:


Method 1 – Fix button

  1. Open Test Explorer

  2. Hover over failing test

  3. Click Fix Test Failure (sparkle icon)


Method 2 – Slash command

Use Copilot Chat:

/fixTestFailure

Copilot suggests solutions.
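
A frequent category of suggested fix is a stale expected value after the implementation changed. The sketch below is hypothetical (Python unittest, with an invented `apply_discount` function); the comment marks the line such a fix would touch:

```python
import unittest

# Hypothetical function whose rounding behavior changed.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        # The failing version asserted 90.01; the suggested fix
        # updates the expectation to match the implementation.
        self.assertEqual(apply_discount(100.0, 10), 90.0)
```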


Agent Mode Feature

If using Agent mode:

  • Copilot monitors test output

  • Automatically fixes failing tests

  • Reruns tests


Maintaining Test Consistency


Copilot can be customized to follow organization testing standards.

Examples of customization:

  • Testing framework: xUnit, NUnit

  • Naming conventions: Test naming format

  • Code structure: Arrange-Act-Assert pattern

  • Testing methodology: Unit test style
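
As an illustration of the Arrange-Act-Assert pattern and a descriptive naming convention, here is a minimal Python unittest sketch; the `ShoppingCart` class is invented for the example:

```python
import unittest

# Hypothetical class under test.
class ShoppingCart:
    def __init__(self):
        self._line_totals = []

    def add(self, price, quantity=1):
        self._line_totals.append(price * quantity)

    def total(self):
        return sum(self._line_totals)

class ShoppingCartTests(unittest.TestCase):
    # Naming convention: method_scenario_expected_result
    def test_total_with_multiple_items_returns_sum(self):
        # Arrange
        cart = ShoppingCart()
        cart.add(2.50, quantity=2)
        cart.add(1.00)
        # Act
        result = cart.total()
        # Assert
        self.assertEqual(result, 6.00)
```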


Visual Studio Code Support for Testing


To run C# unit tests in VS Code you need:

  • .NET 8 SDK: Runtime environment

  • C# Dev Kit extension: Testing tools

  • Test framework package: Testing library


C# Dev Kit Testing Features

  • Test Explorer: View and organize tests

  • Run / Debug tests: Execute tests

  • Test results viewer: See pass/fail status

  • Testing commands: Run tests from the Command Palette

  • Testing settings: Configure testing options


Supported C# Test Frameworks

  • xUnit: Modern, lightweight framework

  • NUnit: Popular open-source testing framework

  • MSTest: Microsoft’s testing framework


Creating a Test Project


Use VS Code Command Palette.


Open it using:


  • Windows / Linux: Ctrl + Shift + P

  • Mac: Cmd + Shift + P

Then choose:

.NET: New Project

Select a test framework:

  • xUnit

  • NUnit

  • MSTest


Linking the Test Project to the Main Project


Use the terminal command:

dotnet add [test project] reference [project to test]

This allows tests to access application code.


Example: xUnit Package Setup

xUnit test projects include these packages:

  • Microsoft.NET.Test.Sdk: Testing infrastructure

  • xUnit: Test framework

  • xunit.runner.visualstudio: Test runner

  • coverlet.collector: Code coverage


Unit Testing Workflow with Copilot

The process typically has 3 stages.

  • Create test project: Visual Studio Code

  • Generate tests: GitHub Copilot Chat

  • Run and manage tests: C# Dev Kit


Generating Tests with Copilot


Generate test for visible method

Prompt example:

Write a unit test for the method in #editor

Generate test for selected code

Prompt example:

#selection write a unit test for the selected code

Running Unit Tests


You can run tests using:


Editor shortcut

Click the green play button beside test methods.


Test Explorer

Open via:

Beaker icon in VS Code sidebar

Features:

  • Run tests

  • Debug tests

  • View results


Command Palette

Search for commands like:

Test: Run All Tests

Viewing Test Results

After running tests:

  • Pass/Fail results appear in:

    • Test Explorer

    • Editor decorations

Stack traces allow navigation to error locations.


Key Benefits of Using Copilot for Testing

  • Faster test generation: Less repetitive work

  • Better edge-case coverage: Stronger tests

  • Automated suggestions: Increased productivity

  • Consistent test style: Cleaner codebase


Key Takeaway

GitHub Copilot helps accelerate unit testing, but developers must still:

  • Review tests

  • Ensure coverage

  • Validate behavior

Create Unit Tests Using the Generate Tests Smart Action


Overview


The Generate Tests smart action is a GitHub Copilot feature that:

  • Automatically creates unit tests

  • Uses your code’s:

    • Structure

    • Logic

    • Behavior

It reduces manual effort and speeds up testing.


What the Feature Does

  • Automatic test generation: Creates test cases from code

  • Covers functions and methods: Works on entire files or selections

  • Adds assertions: Validates expected outcomes

  • Saves time: Eliminates repetitive work


Two Ways to Use It

  • Entire file: Test all functions at once

  • Code selection: Test a specific function/method


1. Generate Tests for an Entire File

Steps

  1. Open the file

  2. Right-click in the editor

  3. Select Generate Code → Generate Tests

  4. Copilot generates tests

  5. Review the tests

  6. Accept or discard them

  7. Save the test file

  8. Run the tests

  9. Refine if needed


What Copilot Generates

  • Test cases for:

    • All functions

    • All methods

  • Includes:

    • Assertions

    • Multiple scenarios


Where Tests Are Saved

Usually:

  • New test file

  • Or tests/ directory

(depending on project setup)


2. Generate Tests for Selected Code

Steps

  1. Open the file

  2. Select a code block

  3. Right-click the selection

  4. Select Generate Code → Generate Tests

  5. Review the generated tests

  6. Save the file

  7. Run the tests

  8. Refine if needed


When to Use This

Best for:

  • Single function testing

  • Focused debugging

  • Incremental test creation


What Happens Behind the Scenes


Copilot analyzes:

  • Function inputs

  • Expected outputs

  • Code logic

Then generates:

  • Test cases: Validate behavior

  • Assertions: Check correctness

  • Edge cases: Improve coverage
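
Putting those three elements together, tests generated for a hypothetical `normalize_username` helper might look like this (Python unittest sketch):

```python
import unittest

# Hypothetical helper Copilot would analyze.
def normalize_username(name):
    return name.strip().lower()

class NormalizeUsernameTests(unittest.TestCase):
    def test_mixed_case_is_lowercased(self):
        # Test case with an assertion validating behavior.
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_empty_string_edge_case(self):
        # Edge case improving coverage.
        self.assertEqual(normalize_username(""), "")
```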


Important Best Practices

Always Review Generated Tests

  • Accuracy: Copilot may miss scenarios

  • Coverage: Not all edge cases are included

  • Naming: Improve readability


Refine Tests

You should:

  • Rename test cases

  • Add edge cases

  • Adjust assertions
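
Applied to a hypothetical `clamp` function, those three refinements might look like this (Python unittest sketch; the comments mark what was changed):

```python
import unittest

# Hypothetical function under test.
def clamp(value, low, high):
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    # Renamed from a generated generic name such as "test_1".
    def test_value_above_range_returns_high(self):
        self.assertEqual(clamp(15, 0, 10), 10)

    # Added edge case: value exactly on the upper boundary.
    def test_value_equal_to_high_is_unchanged(self):
        self.assertEqual(clamp(10, 0, 10), 10)

    # Adjusted assertion: check the clamped value itself.
    def test_value_below_range_returns_low(self):
        self.assertEqual(clamp(-1, 0, 10), 0)
```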

Run Tests After Generation

  • Ensure all tests pass

  • Validate real behavior


Key Benefits

  • Speed: Faster test creation

  • Coverage: More scenarios tested

  • Automation: Less manual work

  • Consistency: Standardized tests


Limitations

  • Not complete: May miss edge cases

  • Needs review: Cannot replace human validation

  • Depends on context: Works better with clear code


Final Takeaway

The Generate Tests smart action is best used as:

  • A starting point, not a final solution

  • A way to accelerate testing, not replace it

Create Unit Tests Using Inline Chat


Overview


Inline Chat lets you generate unit tests directly inside the editor without switching to the Chat panel.

It gives you:

  • More control than Generate Tests

  • Faster workflow (no context switching)


Key Idea

  • Inline Chat: Works inside your code

  • Prompt-driven: Full control over the output

  • Selection-based: Targets specific code

  • Flexible: Customize tests easily


When to Use Inline Chat

  • Testing one method: More precise than full-file generation

  • Custom test logic: You control the prompt

  • Adding edge cases: Request them explicitly

  • Refining tests: Iterate quickly


Steps to Create Unit Tests

Step-by-Step Workflow

  1. Open the file

  2. Select the code to test

  3. Open Inline Chat

  4. Enter a prompt

  5. Review the generated tests

  6. Accept or discard them

  7. Save the file

  8. Build the project

  9. Run the tests

  10. Refine if needed

How to Open Inline Chat

  • Keyboard (Windows/Linux): Ctrl + I

  • Keyboard (Mac): Cmd + I

  • Menu: Editor → Inline Chat


Example Prompt

/tests Generate unit tests for this method. Validate both success and failure, and include edge cases.

What Copilot Generates

  • Test cases: Based on function behavior

  • Assertions: Validate expected results

  • Success scenarios: Normal use cases

  • Failure scenarios: Error handling

  • Edge cases: Boundary conditions
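
For a hypothetical `parse_age` method, a response to a prompt like the one above might cover all three scenario types (Python unittest sketch):

```python
import unittest

# Hypothetical method targeted by the prompt.
def parse_age(text):
    age = int(text)
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age

class ParseAgeTests(unittest.TestCase):
    def test_valid_age_success(self):
        # Success scenario: normal use case.
        self.assertEqual(parse_age("42"), 42)

    def test_non_numeric_input_failure(self):
        # Failure scenario: error handling.
        with self.assertRaises(ValueError):
            parse_age("abc")

    def test_upper_boundary_edge_case(self):
        # Edge case: boundary condition.
        self.assertEqual(parse_age("150"), 150)
```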


Where Tests Are Added

  • Existing test file, or

  • New test file (if none exists)

Typically stored in:

  • tests/ directory


Build and Run Process


Build Project

  • Ensures:

    • Test file is included

    • No compilation errors

Run Tests

  • Verify:

    • Logic correctness

    • Expected behavior

Fix Issues

If needed:

  • Update tests

  • Fix build errors

  • Re-run tests


Inline Chat vs Generate Tests


Comparison Table

  • Control: Inline Chat is high (custom prompts); Generate Tests is low (automatic)

  • Flexibility: Inline Chat is very flexible; Generate Tests is limited

  • Speed: Inline Chat is slightly slower; Generate Tests is faster

  • Precision: Inline Chat is high; Generate Tests is medium

  • Best for: Inline Chat suits specific cases; Generate Tests suits full-file testing


Best Practices


1. Use Clear Prompts

Good prompt = better tests

Example improvements:

  • Specify edge cases

  • Ask for failure scenarios

  • Define test framework

2. Always Review Output

  • Accuracy: May miss cases

  • Quality: Improve readability

  • Coverage: Add missing tests

3. Iterate with Inline Chat

You can refine tests by asking:

  • “Add more edge cases”

  • “Improve assertions”

  • “Use better naming”


Key Benefits

  • No context switching: Faster workflow

  • Customizable tests: Better quality

  • Targeted generation: Less noise

  • Iterative refinement: Continuous improvement

Limitations

  • Not complete: May miss scenarios

  • Requires prompts: Needs good instructions

  • Needs validation: Must review output


Final Takeaway

Inline Chat is best when you want:

  • Precision

  • Control

  • Custom test generation

Simple Rule

  • Use Generate Tests → for speed

  • Use Inline Chat → for control

Create Unit Tests Using Chat View Modes


Overview

GitHub Copilot Chat provides 3 agents for creating unit tests:

  • Ask

  • Plan

  • Agent

Each is designed for a different level of complexity and workflow.


Chat View Agents Summary

  • Ask: Generates answers and code; best for quick test generation

  • Plan: Creates structured steps; best for complex test planning

  • Agent: Executes tasks end-to-end; best for a full automation workflow


Key Concept

Choosing the right agent depends on:

  • Task complexity

  • Level of control needed

  • Amount of automation desired


1. Ask Agent


What It Does

  • Analyzes code

  • Generates unit tests

  • Works with provided context


When to Use

  • Testing a file: Fast generation

  • Multiple functions: Handles a broader scope

  • Simple workflows: Minimal setup needed

Steps to Use Ask Agent

  1. Open the file

  2. Open the Chat view

  3. Select the Ask agent

  4. Add context (optional)

  5. Enter a prompt

  6. Review the results

  7. Apply to the editor

  8. Save and run the tests

Adding Context

  • Drag & drop files: Add files to the chat

  • Add Context button: Include resources

  • @workspace: Use full project context

Example Prompt

@workspace /explain Create unit tests for this file using Python unittest.
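
A response to that prompt might resemble the sketch below; the `slugify` and `word_count` functions stand in for the target file's contents and are hypothetical:

```python
import unittest

# Stand-ins for the functions in the file under test.
def slugify(title):
    return "-".join(title.lower().split())

def word_count(text):
    return len(text.split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

class WordCountTests(unittest.TestCase):
    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("one  two three"), 3)
```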

Key Benefit

  • Fast and simple

  • Good for initial test generation


2. Plan Agent

What It Does

  • Creates step-by-step testing plan

  • Asks clarifying questions

  • Structures implementation before coding

When to Use

  • Large projects: Need a structured approach

  • Complex logic: Requires planning

  • Team workflows: Clear steps needed

Steps to Use Plan Agent

  1. Open the file

  2. Open the Chat view

  3. Select the Plan agent

  4. Enter a task description

  5. Answer clarifying questions

  6. Review the plan

  7. Refine the plan

  8. Approve implementation

Example Prompt

I need to create unit tests for this file using Python unittest. Create a structured plan.

What Plan Produces

  • High-level summary: Overview of the approach

  • Step-by-step plan: Implementation steps

  • Verification steps: How to test the results

  • Decisions: Documented assumptions

Key Benefit

Ensures:

  • Better structure

  • Fewer mistakes

  • Clear workflow


3. Agent (Automation Mode)

What It Does

  • Executes full workflow automatically

  • Can:

    • Create test project

    • Generate tests

    • Run tests

    • Fix issues

When to Use

  • Full automation: End-to-end workflow

  • Large systems: Needs deep context

  • Complex setups: Multiple steps required

Steps to Use Agent

  1. Open the file

  2. Open the Chat view

  3. Select the Agent

  4. Enter a task

  5. Confirm tool usage

  6. Monitor execution

  7. Review the results

  8. Accept or refine

Example Prompt

Create a unit test project, generate tests for this file using xUnit, and run the tests.

Tools Agent Can Use

  • Test Explorer: Run tests

  • Terminal: Execute commands

  • Copilot: Generate code

Important Note

Agent may use:

  • Multiple premium requests (PRUs)

  • Depends on:

    • Task complexity

    • Number of steps

    • Model used


Agent Capabilities

  • Auto context detection: Finds relevant files

  • Multi-step execution: Performs chained tasks

  • Tool integration: Uses the terminal and testing tools

  • Iteration: Can refine results


Comparison of All 3 Agents


Summary Table

  • Speed: Ask is fast; Plan is medium; Agent is slower

  • Control: Ask is medium; Plan and Agent are high

  • Automation: Ask is low; Plan is medium; Agent is very high

  • Complexity handling: Ask is medium; Plan is high; Agent is very high

  • Best use: Ask for quick tests; Plan for planning; Agent for the full workflow


Best Practices


1. Choose the Right Agent

  • Quick tests: Ask

  • Structured plan: Plan

  • Full automation: Agent

2. Always Review Output

  • Validate:

    • Test logic

    • Coverage

    • Assertions

3. Iterate

  • Improve results by:

    • Refining prompts

    • Adding constraints

    • Requesting changes


Final Takeaway


Think of the agents like this:

  • Ask → Quick helper

  • Plan → Architect

  • Agent → Executor


Simple Rule

  • Start with Ask

  • Use Plan for complexity

  • Use Agent for automation


 
 