Testing Strategy

This document defines the testing strategy and practices for ConnectSoft engineering teams. It is written for engineers and QA teams who need to understand how we test software, including how AI agents assist with testing.

ConnectSoft uses a comprehensive testing strategy covering unit, integration, contract, end-to-end, and performance testing. QA agents generate tests alongside code, ensuring high test coverage and quality from the start.

Important

All code must have tests. Minimum 80% code coverage required. Tests must pass before merging PRs. QA agents generate tests automatically, but humans review and approve them.

Test Types

Unit Tests

Purpose: Test individual components in isolation.

Scope:

  • Domain entities and value objects
  • Domain services
  • Application use cases
  • Utility functions

Characteristics:

  • Fast execution (< 1 ms per test)
  • No external dependencies (use mocks/stubs)
  • Test business logic, not infrastructure

Example:

[Fact]
public void Invoice_ProcessPayment_ShouldUpdateStatus()
{
    // Arrange
    var invoice = new Invoice(Money.FromAmount(100), InvoiceStatus.Pending);

    // Act
    invoice.ProcessPayment(Money.FromAmount(100));

    // Assert
    Assert.Equal(InvoiceStatus.Paid, invoice.Status);
}
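
For context, this is roughly the shape of the Invoice entity the test above exercises. The sketch is illustrative, assuming a Money value object and a simple status transition; the canonical implementation lives in the service's Domain layer:

public enum InvoiceStatus { Pending, Paid }

public record Money(decimal Value)
{
    public static Money FromAmount(decimal value) => new(value);
}

public class Invoice
{
    public Money Amount { get; }
    public InvoiceStatus Status { get; private set; }

    public Invoice(Money amount, InvoiceStatus status)
    {
        Amount = amount;
        Status = status;
    }

    // Illustrative rules: payments must be positive, and paying the
    // full amount transitions the invoice to Paid.
    public void ProcessPayment(Money payment)
    {
        if (payment.Value <= 0)
            throw new ArgumentException("Payment amount must be positive.", nameof(payment));

        if (payment.Value >= Amount.Value)
            Status = InvoiceStatus.Paid;
    }
}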

Integration Tests

Purpose: Test components working together.

Scope:

  • API endpoints
  • Database repositories
  • External service integrations
  • Event publishing/subscription

Characteristics:

  • Use real dependencies (database, messaging)
  • Test infrastructure integration
  • Slower than unit tests (seconds per test)

Example:

[Fact]
public async Task InvoiceController_CreateInvoice_ShouldReturnCreatedInvoice()
{
    // Arrange
    var client = _factory.CreateClient();
    var request = new CreateInvoiceRequest { Amount = 100 };

    // Act
    var response = await client.PostAsync("/invoices", JsonContent.Create(request));

    // Assert
    response.EnsureSuccessStatusCode();
    var invoice = await response.Content.ReadFromJsonAsync<InvoiceResponse>();
    Assert.NotNull(invoice);
    Assert.Equal(100, invoice.Amount);
}
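
The _factory field in the example comes from ASP.NET Core's in-process test host (Microsoft.AspNetCore.Mvc.Testing). A minimal sketch of the fixture wiring, assuming the service exposes a public Program entry point:

using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class InvoiceControllerTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    // xUnit shares one factory per test class; it hosts the API in memory,
    // so CreateClient() returns an HttpClient bound to the in-process server.
    public InvoiceControllerTests(WebApplicationFactory<Program> factory)
    {
        _factory = factory;
    }
}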

Contract Tests

Purpose: Verify API contracts between services.

Scope:

  • API request/response schemas
  • Event schemas
  • Integration contracts

Characteristics:

  • Test contracts, not implementations
  • Prevent breaking changes
  • Use contract testing tools (Pact, etc.); a tool-agnostic sketch follows below
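
Dedicated tools such as Pact manage contracts between consumer and provider. As a tool-agnostic illustration, the sketch below pins the response shape of the invoices endpoint; the field names are assumptions carried over from the earlier examples:

[Fact]
public async Task GetInvoice_Response_ShouldMatchPublishedContract()
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync("/invoices/1");

    // Assert: parse the raw JSON so the test fails if a required
    // contract field is renamed or removed, regardless of server internals
    response.EnsureSuccessStatusCode();
    using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    Assert.True(json.RootElement.TryGetProperty("id", out _));
    Assert.True(json.RootElement.TryGetProperty("amount", out _));
    Assert.True(json.RootElement.TryGetProperty("status", out _));
}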

End-to-End Tests

Purpose: Test complete user workflows.

Scope:

  • Complete user journeys
  • Multi-service interactions
  • Business processes

Characteristics:

  • Test real systems (or close to real)
  • Slow execution (minutes per test)
  • Use sparingly (critical paths only)

Example:

[Fact]
public async Task CreateInvoice_EndToEnd_ShouldCreateAndNotify()
{
    // Arrange
    var client = _factory.CreateClient();

    // Act (CreateInvoice and WaitForNotification are test helpers, not shown)
    var invoice = await CreateInvoice(client);
    var notification = await WaitForNotification(invoice.Id);

    // Assert
    Assert.NotNull(notification);
    Assert.Equal("Invoice created", notification.Message);
}

Performance Tests

Purpose: Verify performance characteristics.

Scope:

  • Response time
  • Throughput
  • Resource usage

Characteristics:

  • Run in the performance test environment
  • Measure against SLOs (see the sketch below)
  • Not part of regular CI/CD (run separately)
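
Full load tests run with dedicated tooling in the performance environment. As a minimal illustration of asserting against an SLO, the sketch below measures a single request's latency; the 200 ms budget is an assumed example, not a ConnectSoft-wide SLO:

[Fact]
public async Task GetInvoice_ResponseTime_ShouldMeetSlo()
{
    // Arrange
    var client = _factory.CreateClient();
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();

    // Act
    var response = await client.GetAsync("/invoices/1");
    stopwatch.Stop();

    // Assert against the (assumed) 200 ms latency budget
    response.EnsureSuccessStatusCode();
    Assert.True(stopwatch.ElapsedMilliseconds < 200,
        $"Request took {stopwatch.ElapsedMilliseconds} ms, exceeding the 200 ms budget.");
}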

Where Tests Live in Generated Services

Test Structure

Location: tests/{ServiceName}.Tests/

Structure:

tests/
└── InvoiceService.Tests/
    ├── Unit/
    │   ├── Domain/
    │   │   └── InvoiceTests.cs
    │   └── Application/
    │       └── CreateInvoiceUseCaseTests.cs
    ├── Integration/
    │   ├── InvoiceControllerTests.cs
    │   └── InvoiceRepositoryTests.cs
    └── Acceptance/
        └── CreateInvoiceAcceptanceTests.cs

Test Projects:

  • One test project per service
  • Organized by test type (Unit, Integration, Acceptance) and tagged by category (see the sketch below)
  • Can test any layer (Domain, Application, Infrastructure, API)
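
The CI/CD pipeline below selects tests with --filter "Category=...", which relies on each test carrying a category tag. A minimal sketch using xUnit's Trait attribute, with category names matching the pipeline filters:

[Fact]
[Trait("Category", "Unit")]
public void Invoice_ProcessPayment_ShouldUpdateStatus()
{
    // ... picked up by the UnitTests job via --filter "Category=Unit"
}

[Fact]
[Trait("Category", "Integration")]
public void InvoiceRepository_GetById_ShouldReturnInvoice()
{
    // ... picked up by the IntegrationTests job via --filter "Category=Integration"
}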

Test Naming Conventions

Format: {ClassUnderTest}_{Method}_{ExpectedBehavior}

Examples:

  • Invoice_ProcessPayment_ShouldUpdateStatus
  • CreateInvoiceUseCase_Execute_ShouldCreateInvoice
  • InvoiceController_CreateInvoice_ShouldReturnCreatedInvoice

Automated Testing in CI/CD

Test Execution in Pipelines

Pipeline Stages:

  1. Build - Compile code and tests
  2. Unit Tests - Run unit tests (fast, required)
  3. Integration Tests - Run integration tests (slower, required)
  4. Code Coverage - Measure code coverage (must be ≥ 80%)
  5. Deploy - Deploy if all tests pass

Example Pipeline:

stages:
  - stage: Test
    jobs:
      - job: UnitTests
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: test
              projects: '**/*Tests.csproj'
              arguments: '--filter "Category=Unit"'

      - job: IntegrationTests
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: test
              projects: '**/*Tests.csproj'
              arguments: '--filter "Category=Integration"'

      - job: CodeCoverage
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: test
              arguments: '--collect:"XPlat Code Coverage"'

Quality Gates

Required Gates:

  • All tests pass - No failing tests allowed
  • Code coverage ≥ 80% - Minimum coverage threshold
  • No critical security issues - Static analysis passes
  • Performance within SLOs - Performance tests pass (if applicable)

Important

All PRs must pass all quality gates before merging. No exceptions. If tests fail or coverage is below threshold, fix before merging.

AI-Assisted Testing

How QA Agents Generate Tests

QA Agents:

  • Analyze code - Understand code structure and logic
  • Generate test cases - Create comprehensive test suites
  • Cover edge cases - Identify and test edge cases
  • Ensure coverage - Generate tests to meet coverage thresholds

Test Generation Process:

  1. Code Analysis - QA agent analyzes generated code
  2. Test Generation - Generates unit, integration, and acceptance tests
  3. Test Review - Human reviews generated tests
  4. Test Execution - Tests run in CI/CD pipeline
  5. Coverage Validation - Verify coverage meets threshold

How QA Agents Evaluate Coverage

Coverage Analysis:

  • Line coverage - Percentage of lines executed
  • Branch coverage - Percentage of branches executed
  • Method coverage - Percentage of methods executed

Coverage Goals:

  • Domain layer - 100% coverage (critical business logic)
  • Application layer - ≥ 90% coverage
  • Infrastructure layer - ≥ 80% coverage
  • API layer - ≥ 80% coverage

How QA Agents Identify Edge Cases

Edge Case Detection:

  • Boundary conditions - Test min/max values, null checks
  • Error paths - Test exception handling
  • Concurrency - Test race conditions (if applicable)
  • Integration failures - Test external service failures

Example:

// QA agent generates these edge case tests (bodies sketched here for
// illustration; the exception type assumes the Invoice rules shown earlier):
[Fact]
public void Invoice_ProcessPayment_WithZeroAmount_ShouldThrowException()
{
    var invoice = new Invoice(Money.FromAmount(100), InvoiceStatus.Pending);

    Assert.Throws<ArgumentException>(() => invoice.ProcessPayment(Money.FromAmount(0)));
}

[Fact]
public void Invoice_ProcessPayment_WithNegativeAmount_ShouldThrowException()
{
    var invoice = new Invoice(Money.FromAmount(100), InvoiceStatus.Pending);

    Assert.Throws<ArgumentException>(() => invoice.ProcessPayment(Money.FromAmount(-50)));
}

[Fact]
public async Task InvoiceRepository_GetById_WithNonExistentId_ShouldReturnNull()
{
    // _repository is assumed to be an integration-test fixture field
    var invoice = await _repository.GetByIdAsync(Guid.NewGuid());

    Assert.Null(invoice);
}