Threat Models

This document provides platform-level threat modeling for ConnectSoft's core platforms and services using STRIDE methodology. It is written for security architects, platform engineers, and developers who need to understand threats and mitigations across the ConnectSoft ecosystem.

Threat modeling at ConnectSoft follows a systematic approach, analyzing threats across all STRIDE categories and including AI-specific threats. Each platform and SaaS product can reference these threat models as baselines, adding product-specific details as needed.

Important

Threat Modeling is Continuous: Threat models are living documents that should be updated as systems evolve, new threats emerge, and mitigations are implemented. This document provides baseline threat models that should be refined per product and reviewed regularly.

Methodology

STRIDE Categories

STRIDE is a threat modeling methodology that classifies threats into six categories:

Spoofing:

  • Impersonating another user, service, or system
  • Examples: Stolen credentials, token theft, OAuth misuse, service impersonation

Tampering:

  • Unauthorized modification of data or code
  • Examples: Token tampering, configuration tampering, data modification, code injection

Repudiation:

  • Denying that an action occurred
  • Examples: Lack of audit logs, missing authentication records, untraceable actions

Information Disclosure:

  • Unauthorized access to information
  • Examples: Token leaks, PII exposure, data breaches, log exposure

Denial of Service (DoS):

  • Preventing legitimate users from accessing services
  • Examples: Brute force attacks, resource exhaustion, API flooding, DDoS

Elevation of Privilege:

  • Gaining unauthorized access or privileges
  • Examples: Privilege escalation, mis-scoped roles, unauthorized access to admin functions

AI-Specific Threats

In addition to standard STRIDE threats, AI systems face unique threats:

Prompt Injection:

  • Malicious input designed to manipulate AI model behavior
  • Examples: User input containing injection attempts, malicious documents, adversarial prompts

Data Exfiltration Through Tools:

  • AI agents using tools to exfiltrate data across tenant boundaries
  • Examples: Cross-tenant data access via shared tools, unauthorized API calls, context leakage

Model Misuse:

  • Using AI models for unintended purposes
  • Examples: Jailbreaks, bypassing safety filters, generating harmful content

Jailbreaks:

  • Techniques to bypass AI model safety controls
  • Examples: Prompt engineering to bypass filters, adversarial inputs, model manipulation

See: Security Overview for security principles.

See: Patterns Cookbook for mitigation patterns.

Threat Model: Identity & Auth

Assets

User Identities:

  • User accounts, credentials, profiles
  • Multi-factor authentication (MFA) settings
  • External identity provider links

Tokens:

  • Access tokens (short-lived)
  • Refresh tokens (long-lived, rotated)
  • ID tokens (user identity claims)

Credentials:

  • Passwords (hashed, never stored in plain text)
  • MFA secrets (TOTP seeds, recovery codes)
  • API keys and client secrets

Data Flow

The following diagram illustrates the identity and authentication data flow with trust boundaries:

flowchart TD
    A[User/SaaS Frontend] -->|Auth Request| B[API Gateway]
    B -->|Validate & Route| C[Identity Platform]

    C -->|Validate Credentials| D{External IdP?}
    D -->|Yes| E[External IdP<br/>Azure AD/Google]
    D -->|No| F[Internal User Store]

    E -->|User Claims| C
    F -->|User Data| C

    C -->|Issue Tokens| B
    B -->|Return Tokens| A

    A -->|API Request with Token| G[Service API]
    G -->|Validate Token| C
    C -->|Token Valid| G
    G -->|Process Request| H[Business Logic]

    C -->|Audit Events| I[Audit Platform]
    G -->|Audit Events| I

    style C fill:#e1f5ff
    style E fill:#fff4e1
    style F fill:#fff4e1
    style I fill:#e8f5e9
Hold "Alt" / "Option" to enable pan & zoom

Trust Boundaries:

  • External Boundary - User/SaaS Frontend to API Gateway (public internet)
  • Internal Boundary - API Gateway to Identity Platform (internal network)
  • External IdP Boundary - Identity Platform to External IdP (external network)
  • Service Boundary - Service API to Identity Platform (internal network)

Threats by STRIDE Category

Spoofing

Threats:

  1. Stolen Tokens - Attacker steals access or refresh tokens
  2. OAuth Misuse - Attacker uses compromised OAuth client credentials
  3. Credential Theft - Attacker steals user passwords or MFA secrets
  4. Service Impersonation - Attacker impersonates Identity Platform or downstream services

Mitigations:

  • Short-lived access tokens (15-60 minutes)
  • Refresh token rotation on use
  • Token binding (bind tokens to client IP/user agent)
  • Strong password policies and MFA enforcement
  • Service-to-service authentication using managed identities
  • Certificate pinning and token signature validation
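The token-validation mitigations above can be sketched in a few lines. This is a minimal, self-contained illustration using stdlib HMAC signing; in production the Identity Platform would issue RS256/ECDSA-signed JWTs validated through a JWT library, and the field names here (`sub`, `exp`) are the standard JWT claims, not a ConnectSoft-specific contract.

```python
import base64
import hashlib
import hmac
import json
import time

def sign(payload: dict, secret: bytes) -> str:
    """Issue a signed token: base64(payload) + HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(secret, body, hashlib.sha256).digest()
    return body.decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify(token: str, secret: bytes) -> dict:
    """Reject the token unless the signature matches AND it has not expired.

    Signature check uses a constant-time comparison; expiry enforces the
    short-lived access token policy (15-60 minutes).
    """
    body_b64, sig_b64 = token.rsplit(".", 1)
    expected = hmac.new(secret, body_b64.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        raise ValueError("invalid signature")
    payload = json.loads(base64.urlsafe_b64decode(body_b64))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

The key point is that every request re-validates the signature and expiry; no claim in the token is trusted before both checks pass.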

Tampering

Threats:

  1. Token Tampering - Attacker modifies token claims or signatures
  2. Configuration Tampering - Attacker modifies identity provider configuration
  3. User Data Tampering - Attacker modifies user profile or permissions
  4. Audit Log Tampering - Attacker modifies or deletes audit logs

Mitigations:

  • Cryptographically signed tokens (JWT with RSA/ECDSA)
  • Token signature validation on every request
  • Configuration stored in secure, version-controlled config store
  • Role-based access control (RBAC) for configuration changes
  • Immutable audit logs with integrity checks
  • Regular configuration audits
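"Immutable audit logs with integrity checks" are commonly implemented as a hash chain: each entry embeds the hash of the previous one, so editing or deleting any record breaks verification of everything after it. A minimal in-memory sketch (the real Audit Platform store is not specified here):

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each entry commits to the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._prev = self.GENESIS

    def append(self, event: dict) -> None:
        record = {"event": event, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._prev = digest

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = self.GENESIS
        for rec in self.entries:
            body = {"event": rec["event"], "prev": rec["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True
```

In practice the chain head would also be periodically anchored to an external store so an attacker with write access cannot silently rebuild the whole chain.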

Repudiation

Threats:

  1. Lack of Login Audit - No record of authentication attempts
  2. Missing Token Usage Logs - No record of token validation or usage
  3. Untraceable Admin Actions - Admin actions not logged or attributed

Mitigations:

  • Comprehensive audit logging of all authentication events
  • Token validation and usage logged with correlation IDs
  • All admin actions logged with user identity and timestamp
  • Audit logs stored in immutable, tamper-evident store
  • Regular audit log reviews and compliance reporting

Information Disclosure

Threats:

  1. Token Leaks - Tokens exposed in logs, URLs, or error messages
  2. PII Exposure - User PII exposed in logs or API responses
  3. Credential Exposure - Passwords or secrets exposed in logs or config
  4. Configuration Exposure - Identity provider configuration exposed

Mitigations:

  • Tokens never logged or included in URLs
  • PII redaction in logs and error messages
  • Secrets stored in Key Vault, never in code or config
  • Configuration access restricted to authorized personnel
  • Data classification and handling policies
  • Regular security scanning for exposed secrets
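PII redaction in logs can be enforced centrally rather than per call site, e.g. with a logging filter that scrubs messages before any handler emits them. A sketch (the email regex is illustrative; a real deployment would cover tokens, phone numbers, and other classified fields):

```python
import logging
import re

# Illustrative pattern: matches email-shaped strings only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactionFilter(logging.Filter):
    """Scrub PII from every record before it reaches a handler."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = EMAIL.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just sanitized
```

Attaching the filter to the root logger means no individual service has to remember to redact, which matches the "safe logging practices" mitigation.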

Denial of Service

Threats:

  1. Brute Force Attacks - Repeated login attempts to guess passwords
  2. Token Validation Flooding - Excessive token validation requests
  3. Auth API Flooding - DDoS attacks on authentication endpoints
  4. Resource Exhaustion - Exhausting identity service resources

Mitigations:

  • Rate limiting on authentication endpoints
  • Account lockout after failed attempts
  • CAPTCHA or challenge-response for suspicious activity
  • Token validation caching and rate limiting
  • DDoS protection at network and application layers
  • Resource quotas and throttling

Elevation of Privilege

Threats:

  1. Privilege Escalation - User gains unauthorized privileges
  2. Mis-Scoped Roles - Tokens contain broader scopes than intended
  3. Admin Function Access - Unauthorized access to admin functions
  4. Cross-Tenant Access - User accesses resources from another tenant

Mitigations:

  • Principle of least privilege in role definitions
  • Scope-based authorization (tokens scoped to specific resources)
  • Tenant-scoped access control (all resources tagged with tenant ID)
  • Regular access reviews and role audits
  • Break-glass procedures for elevated access
  • Multi-factor authentication for admin functions
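Scope-based and tenant-scoped authorization combine into a single fail-closed check: the token must carry the exact scope, and the resource's tenant must match the caller's. A sketch (claim names `scope` and `tenant_id` are illustrative, not a ConnectSoft token contract):

```python
def authorize(token_claims: dict, required_scope: str,
              resource_tenant: str) -> bool:
    """Fail closed: deny on missing scope OR tenant mismatch.

    A token with broader scopes than needed still cannot cross the
    tenant boundary, and vice versa.
    """
    scopes = set(token_claims.get("scope", "").split())
    if required_scope not in scopes:
        return False
    return token_claims.get("tenant_id") == resource_tenant
```

Note that both conditions are checked on every request; authorization is never cached across tenant contexts.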

See: Identity Platform for platform details.

See: Patterns Cookbook for implementation patterns.

Threat Model: Billing & Payments

Assets

Payment Data:

  • Payment service provider (PSP) tokens
  • Payment method references (never full card numbers)
  • Billing addresses and payment metadata

Subscription State:

  • Subscription status and lifecycle
  • Plan and edition assignments
  • Usage limits and quotas

Billing Events:

  • Metering events (usage tracking)
  • Invoice generation and delivery
  • Payment processing events

Financial Records:

  • Invoices and billing history
  • Payment transactions
  • Refund and credit records

Threats

Fraudulent Usage:

  • Attacker creates fake accounts to consume free tier resources
  • Attacker manipulates usage metering to avoid charges
  • Attacker uses stolen payment methods

Payment Tampering:

  • Attacker modifies invoice amounts
  • Attacker manipulates payment processing
  • Attacker modifies subscription state to avoid charges

Metering Tampering:

  • Attacker modifies usage metering events
  • Attacker manipulates usage calculations
  • Attacker bypasses usage limits

Invoice Fraud:

  • Attacker generates fraudulent invoices
  • Attacker modifies invoice data
  • Attacker manipulates billing cycles

Mitigations

Payment Service Provider (PSP) Usage:

  • Never store full payment card data
  • Use PCI DSS-compliant PSP (Stripe, PayPal, etc.)
  • Payment data only in PSP tokenized form
  • PSP handles all payment card processing

Signed Metering Events:

  • All metering events cryptographically signed
  • Event integrity validated before billing
  • Immutable event log for audit trail
  • Double-entry style ledger for financial records
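Signing metering events can be as simple as an HMAC over a canonical JSON encoding, verified before any billing calculation. A sketch under those assumptions (canonical encoding matters: sorted keys and fixed separators, so signer and verifier hash identical bytes):

```python
import hashlib
import hmac
import json

def _canonical(event: dict) -> bytes:
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def sign_metering_event(event: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature computed over the canonical event."""
    sig = hmac.new(key, _canonical(event), hashlib.sha256).hexdigest()
    return {**event, "signature": sig}

def verify_metering_event(signed: dict, key: bytes) -> bool:
    """Recompute the HMAC over everything except the signature field."""
    event = {k: v for k, v in signed.items() if k != "signature"}
    expected = hmac.new(key, _canonical(event), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed.get("signature", ""))
```

Any modification of the usage quantity after signing makes verification fail, which is exactly the metering-tampering threat this mitigates.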

Restricted Roles:

  • Billing and payment operations require elevated roles
  • Multi-factor authentication for payment operations
  • Separation of duties (billing admin vs. payment processor)
  • Regular access reviews for billing roles

Audit and Compliance:

  • All billing events logged to Audit Platform
  • Financial records stored in tamper-evident store
  • Regular financial audits and reconciliation
  • Compliance with financial regulations

Rate Limiting and Fraud Detection:

  • Rate limiting on subscription creation
  • Fraud detection for suspicious payment patterns
  • Account verification for high-value subscriptions
  • Usage anomaly detection

See: Billing & Subscription Platform for platform details.

Threat Model: Documents & Forms

Assets

Uploaded Documents:

  • User-uploaded files (PDFs, images, documents)
  • Document metadata and classifications
  • Document versions and history

Generated Documents:

  • System-generated documents (invoices, reports, forms)
  • Document templates and content
  • Document rendering and formatting

Form Submissions:

  • Form data and responses
  • File attachments to forms
  • Form metadata and workflow state

Document Storage:

  • Blob storage (encrypted at rest)
  • Database records (document metadata)
  • Backup and archive storage

Threats

Unauthorized Access:

  • Attacker accesses documents from another tenant
  • Attacker bypasses access control to view sensitive documents
  • Attacker accesses documents through API vulnerabilities

Tampering:

  • Attacker modifies stored documents
  • Attacker manipulates document metadata
  • Attacker tampers with form submissions

Injection via Form Content:

  • Malicious content in form submissions
  • File upload attacks (malicious files, oversized files)
  • Content injection in document processing

Data Leakage:

  • PII/PHI exposure in logs
  • Document content in error messages
  • Backup exposure or improper disposal
  • Cross-tenant data leakage

Mitigations

Data Classification:

  • Documents classified by sensitivity (Public, Internal, Confidential, Sensitive, Highly Sensitive)
  • Classification determines encryption and access control requirements
  • Automatic classification based on content analysis

Encryption:

  • Encryption at rest for all document storage
  • Encryption in transit for all document transfers
  • Key management via Key Vault
  • Per-tenant encryption keys where required

Row-Level Security (RLS):

  • Tenant-scoped access control at database level
  • Document access requires valid tenant context
  • RLS policies enforce tenant isolation

Redaction:

  • PII/PHI redaction in logs and error messages
  • Document content redaction for lower-privilege access
  • Safe logging practices (no document content in logs)

Input Validation:

  • File type validation and restrictions
  • File size limits and quotas
  • Content scanning for malicious files
  • Form input sanitization and validation

Integrity Checks:

  • Document hashes stored for integrity verification
  • Tamper detection through hash validation
  • Version control with integrity checks
  • Audit trail for all document access and modifications
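The hash-based integrity check above reduces to: compute a digest at upload, store it alongside the metadata, and recompute on every read. A minimal sketch:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Digest stored in document metadata at upload time."""
    return hashlib.sha256(data).hexdigest()

def verify_document(data: bytes, stored_hash: str) -> bool:
    """Recompute on read; a mismatch means the stored blob was tampered with
    (or corrupted) after upload and must not be served."""
    return hashlib.sha256(data).hexdigest() == stored_hash
```

For versioned documents, each version gets its own hash, so tamper detection works per version rather than only on the latest copy.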

See: Documents Platform for platform details.

See: Patterns Cookbook for implementation patterns.

Threat Model: Integrations (Webhooks & Connectors)

Assets

Webhook Endpoints:

  • Inbound webhook URLs and endpoints
  • Webhook payloads and event data
  • Webhook delivery status and retry logic

Connector Credentials:

  • External system API keys and tokens
  • OAuth tokens for external services
  • Per-tenant connector configurations

Inbound/Outbound Data:

  • Data received from external systems
  • Data sent to external systems
  • Data transformation and mapping

Threats

Spoofed Webhooks:

  • Attacker sends fake webhook events
  • Attacker replays old webhook events
  • Attacker manipulates webhook payloads

Replay Attacks:

  • Attacker replays valid webhook events
  • Attacker replays connector API calls
  • Time-based replay attacks

Connector Abuse:

  • Attacker uses connector to pivot into external systems
  • Attacker abuses connector permissions
  • Attacker exfiltrates data through connectors

Exfiltration Through Connectors:

  • Attacker uses connector to send data to unauthorized destinations
  • Attacker manipulates connector configuration
  • Cross-tenant data exfiltration via shared connectors

Mitigations

Webhook Signatures:

  • All webhooks signed with shared secrets or HMAC
  • Signature validation on every webhook request
  • Per-tenant webhook secrets stored in Key Vault
  • Signature algorithm and secret rotation
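Webhook signature validation is an HMAC over the raw request body, compared in constant time. A sketch (the signature encoding — hex here — and header transport vary by provider):

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_hex: str,
                             secret: bytes) -> bool:
    """Validate the sender's HMAC-SHA256 over the raw, unparsed payload.

    The raw bytes must be used: re-serializing parsed JSON can change
    whitespace or key order and break the comparison.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

`hmac.compare_digest` avoids the timing side channel of an ordinary string comparison, and the per-tenant secret comes from Key Vault per the mitigation above.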

Replay Protection:

  • Nonce or timestamp validation for webhook events
  • Event ID tracking to prevent duplicate processing
  • Time-window validation (reject events outside time window)
  • Idempotency keys for connector operations

IP Allowlists:

  • Webhook endpoints restricted to known source IPs
  • Connector outbound calls from known IP ranges
  • Network-level filtering and validation

Per-Tenant Key Vault Secrets:

  • Each tenant's connector credentials stored separately
  • Credentials scoped to tenant and specific external system
  • Credential rotation without affecting other tenants
  • Audit logging of credential access and usage

Strict Scopes:

  • Connector OAuth tokens scoped to minimum necessary permissions
  • Per-connector permission boundaries
  • Regular scope reviews and audits
  • Principle of least privilege for connector permissions

Throttling and Rate Limiting:

  • Rate limiting on webhook endpoints
  • Throttling of connector API calls
  • Quotas per tenant and per connector
  • Anomaly detection for unusual connector activity

See: Integration Platform for platform details.

See: Webhooks and Events for integration patterns.

Threat Model: AI Agents & Tools

Assets

Prompts:

  • User prompts and instructions
  • System prompts and guardrails
  • Prompt templates and configurations

Context:

  • Tenant data and context
  • Knowledge base content
  • Vector embeddings and retrieval results

Agent Configurations:

  • Agent definitions and capabilities
  • Tool access permissions
  • Agent execution policies

Tool Invocations:

  • API calls made by agents
  • Database queries executed by agents
  • External system interactions

Threats

Prompt Injection:

  • Malicious user input designed to manipulate agent behavior
  • Injection attempts in uploaded documents
  • Adversarial prompts to bypass safety controls

Data Exfiltration Across Tenants:

  • Agent accessing data from wrong tenant
  • Shared tools or vectors leaking tenant data
  • Context contamination between tenants

High-Privilege Tool Misuse:

  • Agent using high-privilege tools without proper authorization
  • Agent making unauthorized API calls
  • Agent performing destructive or sensitive operations

Model Misuse and Jailbreaks:

  • Techniques to bypass AI model safety filters
  • Generating harmful or inappropriate content
  • Manipulating model behavior for unintended purposes

Mitigations

Tool-Level Authorization:

  • Each tool requires explicit authorization per agent
  • Tenant-scoped tool access (tools can only access tenant's data)
  • Role-based tool permissions (different tools for different agent roles)
  • Tool invocation logging and audit
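Tool-level authorization means the orchestrator, not the model, decides whether a tool call runs and under which tenant. A minimal sketch, assuming a hypothetical per-agent grants table mapping tool names to a tenant scope and handler:

```python
def invoke_tool(agent_grants: dict, tool_name: str,
                tenant_id: str, args: dict):
    """Run a tool only if the agent holds an explicit, tenant-matching grant.

    The tenant scope is forced from the grant, so arguments produced by
    the model (possibly under prompt injection) cannot widen access.
    """
    grant = agent_grants.get(tool_name)
    if grant is None:
        raise PermissionError(f"agent has no grant for tool {tool_name!r}")
    if grant["tenant_id"] != tenant_id:
        raise PermissionError("grant is scoped to a different tenant")
    # Overwrite any tenant_id the model supplied with the granted one.
    safe_args = {**args, "tenant_id": grant["tenant_id"]}
    return grant["handler"](**safe_args)
```

The overwrite on the last step is the critical detail: even if an injected prompt convinces the model to request another tenant's data, the executed call stays inside the granted tenant.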

Hard Separation of Tenant Contexts:

  • Tenant context strictly enforced in retrieval and tools
  • No cross-tenant data access in RAG or vector stores
  • Tenant ID validated before any data access
  • Explicit "no cross-tenant context" rules in agent execution

Guardrails and Content Filters:

  • Input validation and sanitization for prompts
  • Content filters for generated responses
  • Instruction hierarchy (system instructions override user input)
  • RAG scoping by tenant (retrieval only from tenant's knowledge base)

Redaction & Classification:

  • Sensitive data redacted from prompts and logs
  • Data classification determines what can be included in prompts
  • PII/PHI redaction before sending to AI models
  • Safe logging practices (no sensitive data in agent logs)

Human-in-the-Loop:

  • Critical actions require human approval (payments, destructive changes)
  • High-privilege tool invocations require confirmation
  • Sensitive operations logged and reviewed
  • Escalation procedures for suspicious agent behavior

Prompt Security:

  • Prompt injection detection and prevention
  • Input sanitization and validation
  • System prompt protection (system prompts cannot be overridden by user input)
  • Adversarial prompt detection and blocking

The following diagram illustrates AI agent execution with trust boundaries:

flowchart TD
    A[User Input] -->|Prompt| B[AI Gateway]
    B -->|Validate & Classify| C[Content Filter]
    C -->|Sanitized Prompt| D[Agent Orchestrator]

    D -->|Tenant Context| E[Tenant Isolation Layer]
    E -->|Scoped Context| F[AI Model]

    F -->|Response| G[Tool Authorization]
    G -->|Authorized Tools| H{Tool Type?}

    H -->|High-Privilege| I[Human Approval]
    H -->|Standard| J[Tool Execution]

    I -->|Approved| J
    J -->|Tool Result| K[Response Assembly]

    K -->|Final Response| L[Content Filter]
    L -->|Validated| B
    B -->|Response| A

    D -->|Audit Events| M[Audit Platform]
    G -->|Audit Events| M
    J -->|Audit Events| M

    E -->|Tenant-Scoped RAG| N[Knowledge Base]
    N -->|Tenant Data Only| E

    style E fill:#e1f5ff
    style G fill:#fff4e1
    style I fill:#ffebee
    style M fill:#e8f5e9
Hold "Alt" / "Option" to enable pan & zoom

Trust Boundaries:

  • User Boundary - User Input to AI Gateway (public interface)
  • Content Filter Boundary - Input/output validation and filtering
  • Tenant Isolation Boundary - Strict tenant context enforcement
  • Tool Authorization Boundary - Tool access control and approval
  • Human Approval Boundary - Critical operations requiring human review

See: Factory Overview for Factory architecture.

See: Patterns Cookbook for AI security patterns.

Summary

Using These Threat Models

These threat models provide baseline threat analysis for ConnectSoft's core platforms. Each platform and SaaS product should:

  1. Reference Baseline Models - Use these threat models as starting points
  2. Add Product-Specific Threats - Identify threats specific to the product
  3. Refine Mitigations - Adapt mitigations to product architecture
  4. Regular Reviews - Update threat models as systems evolve

Platform-Specific Extensions

Identity Platform:

  • Reference Identity & Auth threat model
  • Add product-specific threats (external IdP federation, multi-tenant user management)

Billing Platform:

  • Reference Billing & Payments threat model
  • Add product-specific threats (usage metering, subscription lifecycle)

Documents Platform:

  • Reference Documents & Forms threat model
  • Add product-specific threats (document processing, version control)

Integration Platform:

  • Reference Integrations threat model
  • Add product-specific threats (connector marketplace, integration templates)

AI Factory:

  • Reference AI Agents & Tools threat model
  • Add product-specific threats (code generation security, agent collaboration)

Continuous Improvement

Threat models should be:

  • Reviewed Regularly - Updated as systems evolve and new threats emerge
  • Validated - Tested through security assessments and penetration testing
  • Documented - Changes tracked and communicated to stakeholders
  • Integrated - Part of design reviews and security architecture decisions

See: Security Overview for security principles.

See: Patterns Cookbook for mitigation implementation.

See: Compliance Blueprints for compliance considerations.
