writing-tests
Write behavior-focused tests following Testing Trophy model with real dependencies, avoiding common anti-patterns like testing mocks and polluting production code. Use when writing new tests, reviewing test quality, or improving test coverage.
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install third774-dotfiles-writing-tests
Repository
Skill path: opencode/skills/writing-tests
Open repository
Best for
Primary workflow: Write Technical Docs.
Technical facets: Full Stack, Tech Writer, Testing.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: third774.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install writing-tests into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/third774/dotfiles before adding writing-tests to shared team environments
- Use writing-tests for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: writing-tests
description: Write behavior-focused tests following Testing Trophy model with real dependencies, avoiding common anti-patterns like testing mocks and polluting production code. Use when writing new tests, reviewing test quality, or improving test coverage.
---
# Writing Tests
**Core Philosophy:** Test user-observable behavior with real dependencies. Tests should survive refactoring when behavior is unchanged.
**Iron Laws:**
<IMPORTANT>
1. Test real behavior, not mock behavior
2. Never add test-only methods to production code
3. Never mock without understanding dependencies
</IMPORTANT>
## Testing Trophy Model
Write tests in this priority order:
1. **Integration Tests (PRIMARY)** - Multiple units with real dependencies
2. **E2E Tests (SECONDARY)** - Complete workflows across the stack
3. **Unit Tests (RARE)** - Pure functions only (no dependencies)
**Default to integration tests.** Only drop to unit tests for pure utility functions.
## Pre-Test Workflow
BEFORE writing any tests, copy this checklist and track your progress:
```
Test Writing Progress:
- [ ] Step 1: Review project standards (check existing tests)
- [ ] Step 2: Understand behavior (what should it do? what can fail?)
- [ ] Step 3: Choose test type (Integration/E2E/Unit)
- [ ] Step 4: Identify dependencies (real vs mocked)
- [ ] Step 5: Write failing test first (TDD)
- [ ] Step 6: Implement minimal code to pass
- [ ] Step 7: Verify coverage (happy path, errors, edge cases)
```
**Before writing any tests:**
1. **Review project standards** - Check existing test files, testing docs, or project conventions
2. **Understand behavior** - What should this do? What can go wrong?
3. **Choose test type** - Integration (default), E2E (critical workflows), or Unit (pure functions)
4. **Identify dependencies** - What needs to be real vs mocked?
## Test Type Decision
```
Is this a complete user workflow?
→ YES: E2E test
Is this a pure function (no side effects/dependencies)?
→ YES: Unit test
Everything else:
→ Integration test (with real dependencies)
```
## Mocking Guidelines
**Default: Don't mock. Use real dependencies.**
### Only Mock These
- External HTTP/API calls
- Time-dependent operations (timers, dates)
- Randomness (random numbers, UUIDs)
- File system I/O
- Third-party services (payments, analytics, email)
- Network boundaries
### Never Mock These
- Internal modules/packages
- Database queries (use test database)
- Business logic
- Data transformations
- Your own code calling your own code
**Why:** Mocking internal dependencies creates brittle tests that break during refactoring.
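As a concrete illustration, here is a minimal Python sketch (the `OrderService` class and its `charge` call are hypothetical, invented for this example): the external payment call is mocked, while the internal order-recording logic stays real.

```python
from unittest.mock import patch

# Hypothetical service: `charge` is an external boundary (a payment API
# call); `orders` is internal state that should stay real in tests.
class OrderService:
    def __init__(self):
        self.orders = []

    def charge(self, card):
        raise RuntimeError("would hit the network")  # external call

    def place_order(self, card, item):
        self.charge(card)         # external: OK to mock
        self.orders.append(item)  # internal: keep real
        return len(self.orders)

def test_place_order_records_item():
    svc = OrderService()
    # Mock only the network boundary; the list logic runs for real.
    with patch.object(svc, "charge", return_value=None):
        count = svc.place_order("test-card", "book")
    assert count == 1
    assert svc.orders == ["book"]
```

If the assertions still hold when the mock is moved even lower (say, to the HTTP client), the mock is at the right level.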
### Before Mocking, Ask:
1. "What side effects does this method have?"
2. "Does my test depend on those side effects?"
3. If yes → Mock at a lower level (the slow/external operation, not the method the test depends on)
4. Unsure? → Run with real implementation first, observe what's needed, THEN add minimal mocking
### Mock Red Flags
- "I'll mock this to be safe"
- "This might be slow, better mock it"
- Can't explain why mock is needed
- Mock setup longer than test logic
- Test fails when removing mock
## Integration Test Pattern
```
describe("Feature Name", () => {
  setup(initialState)

  test("should produce expected output when action is performed", () => {
    // Arrange: Set up preconditions
    // Act: Perform the action being tested
    // Assert: Verify observable output
  })
})
```
**Key principles:**
- Use real state/data, not mocks
- Assert on outputs users/callers can observe
- Test the behavior, not the implementation
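A minimal pytest-style sketch of this pattern, using a hypothetical in-memory `TodoList` (invented for illustration): real state is set up, one action is performed, and the assertion targets output a caller can observe.

```python
# Hypothetical unit under test: a plain in-memory todo list.
class TodoList:
    def __init__(self, items=None):
        self.items = list(items or [])

    def add(self, text):
        self.items.append({"text": text, "done": False})

    def complete(self, text):
        for item in self.items:
            if item["text"] == text:
                item["done"] = True

def test_completing_an_item_marks_it_done():
    # Arrange: real state, no mocks
    todos = TodoList()
    todos.add("write tests")
    # Act: perform the action being tested
    todos.complete("write tests")
    # Assert: verify observable output
    assert todos.items[0]["done"] is True
```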
For language-specific patterns, see the Language-Specific Patterns section.
## Async Waiting Patterns
When tests involve async operations, avoid arbitrary timeouts:
```
# BAD: Guessing at timing
sleep(500)
assert result == expected

# GOOD: Wait for the actual condition
wait_for(lambda: result == expected)
```
**When to use condition-based waiting:**
- Tests use `sleep`, `setTimeout`, or arbitrary delays
- Tests are flaky (pass locally, fail in CI)
- Tests timeout when run in parallel
- Waiting for async operations to complete
**Delegate to skill:** When you encounter these patterns, invoke `Skill(ce:condition-based-waiting)` for detailed guidance on implementing proper condition polling and fixing flaky tests.
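For environments without a built-in helper, condition polling can be sketched in a few lines of Python (the `wait_for` name here is an illustrative assumption, not a standard library function):

```python
import threading
import time

def wait_for(condition, timeout=2.0, interval=0.05):
    """Poll `condition` until it returns truthy or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Usage: a background task flips a flag; the test waits on the actual
# condition instead of sleeping for a guessed duration.
done = {"flag": False}
threading.Timer(0.1, lambda: done.update(flag=True)).start()
wait_for(lambda: done["flag"])
```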
## Assertion Strategy
**Principle:** Assert on observable outputs, not internal state.
| Context | Assert On | Avoid |
| ------- | ----------------------------------------------------- | ------------------------------------- |
| UI | Visible text, accessibility roles, user-visible state | CSS classes, internal state, test IDs |
| API | Response body, status code, headers | Internal DB state directly |
| CLI | stdout/stderr, exit code | Internal variables |
| Library | Return values, documented side effects | Private methods, internal state |
**Why:** Tests that assert on implementation details break when you refactor, even if behavior is unchanged.
## Test Data Management
**Use source constants and fixtures, not hard-coded values:**
```
# Good - References actual constant or fixture
expected_message = APP_MESSAGES.SUCCESS
assert response.message == expected_message

# Bad - Hard-coded, breaks when copy changes
assert response.message == "Action completed successfully!"
```
**Why:** When product copy changes, you want one place to update, not every test file.
## Anti-Patterns to Avoid
### Testing Mock Behavior
```
# BAD: Testing that the mock was called, not real behavior
mock_service.assert_called_once()

# GOOD: Test the actual outcome
assert user.is_active == True
assert len(sent_emails) == 1
```
**Gate:** Before asserting on mock calls, ask "Am I testing real behavior or mock interactions?" If testing mocks → Stop, test the actual outcome instead.
### Test-Only Methods in Production
```
# BAD: destroy() only used in tests - pollutes production code
class Session:
    def destroy(self):  # Only exists for test cleanup
        ...

# GOOD: Test utilities handle cleanup
# In test_utils.py
def cleanup_session(session):
    # Access internals here, not in production code
    ...
```
**Gate:** Before adding methods to production code, ask "Is this only for tests?" Yes → Put in test utilities.
### Mocking Without Understanding
```
# BAD: Mock prevents side effect test actually needs
mock(database.save)  # Now duplicate detection won't work!
add_item(item)
add_item(item)  # Should fail as duplicate, but won't

# GOOD: Mock at correct level
mock(external_api.validate)  # Mock slow external call only
add_item(item)  # DB save works, duplicate detected
add_item(item)  # Fails correctly
```
### Incomplete Mocks
```
// BAD: Partial mock - missing fields downstream code needs
mock_response = {
  status: "success",
  data: {...}
  // Missing: metadata.request_id that downstream code uses
}

// GOOD: Mirror real API completely
mock_response = {
  status: "success",
  data: {...},
  metadata: {request_id: "...", timestamp: ...}
}
```
**Gate:** Before creating mocks, check "What does the real thing return?" Include ALL fields.
## TDD Prevents Anti-Patterns
1. **Write test first** → Think about what you're testing (not mocks)
2. **Watch it fail** → Confirms test tests real behavior
3. **Minimal implementation** → No test-only methods creep in
4. **Real dependencies first** → See what test needs before mocking
**If testing mock behavior, you violated TDD** - you added mocks without watching test fail against real code.
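The loop above can be sketched in miniature (the `slugify` helper is hypothetical, invented for this example): the test is written first, fails against missing code, and the implementation stays minimal.

```python
# Steps 1-2: write the test first and watch it fail (NameError: slugify
# does not exist yet), which proves the test exercises real behavior.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 3: minimal implementation to make it pass - no test-only hooks.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```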
## Language-Specific Patterns
For detailed framework and language-specific patterns:
- **JavaScript/React**: See `references/javascript-react.md` for React Testing Library queries, Jest/Vitest setup, Playwright E2E, and component testing patterns
- **Python**: See `references/python.md` for pytest fixtures, polyfactory, respx mocking, testcontainers, and FastAPI testing
- **Go**: See `references/go.md` for table-driven tests, testify/go-cmp assertions, testcontainers-go, and interface fakes
## Quality Checklist
Before completing tests, verify:
- [ ] Happy path covered
- [ ] Error conditions handled
- [ ] Edge cases considered
- [ ] Real dependencies used (minimal mocking)
- [ ] Async waiting uses conditions, not arbitrary timeouts
- [ ] Tests survive refactoring (no implementation details)
- [ ] No test-only methods added to production code
- [ ] No assertions on mock existence or call counts
- [ ] Test names describe behavior, not implementation
## What NOT to Test
- Internal state
- Private methods
- Function call counts
- Implementation details
- Mock existence
- Framework internals
**Test behavior users/callers observe, not code structure.**
## Quick Reference
| Test Type | When | Dependencies |
| ----------- | ----------------------- | ---------------------------- |
| Integration | Default choice | Real (test DB, real modules) |
| E2E | Critical user workflows | Real (full stack) |
| Unit | Pure functions only | None |

| Anti-Pattern | Fix |
| ------------------------------- | --------------------------------------- |
| Testing mock existence | Test actual outcome instead |
| Test-only methods in production | Move to test utilities |
| Mocking without understanding | Understand dependencies, mock minimally |
| Incomplete mocks | Mirror real API completely |
| Tests as afterthought | TDD - write tests first |
| Arbitrary timeouts/sleeps | Use condition-based waiting |
<IMPORTANT>
**Remember:** Behavior over implementation. Real over mocked. Outputs over internals.
</IMPORTANT>
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/javascript-react.md
```markdown
# JavaScript/React Testing Patterns
Language-specific patterns for testing JavaScript and React applications with Jest, Vitest, React Testing Library, and Playwright.
## Contents
- [Integration Test Pattern (React Testing Library)](#integration-test-pattern-react-testing-library)
- [E2E Test Pattern (Playwright)](#e2e-test-pattern-playwright)
- [Query Strategy](#query-strategy)
- [String Management](#string-management)
- [React-Specific Anti-Patterns](#react-specific-anti-patterns)
- [Async Waiting Patterns](#async-waiting-patterns)
- [Tooling Quick Reference](#tooling-quick-reference)
- [Setup Patterns](#setup-patterns)
## Integration Test Pattern (React Testing Library)
```javascript
describe("Feature Name", () => {
  // Real state/providers, not mocks
  const setup = (initialState = {}) => {
    return render(<Component />, {
      wrapper: ({ children }) => (
        <StateProvider initialState={initialState}>{children}</StateProvider>
      ),
    });
  };

  it("should show result when user performs action", async () => {
    setup({ items: [] });

    // Semantic query (role/label/text)
    const button = screen.getByRole("button", { name: /add item/i });
    await userEvent.click(button);

    // Assert on UI output
    await waitFor(() => expect(screen.getByText(/item added/i)).toBeVisible());
  });
});
```
## E2E Test Pattern (Playwright)
```javascript
test("should complete workflow when user takes action", async ({ page }) => {
  await page.goto("/dashboard");

  // Given: precondition
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();

  // When: user action
  await page.getByRole("button", { name: "Add Item" }).click();

  // Then: expected outcome
  await expect(page.getByText("Item added successfully")).toBeVisible();
});
```
## Query Strategy
**Use semantic queries (order of preference):**
1. `getByRole('button', { name: /submit/i })` - Accessibility-based
2. `getByLabelText(/email/i)` - Form labels
3. `getByText(/welcome/i)` - Visible text
4. `getByPlaceholderText(/search/i)` - Input placeholders
**Avoid:**
- `getByTestId` - Implementation detail
- CSS selectors - Brittle, breaks during refactoring
- Internal state queries - Not user-observable
## String Management
**Use source constants, not hard-coded strings:**
```javascript
// Good - References actual constant
import { MESSAGES } from "@/constants/messages";
expect(screen.getByText(MESSAGES.SUCCESS)).toBeVisible();

// Bad - Hard-coded, breaks when copy changes
expect(screen.getByText("Action completed successfully!")).toBeVisible();
```
## React-Specific Anti-Patterns
### Testing Mock Behavior
```typescript
// BAD: Testing mock existence, not real behavior
test("renders sidebar", () => {
  render(<Page />);
  expect(screen.getByTestId("sidebar-mock")).toBeInTheDocument();
});

// GOOD: Test real component with semantic query
test("renders sidebar", () => {
  render(<Page />); // Don't mock sidebar
  expect(screen.getByRole("navigation")).toBeInTheDocument();
});
```
### Mocking Internal Components
```typescript
// BAD: Mock internal dependencies
vi.mock("./Sidebar", () => ({
  Sidebar: () => <div data-testid="sidebar-mock" />,
}));

// GOOD: Use real components, mock at system boundaries
// Only mock external APIs, not internal components
```
## Async Waiting Patterns
Use framework-provided waiting utilities, not arbitrary timeouts:
```typescript
// BAD: Guessing at timing
await new Promise((r) => setTimeout(r, 500));
expect(screen.getByText("Done")).toBeVisible();

// GOOD: Wait for the actual condition
await waitFor(() => expect(screen.getByText("Done")).toBeVisible());

// GOOD: Playwright auto-waits
await expect(page.getByText("Done")).toBeVisible();
```
For flaky test debugging, invoke `Skill(ce:condition-based-waiting)`.
## Tooling Quick Reference
| Tool | Purpose | Best For |
| --------------------- | ------------------ | --------------------------------- |
| Jest | Test runner | Unit and integration tests |
| Vitest | Test runner | Vite projects, faster than Jest |
| React Testing Library | Component testing | Integration tests with real DOM |
| Playwright | Browser automation | E2E tests, cross-browser |
| Cypress | Browser automation | E2E tests, time-travel debugging |
| MSW | API mocking | Mock fetch/axios at network level |
## Setup Patterns
### Jest + RTL Setup
```javascript
// jest.setup.js
import "@testing-library/jest-dom";
// Clear mocks between tests
beforeEach(() => {
  jest.clearAllMocks();
});
```
### Vitest + RTL Setup
```javascript
// vitest.setup.ts
import "@testing-library/jest-dom/vitest";
beforeEach(() => {
  vi.clearAllMocks();
});
```
### Playwright Setup
```javascript
// playwright.config.js
export default {
  testDir: "./e2e",
  use: {
    baseURL: "http://localhost:3000",
    trace: "on-first-retry",
  },
};
```
```
### references/python.md
```markdown
# Python Testing Patterns
Language-specific patterns for testing Python applications with `pytest`, `httpx`, and modern architecture tools like `polyfactory` and `testcontainers`.
## Contents
- [Pytest Configuration](#pytest-configuration-pyprojecttoml)
- [Pytest Fixture Pattern](#pytest-fixture-pattern)
- [Data Generation (Polyfactory)](#data-generation-polyfactory)
- [Mocking External Boundaries (RESPX)](#mocking-external-boundaries-respx)
- [FastAPI Testing Patterns (Async)](#fastapi-testing-patterns-async)
- [Integration Testing (Testcontainers)](#integration-testing-testcontainers)
- [Table-Driven Tests (Parametrize)](#table-driven-tests-parametrize)
- [Tooling Quick Reference](#tooling-quick-reference)
- [Property-Based Testing (Hypothesis)](#property-based-testing-hypothesis)
## Pytest Configuration (`pyproject.toml`)
Modern pytest setup uses `pyproject.toml` for configuration.
```toml
[tool.pytest.ini_options]
addopts = "-ra -q --cov=app --cov-report=term-missing"
testpaths = ["tests"]
asyncio_mode = "auto" # Eliminates need for @pytest.mark.asyncio decorators
asyncio_default_fixture_loop_scope = "function"
```
## Pytest Fixture Pattern
Use `yield` for setup/teardown and prefer `scope="session"` for expensive resources (like containers) and `scope="function"` for isolation (like db transactions).
```python
import pytest

@pytest.fixture
def db_session(db_engine):
    """Creates a fresh database session for a test."""
    connection = db_engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.rollback()
    connection.close()

@pytest.fixture
def user(db_session):
    """Create a test user using a Factory (preferred over manual creation)."""
    return UserFactory.create_sync(session=db_session)

def test_user_update(db_session, user):
    # Arrange
    user.name = "Updated Name"
    db_session.commit()

    # Act & Assert
    refreshed = db_session.get(User, user.id)
    assert refreshed.name == "Updated Name"
```
## Data Generation (Polyfactory)
Replace manual object creation and `factory_boy` with **Polyfactory**. It uses type hints to automatically generate valid data.
```python
from polyfactory.factories.pydantic_factory import ModelFactory

from app.models import User, UserRole

# Automatically infers fields from the Pydantic model or Dataclass
class UserFactory(ModelFactory[User]):
    __model__ = User

    # Override defaults if specific values are needed
    role = UserRole.USER
    is_active = True

def test_admin_dashboard(client):
    # Generate a full user object with valid random data, overriding just the role
    admin_user = UserFactory.build(role=UserRole.ADMIN)

    response = client.post("/login", json={"user": admin_user.dict()})
    assert response.status_code == 200
```
## Mocking External Boundaries (RESPX)
Avoid patching `requests` directly. Use `respx` with `httpx` for robust, router-based HTTP mocking.
```python
import httpx
import pytest
import respx

@respx.mock
async def test_external_github_api_call():
    # Arrange: Define the mock behavior (Router style)
    my_route = respx.get("https://api.github.com/user").mock(
        return_value=httpx.Response(200, json={"login": "test_user"})
    )

    # Act: Call function using httpx.AsyncClient
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.github.com/user")

    # Assert
    assert response.json()["login"] == "test_user"
    assert my_route.called
```
## FastAPI Testing Patterns (Async)
Use `httpx.AsyncClient` with `ASGITransport` for modern FastAPI testing.
```python
import pytest
from httpx import ASGITransport, AsyncClient

@pytest.fixture
async def client(db_session):
    """Create async test client with DB overrides."""
    # Override dependency
    app.dependency_overrides[get_db] = lambda: db_session

    # Connect directly to the app (no network overhead)
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as c:
        yield c

    app.dependency_overrides.clear()

async def test_create_user(client):
    user_data = UserFactory.build()  # Generate data
    response = await client.post("/users", json=user_data.model_dump())

    assert response.status_code == 201
    assert response.json()["email"] == user_data.email
```
## Integration Testing (Testcontainers)
Do not use SQLite for Postgres tests. Use **Testcontainers** to spin up real, disposable Docker instances.
```python
import pytest
from sqlalchemy import create_engine
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def postgres_container():
    """Spin up a real Postgres container for the test session."""
    with PostgresContainer("postgres:16-alpine") as postgres:
        yield postgres

@pytest.fixture(scope="session")
def db_engine(postgres_container):
    """Create engine connected to the container."""
    db_url = postgres_container.get_connection_url()
    engine = create_engine(db_url)

    # Run migrations (e.g., Alembic) here to set up schema
    Base.metadata.create_all(engine)

    yield engine
    engine.dispose()
```
## Table-Driven Tests (Parametrize)
```python
import pytest

@pytest.mark.parametrize("email, expected_error", [
    pytest.param("no-at-sign", "Invalid email", id="missing_at"),
    pytest.param("", "Field required", id="empty"),
    pytest.param("user@domain", "Missing TLD", id="missing_tld"),
])
def test_email_validation_errors(email, expected_error):
    with pytest.raises(ValueError, match=expected_error):
        validate_email(email)
```
## Tooling Quick Reference
| Tool | Purpose | Best For |
| ------------------ | --------------- | ---------------------------------- |
| **pytest** | Test runner | The industry standard |
| **ruff** | Linting | Fast linting/formatting |
| **polyfactory** | Data generation | Modern replacement for factory_boy |
| **respx** | HTTP Mocking | Mocking `httpx` requests |
| **testcontainers** | Infrastructure | Real integration tests (Docker) |
| **httpx** | HTTP Client | Async & Sync API testing |
| **pytest-asyncio** | Async support | `asyncio_mode="auto"` |
## Property-Based Testing (Hypothesis)
For critical logic, generate thousands of edge cases automatically.
```python
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_properties(x, y):
    # Commutative property
    assert add(x, y) == add(y, x)
```
```
### references/go.md
```markdown
# Go Testing Patterns
Language-specific patterns for testing Go applications using the standard library, `testify`, and modern integration tools.
## Contents
- [The Pragmatic Assertion Strategy](#the-pragmatic-assertion-strategy)
- [Table-Driven Tests](#table-driven-tests-the-gold-standard)
- [Integration Testing (Testcontainers)](#integration-testing-testcontainers)
- [Mocking Strategies](#mocking-strategies)
- [Native Fuzzing](#native-fuzzing)
- [HTTP Handlers with httptest](#http-handlers-with-httptest)
- [Tooling Quick Reference](#tooling-quick-reference)
## The Pragmatic Assertion Strategy
Don't be a purist. Use tools where they help, but know their limits.
- **Use `testify/require`** for setup and errors (stop the test immediately).
- **Use `testify/assert`** for simple value checks (booleans, strings, counts).
- **Use `google/go-cmp`** for complex structs (superior diff output).
```go
import (
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestUserProcessing(t *testing.T) {
	// SETUP
	// Use 'require' to fail fast if setup fails
	user, err := CreateUser("[email protected]")
	require.NoError(t, err, "Setup failed, stopping test")
	require.NotNil(t, user)

	// ACTION
	processedUser := Process(user)

	// ASSERTIONS
	// Use 'assert' for simple scalar values
	assert.Equal(t, "processed", processedUser.Status)
	assert.True(t, processedUser.IsActive)

	// Use 'go-cmp' for complex objects
	// Testify's output for large structs can be unreadable.
	// cmp.Diff shows exactly which field differs (-want +got)
	want := User{
		Email:    "[email protected]",
		Status:   "processed",
		IsActive: true,
		Metadata: map[string]string{"source": "web"},
	}
	if diff := cmp.Diff(want, processedUser); diff != "" {
		t.Errorf("Process() mismatch (-want +got):\n%s", diff)
	}
}
```
## Table-Driven Tests (The Gold Standard)
This is the dominant pattern in Go. Combine it with `t.Parallel()` for speed.
```go
func TestParseURL(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		want    string // simplified for example
		wantErr string // use string to match partial error messages
	}{
		{
			name:  "valid http",
			input: "http://example.com",
			want:  "example.com",
		},
		{
			name:    "missing protocol",
			input:   "example.com",
			wantErr: "invalid URL",
		},
	}

	for _, tt := range tests {
		tt := tt // Capture variable for parallel execution
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel()

			got, err := ParseURL(tt.input)
			if tt.wantErr != "" {
				require.Error(t, err)
				assert.Contains(t, err.Error(), tt.wantErr)
				return
			}

			require.NoError(t, err)
			assert.Equal(t, tt.want, got)
		})
	}
}
```
## Integration Testing (Testcontainers)
Do not mock database drivers. It creates low-confidence tests. Use `testcontainers-go` to spin up real dependencies.
```go
import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go/modules/postgres"
)

func TestUserDAO(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test")
	}
	ctx := context.Background()

	// Spin up real Postgres
	pgContainer, err := postgres.Run(ctx, "docker.io/postgres:16-alpine",
		postgres.WithDatabase("testdb"),
		postgres.WithUsername("user"),
		postgres.WithPassword("password"),
	)
	require.NoError(t, err)

	// Clean up container when test ends
	t.Cleanup(func() {
		pgContainer.Terminate(ctx)
	})

	// Get connection string and run assertions
	connStr, _ := pgContainer.ConnectionString(ctx, "sslmode=disable")
	_ = connStr // ... connect to DB and test ...
}
```
## Mocking Strategies
### 1. Interface Fakes (Preferred)
For internal logic, handwritten fakes are often cleaner than mock frameworks. They are type-safe and refactor-friendly.
```go
// Dependency Interface
type EmailSender interface {
	Send(to, msg string) error
}

// Handmade Fake
type FakeSender struct {
	SentMessages []string
}

func (f *FakeSender) Send(to, msg string) error {
	f.SentMessages = append(f.SentMessages, to)
	return nil
}

func TestRegistration(t *testing.T) {
	fake := &FakeSender{}
	svc := NewService(fake)

	svc.Register("[email protected]")

	assert.Equal(t, 1, len(fake.SentMessages))
	assert.Equal(t, "[email protected]", fake.SentMessages[0])
}
```
### 2. Testify Mocks (For External Libs)
Use `testify/mock` when the interface is huge or complex (e.g., AWS SDKs) and a handwritten fake is too much work.
```go
import "github.com/stretchr/testify/mock"

type MockS3 struct {
	mock.Mock
}

func (m *MockS3) GetObject(key string) ([]byte, error) {
	args := m.Called(key)
	return args.Get(0).([]byte), args.Error(1)
}

func TestDownload(t *testing.T) {
	m := new(MockS3)
	m.On("GetObject", "avatar.jpg").Return([]byte("data"), nil)

	// ... test logic ...

	m.AssertExpectations(t)
}
```
## Native Fuzzing
Use standard library fuzzing for parsers and validators. It finds edge cases (empty bytes, huge inputs) that humans miss.
```go
func FuzzJSONParser(f *testing.F) {
	f.Add("{\"foo\":\"bar\"}") // Seed corpus

	f.Fuzz(func(t *testing.T, jsonInput string) {
		val, err := Parse(jsonInput)
		if err == nil {
			// Property: Re-encoding should match input
			output, _ := Marshal(val)
			if jsonInput != output {
				t.Errorf("Roundtrip failure! Input: %q, Output: %q", jsonInput, output)
			}
		}
	})
}
```
## HTTP Handlers with `httptest`
```go
func TestHandleHealth(t *testing.T) {
	// Arrange
	req := httptest.NewRequest("GET", "/health", nil)
	w := httptest.NewRecorder()

	// Act
	HealthHandler(w, req)

	// Assert
	res := w.Result()
	assert.Equal(t, 200, res.StatusCode)

	// Helper for reading body
	body, _ := io.ReadAll(res.Body)
	assert.JSONEq(t, `{"status": "ok"}`, string(body))
}
```
## Tooling Quick Reference
| Tool | Purpose | Best Use Case |
| ------------------- | -------------- | ----------------------------------------------- |
| **testify/assert** | Assertions | 90% of unit tests. Fast, readable. |
| **testify/require** | Assertions | Checking errors/nil before proceeding. |
| **google/go-cmp** | Comparison | Complex structs, huge slices, map diffs. |
| **testcontainers** | Infrastructure | Database/Cache integration tests. |
| **httptest** | HTTP | Testing API handlers without starting a server. |
| **testing.F** | Fuzzing | Robustness testing for inputs/parsers. |
```