newman
Automated API testing with Postman collections via Newman CLI. Use when user requests API testing, collection execution, automated testing, CI/CD integration, or mentions "Postman", "Newman", "API tests", "run collection", or "automated testing".
Packaged view
This page reorganizes the original catalog entry around fit, installability, and workflow context first. The original raw source lives below.
Install command
npx @skill-hub/cli install openclaw-skills-newman
Repository
Skill path: skills/1999azzar/newman
Best for
Primary workflow: DevOps.
Technical facets: Full Stack, Backend, DevOps, Testing, Integration.
Target audience: everyone.
License: Unknown.
Original source
Catalog source: SkillHub Club.
Repository owner: openclaw.
This is a mirrored public skill entry. Review the repository before installing it into production workflows.
What it helps with
- Install newman into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
- Review https://github.com/openclaw/skills before adding newman to shared team environments
- Use newman for development workflows
Works across
Favorites: 0.
Sub-skills: 0.
Aggregator: No.
Original source / Raw SKILL.md
---
name: newman
description: Automated API testing with Postman collections via Newman CLI. Use when user requests API testing, collection execution, automated testing, CI/CD integration, or mentions "Postman", "Newman", "API tests", "run collection", or "automated testing".
---
# Newman - Postman CLI Runner
Newman is the command-line Collection Runner for Postman. Run and test Postman collections directly from the command line with powerful reporting, environment management, and CI/CD integration.
## Quick Start
### Installation
```bash
# Global install (recommended)
npm install -g newman
# Project-specific
npm install --save-dev newman
# Verify
newman --version
```
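For reproducible CI runs (see the version-pinning best practice later in this guide), the project-local route can constrain Newman's version in `package.json` rather than relying on a global install. A sketch — version ranges are illustrative:

```json
{
  "devDependencies": {
    "newman": "^6.0.0",
    "newman-reporter-htmlextra": "^1.23.0"
  },
  "scripts": {
    "test:api": "newman run collection.json -e environment.json"
  }
}
```

`npm run test:api` (or `npx newman run ...`) then uses the project's pinned copy.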
### Basic Execution
```bash
# Run collection
newman run collection.json
# With environment
newman run collection.json -e environment.json
# With globals
newman run collection.json -g globals.json
# Combined
newman run collection.json -e env.json -g globals.json -d data.csv
```
## Core Workflows
### 1. Export from Postman Desktop
**In Postman:**
1. Collections → Click "..." → Export
2. Choose "Collection v2.1" (recommended)
3. Save as `collection.json`
**Environment:**
1. Environments → Click "..." → Export
2. Save as `environment.json`
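A wrong export format tends to fail only at run time, so a tiny pre-flight check can catch a v1 or v2.0 export early. A sketch — `isV21Collection` is illustrative, not part of Newman; Postman writes the schema URL into `info.schema` on export:

```javascript
// Check that an exported file declares the Collection v2.1 schema.
function isV21Collection(doc) {
  return Boolean(
    doc &&
    doc.info &&
    typeof doc.info.schema === 'string' &&
    doc.info.schema.includes('v2.1.0')
  );
}

// Usage:
//   const doc = JSON.parse(require('fs').readFileSync('collection.json', 'utf8'));
//   if (!isV21Collection(doc)) throw new Error('Re-export as Collection v2.1');
```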
### 2. Run Tests
```bash
# Basic run
newman run collection.json
# With detailed output
newman run collection.json --verbose
# Fail on errors
newman run collection.json --bail
# Custom timeout (30s)
newman run collection.json --timeout-request 30000
```
### 3. Data-Driven Testing
**CSV format:**
```csv
username,password
user1,pass1
user2,pass2
```
**Run:**
```bash
newman run collection.json -d test_data.csv --iteration-count 2
```
### 4. Reporters
```bash
# CLI only (default)
newman run collection.json
# HTML report
newman run collection.json --reporters cli,html --reporter-html-export report.html
# JSON export
newman run collection.json --reporters cli,json --reporter-json-export results.json
# JUnit (for CI)
newman run collection.json --reporters cli,junit --reporter-junit-export junit.xml
# Multiple reporters
newman run collection.json --reporters cli,html,json,junit \
--reporter-html-export ./reports/newman.html \
--reporter-json-export ./reports/newman.json \
--reporter-junit-export ./reports/newman.xml
```
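The JSON export is handy for custom gating logic in wrapper scripts. A minimal sketch, assuming the reporter's standard `run.stats` layout (`runPassed` is illustrative, not a Newman API):

```javascript
// Decide pass/fail from a summary written via --reporter-json-export.
function runPassed(summary) {
  const { requests, assertions } = summary.run.stats;
  return requests.failed === 0 && assertions.failed === 0;
}

// Usage:
//   const summary = JSON.parse(require('fs').readFileSync('results.json', 'utf8'));
//   if (!runPassed(summary)) process.exit(1);
```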
### 5. Security Best Practices
**❌ NEVER hardcode secrets in collections!**
Use environment variables:
```bash
# Export sensitive vars
export API_KEY="your-secret-key"
export DB_PASSWORD="your-db-pass"
# The environment file below resolves these via {{$processEnvironment.*}}
newman run collection.json -e environment.json
# Or pass directly
newman run collection.json --env-var "API_KEY=secret" --env-var "DB_PASSWORD=pass"
```
**In Postman collection tests:**
```javascript
// Use {{API_KEY}} in requests
pm.request.headers.add({key: 'Authorization', value: `Bearer {{API_KEY}}`});
// Access in scripts
const apiKey = pm.environment.get("API_KEY");
```
**Environment file (environment.json):**
```json
{
  "name": "Production",
  "values": [
    {"key": "BASE_URL", "value": "https://api.example.com", "enabled": true},
    {"key": "API_KEY", "value": "{{$processEnvironment.API_KEY}}", "enabled": true}
  ]
}
```
Newman will replace `{{$processEnvironment.API_KEY}}` with the environment variable.
## Common Use Cases
### CI/CD Integration
See `references/ci-cd-examples.md` for GitHub Actions, GitLab CI, and Jenkins examples.
### Automated Regression Testing
```bash
#!/bin/bash
# scripts/run-api-tests.sh
set -e
echo "Running API tests..."
newman run collections/api-tests.json \
-e environments/staging.json \
--reporters cli,html,junit \
--reporter-html-export ./test-results/newman.html \
--reporter-junit-export ./test-results/newman.xml \
--bail \
--color on
echo "Tests completed. Report: ./test-results/newman.html"
```
### Load Testing
```bash
# Run with high iteration count
newman run collection.json \
-n 100 \
--delay-request 100 \
--timeout-request 5000 \
--reporters cli,json \
--reporter-json-export load-test-results.json
```
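Beyond pass/fail, the JSON export records per-request timings (`run.executions[i].response.responseTime` in the standard output), which can be reduced to latency percentiles. A sketch using the nearest-rank method:

```javascript
// Nearest-rank percentile over a list of response times (ms).
function percentile(times, p) {
  const sorted = [...times].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// Example:
//   const times = summary.run.executions.map(e => e.response.responseTime);
//   console.log('p95:', percentile(times, 95));
```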
### Parallel Execution
```bash
# Install parallel runner
npm install -g newman-parallel
# Run collections in parallel
newman-parallel -c collection1.json,collection2.json,collection3.json \
-e environment.json \
--reporters cli,html
```
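`newman-parallel` is a third-party package; the same effect is achievable with Newman's programmatic API (`require('newman').run(...)`) behind a generic concurrency limiter. A sketch — the tasks here would be promisified `newman.run` calls:

```javascript
// Run async tasks with at most `limit` in flight; results keep task order.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (single-threaded, so safe)
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```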
## Advanced Features
### Custom Scripts
**Pre-request Script (in Postman):**
```javascript
// Generate dynamic values
pm.environment.set("timestamp", Date.now());
pm.environment.set("nonce", Math.random().toString(36).substring(7));
```
**Test Script (in Postman):**
```javascript
// Status code check
pm.test("Status is 200", function() {
pm.response.to.have.status(200);
});
// Response body validation
pm.test("Response has user ID", function() {
const jsonData = pm.response.json();
pm.expect(jsonData).to.have.property('user_id');
});
// Response time check
pm.test("Response time < 500ms", function() {
pm.expect(pm.response.responseTime).to.be.below(500);
});
// Set variable from response
pm.environment.set("user_token", pm.response.json().token);
```
### SSL/TLS Configuration
```bash
# Disable SSL verification (dev only!)
newman run collection.json --insecure
# Extra CA certificates
newman run collection.json --ssl-extra-ca-certs ca.pem
# Per-host client certificate list
newman run collection.json --ssl-client-cert-list cert-list.json
# Client certificates
newman run collection.json \
--ssl-client-cert client.pem \
--ssl-client-key key.pem \
--ssl-client-passphrase "secret"
```
### Error Handling
```bash
# Report failures but always exit 0 (won't fail the build)
newman run collection.json --suppress-exit-code
# Fail fast
newman run collection.json --bail
# Custom error handling in wrapper
#!/bin/bash
newman run collection.json -e env.json
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
echo "Tests failed! Exit code: $EXIT_CODE"
# Send alert, rollback deployment, etc.
exit 1
fi
```
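For transient failures (cold starts, rate limits), a retry layer around the whole run can complement `--bail`. A generic sketch — this is wrapper logic, not a Newman flag:

```javascript
// Retry an async runner with linear backoff before giving up.
async function retry(fn, attempts = 3, delayMs = 1000) {
  let lastErr;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off before the next attempt (delayMs, 2*delayMs, ...)
      if (i < attempts) await new Promise(r => setTimeout(r, delayMs * i));
    }
  }
  throw lastErr;
}

// Usage: retry(() => promisifiedNewmanRun(options), 3);
```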
## Troubleshooting
**Collection not found:**
- Use absolute paths: `newman run /full/path/to/collection.json`
- Check file permissions: `ls -la collection.json`
**Environment variables not loading:**
- Verify syntax: `{{$processEnvironment.VAR_NAME}}`
- Check export: `echo $VAR_NAME`
- Use `--env-var` flag as fallback
**Timeout errors:**
- Increase timeout: `--timeout-request 60000` (60s)
- Check network connectivity
- Verify API endpoint is reachable
**SSL errors:**
- Development: Use `--insecure` temporarily
- Production: Add CA cert with `--ssl-extra-ca-certs`
**Memory issues (large collections):**
- Reduce iteration count
- Split collection into smaller parts
- Increase Node heap: `NODE_OPTIONS=--max-old-space-size=4096 newman run ...`
## Best Practices
1. **Version Control**: Store collections and environments in Git
2. **Environment Separation**: Separate files for dev/staging/prod
3. **Secret Management**: Use environment variables, never commit secrets
4. **Meaningful Names**: Use descriptive collection and folder names
5. **Test Atomicity**: Each request should test one specific thing
6. **Assertions**: Add comprehensive test scripts to every request
7. **Documentation**: Use Postman descriptions for context
8. **CI Integration**: Run Newman in CI pipeline for every PR
9. **Reports**: Archive HTML reports for historical analysis
10. **Timeouts**: Set reasonable timeout values for production APIs
## References
- **CI/CD Examples**: See `references/ci-cd-examples.md`
- **Advanced Patterns**: See `references/advanced-patterns.md`
- **Official Docs**: https://learning.postman.com/docs/running-collections/using-newman-cli/command-line-integration-with-newman/
---
## Referenced Files
> The following files are referenced in this skill and included for context.
### references/ci-cd-examples.md
```markdown
# CI/CD Integration Examples
## GitHub Actions
### Basic Workflow
```yaml
# .github/workflows/api-tests.yml
name: API Tests
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra
      - name: Run API Tests
        env:
          API_KEY: ${{ secrets.API_KEY }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
        run: |
          newman run collections/api-tests.json \
            -e environments/staging.json \
            --reporters cli,htmlextra,junit \
            --reporter-htmlextra-export ./test-results/newman.html \
            --reporter-junit-export ./test-results/newman.xml \
            --bail
      - name: Upload Test Results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: newman-reports
          path: test-results/
      - name: Publish Test Report
        if: always()
        uses: dorny/test-reporter@v1
        with:
          name: Newman Tests
          path: test-results/newman.xml
          reporter: java-junit
```
### Multi-Environment
```yaml
# .github/workflows/api-tests-matrix.yml
name: API Tests (Multi-Env)
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [dev, staging, production]
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install Newman
        run: npm install -g newman
      - name: Run Tests - ${{ matrix.environment }}
        env:
          API_KEY: ${{ secrets[format('{0}_API_KEY', matrix.environment)] }}
        run: |
          newman run collections/api-tests.json \
            -e environments/${{ matrix.environment }}.json \
            --reporters cli,json \
            --reporter-json-export ./results-${{ matrix.environment }}.json
      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: test-results-${{ matrix.environment }}
          path: results-${{ matrix.environment }}.json
```
## GitLab CI
### Basic Pipeline
```yaml
# .gitlab-ci.yml
stages:
  - test
api_tests:
  stage: test
  image: node:18-alpine
  before_script:
    - npm install -g newman
  script:
    - |
      newman run collections/api-tests.json \
        -e environments/staging.json \
        --env-var "API_KEY=$API_KEY" \
        --reporters cli,junit \
        --reporter-junit-export newman-report.xml
  artifacts:
    when: always
    reports:
      junit: newman-report.xml
    paths:
      - newman-report.xml
    expire_in: 30 days
  variables:
    API_KEY: $CI_API_KEY
  only:
    - merge_requests
    - main
```
### Scheduled Tests
```yaml
# .gitlab-ci.yml
scheduled_smoke_tests:
  stage: test
  image: node:18-alpine
  before_script:
    - npm install -g newman newman-reporter-htmlextra
  script:
    - |
      newman run collections/smoke-tests.json \
        -e environments/production.json \
        --env-var "API_KEY=$PROD_API_KEY" \
        --reporters cli,htmlextra \
        --reporter-htmlextra-export smoke-test-report.html
  artifacts:
    paths:
      - smoke-test-report.html
    expire_in: 7 days
  only:
    - schedules
```
## Jenkins
### Declarative Pipeline
```groovy
// Jenkinsfile
pipeline {
    agent any
    environment {
        API_KEY = credentials('api-key')
        DB_PASSWORD = credentials('db-password')
    }
    stages {
        stage('Setup') {
            steps {
                sh 'npm install -g newman newman-reporter-htmlextra'
            }
        }
        stage('API Tests') {
            steps {
                sh '''
                    newman run collections/api-tests.json \
                        -e environments/staging.json \
                        --env-var "API_KEY=$API_KEY" \
                        --env-var "DB_PASSWORD=$DB_PASSWORD" \
                        --reporters cli,htmlextra,junit \
                        --reporter-htmlextra-export ./reports/newman.html \
                        --reporter-junit-export ./reports/newman.xml \
                        --bail
                '''
            }
        }
    }
    post {
        always {
            junit 'reports/newman.xml'
            publishHTML([
                reportDir: 'reports',
                reportFiles: 'newman.html',
                reportName: 'Newman Test Report'
            ])
        }
        failure {
            emailext(
                subject: "API Tests Failed - ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: "Check console output at ${env.BUILD_URL}",
                to: '[email protected]'
            )
        }
    }
}
```
### Scripted Pipeline with Parallel Execution
```groovy
// Jenkinsfile
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Setup') {
        sh 'npm install -g newman'
    }
    stage('API Tests') {
        parallel(
            'Auth Tests': {
                sh 'newman run collections/auth-tests.json -e environments/staging.json'
            },
            'User Tests': {
                sh 'newman run collections/user-tests.json -e environments/staging.json'
            },
            'Payment Tests': {
                sh 'newman run collections/payment-tests.json -e environments/staging.json'
            }
        )
    }
    stage('Report') {
        publishHTML([
            reportDir: 'reports',
            reportFiles: '*.html',
            reportName: 'Newman Reports'
        ])
    }
}
```
## Docker
### Standalone Container
```dockerfile
# Dockerfile
FROM node:18-alpine
WORKDIR /app
# Install Newman and reporters
RUN npm install -g newman newman-reporter-htmlextra
# Copy collections and environments
COPY collections/ ./collections/
COPY environments/ ./environments/
# Entry point
ENTRYPOINT ["newman"]
CMD ["run", "collections/api-tests.json", "-e", "environments/staging.json"]
```
**Build and run:**
```bash
docker build -t api-tests .
docker run --rm -e API_KEY="secret" api-tests
```
### Docker Compose
```yaml
# docker-compose.yml
version: '3.8'
services:
  api-tests:
    image: node:18-alpine
    working_dir: /app
    volumes:
      - ./collections:/app/collections
      - ./environments:/app/environments
      - ./reports:/app/reports
    environment:
      - API_KEY=${API_KEY}
      - DB_PASSWORD=${DB_PASSWORD}
    command: >
      sh -c "npm install -g newman newman-reporter-htmlextra &&
             newman run collections/api-tests.json
             -e environments/staging.json
             --reporters cli,htmlextra
             --reporter-htmlextra-export /app/reports/newman.html"
```
**Run:**
```bash
docker-compose run --rm api-tests
```
## Bitbucket Pipelines
```yaml
# bitbucket-pipelines.yml
image: node:18
pipelines:
  default:
    - step:
        name: API Tests
        caches:
          - node
        script:
          - npm install -g newman
          - |
            newman run collections/api-tests.json \
              -e environments/staging.json \
              --env-var "API_KEY=$API_KEY" \
              --reporters cli,junit \
              --reporter-junit-export test-results/newman.xml
        artifacts:
          - test-results/**
  branches:
    main:
      - step:
          name: Production Smoke Tests
          deployment: production
          script:
            - npm install -g newman
            - |
              newman run collections/smoke-tests.json \
                -e environments/production.json \
                --env-var "API_KEY=$PROD_API_KEY" \
                --bail
```
## CircleCI
```yaml
# .circleci/config.yml
version: 2.1
jobs:
  api-tests:
    docker:
      - image: cimg/node:18.0
    steps:
      - checkout
      - run:
          name: Install Newman
          command: npm install -g newman newman-reporter-htmlextra
      - run:
          name: Run API Tests
          command: |
            newman run collections/api-tests.json \
              -e environments/staging.json \
              --env-var "API_KEY=$API_KEY" \
              --reporters cli,htmlextra,junit \
              --reporter-htmlextra-export ./test-results/newman.html \
              --reporter-junit-export ./test-results/newman.xml
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
workflows:
  version: 2
  test:
    jobs:
      - api-tests:
          context: api-credentials
```
## Best Practices for CI/CD
1. **Secret Management**: Use CI platform secrets, never commit
2. **Fail Fast**: Use `--bail` to stop on first failure
3. **Parallel Execution**: Split collections for faster feedback
4. **Artifacts**: Always save reports (even on failure)
5. **Notifications**: Alert team on failures
6. **Caching**: Cache Newman install to speed up runs
7. **Matrix Testing**: Test multiple environments in parallel
8. **Timeouts**: Set appropriate timeout values for CI
9. **Version Pinning**: Pin Newman version for consistency
10. **Smoke Tests**: Run critical tests on every deploy
```
### references/advanced-patterns.md
```markdown
# Advanced Newman Patterns
## Custom Reporters
### Creating a Custom Reporter
```javascript
// custom-reporter.js
function CustomReporter(emitter, reporterOptions, collectionRunOptions) {
// newman.run event lifecycle:
// start, beforeIteration, beforeItem, beforePrerequest, prerequest,
// beforeRequest, request, beforeTest, test, beforeScript, script,
// item, iteration, assertion, console, exception, beforeDone, done
emitter.on('start', function (err, args) {
console.log('Collection run started');
});
emitter.on('request', function (err, args) {
if (err) {
console.error('Request error:', err);
return;
}
console.log(`[${args.request.method}] ${args.request.url.toString()}`);
console.log(`Status: ${args.response.code} ${args.response.status}`);
console.log(`Time: ${args.response.responseTime}ms`);
});
emitter.on('assertion', function (err, args) {
if (err) {
console.error(`✗ ${args.assertion}`, err.message);
} else {
console.log(`✓ ${args.assertion}`);
}
});
emitter.on('done', function (err, summary) {
if (err) {
console.error('Collection run failed:', err);
return;
}
console.log('\n=== Summary ===');
console.log(`Total: ${summary.run.stats.requests.total}`);
console.log(`Failed: ${summary.run.stats.requests.failed}`);
console.log(`Assertions: ${summary.run.stats.assertions.total}`);
console.log(`Failures: ${summary.run.stats.assertions.failed}`);
});
}
module.exports = CustomReporter;
```
**Usage:**
```bash
# Custom reporters are npm packages named newman-reporter-<name>;
# install the package, then reference it by <name>
npm install -g ./newman-reporter-custom
newman run collection.json -r custom
```
### Slack Notification Reporter
```javascript
// slack-reporter.js
const https = require('https');
function SlackReporter(emitter, reporterOptions, collectionRunOptions) {
const webhookUrl = reporterOptions.webhookUrl || process.env.SLACK_WEBHOOK;
emitter.on('done', function (err, summary) {
const stats = summary.run.stats;
const failed = stats.requests.failed + stats.assertions.failed;
const color = failed > 0 ? 'danger' : 'good';
const status = failed > 0 ? 'Failed' : 'Passed';
const payload = JSON.stringify({
username: 'Newman',
icon_emoji: ':test_tube:',
attachments: [{
color: color,
title: `API Tests ${status}`,
fields: [
{ title: 'Requests', value: `${stats.requests.total}`, short: true },
{ title: 'Failed', value: `${stats.requests.failed}`, short: true },
{ title: 'Assertions', value: `${stats.assertions.total}`, short: true },
{ title: 'Failures', value: `${stats.assertions.failed}`, short: true }
]
}]
});
const options = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(payload)
}
};
const req = https.request(webhookUrl, options);
req.write(payload);
req.end();
});
}
module.exports = SlackReporter;
```
**Usage:**
```bash
# assuming the reporter is packaged and installed as newman-reporter-slack
newman run collection.json -r slack --reporter-slack-webhookUrl="https://hooks.slack.com/..."
```
## Advanced Test Patterns
### Dynamic Request Chaining
**Collection structure:**
1. Login → Get token
2. Create User → Get user_id
3. Update User → Use user_id from step 2
4. Delete User → Use user_id from step 2
**Step 1 - Login (Tests tab):**
```javascript
pm.test("Login successful", function() {
pm.response.to.have.status(200);
const jsonData = pm.response.json();
pm.expect(jsonData).to.have.property('token');
// Store for next requests
pm.collectionVariables.set("auth_token", jsonData.token);
});
```
**Step 2 - Create User (Pre-request):**
```javascript
// Use token from login
pm.request.headers.add({
key: 'Authorization',
value: `Bearer {{auth_token}}`
});
```
**Step 2 - Create User (Tests tab):**
```javascript
pm.test("User created", function() {
const jsonData = pm.response.json();
// Store user_id for subsequent requests
pm.collectionVariables.set("created_user_id", jsonData.user_id);
});
```
**Step 3 - Update User (URL):**
```
PUT {{BASE_URL}}/users/{{created_user_id}}
```
### Conditional Request Execution
**Pre-request Script:**
```javascript
// Stop the run based on a condition
// (setNextRequest controls which request runs NEXT; it cannot skip the one currently executing)
if (pm.environment.get("skip_payment_tests") === "true") {
postman.setNextRequest(null); // End the run after the current request completes
}
// Or skip to specific request
if (pm.environment.get("user_type") === "guest") {
postman.setNextRequest("Guest Flow - Get Products");
}
```
**Test Script:**
```javascript
// Conditional flow
pm.test("Admin check", function() {
const isAdmin = pm.response.json().is_admin;
if (isAdmin) {
postman.setNextRequest("Admin - Dashboard");
} else {
postman.setNextRequest("User - Dashboard");
}
});
```
### Data Validation Patterns
**Schema Validation:**
```javascript
const schema = {
type: "object",
required: ["id", "name", "email"],
properties: {
id: { type: "integer" },
name: { type: "string", minLength: 1 },
email: { type: "string", format: "email" },
age: { type: "integer", minimum: 0, maximum: 150 }
}
};
pm.test("Response matches schema", function() {
const jsonData = pm.response.json();
pm.expect(tv4.validate(jsonData, schema)).to.be.true;
});
```
**Array Validation:**
```javascript
pm.test("Users array validation", function() {
const users = pm.response.json().users;
pm.expect(users).to.be.an('array');
pm.expect(users).to.have.lengthOf.at.least(1);
// Validate each user
users.forEach((user, index) => {
pm.expect(user, `User ${index}`).to.have.all.keys('id', 'name', 'email');
pm.expect(user.email, `User ${index} email`).to.match(/^[^\s@]+@[^\s@]+\.[^\s@]+$/);
});
});
```
### Performance Testing
**Response Time Assertions:**
```javascript
pm.test("Response time < 200ms", function() {
pm.expect(pm.response.responseTime).to.be.below(200);
});
pm.test("Response time within acceptable range", function() {
const responseTime = pm.response.responseTime;
pm.expect(responseTime).to.be.within(50, 500);
});
```
**Throughput Measurement:**
```javascript
// Collection-level variable: request_count, start_time
// Pre-request (first request only)
if (!pm.collectionVariables.get("start_time")) {
pm.collectionVariables.set("start_time", Date.now());
pm.collectionVariables.set("request_count", 0);
}
// Test (every request)
pm.test("Track throughput", function() {
let count = pm.collectionVariables.get("request_count") || 0;
pm.collectionVariables.set("request_count", count + 1);
const startTime = pm.collectionVariables.get("start_time");
const elapsed = (Date.now() - startTime) / 1000; // seconds
const throughput = (count + 1) / elapsed;
console.log(`Throughput: ${throughput.toFixed(2)} req/s`);
});
```
## Library Integration
### Using External Libraries
**CryptoJS for signing:**
```javascript
// Load in collection Pre-request or Test
const CryptoJS = require('crypto-js');
const timestamp = Date.now();
const nonce = Math.random().toString(36).substring(7);
const message = `${timestamp}${nonce}${pm.request.method}${pm.request.url.getPath()}`;
const signature = CryptoJS.HmacSHA256(message, pm.environment.get("API_SECRET"));
const signatureBase64 = CryptoJS.enc.Base64.stringify(signature);
pm.request.headers.add({
key: 'X-Signature',
value: signatureBase64
});
pm.request.headers.add({
key: 'X-Timestamp',
value: timestamp.toString()
});
pm.request.headers.add({
key: 'X-Nonce',
value: nonce
});
```
**Moment.js for dates:**
```javascript
const moment = require('moment');
// Dynamic date ranges
const startDate = moment().subtract(7, 'days').format('YYYY-MM-DD');
const endDate = moment().format('YYYY-MM-DD');
pm.environment.set("start_date", startDate);
pm.environment.set("end_date", endDate);
```
**Faker.js for test data:**
```javascript
const faker = require('faker');
pm.environment.set("random_email", faker.internet.email());
pm.environment.set("random_name", faker.name.findName());
pm.environment.set("random_address", faker.address.streetAddress());
```
## Advanced Environment Management
### Multi-Environment Variables
**Base environment (base.json):**
```json
{
  "name": "Base",
  "values": [
    {"key": "API_VERSION", "value": "v1"},
    {"key": "TIMEOUT", "value": "5000"},
    {"key": "RETRY_COUNT", "value": "3"}
  ]
}
```
**Override in specific environment (production.json):**
```json
{
  "name": "Production",
  "values": [
    {"key": "BASE_URL", "value": "https://api.production.com"},
    {"key": "TIMEOUT", "value": "10000"}
  ]
}
```
**Merge via script:**
```bash
#!/bin/bash
# Merge base + environment-specific; entries in the second file override the first,
# and the result keeps the {name, values} shape Newman expects for -e
jq -s '{name: .[1].name, values: ((.[0].values + .[1].values) | group_by(.key) | map(last))}' \
  base.json production.json > merged.json
newman run collection.json -e merged.json
```
### Encrypted Variables
**Encrypt sensitive values:**
```javascript
// scripts/encrypt-env.js
const crypto = require('crypto');
const fs = require('fs');
const ENCRYPTION_KEY = process.env.ENCRYPTION_KEY; // 32-char key
const algorithm = 'aes-256-cbc';
function encrypt(text) {
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv(algorithm, Buffer.from(ENCRYPTION_KEY), iv);
let encrypted = cipher.update(text, 'utf8', 'hex');
encrypted += cipher.final('hex');
return iv.toString('hex') + ':' + encrypted;
}
const env = JSON.parse(fs.readFileSync('environment.json'));
env.values.forEach(v => {
if (v.key.includes('SECRET') || v.key.includes('PASSWORD')) {
v.value = encrypt(v.value);
}
});
fs.writeFileSync('environment.encrypted.json', JSON.stringify(env, null, 2));
```
**Decrypt in collection:**
```javascript
// Pre-request Script
// Note: the Postman/Newman script sandbox bundles crypto-js; Node's crypto module
// is not available here, so decrypt the aes-256-cbc payload with CryptoJS instead
const CryptoJS = require('crypto-js');
function decrypt(text) {
const parts = text.split(':');
const iv = CryptoJS.enc.Hex.parse(parts[0]);
const key = CryptoJS.enc.Utf8.parse(pm.environment.get("ENCRYPTION_KEY"));
const decrypted = CryptoJS.AES.decrypt(
CryptoJS.lib.CipherParams.create({ ciphertext: CryptoJS.enc.Hex.parse(parts[1]) }),
key,
{ iv: iv, mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 }
);
return decrypted.toString(CryptoJS.enc.Utf8);
}
const encryptedApiKey = pm.environment.get("API_KEY_ENCRYPTED");
const apiKey = decrypt(encryptedApiKey);
pm.request.headers.add({
key: 'Authorization',
value: `Bearer ${apiKey}`
});
```
## Mock Server Integration
**Create mock from collection:**
```bash
# Start Postman mock server via CLI (requires Postman account)
# Or use json-server for local mocking
npm install -g json-server
# db.json
cat > db.json << 'EOF'
{
"users": [
{"id": 1, "name": "John Doe", "email": "[email protected]"}
],
"posts": [
{"id": 1, "title": "Hello", "userId": 1}
]
}
EOF
# Start mock
json-server --watch db.json --port 3000
# Run collection against mock
newman run collection.json --env-var "BASE_URL=http://localhost:3000"
```
## Performance Optimization
### Parallel Collection Execution
```bash
#!/bin/bash
# scripts/parallel-runner.sh
collections=(
"collections/auth.json"
"collections/users.json"
"collections/products.json"
)
pids=()
for collection in "${collections[@]}"; do
newman run "$collection" -e environments/staging.json &
pids+=($!)
done
# Wait for all to complete
for pid in "${pids[@]}"; do
wait $pid
if [ $? -ne 0 ]; then
echo "Collection failed (PID: $pid)"
exit 1
fi
done
echo "All collections passed!"
```
### Request Pooling
```javascript
// Collection Pre-request
if (!pm.collectionVariables.get("connection_pool")) {
// Variables are stored as strings, so serialize structured state as JSON
pm.collectionVariables.set("connection_pool", JSON.stringify({
max: 10,
active: 0
}));
}
```
## Best Practices Summary
1. **Modular Collections**: Split by feature/domain for parallel execution
2. **DRY Principle**: Use collection/folder-level scripts to avoid duplication
3. **Environment Abstraction**: Never hardcode URLs or credentials
4. **Comprehensive Assertions**: Test status, schema, headers, and response time
5. **Error Handling**: Use try-catch in scripts to prevent collection failures
6. **Logging**: Use console.log strategically for debugging
7. **Version Control**: Track collections and environments in Git
8. **Documentation**: Use Postman descriptions for each request
9. **CI Integration**: Run Newman in every CI pipeline
10. **Security**: Encrypt secrets, use short-lived tokens, rotate credentials
```
---
## Skill Companion Files
> Additional files collected from the skill directory layout.
### README.md
```markdown
# Newman Skill 🧪
> Production-ready Newman (Postman CLI) skill for automated API testing with gold-standard security practices
## 🎯 What is This?
A comprehensive OpenClaw skill for running automated API tests using [Newman](https://github.com/postmanlabs/newman), the command-line Collection Runner for Postman. This skill includes production-ready scripts, security scanning, and CI/CD integration templates.
## ✨ Features
### 🔒 Security-First Design
- **Hardcoded secret detection** - Prevents API key leaks
- **SSL/TLS enforcement** - No insecure connections in production
- **Environment variable validation** - Ensures proper variable usage
- **PII exposure scanner** - Detects SSN, credit cards, etc.
- **Comprehensive security audit** - 8 critical security checks
### 🚀 Production-Ready Scripts
1. **`install-newman.sh`** - Automated Newman installation (global/local)
2. **`run-tests.sh`** - Test runner with security checks & multi-reporter support
3. **`security-audit.sh`** - Collection security scanner with detailed reports
### 📊 Multi-Reporter Support
- CLI (console output)
- HTML (beautiful reports via htmlextra)
- JSON (machine-readable)
- JUnit (CI/CD integration)
- Custom (build your own)
### 🔄 CI/CD Integration
Ready-to-use templates for:
- GitHub Actions
- GitLab CI
- Jenkins (Declarative & Scripted)
- CircleCI
- Bitbucket Pipelines
- Docker / Docker Compose
## 📦 Installation
### Install the Skill
```bash
# Clone this repository
git clone https://github.com/1999AZZAR/newman-skill.git ~/.openclaw/workspace/skills/newman
# Or download and extract
curl -L https://github.com/1999AZZAR/newman-skill/archive/main.tar.gz | tar -xz -C ~/.openclaw/workspace/skills/
```
### Install Newman CLI
```bash
# Run the installation script
~/.openclaw/workspace/skills/newman/scripts/install-newman.sh --global
# Or install manually
npm install -g newman newman-reporter-htmlextra
```
## 🚀 Quick Start
### 1. Export from Postman
**Collection:**
- Open Postman → Collections
- Click "..." → Export
- Choose "Collection v2.1"
- Save as `api-tests.json`
**Environment:**
- Environments → Click "..." → Export
- Save as `staging.json`
### 2. Run Tests
**Basic:**
```bash
newman run api-tests.json -e staging.json
```
**With reports:**
```bash
~/.openclaw/workspace/skills/newman/scripts/run-tests.sh \
api-tests.json \
staging.json \
--output ./reports \
--reporters cli,htmlextra,junit \
--bail
```
### 3. Security Audit
```bash
~/.openclaw/workspace/skills/newman/scripts/security-audit.sh \
api-tests.json \
staging.json
```
**Example output:**
```
🔒 Newman Security Audit
=======================
[1/8] Checking for hardcoded secrets...
[OK] No hardcoded secrets found
[2/8] Checking for Basic Auth credentials...
[WARNING] Basic Auth credentials found (ensure they use variables)
[3/8] Checking for insecure HTTP URLs...
[OK] All URLs use HTTPS
...
✅ Security audit passed!
```
## 📚 Documentation
- **[SKILL.md](SKILL.md)** - Main guide (quick start, workflows, best practices)
- **[INSTALLATION.md](INSTALLATION.md)** - Detailed setup instructions
- **[CI/CD Examples](references/ci-cd-examples.md)** - Integration templates
- **[Advanced Patterns](references/advanced-patterns.md)** - Custom reporters, validation, performance testing
## 🔐 Security Best Practices
### ❌ NEVER Do This
```json
{
"key": "API_KEY",
"value": "sk_live_abc123xyz", ❌ Hardcoded!
"enabled": true
}
```
### ✅ ALWAYS Do This
```json
{
"key": "API_KEY",
"value": "{{$processEnvironment.API_KEY}}", ✅ Environment variable!
"enabled": true
}
```
```bash
export API_KEY="sk_live_abc123xyz"
newman run collection.json -e environment.json
```
## 🎓 Use Cases
- **Regression Testing** - Automated API tests on every commit
- **Load Testing** - Performance validation with high iteration counts
- **Smoke Tests** - Scheduled health checks for production APIs
- **CI/CD Integration** - Run in GitHub Actions, GitLab CI, Jenkins
- **Multi-Environment** - Test dev/staging/prod with different configs
- **Security Compliance** - Validate API security before deployment
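For the load-testing use case above, iteration count and pacing are controlled directly from the CLI. A sketch using standard Newman flags — the numbers are illustrative, and the run is guarded for machines without Newman:

```shell
ITERATIONS=100   # how many times the whole collection is repeated
DELAY_MS=200     # pause between consecutive requests, in milliseconds

if command -v newman >/dev/null 2>&1; then
  # -n sets the iteration count; --delay-request spaces out requests.
  newman run api-tests.json -e staging.json -n "$ITERATIONS" --delay-request "$DELAY_MS"
else
  echo "newman not installed; skipping example"
fi
```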
## 🛠️ Scripts Reference
### install-newman.sh
```bash
# Global install (recommended)
./scripts/install-newman.sh --global
# Local install (project-specific)
./scripts/install-newman.sh --local
```
### run-tests.sh
```bash
./scripts/run-tests.sh <collection> <environment> [options]
Options:
-o, --output DIR Output directory (default: ./test-results)
-r, --reporters LIST Reporters (default: cli,htmlextra)
-b, --bail Stop on first failure
-v, --verbose Verbose output
-n, --iterations NUM Iteration count
-t, --timeout MS Request timeout
```
### security-audit.sh
```bash
./scripts/security-audit.sh <collection.json> [environment.json]
```
**Checks:**
- Hardcoded secrets/API keys
- Basic Auth credentials
- Insecure HTTP URLs
- SSL verification disabled
- PII exposure
- Variable usage best practices
- Timeout configurations
- Authentication patterns
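The hardcoded-secret check above boils down to an extended regex over the raw collection JSON. A self-contained sketch of the idea — the fixture file and its values are made up for illustration:

```shell
# Tiny throwaway fixture containing a hardcoded-looking value.
cat > /tmp/sample-collection.json <<'EOF'
{ "key": "API_KEY", "apikey": "sk_live_abc123xyz" }
EOF

# Same style of ERE the audit script uses: a secret-ish field name
# followed by a quoted literal of 8+ characters. -c counts matching lines.
MATCHES=$(grep -ciE '"(apikey|api_key|token|password|secret)"[[:space:]]*:[[:space:]]*"[a-zA-Z0-9_-]{8,}"' /tmp/sample-collection.json)
echo "suspicious lines: $MATCHES"
```

Note that a `{{variable}}` reference would not match, since `{` is outside the allowed value character class — which is exactly why the check rewards variable usage.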
## 🔄 CI/CD Example (GitHub Actions)
```yaml
name: API Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra
      - name: Run API Tests
        env:
          API_KEY: ${{ secrets.API_KEY }}
        run: |
          newman run collections/api-tests.json \
            -e environments/staging.json \
            --reporters cli,htmlextra,junit \
            --reporter-htmlextra-export ./reports/newman.html \
            --reporter-junit-export ./reports/newman.xml \
            --bail
      - name: Upload Reports
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-reports
          path: reports/
```
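In CI it can also be useful to gate on Newman's machine-readable output. A sketch of pulling the failed-assertion count out of a JSON report with plain grep — the field names match Newman's JSON reporter (`run.stats.assertions`), but the sample file and its numbers are invented:

```shell
# Inline stand-in for a report produced by `--reporters json`.
cat > /tmp/newman-report.json <<'EOF'
{"run":{"stats":{"assertions":{"total":12,"failed":2}}}}
EOF

# Grab the first "failed" counter, then strip it down to the number.
FAILED=$(grep -oE '"failed":[0-9]+' /tmp/newman-report.json | head -1 | grep -oE '[0-9]+')
echo "failed assertions: $FAILED"
```

In a real pipeline you would point this at the report your run exported and fail the job when the count is non-zero (or simply rely on Newman's own non-zero exit code, as the `--bail` example above does).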
## 📊 Package Contents
```
newman/
├── SKILL.md (7.5KB) - Main guide
├── INSTALLATION.md (4.4KB) - Setup instructions
├── README.md - This file
├── references/
│ ├── ci-cd-examples.md (9.5KB) - CI/CD templates
│ └── advanced-patterns.md (12.9KB) - Advanced usage
└── scripts/
├── install-newman.sh (1.4KB) - Auto-installer
├── run-tests.sh (5.5KB) - Test runner
└── security-audit.sh (4.8KB) - Security scanner
Total: ~46KB uncompressed
```
## 🤝 Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request
## 📄 License
MIT License - see [LICENSE](LICENSE) for details
## 🔗 Links
- **Newman Documentation**: https://learning.postman.com/docs/running-collections/using-newman-cli/
- **Postman Documentation**: https://learning.postman.com/
- **OpenClaw**: https://openclaw.ai
- **Issues**: https://github.com/1999AZZAR/newman-skill/issues
## 🙏 Acknowledgments
- [Postman Labs](https://www.postman.com/) for Newman
- [OpenClaw](https://openclaw.ai) for the skill framework
- Community contributors
---
**Version**: 1.0.0
**Created**: 2026-02-10
**Maintainer**: [@1999AZZAR](https://github.com/1999AZZAR)
**Status**: Production-Ready ✅
```
### _meta.json
```json
{
"owner": "1999azzar",
"slug": "newman",
"displayName": "Newman",
"latest": {
"version": "1.0.0",
"publishedAt": 1770939211714,
"commit": "https://github.com/openclaw/skills/commit/263767b4ecc005e9c1d3d099bb72da23a37782f9"
},
"history": []
}
```
### scripts/install-newman.sh
```bash
#!/bin/bash
# Newman installation and verification script
# Usage: ./install-newman.sh [--global|--local]
set -e
INSTALL_TYPE="${1:---global}"
echo "🚀 Installing Newman..."
if [ "$INSTALL_TYPE" = "--global" ]; then
    echo "Installing globally (may require sudo, depending on your npm prefix)..."
npm install -g newman newman-reporter-htmlextra
echo ""
echo "✅ Newman installed globally!"
echo ""
newman --version
elif [ "$INSTALL_TYPE" = "--local" ]; then
echo "Installing locally to current directory..."
if [ ! -f "package.json" ]; then
npm init -y
fi
npm install --save-dev newman newman-reporter-htmlextra
echo ""
echo "✅ Newman installed locally!"
echo ""
npx newman --version
echo ""
echo "Add to package.json scripts:"
echo ' "test": "newman run collections/api-tests.json"'
else
echo "❌ Invalid option: $INSTALL_TYPE"
echo "Usage: $0 [--global|--local]"
exit 1
fi
echo ""
echo "📦 Installed reporters:"
echo "- cli (built-in)"
echo "- htmlextra (installed)"
echo ""
echo "✅ Installation complete!"
echo ""
echo "Next steps:"
echo "1. Export your Postman collection (v2.1 format)"
echo "2. Export your environment (if using variables)"
echo "3. Run: newman run your-collection.json -e your-environment.json"
```
### scripts/run-tests.sh
```bash
#!/bin/bash
# Comprehensive Newman test runner with security best practices
# Usage: ./run-tests.sh <collection> <environment> [options]
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
COLLECTION=""
ENVIRONMENT=""
REPORTERS="cli,htmlextra"
OUTPUT_DIR="./test-results"
BAIL=false
VERBOSE=false
ITERATIONS=1
TIMEOUT=30000
# Function to print colored output
print_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Usage function
usage() {
cat << EOF
Usage: $0 <collection> <environment> [options]
Arguments:
collection Path to Postman collection JSON file
environment Path to environment JSON file
Options:
-o, --output DIR Output directory for reports (default: ./test-results)
-r, --reporters LIST Comma-separated reporters (default: cli,htmlextra)
-b, --bail Stop on first test failure
-v, --verbose Verbose output
-n, --iterations NUM Number of iterations (default: 1)
-t, --timeout MS Request timeout in milliseconds (default: 30000)
-h, --help Show this help message
Examples:
$0 api-tests.json staging.json
$0 api-tests.json staging.json --bail --verbose
$0 api-tests.json prod.json -o ./reports -r cli,json,junit
Environment Variables:
API_KEY API authentication key
DB_PASSWORD Database password
ENCRYPTION_KEY Encryption key for secure variables
EOF
exit 1
}
# Parse arguments
if [ $# -lt 2 ]; then
usage
fi
COLLECTION="$1"
ENVIRONMENT="$2"
shift 2
while [ $# -gt 0 ]; do
case "$1" in
-o|--output)
OUTPUT_DIR="$2"
shift 2
;;
-r|--reporters)
REPORTERS="$2"
shift 2
;;
-b|--bail)
BAIL=true
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-n|--iterations)
ITERATIONS="$2"
shift 2
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
-h|--help)
usage
;;
*)
print_error "Unknown option: $1"
usage
;;
esac
done
# Validation
if [ ! -f "$COLLECTION" ]; then
print_error "Collection file not found: $COLLECTION"
exit 1
fi
if [ ! -f "$ENVIRONMENT" ]; then
print_error "Environment file not found: $ENVIRONMENT"
exit 1
fi
# Check if Newman is installed
if ! command -v newman &> /dev/null; then
print_error "Newman is not installed!"
echo "Install with: npm install -g newman"
exit 1
fi
# Security checks
print_info "Running security checks..."
# Check for hardcoded secrets in collection
# Match a single or double quote before the candidate secret value.
if grep -qiE "(password|secret|apikey|token).*:.*[\"'][a-zA-Z0-9]{8,}" "$COLLECTION"; then
print_warning "Possible hardcoded secrets detected in collection!"
print_warning "Please use environment variables instead."
fi
# Check for required environment variables
REQUIRED_VARS=()
if grep -q "{{API_KEY}}" "$COLLECTION" 2>/dev/null; then
REQUIRED_VARS+=("API_KEY")
fi
if grep -q "{{DB_PASSWORD}}" "$COLLECTION" 2>/dev/null; then
REQUIRED_VARS+=("DB_PASSWORD")
fi
for var in "${REQUIRED_VARS[@]}"; do
if [ -z "${!var}" ]; then
print_warning "Environment variable $var is not set!"
print_warning "Tests may fail if this variable is required."
fi
done
# Create output directory
mkdir -p "$OUTPUT_DIR"
# Build Newman command
NEWMAN_CMD="newman run \"$COLLECTION\" -e \"$ENVIRONMENT\""
if [ "$BAIL" = true ]; then
NEWMAN_CMD="$NEWMAN_CMD --bail"
fi
if [ "$VERBOSE" = true ]; then
NEWMAN_CMD="$NEWMAN_CMD --verbose"
fi
NEWMAN_CMD="$NEWMAN_CMD -n $ITERATIONS"
NEWMAN_CMD="$NEWMAN_CMD --timeout-request $TIMEOUT"
NEWMAN_CMD="$NEWMAN_CMD --reporters $REPORTERS"
# Add reporter outputs
if [[ "$REPORTERS" == *"htmlextra"* ]]; then
NEWMAN_CMD="$NEWMAN_CMD --reporter-htmlextra-export \"$OUTPUT_DIR/newman-report.html\""
fi
if [[ "$REPORTERS" == *"json"* ]]; then
NEWMAN_CMD="$NEWMAN_CMD --reporter-json-export \"$OUTPUT_DIR/newman-report.json\""
fi
if [[ "$REPORTERS" == *"junit"* ]]; then
NEWMAN_CMD="$NEWMAN_CMD --reporter-junit-export \"$OUTPUT_DIR/newman-junit.xml\""
fi
# Color output
NEWMAN_CMD="$NEWMAN_CMD --color on"
# Print configuration
print_info "Configuration:"
echo " Collection: $COLLECTION"
echo " Environment: $ENVIRONMENT"
echo " Output Dir: $OUTPUT_DIR"
echo " Reporters: $REPORTERS"
echo " Bail: $BAIL"
echo " Verbose: $VERBOSE"
echo " Iterations: $ITERATIONS"
echo " Timeout: ${TIMEOUT}ms"
echo ""
# Run Newman
print_info "Running tests..."
echo ""
# Disable errexit around the run so a failing suite doesn't abort the
# script before the exit code is captured and reports are listed.
set +e
eval "$NEWMAN_CMD"
EXIT_CODE=$?
set -e
echo ""
# Report results
if [ $EXIT_CODE -eq 0 ]; then
print_success "All tests passed! ✓"
else
print_error "Tests failed with exit code: $EXIT_CODE"
fi
# Show report locations
if [ -f "$OUTPUT_DIR/newman-report.html" ]; then
print_info "HTML report: $OUTPUT_DIR/newman-report.html"
fi
if [ -f "$OUTPUT_DIR/newman-report.json" ]; then
print_info "JSON report: $OUTPUT_DIR/newman-report.json"
fi
if [ -f "$OUTPUT_DIR/newman-junit.xml" ]; then
print_info "JUnit report: $OUTPUT_DIR/newman-junit.xml"
fi
exit $EXIT_CODE
```
### scripts/security-audit.sh
```bash
#!/bin/bash
# Security scanner for Postman collections and environments
# Detects hardcoded secrets, weak configurations, and security issues
set -e
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m'
ISSUES_FOUND=0
print_error() {
    echo -e "${RED}[CRITICAL]${NC} $1"
    # Arithmetic assignment instead of ((ISSUES_FOUND++)), which returns
    # non-zero when the counter is 0 and would trip `set -e`.
    ISSUES_FOUND=$((ISSUES_FOUND + 1))
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_ok() {
echo -e "${GREEN}[OK]${NC} $1"
}
if [ $# -eq 0 ]; then
echo "Usage: $0 <collection.json> [environment.json]"
exit 1
fi
COLLECTION="$1"
ENVIRONMENT="${2:-}"
echo "🔒 Newman Security Audit"
echo "======================="
echo ""
# Check collection file
if [ ! -f "$COLLECTION" ]; then
print_error "Collection file not found: $COLLECTION"
exit 1
fi
echo "Scanning: $COLLECTION"
if [ -n "$ENVIRONMENT" ] && [ -f "$ENVIRONMENT" ]; then
echo "Environment: $ENVIRONMENT"
fi
echo ""
# 1. Check for hardcoded API keys/tokens
echo "[1/8] Checking for hardcoded secrets..."
if grep -qiE '"(apikey|api_key|token|password|secret)"[[:space:]]*:[[:space:]]*"[a-zA-Z0-9_-]{8,}"' "$COLLECTION"; then
print_error "Hardcoded secrets detected in collection!"
echo " Found in lines:"
grep -niE '"(apikey|api_key|token|password|secret)"[[:space:]]*:[[:space:]]*"[a-zA-Z0-9_-]{8,}"' "$COLLECTION" | head -5
else
print_ok "No hardcoded secrets found"
fi
# 2. Check for Basic Auth credentials
echo "[2/8] Checking for Basic Auth credentials..."
if grep -qiE '"username"[[:space:]]*:[[:space:]]*"[^{]' "$COLLECTION"; then
print_warning "Basic Auth credentials found (ensure they use variables)"
grep -niE '"username"[[:space:]]*:[[:space:]]*"[^{]' "$COLLECTION" | head -3
else
print_ok "No hardcoded Basic Auth found"
fi
# 3. Check for insecure HTTP URLs
echo "[3/8] Checking for insecure HTTP URLs..."
if grep -qE '"url"[[:space:]]*:[[:space:]]*"http://' "$COLLECTION"; then
print_warning "HTTP (non-HTTPS) URLs detected!"
grep -nE '"url"[[:space:]]*:[[:space:]]*"http://' "$COLLECTION" | head -5
else
print_ok "All URLs use HTTPS"
fi
# 4. Check SSL verification settings
echo "[4/8] Checking SSL verification settings..."
if grep -qiE '"disableStrictSSL"[[:space:]]*:[[:space:]]*true' "$COLLECTION"; then
print_error "SSL verification is disabled!"
echo " This is a critical security risk in production."
else
print_ok "SSL verification enabled"
fi
# 5. Check for exposed PII patterns
echo "[5/8] Checking for potential PII exposure..."
if grep -qiE '"(ssn|social_security|credit_card|passport)"[[:space:]]*:[[:space:]]*"[0-9]' "$COLLECTION"; then
print_error "Potential PII data found in collection!"
else
print_ok "No obvious PII patterns detected"
fi
# 6. Check environment file (if provided)
if [ -n "$ENVIRONMENT" ] && [ -f "$ENVIRONMENT" ]; then
echo "[6/8] Checking environment file..."
# Check for secrets in environment
if grep -qiE '"(password|secret|apikey)"[[:space:]]*:[[:space:]]*"[a-zA-Z0-9_-]{8,}"' "$ENVIRONMENT"; then
print_error "Hardcoded secrets in environment file!"
    # Single quotes: the shell must not expand $processEnvironment here.
    echo '  Use {{$processEnvironment.VAR_NAME}} instead'
fi
# Check for production credentials
if grep -qiE '"BASE_URL"[[:space:]]*:[[:space:]]*"https?://.*production' "$ENVIRONMENT"; then
print_warning "Production URL detected in environment file"
echo " Ensure this file is not committed to public repositories"
fi
else
echo "[6/8] Skipping environment check (no file provided)"
fi
# 7. Check for variable usage best practices
echo "[7/8] Checking variable usage..."
VAR_COUNT=$(grep -oE '\{\{[^}]+\}\}' "$COLLECTION" | wc -l)
if [ "$VAR_COUNT" -lt 3 ]; then
print_warning "Low variable usage detected (found: $VAR_COUNT)"
echo " Consider using variables for URLs, auth, and common values"
else
print_ok "Good variable usage (found: $VAR_COUNT variables)"
fi
# 8. Check for timeout configurations
echo "[8/8] Checking timeout configurations..."
if ! grep -qE '"timeout"[[:space:]]*:[[:space:]]*[0-9]+' "$COLLECTION"; then
print_warning "No timeout configuration found"
echo " Set timeouts to prevent hanging requests"
else
print_ok "Timeout configuration present"
fi
echo ""
echo "======================="
if [ $ISSUES_FOUND -gt 0 ]; then
echo -e "${RED}❌ Security audit failed: $ISSUES_FOUND critical issue(s) found${NC}"
echo ""
echo "Recommendations:"
echo "1. Remove all hardcoded secrets"
    echo '2. Use environment variables: {{$processEnvironment.VAR_NAME}}'
echo "3. Enable SSL verification in production"
echo "4. Use HTTPS for all endpoints"
echo "5. Store sensitive environments in secure vaults"
exit 1
else
echo -e "${GREEN}✅ Security audit passed!${NC}"
exit 0
fi
```