senior-data-engineer

World-class data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, or implementing data governance.

Packaged view

This page reorganizes the original catalog entry to put fit, installability, and workflow context first; the original raw source appears below.

Stars: 0
Hot score: 74
Updated: March 20, 2026
Overall rating: C
Composite score: 0.8
Best-practice grade: N/A

Install command

npx @skill-hub/cli install merllinsbeard-ai-claude-skills-collection-senior-data-engineer
Tags: data-engineering, etl, data-pipelines, dataops, big-data

Repository

merllinsbeard/ai-claude-skills-collection

Skill path: development/skills-repo/engineering-team/senior-data-engineer

Open repository

Best for

Primary workflow: Analyze Data & AI.

Technical facets: Data / AI.

Target audience: everyone.

License: Unknown.

Original source

Catalog source: SkillHub Club.

Repository owner: merllinsbeard.

This is a mirrored public skill entry; review the repository before installing it into production workflows.

What it helps with

  • Install senior-data-engineer into Claude Code, Codex CLI, Gemini CLI, or OpenCode workflows
  • Review https://github.com/merllinsbeard/ai-claude-skills-collection before adding senior-data-engineer to shared team environments
  • Use senior-data-engineer for data engineering workflows such as pipeline design, ETL/ELT, data quality, and DataOps

Works across

Claude Code, Codex CLI, Gemini CLI, OpenCode

Favorites: 0.

Sub-skills: 0.

Aggregator: No.

Original source / Raw SKILL.md

---
name: senior-data-engineer
description: World-class data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, or implementing data governance.
---

# Senior Data Engineer

World-class senior data engineer skill for production-grade AI/ML/Data systems.

## Quick Start

### Main Capabilities

```bash
# Orchestrate a pipeline run
python scripts/pipeline_orchestrator.py --input data/ --output results/

# Validate data quality
python scripts/data_quality_validator.py --input data/ --output reports/

# Analyze ETL performance
python scripts/etl_performance_optimizer.py --input data/ --output results/ --config config.yaml
```

## Core Expertise

This skill covers world-class capabilities in:

- Advanced production patterns and architectures
- Scalable system design and implementation
- Performance optimization at scale
- MLOps and DataOps best practices
- Real-time processing and inference
- Distributed computing frameworks
- Model deployment and monitoring
- Security and compliance
- Cost optimization
- Team leadership and mentoring

## Tech Stack

**Languages:** Python, SQL, R, Scala, Go
**ML Frameworks:** PyTorch, TensorFlow, Scikit-learn, XGBoost
**Data Tools:** Spark, Airflow, dbt, Kafka, Databricks
**LLM Frameworks:** LangChain, LlamaIndex, DSPy
**Deployment:** Docker, Kubernetes, AWS/GCP/Azure
**Monitoring:** MLflow, Weights & Biases, Prometheus
**Databases:** PostgreSQL, BigQuery, Snowflake, Pinecone
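
To show how a few of these pieces typically fit together, here is a minimal orchestration sketch, assuming Airflow 2.x with the bundled Bash operator and a dbt project at `/opt/dbt`; the paths, task names, and dbt project are illustrative, not part of this skill's scripts:

```python
# Minimal daily ELT DAG: land raw data, then run dbt models and tests.
# Assumes Airflow 2.x; the dbt project path and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # use schedule_interval on pre-2.4 Airflow releases
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python scripts/pipeline_orchestrator.py --input data/ --output results/",
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    ingest >> dbt_run >> dbt_test
```

In practice the ingest step would be split per source and the dbt tasks grouped, but the dependency chain stays the same.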

## Reference Documentation

### 1. Data Pipeline Architecture

Comprehensive guide available in `references/data_pipeline_architecture.md` covering:

- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies

### 2. Data Modeling Patterns

Complete workflow documentation in `references/data_modeling_patterns.md` including:

- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures

### 3. DataOps Best Practices

Technical reference guide in `references/dataops_best_practices.md` with:

- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability

## Production Patterns

### Pattern 1: Scalable Data Processing

Enterprise-scale data processing with distributed computing (see the sketch after this list):

- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring
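
A minimal batch slice of this pattern, assuming PySpark is available; the bucket paths and the `user_id` / `event_date` columns are illustrative, and real quality rules would live in `scripts/data_quality_validator.py` or a dedicated validation tool:

```python
# Read a raw partition, apply a basic quality gate, write partitioned output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scalable_batch_job").getOrCreate()

raw = spark.read.parquet("s3://bucket/raw/events/")

# Quality gate: drop rows missing the join key, and fail fast if too much data is rejected.
clean = raw.filter(F.col("user_id").isNotNull())
rejected_ratio = 1 - clean.count() / max(raw.count(), 1)
if rejected_ratio > 0.05:
    raise ValueError(f"Rejected {rejected_ratio:.1%} of rows; aborting load")

(clean
 .repartition("event_date")            # spread work horizontally by partition key
 .write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://bucket/curated/events/"))
```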

### Pattern 2: ML Model Deployment

Production ML system with high availability (a drift-check sketch follows this list):

- Model serving with low latency
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines
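
For the drift-detection bullet, one common check is the Population Stability Index (PSI) over a feature or score distribution. A self-contained NumPy sketch follows; the 10-bin setup and the 0.2 alert threshold are conventional defaults, not something this skill prescribes:

```python
# Population Stability Index between training-time and live distributions.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf        # catch values outside the reference range
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)       # avoid log(0) on empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.8, 1.0, 10_000)       # shifted distribution -> drift
print(population_stability_index(train_scores, live_scores))  # well above the 0.2 rule of thumb
```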

### Pattern 3: Real-Time Inference

High-throughput inference system (a micro-batching sketch follows this list):

- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization
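
The batching bullet usually means micro-batching: hold each request for a few milliseconds so one model call serves many callers. A minimal asyncio sketch, with the batch size, wait window, and `predict_fn` contract chosen purely for illustration:

```python
# Micro-batching: trade a few milliseconds of queueing for much higher throughput.
import asyncio
from typing import Any, Callable, List

class MicroBatcher:
    def __init__(self, predict_fn: Callable[[List[Any]], List[Any]],
                 max_batch_size: int = 32, max_wait_ms: float = 5.0):
        self.predict_fn = predict_fn
        self.max_batch_size = max_batch_size
        self.max_wait = max_wait_ms / 1000.0
        self.queue: asyncio.Queue = asyncio.Queue()

    async def submit(self, item: Any) -> Any:
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((item, fut))
        return await fut

    async def run(self) -> None:
        while True:
            batch = [await self.queue.get()]       # block until at least one request arrives
            deadline = asyncio.get_running_loop().time() + self.max_wait
            while len(batch) < self.max_batch_size:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    batch.append(await asyncio.wait_for(self.queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            outputs = self.predict_fn([item for item, _ in batch])  # one call for the whole batch
            for (_, fut), out in zip(batch, outputs):
                fut.set_result(out)

async def demo() -> None:
    batcher = MicroBatcher(lambda xs: [x * 2 for x in xs])   # stand-in for a real model
    worker = asyncio.create_task(batcher.run())
    print(await asyncio.gather(*(batcher.submit(i) for i in range(8))))
    worker.cancel()

asyncio.run(demo())
```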

## Best Practices

### Development

- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration

### Production

- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging
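
Feature flags and canary releases both depend on stable bucketing, so a user keeps the same treatment across requests. A minimal sketch of deterministic percentage rollout (the flag name and 100-bucket split are illustrative):

```python
# Deterministic percentage rollout: the same user always gets the same decision for a flag.
import hashlib

def is_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout_pct

print(is_enabled("new_ranker", "user-42", 10))   # roughly 10% of users see the feature
```

Hashing on `flag:user_id` keeps buckets independent across flags, so ramping one flag does not reshuffle users in another.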

### Team Leadership

- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration

## Performance Targets

**Latency:**
- P50: < 50ms
- P95: < 100ms
- P99: < 200ms

**Throughput:**
- Requests/second: > 1000
- Concurrent users: > 10,000

**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
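
These targets only mean something if every service measures them the same way. A small sketch of computing the percentiles and error rate above from raw request data (the sample values are made up):

```python
# Compute the latency percentiles and error rate referenced by the targets above.
import numpy as np

latencies_ms = np.array([12, 18, 25, 40, 55, 61, 80, 95, 120, 310])   # made-up request latencies
statuses = np.array([200, 200, 200, 200, 200, 200, 200, 500, 200, 200])

p50, p95, p99 = np.percentile(latencies_ms, [50, 95, 99])
error_rate = np.mean(statuses >= 500)

print(f"P50={p50:.0f}ms  P95={p95:.0f}ms  P99={p99:.0f}ms  error_rate={error_rate:.2%}")
```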

## Security & Compliance

- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management
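
For the PII bullet, a common minimal technique is keyed pseudonymization: identifiers can still be joined across tables, but the raw value is never stored. A standard-library sketch; the environment variable is an assumption for brevity, and in practice the key should come from a secret manager:

```python
# Keyed pseudonymization of a direct identifier (e.g. email) with HMAC-SHA256.
import hashlib
import hmac
import os

PSEUDONYMIZATION_KEY = os.environ["PII_HASH_KEY"].encode()   # assumed env var; use a secret manager in production

def pseudonymize(value: str) -> str:
    """Deterministic, keyed hash: same input + same key -> same token, but not reversible."""
    normalized = value.strip().lower()
    return hmac.new(PSEUDONYMIZATION_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# e.g. pseudonymize("Jane.Doe@example.com") == pseudonymize("jane.doe@example.com")
```

Because the hash is keyed, tokens cannot be reversed by brute-forcing common emails without also obtaining the key.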

## Common Commands

```bash
# Development
python -m pytest tests/ -v --cov
python -m black src/
python -m pylint src/

# Training
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth

# Deployment
docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/

# Monitoring
kubectl logs -f deployment/service
python scripts/health_check.py
```

## Resources

- Advanced Patterns: `references/data_pipeline_architecture.md`
- Implementation Guide: `references/data_modeling_patterns.md`
- Technical Reference: `references/dataops_best_practices.md`
- Automation Scripts: `scripts/` directory

## Senior-Level Responsibilities

As a world-class senior professional:

1. **Technical Leadership**
   - Drive architectural decisions
   - Mentor team members
   - Establish best practices
   - Ensure code quality

2. **Strategic Thinking**
   - Align with business goals
   - Evaluate trade-offs
   - Plan for scale
   - Manage technical debt

3. **Collaboration**
   - Work across teams
   - Communicate effectively
   - Build consensus
   - Share knowledge

4. **Innovation**
   - Stay current with research
   - Experiment with new approaches
   - Contribute to community
   - Drive continuous improvement

5. **Production Excellence**
   - Ensure high availability
   - Monitor proactively
   - Optimize performance
   - Respond to incidents


---

## Referenced Files

> The following files are referenced in this skill and included for context.

### scripts/pipeline_orchestrator.py

```python
#!/usr/bin/env python3
"""
Pipeline Orchestrator
Production-grade tool for senior data engineer
"""

import os
import sys
import json
import logging
import argparse
from pathlib import Path
from typing import Dict, List, Optional
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class PipelineOrchestrator:
    """Production-grade pipeline orchestrator"""
    
    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")
    
    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True
    
    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")
        
        try:
            self.validate_config()
            
            # Main processing; keep the output alongside the run metadata
            self.results['result'] = self._execute()
            
            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()
            
            logger.info("Processing completed successfully")
            return self.results
            
        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise
    
    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}

def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="Pipeline Orchestrator"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
    
    args = parser.parse_args()
    
    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)
    
    try:
        config = {
            'input': args.input,
            'output': args.output
        }
        
        processor = PipelineOrchestrator(config)
        results = processor.process()
        
        print(json.dumps(results, indent=2))
        sys.exit(0)
        
    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)

if __name__ == '__main__':
    main()

```

### scripts/data_quality_validator.py

```python
#!/usr/bin/env python3
"""
Data Quality Validator
Production-grade tool for senior data engineer
"""

import os
import sys
import json
import logging
import argparse
from pathlib import Path
from typing import Dict, List, Optional
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class DataQualityValidator:
    """Production-grade data quality validator"""
    
    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")
    
    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True
    
    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")
        
        try:
            self.validate_config()
            
            # Main processing; keep the output alongside the run metadata
            self.results['result'] = self._execute()
            
            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()
            
            logger.info("Processing completed successfully")
            return self.results
            
        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise
    
    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}

def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="Data Quality Validator"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
    
    args = parser.parse_args()
    
    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)
    
    try:
        config = {
            'input': args.input,
            'output': args.output
        }
        
        processor = DataQualityValidator(config)
        results = processor.process()
        
        print(json.dumps(results, indent=2))
        sys.exit(0)
        
    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)

if __name__ == '__main__':
    main()

```

### scripts/etl_performance_optimizer.py

```python
#!/usr/bin/env python3
"""
ETL Performance Optimizer
Production-grade tool for senior data engineer
"""

import os
import sys
import json
import logging
import argparse
from pathlib import Path
from typing import Dict, List, Optional
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class EtlPerformanceOptimizer:
    """Production-grade etl performance optimizer"""
    
    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")
    
    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True
    
    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")
        
        try:
            self.validate_config()
            
            # Main processing; keep the output alongside the run metadata
            self.results['result'] = self._execute()
            
            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()
            
            logger.info("Processing completed successfully")
            return self.results
            
        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise
    
    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}

def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="Etl Performance Optimizer"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
    
    args = parser.parse_args()
    
    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)
    
    try:
        config = {
            'input': args.input,
            'output': args.output
        }
        
        processor = EtlPerformanceOptimizer(config)
        results = processor.process()
        
        print(json.dumps(results, indent=2))
        sys.exit(0)
        
    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)

if __name__ == '__main__':
    main()

```

### references/data_pipeline_architecture.md

```markdown
# Data Pipeline Architecture

## Overview

World-class data pipeline architecture for senior data engineer.

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations

### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

```

### references/data_modeling_patterns.md

```markdown
# Data Modeling Patterns

## Overview

World-class data modeling patterns for senior data engineer.

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations

### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

```

### references/dataops_best_practices.md

```markdown
# DataOps Best Practices

## Overview

World-class dataops best practices for senior data engineer.

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations

### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

```
