Spry for Data Engineering

Design Data Pipelines as Markdown Documents

Combine SQL, Bash, JSON, and command outputs deterministically.
Express lineage, dependencies, and metadata inline, so every run stays reproducible and human-readable.

Pipelines as Literate Code

Think of Spry as dbt + Makefile + Markdown combined.
Write your data transformation logic, document why decisions were made, and execute everything in one place.

Example Pipeline Structure

# Data Ingestion Pipeline
## Extract raw data
```bash
curl -s https://api.example.com/data -o raw.json
```
## Transform and load
```sql
CREATE TABLE cleaned AS SELECT ...
```
## Validate results
```sql
SELECT COUNT(*) FROM cleaned WHERE ...
```
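The extract, transform, and validate cells above can be sketched end to end. This is an illustration only, using Python's `sqlite3` standard library as a stand-in warehouse; in Spry the SQL cells run directly, and the table and column names here are made up.

```python
# Sketch of the transform-and-validate cells above, using Python's
# sqlite3 stdlib as a stand-in warehouse. Illustrative names only.
import sqlite3

# Pretend this came from the "Extract raw data" cell.
raw = [{"id": 1, "value": " 42 "}, {"id": 2, "value": None}]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw (id INTEGER, value TEXT)")
conn.executemany("INSERT INTO raw VALUES (?, ?)",
                 [(r["id"], r["value"]) for r in raw])

# Transform and load: drop nulls, trim whitespace.
conn.execute("""
    CREATE TABLE cleaned AS
    SELECT id, TRIM(value) AS value
    FROM raw
    WHERE value IS NOT NULL
""")

# Validate results: no empty values should survive the transform.
(bad,) = conn.execute("SELECT COUNT(*) FROM cleaned WHERE value = ''").fetchone()
assert bad == 0, f"{bad} empty values in cleaned"
```

Each step maps onto one Markdown cell, so the documentation and the executable logic never drift apart.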

Built-in Lineage

Every cell knows what came before it. Spry tracks data dependencies automatically, giving you lineage for free.

  • Automatic dependency resolution
  • Visual lineage graphs
  • Impact analysis built-in
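Spry's internals are not shown here, but the idea behind automatic dependency resolution can be sketched: each cell declares the artifacts it reads and writes, and execution order falls out of a topological sort. The cell and artifact names below are hypothetical.

```python
# Minimal sketch of automatic dependency resolution: a cell depends on
# whichever cell produced each artifact it reads, and a topological
# sort yields a valid execution order. Not Spry's actual internals.
from graphlib import TopologicalSorter

cells = {
    "extract":   {"reads": [],           "writes": ["raw.json"]},
    "transform": {"reads": ["raw.json"], "writes": ["cleaned"]},
    "validate":  {"reads": ["cleaned"],  "writes": []},
}

# Map each artifact to the cell that produces it.
producer = {out: name for name, c in cells.items() for out in c["writes"]}

# Build the dependency graph: cell -> set of upstream cells.
graph = {name: {producer[r] for r in c["reads"]} for name, c in cells.items()}

order = list(TopologicalSorter(graph).static_order())
# extract runs before transform, which runs before validate
```

The same graph that drives scheduling also answers lineage questions: walk it forward from a cell to see everything its outputs feed.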

Type-Safe Execution

Zod validation ensures your pipelines are deterministic and predictable. No surprises in production.

  • Schema validation at runtime
  • Type inference from data
  • Early error detection
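Spry uses Zod on the TypeScript side; the idea behind runtime schema validation can be illustrated with a tiny hand-rolled check in Python. The schema and field names are made up for the example.

```python
# Illustration of runtime schema validation: check each row against a
# declared schema and fail fast on malformed data, instead of letting
# bad rows flow downstream. (A stand-in for Zod-style validation.)
def validate_row(row: dict) -> dict:
    schema = {"id": int, "value": str}  # hypothetical cell schema
    for field, typ in schema.items():
        if field not in row:
            raise ValueError(f"missing field: {field}")
        if not isinstance(row[field], typ):
            raise TypeError(
                f"{field}: expected {typ.__name__}, got {type(row[field]).__name__}"
            )
    return row

validate_row({"id": 1, "value": "ok"})  # passes

try:
    validate_row({"id": "1", "value": "bad"})  # wrong type for id
except TypeError as e:
    print(e)  # caught before it reaches production tables
```

Validating at cell boundaries is what makes the pipeline predictable: a schema mismatch surfaces at the cell that introduced it, not three transforms later.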

Why Data Engineers Choose Spry

No YAML Hell

Just write Markdown. No configuration files, no complex DSLs. Your pipelines are readable by humans.

Version Everything

Git is your version control. Every pipeline run is reproducible. Roll back with confidence.

Test Inline

Write assertions as Markdown cells. Test data quality, row counts, and business logic in the same file.
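The kind of inline check described above can be sketched as assertions over query results. Again `sqlite3` stands in for the warehouse, and the `orders` table is illustrative; in Spry these checks would live as cells in the same Markdown file as the pipeline.

```python
# Inline data-quality assertions, sketched against a sqlite3 stand-in:
# row counts, value constraints, and a business-logic total.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.99), (2, 25.00), (3, 4.50)])

# Row count: the load should have produced at least one row.
(count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
assert count > 0, "orders is empty"

# Data quality: no negative amounts.
(bad,) = conn.execute("SELECT COUNT(*) FROM orders WHERE amount < 0").fetchone()
assert bad == 0, f"{bad} orders with negative amounts"

# Business logic: the total matches what the upstream system reported.
(total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
assert abs(total - 39.49) < 1e-9, f"unexpected total: {total}"
```

A failed assertion stops the run at the offending cell, so the Markdown file doubles as a data-quality report.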

Common Use Cases

1. ETL/ELT Pipelines

Extract from APIs, transform with SQL, load into warehouses. All documented inline.

2. Data Quality Checks

Run validation queries and capture results as evidence.

3. Migration Scripts

Document schema changes alongside the SQL that implements them.

4. Analytics Workflows

Combine data extraction, transformation, and visualization in one notebook.

5. Scheduled Jobs

Run Spry pipelines on cron or with orchestration tools.