# About
Alex Huggler
I am a senior data and analytics engineer with six years at AT&T, building production data platforms on Azure and Databricks. My work spans PySpark ingestion, dbt analytics models, BI delivery, and the operational side of running pipelines that finance and operations teams depend on.
I am happiest when a hard, ambiguous business problem gets translated into a clear data model and a reliable pipeline. Recent work includes fraud-signal detection that drove ~$900K/yr in loss reduction and analytics ownership of a $2.2B portfolio. I am currently focused on streaming and lakehouse patterns, applied AI for data tooling, and senior-level remote roles.
I write code that is boring on purpose: well-tested, instrumented, and easy for the next engineer to understand. I treat documentation, runbooks, and data contracts as part of the deliverable, not nice-to-haves.
## Stack proficiency
| Area | Tools | Level |
|---|---|---|
| Compute | PySpark, Apache Spark Structured Streaming, Databricks | Expert |
| Cloud | Azure (ADF, ADLS Gen2, Synapse), AWS (S3, Glue, Athena) | Advanced |
| Lakehouse | Delta Lake, Apache Iceberg, Parquet | Advanced |
| Warehouse | Snowflake, Synapse Dedicated SQL | Advanced |
| Modeling | dbt, SQL, dimensional modeling | Expert |
| Orchestration | Apache Airflow, ADF, Databricks Workflows | Advanced |
| Streaming | Apache Kafka, Spark Structured Streaming | Advanced |
| Languages | Python, SQL, Bash (some Scala) | Expert |
| Infra | Terraform, Docker, GitHub Actions | Working |
| ML / AI | Azure ML, Anthropic API, prompt engineering for data tooling | Working |
## Certifications
- Microsoft Certified: Azure Data Engineer Associate (DP-203)
- Microsoft Certified: Azure Data Scientist Associate (DP-100)
## Contact
- GitHub: github.com/AlexHuggler
- LinkedIn: www.linkedin.com/in/alexhuggler
- Resume: download PDF