
Data Engineering Podcast

Putting Airflow Into Production With James Meickle - Episode 43

13 Aug 2018

Description

Summary

The theory behind how a tool is supposed to work and the realities of putting it into practice are often at odds with each other. Learning the pitfalls and best practices from someone who has gained that knowledge the hard way can save you from wasted time and frustration. In this episode James Meickle discusses his recent experience building a new installation of Airflow. He points out the strengths, design flaws, and areas of improvement for the framework. He also describes the design patterns and workflows that his team has built to allow them to use Airflow as the basis of their data science platform.

Preamble

- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- When you're ready to build your next pipeline you'll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you've got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I'm interviewing James Meickle about his experiences building a new Airflow installation.

Interview

- Introduction
- How did you get involved in the area of data management?
- What was your initial project requirement?
- What tooling did you consider in addition to Airflow?
- What aspects of the Airflow platform led you to choose it as your implementation target?
- Can you describe your current deployment architecture?
- How many engineers are involved in writing tasks for your Airflow installation?
- What resources were the most helpful while learning about Airflow design patterns?
- How have you architected your DAGs for deployment and extensibility?
- What kinds of tests and automation have you put in place to support the ongoing stability of your deployment?
- What are some of the dead-ends or other pitfalls that you encountered during the course of this project?
- What aspects of Airflow have you found to be lacking that you would like to see improved?
- What did you wish someone had told you before you started work on your Airflow installation?
- If you were to start over would you make the same choice? If Airflow wasn't available what would be your second choice?
- What are your next steps for improvements and fixes?

Contact Info

- @eronarn on Twitter
- Website
- eronarn on GitHub

Parting Question

- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

- Quantopian
- Harvard Brain Science Initiative
- DevOps Days Boston
- Google Maps API
- Cron
- ETL (Extract, Transform, Load)
- Azkaban
- Luigi
- AWS Glue
- Airflow
- Pachyderm (Podcast Interview)
- AirBnB
- Python
- YAML
- Ansible
- REST (Representational State Transfer)
- SAML (Security Assertion Markup Language)
- RBAC (Role-Based Access Control)
- Maxime Beauchemin Medium Blog
- Celery
- Dask (Podcast Interview)
- PostgreSQL (Podcast Interview)
- Redis
- CloudFormation
- Jupyter Notebook
- Qubole
- Astronomer (Podcast Interview)
- Gunicorn
- Kubernetes
- Airflow Improvement Proposals
- Python Enhancement Proposals (PEP)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
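For readers unfamiliar with the tool under discussion: Airflow is a Python framework in which pipelines are declared as DAGs of tasks, and the scheduler runs each task only after its upstream dependencies complete. That core idea reduces to a topological ordering of the dependency graph, which can be sketched with the standard library alone (the task names here are illustrative, not from the episode):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph, in the same spirit as wiring Airflow
# operators together: each key lists the tasks it depends on.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks so that every dependency precedes its dependents,
# which is the order a scheduler would be free to run them in.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "extract" comes first, "report" last
```

In real Airflow the same shape is expressed with operators and the `>>` dependency syntax inside a `DAG` definition, with the scheduler rather than your code deciding when each task actually runs.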
