Recent Posts:
Here's our December 2020 roundup of links from across the web that could be relevant to you:
This long-form post on the dbt blog is a must-read. Titled “The Modern Data Stack: Past, Present, and Future,” it answers the question that Tristan Handy has been asking himself for the past two years: “What happened to the massive innovation we saw from 2012-2016?” His carefully thought-out analysis covers the natural cycles of technological shifts, defines the phase we are in as a ‘deployment’ one, and points out high-impact opportunity areas for the next few years, which you might find particularly useful if you are considering launching a new product.
Here's our November 2020 roundup of good reads and podcast episodes that might be relevant for your career in data:
Here's our October 2020 roundup of good reads and podcast episodes that might be relevant to you as a data professional:
Created by Berlin-based developer Jan Oberhauser in 2019, n8n presents itself as “a free and open workflow automation tool”. Think of it as a locally hosted Zapier on steroids.
Here's our September 2020 roundup of good reads and podcast episodes that might be relevant to you as a data professional:
Our founder Pete Soderling co-authored a follow-on piece to his previous post, together with Great Expectations core contributor Abe Gong and Sarah Catanzaro, Partner at Amplify Partners, for which they interviewed the makers of some of the hottest data tools. The focus is still the same: rather than what their data tools can do, we hear about what they don't do, as a way to better understand how they fit together. From ApertureData to Xplenty, this new installment covers 21 new tools, and you can read it here.
Here's our August 2020 roundup of good reads and great podcast episodes for anyone working with data:
AI engineer and author J.T. Wolohan was recently a guest on Heroku's Code[ish] podcast to discuss his book, “Mastering Large Datasets with Python.” Listen to the episode here or read the transcript for some practical advice on using Python to deal with massive datasets, especially in the context of machine learning.
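To give a flavor of the kind of advice discussed, here is a minimal sketch (not taken from the book or the episode) of the map-style multiprocessing pattern often used to push Python past a single core; the file name and per-line function are hypothetical:

```python
from multiprocessing import Pool

def count_words(line: str) -> int:
    # Hypothetical per-record work: count the words in one log line.
    return len(line.split())

def total_words(path: str, workers: int = 4) -> int:
    # Stream the file lazily so it never has to fit in memory,
    # and fan the per-line work out across worker processes.
    with open(path) as lines, Pool(workers) as pool:
        # chunksize batches lines to reduce inter-process overhead.
        return sum(pool.imap_unordered(count_words, lines, chunksize=10_000))

if __name__ == "__main__":
    print(total_words("server.log"))
```

The same map-then-combine shape scales from a laptop's process pool to a cluster, which is one of the book's recurring themes.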
With more than 1,300 stars on GitHub, Apache Hudi is a great open-source solution for companies with large analytical datasets that need to quickly ingest data onto HDFS or cloud storage.
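As a rough illustration, here is a minimal PySpark sketch of writing a batch into a Hudi table on cloud storage. It assumes a Spark session with the Hudi bundle on the classpath; the bucket paths, table name, and field names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-ingest").getOrCreate()

# Hypothetical raw input; any Spark DataFrame works here.
df = spark.read.json("s3://my-bucket/raw/events/")

hudi_options = {
    "hoodie.table.name": "events",
    "hoodie.datasource.write.recordkey.field": "event_id",
    "hoodie.datasource.write.partitionpath.field": "event_date",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert the batch into a Hudi table backed by cloud storage.
(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3://my-bucket/lake/events/"))
```

Hudi then manages the file layout, indexing, and incremental pulls on top of that storage path.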