Thomas is CEO and Co-Founder at Tasman, a boutique data analytics agency that delivers data infrastructure, data insight, and data teams for fast-growing clients.
In this talk we will share our secrets to rapidly deploying analytics for our clients. As a business, we focus on setting up data infrastructure, data models and reporting, and we do it as fast and as well as we can. But we also want to make sure that what we build is scalable, and still relevant long after we leave.
The modern data stack (think dbt, Fivetran/Stitch/Airbyte/Snowplow, Metabase/Looker/Lightdash, Hightouch/Census) is a phenomenally powerful ecosystem for this. It gives us modular data pipelines, growth engines, reproducible analysis and reverse ETL; beyond that, tools like dbt also institutionalise documentation and integrate engineering processes into data model delivery. But what are the best approaches? Does the ubiquity of these great tools mean that we need to think about data modelling in a different way now? How can we be sure that the pipeline we can quickly build right now is still relevant in a few months' time, let alone five years?
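To make "institutionalised documentation" concrete, here is a minimal, hypothetical dbt sketch (the model and column names are invented for illustration, not taken from any client project): descriptions and tests live in version-controlled YAML next to the model itself, so `dbt docs generate` publishes them and `dbt test` enforces them on every run.

```yaml
# models/marts/schema.yml -- hypothetical example; names are illustrative
version: 2

models:
  - name: fct_orders
    description: "One row per completed order, grained by order_id."
    columns:
      - name: order_id
        description: "Primary key of the order."
        tests:
          - unique
          - not_null
      - name: customer_id
        description: "Foreign key to dim_customers."
        tests:
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```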
At Tasman we have developed a way of solving this that we call Domain Modelling. We start from the business value and work our way back to the data, not the other way around. This avoids the classic startup data analytics problem: no more organically grown, highly complex data models that few understand and no one can maintain long-term. We will go through the principles, and look at a few very concrete examples of how we did this for some of our clients (Ecosia, PensionBee and The Collective Co-living). It is a story that any analytics engineer or data professional can benefit from immensely.
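As a flavour of what "working back from business value" can look like in a dbt project, here is a purely illustrative sketch, not Tasman's actual method (the talk covers the real principles): the domain-level model is written first, as the contract the business cares about, and only then do we decide which staging sources must exist to feed it. All model and column names below are hypothetical.

```sql
-- models/marts/subscriptions/fct_active_subscribers.sql
-- Hypothetical domain model: shaped by the business question
-- ("how many active subscribers do we have, and what are they worth?")
-- before choosing which raw sources feed the staging layer beneath it.
select
    s.subscriber_id,
    s.plan_tier,
    s.started_at,
    p.monthly_recurring_revenue
from {{ ref('stg_subscriptions') }} as s
left join {{ ref('stg_payments') }} as p
    on s.subscriber_id = p.subscriber_id
where s.status = 'active'
```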