#78 Minimum Viable Data Mesh? - Interview w/ Paul Andrew

Data Mesh Radio - A podcast by Data as a Product Podcast Network - Mondays

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please rate and review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Paul's data mesh blog series: https://mrpaulandrew.com/tag/data-mesh-vs-azure/
Paul's LinkedIn: https://www.linkedin.com/in/mrpaulandrew/
Paul's Twitter: @mrpaulandrew / https://twitter.com/mrpaulandrew

In this episode, Scott interviewed Paul Andrew, Technical Architect at Avanade and Microsoft Data Platform MVP.

Paul started by sharing his views on the chicken-and-egg problem of how much of your data platform to build out, and when, to support data product creation and ongoing operations. Do you build it after you've delivered a few data products? Entirely before? That discussion becomes even more complicated in a brownfield deployment that already has existing requirements, expectations, and templates.

For Paul, delivering a single data mesh data product on its own is not all that valuable - if you are going to go to the expense of implementing data mesh, you need to be able to satisfy use cases that cross domains. The greater value is in cross-domain interoperability: getting to a data product that wasn't possible before. And you need to deliver the data platform alongside those first 2-3 data products; otherwise you create a very hard-to-support data asset, not really a data product.

When thinking about a minimum viable data mesh, Paul views an approach leveraging DevOps and CI/CD - Continuous Integration/Continuous Delivery - as crucial. You need repeatability/reproducibility to really call something a data product. In a brownfield deployment, Paul sees leveraging existing templates for security and infrastructure as code as the best path forward - supplement what you've already built to make it usable for your new approach. You've already built out your security and compliance model; turn it into infrastructure as code to really reduce friction for new data products. (See the sketch below for one way this templating idea could look.)

For Paul, being disciplined early in your data mesh journey is key. A proof of concept for data mesh is often focused only on the data set or table itself, not actually generating a data product, much less a minimum viable...
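To make the infrastructure-as-code point concrete, here is a minimal, hypothetical sketch in Python of the kind of shared template Paul alludes to. All names (DataProductSpec, render_data_product_infra, the policy fields) are illustrative assumptions, not anything from the episode or from Avanade; the idea is that the platform team encodes the existing security and compliance model once, and every new data product inherits it instead of re-negotiating it.

```python
"""Illustrative sketch: stamping out data product infrastructure from a
shared template that bakes in pre-approved security/compliance defaults.
All names, naming rules, and policy fields are hypothetical."""

from dataclasses import dataclass, field
import json

# Security/compliance decisions made once by the platform team and
# inherited by every data product, rather than re-decided per team.
SECURITY_DEFAULTS = {
    "encryption_at_rest": True,
    "network_access": "private_endpoints_only",
    "audit_logging": True,
}

@dataclass
class DataProductSpec:
    domain: str                 # owning domain, e.g. "sales"
    name: str                   # product name, e.g. "orders"
    owner: str                  # accountable owner (email/alias)
    tags: dict = field(default_factory=dict)

def render_data_product_infra(spec: DataProductSpec) -> dict:
    """Render a deployable infrastructure definition from the spec.
    Deterministic: running it twice yields the same result, which is
    the repeatability CI/CD needs."""
    return {
        "resource_group": f"rg-{spec.domain}-{spec.name}",
        "storage_account": f"st{spec.domain}{spec.name}"[:24],  # assumed naming rule
        "policies": SECURITY_DEFAULTS,
        "tags": {"domain": spec.domain, "owner": spec.owner, **spec.tags},
    }

def check_compliance(infra: dict) -> None:
    """A gate a CI pipeline could run before any deployment."""
    assert infra["policies"]["encryption_at_rest"], "encryption must stay on"
    assert infra["policies"]["network_access"] == "private_endpoints_only"
    assert "owner" in infra["tags"], "every data product needs an owner"

if __name__ == "__main__":
    spec = DataProductSpec(domain="sales", name="orders",
                           owner="team-sales@example.com")
    infra = render_data_product_infra(spec)
    check_compliance(infra)
    print(json.dumps(infra, indent=2))
```

Because the rendering is deterministic and the compliance gate can run in CI before any deployment, each new data product is reproducible from its spec - which, in Paul's framing, is part of what separates a supportable data product from a one-off data asset.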