MJR Analytics is now Rittman Analytics ... and an Update on our First Six Months

As of last Friday our company name has changed from MJR Analytics to Rittman Analytics. Nothing else has changed in terms of ownership or the services we provide, but we wanted to take the opportunity to put our name and reputation front-and-centre in our branding and website. All emails to our old @mjr-analytics.com domain will automatically redirect, and any links pointing to https://www.mjr-analytics.com will automatically redirect to http://rittmananalytics.com.

Name change aside, since our launch last September we've worked with a number of great customers together with our vendor partners, with five of those projects written up as case studies and customer quotes in the Customers section of our website:

  • Colourpop : eCommerce and customer analytics using Shopify, Redshift and Looker

  • Let's Do This : eCommerce/conversion analytics using Segment, Redshift and Looker

  • Florence : Operational analytics, retention and RFM segmentation using Looker and PostgreSQL

  • INTO University Partnerships : Digital analytics strategy for Microsoft Power BI and Heap Analytics

  • Qubit : Data Engineering and customer/personalization analytics using Google BigQuery and Looker

We also co-chaired and helped organize the recent Looker London Dev Meetup together with Jon Palmer from GoCardless, and presented a short session on Improving your User Engagement with Looker’s new System Activity Model - look out for the next meetup in June!

Looker is our primary strategic vendor partner, and since our launch last year we’ve also partnered with selected vendors whose products complement Looker:

  • Segment for integrating event-level data sources, single customer view and data quality

  • Fivetran for no-ops data pipelines

  • Qubit for consumer behavioural, segmentation and personalization analytics

  • Snowflake for no-ops cloud data warehousing-as-a-service

  • Oracle for enterprise-level analytics and ERP/CRM integration

We’re also now recruiting for our first permanent consulting roles to work alongside myself and our team of experienced contract developers and data scientists, and we're looking forward to being one of the sponsors of the upcoming Looker Join 2019 event in London - see some of you there, if not sooner!

Drill to Detail Podcast Returns with Ep.60 'A Deeper Look Into Looker' With Special Guest Lloyd Tabb

One of our CEO and founder Mark Rittman’s side projects is the Drill to Detail Podcast, a series in which Mark interviews a special guest each episode about their work, their products and their take on the analytics, big data and data management industry.

Drill to Detail has now reached its 60th episode and starts its third season with Mark joined by Lloyd Tabb, Founder, CTO and Chairman of Looker, to talk about the founding story of Looker and LookML; query latency and semantic models; analytic engines and code IDEs; analytics developer workflows and the rise of elastically scalable cloud databases; packaged applications and embedded analytics; and why learning (and loving) are the long-term keys to analytics and business success.

Looker London Meetup on Thursday, February 7th 2019 - Registration Now Open!

If you’re a developer or end-user working with the Looker Data Platform and in London on Thursday 7th February 2019, we’re very pleased to be launching, together with Jon Palmer from GoCardless, the first-ever Looker London Dev Meetup, sponsored by the good folks at Looker and hosted at GoCardless’ HQ at 65 Goswell Road, London EC1V 7EN.

The aim is to create a space for showcasing and discussing what data people in London are building in and around the Looker ecosystem, and for this first meetup we have three great customer and partner speakers sharing their insights on Looker development and project implementations:

  • "Scale securely with the Looker API" Dylan Baker, Consultant

  • "Internal and External Multi-tenancy with Looker" - Ian Thompson, King

  • "Increase End-User Engagement with Looker 6 System Activity Model" - Mark Rittman, MJR Analytics

followed by a panel session where the speakers are joined by one of our guests from Looker:

  • “Data Analytics Panel - Hear from Industry experts and Looker users from GoCardless, King, MJR analytics and Looker.”

Registration and attendance are free but spaces are limited, so make sure you register soon if you’d like to come along. The meetup starts at 5.30pm with drinks, food and introductions from Jon; the full agenda is on the meetup page and we’re aiming to wrap up around 9pm with closing comments from me and a short survey on future meetup content.

MJR Analytics Sessions at UKOUG Tech'18, Liverpool ACC, 3rd-5th December 2018

Oracle Openworld may have San Francisco and the Moscone Center, but you can’t beat Liverpool and the wind whipping in off the Mersey in early December, so we’re very pleased to be presenting once again at the UK Oracle User Group Technology Conference & Exhibition running next week at the Liverpool ACC, December 3rd-5th 2018.

Mark Rittman is presenting two sessions on Oracle’s new autonomous analytics and data warehousing platforms:

  • “Planning a Strategy for Autonomous Analytics & Data Warehouse Cloud”, Monday 3rd December 2:30 PM - 3:15 PM in Room 13

    “As Oracle Analytics and Data Warehousing becomes self-driving and autonomous, the need for a strategy within your BI function becomes all the more important. How you deliver BI content to your users, the skills your developers now need and the most efficient way to manage your cloud estate are vital components of an autonomous cloud analytics strategy; this session will explain what’s changed, what’s significant and what are the implications of that change.

    “By reference to today’s Oracle BI technology and applications and explaining how they now run autonomously as services in Oracle Cloud we’ll cover what are the key changes and its implications, what this means in terms of budgets and planning your investment, what are the key technology and process changes, what this means for team resourcing and roles - and just as importantly, what can be safely ignored for now.”

  • “BI Developer to Data Engineer with Oracle Analytics Cloud Data Lake Edition”, Tuesday 4th December 12:35 PM - 1:20 PM Room 11A

    “In this session, we'll look at the role of the data engineer in designing, provisioning, and enabling an Oracle Cloud data lake using Oracle Analytics Cloud Data Lake Edition. We’ll also examine the use of data flow and data pipeline authoring tools and how machine learning and AI can be applied to this task. Furthermore, we’ll explore connecting to database and SaaS sources along with sources of external data via Oracle Data-as-a-Service. Finally we’ll delve into how traditional Oracle Analytics developers can transition their skills into this role and start working as data engineers on Oracle Public Cloud data lake projects.”

Mark Rittman will be at the event all week, so if you’ve got any questions about the topics we’re covering in the two sessions, or any aspect of Oracle Analytics, big data or Autonomous Data Warehouse, either drop us an email beforehand, give us a call on +44 7866 568246, or just stop us in the corridor and we’ll be pleased to chat.

Five Thoughts About Thomas Kurian’s Move to become CEO of Google Cloud Platform

The news broke late on Friday that Thomas Kurian is joining Google to become their new CEO for Google Cloud Platform. Five thoughts on Kurian’s move:

  1. It’s a smart move by an exceptionally smart guy. Brave, too, given the recent history between Oracle and Google, but not surprising given his drive and, presumably, a point to prove. I met him a few times as part of Oracle’s ACE Director program and he’s the only software exec I know who can talk long-term strategy and vision one minute and then explain to you in detail how it all works the next, all with great humility and courtesy.

  2. The fact that GCP is spoken of as an also-ran at 10% market share whilst Oracle Cloud gets bundled in with the “Next 10” shows what a mountain Oracle has to climb to even become a contender against Microsoft and Amazon in the cloud business. Their insistence on only allowing their SaaS and PaaS apps to run in Oracle Cloud is a worrying parallel with the “Windows Tax” that Microsoft’s Office and Server product teams had to pay back in the Steve Ballmer era, but with Oracle’s equivalent of Satya Nadella having lost the argument and jumped ship.

  3. But Oracle will survive; this has happened many times before. Ray Lane, Charles Phillips, Tom Siebel and Marc Benioff all left, in many cases to found massively successful and competitive businesses; client-server gave way to internet architecture and internet then gave way to cloud. It’s all part of how businesses renew and people move on and up, there are plenty more smart (and most likely, smarter) people left at Oracle, and Larry Ellison is still just as driven, competitive and on top of his game.

  4. Look out for a very interesting (as in Brexit: interesting to watch but not to be part of) culture clash at GCP, with TK about the most top-down leader of a product development team I’ve ever seen and Google famously engineering-focused, beanbag-friendly and bottom-up. Add to that the feelings most Googlers have towards Oracle and TK will have his work cut out early on to win the various teams over. My guess is that his work ethic, technical chops and sheer ability will do just that, and if this role is a success, Sergey and Larry may well have found their new Eric Schmidt, but this time with technical skills to match theirs; there’s always the chance, though, that culture will prevail and he’ll be the next Marissa Mayer instead. No pressure there, then.

  5. Expect to see GCP moving increasingly into SaaS B2B areas such as ERP, CRM, financials and industry-vertical applications to complement its commoditised IaaS and PaaS cloud business, leveraging its massive G Suite and, increasingly, ChromeOS install base. Just think what Google could do with access to all the world’s structured business transactional data as well as the unstructured search, email and spreadsheet data it has now: even more comprehensive and market-leading search, the world’s biggest and most valuable ML training set, and a business model that could provide these applications for free in exchange for exclusive access to that data and Google being your default search engine. That’s the real existential threat to Oracle: spending all their time trying to win an un-winnable cloud infrastructure war and then GCP coming along and making ERP, CRM and business applications essentially free.

Slides and Forbes.com Article from "Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse Cloud"

One of the sessions we delivered at Oracle Openworld 2018 was “Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse Cloud”, a presentation on the agility that Oracle Autonomous Data Warehouse Cloud (ADWC) can provide for teams within big, traditional businesses that would otherwise have to wait months to provision a physical Oracle data warehousing database. Moreover, ADWC instances can scale up and scale down as demand changes over time and hits short-term peaks such as Black Friday; in the past those peaks would have required companies to pay for and provision that level of capacity all year round even though it sat idle most of the time.

The story behind the presentation was also written up as a profile article on Forbes.com by Oracle’s Jeff Erickson, with a great description of how MJR Analytics works on client projects.

To understand the type of agility Rittman is talking about, look at how he works with a client on a data analytics question: “You want to help them get inside their data to see trends and patterns” that they couldn’t otherwise see, he says. That might mean quickly bringing in public data or data from a data-as-a-service company and connecting outside analytics tools. “I work fast,” he says. With a retail customer, for example, “we’ll look to understand the lifetime value of a customer, their cycle of purchases, repeat purchases—all kinds of scenarios. And the whole thing will happen in a day. Much of it collaboratively with the client on a web conference.”
— Jeff Erickson, "How A Big Business Can Use An Autonomous Database To Move Like A Startup"

But don’t just take our word for it, or even Forbes.com’s: our objective is for every client engagement to be referenceable where client confidentiality allows, and you can check out our first two testimonials, from Colourpop and from Florence, on our new Customers page … with a couple more to be added in the next week or so!

Event-Level Digital Analytics using Google Analytics, Fivetran, BigQuery and Looker

A few weeks ago I posted a blog on using Mixpanel, Looker and Google BigQuery to analyze listener data from the Drill to Detail website, with Mixpanel tracking individual episode play events by visitors to the site and Fivetran replicating that event data over to BigQuery for analysis using Looker.

Mixpanel is great, but like most digital businesses using Google Analytics (GA) to analyze and measure activity on their website, we actually use Google Tag Manager (GTM) and GA events to track visitor activity on the mjr-analytics.com website. Whilst a full site license for Google Analytics 360 is way beyond our budget right now, the free version of GA together with GTM gives us an unsampled, event-level stream of visitor events and, together with goals and simple A/B testing tools, a way to start optimizing the content on our site.

Couple all of this with Fivetran and Looker and we can start to do some interesting analytics on this data: not only counting sessions, visitors and pages viewed, but also looking at visitor retention and the routes those visitors take through the various features on our website.

Events in Google Analytics are a way of recording the interactions visitors make with items on a page, such as selecting an item on a navigation menu, clicking on a download link or seeing a special offer or product recommendation. On the mjr-analytics.com website we’ve set up events, for example, to record how far down the page a visitor scrolls, so we can see how many people actually scroll past the hero image on our front page and look at the content on Looker services, Oracle services and so on.


Another event records which hero image a visitor clicks on, and another records which navigation menu item was selected; in each case the image name or menu item is stored as the event label.


Whilst you can record events by adding custom Javascript to your website’s HTML page definitions, we use GTM’s graphical point-and-click interface to set up our events and the visitor actions that trigger them.


Enabling GTM’s debug mode for our site shows us which events fire as visitors interact with the site, with the screenshot below showing a typical visitor journey through our homepage and the events triggered as the visitor scrolls through and interacts with various items on the page.


Data on events being triggered, along with event label and event action values, is stored in a Javascript data layer that can also receive values sent over by, for example, an eCommerce application, so that clicks on a product catalog item can record the price on offer as well as which product was clicked on.

Looking at the Data Layer view in Google Tag Manager you can see the event entries recorded during this visitor’s time on the page; note also that nothing personally identifiable is recorded, except for a unique client ID set at the device/browser level that can then be used to analyze new vs. returning visitors and track visitor retention and engagement over time.


As with the Mixpanel example and the Drill to Detail website, we then use Fivetran to replicate all of the data layer events over to Google BigQuery, storing this granular visitor behavioural data in a BigQuery table and dataset. As you can see from the screenshot below, we use Fivetran extensively within MJR Analytics: invoices and all our other accounting data from Xero, timesheet data from Harvest, CRM data from HubSpot, commits and PRs from GitHub, tasks outstanding and completed from Asana, and search rankings and keywords from Google Search Console all replicate into a Google BigQuery project whose datasets we then combine and analyze together using Looker.


For the event data coming across from GA we use Fivetran to replicate the relevant event-tracking dimensions and metrics into BigQuery every five minutes, giving us near real-time numbers on what’s popular and engaging on our company website.
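
As a quick sanity check on that pipeline, a query along the lines of the sketch below shows which events have fired over the last hour. It assumes the same dw_staging.ga_events table used for the sessionization later in this post, plus an event_timestamp column; that column name is an assumption and may well differ in your own Fivetran schema.

-- a sketch: event counts by category and label over the last hour
-- (assumes the dw_staging.ga_events table used later in this post;
--  the event_timestamp column name is an assumption and may differ)
SELECT
  event_category,
  event_label,
  COUNT(*) AS events,
  COUNT(DISTINCT visitor_id) AS unique_visitors
FROM
  `aerial-vehicle-148023.dw_staging.ga_events`
WHERE
  event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY
  1, 2
ORDER BY
  events DESC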


As the event data from GA arrives as table rows of individual events being triggered, I then sessionize those individual events into rows of visitor sessions, using BigQuery SQL to pivot the incoming data and create individual columns for, say, the first event, second event, first page view and first product item clicked on within each visitor session. The actual SQL used for the view is really too long to include here, but to give you the idea, here’s a condensed version with repeating parts removed for clarity.

SELECT
  global_session_id,
  visitor_session_id,
  visitor_id,
  visitor_session_start_date_time,
  MAX(visitor_session_length_mins) AS visitor_session_length_mins,
  visitor_first_event_week_cohort,
  city,
  session_events_count,
  ...
  event_1_category,
  event_1_label,
  ...
  page_view_1,
  page_view_2,
  ...
FROM (
  SELECT
    e.global_session_id,
    e.visitor_session_id,
    e.visitor_id,
    e.visitor_session_start_date_time,
    e.visitor_session_length_mins,
    e.visitor_first_event_week_cohort,
    e.city,
    MAX(e.visitor_session_event_seq_num) OVER (PARTITION BY e.global_session_id) AS session_events_count,
    (
    SELECT
      COUNT(*)
    FROM
      `aerial-vehicle-148023.dw_staging.ga_events`
    WHERE
      global_session_id = e.global_session_id
      AND event_category = 'Page View') AS page_view_count,
    (
    SELECT
      COUNT(*)
    FROM
      `aerial-vehicle-148023.dw_staging.ga_events`
    WHERE
      global_session_id = e.global_session_id
      AND event_category = 'Nav Bar Clicks') AS nav_bar_click_count,
    ...,
    e1.event_label AS event_1_label,
    e2.event_category AS event_2_category,
    e2.event_label AS event_2_label,
    ...
    pv.event_label AS page_view_1,
    pv2.event_label AS page_view_2,
    ...,
    igc.event_label AS index_gallery_first_click
  FROM
    `aerial-vehicle-148023.dw_staging.ga_events` e
  LEFT OUTER JOIN (
    SELECT
      global_session_id,
      event_label,
      ROW_NUMBER() OVER (PARTITION BY global_session_id ORDER BY visitor_session_event_seq_num) page_view_seq_num
    FROM
      `aerial-vehicle-148023.dw_staging.ga_events`
    WHERE
      event_category = 'Page View'
    ORDER BY
      3) pv
  ON
    e.global_session_id = pv.global_session_id
    AND pv.page_view_seq_num = 1
  LEFT OUTER JOIN (
    SELECT
      global_session_id,
      event_label,
      ROW_NUMBER() OVER (PARTITION BY global_session_id ORDER BY visitor_session_event_seq_num) page_view_seq_num
    FROM
      `aerial-vehicle-148023.dw_staging.ga_events`
    WHERE
      event_category = 'Page View'
    ORDER BY
      3) pv2
  ON
    e.global_session_id = pv2.global_session_id
    AND pv2.page_view_seq_num = 2
  ...
  JOIN
    `aerial-vehicle-148023.dw_staging.ga_events` e1
  ON
    e.global_session_id = e1.global_session_id
    AND e1.visitor_event_seq_num = 1
  LEFT OUTER JOIN
    `aerial-vehicle-148023.dw_staging.ga_events` e2
  ON
    e.global_session_id = e2.global_session_id
    AND e2.visitor_event_seq_num = 2
  ...
  )
GROUP BY
  1,
  2,
  ...,
  46
ORDER BY
  1

Creating a view or materialized table from this BigQuery SQL then gives me one record per session, with event totals, page-view and event paths, and the session and client IDs that are perfect for bringing into Looker for further analysis.
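
To persist that output I can either save the query as a view or materialize it as a table rebuilt on a schedule; a minimal sketch of the latter is below, where ga_sessions is a placeholder name and the SELECT stands in for the full sessionization query above, abbreviated here to just a few columns.

-- a sketch: materialize the sessionized events as a table
-- (ga_sessions is a placeholder name; in practice the full
--  sessionization SELECT above goes inside the CREATE TABLE)
CREATE OR REPLACE TABLE `aerial-vehicle-148023.dw_staging.ga_sessions` AS
SELECT
  global_session_id,
  visitor_id,
  visitor_session_start_date_time,
  MAX(visitor_session_event_seq_num) AS session_events_count
FROM
  `aerial-vehicle-148023.dw_staging.ga_events`
GROUP BY
  1, 2, 3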


Then finally it’s just a case of bringing that table of data into Looker as a LookML view and using one of Looker’s custom visualization blocks to display visitor event paths as a Sankey diagram, for example.


Or we can aggregate and analyze counts of events from sessions to see in graphical form which offers, download items and navigation menu items are most popular over a given period.
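
Behind a chart like that is just a simple aggregation over the same events table. Here's a sketch of the sort of query involved; the 'Nav Bar Clicks' category comes from the sessionization SQL above, while the 'Index Gallery Clicks' category name, the 30-day filter and the assumption that visitor_session_start_date_time is a TIMESTAMP are mine.

-- a sketch: most-clicked navigation items and hero images, last 30 days
SELECT
  event_category,
  event_label,
  COUNT(*) AS clicks
FROM
  `aerial-vehicle-148023.dw_staging.ga_events`
WHERE
  event_category IN ('Nav Bar Clicks', 'Index Gallery Clicks')
  AND visitor_session_start_date_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY
  1, 2
ORDER BY
  clicks DESC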


If you’d like to understand more about how MJR Analytics can help you better understand what visitors are looking at and interacting with on your website, drop me an email at mark.rittman@mjr-analytics.com; or even better, if you’re in Copenhagen on November 22nd and have the evening free, come along to the free meetup we’re hosting together with our partners CIMA and Looker, where you’ll see me demo all of this live and in person.

Digital Analytics, BI and Big Data Meetup in Copenhagen 22.11.2018

If you’re in Copenhagen on 22nd November 2018 and interested in digital analytics, big data and what’s new in business intelligence, then you’ll be interested in the free meetup we’re hosting that evening with our partners CIMA and Looker:

“Where Digital Analytics is Taking BI and Big Data”

How usage and new vendors in the digital, eCommerce and SaaS Analytics market are revolutionising the way in which data is analysed, democratising big data and enabling businesses to finally become “data-driven”.

  • Introduction by Mogens Norgaard, CIMA

  • Presented by Mark Rittman, CEO, MJR Analytics

Meetup hosted by CIMA Technologies, MJR Analytics and Looker

Date: Thursday 22nd November
Time: 17.00hrs - 19.00hrs
Location: Ingeniørforeningen IDA, Kalvebod Brygge 31-33 · 1780 København V

Registration is free, and there’ll be talks from me and from Looker, with an introduction by none other than Mogens Norgaard. Just don’t let Mogens know it’s my birthday that day too, or at least not until I’ve finished my presentation and demo. Hopefully we’ll see some of you in Copenhagen for the meetup; it’ll be good.

Media Analytics for the Drill to Detail Podcast using Looker, Fivetran and Mixpanel

Most readers of the MJR Analytics blog will no doubt be aware of the Drill to Detail Podcast on the big data analytics industry that we present and host, with the most recent episode covering the Looker JOIN 2018 event in San Francisco with Tristan Handy and Stewart Bryson as special guests. Things have been a bit busy over the past month with the launch of MJR Analytics and then the Looker and Oracle conferences, but we try to upload a new episode every few weeks, with the next episode — fingers crossed — having a rather special guest if we can get our schedules aligned.

Listeners to the show typically either use services such as Apple’s iTunes Podcast Directory to download episodes and play them on their iOS mobile devices, or they visit the show’s website at https://www.drilltodetail.com where they can download or play episodes through an embedded audio player and click through the links in the show notes that accompany each episode.

Apple provides statistics at episode level to podcast owners via their new Podcast Analytics service, but you’re limited to the charts and the fixed ways of analyzing and exploring your dataset that the app allows. If you can find a way to scrape or download this data and load it into a database service such as Google BigQuery, though, you can instead use a tool such as Looker to analyze and visualize your listener data in more interesting ways that are also very applicable to customer and product analytics for any digital or eCommerce site, as I’ll explain as we go through the examples.

To start, the Looker visualization below shows how many plays in total each episode of the show received from users of Apple’s Podcasts Directory service on their iPhones, iPads and macOS devices, with the most listened-to episode in any one month being our 50th Episode Special that featured Stewart Bryson and Alex Gorbachev as our two special guests.


Apple also provides data on how much of each episode was completed by the listener, so you can get a sense of how long episodes should last and how engaged your audience was with individual episodes. In the Looker visualization below you can see that our listeners typically get through around half to two-thirds of each episode, but were particularly engaged with the Wes McKinney episode on Apache Arrow, Python Pandas and his blog post comparing Arrow with ORC and Parquet, which was generating a lot of discussion in the open-source and big data community at the time.


So far, the way I’ve analysed downloads and listener numbers for the podcast is not too dissimilar to how businesses used to measure only sales per month, without really thinking about how many of the customers and visitors coming onto their site each month were new or, in fact, returning customers looking to buy more.

These days knowing how many of your customers are new or returning is the first thing you’d want to understand as an eCommerce manager because it’s cheaper to sell to an existing customer than to go out and recruit a new one, and one of your main priorities is to try and turn those first-time buyers into loyal, repeat purchasers who then become your most valuable customers.

My focus on episodes rather than listeners was, in reality, more due to Apple not providing episode download and listening data at the granularity of an individual user or device ID; but some time ago I also set up Mixpanel, a Javascript event-tracking service similar to Heap and Google Analytics’ custom events, to track when visitors played episodes of the show on the site through their browser, as shown in the screenshots below.


Data is still only logged at unique device level as there’s no registration facility on the site or other means to tie downloads to particular users, but now we have access to additional details of each download such as where the user was located and what type of browser and OS they were using.

Mixpanel stores the events it tracks in a database service it hosts, but you can use a managed data replication service such as the one from our partners at Fivetran to copy that data into a warehousing platform such as BigQuery, as I’ve done in the screenshot below.


Going back to Looker, I now create two new LookML views against the single table of Mixpanel tracked events that Fivetran replicated into BigQuery for me. The first is keyed on the unique distinct_id column and contains dimensions and measures describing the listener’s device used for accessing the site and playing back episodes of the show.

view: unique_devices {
  derived_table: {
    sql: SELECT
  distinct_id,
  device,
  mp_country_code,
  os, region, screen_height, screen_width
FROM
  `aerial-vehicle-148023.mixpanel.event`
WHERE
  name IN ('Clicked Nav Link',
    'Podcast Episode Played')
group by 1,2,3,4,5,6,7 ;;
  }
  dimension: distinct_id {
    primary_key: yes
    label: "Device ID"
  }
  dimension: device {
    sql: case when ${TABLE}.os in ('Windows','Mac OS X','Linux') then 'Desktop'
              when ${TABLE}.os = 'Chrome OS' then 'Chromebook'
              else ${TABLE}.device end ;;
  }
  dimension: os {}
  dimension: region {}
  dimension: screen_height {}
  dimension: screen_width {}
  dimension: mp_country_code {
    label: "Country"
  }
  measure: count_unique {
    type: count_distinct
    sql: ${TABLE}.distinct_id ;;
  }
  measure: count {
    type: count
  }
}

The second view is keyed on event_id and contains the dimensions and measures used for analyzing and counting playback events by those listeners’ devices.

# page events view definition
view: page_events {
  derived_table: {
    sql: SELECT
      REPLACE(JSON_EXTRACT(e.properties,
          '$.Episode'),'"','') AS episode,
      REPLACE(JSON_EXTRACT(e.properties,
          '$.url'),'"','') AS nav_url,
      episode_number,
      distinct_id,
      current_url,
      event_id,
      initial_referrer,
      initial_referring_domain,
      name AS event_type,
      referrer,
      referring_domain,
      search_engine,
      time
    FROM
      `aerial-vehicle-148023.mixpanel.event` e
    LEFT OUTER JOIN `aerial-vehicle-148023.dw_staging.episode_list` l
    ON REPLACE(JSON_EXTRACT(e.properties,
          '$.Episode'),'"','') = l.episode
    WHERE
      e.name IN ('Clicked Nav Link',
        'Podcast Episode Played') ;;
  }
  dimension_group: event {
    type: time
    timeframes: [
      date,
      week,
      month,
      hour_of_day,
      day_of_week,
      day_of_month
    ]
    sql: ${TABLE}.time ;;
  }
  dimension: event_type {}
  dimension: event_id {
    primary_key: yes
    hidden: yes
  }
  dimension: distinct_id {
    hidden: yes
  }
  dimension: episode {
    sql: replace(${TABLE}.episode,'Drill to Detail','') ;;
  }
  dimension: episode_number {
    sql: lpad(cast(${TABLE}.episode_number as string),2,'0') ;;
  }
  dimension: nav_url {}
  dimension: referring_domain {}
  dimension: search_engine {}
  measure: event_count {
    type: count
  }
  measure: unique_device_count {
    type: count_distinct
    sql: ${TABLE}.distinct_id ;;
  }
  measure: episode_num {
    type: average
    sql: ${TABLE}.episode_number ;;
  }
}

Then, a new explore is defined within the LookML project to join these two views and make them available for analysis in the Looker Explore page.

# explore definition
explore: page_events {
  join: unique_devices {
    relationship: many_to_one
    sql_on: ${page_events.distinct_id} = ${unique_devices.distinct_id} ;;
  }
}

Creating a look similar to the one a moment ago, but using episode play events recorded by Mixpanel as the measure, I can see that whilst most episodes had the majority of their downloads in the month they were published, the episode featuring Jonathan Palmer from King Games on how they used Looker to design and optimize games such as Candy Crush Saga kept being downloaded well after the initial month it became available, telling me that content on this type of topic perhaps has a longer “shelf-life” than others we’ve covered.


If I use the “merge results” feature introduced with Looker 5 to combine results from two or more queries, I can combine the download numbers per episode per month per unique device ID from both Apple and from Mixpanel’s tracking of plays on the website, giving me a combined listener count for each episode. You can see from the screenshot below that the Episode 50 Special is still the most downloaded episode, and the later one with Jonathan Palmer does seem to be listened to for a lot longer than most other episodes.


Except it’s not. What the chart above actually shows is that Episode 50 was the most downloaded in any one month; if you take each episode and chart total downloads overall, it’s actually the Christmas and New Year Special featuring Christian Berg that had the most unique device downloads.
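
The difference is easy to see in SQL: grouping unique devices by episode and month and taking the best single month rewards a big launch spike, whereas grouping by episode alone credits the long tail of later plays. Here's a sketch against the same Mixpanel events table used throughout this post:

-- a sketch: best single month vs. all-time unique devices, per episode
WITH monthly AS (
  SELECT
    REPLACE(JSON_EXTRACT(properties, '$.Episode'), '"', '') AS episode,
    DATE_TRUNC(DATE(time), MONTH) AS play_month,
    COUNT(DISTINCT distinct_id) AS devices_in_month
  FROM `aerial-vehicle-148023.mixpanel.event`
  WHERE name = 'Podcast Episode Played'
  GROUP BY 1, 2),
all_time AS (
  SELECT
    REPLACE(JSON_EXTRACT(properties, '$.Episode'), '"', '') AS episode,
    COUNT(DISTINCT distinct_id) AS all_time_devices
  FROM `aerial-vehicle-148023.mixpanel.event`
  WHERE name = 'Podcast Episode Played'
  GROUP BY 1)
SELECT
  a.episode,
  a.all_time_devices,                                   -- overall ranking
  MAX(m.devices_in_month) AS best_single_month_devices  -- peak-month ranking
FROM all_time a
JOIN monthly m USING (episode)
GROUP BY 1, 2
ORDER BY all_time_devices DESC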


Putting the above two merged-results looks into a dashboard, along with two others showing the split between iTunes and website downloads over time and those same numbers expressed as a share of all downloads over time, you can get a sense of how long each episode kept being downloaded after first publication, the split for each episode by download platform, and how listener numbers for each method of download have changed over time. You can see in the next chart how downloads via iTunes correlate closely with new episodes being published, with listener numbers dropping over the summer when the show took a break; in contrast, downloads directly from the website held up even over the summer break when no new episodes were published.


Taking just the device-level data for website downloads coming from Mixpanel and creating a dashboard from the referrer and device detail attributes confirms, as I’d expected, that most of those downloads come from listeners using Windows desktop PCs or Android phones, given iTunes’ focus on macOS and iOS devices. I can also see that most plays arriving via search come from Google and occasionally Bing, and that Christian Berg’s listeners were unusually spread across lots of different countries, not just the usual UK, USA, Germany and so on.

But what none of these dashboards tell me is how good I am at retaining listeners after they’ve downloaded their first ever episode. Although I suspect many listeners come for one particular episode, I’d be interested to know how many of the devices we see each month are returning rather than new, and we can of course extend the scope of website activity to include clicks on navigation links as another indicator of engagement and retention.

To do this we need to know, for every unique visitor device ID, what sequence number in that device’s total set of episode plays a particular play represents; if the sequence number for a given play event is 1 then we classify the visitor as new, whereas if it’s greater than 1 we classify the visitor as returning. Sequencing these device playback events also makes it possible to analyze play histories for individual podcast listener devices: to understand how many episodes a typical listener plays, whether a particular episode triggers more repeat listens of the show than others, how frequently they listen, and whether returning visitors come back to play the same episode or take the opportunity to discover and play other episodes from the site; exactly the sort of analysis that online retailers do when looking to turn casual browsers and one-off purchasers into loyal, repeat buyers from their site.

To do this, I create an SQL query in Looker’s SQL Runner utility that returns the episodes played for each visitor distinct_id and adds a ROW_NUMBER window function to provide a sequence number for the episode plays, sorted in order of time:

SELECT 
 distinct_id, 
 event_id, 
 time,
 episode_number,
 row_number() over (partition by distinct_id order by time) as play_order_seq
FROM
  `aerial-vehicle-148023.mixpanel.event` e
LEFT OUTER JOIN `aerial-vehicle-148023.dw_staging.episode_list` l
ON REPLACE(JSON_EXTRACT(e.properties,
      '$.Episode'),'"','') = l.episode
WHERE
  e.name = 'Podcast Episode Played'
To add this query to my Looker project as a derived table I then select Get Derived Table LookML from the menu in the top right-hand side of the SQL Runner page, and choose the option to add the generated view to my project when the Get Derived Table LookML dialog is shown.

Then, after setting all of the derived table columns as hidden and the event_id column as the primary key for this view, I add one new dimension that returns “new” or “returning” depending on whether the play order sequence number for the play event is 1 or any other number, and set the view_label for this view to the same name as the main page events view, so that the play order sequence number appears alongside the other play event fields in the main explore view.

view: website_play_sequence {
  view_label: "Page Events"
  derived_table: {
    sql: SELECT
      distinct_id,
      event_id,
      time,
      row_number() over (partition by distinct_id order by time) as play_order_seq
    FROM
      `aerial-vehicle-148023.mixpanel.event`
    WHERE
      name = 'Podcast Episode Played' ;;
  }
  dimension: distinct_id {
    hidden: yes
    type: string
    sql: ${TABLE}.distinct_id ;;
  }
  dimension: event_id {
    hidden: yes
    primary_key: yes
    sql: ${TABLE}.event_id ;;
  }
  dimension_group: time {
    hidden: yes
    type: time
    sql: ${TABLE}.time ;;
  }
  dimension: play_order_seq {
    type: number
    sql: ${TABLE}.play_order_seq ;;
  }
  dimension: new_or_returning {
    type: string
    sql: case when ${TABLE}.play_order_seq = 1 then 'new' else 'returning' end ;;
  }
}

I then join this new derived table to the page events view in the existing model explore on the event_id column common to both of them, like this:

explore: page_events {
  join: unique_devices {
    relationship: many_to_one
    sql_on: ${page_events.distinct_id} = ${unique_devices.distinct_id} ;;
  }
  join: website_play_sequence {
    relationship: one_to_one
    sql_on: ${page_events.event_id} = ${website_play_sequence.event_id} ;;
  }
}

Now I can create a look that shows episode play events over time broken down by new or returning device, and I can see that about two-thirds of listens each month come from returning devices.


If we pivot on the episode play sequence number and turn the episode numbers themselves into a measure, we can show for each individual device the sequence of episodes they’ve downloaded.
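
In SQL terms that pivot is just conditional aggregation over the play_order_seq column; here's a sketch, reusing the SQL Runner query from earlier to show the first three episodes each device played.

-- a sketch: the first three episodes played, one row per device
SELECT
  distinct_id,
  MAX(CASE WHEN play_order_seq = 1 THEN episode_number END) AS first_episode,
  MAX(CASE WHEN play_order_seq = 2 THEN episode_number END) AS second_episode,
  MAX(CASE WHEN play_order_seq = 3 THEN episode_number END) AS third_episode
FROM (
  SELECT
    e.distinct_id,
    l.episode_number,
    ROW_NUMBER() OVER (PARTITION BY e.distinct_id ORDER BY e.time) AS play_order_seq
  FROM
    `aerial-vehicle-148023.mixpanel.event` e
  LEFT OUTER JOIN
    `aerial-vehicle-148023.dw_staging.episode_list` l
  ON
    REPLACE(JSON_EXTRACT(e.properties, '$.Episode'), '"', '') = l.episode
  WHERE
    e.name = 'Podcast Episode Played')
GROUP BY
  distinct_id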


So far we’ve looked at devices playing podcast episodes on the site as one homogeneous group, working on the assumption that listeners visiting the site for the first time now are no different from ones who found us a year or more ago, and that the site itself hasn’t changed over that period of time.

But in reality we may have improved the usability of the site or made it easier for visitors to find and discover other episodes, and so I’m keen to understand whether the groups, or “cohorts”, of users who joined us in more recent months stay engaged for longer and churn at a slower rate than visitors who found the site before those changes were made.

I can return the month that each visitor’s device first played a podcast episode on the site using another SQL query, and whilst I’m pulling this together I might as well calculate how long each device has been accessing episodes on the site along with totals for episode play events and distinct episodes listened to for each visitor device.

SELECT distinct_id, 
       date_trunc(date(min(time)),MONTH) as first_visit_at, 
       timestamp_diff(max(time),min(time),day)+1 as listener_days,
       count(*) as total_plays,
       count(distinct JSON_EXTRACT(properties,'$.Episode')) as total_episodes
FROM   mixpanel.event
WHERE  name = 'Podcast Episode Played'
GROUP BY 1

I then use the feature in SQL Runner that generates a derived table view definition from an SQL query, fine-tune the dimension definitions, and convert the listener_days, total_plays and total_episodes columns into a set of appropriate measures.

view: visitor_cohort {
  view_label: "Unique Devices"
  derived_table: {
    sql: SELECT distinct_id,
       date_trunc(date(min(time)),MONTH) as first_visit_at,
       timestamp_diff(max(time),min(time),day)+1 as listener_days,
       count(*) as total_plays,
       count(distinct JSON_EXTRACT(properties,'$.Episode')) as total_episodes
FROM   mixpanel.event
WHERE  name = 'Podcast Episode Played'
GROUP BY 1
 ;;
  }
  
  dimension: distinct_id {
    primary_key: yes
    hidden: yes
    type: string
    sql: ${TABLE}.distinct_id ;;
  }
  
  dimension: first_visit_at {
    type: date
    label: "Listener Cohort"
    sql: ${TABLE}.first_visit_at ;;
  }
  
  measure: listener_days {
    type: average
    sql: ${TABLE}.listener_days ;;
  }
  
  measure: avg_episode_plays {
    type: average
    sql: ${TABLE}.total_plays ;;
  }
  
  measure: total_episode_plays {
    type: sum
    sql: ${TABLE}.total_plays ;;
  }
  
  measure: avg_episodes {
    type: average
    sql: ${TABLE}.total_episodes ;;
  }  
}

Then I join this second derived table into the model explore, this time joining to the unique_devices view on their common distinct_id column, as the additional columns I’m now adding apply to visitor devices, not episode play events as in the previous example.

explore: page_events {
  join: unique_devices {
    relationship: many_to_one
    sql_on: ${page_events.distinct_id} = ${unique_devices.distinct_id} ;;
  }
  join: website_play_sequence {
    relationship: one_to_one
    sql_on: ${page_events.event_id} = ${website_play_sequence.event_id} ;;
  }
  join: visitor_cohort {
    relationship: many_to_one
    sql_on: ${unique_devices.distinct_id} = ${visitor_cohort.distinct_id} ;;
  }
}

I can now create looks such as the one below, where I chart the size of each cohort and overlay the average number of episodes each visitor device goes on to download, giving me a combined view of how many new listeners we recruit each month and whether each group goes on to listen to more, or fewer, episodes than previous ones.


If I now add two more window function calculations to the SQL query used by the derived table that provides the sequence number for each episode play by a device, like this:

SELECT
       distinct_id,
       event_id,
       time,
       row_number() over (partition by distinct_id order by time) as play_order_seq,
       date_diff(date(time),date(first_value(time) over (partition by distinct_id order by time)),MONTH) as months_since_first_play_at,
       date_diff(date(time),date(lag(time,1) over (partition by distinct_id order by time)),DAY) as days_since_last_play
FROM
        `aerial-vehicle-148023.mixpanel.event`
WHERE
        name = 'Podcast Episode Played'

I can now create a heat-map for each cohort showing how its engagement holds up or falls off over the six months after that cohort’s first play of a podcast episode on the site.
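
The numbers behind a heat-map like this can be cross-checked with a single query that combines the cohort and months_since_first_play_at logic from the two derived tables above; a sketch:

-- a sketch: active devices by cohort month and months since first play
WITH plays AS (
  SELECT
    distinct_id,
    DATE_TRUNC(DATE(time), MONTH) AS play_month,
    DATE_TRUNC(DATE(MIN(time) OVER (PARTITION BY distinct_id)), MONTH) AS cohort_month
  FROM
    `aerial-vehicle-148023.mixpanel.event`
  WHERE
    name = 'Podcast Episode Played')
SELECT
  cohort_month,
  DATE_DIFF(play_month, cohort_month, MONTH) AS months_since_first_play,
  COUNT(DISTINCT distinct_id) AS active_devices
FROM plays
GROUP BY 1, 2
ORDER BY 1, 2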


If I switch the two dimensions around and change the visualization type to a line chart using a logarithmic scale, you can see how engagement falls off (or “decays”) for each of my listener cohorts over the six months after they first play an episode on the site.


Or, finally, I can take the Days Since Last Episode Play measure and use it to show how the time between episode plays lengthens for each cohort, with big increases in elapsed time being a strong indicator of that cohort losing interest in the podcast and “churning”.


So whilst we’ve been analyzing listeners to a podcast, and in detail only a small and perhaps skewed subset of overall listeners, the techniques we’ve used here, such as cohorting visitors based on when they first download or purchase and then tracking their engagement and overall lifetime value, are ones we’ve used recently when working on Looker projects for clients such as Colourpop in the US and Florence back home in the UK.

Drop us an email at info@mjr-analytics.com or call us on +44 7866 568246 if you’re looking to do something similar with your customer or product data, and in the meantime you can check out past episodes of the Drill to Detail podcast at the show’s new home, co-located with this blog on the MJR Analytics site at https://www.mjr-analytics.com/podcast.

MJR Analytics Presenting at Oracle Openworld 2018, San Francisco

Mark Rittman from MJR Analytics will be presenting at this week’s Oracle Openworld 2018 event in San Francisco, with both sessions taking place on Monday, 22nd October 2018.

BI Developer to Data Engineer with Oracle Analytics Cloud, Data Lake [PRO3189]
Monday, Oct 22, 11:30 AM - 12:15 PM | Marriott Marquis (Yerba Buena Level) - Nob Hill C/D

 

“In this session look at the role of a data engineer in designing, provisioning, and enabling an Oracle Cloud data lake using Oracle Analytics Cloud, Data Lake. Attendees learn how to use data flow and data pipeline authoring tools and how machine learning and AI can be applied to this task, as well as how to connect to database and SaaS sources along with sources of external data via Oracle Data as a Service. Discover how traditional Oracle Analytics developers can transition their skills into this role and start working as a data engineer on Oracle Public Cloud data lake projects.”

Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse [BUS3194]
Monday, Oct 22, 04:45 PM - 05:30 PM | Moscone West - Room 3006

 

“Tech startups can't afford DBAs, and they don't have time to provision servers and scale them up and down or deal with patches or downtime. They've never heard of indexes and they need data loaded and ready for analysis in days, not months. In this session learn how Oracle Database developers can build data warehouses as a hip startup data engineer would—but using a proper database built on Oracle technology. Oracle Data Visualization Desktop provides analytics and data exploration with techniques explained in this session. Hear real-world development experiences from working on data and analytics projects at a tech startup in the UK.”

Mark will be at Openworld and around San Francisco until the end of the event, so if you’d like to have a chat about these talks or anything to do with Oracle Analytics then get in touch at mark.rittman@mjr-analytics.com or call us on +44 7866 568246.
