HubSpot Data Actions, Harvest Analytical Workflows and Looker Data Platform

At Rittman Analytics we use a number of SaaS applications to run our back-office operations, including HubSpot CRM for sales tracking, Harvest for timesheets and resource planning, Jira for project management and Xero for invoicing and accounting. Back in May 2019 I blogged about how we used services from Stitch, Fivetran and dbt to extract, transform and combine data from those services into a data warehouse running on Google BigQuery, and since then we've built out a number of finance, operations and customer success dashboards that give our back-office team valuable insight into the performance of our business.


But Looker is more than just an ad-hoc query and dashboarding tool; it's a full data platform that integrates with SaaS applications such as Harvest, HubSpot, Xero and Salesforce at multiple levels:

  • At a data-source level, querying and analyzing data from those application sources

  • Within the Looker explore UI using features such as links, data actions and the Action Hub

  • Within business workflows, providing analytical insights and trusted metrics through the Looker API

Adding Links to HubSpot Deal and Company Pages as Explore Menu Items

One example of this is the set of menu options we've added to our Looker explores that link directly into HubSpot to view the details of a deal or company, and that let our sales manager request an update on a deal where there's been no contact from either side for a while.


This type of simple integration is done through the "link" property in LookML, which you can add to dimension and measure definitions, as in the example below for our HubSpot opportunity name dimension (the HubSpot URL shown is a placeholder for your own portal's deal URL).

dimension: opportunity_name {
    type: string
    sql: ${TABLE}.dealname ;;
    link: {
      label: "View Deal in Hubspot"
      # placeholder URL - substitute your own HubSpot portal ID and a liquid reference to your deal ID field
      url: "https://app.hubspot.com/contacts/your-portal-id/deal/{{ deals.hubspot_deal_id._value }}/"
      # optionally add an icon to show against the menu item
      icon_url: ""
    }
  }

Then, when a user clicks on that field in an explore or a dashboard tile, they'll be able to click through to HubSpot to view the full context of the deal; if, like us, you use Google Apps authentication for both Looker and your SaaS applications, you'll be taken straight into HubSpot without having to log in first.


Creating Looker Data Actions to Update HubSpot Deal Properties

A more advanced form of integration between Looker and HubSpot that we've set up is the ability for users to update the details of a HubSpot deal directly from within Looker, a good example of how you can make your Looker content more "actionable".


This type of integration involves another LookML dimension and measure property called an "action". I first talked about actions in a 2018 blog post titled "Using Looker Data Actions to Make Monzo Spend Analysis More Interactive … and Actionable", where I used the feature to call services such as Google Maps, Google My Business and, via Zapier, Google Tasks. For our HubSpot integration we've again used Zapier to provide the webhook that our data actions call, setting up a three-stage Zapier "Zap" that takes the details of the deal and the values that need changing from the webhook payload, searches for the deal in HubSpot that needs updating, and then applies those changes to the deal record.
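
As a rough sketch of what such an action can look like in LookML (the webhook URL, field names and deal-stage values below are placeholders rather than our live configuration), the action sends the clicked deal ID plus a value chosen from a form to the Zapier catch hook:

dimension: deal_id {
    type: string
    sql: ${TABLE}.dealid ;;
    action: {
      label: "Update Deal Stage in Hubspot"
      # placeholder webhook URL - point this at your own Zapier catch hook
      url: "https://hooks.zapier.com/hooks/catch/000000/example/"
      param: {
        name: "deal_id"
        value: "{{ value }}"
      }
      form_param: {
        name: "deal_stage"
        type: select
        label: "New Deal Stage"
        option: { name: "presentationscheduled" label: "Presentation Scheduled" }
        option: { name: "closedwon" label: "Closed Won" }
      }
    }
  }

The Zap then receives deal_id and deal_stage in its webhook payload, uses them to find the matching deal in HubSpot, and applies the change.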


Adding Data Action Menu Items to Request HubSpot Deal Updates via Gmail

Another example of this type of data action integration we've set up is between Looker, HubSpot and Gmail, giving us the ability to send an email requesting a status update from the team directly from a Looker dashboard.


Leveraging the Looker API to Provide KPIs for Harvest Workflows Automated using Zapier

As well as providing hooks into HubSpot for managing the deal process directly from within Looker, we also use Looker to inform our automated workflows with trusted business key performance indicators (KPIs). An example is when we've closed a new deal and the salesperson or operations team sets up a new project in Harvest to record our time and expenses against; something we want to keep a close eye on is the situation where a salesperson has closed a deal by agreeing a daily rate below the rate we'd normally charge.

To catch this type of deal being agreed, and to make sure the salesperson involved knows that the discount will be coming out of their commission, we've set up another Zapier workflow that detects new Harvest projects being created, runs a Look within Looker that returns the average rate charged across all projects over the past three months, and sends a message to the salesperson concerned if the deal rate is below that average.
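
Under the covers, the step that fetches this benchmark is just a Looker API call that runs a saved Look; a minimal sketch of the equivalent call using the official looker_sdk Python package is below (the Look ID and field name are hypothetical, and in practice Zapier makes the corresponding REST call for us):

import json
import looker_sdk

# looker_sdk reads API credentials from a looker.ini file or LOOKERSDK_* environment variables
sdk = looker_sdk.init40()

# hypothetical Look that returns the average daily rate charged over the last three months
AVERAGE_RATE_LOOK_ID = "42"

raw = sdk.run_look(look_id=AVERAGE_RATE_LOOK_ID, result_format="json")
rows = json.loads(raw)

# hypothetical field name exposed by that Look
average_rate = float(rows[0]["projects.average_billable_rate"])

def rate_below_benchmark(project_rate: float) -> bool:
    """Return True if a newly created Harvest project has been sold below the trailing average rate."""
    return project_rate < average_rate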


In the future we can change how this average rate is calculated or introduce other data and analytics into the calculation, and by using Looker we make sure these analytic insights are governed and delivered "as-a-service" to the requesting business process. If you're looking to use the Looker Data Platform to integrate, inform and help orchestrate your business processes, drop us an email and we'd be happy to share our experiences so far.

Xero Financial Reporting in Looker using G-Accon for Xero, BigQuery and dbt

We use Xero as our accounting package at Rittman Analytics, and thought it would be useful to bring some of our key financial and performance metrics from Xero's Profit and Loss report (Income Statement, for our US readers) into our Looker environment so that we can monitor gross and net revenue trends over time, check that expenses and staff costs are within budget, and keep an eye on our most important financial KPI, Net Margin %.

Although Xero is a supported data source for Stitch, Fivetran and most other data pipeline-as-a-service vendors, reconstructing a P&L report from all of the Xero API tables, applying manual journals, categorising accounts into revenue or costs and arriving at the same numbers as Xero's own reports is fairly tricky. Another approach we've used with success is a Google Sheets add-in such as G-Accon for Xero, which schedules exports of Xero's P&L report into a Google Sheet that we then access through a Google BigQuery external table.


Once you've set up the Xero report export within Google Sheets and G-Accon for Xero, the next step is to create a federated (external) table in Google BigQuery that maps onto the sheet you've just populated. Remember to share the Google Sheets file with the GCP service account that your Looker connection uses when accessing Google BigQuery, otherwise you'll get a permissions error when trying to query the Sheets data from Looker.
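
For reference, the federated table can be defined either through the BigQuery console UI or with external-table DDL along the lines of the sketch below (the dataset, table, columns and spreadsheet URL are all placeholders):

CREATE EXTERNAL TABLE ra_finance.xero_profit_and_loss (
    account_name  STRING,
    account_group STRING,
    period        DATE,
    amount        NUMERIC
)
OPTIONS (
    format = 'GOOGLE_SHEETS',
    -- placeholder spreadsheet URL - use the sheet that G-Accon exports into
    uris = ['https://docs.google.com/spreadsheets/d/your-sheet-id'],
    sheet_range = 'ProfitAndLoss!A1:D',
    skip_leading_rows = 1
);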


Our next step was to use dbt ("Data Build Tool") to transform Xero's account-level P&L numbers, as landed in our base BigQuery federated tables, into aggregated and derived revenue, cost of sales and overheads KPIs, along with net and gross profit, margins and so on.
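
In outline, that dbt model is just a SELECT over the federated P&L table that rolls account-level rows up into the categories we report on; a simplified sketch (the upstream model name, account groups and column names are illustrative rather than our actual chart of accounts) might look like this:

-- models/xero_pnl_kpis.sql (illustrative sketch only)
WITH pnl AS (
    SELECT * FROM {{ ref('xero_profit_and_loss') }}
)
SELECT
    period,
    SUM(CASE WHEN account_group = 'Revenue' THEN amount ELSE 0 END) AS revenue,
    SUM(CASE WHEN account_group = 'Cost of Sales' THEN amount ELSE 0 END) AS cost_of_sales,
    SUM(CASE WHEN account_group = 'Overheads' THEN amount ELSE 0 END) AS overheads,
    SUM(CASE WHEN account_group = 'Revenue' THEN amount ELSE 0 END)
      - SUM(CASE WHEN account_group = 'Cost of Sales' THEN amount ELSE 0 END) AS gross_profit,
    SAFE_DIVIDE(
        SUM(CASE WHEN account_group = 'Revenue' THEN amount ELSE 0 END)
          - SUM(CASE WHEN account_group = 'Cost of Sales' THEN amount ELSE 0 END)
          - SUM(CASE WHEN account_group = 'Overheads' THEN amount ELSE 0 END),
        SUM(CASE WHEN account_group = 'Revenue' THEN amount ELSE 0 END)
    ) AS net_margin_pct
FROM pnl
GROUP BY period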


Now that our financial numbers and P&L account details are in Looker as additional views within our main operational analytics LookML model, we can start by reproducing the original Profit & Loss report using Looker's new beta-release Table-Next visualization, which gives us subtotals for the account categories as you'd normally expect to see in a P&L-style financial report (numbers obfuscated, in case you were wondering).


To accompany the detail-level P&L report we then created a financial performance dashboard, with some of the content coming from the derived and aggregated financial KPIs table we created beforehand in dbt and some of the numbers (discretionary expense tracking over time, for example) coming from the base P&L report. Note how we've used Looker's recently added trellis charting feature to break expense spending out by the discretionary spend categories we're most interested in; I think this works well for this type of analysis.


Whether you go down the path of full Xero data integration into your warehouse using a service such as Stitch or Fivetran, or choose the report-based export approach we've used here, really comes down to whether you need granular access to every bit of your accounting data or just some high-level (and processed) numbers, as we did in this particular example.

We’ve done both on projects for customers and ourselves in the past, so if you’re interested in bringing in Xero or other financial accounting data into your Looker analytics platform, send us an email or call our office on +44 1273 234690 and we’ll be happy to talk through the various options with you.

New Features in Looker 6.16 : Conditional Alerts (Beta), Content Curation (Beta) and LookML IDE Folders

Looker release 6.16 is rolling-out to customers right now and comes with three new features that you might find interesting:

  • Conditional Alerts (beta),

  • Content Curation (beta), and

  • Folders within the LookML IDE

Note that to enable beta features such as conditional alerts and content curation boards in Looker, you'll need to log in as an admin and specifically enable them before they're available for use; as experimental features they are not fully developed and may change significantly, or be removed completely, in future releases.

Conditional Alerts (Beta)

You can already create a form of "alert" in Looker by scheduling a Look to be sent to you only when a report returns results, doesn't return results, or returns a different set of rows than last time, but this is a fairly obscure feature that most end-users won't have heard of. The new Conditional Alerts (beta) feature in Looker 6.16 makes creating alerts much simpler, and is enabled from the Labs section of the Admin page in Looker.


Once you’ve enabled this beta feature, you’ll then see “alert” icons in the top right-hand corner of every dashboard item, like this:


Click on the bell icon to define the alert. In the example below I've configured the alert to send me an email if net margin dropped below 20% at the end of the previous day, with the email going out at 9am the next day to give me a heads-up that action needs to be taken, while also saving me the time of scanning the dashboard each day just to check that the KPI is within target.


Alerts created by users are now listed alongside scheduled looks on the Looker Admin page together with a history of previous alert executions.


Content Curation (Beta)

Boards are a new type of content object in Looker 6.16 that give you the ability to "pin" dashboards and Looks relating to a particular topic or department together in a single place. Like conditional alerts, the new Content Curation / Boards (beta) feature has to be enabled by an admin before it becomes available to end-users; once you've done so, a new Boards (beta) menu item appears in the Browse page menu.


Click on the Pin a Dashboard or Look button to bring up a dialog listing all of the dashboards and looks in spaces you’re able to access, like this:


You can then add more dashboards or looks, and create sections within each board to further organize your content, as I’ve done here with our internal analytics board.


IDE Folders

The final new feature I wanted to highlight isn’t a beta, but it does need to be enabled for each individual LookML project before use. To switch on the new folders feature for your LookML project first select Project Settings from the drop-down menu in the LookML developer IDE:


Then, check the Enable Folders checkbox just above the Save Project Settings button.


Now, press the “+” button to see a new Create Folder option.


Then, once you've given your folder a name, you can move existing view, model and other LookML files into the folder you've just created. Note how the various LookML files now have different icons for views, models and so on.


Release notes for Looker 6.16 are on the Looker website, and contact us now if you’d like to find out how we can help you get moving with your Looker implementation.

Rittman Analytics is now a UK Consulting Partner for dbt ("Data Build Tool")

We’re very pleased to announce that Rittman Analytics is now an official Consulting Partner for dbt, working with our clients, the community and the wider dbt ecosystem to get the most out of this open-source analytics framework and accompanying commercial dbtCloud service.

You can read more about the role of Consulting Partners on the new Ecosystem page on the getdbt website, and while you're there, check out comments from real-world users of dbt, including one from us excerpted from our recent blog post on How Rittman Analytics does Analytics.

In that blog post we talked about how Rittman Analytics used dbt as part of our modular “extract, transform and load” data loading approach when preparing data ready for use with Looker:


Since launching the company last year we've built up a fair bit of experience implementing dbt and dbtCloud on several client projects, in areas such as:

  • transforming event-based (Segment, mParticle), data pipeline (Stitch, Fivetran) and batch-loaded raw data sourced from SaaS, Telco, web application and other data sources

  • using Snowflake Data Warehouse, Postgres and Google BigQuery as sources/targets

  • integrating with GitHub, AWS CodeCommit for local/remote branch-based git development

  • creation of automated CI/CD dbt build test pipelines using dbtCloud and AWS CodeBuild/CodePipeline

If you’re new to dbt or interested in how we use dbt and dbtCloud both internally and on client projects, check out our three recent blog posts on this topic:

and conversations we’ve had in the past with Tristan Handy, CEO and co-founder of Fishtown Analytics (primary sponsors of dbt) on our Drill to Detail Podcast:

Contact us now or email us if you'd like any help or advice on your dbt implementation - we're UK-based with clients in London, northern Europe and the USA.

News on the Second London Looker Developer Meetup, 10th July 2019 at GoCardless, London

The second London Looker Developer Meetup is less than two weeks away now, running from 6pm to 9pm on Wednesday July 10th 2019 at the GoCardless offices in Goswell Road, London. Co-hosted by Mark Rittman and Jon Palmer from GoCardless, it will be an evening of chat, networking, presentations and Q&A around the Looker platform, with the following agenda:

  • 6:00 - 6:30pm: Registration, Food & Networking

  • 6:30 - 6:40pm: Welcome and Introduction - Jon Palmer, Head of BI at GoCardless and Mark Rittman, CEO of Rittman Analytics

  • 6:40 - 7:00pm: “Getting Started with Looker” - Baha Sahin, Onfido

  • 7:00 - 7:20pm: “Balancing Liquidity in an On-Demand Staffing Marketplace” - Charles Armitage, Florence

  • 7:20 - 7:35pm: Update from Looker - Dave Hughes and Zara Wells, Customer Success at Looker

  • 7:35 - 8:00pm: Data Analytics Panel

  • 8:00 - 9:00pm: Networking and close

We're particularly looking forward to Charles Armitage's presentation, as Florence were Rittman Analytics' first-ever UK client when we launched the company last year, and we've continued to work with Charles and the team since our original engagement, right through to their recent successful Series A funding round.

The first London Looker meetup ended up oversubscribed, so make sure you register now if you're planning on attending. I'll be joined by the Rittman Analytics team on the night if you'd like to chat with one of us about your Looker implementation, and drop me an email if you'd like to schedule a chat in advance.

Rittman Analytics is now a Segment Certified Implementation Partner

In our recent blog posts on How Rittman Analytics does Analytics: Modern BI Stack Operational Analytics and Customer Journey and Lifetime Value Analytics we talked about using services from our partners at Stitch, Fivetran and Segment to connect and integrate data ready for analysis with Looker.

So far we've worked with joint Segment/Looker/Rittman Analytics customers such as Let's Do This to build customer and product analytics platforms on top of Segment's rich, event-level behavioural datasets. We're now pleased to announce that Rittman Analytics is a Certified Implementation Partner for Segment, giving us the ability to implement Segment Connections along with Segment Personas (Customer Data Platform) and Segment Protocols (Data Quality) on client analytics and wider digital marketing and digital transformation projects.


Look out for more information on how we're using Segment technology together with modern, modular cloud analytics services from Looker, Stitch, Fivetran, Snowflake and Google Cloud Platform over the coming weeks, and if we can help with any of your Segment implementation questions, just get in touch.

Continuous Integration and Automated Build Testing with dbtCloud

If you read my recent blog post on how Rittman Analytics built our operational analytics platform running on Looker, Stitch, Google BigQuery and dbt, you'll know that we built out the underlying data infrastructure using a modular "Extract, Load and Transform" (ELT) design pattern, like this:


We version-control our dbt development environment using git and Github, and do all new development beyond simple bug fixes on git feature branches, giving us a development process for dbt that looked like this:


1. We'd start by cloning the master branch of our dbt git repo from Github to the developer's workstation, which would also have dbt installed locally along with the Google Cloud SDK so that they could connect to our development BigQuery dataset.


2. Create a new, local git branch for the new feature using the git CLI or a tool such as Github Desktop.


3. Develop the new feature locally using the developer’s install of dbt, committing any changes to the feature branch in that developer’s local git repo after checking that all dbt tests have run successfully.

locals-imac:ra_dw markrittman$ dbt run --models harvest_time_entries harvest_invoices --target dev
Running with dbt=0.13.1
Found 50 models, 8 tests, 0 archives, 0 analyses, 109 macros, 0 operations, 0 seed files, 35 sources

20:53:11 | Concurrency: 1 threads (target='dev')
20:53:11 | 
20:53:11 | 1 of 2 START table model ra_data_warehouse_dbt_dev.harvest_invoices.. [RUN]
20:53:13 | 1 of 2 OK created table model ra_data_warehouse_dbt_dev.harvest_invoices [OK in 1.80s]
20:53:13 | 2 of 2 START table model ra_data_warehouse_dbt_dev.harvest_time_entries [RUN]
20:53:15 | 2 of 2 OK created table model ra_data_warehouse_dbt_dev.harvest_time_entries [OK in 1.70s]
20:53:15 | 
20:53:15 | Finished running 2 table models in 5.26s.

Completed successfully

locals-imac:ra_dw markrittman$ dbt test --models harvest_time_entries harvest_invoices --target dev
Running with dbt=0.13.1
Found 50 models, 8 tests, 0 archives, 0 analyses, 109 macros, 0 operations, 0 seed files, 35 sources

20:53:37 | Concurrency: 1 threads (target='dev')
20:53:37 | 
20:53:37 | 1 of 4 START test not_null_harvest_invoices_id....................... [RUN]
20:53:39 | 1 of 4 PASS not_null_harvest_invoices_id............................. [PASS in 1.50s]
20:53:39 | 2 of 4 START test not_null_harvest_time_entries_id................... [RUN]
20:53:40 | 2 of 4 PASS not_null_harvest_time_entries_id......................... [PASS in 0.89s]
20:53:40 | 3 of 4 START test unique_harvest_invoices_id......................... [RUN]
20:53:41 | 3 of 4 PASS unique_harvest_invoices_id............................... [PASS in 1.08s]
20:53:41 | 4 of 4 START test unique_harvest_time_entries_id..................... [RUN]
20:53:42 | 4 of 4 PASS unique_harvest_time_entries_id........................... [PASS in 0.83s]
20:53:42 | 
20:53:42 | Finished running 4 tests in 5.27s.

Completed successfully

4. All dbt transformations at this stage are being deployed to our development BigQuery dataset.

5. When the feature was ready for deployment to our production BigQuery dataset, the developer would push the changes in their local branch to the remote git repo, creating that branch if it didn't already exist.


6. Then they'd create a pull request using Github Desktop and the Github web UI, summarising the changes and new features added by the development branch.


7. I'd then review the PR, work out whether the changes were safe to merge into the master branch and accept the pull request; overnight, our dbtCloud service would clone the master branch of that git repo and attempt to deploy the new set of transformations to our production BigQuery dataset.

And sometimes, if for whatever reason that feature branch hadn’t been properly build-tested, that scheduled overnight deployment would then fail.


So, having noticed the new Build on Pull Request feature that comes with paid versions of dbtCloud, we upgraded from the free version to the $100/month basic paid plan and added an automated "continuous integration" build test to our feature-branch development process, so that it now looks like this:


To set this automated build test feature up, we first linked our Github account to dbtCloud and then created a new dbtCloud job that triggers when a user submits a pull request.


8. Now, when I come to review the pull request for this feature branch, there’s an automated test added by dbtCloud that checks to see whether this new version of my dbt package deploys without errors.


9. Looking at the details behind this automated build test, I can see that dbtCloud has created a dataset and run through a full package deployment and test cycle, avoiding any risk of breaking our production dbt environment should that test deployment fail.


10. Checking back on the status of the test build in Github, I can see that it completed with no issues, so I'm safe to merge the changes in this feature branch into our master branch, ready for dbtCloud to pick up those changes overnight and deploy them as scheduled into production.


Docs for this recent dbtCloud feature are online here, and you'll need at least the basic $100/month paid version of dbtCloud for the feature to become available in the web UI. It's also dependent on Github as the git hosting service, so it's not available as an option if you're using Gitlab, Bitbucket or CodeCommit.

Drill to Detail Ep.66 'ETL, Incorta and the Death of the Star Schema' with Special Guest Matthew Halliday

Mark Rittman is joined by Matthew Halliday from Incorta in this latest episode of the Drill to Detail Podcast to talk about the challenge of ETL and analytics on complex relational OLTP data models, previous attempts to solve these problems with products such as Oracle Essbase and Oracle E-Business Suite Extensions for Oracle Endeca and how those experiences, and others, led to his current role as co-founder and VP of Products at Incorta.

Customer Journey and Lifetime Value Analytics using Looker, Google BigQuery, Stitch and dbt

One of the most common analytics use-cases I come across on client projects is understanding the "customer journey": the series of stages prospects and customers typically go through, from becoming aware of a company's products and services through to consideration, conversion, up-sell, repurchase and, eventually, churn.

Customers typically interact with multiple digital and offline channels over their overall lifetime, and if we take the model of a consulting services business such as Rittman Analytics, a typical customer journey and use of those touch-points might look like the diagram below.


Each customer and prospect interaction with those touch-points sends signals about their buying intentions, product and service preferences and readiness to move through to the next stage in the purchase funnel.


Creating visualizations such as these in Looker typically involves two steps of data transformation, either through a set of Looker (persistent) derived tables or, as we've done for our own operational analytics platform, through two dbt (Data Build Tool) models sequenced together as a transformation graph.

To start, we select a common set of columns from each digital touchpoint channel as brought in by our project data pipeline tool, Stitch: HubSpot for inbound and outbound marketing communications together with sales activity and closed/lost deals; Harvest for billable and non-billable client days; Segment for website visits; and so on. For each source we turn each type of customer activity it records into an event type, event details and a monetary value (revenue, cost etc.) that we can later use to measure the overall lifetime value of each client.


Note the expressions at the start of the model definition that define a number of recency measures (days since last billable day, last contact day and so on), along with another set that are used for defining customer cohorts and bucketing activity into months and weeks since their first engagement dates.

SELECT
    *,
    {{ dbt_utils.datediff('last_billable_day_ts', 'current_timestamp()', 'day') }} AS days_since_last_billable_day,
    {{ dbt_utils.datediff('last_incoming_email_ts', 'current_timestamp()', 'day') }} AS days_since_last_incoming_email,
    {{ dbt_utils.datediff('last_outgoing_email_ts', 'current_timestamp()', 'day') }} AS days_since_last_outgoing_email,
    {{ dbt_utils.datediff('last_site_visit_day_ts', 'current_timestamp()', 'month') }} AS months_since_last_site_visit_day,
    {{ dbt_utils.datediff('last_site_visit_day_ts', 'current_timestamp()', 'week') }} AS weeks_since_last_site_visit_day
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY event_ts) AS event_seq,
        MIN(CASE WHEN event_type = 'Billable Day' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS first_billable_day_ts,
        MAX(CASE WHEN event_type = 'Billable Day' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS last_billable_day_ts,
        MIN(CASE WHEN event_type = 'Client Invoiced' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS first_invoice_day_ts,
        MAX(CASE WHEN event_type = 'Client Invoiced' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS last_invoice_day_ts,
        MAX(CASE WHEN event_type = 'Site Visited' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS last_site_visit_day_ts,
        MAX(CASE WHEN event_type = 'Incoming Email' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS last_incoming_email_ts,
        MAX(CASE WHEN event_type = 'Outgoing Email' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS last_outgoing_email_ts,
        MIN(CASE WHEN event_type LIKE '%Email%' THEN event_ts END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS first_contact_ts,
        DATE_DIFF(DATE(event_ts), MIN(CASE WHEN event_type = 'Billable Day' THEN DATE(event_ts) END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }}, MONTH) AS months_since_first_billable_day,
        DATE_DIFF(DATE(event_ts), MIN(CASE WHEN event_type = 'Billable Day' THEN DATE(event_ts) END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }}, WEEK) AS weeks_since_first_billable_day,
        DATE_DIFF(DATE(event_ts), MIN(CASE WHEN event_type LIKE '%Email%' THEN DATE(event_ts) END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }}, MONTH) AS months_since_first_contact_day,
        DATE_DIFF(DATE(event_ts), MIN(CASE WHEN event_type LIKE '%Email%' THEN DATE(event_ts) END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }}, WEEK) AS weeks_since_first_contact_day,
        MAX(CASE WHEN event_type = 'Billable Day' THEN true ELSE false END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS billable_client,
        MAX(CASE WHEN event_type LIKE '%Sales%' THEN true ELSE false END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS sales_prospect,
        MAX(CASE WHEN event_type LIKE '%Site Visited%' THEN true ELSE false END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS site_visitor,
        MAX(CASE WHEN event_details LIKE '%Blog%' THEN true ELSE false END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS blog_reader,
        MAX(CASE WHEN event_type LIKE '%Podcast%' THEN true ELSE false END)
            {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS podcast_reader
    FROM (
        -- sales opportunity stages
        SELECT
            deals.lastmodifieddate AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            deals.dealname AS event_details,
            deals.dealstage AS event_type,
            AVG(deals.amount) AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('hubspot_deals') }} AS deals
            ON customer_master.hubspot_company_id = deals.associatedcompanyids
        LEFT JOIN {{ ref('hubspot_owners') }} AS owners
            ON deals.hubspot_owner_id = CAST(owners.ownerid AS STRING)
        WHERE deals.lastmodifieddate IS NOT null
        {{ dbt_utils.group_by(n=5) }}

        UNION ALL
        -- consulting days
        SELECT
            time_entries.spent_date AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            projects.name AS event_details,
            CASE WHEN time_entries.billable THEN 'Billable Day' ELSE 'Non-Billable Day' END AS event_type,
            time_entries.hours * time_entries.billable_rate AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('harvest_projects') }} AS projects
            ON customer_master.harvest_customer_id = projects.client_id
        LEFT JOIN {{ ref('harvest_time_entries') }} AS time_entries
            ON time_entries.project_id = projects.id
        WHERE time_entries.spent_date IS NOT null

        UNION ALL
        -- incoming and outgoing emails
        SELECT
            communications.communication_timestamp AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            communications.communications_subject AS event_details,
            CASE WHEN communications.communication_type = 'INCOMING_EMAIL' THEN 'Incoming Email'
                 WHEN communications.communication_type = 'EMAIL' THEN 'Outgoing Email'
                 ELSE communications.communications_subject
            END AS event_type,
            1 AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('hubspot_communications') }} AS communications
            ON customer_master.hubspot_company_id = communications.hubspot_company_id
        WHERE communications.communication_timestamp IS NOT null

        UNION ALL
        -- client invoices
        SELECT
            invoices.issue_date AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            invoices.subject AS event_details,
            'Client Invoiced' AS event_type,
            SUM(invoices.amount) AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('harvest_invoices') }} AS invoices
            ON customer_master.harvest_customer_id = invoices.client_id
        WHERE invoices.issue_date IS NOT null
        {{ dbt_utils.group_by(n=5) }}

        UNION ALL
        -- client credit notes
        SELECT
            invoices.issue_date AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            invoice_line_items.description AS event_details,
            'Client Credited' AS event_type,
            COALESCE(SUM(invoice_line_items.amount), 0) AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('harvest_invoices') }} AS invoices
            ON customer_master.harvest_customer_id = invoices.client_id
        LEFT JOIN {{ ref('harvest_invoice_line_items') }} AS invoice_line_items
            ON invoices.id = invoice_line_items.invoice_id
        {{ dbt_utils.group_by(n=5) }}
        HAVING (COALESCE(SUM(invoice_line_items.amount), 0) < 0)

        UNION ALL
        -- website page views
        SELECT
            pageviews.timestamp AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            pageviews.page_subcategory AS event_details,
            'Site Visited' AS event_type,
            SUM(1) AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('pageviews') }} AS pageviews
            ON customer_master.customer_name = pageviews.customer_name
        {{ dbt_utils.group_by(n=5) }}

        UNION ALL
        -- client payments
        SELECT
            invoices.paid_at AS event_ts,
            customer_master.customer_id AS customer_id,
            customer_master.customer_name AS customer_name,
            invoices.subject AS event_details,
            CASE WHEN invoices.paid_at <= invoices.due_date THEN 'Client Paid' ELSE 'Client Paid Late' END AS event_type,
            SUM(invoices.amount) AS event_value
        FROM {{ ref('customer_master') }} AS customer_master
        LEFT JOIN {{ ref('harvest_invoices') }} AS invoices
            ON customer_master.harvest_customer_id = invoices.client_id
        WHERE invoices.paid_at IS NOT null
        {{ dbt_utils.group_by(n=5) }}
    ) customer_events
    WHERE customer_name NOT IN ('Rittman Analytics', 'MJR Analytics')
)

Deploying this model and then querying the resulting table, filtering on just a single customer and ordering rows by timestamp, you can see the history of all interactions by that customer with our sales and delivery channels.


You can also plot all of these activity types into a combined area and bar chart, for example to show billable days initially growing from zero to a level we then maintain over several months, with events such as the client being invoiced, incoming and outgoing communications, and the client finally paying in full at the end.


We can also restrict the event types reported on to just billable days, and then use Looker's dimension group and timeframes feature to split the last set of weeks into days and week numbers, showing client utilisation over that period.
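
The day and week-number splits come from a standard LookML dimension group over the event timestamp; a minimal sketch (view and column names hypothetical) looks like this:

dimension_group: event {
  type: time
  timeframes: [raw, date, day_of_week, week, week_of_year, month, year]
  sql: ${TABLE}.event_ts ;;
}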


Of course we can also take those monetary values assigned to events and use them to calculate the overall revenue contribution for this customer, factoring in costs incurred around preparing sales proposals, answering support questions and investing non-billable days in getting a project over the line.


Our second dbt model takes this event data and pivots each customer’s events into a series of separate columns, one for the first event in sequence, another for the second, another for the third and so on.


Again, code for this dbt model is available in our project git repo.

WITH event_type_seq_final AS (
    SELECT
        customer_id,
        customer_name,
        event_ts,
        event_type,
        user_session_id AS event_type_seq
    FROM (
        SELECT
            customer_id,
            customer_name,
            event_ts,
            event_type,
            is_new_session,
            SUM(is_new_session) OVER (ORDER BY customer_id ASC, event_ts ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS global_session_id,
            SUM(is_new_session) {{ customer_window_over('customer_id', 'event_ts', 'ASC') }} AS user_session_id
        FROM (
            SELECT
                *,
                CASE WHEN event_type != LAG(event_type, 1) OVER (PARTITION BY customer_id ORDER BY event_ts ASC) OR last_event IS NULL THEN 1
                ELSE 0
                END AS is_new_session
            FROM (
                SELECT
                    customer_id,
                    customer_name,
                    event_ts,
                    CASE WHEN event_type LIKE '%Email%' THEN 'Presales'
                    WHEN event_type LIKE '%Bill%' THEN 'Delivery' ELSE event_type END AS event_type,
                    LAG(CASE WHEN event_type LIKE '%Email%' THEN 'Presales' WHEN event_type LIKE '%Bill%' THEN 'Delivery' ELSE event_type END, 1) OVER (PARTITION BY customer_id ORDER BY event_ts ASC) AS last_event
                FROM {{ ref('customer_events') }}
                ORDER BY
                    customer_id ASC,
                    event_ts ASC,
                    event_type ASC
                ) last
             ORDER BY
                customer_id ASC,
                event_ts ASC,
                event_type ASC
           ) final
        ORDER BY
            customer_id ASC,
            is_new_session DESC,
            event_ts ASC
    )
    WHERE
        is_new_session = 1
)
SELECT
    customer_id,
    customer_name,
    MAX(event_type_1) AS event_type_1,
    MAX(event_type_2) AS event_type_2,
    MAX(event_type_3) AS event_type_3,
    MAX(event_type_4) AS event_type_4,
    MAX(event_type_5) AS event_type_5,
    MAX(event_type_6) AS event_type_6,
    MAX(event_type_7) AS event_type_7,
    MAX(event_type_8) AS event_type_8,
    MAX(event_type_9) AS event_type_9,
    MAX(event_type_10) AS event_type_10
FROM (
    SELECT
        customer_id,
        customer_name,
        CASE WHEN event_type_seq = 1 THEN event_type END AS event_type_1,
        CASE WHEN event_type_seq = 2 THEN event_type END AS event_type_2,
        CASE WHEN event_type_seq = 3 THEN event_type END AS event_type_3,
        CASE WHEN event_type_seq = 4 THEN event_type END AS event_type_4,
        CASE WHEN event_type_seq = 5 THEN event_type END AS event_type_5,
        CASE WHEN event_type_seq = 6 THEN event_type END AS event_type_6,
        CASE WHEN event_type_seq = 7 THEN event_type END AS event_type_7,
        CASE WHEN event_type_seq = 8 THEN event_type END AS event_type_8,
        CASE WHEN event_type_seq = 9 THEN event_type END AS event_type_9,
        CASE WHEN event_type_seq = 10 THEN event_type END AS event_type_10
    FROM event_type_seq_final
)
{{ dbt_utils.group_by(n=2) }}

Pivoting each customer’s events in this way makes it possible then to visualize the first n events for a given set of customers using Looker’s Sankey custom visualization, as shown in the final screenshot below.


More details on our own internal use of Looker, Stitch, dbt and Google BigQuery to create a modern operational analytics platform can be found in this earlier blog post, and if you’re interested in how we might help you get started with Looker, check out our Services page.

How Rittman Analytics does Analytics: Modern BI Stack Operational Analytics using Looker, Stitch, dbt and Google BigQuery

As well as creating modern BI stack analytics solutions for clients such as Florence, Let's Do This and Colourpop using technologies from our partners at Looker, Stitch, Segment, Fivetran and Snowflake, it's probably no surprise that we've used those same technologies and project experience to build out our own internal analytics platform, covering data and operational analytics use-cases such as:

  • Showing high-level KPIs that track progress towards our four main business objectives

  • Providing actionable context for these high-level KPIs and the ability to drill-into the detail behind them

  • Enabling ad-hoc querying and analysis of our complete dataset by technical and non-technical users

  • Providing a trusted, integrated analysis-ready data platform for other internal projects


I thought it might be of interest to go through how we've applied the tools and techniques we use on client projects to deliver an analytics solution for our own internal use, sharing some of our thinking around how we defined our underlying KPI framework, the tools and design patterns we used when building out the data management part of the platform, and some of the front-end components and data visualizations we've put together using the Looker BI tool.

Defining your KPI Framework

When creating any sort of analytics solution, the first thing you need to understand is: what are your business objectives, what do you need to measure to understand your progress towards those objectives, and how do you then define success?

Key Performance Indicators (KPIs) and measures within our analytics platform are based around what's needed to measure our progress towards four business objectives we set ourselves for FY2019:

  1. Increase Billing Revenue

  2. Increase Profitability

  3. Increase customer retention, and

  4. Increase our operational and delivery efficiency

In practice, we "certify" data around these four primary KPIs along with a number of secondary measures, with data in the platform normally up to date as of the previous business day and with historical data going back to when we started the business.


Other metrics, measures and dimension attributes are provided alongside these certified KPIs and measures on a "best endeavours" basis in terms of our certifying their accuracy, and users are then of course able to create their own calculations for use in reports and analyses outside of this formal curation.

Platform Architecture

At a high-level our internal analytics platform extracts data on a regular basis from the SaaS-based services we use to run our business operations, joins that data together so that each dataset uses the same definition of clients, products and other reference data, then makes that combined dataset available for analysis as a set of business KPIs and an ad-hoc query environment.


The design of our internal analytics platform was based around a modular BI technology stack, ELT ("Extract, Land and Transform") data warehouse loading pattern and modern software development approach that we use on all of our client analytics development engagements.

Data Pipeline and Source Connectivity

We’re digital-first, paperless and run our business operations using cloud-hosted software services such as:

  • Harvest for timesheets, project invoicing and client-rechargeable expenses

  • Harvest Forecast for project resource planning and revenue forecasts

  • Xero for accounting and payroll

  • Hubspot CRM for sales and contact management

  • Docusign for contract management

  • Jira for project management and issue tracking

  • BambooHR for human resources and recruitment

On the delivery side, Rittman Analytics partners with four different data pipeline-as-a-Service software vendors, each of which targets a different segment of the market and that segment’s particular use-cases:

  • Stitch, who focus on small to mid-market startup customers with data engineers who value service extensibility and their land-and-expand pricing model ($100/month for up to five sources, including Hubspot and Xero certified and Harvest community-supported connectors)

  • Fivetran, aimed more at mid-market to enterprise customers and the data analyst personas, with a more turnkey solution and increased levels of service and coverage of enterprise data sources

  • Segment, more of an "enterprise service bus" for event-level digital data streams, connecting multiple event producers with downstream event consumers and offering additional features around customer journey mapping, personas and schema validation

  • Supermetrics, aimed at growth hackers and digital marketers with a focus on simple self-service setup, comprehensive data dictionaries and a comprehensive catalog of advertising, marketing and social data sources delivered through a Google Sheets add-in or more recently, a direct connector into Google BigQuery through the GCP Marketplace.

We chose Stitch's service for our data pipeline on this internal project due to its out-of-the-box ability to source data from Harvest and Harvest Forecast; functionally, both Fivetran and Stitch would meet our needs apart from support for these two particular sources, and we've used Fivetran extensively on other internal and client projects in the past.


We also use Segment's event tracking service to track visitor activity on our website and land those raw tracking events in the same BigQuery project, and we intend to make use of Segment's Personas product in due course to build out 360-degree views of our clients, site visitors and prospects based on their interactions across all of our digital touchpoints (web, social media, inbound and outbound lead generation, chat and so on).

Data Transformation, Data Integration and Orchestration

We transform and integrate raw data that Stitch syncs from each of our SaaS application sources into an integrated, query-orientated data warehouse dataset using the open-source dbt toolkit.

Once we'd decided to work with a data pipeline-as-a-service such as Stitch together with a SQL-based data management platform like Google BigQuery, transforming and integrating our data via a series of SQL SELECT statements was the obvious next design choice; by using dbt and version-controlling our scripts in a Github repository, we increased our productivity when developing these transformations, adhered to modern software design principles and avoided cut-and-paste scripting in favour of templated, maintainable code.


Transformation steps in dbt are basically SQL SELECTs, with references to metadata models replacing hard-coded table names, and macros providing reusable, often cross-database transformation components, giving three main benefits over hand-written, author-specific code:

  • Increased analyst productivity through leveraging libraries of best-practice transformation components

  • More maintainable loading processes with concise logic that “doesn’t repeat yourself” (DRY)

  • Portable code that can be deployed on BigQuery today or Snowflake tomorrow if required

For example, the snippet below uses the dbt_utils.datediff macro and ref() references, and keeps only the latest version of each record synced by Stitch:

SELECT
    *
FROM (
    SELECT
        {{ dbt_utils.datediff('start_date', 'end_date', 'day') }} + 1 AS forecast_days,
        a._sdc_sequence,
        -- latest Stitch-synced version of each assignment record
        MAX(a._sdc_sequence) OVER (PARTITION BY a.id ORDER BY a._sdc_sequence
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS latest_sdc_sequence
    FROM
        {{ ref('harvest_forecast_assignments') }} a
    INNER JOIN
        {{ ref('harvest_forecast_people') }} p
        ON person_id = p.id
)
WHERE
    _sdc_sequence = latest_sdc_sequence

In addition to the open-source dbt toolkit we also use the commercial dbtCloud service from our friends at Fishtown Analytics, mainly so that we can execute our transformation graph in the cloud on a schedule, like this:


dbtCloud also hosts the data dictionary automatically generated from the metadata in our dbt package, and diagrams the transformation graph and data lineage we’ve defined through the dependencies we created between each of the models in the package.


As well as standardising each data source and mapping each source's set of customer references into a single, master customer lookup table that allows us to analyze operations from sales prospect through project delivery to invoicing and payment, we also turn every customer touchpoint into events recorded against a single view of each customer, like this:


We then assign a financial revenue or cost value to each event in the customer journey, calculate metrics such as the time between interactions (to measure engagement) and the value delivered to the client, and use that information to better understand and segment our clients so that we can provide the advice and service they're looking for at their particular stage in their lifecycle and relationship with our team.
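
The "time between interactions" metric, for example, is just a window function over that events table; a simplified sketch against the customer events model described earlier (model and column names illustrative) might be:

SELECT
    customer_id,
    event_type,
    event_ts,
    -- days elapsed since this customer's previous recorded touchpoint
    DATE_DIFF(DATE(event_ts),
              DATE(LAG(event_ts) OVER (PARTITION BY customer_id ORDER BY event_ts)),
              DAY) AS days_since_previous_interaction
FROM {{ ref('customer_events') }}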


Analytics and Data Visualization

We then use Looker as our primary analytics tool over the data prepared by dbt and stored in Google BigQuery. Everyone's homepage in Looker is set by default to show those four main operational KPIs, with comparisons to target and the last six months' trend history, providing context that, combined with Looker's ability to go deep into the detail behind those headline numbers, lets users decide and act on what they're seeing (note that I've altered and obfuscated our actual numbers in the next few screenshots).


We've also created a single business operations-wide Looker explore that enables users to analyze and explore all aspects of our sales, delivery and financial relationship with clients, suppliers, partners and prospects.

In the example below we're again using the event model, applying sequencing to those events bucketed by months since first billable day, to help us understand what the demand decay curve looks like for each cohort of clients we've taken on over our first year of operation.


Embedded Analytics using Powered by Looker

We also link fields in our Looker explore back to the online services that can provide more background detail on a metric displayed on screen; in the example below we link back to Harvest to see the details of the project displayed in the explore data visualisation.


Finally, for team members who are more likely to be working within our knowledge-sharing and internal company portal site, we use Powered by Looker embedded analytics within Notion, connecting our internal analytics metrics and insights directly into the productivity and workflow tools used by the team, and bringing data and analytics to the problems they're looking to solve rather than the other way around.


So, hopefully this look behind the scenes at how we use these modern analytics stack technologies for our own analytics needs has been useful. Feel free to leave comments or contact me if any of this would be applicable to your business. In the meantime, the dbt and Looker git repos used to build out our platform are available as public repos on Github:

Feel free to fork, improve on what we’ve done, submit PRs or just see a bit more of the detail behind the examples in this post.