Introduction to Terraform

Imagine you are a customer of a cloud provider and you want to spin up some machines. You can go into a web console, fill in some forms, click some buttons and launch an instance. But you can also use Terraform.

Terraform lets you do the same thing in code, i.e., Infrastructure as Code. It automates your infrastructure and keeps it in a desired state. For example, if you want to spin up 5 small instances, then whenever you run Terraform it will ensure that those instances are running on the cloud platform. If something is changed manually, Terraform will try to reconcile the actual infrastructure with the code.

It also makes your infrastructure auditable. By just looking at the .tf files, we can see what the infrastructure is made of. Even better, we can keep the changes in a version control system, e.g., git.

Download and Install

Terraform can be downloaded and installed on different operating systems; please check the official website for instructions.

After downloading and installing Terraform, let's verify it.

Run "terraform -v" in a terminal. If the result shows the version, Terraform is installed and everything works properly. However, if the result is "terraform is not a command" or anything like that, there is an issue with the installation.
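A successful check looks roughly like this (the exact version on your machine will differ):

$ terraform -v
Terraform v1.5.7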

The last step of the setup is to use an IDE to manage the code, e.g., Visual Studio.

What language does Terraform support?

Terraform's language is called HashiCorp Configuration Language (HCL) and it is written in files with a .tf extension, so all of our Terraform code will be stored in .tf files.

Let’s start!

Now we can create a new project in our own IDE and name it. Then we create a file called main.tf.

At this point, the providers Terraform supports can be checked on its website.

For example, if we click into the AWS Provider, we can see an example of how to configure it.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

We can copy and paste the "Configure the AWS Provider" part of the code into our main.tf file, check which AWS region we are in, and modify the region value accordingly.

Now we have the provider set up. If we go back to the Terraform tutorial, it continues to explain how to set things up, and the next thing to configure is authentication.

Let's first have a look at the hard-coded way: static credentials. It is not recommended, because if we publish the .tf file to GitHub or somewhere else, the credentials will be stored there, which creates a security vulnerability.

We use this way for now just to keep things simple, but later we will switch to a more secure approach.

From the tutorial, we can see that we need three parameter values: region, "access_key" and "secret_key". We can go to the AWS console and find this information in the Identity and Access Management (IAM) service.

We can create an access key in the IAM service, click "show access key", and store the two values: "access_key" and "secret_key".
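With those two values, the provider block with static credentials looks roughly like this (a sketch only; the values below are placeholders):

provider "aws" {
  region     = "us-east-1"
  access_key = "my-access-key"   # placeholder - paste your own access key here
  secret_key = "my-secret-key"   # placeholder - paste your own secret key here
}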

Then let's try to create and provision resources within AWS. The syntax is quite simple and stays the same whether you configure Azure, AWS or GCP in Terraform.

The resource syntax is shown below:

resource "<provider>_<resource_type>""name"{
    config options....
    key = "value"
    key2= "another value"
}

So this is how to create a resource within a provider. Let's try to create and deploy an EC2 instance with Terraform.

We need to refer back to the documentation now, because we need the correct value for "resource_type":

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}

We can find such an example in the documentation, and it is roughly the minimum we need. We can copy and paste this code into our main.tf file.

The "resource_type" in the example is "aws_instance". We can replace "web" with whatever name we want to give the resource. The "ami" value can be found when launching an instance in the EC2 service on AWS, and we put that value into the example. The "instance_type" is the same one we would select in the AWS console.

As stated in the documentation, "tags" is optional, so we can delete it.
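After those replacements, the resource block in main.tf looks roughly like this (the name and AMI ID are placeholders; use the AMI ID shown in your own EC2 console):

resource "aws_instance" "my_first_server" {
  ami           = "ami-0123456789abcdef0"   # placeholder - copy the AMI ID from the EC2 launch page
  instance_type = "t2.micro"
}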

The next step is to open the terminal of your IDE, which is convenient because it is already in your project directory. The first Terraform command we need to learn is "terraform init". This command looks at the configuration, which is all the .tf files in our project. Because we only have one provider, AWS, it will download all the necessary plugins to interact with the AWS API.

After we run it in the terminal, we can see it initializing the backend, initializing provider plugins and so on. If we added another provider, for example Azure, it would also download the Azure plugin.

Once that succeeds, the second Terraform command to know is "terraform plan". Even though it is completely optional, it is a quick sanity check to make sure we won't break anything. If we look at the output, it colour-codes things depending on the action:

  • "+" means creating a resource
  • "-" means deleting a resource
  • "~" means modifying an existing resource

The final command is "terraform apply". It will ask you to type "yes" and then it will create our server, finishing with "Apply complete! Resources: 1 added, 0 changed, 0 destroyed."
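Putting the workflow together, the three commands run in this order:

terraform init    # download the provider plugins (AWS in our case)
terraform plan    # optional dry run: preview what will be created, changed or destroyed
terraform apply   # prompt for "yes", then create the resources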

Then we can verify it in our AWS console. At this point, we can see the first EC2 instance is deployed through Terraform.

There is more to learn in Terraform; maybe we will talk about it in the next blog.

If you are interested in or have any problems with Terraform, feel free to contact me.

Or you can connect with me through my LinkedIn.

CloudFormation 101

What is CloudFormation?

CloudFormation is a tool which can spin up resources on AWS. If someone wants to be an AWS expert, CloudFormation is an essential service to master.

Before we jump into writing a CloudFormation template, let's have a brief look at how AWS infrastructure was managed before CloudFormation.

Without CloudFormation, you either log in to the AWS console and provision servers manually, or spend time building your own tools to assist with automation, which is time-consuming.

Writing some scripts may be a fast way to get the job done, and it generally works fine at a small scale; however, once we need to manage more systems in more environments, it becomes tedious.

So that is why configuration management tools exist.

There are some popular tools like Chef, Puppet, Salt etc. They can be used to maintain consistency and track changes. However, their disadvantage is that they are not necessarily needed for containerized application deployments.

So there is another way: Infrastructure as Code

Infrastructure as Code is all about having a single source of truth that serves as a blueprint for what we want the infrastructure to look like.

Templates are written using a DSL that describes resources and relationships, e.g., CloudFormation and Terraform.

Their pros:

  • Consistency
  • Auditing
  • Compliance
  • Rollbacks

What languages does CloudFormation support?

The answer is JSON/YAML.

Let's take a look at a JSON template (a template represents a stack, and the components within the stack are known as resources):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Create a S3 bucket",
  "Resources": {
    "datalake": {
      "Type": "AWS::S3::Bucket"
    }
  }
}

Inside Resources, uniquely named keys are mapped to specific AWS resources. In our example, this CloudFormation template creates an S3 bucket in AWS.
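Since the answer above is JSON/YAML, the same stack could also be written in YAML, roughly like this:

AWSTemplateFormatVersion: "2010-09-09"
Description: Create a S3 bucket
Resources:
  datalake:
    Type: AWS::S3::Bucket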

Functions:

CloudFormation also supports intrinsic functions; the most useful is Ref, which is used to pass a value from a parameter or another resource into a property.
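For example, here is a minimal sketch of how Ref could pass a bucket name into the S3 bucket from the template above (the parameter name "BucketNameParam" is made up for illustration):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "BucketNameParam": {
      "Type": "String",
      "Description": "Name to give the bucket"
    }
  },
  "Resources": {
    "datalake": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": { "Ref": "BucketNameParam" }
      }
    }
  }
}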

Next blog we will continue our journey with AWS.

If you are interested in or have any problems with CloudFormation, feel free to contact me.

Or you can connect with me through my LinkedIn.

Medium-Hard Data Analyst SQL questions (Part 1)

As we know, the first 70% of SQL is pretty straightforward but the remaining 30% can be pretty tricky.

So, in this blog, some popular hard SQL interview questions will be covered for people to sharpen their skills.

Self-Join Practice Problems

Part 1: How much does a key metric, e.g., monthly active users, change between months? For example, a table named 'logins' is shown below.

user_id    date
1          2020-07-01
234        2020-07-02
3          2020-07-02
1          2020-07-02
234        2020-10-04

table: logins

Q: Find the month-over-month percentage change for monthly active users.

Solution:

WITH mau AS
(
  -- monthly active users: count distinct users who logged in during each calendar month
  SELECT
    DATE_TRUNC('month', date) AS month_timestamp,
    COUNT(DISTINCT user_id) AS mau
  FROM logins
  GROUP BY
    DATE_TRUNC('month', date)
)

-- self-join the CTE on consecutive months so each row pairs a month with the following one
SELECT
  a.month_timestamp AS previous_month,
  a.mau AS previous_mau,
  b.month_timestamp AS current_month,
  b.mau AS current_mau,
  ROUND(100.0 * (b.mau - a.mau) / a.mau, 2) AS percent_change
FROM mau a
JOIN mau b
  ON a.month_timestamp = b.month_timestamp - interval '1 month'

Next blog we will continue our journey with SQL medium-hard questions.

If you are interested in or have any problems with SQL, feel free to contact me.

Or you can connect with me through my LinkedIn.

Some query samples to interpret Google Analytics data using BigQuery

As a BI Analyst working at an online travel agency, turning customer behaviour data into meaningful insights is a business-as-usual task.

Google Analytics is a popular web analytics service for tracking website traffic. Therefore, for a BI & Reporting team, knowing how to interpret Google Analytics data is an essential skill.

On the technical side, SQL can be used in Google BigQuery (a cloud-based data warehousing platform) to generate insights from the Google Analytics export data.

Let's take a look at some query samples. First, some dynamic values need to be understood before we write the first query:

  • Dimension: a parameter used for analysing sessions/users/hits, e.g., device.deviceCategory, device.browser, hits.type
  • projectID: ID of the project in BigQuery
  • dataSetName: data set in BigQuery with the information about Google Analytics sessions.

So we can write a query sample to calculate the number of users, sessions and hits across the chosen dimensions:

SELECT
  {{dimension}},
  EXACT_COUNT_DISTINCT(fullVisitorId) AS users,
  EXACT_COUNT_DISTINCT(fullVisitorId_visitId) AS sessions,
  COUNT(hits.hitNumber) AS hits
FROM (
  SELECT
    fullVisitorId,
    CONCAT(fullVisitorId, '_', STRING(visitId)) AS fullVisitorId_visitId,
    hits.hitNumber,
    hits.type,
    device.browser,
    device.deviceCategory
  FROM
    [projectID:dataSetName.tableName]
)
GROUP BY
  {{dimension}}

Another sample is to set up a funnel with steps better suited to your business, similar to what we did for the first query. Here are the dynamic values we need to use:

  • projectID: ID of the project in BigQuery
  • dataSetName: data set in BigQuery with the information about Google Analytics sessions
  • tableName: name of the table with the standard BigQuery export data

Then we can write a query sample as follows:

SELECT
  table_step1.date AS date,
  table_step1.step1 AS count_step1,
  table_step2.step2 AS count_step2,
  table_step3.step3 AS count_step3
FROM (
  -- calculate transitions to Step 1 (Pageviews), grouped by days
  SELECT
    date,
    COUNT(*) AS step1
  FROM
    [projectID:dataSetName.tableName]
  WHERE
    hits.eCommerceAction.action_type = '2'
  GROUP BY
    date) AS table_step1
JOIN (
  -- calculate the number of transitions to Step 2 (Product description view) by days
  SELECT
    date,
    COUNT(*) AS step2
  FROM
    [projectID:dataSetName.tableName]
  WHERE
    hits.eventInfo.eventCategory CONTAINS 'Click'
    AND hits.eventInfo.eventAction CONTAINS 'Characteristics of product'
  GROUP BY
    date) AS table_step2
  ON
    table_step1.date = table_step2.date
JOIN (
  -- calculate transitions to Step 3 (Adding product to cart) by days
  SELECT
    date,
    COUNT(*) AS step3
  FROM
    [projectID:dataSetName.tableName]
  WHERE
    hits.eCommerceAction.action_type = '3'
  GROUP BY
    date) AS table_step3
  ON
    table_step1.date = table_step3.date

There are more things we can do to track website traffic with BigQuery; maybe we will talk about them in the next blog.

If you are interested in or have any problems with BigQuery, feel free to contact me.

Or you can connect with me through my LinkedIn.

Send Google Analytics payload length as a Custom Dimension

The maximum length of a Google Analytics payload is 8192 bytes. It is useful to check if you are approaching this value with some of your hits because if the payload length exceeds this, the hit is never sent to GA.

How can we know the payload size with each hit?

Today I will show you how to send the payload size as a custom dimension to GA with each hit. The tool is Google Tag Manager.

Before starting, create a new hit-scoped custom dimension in GA named 'Hit Payload Length' and note its index.

Then, create a custom task in GTM. A custom task lets you modify the request sent to GA before it is sent. We can take Client ID as an example.

The custom task works with a custom JavaScript variable. Here is the JavaScript code which we create in GTM.

function () {
  // clientIdIndex: The Custom Dimension index to where you want to send the visitor's Client ID, my example is 7.
  var clientIdIndex = 7;

  // payloadLengthIndex: The Custom Dimension index to where you want to send the length of the payload of the request,  my example is 18.

  var payloadLengthIndex = 18;

  // DO NOT EDIT ANYTHING BELOW THIS LINE
  var readFromStorage = function (key) {
    if (!window.Storage) {
      // From: https://stackoverflow.com/a/15724300/2367037
      var value = '; ' + document.cookie;
      var parts = value.split('; ' + key + '=');
      if (parts.length === 2) {
        return parts.pop().split(';').shift();
      }
    } else {
      return window.localStorage.getItem(key);
    }
  };

  var writeToStorage = function (key, value, expireDays) {
    if (!window.Storage) {
      var expiresDate = new Date();
      expiresDate.setDate(expiresDate.getDate() + expireDays);
      document.cookie = key + '=' + value + ';expires=' + expiresDate.toUTCString();
    } else {
      window.localStorage.setItem(key, value);
    }
  };

  var globalSendHitTaskName   = '_ga_originalSendHitTask';

  return function (customTaskModel) {

    window[globalSendHitTaskName] = window[globalSendHitTaskName] || customTaskModel.get('sendHitTask');

    // clientIdIndex
    if (typeof clientIdIndex === 'number') {
      customTaskModel.set('dimension' + clientIdIndex, customTaskModel.get('clientId'));
    }
    // /clientIdIndex

    customTaskModel.set('sendHitTask', function (sendHitTaskModel) {

      var originalSendHitTaskModel = sendHitTaskModel,
          originalSendHitTask      = window[globalSendHitTaskName],
          canSendHit               = true;

      try {

        // payloadLengthIndex
        if (typeof payloadLengthIndex === 'number') {
          var _pl_hitPayload = sendHitTaskModel.get('hitPayload');
          _pl_hitPayload += '&cd' + payloadLengthIndex + '=';
          _pl_hitPayload += (_pl_hitPayload.length + _pl_hitPayload.length.toString().length);
          sendHitTaskModel.set('hitPayload', _pl_hitPayload, true);
        }
        // /payloadLengthIndex

        if (canSendHit) {
          originalSendHitTask(sendHitTaskModel);
        }

      } catch(err) {
        originalSendHitTask(originalSendHitTaskModel);
      }

    });

  };
}

The last step is to add the custom task to your tags. Open a tag, scroll down to its fields, and add a new field named customTask whose value is the custom JavaScript variable we just created.

After this step, any tag which has this custom task will add the hit payload length as a custom dimension.

How to debug your work?

Verify it through your browser's developer tools:

  • Open the Network tab in the developer tools
  • Click the request sent to /collect
  • Check whether the payload contains the custom dimension parameter (cd18 in this example)

With this custom dimension, we can monitor whether hits are approaching the maximum payload size.

If you are interested in or have any problems with GTM, feel free to contact me.

Or you can connect with me through my LinkedIn.

Why we need data integration and what can we do?

The first question is: why do we need data integration?

Let me give you an example here to answer this question.

Every company has many departments, and different departments use different tools to store their data. For example, the marketing team may use HubSpot.

Now, we have different departments which store different types of data in a company.

However, insightful information is needed from those large amounts of data in order to make business decisions.

What can we do?

Maybe we can connect to all the databases every time we generate reports. However, that costs a large amount of time, and this is where the term data integration comes in.

What is data integration?

Data integration is a process in which heterogeneous data is retrieved and combined into a unified form and structure.

There are some advantages of data integration:

  1. Reduce data complexity
  2. Data integrity
  3. Easy data collaboration
  4. Smarter business decisions

There are also some well-known tools that can do data integration, e.g., Microsoft SQL Server Integration Services (SSIS), Informatica, Oracle Data Integrator, AWS Glue etc.

In this blog we will talk about SSIS, which is one of the most popular ETL tools in New Zealand.

Why SSIS?

  1. Data can be loaded in parallel to many varied destinations
  2. SSIS reduces the need for hard-core programming
  3. Tight integration with other products of Microsoft
  4. SSIS is cheaper than most other ETL tools

SSIS stands for SQL Server Integration Services, which is a component of the Microsoft SQL Server database software that can be used to perform a broad range of data integration and data transformation tasks.

What can SSIS do?

It combines the data residing in different sources and provides users with a unified view of this data.

It can also be used to automate maintenance of SQL Server databases and updates to multidimensional analytical data.

How does it work?

These data transformation and workflow creation tasks are carried out using an SSIS package:

Operational Data -> ETL -> Data Warehouse

In the first place, an operational data store (ODS) is a database designed to integrate data from multiple sources for additional operations on the data. This is where most of the data used in current operations is housed before it is transferred to the data warehouse for longer-term storage or archiving.

The next step is ETL (Extract, Transform and Load): the process of extracting the data from various sources, transforming it to meet your requirements and then loading it into a target data warehouse.

The third step is data warehousing: a large accumulated set of data used for assembling and managing data from various sources in order to answer business questions, and hence to help in making decisions.

What is SSIS package?

It is an object that implements integration services functionality to extract, transform and load data. A package is composed of:

  1. connections
  2. control flow elements (handle workflow)
  3. data flow elements (handle data transformation)

If you want to investigate more about SSIS, check it out on Microsoft official documents.

If you are interested in or have any problems with SSIS, feel free to contact me.

Or you can connect with me through my LinkedIn.

An advanced SQL mind map

Previously, I wrote a blog about a basic SQL mind map for people who want to kick-start their career in the business intelligence and data analysis industry.

As we know, SQL is essential to become a Business Intelligence Developer/Data Analyst.

So, in this blog, I drew an advanced SQL mind map for people who want to dive deeper into their SQL journey.

If you are interested in or have any problems with SQL, feel free to contact me.

Or you can connect with me through my LinkedIn.

How to design a Data Warehouse (Part 1)

In the last blog, I wrote about why we need a Data Warehouse.

First, what is a data warehouse?

It is a centralized relational database that pulls together data from different sources (CRM, marketing stack, etc.) for better business insights.

It stores current and historical data that are used for reporting and analysis.

However, here is the problem:

How can we design a Data Warehouse?

1 Define Business Requirements

Because it touches all areas of a company, all departments need to be on board with the design. Each department needs to understand the benefits of the data warehouse and what results they can expect from it.

What objectives we can focus on:

  1. Determine the scope of the whole project
  2. Find out what data is useful for analysis and where our data is currently siloed
  3. Create a backup plan in case of failure
  4. Security: monitoring, etc.

2 Choose a data warehouse platform

There are four types of data warehouse platforms:

  1. Traditional database management systems: Row-based relational platforms, e.g., Microsoft SQL Server
  2. Specialized Analytics DBMS: Columnar data stores designed specifically for managing and running analytics, e.g., Teradata
  3. Out-of-box data warehouse appliances: a combination of software and hardware with a DBMS pre-installed, e.g., Oracle Exadata
  4. Cloud-hosted data warehouse tools

We can choose the suitable one for the company according to budget, employees and infrastructure.

Should we choose cloud or on-premise?

Cloud solution pros:

  1. Scalability: easy, cost-effective, simple and flexible to scale with cloud services
  2. Low entry cost: no servers, hardware and operational cost
  3. Connectivity: easy to connect to other cloud services
  4. Security: cloud providers supply security patches and protocols to keep customers safe

Choices:

  1. Amazon Redshift
  2. Microsoft Azure SQL Data Warehouse
  3. Google BigQuery
  4. Snowflake Computing

On-premise solution pros:

  1. Reliability: With good staff and exceptional hardware, on-premise solutions can be highly available and reliable
  2. Security: Organizations have full control of the security and access

Choices:

  1. Oracle Database
  2. Microsoft SQL Server
  3. MySQL
  4. IBM DB2
  5. PostgreSQL

Whether we choose an on-premise or a cloud solution depends, in the big picture, on our budget and existing systems.

If we look for control, then we can choose an on-premise solution. Conversely, if we look for scalability, we can choose a cloud service.

3 Set up the physical environments

There are three physical environments in a Data Warehouse setup: development, testing and production.

  1. We need to test changes before they move into the production environment
  2. Running tests against data typically uses extreme data sets or random sets of data from the production environment.
  3. Data integrity is much easier to track and issues are easier to contain if we have three environments running.

4 Data Modelling

It is the most complex phase of Data Warehouse design. It is the process of visualizing data distribution in the warehouse.

  1. Visualize the relationships between data
  2. Set standardized naming conventions
  3. Create relationships between data sets
  4. Establish compliance and security processes

There are plenty of data modelling techniques that businesses use for data warehouse design. Here are the top 3 popular ones (a minimal Star Schema sketch follows the list):

  1. Snowflake Schema
  2. Star Schema
  3. Galaxy Schema
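As an illustration of the Star Schema idea, here is a minimal SQL sketch (the table and column names are made up): a central fact table that references the surrounding dimension tables.

-- dimension tables describe the "who/what/when" context
CREATE TABLE dim_date (
    date_key      INT PRIMARY KEY,
    calendar_date DATE,
    month_name    VARCHAR(20),
    year_number   INT
);

CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name VARCHAR(100),
    category     VARCHAR(50)
);

-- the fact table holds the measures and points at the dimensions
CREATE TABLE fact_sales (
    sales_key    INT PRIMARY KEY,
    date_key     INT REFERENCES dim_date (date_key),
    product_key  INT REFERENCES dim_product (product_key),
    quantity     INT,
    sales_amount DECIMAL(12, 2)
);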

5 Choosing the ETL solution

ETL, which stands for Extract, Transform and Load, is the process by which we pull data from the source storage systems into the warehouse.

We need to build an easy, replicable and consistent data pipeline because a poor ETL process can break the entire data warehouse.

Wrapping up

This post explored the first five steps of designing a Data Warehouse in a company. In the future, I will write about the next steps.

If you are interested in or have any problems with Data Warehouse, feel free to contact me.

Or you can connect with me through my LinkedIn.

CrUX Dashboard & Data Strategy Lifecycle

In the last blog I demonstrated the data pipeline we can use with CrUX to analyse site performance. That was from a BI developer perspective.

However, for a company, and especially the leadership team, what they want is the final dashboard generated by the BI department, so that a management plan can be derived from it.

I already wrote about how to query from BigQuery and which site speed metrics we can use in the CrUX introduction blog and the public dataset analysis blog.

So in this blog I will show you what kind of dashboard we can generate after the data collection from the Google public dataset and the ETL steps.

What data visualization tool do we need to use?

There are plenty of data visualization tools we can use, e.g., Data Studio, Power BI etc. This time I will take Tableau as an example.

I took www.flightcentre.co.nz (blue line) and www.rentalcars.com (red line) as the origins for comparison and set the customer's device to 'desktop' (we can also put a filter on it).

There are 4 sheets on the dashboard, i.e., Slow FCP Percentage, Fast FCP Percentage, Fast FID Percentage and Slow FID Percentage.

What do they actually mean? (A query sketch for one of these metrics follows the list.)

  1. Slow FCP Percentage (the percentage of users that experienced a first contentful paint time of 2.5 seconds or more)
  2. Fast FCP Percentage (the percentage of users that experienced a first contentful paint time of 1 second or less)
  3. Fast FID Percentage (the percentage of users that experienced a first input delay time of 50 ms or less)
  4. Slow FID Percentage (the percentage of users that experienced a first input delay time of 250 ms or more)
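For reference, a metric like Fast FCP Percentage can be computed straight from the CrUX dataset before it ever reaches Tableau. A sketch only, using the 202003 dataset from the related posts; the origin strings and the desktop filter are illustrative:

SELECT
  origin,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density) * 100, 2) AS fast_fcp_percentage
FROM
  `chrome-ux-report.all.202003`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  origin IN ('https://www.flightcentre.co.nz', 'https://www.rentalcars.com')
  AND form_factor.name = 'desktop'
GROUP BY
  origin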

From the dashboard graphs, we can roughly see that flightcentre has a faster site speed than rentalcars in terms of user experience.

What can we do in the next step?

After that, we can inform the developers and communicate the impact by showing exactly where the site is falling down. We can point to the fact that this data comes from real users and reflects how people actually experience the site.

The second part is the data strategy lifecycle in a company.

What is the data strategy lifecycle in a company?

Develop the strategy -> Create the roadmap -> Change management plan -> Analytics lifecycle -> Measurement plan

Perspectives:

  1. Scope and Purpose: What data will we manage? How much is our data worth? How do we measure success?
  2. Data collection: Archiving, what data where and when, single source of truth (data lake), integrating data silos
  3. Architecture: Real time vs batch, data sharing, data management and security, data modelling, visualization
  4. Insights and analysis: Data exploration, self-service, collaboration, managing results
  5. Data governance: Identify data owners, strategic leadership, data stewardship, data lineage, quality and cost
  6. Access and security: RBAC, encryption, PII, access processes, audits, regulatory
  7. Retention and SLAs: Data tiers and retention, SLAs to the business

Wrapping up

This post explored the CrUX dashboard a BI team can generate and the data strategy lifecycle in a company. In the future, I will write more.

If you are interested in or have any problems with CrUX or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

Tutorial: Using BigQuery to Analyze CrUX Data

In the last blog I gave some examples of how we can use the Chrome User Experience Report (CrUX) to gain insights about site speed. In this blog I will continue by showing you how to use BigQuery to compare your site with your competitors.

Prerequisite:

  1. Log into Google Cloud
  2. Create a project for the CrUX work
  3. Navigate to the BigQuery console
  4. Add the chrome-ux-report dataset and explore the way the tables are structured in 'preview'

Step one: Figure out the origin of your site and the competitor sites

LIKE syntax is preferred here (take care of the syntax differences between Standard SQL and T-SQL):

 -- created by: Jacqui Wu
  -- data source: Chrome-ux-report(202003)
  -- last update: 12/05/2020
  
SELECT
  DISTINCT origin
FROM
  `chrome-ux-report.all.202003`
WHERE
  origin LIKE '%yoursite'

Step two: Figure out what should be queried in the SELECT clause

What can we query from CrUX?

The specific elements that Google is sharing are:

  1. "Origin", which consists of the protocol and hostname, as we used in step one; this identifies the site's URL
  2. Effective Connection Type (4G, 3G, etc), which can be queried as the network
  3. Form Factor (desktop, mobile, tablet), which can be queried as the device
  4. Percentile Histogram data for First Paint, First Contentful Paint, DOM Content Loaded and onLoad (these are all nested, so if we want to query them, we need to unnest them)

Here I create a SQL query of the FCP (first contentful paint) distribution for different sites. FCP measures the time from navigation to when the browser renders the first bit of content from the DOM.

This is an important milestone for users because it provides feedback that the page is actually loading.

SQL query:

-- created by: Jacqui Wu
-- data source: Chrome-ux-report(202003) in different sites
-- last update: 12/05/2020
-- Comparing the FCP metric in different sites

SELECT
  origin,
  form_factor.name AS device,
  effective_connection_type.name AS conn,
  "first contentful paint" AS metric,
  bin.start / 1000 AS bin,
  SUM(bin.density) AS volume
FROM (
  SELECT
    origin,
    form_factor,
    effective_connection_type,
    first_contentful_paint.histogram.bin AS bins
  FROM
    `chrome-ux-report.all.202003`
  WHERE
    origin IN ("your site URL link", "competitor A site URL link", "competitor B site URL link")
)
CROSS JOIN UNNEST(bins) AS bin
GROUP BY origin, device, conn, bin

Step 3: Export the results to Data Studio (Google's visualization tool)

Here are some tips that may be useful:

  1. A line chart is preferred for comparing different sites in Visual Selection
  2. Set the x-axis to bin (which we already converted to seconds) and the y-axis to the percentage of FCP
  3. Set filters (origin, device, conn) in the Filtering section

Wrapping up

This post explored the data pipeline we can use with the CrUX report to analyse site performance. In the future, I will write more about CrUX.

If you are interested in or have any problems with CrUX or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.