A special Easter Day

On this special Easter Day, New Zealanders need to stay in our own ‘bubbles’.

So it is a good time to do some learning.

Pluralsight is now offering all courses free in April.

I completed the Google Analytics for Creative Professionals course on it.

I highly recommend its methodology:

  1. Look for top-level outliers + Mix & Match (segment)
  2. Go to pages and look for issues (technical, content & design)

The part about spotting ‘ghost spam’ in referrals and removing it with a regex is especially useful.
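For example (a minimal sketch; the domain here is hypothetical), an include filter on Hostname with a regex such as

^(www\.)?mysite\.com$

keeps only hits that carry a valid hostname, which ghost spam hits never do.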

In business, making decisions by combining statistics with insight and our knowledge of human wants and needs is called being data-informed.

Next target: Architecting Data Warehousing Solutions Using Google BigQuery

If you are interested in or have any problems with BigQuery or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

Analyzing Google Analytics Data in BigQuery (Part 1)

What is BigQuery?

The Google Cloud Platform family includes Google App Engine, Google Compute Engine, Google Cloud Datastore, Google Cloud Storage, Google BigQuery (for analytics) and Google Cloud SQL.

The most important product for a BI analyst is BigQuery: a fully managed OLAP data warehouse that supports joins and lets developers use SQL to query massive amounts of data in seconds.

Why BigQuery?

The main advantage is that BigQuery integrates with Google Analytics: we can easily synchronise session and event data to BigQuery and build custom analytics beyond the built-in Google Analytics functions.

In other words, raw GA data can be dumped into BigQuery, so custom analyses that cannot be performed in the GA interface can now be generated there.

Moreover, we can also bring third-party data into it.

The difficulty for a BI analyst is that every metric needs to be calculated in queries.

Which SQL is preferred in BigQuery?

Standard SQL syntax is preferred in BigQuery nowadays.

How can we get the data from Google Analytics?

GA exports a daily dataset to BigQuery. Within each dataset, a table is imported for each day of export, named in the format ga_sessions_YYYYMMDD.
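Because each day is a separate table, queries over a date range typically use a wildcard table. Here is a minimal sketch (the project and dataset names are hypothetical):

-- count sessions across one week of daily export tables
SELECT COUNT(*) AS sessions
FROM `my-project.my_ga_dataset.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20200401' AND '20200407'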

We can also set up steps to make sure the tables, dashboards and data transfers are always up to date.

How to give it a try?

First, set up a Google Cloud billing account. With a billing account, we can use the BigQuery web UI with Google Analytics 360.

The next step is to run a SQL query and visualize the output. The query editor follows standard SQL syntax.

For example, here is a sample query that returns user-level data: traffic source and medium, total visits and page views.

-- one row per session; replace YYYYMMDD with the export date
SELECT fullVisitorId,
       visitId,
       trafficSource.source,
       trafficSource.medium,
       totals.visits,
       totals.pageviews
FROM `ga_sessions_YYYYMMDD`

In this step, to get a good understanding of the ga_sessions_ tables in BigQuery, we need to know which raw GA data fields are available in BigQuery.

We can use an interactive visual representation as the reference.

In the next blog, we will give more examples of how to analyse GA data in BigQuery by date range and by dimensions such as users, sessions and traffic sources.

If you are interested in or have any problems with Business Intelligence or BigQuery, feel free to contact me.

Or you can connect with me through my LinkedIn.

Google Cloud Platform Fundamentals

Google Cloud is seen as a leader in areas including data analytics, machine learning and open source, and digital transformation through the cloud allows companies to deliver personalised, high-quality experiences.

During this lockdown in New Zealand, working from home means less time in traffic and more time to learn advanced techniques.

So stay positive and stay safe!

The GCP Fundamentals course is a perfect opportunity for anyone who wants to learn Google Cloud Platform.

If you are interested in or have any problems with Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

What is normalization and what are normal forms?

Normalization is a technique that decomposes tables to reduce data redundancy.

There are three normal forms we need to check and follow to normalize tables.

1st Normal Form (1NF):

  1. Each column of the table should hold a single value
  2. All values in a column should belong to the same domain
  3. No two columns should have the same name
  4. The order of rows does not matter, e.g., records need not be sorted by DOB

2nd Normal Form (2NF):

  1. The table should satisfy 1st Normal Form
  2. All the non-key attributes must be fully functionally dependent on the primary key
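As a sketch of what 2NF rules out (all table and column names here are hypothetical): in an order_items table keyed by (order_id, product_id), a product_name column depends only on product_id, one part of the key, so it must move to its own table:

-- violates 2NF: product_name depends only on product_id,
-- one part of the composite key (order_id, product_id)
create table order_items (
    order_id     int,
    product_id   int,
    product_name varchar(50),
    quantity     int,
    primary key (order_id, product_id)
);

-- fix: move the partially dependent column to its own table
create table products (
    product_id   int primary key,
    product_name varchar(50)
);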

3rd Normal Form (3NF):

  1. The table should satisfy 2nd Normal Form
  2. There should be no transitive dependency for non-prime attributes

Transitive functional dependency works as follows:

B is functionally dependent on A, and C is functionally dependent on B. So, C is transitively dependent on A through B.
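As a sketch (all names hypothetical): if dept_name depends on dept_id, and dept_id depends on emp_id, then dept_name is transitively dependent on emp_id. 3NF removes it by splitting the table:

-- violates 3NF: emp_id -> dept_id -> dept_name
create table employees_unnormalized (
    emp_id    int primary key,
    emp_name  varchar(50),
    dept_id   int,
    dept_name varchar(50)
);

-- fix: the transitively dependent column moves to its own table
create table departments (
    dept_id   int primary key,
    dept_name varchar(50)
);

create table employees (
    emp_id   int primary key,
    emp_name varchar(50),
    dept_id  int references departments(dept_id)
);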

If you are interested in or have any problems with Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

DAX Cheat Sheet

What is DAX?

It is the programming language for Power Pivot, SSAS Tabular and Power BI.

It resembles Excel because it was born in Power Pivot, but it has no concept of addressing individual rows and cells the way Excel does, and it has a different type system.

Most importantly, it has many new functions.

The two most important concepts are the measure and the calculated column:

A measure is used to calculate aggregates, e.g., sum and average, and is evaluated in the context of the cell in a report or a DAX query.

A calculated column is evaluated for each row and is computed at the row level within the table it belongs to.
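For example (a minimal sketch; the Sales table and its columns are hypothetical):

-- calculated column: evaluated row by row within the Sales table
LineAmount = Sales[Quantity] * Sales[UnitPrice]

-- measure: aggregated in the context of the report cell
Total Sales := SUM ( Sales[LineAmount] )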

Some Common DAX Expressions:

LOOKUPVALUE:

  1. Returns the value in result_columnName for the row that meets all criteria specified by search_columnName and search_value
  2. LOOKUPVALUE(<result_columnName>, <search_columnName>, <search_value>)

FILTER:

  1. Returns a subset of a table or expression
  2. FILTER(<table>, <filter>)

ALL:

  1. Returns all the rows in a table, or all the values in a column, ignoring any filters that may have been applied
  2. ALL(<table> or <column>)

RELATED:

  1. Returns a related value from another table
  2. RELATED(<column>)

CALCULATE:

  1. Evaluates an expression in a context that is modified by the specified filters
  2. CALCULATE(<expression>, <filter1>, <filter2>)
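Putting a few of these together (again a sketch with a hypothetical Sales table), here is a measure that computes each cell’s share of all sales by lifting the filters with ALL inside CALCULATE:

Sales % of All :=
DIVIDE (
    SUM ( Sales[SalesAmount] ),
    CALCULATE ( SUM ( Sales[SalesAmount] ), ALL ( Sales ) )
)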

If you are interested in or have any problems with DAX or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

Agile Scrum Workflow

Scrum is one of the Agile frameworks:

1 User story and refinement:

Input from executives, team, stakeholders, customers

2 Product Backlog:

Ranked list of what is required: features, stories

3 Sprint Planning Meeting:

Team selects, starting at the top, as much as it can commit to deliver by the end of the sprint

4 Sprint Backlog: Task breakout

5 Stand-up meeting: daily discussion between team members

6 Sprint end date and team deliverable do not change

7 Sprint review (demonstrate finished work) and sprint retrospective

If you are interested in or have any problems with Agile or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

View vs Synonym vs Trigger in SQL

What is a view?

A view is a virtual table based on the result set of an SQL statement.

Here is an example.

create view [sales].[v.salesbyperson2]
as
select salespersonid,
       round(totaldue, 2) as salesamount
from sales.sales
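We can then query the view like a table (a quick usage sketch):

select * from sales.[v.salesbyperson2];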

What is a synonym?

A synonym, as the name suggests, is an alternate name we create for another database object.

Here is the syntax:

create synonym [synonym_name]
for [server_name].[database_name].[schema_name].[object_name]
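For example (a minimal sketch; the database and table names are assumptions based on the AdventureWorks sample), a synonym can shorten references to a table in another database:

create synonym dbo.customers
for AdventureWorks.Sales.Customer;

select top 10 * from dbo.customers;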

What is a trigger?

A trigger is a special kind of stored procedure that automatically executes when an event occurs in the database server, regardless of whether or not any table rows are affected.

Here is an example:

create trigger reminder1
on sales.customer
after insert, update
as raiserror ('notify customer relations', 16, 10);
go

If you are interested in or have any problems with SQL or Business Intelligence, feel free to contact me.

Or you can connect with me through my LinkedIn.

How to do web scraping with Power Query in Power BI?

On the web, there are public data sources that can be imported and transformed to build insightful reports and dashboards.

There are two circumstances.

The first, where the data sits in proper HTML tables, is super easy to import.

For example, we can try to make an easy one.

Here is the web link: https://en.wikipedia.org/wiki/Demographics_of_China. What we want is to import the tables from a Wikipedia webpage.

The simple steps are as follows:

Open Power BI Desktop -> click Get Data -> click Web -> paste the URL in the dialog and click OK.

Here is a screenshot:

In the Navigator dialog, we can select the corresponding table that we want to import and transform.

This kind of import from the web in Power BI is simple because Power Query can identify tables in HTML when they sit inside <table>…</table> tags.

However, many tables on the web are not marked up with <table> tags.

If we meet this kind of circumstance, we will end up seeing only the Document entity in the Navigator.

Next blog, we will demonstrate how to extract the relevant elements in HTML with Power Query in Power BI.

The tricky part is how to find the correct path in the HTML source tree, which requires only basic HTML knowledge.

To motivate you to keep reading, here is the final web scraping screenshot which I made recently:

If you are interested in or have any problems with Power BI or Power Query, feel free to contact me.

Or you can connect with me through my LinkedIn.

Create an SSIS Project in Visual Studio 2015

Installation:

  1. SQL Server Data Tools 2015 (installs a shell of Visual Studio 2015)
  2. Visual Studio 2015

What is SSIS?

SSIS stands for SQL Server Integration Services. SSIS performs the ETL (Extract, Transform and Load) tasks for a data warehouse.

SSIS can also update the data warehouse, clean and mine data, create ‘packages’, manipulate data, etc.

To design an ETL task in Visual Studio, we use the data flow.

What is data flow?

A data flow defines a flow of data from a source to a destination.

Before designing the data flow, we can drag a Data Flow Task and drop it into the control flow.

What is a control flow?

It is a flow we can use to orchestrate the different tasks. It provides the logic for when data flow components are run and how they run.

For example:

  1. perform looping
  2. call stored procedures
  3. move files
  4. manage error handling
  5. check a condition and call different tasks

It defines a workflow of tasks to be executed, often in a particular order (assuming you have included precedence constraints).

We can rename the data flow task in the control flow.

For example, we can drag in an OLE DB Source so we can connect to a relational database.

Then we can follow the Microsoft official tutorial to do it step by step.

https://docs.microsoft.com/en-us/sql/integration-services/lesson-1-1-creating-a-new-integration-services-project?view=sql-server-ver15

If you are interested in or have any problems with SSIS, feel free to contact me.

Or you can connect with me through my LinkedIn.

Dimensional Modelling

First of all, the concept of a data warehouse should be made clear: a data warehouse is not a copy of the source database with a name prefixed with ‘DW’.

A DW is where we store data from multiple data sources to be used for historical and analytical reporting. Some data sources are in .csv format, some are Google Docs, etc., so we need to aggregate them into one data source.

This aggregated store is called a data warehouse.

How to design it? The procedure is Dimensional Modelling.

What is the difference between relational and dimensional?

It is the same as the difference between normalised and denormalised.

Normalised:

  1. Minimal data redundancy
  2. Optimised for fast read and fast write
  3. Current data only
  4. Realtime data

Denormalised:

  1. Redundant data storage for performance
  2. Fast read only
  3. Non-realtime data
  4. Current and historical data

The next part is about fact table and dimension table in dimensional design.

Fact table:

  1. Data that can be measured
  2. Contains the surrogate keys that link the facts to their dimensions, along with the measures

Dimension table:

  1. Descriptive information used to slice and filter the facts
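As a minimal sketch of how the two fit together (all names hypothetical), a star schema pairs a fact table of keys and measures with descriptive dimension tables:

-- dimension: descriptive attributes to slice by
create table dim_product (
    product_key  int primary key,  -- surrogate key
    product_name varchar(50),
    category     varchar(50)
);

-- fact: surrogate keys into the dimensions, plus the measures
create table fact_sales (
    product_key  int references dim_product(product_key),
    order_date   date,
    quantity     int,
    sales_amount decimal(10, 2)
);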

Some types of dimensional models:

  1. Accumulating snapshot table
  2. Aggregate fact
  3. Fact table
  4. Factless Fact table
  5. Snapshot table

If you are interested in or have any problems with Dimensional Modelling, feel free to contact me.

Or you can connect with me through my LinkedIn.