Connecting Databricks GCP to Churney

Suela Isaj | August 28, 2024 · 6 min

Here you can read our guide for connecting your Databricks on GCP data warehouse to Churney.

What kind of data is required?

The short answer is as much as possible. The long answer is that Churney requires data about:

  • Payments

  • Trials (if applicable)

  • User demographics (if available)

  • User activity

Additionally, we need to know the location (region) of your data warehouse.

Create hashed views

To create the views, we need queries that hash the PII columns.

For context, Facebook has a guide on how to hash contact information for their Conversions API: https://developers.facebook.com/docs/marketing-api/conversions-api/parameters/customer-information-parameters. Google has a similar guide for enhanced conversions using their Ads API: https://developers.google.com/google-ads/api/docs/conversions/enhance-conversions. Basically, we want to hash columns that contain data which would allow someone to determine the identity of a user: first name, last name, birth date, street address, phone number, email address, etc.


Example of creating hashed views

Create a schema (dataset) named churney.
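If you prefer SQL over the UI for this step, a minimal sketch (assuming <your_project> is your Unity Catalog catalog) is:

CREATE SCHEMA IF NOT EXISTS <your_project>.churney;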

If the raw data contains email, phone, birthday, or other identifiers, those columns need to be hashed in the view and their raw values excluded.

For example, if the raw data lives under raw_data.sensitive and you would like to create a view churney.sensitive_data_view:

-- Hash the PII columns with SHA-256 and pass all other columns through unchanged
CREATE OR REPLACE VIEW <your_project>.churney.sensitive_data_view AS (
select
SHA2(LOWER(TRIM(email)), 256) as email,
SHA2(LOWER(TRIM(name)), 256) as name,
-- strip formatting characters and leading zeros before hashing the phone number
SHA2(LTRIM('0', regexp_replace(phone, '[()+-]', '')), 256) as phone,
SHA2(date_format(birthday, 'yyyyMMdd'), 256) as birthday,
* EXCEPT(email, name, phone, birthday)
from <your_project>.raw_data.sensitive
)

If you want to test how the script behaves with some sample data:

with sensitive as
(
 select 1 as id,
 'abc' as name,
 '[email protected]' as email,
 current_date() as birthday,
 '+4511111111' as phone
)
select
SHA2(LOWER(TRIM(email)), 256) as email,
SHA2(LOWER(TRIM(name)), 256) as name,
SHA2(LTRIM('0', regexp_replace(phone, '[()+-]', '')), 256) as phone,
SHA2(date_format(birthday, 'yyyyMMdd'), 256) as birthday,
* EXCEPT(email, name, phone, birthday)
from sensitive

Create a user for Churney

For this step, it is recommended that you use an email from your domain and maintain the user. You will not need to share these credentials with Churney, so this can be a developer-purpose email.

Go to your account -> Settings, then Identity and access -> Users, and create the new user.

Edit the user we just created (in this example [email protected]) and give it admin access.

Then verify the email, log in with the new user, and go to Settings -> Developer -> Access tokens.

Generate a token for Churney and remember to remove the lifetime (days) so the token does not expire. Note the token down; you will share it with Churney in a secure way.

Create a group for the Churney user (in this example named Churney, the principal used in the GRANT statements below).

And add the email address you created above to this group.


Create a storage credential

Churney will give you the name of their GCS bucket for this step.

Go to Catalog -> Storage Credentials -> Create credentials

Note down the service account you will see and share it with Churney. This is the service account that will access Churney's Google Cloud Storage bucket to unload the data.
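If you need to look up the service account again later, you can inspect the credential in SQL (assuming you named it churney-storage-credential, as in the statements below); the output includes the credential's details:

DESCRIBE STORAGE CREDENTIAL `churney-storage-credential`;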

Go to the storage credential and grant yourself permission to create external locations with it, as below.
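A SQL sketch of that grant (the grantee email is a placeholder; use the admin user that will run the next statement):

-- the grantee below is a placeholder for your own admin user
GRANT CREATE EXTERNAL LOCATION ON STORAGE CREDENTIAL `churney-storage-credential` TO `you@yourdomain.com`;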

And finally run


CREATE EXTERNAL LOCATION IF NOT EXISTS `churney_external`
URL 'gs://<bucket_name>/'
WITH (STORAGE CREDENTIAL `churney-storage-credential`);

Here <bucket_name> is the name of Churney's GCS bucket.

Then run

GRANT CREATE EXTERNAL TABLE ON EXTERNAL LOCATION `churney_external` TO `Churney`;
GRANT READ FILES ON EXTERNAL LOCATION `churney_external` TO `Churney`;
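To sanity-check the setup, you can try listing the bucket from Databricks (this assumes your own user has READ FILES on the external location, for example as its owner):

LIST 'gs://<bucket_name>/';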

Grant permissions on schema

Let’s assume that you would like to share the views under churney_views with Churney

Churney will create external tables pointing at the GCS bucket, so let's create a schema for the exports:

CREATE SCHEMA test.churney_exports;

Churney will maintain the exports here, so grant the Churney group write permissions on this schema as below.
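A SQL sketch of these grants (the catalog name test matches the example above; the exact privilege list is an assumption, so adjust it to your setup):

-- let the Churney group see the catalog
GRANT USE CATALOG ON CATALOG test TO `Churney`;
-- let the Churney group create and maintain export tables in the schema
GRANT USE SCHEMA, CREATE TABLE, SELECT, MODIFY ON SCHEMA test.churney_exports TO `Churney`;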

For the churney_views schema, Churney only needs read access (Data Reader), so grant those permissions as below.
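In SQL, that read-only grant could look like this (again assuming the catalog is named test):

-- read-only access on the hashed views
GRANT USE SCHEMA, SELECT ON SCHEMA test.churney_views TO `Churney`;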

What to share with Churney

  • The token generated for the user (in a safe way)

  • The region of your Databricks workspace (e.g. europe-west1)

  • The name of the export schema created for Churney (in this example churney_exports)

  • The service account generated from the storage credential setup

  • The catalog name (in this example test)

  • The connection details of the SQL warehouse: server_hostname and http_path, as below
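You can find both values on the SQL warehouse's Connection details tab; they typically look like the following placeholders (not real values):

server_hostname: <workspace-instance>.gcp.databricks.com
http_path: /sql/1.0/warehouses/<warehouse_id>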