Data Engineering
Link: Mattermost Data Warehouse
To access the repository, you must be a member of the Core Developers team.
EKS is a managed Kubernetes service that allows us to deploy, orchestrate, and run our code. The main benefit of Kubernetes is being able to declaratively specify the resources you need and how much CPU and memory they require, and Kubernetes will figure out how to make it work. It will also attempt to restart containers that have failed. We use Dockerfiles for our images, and we use Bitnami’s Airflow Helm chart.
To keep our data and configuration confidential, we use Kubernetes Secrets, which are shared via LastPass only with team members who need access.
To access Airflow, you must be on the VPN; then go to this link. The Airflow credentials are stored in the Shared-BizOps LastPass folder.
Apache Airflow is a workflow orchestration tool, built in Python, that allows you to build and schedule DAGs (directed acyclic graphs of tasks). With these DAGs we can schedule jobs using crontab-style scheduling and declare dependencies between jobs, so that the data we’re processing doesn’t get overwritten. Airflow also has great utilities for retrying failed jobs and alerting on task and DAG failures.
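As a rough illustration only (the DAG name, tasks, and schedule below are made up, not one of our actual DAGs), a minimal DAG with crontab-style scheduling, retries, and an explicit dependency looks something like this:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    "owner": "bizops",
    "retries": 2,                           # retry failed tasks automatically
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_extract_transform",     # hypothetical DAG
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval="0 6 * * *",          # crontab-style: daily at 06:00 UTC
    catchup=False,
) as dag:
    extract = DummyOperator(task_id="extract")
    transform = DummyOperator(task_id="transform")

    # transform only runs after extract succeeds, so the data it reads
    # can't be overwritten mid-run.
    extract >> transform
```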
We take advantage of Mattermost incoming webhooks to send DAG failures to a dedicated internal Mattermost channel called BizOps, where team members can triage the failure. We ensure these get sent to Mattermost with our failed-task callback, which is specified in each DAG’s configuration.
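What such a callback can look like, sketched with a placeholder webhook URL and message format (not our actual implementation):

```python
import os
import requests

# Placeholder: the real webhook URL lives in our secrets, not in code.
MATTERMOST_WEBHOOK_URL = os.environ["MATTERMOST_WEBHOOK_URL"]

def mattermost_failed_task(context):
    """Post a failure notice to the BizOps channel via an incoming webhook."""
    ti = context["task_instance"]
    message = (
        f":red_circle: Task `{ti.task_id}` in DAG `{ti.dag_id}` "
        f"failed for run {context['execution_date']}."
    )
    requests.post(MATTERMOST_WEBHOOK_URL, json={"text": message})

# Wired up in a DAG's configuration, e.g.:
# default_args = {..., "on_failure_callback": mattermost_failed_task}
```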
We also use the Bitnami chart’s "Get DAG files from a git repository" feature to automatically pull the master branch of our GitHub repository every 60 seconds, so our DAGs are always up to date.
We use Airflow’s KubernetesPodOperator, which allows each of our jobs to run in its own Kubernetes Pod. The real flexibility here is that, because each task is simply a Kubernetes Pod running a process, we can run any job in any language. It also isolates the CPU and memory for every job, and we can customize how much of each a given job receives, so if a job needs more resources we can grant them.
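For illustration only (the image name and resource figures are invented), a task that needs extra memory could be declared roughly like this inside a DAG:

```python
# Airflow 1.10-style import; newer versions ship the operator in the
# cncf.kubernetes provider package instead.
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

heavy_job = KubernetesPodOperator(
    task_id="heavy_transform",
    name="heavy-transform",
    namespace="airflow",
    image="example/heavy-transform:latest",  # any language, any runtime
    cmds=["python"],
    arguments=["run_transform.py"],
    resources={
        "request_cpu": "500m",               # reserve half a CPU core
        "request_memory": "2Gi",             # reserve 2 GiB of memory
        "limit_memory": "4Gi",               # allow bursting up to 4 GiB
    },
    is_delete_operator_pod=True,             # clean up the pod when done
)
```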
This is a screenshot of our actual Airflow installation and gives an example of the UI.
To keep our connection strings and other configuration items confidential, we utilize Kubernetes Secrets and inject them as environment variables into our Kubernetes Pods. To inject a secret into the environment of a job run through an Airflow DAG, you must define it in kube_secrets.py, import it in the DAG file, and finally pass it to the Operator object itself.
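A sketch of that pattern with a made-up secret name and key (the real entries live in kube_secrets.py):

```python
# kube_secrets.py (hypothetical entry)
from airflow.contrib.kubernetes.secret import Secret  # airflow.kubernetes.secret in newer versions

# Expose the `password` key of the Kubernetes secret `snowflake-credentials`
# as the SNOWFLAKE_PASSWORD environment variable inside the job's pod.
SNOWFLAKE_PASSWORD = Secret(
    deploy_type="env",
    deploy_target="SNOWFLAKE_PASSWORD",
    secret="snowflake-credentials",
    key="password",
)

# In the DAG file:
# from kube_secrets import SNOWFLAKE_PASSWORD
# KubernetesPodOperator(..., secrets=[SNOWFLAKE_PASSWORD], ...)
```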
Snowflake is a cloud- and SQL-based data warehouse platform that separates query compute from data storage. It uses a proprietary data format for storing data and aims to provide a service that doesn’t require a DBA constantly monitoring and tweaking the warehouse to keep it performant.
Virtual warehouses are Snowflake’s concept for a cluster of compute resources that can execute queries. You are billed based on the size of the virtual warehouse and how long it runs.
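As a hedged example (the warehouse name and settings below are illustrative, not our actual configuration), a warehouse is created with a given size and can auto-suspend when idle so it only accrues cost while running:

```sql
-- Hypothetical warehouse: the size drives the credit rate, and credits
-- only accrue while the warehouse is resumed and running.
CREATE WAREHOUSE IF NOT EXISTS example_transform_wh
  WITH WAREHOUSE_SIZE = 'XSMALL'
       AUTO_SUSPEND   = 300      -- suspend after 5 minutes of inactivity
       AUTO_RESUME    = TRUE;    -- wake up automatically when queried
```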
Stage and COPY
dbt is a tool, written in Python, that allows you to execute the transform step of your ELT or ETL process.
We use it to transform raw data in our Snowflake warehouse into more easily usable tables and views.
Our dbt implementation is here.
dbt has a concept of sources and models.
An example sources file is here; it specifies already-existing raw data that dbt can pull from to build models.
Then, you can define models that reference the sources.
Example: the filename of the model file (account_daily_arr) determines the object name in the database (in this case a table).
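The original snippet isn't reproduced here, so the following is only a hedged sketch of what a model file like account_daily_arr.sql might contain; the source and column names are invented:

```sql
-- models/account_daily_arr.sql (hypothetical)
-- dbt materializes this as a table named account_daily_arr because of the filename.
{{ config(materialized='table') }}

select
    account_id,
    license_date                          as arr_date,
    sum(annual_recurring_revenue)         as daily_arr
from {{ source('licenses', 'license_data') }}  -- references an entry in the sources file
group by 1, 2
```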
Meltano is an overall framework that includes a lot more than what we’re using it for.
We’re actually just using a small piece that allows us to control our Snowflake user and role permissions in a fine-grained way.
The specific piece we use is here.
We use this in the snowflake_permissions DAG. The interesting piece is the container_cmd lines. Essentially we create an entire Meltano project, but then just use a roles.yml file that we define to control the permissions.
Telemetry data is data that's sent from Mattermost servers and makes its way to our data warehouse.
This data is detailed here.
Currently, we use Segment to push this data to Snowflake.
The data is available in its raw form in the Raw database, in the mattermost2 and mattermost_nps schemas.
We currently have a dbt model here that uses this raw data, but will continue to add more.
Mattermost runs a proxy service that allows mobile notifications to be sent through Apple’s and Google’s respective notification services. The log data is put into an S3 bucket and then ingested using Snowflake Stage and COPY. See here for more details.
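The general pattern (bucket, stage, and table names here are placeholders, not our actual objects) looks roughly like this:

```sql
-- Hypothetical external stage over the S3 bucket holding the proxy logs.
CREATE OR REPLACE STAGE push_proxy_logs
  URL = 's3://example-bucket/push-proxy/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');

-- Load new log files from the stage into a raw table.
COPY INTO raw.push_proxy.log_entries
  FROM @push_proxy_logs
  FILE_FORMAT = (TYPE = 'JSON');
```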
To help us track how many Mattermost servers are being deployed, there's a pingback which gets logged to an S3 bucket that we import. Details here.
License data
Mattermost’s enterprise license metadata is exported nightly to an S3 bucket and then we import it daily. Code here.
Google Analytics
Overview
The Google Analytics-to-Stitch integration has a number of caveats and limitations.
Known limitations:
Each set of dimensions and measures from Google Analytics needs to have its own Stitch integration.
Each integration creates a schema in Snowflake that matches the name of the integration and adds a table called report. For example:
Name: GA ChannelGrouping Source Users Org
Schema: analytics.ga_channelgrouping_source_users_org
Table: analytics.ga_channelgrouping_source_users_org.report
Once an integration is created, it can't be edited. If you need to make changes, you need to delete the integration and start over.
Data is only pulled at a daily level. This is an issue because Unique Monthly Users is not the same as Aggregated Unique Daily Users: a visitor who returns on several days is counted once per day, so summing daily uniques over-counts unique users at the monthly level.
Mattermost.com
Owner: Kevin Fayle
Stitch integrations:
GA Mattermost Com Pages Visits
Frequency: 6 hours
Dimensions: Page Path, Page Title
Measures: Page Visits, Unique Page Visits, Avg Time on Page
Developers.Mattermost.com
Owner: Kevin Fayle
Stitch integrations:
Frequency: 6 hours
Dimensions: Page Path, Page Title
Measures: Page Visits, Unique Page Visits, Avg Time on Page