Data Engineering
To access the repository, you must be a member of the Core Developers team.
This is a screenshot of our actual Airflow installation and gives an example of the UI.
Stage and COPY
We use dbt to transform raw data in our Snowflake warehouse into more easily usable tables and views.
Then, you can define models that reference the sources.
Example:
The filename (account_daily_arr) of the model file determines the object name in the database (in this case a table).
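As a rough sketch of how this fits together (the connection parameters and warehouse/database names below are placeholders, not our real configuration), you can build just this model with the dbt CLI and then query the resulting table, whose name matches the model’s filename:

```python
import subprocess

import snowflake.connector

# Build only the account_daily_arr model; dbt creates or replaces an object
# in the target schema whose name matches the model file's name.
subprocess.run(["dbt", "run", "--models", "account_daily_arr"], check=True)

# Query the resulting table (connection parameters are placeholders).
conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",   # hypothetical warehouse name
    database="ANALYTICS",
    schema="ANALYTICS",
)
for row in conn.cursor().execute("select * from account_daily_arr limit 10"):
    print(row)
```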
We’re actually just using a small piece of Meltano that allows us to control our Snowflake user and role permissions in a fine-grained way.
Telemetry data is sent from Mattermost servers and makes its way to our data warehouse.
The data is available in its raw form in the Raw database, in the mattermost2 and mattermost_nps schemas.
Active User Counts
License data
Google Analytics
Overview
The Google Analytics - Stitch integration has a lot of caveats and limitations.
Known limitations:
Each set of dimensions and measures from Google Analytics needs to have its own Stitch integration.
Each integration creates a schema in Snowflake that matches the name of the integration and adds a table called report.
Name: GA ChannelGrouping Source Users Org
Schema: analytics.ga_channelgrouping_source_users_org
Table: analytics.ga_channelgrouping_source_users_org.report
Once an integration is created, it can't be edited. If you need to make changes, you need to delete the integration and start over.
Data is only pulled at a daily level.
This is an issue because Unique Monthly Users is not the same as Aggregated Unique Daily Users.
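To make the distinction concrete, here’s a small made-up illustration: summing each day’s unique users over a month counts the same person once for every day they visit, so it overstates the true number of unique monthly users.

```python
# Made-up visitor IDs for three days of traffic.
daily_visitors = {
    "2020-01-01": {"alice", "bob"},
    "2020-01-02": {"alice", "carol"},
    "2020-01-03": {"alice", "bob", "dave"},
}

# Aggregating daily uniques counts alice three times and bob twice.
aggregated_daily_uniques = sum(len(users) for users in daily_visitors.values())  # 7

# True monthly uniques counts each visitor once.
monthly_uniques = len(set().union(*daily_visitors.values()))  # 4

print(aggregated_daily_uniques, monthly_uniques)
```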
Mattermost.com
Owner: Kevin Fayle
Stitch integrations:
Frequency: 6 hours
Dimensions: Page Path, Page Title
Measures: Page Visits, Unique Page Visits, Avg Time on Page
Developers.Mattermost.com
Owner: Kevin Fayle
Stitch integrations:
Frequency: 6 hours
Dimensions: Page Path, Page Title
Measures: Page Visits, Unique Page Visits, Avg Time on Page
Link:
EKS is a managed Kubernetes service that allows us to deploy, orchestrate, and run our code. The main benefit of Kubernetes is being able to declaratively specify the resources you need and how much CPU and memory they require, and Kubernetes will figure out how to make it work. It will also attempt to restart VMs that have failed. We make use of for our images. We also use .
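For illustration only (the pod name, image, and resource numbers below are made up), declaring a workload’s CPU and memory needs with the official Kubernetes Python client looks roughly like this, and Kubernetes then finds a node that can satisfy the request:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the EKS cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-job"),  # hypothetical pod name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="job",
                image="python:3.8-slim",  # placeholder image
                command=["python", "-c", "print('hello')"],
                # Declarative CPU/memory requirements; the scheduler does the rest.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "500m", "memory": "512Mi"},
                    limits={"cpu": "1", "memory": "1Gi"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```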
To keep our data and configuration confidential, we make use of which are only shared with team members who need access in LastPass.
To access Airflow, you must be on the VPN and go to this . The Airflow Creds are stored in the Shared-BizOps LastPass folder.
Airflow is a workflow orchestration tool built in Python that allows you to build and schedule DAGs (directed acyclic graphs of tasks). With these DAGs we can schedule jobs to run on cron-style schedules and also declare dependencies between jobs, so we can ensure that the data we’re processing doesn’t get overwritten. Airflow also has great utilities for retrying failed jobs and alerting on job and DAG failures.
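Here’s a minimal sketch of what a DAG can look like (the DAG name, schedule, and tasks are invented for illustration): two tasks run on a daily cron schedule, with an explicit dependency between them.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "bizops",                     # hypothetical owner
    "retries": 2,                          # Airflow retries failed tasks for us
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_daily_pipeline",       # hypothetical DAG
    default_args=default_args,
    schedule_interval="0 6 * * *",         # cron-style schedule: daily at 06:00 UTC
    start_date=datetime(2020, 1, 1),
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")

    # The dependency guarantees transform never runs before extract finishes,
    # so the data it reads is never half-written.
    extract >> transform
```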
We take advantage of this alerting to send DAG failures to a special internal Mattermost channel called BizOps, where team members can triage the failure. We ensure that these get sent to Mattermost with our failure-notification callback, which is specified in each DAG.
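As a simplified sketch (the webhook URL is a placeholder and this is not our exact implementation), a failure callback that posts to a Mattermost incoming webhook can be attached to every task through default_args:

```python
import requests

MATTERMOST_WEBHOOK_URL = "https://example.mattermost.com/hooks/<token>"  # placeholder


def notify_mattermost(context):
    """Airflow on_failure_callback: post the failing DAG/task to a channel."""
    ti = context["task_instance"]
    requests.post(
        MATTERMOST_WEBHOOK_URL,
        json={"text": f"DAG `{ti.dag_id}` task `{ti.task_id}` failed "
                      f"for run {context['execution_date']}"},
        timeout=10,
    )


# Attach to every task in a DAG via default_args (or set it per task/DAG).
default_args = {"on_failure_callback": notify_mattermost}
```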
We also utilize a sync process that automatically pulls from the master branch of our repository every 60 seconds so our DAGs are always up to date.
We use Airflow’s new KubernetesPodOperator, which allows each of our jobs to run in its own Kubernetes Pod. The real flexibility here is that, because each job is simply a Kubernetes Pod running a process, we can run any job in any language. It also isolates the compute and memory for every job, and we can customize how much compute and memory we give to each one, so if a job requires more power we can grant it that.
To keep our connection strings and other configuration items confidential, we utilize Kubernetes secrets and inject them as environment variables into our Kubernetes Pods. To inject a secret into the environment of a job run through an Airflow DAG, you must first define the secret, then import it in the DAG, and finally inject it into the KubernetesPodOperator itself.
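Putting those pieces together, a rough sketch looks like the following (the secret names, image, and resource sizes are hypothetical, not our actual configuration): the Kubernetes secret is referenced once, then handed to the KubernetesPodOperator, which exposes it to the pod as an environment variable.

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.kubernetes.secret import Secret
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

# Expose the `password` key of the Kubernetes secret `snowflake-creds`
# (hypothetical names) inside the pod as the env var SNOWFLAKE_PASSWORD.
snowflake_password = Secret(
    deploy_type="env",
    deploy_target="SNOWFLAKE_PASSWORD",
    secret="snowflake-creds",
    key="password",
)

with DAG(
    dag_id="example_k8s_pipeline",          # hypothetical DAG
    schedule_interval="@daily",
    start_date=datetime(2020, 1, 1),
    catchup=False,
) as dag:
    load_job = KubernetesPodOperator(
        task_id="load_telemetry",           # hypothetical job
        name="load-telemetry",
        namespace="default",
        image="python:3.8-slim",            # placeholder; any image/language works
        cmds=["python", "-c", "import os; print('SNOWFLAKE_PASSWORD' in os.environ)"],
        secrets=[snowflake_password],
        # Per-job compute/memory isolation and sizing.
        resources={"request_cpu": "500m", "request_memory": "512Mi",
                   "limit_cpu": "1", "limit_memory": "1Gi"},
        is_delete_operator_pod=True,
    )
```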
Snowflake is a cloud- and SQL-based data warehouse platform that allows you to separate query compute power from data storage. It uses a proprietary data format for storing data and strives to provide a service where you don’t need a DBA constantly monitoring and tweaking to keep the warehouse performant.
Virtual Warehouses are Snowflake’s concept for a cluster of compute resources that can execute queries. You are billed based on the size of the Virtual Warehouse and how long it runs.
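As a rough, back-of-the-envelope illustration of that billing model (the credit rates below reflect Snowflake’s published per-size rates and may change):

```python
# Approximate credit consumption: each warehouse size has a credits-per-hour
# rate (verify current values against Snowflake's pricing docs), and you pay
# for the time the warehouse is running, billed per second with a 60-second
# minimum each time it resumes.
CREDITS_PER_HOUR = {"X-Small": 1, "Small": 2, "Medium": 4, "Large": 8}


def credits_used(size: str, seconds_running: float) -> float:
    billable_seconds = max(seconds_running, 60)
    return CREDITS_PER_HOUR[size] * billable_seconds / 3600


# A Small warehouse that runs for 15 minutes consumes about 0.5 credits.
print(credits_used("Small", 15 * 60))
```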
Stages in Snowflake allow you to specify an external data source that you want to load data from. Once a stage is defined, you can run a simple COPY INTO command with a pattern; in our case, this lets us import data from S3 buckets. You can see how we utilize this .
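A simplified sketch of the pattern (the stage, table, bucket, and credentials below are placeholders, not our real objects): create an external stage over the S3 bucket once, then COPY INTO the target table with a filename pattern.

```python
import snowflake.connector

# Placeholder connection details.
conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="LOAD_WH", database="RAW", schema="STAGING",
)
cur = conn.cursor()

# One-time setup: an external stage pointing at the S3 bucket (names made up).
cur.execute("""
    create stage if not exists release_pings_stage
      url = 's3://example-bucket/release-pings/'
      credentials = (aws_key_id = '<key>' aws_secret_key = '<secret>')
      file_format = (type = 'json')
""")

# Recurring load: copy files matching the pattern into the (hypothetical) raw table.
cur.execute(r"""
    copy into release_pings
    from @release_pings_stage
    pattern = '.*2020-01-.*[.]json'
""")
```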
dbt is a tool, written in Python, that allows you to execute the transform step of your ELT or ETL process.
Our dbt implementation is .
Dbt has a concept of sources and models.
An example sources file is and this specifies already existing raw data that dbt can pull from to build models.
Meltano is an overall framework that includes a lot more than what we’re using it for.
The specific piece we use is its permissions functionality.
We use this in the . The interesting piece is the container_cmd lines. Essentially we create an entire Meltano project, but then just use a file that we define to control the permissions.
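For illustration (the spec filename is hypothetical and the flags can differ between Meltano versions), the permissions run essentially boils down to pointing meltano permissions grant at a spec file, optionally as a dry run first:

```python
import subprocess

# Dry-run first to review the GRANT/REVOKE statements Meltano would issue
# against Snowflake; `roles.yml` is a hypothetical name for our spec file.
subprocess.run(
    ["meltano", "permissions", "grant", "roles.yml", "--db", "snowflake", "--dry"],
    check=True,
)

# Apply the permissions for real.
subprocess.run(
    ["meltano", "permissions", "grant", "roles.yml", "--db", "snowflake"],
    check=True,
)
```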
This data is detailed .
Currently, we use to push this data to Snowflake.
We currently have a dbt model that uses this raw data, and we will continue to add more.
Mattermost servers ping a Cloudfront endpoint with some basic telemetry. It uses the log format specified .
The import job uses code .
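As a rough sketch of what the import involves (simplified, and assuming the standard CloudFront access log format): the logs are gzipped, tab-separated files whose columns are named in a #Fields: header line.

```python
import gzip


def parse_cloudfront_log(path):
    """Yield one dict per request from a gzipped CloudFront access log."""
    fields = []
    with gzip.open(path, "rt") as fh:
        for line in fh:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]                     # column names
            elif not line.startswith("#"):
                yield dict(zip(fields, line.rstrip("\n").split("\t")))


# Placeholder filename; the real logs land in an S3 bucket first.
for record in parse_cloudfront_log("E1234ABCD.2020-01-01-00.abcd1234.gz"):
    print(record.get("date"), record.get("cs-uri-stem"), record.get("cs-uri-query"))
```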
Mattermost runs a proxy service that allows mobile notifications to be sent through Apple’s and Google’s respective notification services. The log data is put into an S3 bucket and then ingested using Snowflake Stage and COPY. See for more details.
To help us track how many Mattermost servers are being deployed, there’s a pingback that gets logged to an S3 bucket, which we then import. Details .
Mattermost’s enterprise license metadata is exported nightly to an S3 bucket and then we import it daily. Code .