
Releases: fivetran/dbt_fivetran_log

v1.9.1 dbt_fivetran_log

21 Nov 17:12
244f2c5

PR #138 includes the following updates:

Features

  • For Fivetran Platform Connectors created after November 2024, Fivetran has deprecated the api_call event in favor of extract_summary (see Fivetran's release notes).
  • Accordingly, we have updated the fivetran_platform__connector_daily_events model to support the new extract_summary event while maintaining backward compatibility with the api_call event for connectors created before November 2024.
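A minimal sketch of what this kind of backward compatibility can look like in a daily events rollup (the column names and filter below are illustrative assumptions, not the model's exact source):

```sql
-- Illustrative only: count the legacy api_call event and the newer
-- extract_summary event together as daily API-call activity.
select
    connector_id,
    cast(created_at as date) as date_day,
    count(*) as count_api_calls
from {{ ref('stg_fivetran_platform__log') }}
where event_subtype in ('api_call', 'extract_summary')
group by 1, 2
```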

Under the Hood

  • Replaced the deprecated dbt.current_timestamp_backcompat() function with dbt.current_timestamp() to ensure all timestamps are captured in UTC (see the sketch after this list).
  • Updated fivetran_platform__connector_daily_events to support running dbt compile prior to the initial dbt run on a new schema.
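The timestamp swap called out above is mechanical; a minimal before/after sketch:

```sql
-- Before: the deprecated cross-database macro
select {{ dbt.current_timestamp_backcompat() }} as captured_at

-- After: dbt's current macro, which the package relies on for UTC timestamps
select {{ dbt.current_timestamp() }} as captured_at
```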

Full Changelog: v1.9.0...v1.9.1

v1.9.0 dbt_fivetran_log

25 Jul 17:55
b82e78e

PR #132 includes the following updates:

🚨 Schema Changes 🚨

  • Following the July 2024 Fivetran Platform connector update, the connector_name field has been added to the incremental_mar source table. As a result, the following changes have been applied:
    • A new tmp model stg_fivetran_platform__incremental_mar_tmp has been created. This is necessary to ensure column consistency in downstream incremental_mar models.
    • The get_incremental_mar_columns() macro has been added to ensure all required columns are present in the stg_fivetran_platform__incremental_mar model.
    • The stg_fivetran_platform__incremental_mar model has been updated to reference both the aforementioned tmp model and macro, filling in empty fields when a required field is missing from the source.
    • The connector_name field in the stg_fivetran_platform__incremental_mar model is now defined as coalesce(connector_name, connector_id), ensuring the data model uses whichever field is available to define the connector_name.
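A simplified sketch of that staging logic (column list abbreviated for illustration):

```sql
-- stg_fivetran_platform__incremental_mar (excerpt, simplified)
select
    connector_id,
    -- rows synced before the July 2024 update have no connector_name,
    -- so fall back to connector_id
    coalesce(connector_name, connector_id) as connector_name
from {{ ref('stg_fivetran_platform__incremental_mar_tmp') }}
```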

Under the Hood

  • Updated integration test seed data within integration_tests/seeds/incremental_mar.csv to ensure new code updates are working as expected.

Full Changelog: v1.8.0...v1.9.0

v1.8.0 dbt_fivetran_log

12 Jun 13:46
ce41a02

PR #130 includes the following updates:

🚨 Breaking Changes 🚨

⚠️ Since the following changes result in the table format changing, we recommend running a --full-refresh after upgrading to this version to avoid possible incremental failures.

  • For Databricks All-Purpose clusters, the fivetran_platform__audit_table model will now be materialized using the delta table format (previously parquet).
    • Delta tables are generally more performant than parquet and are also more widely available for Databricks users. Previously, the parquet file format was causing compilation issues on customers' managed tables.
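As a hedged sketch (the package's actual config handles additional cases), the change amounts to a file_format switch in the model config:

```sql
{{
    config(
        materialized='incremental',
        -- delta (previously parquet) when running against Databricks;
        -- the file_format config is ignored by non-Databricks adapters
        file_format='delta' if target.type == 'databricks' else 'parquet'
    )
}}
```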

Documentation Updates

  • Updated the sync_start and sync_end field descriptions for the fivetran_platform__audit_table to explicitly define that these fields only represent the sync start/end times for when the connector wrote new or modified existing records to the specified table.
  • Added integrity and consistency validation tests within the integration tests for every end model.
  • Removed duplicate Databricks dispatch instructions listed in the README.

Under the Hood

  • The is_databricks_sql_warehouse macro has been renamed to is_incremental_compatible and modified to return true if the Databricks runtime in use is an all-purpose cluster (previously the macro checked whether a SQL Warehouse runtime was in use) or if any other supported non-Databricks destination is being used.
    • This update was applied because other Databricks runtimes have since been discovered (i.e., endpoint and external runtimes) that do not support the insert_overwrite incremental strategy used in the fivetran_platform__audit_table model.
  • In addition, for Databricks users the fivetran_platform__audit_table model will now leverage the incremental strategy only if the runtime is an all-purpose cluster; all other Databricks runtimes will forgo an incremental strategy.
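A minimal sketch of the renamed macro's intent, assuming all-purpose cluster HTTP paths contain protocolv1 (the package's actual implementation may differ):

```sql
{% macro is_incremental_compatible() %}
    {% if target.type == 'databricks' %}
        {# true only for all-purpose clusters; endpoint and external
           runtimes do not support insert_overwrite #}
        {{ return(target.http_path is not none and 'protocolv1' in target.http_path) }}
    {% else %}
        {# all other supported destinations can run the incremental strategy #}
        {{ return(true) }}
    {% endif %}
{% endmacro %}
```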

Full Changelog: v1.7.3...v1.8.0

v1.7.3 dbt_fivetran_log

14 May 21:29
8b325b8

PR #126 includes the following updates:

Performance Improvements

  • Updated the sequence of JSON parsing for model fivetran_platform__audit_table to reduce runtime.

Bug Fixes

  • Updated model fivetran_platform__audit_user_activity to correct the JSON parsing used to determine column email. This fixes an issue introduced in v1.5.0 where fivetran_platform__audit_user_activity could potentially have 0 rows.

Under the Hood

  • Updated logic for macro fivetran_log_lookback to align with logic used in similar macros in other packages.
  • Updated logic for the postgres dispatch of macro fivetran_log_json_parse to utilize jsonb instead of json for performance.
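For reference, the Postgres-level difference is casting the message through jsonb before extraction; an illustrative query (schema and key names are assumptions):

```sql
-- jsonb is Postgres's binary JSON type and is generally faster to
-- query than the plain-text json type
select
    message_data::jsonb ->> 'table' as table_name
from fivetran_platform.log
```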

Full Changelog: v1.7.2...v1.7.3

v1.7.2 dbt_fivetran_log

09 Apr 17:29
d355614

PR #123 includes the following updates:

Bug Fixes

  • Removed the leading / from the target.http_path regex search within the is_databricks_sql_warehouse() macro to accurately identify SQL Warehouse Databricks destinations in Quickstart.
    • The macro initially worked as expected in dbt Core environments; however, it failed in Quickstart implementations because Quickstart removes the leading / from target.http_path, causing the regex search to always fail.
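An illustrative sketch of the fix (the pattern and variable name are hypothetical, not the macro's exact source):

```sql
{# Before: a pattern anchored with a leading '/', e.g. '/sql/.+/warehouses/',
   never matched in Quickstart, where http_path arrives without the slash. #}
{# After: dropping the leading '/' matches both forms. #}
{% set is_sql_warehouse = target.http_path is not none
    and modules.re.search('sql/.+/warehouses/', target.http_path) is not none %}
```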

Full Changelog: v1.7.1...v1.7.2

v1.7.1 dbt_fivetran_log

04 Apr 16:22
4ada430

PR #121 includes the following updates:

Bug Fixes

  • Users leveraging the Databricks SQL Warehouse runtime were previously unable to run the fivetran_platform__audit_table model due to an incompatible incremental strategy. As such, the following updates have been made:
    • A new macro is_databricks_sql_warehouse() has been added to determine if a SQL Warehouse runtime for Databricks is being used. This macro will return a boolean of true if the runtime is determined to be SQL Warehouse and false if it is any other runtime or a non-Databricks destination.
    • The above macro is used to determine the incremental strategy within fivetran_platform__audit_table. For Databricks SQL Warehouses, no incremental strategy will be used. All other destinations and runtime strategies are not impacted by this change.
      • For the SQL Warehouse runtime, the best incremental strategy we could elect to use is the merge strategy. However, we do not have full confidence in the resulting data integrity of the output model when leveraging this strategy. Therefore, we opted for the model to be materialized as a non-incremental table for the time being.
    • The file format of the model has changed to delta for SQL Warehouse users. For all other destinations the parquet file format is still used.
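Putting those pieces together, a simplified sketch of the resulting config (the package's actual config covers more destinations and options):

```sql
{{
    config(
        -- SQL Warehouse: plain table in delta format; other runtimes and
        -- destinations keep their existing incremental configuration
        materialized='table' if is_databricks_sql_warehouse() else 'incremental',
        file_format='delta' if is_databricks_sql_warehouse() else 'parquet'
    )
}}
```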

Features

  • Updated the README's incremental model section to revise descriptions and add information for Databricks SQL Warehouse.

Under the Hood

  • Added integration testing pipeline for Databricks SQL Warehouse.
  • Applied modifications to the integration testing pipeline to account for jobs being run on both Databricks All Purpose Cluster and SQL Warehouse runtimes.

Full Changelog: v1.7.0...v1.7.1

v1.7.0 dbt_fivetran_log

18 Mar 15:30
1856dd4

PR #119 includes the following updates:

🚨 Breaking Changes 🚨: Bug Fixes

  • The following fields have been deprecated (removed), as they proved problematic across warehouses due to the size these fields could reach.
    • errors_since_last_completed_sync
    • warnings_since_last_completed_sync

Note: If you found these fields to be relevant, you may still reference the error/warning messages from within the underlying log table.

  • The fivetran_platform_using_sync_alert_messages variable has been removed as it is no longer necessary due to the above changes.

Feature Updates

  • The following fields have been added to display the number of error/warning messages since the last completed sync. They are intended to substitute the information from the deprecated fields listed above.
    • number_errors_since_last_completed_sync
    • number_warnings_since_last_completed_sync
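A rough sketch of the replacement aggregates (the CTE and event values below are hypothetical placeholders):

```sql
select
    connector_id,
    sum(case when event_type = 'SEVERE' then 1 else 0 end) as number_errors_since_last_completed_sync,
    sum(case when event_type = 'WARNING' then 1 else 0 end) as number_warnings_since_last_completed_sync
-- hypothetical CTE scoped to log records after the last completed sync
from logs_since_last_completed_sync
group by 1
```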

Full Changelog: v1.6.0...v1.7.0

v1.6.0 dbt_fivetran_log

11 Mar 15:24
2fe07aa

PR #117 includes the following updates, made after users encountered numeric counts exceeding the limit of a standard integer. The affected fields are now cast as bigint to avoid "integer out of range" errors:

Breaking Changes

⚠️ Since the following changes result in a field changing datatype, we recommend running a --full-refresh after upgrading to this version to avoid possible incremental failures.

  • The following fields in the fivetran_platform__audit_table model have been updated to be cast as dbt.type_bigint() (previously dbt.type_int()):
    • sum_rows_replaced_or_inserted
    • sum_rows_updated
    • sum_rows_deleted

Bug Fixes

  • The following fields in the fivetran_platform__connector_daily_events model have been updated to be cast as dbt.type_bigint() (previously dbt.type_int()):
    • count_api_calls
    • count_record_modifications
    • count_schema_changes
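Both lists above follow the same pattern; a sketch of the cast with an illustrative field:

```sql
-- 32-bit integers overflow above 2,147,483,647; dbt.type_bigint() resolves
-- to the warehouse-appropriate 64-bit integer type
select
    connector_id,
    cast(sum(rows_updated) as {{ dbt.type_bigint() }}) as sum_rows_updated
from {{ ref('stg_fivetran_platform__log') }}
group by 1
```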

Under the Hood

  • Modified log seed data within the integration tests folder to ensure that large integers are being tested as part of our integration tests.

Full Changelog: v1.5.0...v1.6.0

v1.5.0 dbt_fivetran_log

20 Feb 21:04
1627958

PR #114 includes the following updates:

Breaking Changes

⚠️ Since the following changes are breaking, we recommend running a --full-refresh after upgrading to this version.

  • For BigQuery and Databricks destinations, updated the partition_by config to coordinate with the filter used in the incremental logic.
  • For Snowflake destinations, added a cluster_by config for performance.
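A hedged sketch of such configs (partition and cluster columns are illustrative, and the package's actual config differs per destination):

```sql
{{
    config(
        materialized='incremental',
        -- BigQuery: partition on the same date column the incremental filter uses
        partition_by={'field': 'sync_start_day', 'data_type': 'date'} if target.type == 'bigquery' else none,
        -- Snowflake: cluster for pruning on the most common predicate
        cluster_by=['connector_id'] if target.type == 'snowflake' else none
    )
}}
```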

Feature Updates

  • Updated incremental logic for fivetran_platform__audit_table so that it looks back 7 days to catch any late arriving records.
  • Updated JSON parsing logic in the following models to prevent run failures when incoming JSON-like strings are invalid.
    • fivetran_platform__audit_table
    • fivetran_platform__audit_user_activity
    • fivetran_platform__connector_daily_events
    • fivetran_platform__connector_status
    • fivetran_platform__schema_changelog
  • Updated fivetran_platform__connector_status to parse only a subset of the message_data field to improve compute.

Under the Hood

  • Added macros:
    • fivetran_log_json_parse to handle the updated JSON parsing.
    • fivetran_log_lookback for use in fivetran_platform__audit_table.
  • Updated seeds to test handling of invalid JSON strings.
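A minimal sketch of the 7-day lookback idea in the incremental filter (simplified relative to the actual fivetran_log_lookback macro):

```sql
{% if is_incremental() %}
-- look back 7 days from the latest record already in the table so that
-- late-arriving rows are reprocessed
where sync_start >= (
    select {{ dbt.dateadd('day', -7, 'max(sync_start)') }}
    from {{ this }}
)
{% endif %}
```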

Full Changelog: v1.4.3...v1.5.0

v1.4.3 dbt_fivetran_log

24 Jan 10:34
a1a4662

PR #112 includes the following updates:

Feature Updates

  • Updated the logic for the connector_health dimension in fivetran_platform__connector_status to show deleted for connectors that have been removed. Previously, the connector would report its last known status before deletion, which was inaccurate given the definition of this measure.
  • Brought in the is_deleted dimension (based on the _fivetran_deleted value) to stg_fivetran_platform__connector to capture deleted connectors in the downstream fivetran_platform__connector_status model.
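A simplified sketch of the resulting health logic (column and relation names are illustrative):

```sql
select
    connector_id,
    case
        when is_deleted then 'deleted'
        else last_reported_status  -- hypothetical: the pre-existing derivation
    end as connector_health
from connector_metrics  -- hypothetical upstream CTE
```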

Under the Hood

  • Renamed the get_brand_columns macro file to get_connector_columns for consistency with the macro contained in the file and the connector source from which it draws columns.

Full Changelog: v1.4.2...v1.4.3