# Releases: fivetran/dbt_fivetran_log
## v1.9.1 dbt_fivetran_log

PR #138 includes the following updates:

### Features
- For Fivetran Platform connectors created after November 2024, Fivetran has deprecated the `api_call` event in favor of the `extract_summary` event (release notes).
- Accordingly, we have updated the `fivetran_platform__connector_daily_events` model to support the new `extract_summary` event while maintaining backward compatibility with the `api_call` event for connectors created before November 2024 (see the sketch after this list).
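
As a rough illustration of this backward compatibility, the sketch below counts the legacy and replacement events together. The staging model name and the `event_subtype` column are assumptions that mirror the package's conventions, not its exact code.

```sql
-- Hypothetical sketch: roll the legacy api_call event and its extract_summary
-- replacement into one daily metric, so connectors created before and after
-- November 2024 aggregate consistently.
select
    connector_id,
    cast(created_at as date) as date_day,
    count(case when event_subtype in ('api_call', 'extract_summary') then 1 end) as count_api_calls
from {{ ref('stg_fivetran_platform__log') }}  -- assumed staging model name
group by 1, 2
```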

### Under the Hood
- Replaced the deprecated `dbt.current_timestamp_backcompat()` function with `dbt.current_timestamp()` to ensure all timestamps are captured in UTC (see the sketch below).
- Updated `fivetran_platform__connector_daily_events` to support running `dbt compile` prior to the initial `dbt run` on a new schema.
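
For reference, a minimal usage sketch of the replacement macro; `dbt.current_timestamp()` is a standard dbt cross-database macro that compiles to an adapter-appropriate expression:

```sql
-- On modern adapters this resolves to a UTC timestamp expression
-- (e.g. a convert_timezone('UTC', ...) wrapper on Snowflake).
select {{ dbt.current_timestamp() }} as captured_at
```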
Full Changelog: v1.9.0...v1.9.1

## v1.9.0 dbt_fivetran_log

PR #132 includes the following updates:

### 🚨 Schema Changes 🚨
- Following the July 2024 Fivetran Platform connector update, the `connector_name` field has been added to the `incremental_mar` source table. As a result, the following changes have been applied:
  - A new tmp model, `stg_fivetran_platform__incremental_mar_tmp`, has been created. This is necessary to ensure column consistency in downstream `incremental_mar` models.
  - The `get_incremental_mar_columns()` macro has been added to ensure all required columns are present in the `stg_fivetran_platform__incremental_mar` model.
  - The `stg_fivetran_platform__incremental_mar` model has been updated to reference both the aforementioned tmp model and macro, filling any required field that is missing from the source.
  - The `connector_name` field in the `stg_fivetran_platform__incremental_mar` model is now defined as `coalesce(connector_name, connector_id)`, ensuring the data model uses the appropriate field to define the `connector_name` (see the sketch after this list).
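
As a rough illustration of the coalesce behavior described above, a staging model along these lines would fall back to `connector_id` whenever the source has not yet populated `connector_name` (the column list is abbreviated and assumed; this is not the package's exact code):

```sql
with base as (
    select *
    from {{ ref('stg_fivetran_platform__incremental_mar_tmp') }}
)

select
    -- Rows from older connectors may lack connector_name, so fall back to connector_id.
    coalesce(connector_name, connector_id) as connector_name,
    connector_id,
    incremental_rows  -- illustrative; the real model selects the full column set
from base
```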

### Under the Hood
- Updated integration test seed data within `integration_tests/seeds/incremental_mar.csv` to ensure the new code updates work as expected.
Full Changelog: v1.8.0...v1.9.0

## v1.8.0 dbt_fivetran_log

PR #130 includes the following updates:

### 🚨 Breaking Changes 🚨
⚠️ Since the following changes result in the table format changing, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.
- For Databricks All-Purpose clusters, the `fivetran_platform__audit_table` model will now be materialized using the delta table format (previously parquet). A config sketch follows this list.
  - Delta tables are generally more performant than parquet and are also more widely available to Databricks users. Previously, the parquet file format was causing compilation issues on customers' managed tables.
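
A minimal sketch of what such a file-format switch can look like in a dbt model config, assuming a simple `target.type` check (the package's actual logic is more involved):

```sql
{{
    config(
        materialized='incremental',
        -- Assumed simplification: delta on Databricks, parquet elsewhere.
        file_format='delta' if target.type == 'databricks' else 'parquet'
    )
}}

select *
from {{ ref('stg_fivetran_platform__log') }}  -- illustrative source
```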

### Documentation Updates
- Updated the `sync_start` and `sync_end` field descriptions for the `fivetran_platform__audit_table` to explicitly state that these fields represent only the start/end times of syncs in which the connector wrote new records or modified existing records in the specified table.
- Added integrity and consistency validation tests within integration tests for every end model.
- Removed duplicate Databricks dispatch instructions listed in the README.

### Under the Hood
- The `is_databricks_sql_warehouse` macro has been renamed to `is_incremental_compatible` and modified to return `true` if the Databricks runtime being used is an all-purpose cluster (previously this macro checked whether a SQL warehouse runtime was used) or if any other supported non-Databricks destination is being used (see the sketch after this list).
  - This update was applied because other Databricks runtimes have been discovered (i.e. an endpoint and an external runtime) which do not support the `insert_overwrite` incremental strategy used in the `fivetran_platform__audit_table` model.
- In addition to the above, for Databricks users the `fivetran_platform__audit_table` model will now leverage the incremental strategy only if the Databricks runtime is all-purpose; no other Databricks runtime will leverage an incremental strategy.
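
A minimal sketch of such a compatibility check, assuming all-purpose cluster HTTP paths can be identified by a `protocolv1` fragment (a heuristic of ours; the package's actual macro may differ):

```jinja
{% macro is_incremental_compatible() %}
    {% if target.type == 'databricks' %}
        {# All-purpose cluster HTTP paths are assumed to contain 'protocolv1'. #}
        {{ return(target.http_path is not none and 'protocolv1' in target.http_path) }}
    {% else %}
        {# All other supported destinations handle the incremental strategy. #}
        {{ return(true) }}
    {% endif %}
{% endmacro %}
```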
Full Changelog: v1.7.3...v1.8.0

## v1.7.3 dbt_fivetran_log

PR #126 includes the following updates:

### Performance Improvements
- Updated the sequence of JSON parsing in the `fivetran_platform__audit_table` model to reduce runtime.

### Bug Fixes
- Updated the `fivetran_platform__audit_user_activity` model to correct the JSON parsing used to determine the `email` column. This fixes an issue introduced in v1.5.0 where `fivetran_platform__audit_user_activity` could potentially have 0 rows.

### Under the Hood
- Updated logic for the `fivetran_log_lookback` macro to align with the logic used in similar macros in other packages.
- Updated logic for the Postgres dispatch of the `fivetran_log_json_parse` macro to utilize `jsonb` instead of `json` for performance (see the sketch below).
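
To illustrate why `jsonb` helps, a minimal Postgres comparison (table and column names are hypothetical):

```sql
-- jsonb parses the document once into a binary representation, so key extraction
-- is cheaper than with json, which re-parses the text on each access.
select
    cast(message_data as jsonb) ->> 'status' as status_fast,   -- preferred
    cast(message_data as json)  ->> 'status' as status_slower
from fivetran_log.log  -- hypothetical table
```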
Full Changelog: v1.7.2...v1.7.3

## v1.7.2 dbt_fivetran_log

PR #123 includes the following updates:

### Bug Fixes
- Removed the leading `/` from the `target.http_path` regex search within the `is_databricks_sql_warehouse()` macro to accurately identify SQL Warehouse Databricks destinations in Quickstart (see the before/after sketch below).
  - The macro initially worked as expected in dbt Core environments; however, it failed in Quickstart implementations because Quickstart removes the leading `/` from the `target.http_path`, causing the regex search to always fail.
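
A before/after sketch of the kind of pattern change involved (the exact pattern is our assumption, not the package's verbatim regex):

```jinja
{# Before (illustrative): anchored on a leading slash, so Quickstart's stripped paths never match. #}
{% set is_warehouse = modules.re.search('/sql/1.0', target.http_path) is not none %}

{# After (illustrative): dropping the leading slash matches both path forms. #}
{% set is_warehouse = modules.re.search('sql/1.0', target.http_path) is not none %}
```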
Full Changelog: v1.7.1...v1.7.2

## v1.7.1 dbt_fivetran_log

PR #121 includes the following updates:

### Bug Fixes
- Users leveraging the Databricks SQL Warehouse runtime were previously unable to run the `fivetran_platform__audit_table` model due to an incompatible incremental strategy. As such, the following updates have been made (see the sketch after this list):
  - A new macro, `is_databricks_sql_warehouse()`, has been added to determine whether a SQL Warehouse runtime for Databricks is being used. This macro returns `true` if the runtime is determined to be SQL Warehouse and `false` if it is any other runtime or a non-Databricks destination.
  - The above macro is used in determining the incremental strategy within the `fivetran_platform__audit_table` model. For Databricks SQL Warehouses, no incremental strategy is used; all other destinations and runtime strategies are not impacted by this change.
    - For the SQL Warehouse runtime, the best incremental strategy we could elect to use is the `merge` strategy. However, we do not have full confidence in the resulting data integrity of the output model when leveraging this strategy, so we opted to materialize the model as a non-incremental `table` for the time being.
  - The file format of the model has changed to `delta` for SQL Warehouse users. For all other destinations the `parquet` file format is still used.
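
A minimal sketch of such a macro, assuming SQL Warehouse HTTP paths can be identified by a `sql/1.0` fragment (an assumption on our part; the package's actual pattern may differ):

```jinja
{% macro is_databricks_sql_warehouse() %}
    {% if target.type == 'databricks' and target.http_path is not none %}
        {# SQL Warehouse HTTP paths are assumed to contain 'sql/1.0'. #}
        {{ return(modules.re.search('sql/1.0', target.http_path) is not none) }}
    {% else %}
        {{ return(false) }}
    {% endif %}
{% endmacro %}
```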

### Features
- Updated the README incremental model section to revise descriptions and add information for Databricks SQL Warehouse.

### Under the Hood
- Added an integration testing pipeline for Databricks SQL Warehouse.
- Applied modifications to the integration testing pipeline to account for jobs being run on both Databricks All-Purpose Cluster and SQL Warehouse runtimes.
Full Changelog: v1.7.0...v1.7.1

## v1.7.0 dbt_fivetran_log

PR #119 includes the following updates:

### 🚨 Breaking Changes 🚨: Bug Fixes
- The following fields have been deprecated (removed), as they proved to be problematic across warehouses due to the size these fields could reach:
  - `errors_since_last_completed_sync`
  - `warnings_since_last_completed_sync`

  Note: If you found these fields to be relevant, you may still reference the error/warning messages from within the underlying `log` table.
- The `fivetran_platform_using_sync_alert_messages` variable has been removed, as it is no longer necessary following the above changes.

### Feature Updates
- The following fields have been added to display the number of error/warning messages since the last completed sync. These fields are intended to substitute for the information in the deprecated fields listed above (see the sketch after this list):
  - `number_errors_since_last_completed_sync`
  - `number_warnings_since_last_completed_sync`
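
A rough sketch of how such counts can be derived. The event values and the per-connector `last_completed_sync_at` input are assumptions, not the package's exact logic:

```sql
select
    connector_id,
    count(case when event_type = 'SEVERE'  then 1 end) as number_errors_since_last_completed_sync,
    count(case when event_type = 'WARNING' then 1 end) as number_warnings_since_last_completed_sync
from {{ ref('stg_fivetran_platform__log') }}  -- assumed staging model
where created_at > last_completed_sync_at     -- assumed to be joined in upstream
group by 1
```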
Full Changelog: v1.6.0...v1.7.0

## v1.6.0 dbt_fivetran_log

PR #117 includes the following updates, made after users encountered numeric counts exceeding the limit of a standard integer. The fields below therefore needed to be cast as `bigint` to avoid "integer out of range" errors.

### Breaking Changes
⚠️ Since the following changes result in a field changing datatype, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.
- The following fields in the `fivetran_platform__audit_table` model have been updated to be cast as `dbt.type_bigint()` (previously `dbt.type_int()`):
  - `sum_rows_replaced_or_inserted`
  - `sum_rows_updated`
  - `sum_rows_deleted`

### Bug Fixes
- The following fields in the `fivetran_platform__connector_daily_events` model have been updated to be cast as `dbt.type_bigint()` (previously `dbt.type_int()`); a cast sketch follows this list:
  - `count_api_calls`
  - `count_record_modifications`
  - `count_schema_changes`
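
For reference, a minimal sketch of the cast pattern; `dbt.type_bigint()` is a standard dbt macro that compiles to the warehouse-appropriate 64-bit integer type:

```sql
select
    cast(count_api_calls as {{ dbt.type_bigint() }}) as count_api_calls
from {{ ref('fivetran_platform__connector_daily_events') }}
```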

### Under the Hood
- Modified the `log` seed data within the integration tests folder to ensure that large integers are covered by our integration tests.
Full Changelog: v1.5.0...v1.6.0

## v1.5.0 dbt_fivetran_log

PR #114 includes the following updates:

### Breaking Changes
⚠️ Since the following changes are breaking, we recommend running a `--full-refresh` after upgrading to this version.
- For BigQuery and Databricks destinations, updated the `partition_by` config to coordinate with the filter used in the incremental logic.
- For Snowflake destinations, added a `cluster_by` config for performance.

### Feature Updates
- Updated incremental logic for `fivetran_platform__audit_table` so that it looks back 7 days to catch any late-arriving records (see the sketch after this list).
- Updated JSON parsing logic in the following models to prevent run failures when incoming JSON-like strings are invalid:
  - `fivetran_platform__audit_table`
  - `fivetran_platform__audit_user_activity`
  - `fivetran_platform__connector_daily_events`
  - `fivetran_platform__connector_status`
  - `fivetran_platform__schema_changelog`
- Updated `fivetran_platform__connector_status` to parse only a subset of the `message_data` field to reduce compute.
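
A minimal sketch of a 7-day lookback inside an incremental model's filter, using the standard `dbt.dateadd` cross-database macro (the `sync_start` column is assumed):

```sql
{% if is_incremental() %}
-- Reprocess the trailing 7 days on each run so late-arriving records are captured.
where sync_start >= (
    select {{ dbt.dateadd('day', -7, 'max(sync_start)') }}
    from {{ this }}
)
{% endif %}
```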

### Under The Hood
- Added macros:
  - `fivetran_log_json_parse` to handle the updated JSON parsing.
  - `fivetran_log_lookback` for use in `fivetran_platform__audit_table`.
- Updated seeds to test the handling of invalid JSON strings.
Full Changelog: v1.4.3...v1.5.0

## v1.4.3 dbt_fivetran_log

PR #112 includes the following updates:

### Feature Updates
- Updated logic for the `connector_health` dimension in `fivetran_platform__connector_status` to show `deleted` for connectors that have been removed (see the sketch after this list). Previously a connector would report its last known status before deletion, which is inaccurate given the definition of this measure.
- Brought in the `is_deleted` dimension (based on the `_fivetran_deleted` value) to `stg_fivetran__platform__connector` to capture deleted connectors in the downstream `fivetran_platform__connector_status` model.
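
A rough sketch of the health override described above (the `is_deleted` flag mirrors the release notes; the rest is illustrative):

```sql
select
    connector_id,
    case
        when is_deleted then 'deleted'  -- derived from _fivetran_deleted upstream
        else connector_health
    end as connector_health
from {{ ref('stg_fivetran__platform__connector') }}  -- model name as given in the notes above
```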

### Under The Hood
- Renamed the `get_brand_columns` macro file to `get_connector_columns` to maintain consistency with the actual macro function within the file and the `connector` source that the macro draws columns from.
Full Changelog: v1.4.2...v1.4.3