PR #138 includes the following updates:
- For Fivetran Platform Connectors created after November 2024, Fivetran has deprecated the `api_call` event in favor of `extract_summary` (release notes). Accordingly, we have updated the `fivetran_platform__connector_daily_events` model to support the new `extract_summary` event while maintaining backward compatibility with the `api_call` event for connectors created before November 2024.
- Replaced the deprecated `dbt.current_timestamp_backcompat()` function with `dbt.current_timestamp()` to ensure all timestamps are captured in UTC.
- Updated `fivetran_platform__connector_daily_events` to support running `dbt compile` prior to the initial `dbt run` on a new schema.
PR #132 includes the following updates:
- Following the July 2024 Fivetran Platform connector update, the `connector_name` field has been added to the `incremental_mar` source table. As a result, the following changes have been applied:
  - A new tmp model `stg_fivetran_platform__incremental_mar_tmp` has been created. This is necessary to ensure column consistency in downstream `incremental_mar` models.
  - The `get_incremental_mar_columns()` macro has been added to ensure all required columns are present in the `stg_fivetran_platform__incremental_mar` model.
  - The `stg_fivetran_platform__incremental_mar` model has been updated to reference both the aforementioned tmp model and macro to fill empty fields if any required field is not present in the source.
  - The `connector_name` field in the `stg_fivetran_platform__incremental_mar` model is now defined as `coalesce(connector_name, connector_id)`. This ensures the data model will use the appropriate field to define the `connector_name`.
- Updated integration test seed data within `integration_tests/seeds/incremental_mar.csv` to ensure the new code updates are working as expected.
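A minimal sketch of the coalesce described above, assuming a simplified staging select (`incremental_mar_source` is a stand-in name, not the package's actual tmp model reference):

```sql
select
    connector_id,
    -- fall back to connector_id for rows synced before connector_name existed
    coalesce(connector_name, connector_id) as connector_name
from incremental_mar_source
```

This keeps downstream models keyed on a consistently populated `connector_name` regardless of when the connector was created.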
PR #130 includes the following updates:

⚠️ Since the following changes result in the table format changing, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.

- For Databricks All-Purpose clusters, the `fivetran_platform__audit_table` model will now be materialized using the delta table format (previously parquet).
  - Delta tables are generally more performant than parquet and are also more widely available for Databricks users. Previously, the parquet file format was causing compilation issues on customers' managed tables.
- Updated the `sync_start` and `sync_end` field descriptions for the `fivetran_platform__audit_table` to explicitly define that these fields only represent the sync start/end times for when the connector wrote new or modified existing records to the specified table.
- Added integrity and consistency validation tests within integration tests for every end model.
- Removed duplicate Databricks dispatch instructions listed in the README.
- The `is_databricks_sql_warehouse` macro has been renamed to `is_incremental_compatible` and has been modified to return `true` if the Databricks runtime being used is an all-purpose cluster (previously this macro checked if a SQL Warehouse runtime was used) or if any other supported non-Databricks destination is being used.
  - This update was applied as other Databricks runtimes have been discovered (i.e. an endpoint and external runtime) which do not support the `insert_overwrite` incremental strategy used in the `fivetran_platform__audit_table` model.
- In addition to the above, for Databricks users the `fivetran_platform__audit_table` model will now leverage the incremental strategy only if the Databricks runtime is all-purpose. All other Databricks runtimes will not leverage an incremental strategy.
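A hedged sketch of what a runtime-dependent model config could look like (this is illustrative, not the package's exact source; it assumes `is_incremental_compatible()` behaves as described above):

```sql
{{
    config(
        -- only incrementalize where the runtime supports the strategy;
        -- other Databricks runtimes fall back to a full table build
        materialized='incremental' if is_incremental_compatible() else 'table',
        file_format='delta'
    )
}}
```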
PR #126 includes the following updates:
- Updated the sequence of JSON parsing for the `fivetran_platform__audit_table` model to reduce runtime.
- Updated the `fivetran_platform__audit_user_activity` model to correct the JSON parsing used to determine the `email` column. This fixes an issue introduced in v1.5.0 where `fivetran_platform__audit_user_activity` could potentially have 0 rows.
- Updated logic for the `fivetran_log_lookback` macro to align with the logic used in similar macros in other packages.
- Updated logic for the Postgres dispatch of the `fivetran_log_json_parse` macro to utilize `jsonb` instead of `json` for performance.
PR #123 includes the following updates:
- Removed the leading `/` from the `target.http_path` regex search within the `is_databricks_sql_warehouse()` macro to accurately identify SQL Warehouse Databricks destinations in Quickstart.
  - The macro above initially worked as expected in dbt Core environments; however, it was not working in Quickstart implementations. This was due to Quickstart removing the leading `/` from the `target.http_path`, which caused the regex search to always fail.
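To illustrate the kind of pattern change involved, here is a hypothetical sketch (the macro body and pattern below are assumptions for illustration, not the package's actual implementation):

```sql
{% macro is_databricks_sql_warehouse() %}
    {# Search without anchoring on a leading "/" so that both
       "/sql/1.0/warehouses/..." (dbt Core) and
       "sql/1.0/warehouses/..." (Quickstart) match. #}
    {% set match = modules.re.search('sql/.+/warehouses', target.http_path) %}
    {{ return(target.type == 'databricks' and match is not none) }}
{% endmacro %}
```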
PR #121 includes the following updates:
- Users leveraging the Databricks SQL Warehouse runtime were previously unable to run the `fivetran_platform__audit_table` model due to an incompatible incremental strategy. As such, the following updates have been made:
  - A new macro `is_databricks_sql_warehouse()` has been added to determine if a SQL Warehouse runtime for Databricks is being used. This macro returns a boolean of `true` if the runtime is determined to be SQL Warehouse and `false` if it is any other runtime or a non-Databricks destination.
  - The above macro is used in determining the incremental strategy within the `fivetran_platform__audit_table` model. For Databricks SQL Warehouses, no incremental strategy will be used. All other destinations and runtime strategies are not impacted by this change.
    - For the SQL Warehouse runtime, the best incremental strategy we could elect to use is the `merge` strategy. However, we do not have full confidence in the resulting data integrity of the output model when leveraging this strategy. Therefore, we opted for the model to be materialized as a non-incremental `table` for the time being.
  - The file format of the model has changed to `delta` for SQL Warehouse users. For all other destinations the `parquet` file format is still used.
- Updated the README incremental model section to revise descriptions and add information for Databricks SQL Warehouse.
- Added an integration testing pipeline for Databricks SQL Warehouse.
- Applied modifications to the integration testing pipeline to account for jobs being run on both Databricks All-Purpose Cluster and SQL Warehouse runtimes.
PR #119 includes the following updates:
- The following fields have been deprecated (removed), as they proved to be problematic across warehouses due to their resulting size:
  - `errors_since_last_completed_sync`
  - `warnings_since_last_completed_sync`

  Note: If you found these fields to be relevant, you may still reference the error/warning messages from within the underlying `log` table.
- The `fivetran_platform_using_sync_alert_messages` variable has been removed, as it is no longer necessary due to the above changes.
- The following fields have been added to display the number of error/warning messages since the last completed sync. These fields are intended to substitute the information from the deprecated fields listed above:
  - `number_errors_since_last_completed_sync`
  - `number_warnings_since_last_completed_sync`
PR #117 includes the following updates as a result of users encountering numeric counts exceeding the limit of a standard integer. These fields were therefore cast as `bigint` in order to avoid "integer out of range" errors:

⚠️ Since the following changes result in a field changing datatype, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.

- The following fields in the `fivetran_platform__audit_table` model have been updated to be cast as `dbt.type_bigint()` (previously `dbt.type_int()`):
  - `sum_rows_replaced_or_inserted`
  - `sum_rows_updated`
  - `sum_rows_deleted`
- The following fields in the `fivetran_platform__connector_daily_events` model have been updated to be cast as `dbt.type_bigint()` (previously `dbt.type_int()`):
  - `count_api_calls`
  - `count_record_modifications`
  - `count_schema_changes`
- Modified the `log` seed data within the integration tests folder to ensure that large integers are being tested as part of our integration tests.
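As a sketch, the cast change looks like the following for one of the fields listed above (the inner column name and aggregation are illustrative, not the model's exact SQL):

```sql
-- dbt.type_bigint() dispatches to the warehouse-appropriate 64-bit integer type,
-- avoiding "integer out of range" errors on large row counts
cast(sum(rows_updated) as {{ dbt.type_bigint() }}) as sum_rows_updated
```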
PR #114 includes the following updates:

⚠️ Since the following changes are breaking, we recommend running a `--full-refresh` after upgrading to this version.

- For BigQuery and Databricks destinations, updated the `partition_by` config to coordinate with the filter used in the incremental logic.
- For Snowflake destinations, added a `cluster_by` config for performance.
- Updated incremental logic for `fivetran_platform__audit_table` so that it looks back 7 days to catch any late arriving records.
- Updated JSON parsing logic in the following models to prevent run failures when incoming JSON-like strings are invalid:
  - `fivetran_platform__audit_table`
  - `fivetran_platform__audit_user_activity`
  - `fivetran_platform__connector_daily_events`
  - `fivetran_platform__connector_status`
  - `fivetran_platform__schema_changelog`
- Updated `fivetran_platform__connector_status` to parse only a subset of the `message_data` field to improve compute.
- Added macros:
  - `fivetran_log_json_parse` to handle the updated JSON parsing.
  - `fivetran_log_lookback` for use in `fivetran_platform__audit_table`.
- Updated seeds to test handling of invalid JSON strings.
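The 7-day lookback pattern can be sketched generically as follows (a simplified illustration; the package encapsulates this in the `fivetran_log_lookback` macro, whose actual signature may differ):

```sql
{% if is_incremental() %}
-- look back 7 days from the most recent record already loaded
-- so late-arriving rows are re-captured on incremental runs
where sync_start >= (
    select {{ dbt.dateadd('day', -7, 'max(sync_start)') }} from {{ this }}
)
{% endif %}
```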
PR #112 includes the following updates:
- Updated logic for the `connector_health` dimension in `fivetran_platform__connector_status` to show `deleted` for connectors that have been removed. Previously the connector would report the last known status before deletion, which is inaccurate based on the definition of this measure.
- Brought the `is_deleted` dimension (based on the `_fivetran_deleted` value) into `stg_fivetran_platform__connector` to capture connectors that are deleted in the downstream `fivetran_platform__connector_status` model.
- Renamed the `get_brand_columns` macro file to `get_connector_columns` to maintain consistency with the actual macro function within the file, and the `connector` source that the macro is drawing columns from.
PR #109 includes the following updates:
- Adjusted the `stg_fivetran_platform__credits_used` and `stg_fivetran_platform__usage_cost` models to return empty tables (via a `limit 0` clause, or `fetch`/`offset` for SQL Server) if the respective `fivetran_platform__credits_pricing` and/or `fivetran_platform__usage_pricing` variables are disabled. This is to avoid Postgres data type errors if those tables are null.
- Included an additional test case within the integration tests where the `fivetran_platform__credits_pricing` variable is set to `false` and the `fivetran_platform__usage_pricing` variable is set to `true` in order to effectively test this scenario.
- Updated seed files to ensure downstream models properly populate into `fivetran_platform__usage_mar_destination_history`.
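A sketch of the disablement pattern described above (column names, source reference, and the variable's default are illustrative; the SQL Server dispatch would use `offset`/`fetch` in place of `limit`):

```sql
select
    destination_id,
    measured_month,
    credits_spent
from credits_used_source
-- emit a correctly-typed empty relation when credit pricing is disabled,
-- avoiding Postgres datatype errors on fully-null tables
{% if not var('fivetran_platform__credits_pricing', true) %}
limit 0
{% endif %}
```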
PR #107 includes the following updates:
- Adjusted the `fivetran_platform__audit_user_activity` model to parse the `message_data` JSON field to obtain the actor_email information only if the field contains `actor`.
  - This ensures the JSON parsing only happens on the fields that are relevant. This will help reduce compute and avoid potential parsing errors from malformed JSON objects.
- Included an auto-releaser GitHub Actions workflow to automate future releases.
- Updated the maintainer PR template to resemble the most up-to-date format.
- This release introduces compatibility with SQL Server 🥳 🎆 🍾 (PR #101)
- Adjusts the uniqueness test on the recently introduced `fivetran_platform__audit_user_activity` model to test on `log_id` and `occurred_at` (PR #102).
  - Previously, the `log_id` was erroneously considered the primary key of this model.
- Removed `order by` from the final `select` statement in each model. This was done to reduce compute costs from the models (PR #101).
- Converted all `group by`'s to explicitly reference the names of the columns we are grouping by, instead of grouping by column number. This was necessary for SQL Server compatibility, as implicit groupings are not supported (PR #101).
- Deprecated the `transformation` and `trigger_table` source tables and any downstream transforms. These tables only housed information on Fivetran Basic SQL Transformations, which were sunset last year (PR #96).
  - The entire `fivetran_platform__transformation_status` end model has therefore been removed.
  - As they are now obsolete, the `fivetran_platform_using_transformations` and `fivetran_platform_using_triggers` variables have been removed.
- We have added a new model, `fivetran_platform__audit_user_activity` (PR #98):
  - Each record represents a user-triggered action in your Fivetran instance. This model is intended for audit-trail purposes, as it can be very helpful when trying to trace a user action to a log event such as a schema change, sync frequency update, manual update, broken connection, etc.
  - This model builds off of this sample query from Fivetran's docs.
- Tightened incremental logic in `fivetran_platform__audit_table`, which was seeing duplicates on incremental runs (PR #97).
  - If you are seeing uniqueness test failures on the `unique_table_sync_key` field, please run a full refresh before upgrading to this version of the package.
- Added a dependency on the `dbt_date` package (PR #98):

  ```yml
  - package: calogica/dbt_date
    version: [">=0.9.0", "<1.0.0"]
  ```
PR #92 includes the following updates:
- The `unique_table_sync_key` surrogate key created within the `fivetran_platform__audit_table` has been updated to also be comprised of the `schema_name` field, in addition to the `connector_id`, `destination_id`, `table_name`, and `write_to_table_start` fields. This update will also ensure the uniqueness test on this record is accurately testing the true grain of the model.
  - 🚨 Please be aware that as the `fivetran_platform__audit_table` model is an incremental model, a `--full-refresh` will be needed following the package upgrade in order for this change to properly be applied to all records in the end model. 🚨
PR #87 includes the following updates:

The below change was made to an incremental model. As such, a `dbt run --full-refresh` will be required following an upgrade to capture the new column.

- Added `schema_name` to the `fivetran_platform__audit_table` end model. This schema name field is captured from the `message_data` JSON within the `log` source table. In cases where the `schema_name` is not provided, a coalesce was added to replicate the `connector_name` as the `schema_name`.

Note: This may change the row count of your `fivetran_platform__audit_table` model. However, this new row count is more correct, as it more accurately captures records from database connectors, which can write to multiple schemas.
- Fixed links in the README models section to properly redirect to the dbt hosted docs for the relevant models.
- Updated staging models' CTE names to the current standard used in our other packages (the base, fields, final approach) to avoid potential circular references (PR #85).
The Fivetran Log connector has been renamed to the "Fivetran Platform" connector. To align with this name change, this package is largely being renamed from `fivetran_log` to `fivetran_platform`. This is a very breaking change! 🚨 🚨 🚨 🚨

Bottom Line: What you need to update and/or know:
- If you are setting any variables for this package in your `dbt_project.yml`, update the prefix of the variable(s) from `fivetran_log_*` to `fivetran_platform_*`. The default values for variables have not changed.
- Similarly, any references to package models will need to be updated. The prefix of package models has been updated from `fivetran_log__*` to `fivetran_platform__*`.
- If you are overriding the `fivetran_log` source, you will need to update the `overrides` property to match the new `source` name (`fivetran_platform`).
- Run a full refresh, as we have updated the incremental strategies across warehouses.
- The default build schema suffixes have been changed from `_stg_fivetran_log` and `_fivetran_log` to `_stg_fivetran_platform` and `_fivetran_platform` respectively. We recommend dropping the old schemas.

Note: Things that are NOT changing in the package:
- The name of the GitHub repository will not be changed. It will remain `dbt_fivetran_log`.
- The package's project name will remain `fivetran_log`. You will not need to update your `packages.yml` reference.
- The default source schema will remain `fivetran_log`. The name of the source schema variable has changed, though (`fivetran_log_schema` -> `fivetran_platform_schema`).

See details below!
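For example, a `dbt_project.yml` that previously set the schema variable would be updated like this (the `fivetran` schema value is a placeholder for whatever your project uses):

```yml
vars:
  # before the rename:
  # fivetran_log_schema: fivetran
  # after the rename (same default behavior, new prefix):
  fivetran_platform_schema: fivetran
```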
PR #81 introduced the following changes (some unrelated to the connector name change):
- Updated the prefixes of each model from `fivetran_log_*` or `stg_fivetran_log_*` to `fivetran_platform_*` and `stg_fivetran_platform_*`, respectively.

| Original model name | New model name |
|---|---|
| fivetran_log__audit_table | fivetran_platform__audit_table |
| fivetran_log__connector_daily_events | fivetran_platform__connector_daily_events |
| fivetran_log__connector_status | fivetran_platform__connector_status |
| fivetran_log__mar_table_history | fivetran_platform__mar_table_history |
| fivetran_log__schema_changelog | fivetran_platform__schema_changelog |
| fivetran_log__transformation_status | fivetran_platform__transformation_status |
| fivetran_log__usage_mar_destination_history | fivetran_platform__usage_mar_destination_history |
| stg_fivetran_log__account | stg_fivetran_platform__account |
| stg_fivetran_log__connector | stg_fivetran_platform__connector |
| stg_fivetran_log__credits_used | stg_fivetran_platform__credits_used |
| stg_fivetran_log__destination_membership | stg_fivetran_platform__destination_membership |
| stg_fivetran_log__destination | stg_fivetran_platform__destination |
| stg_fivetran_log__incremental_mar | stg_fivetran_platform__incremental_mar |
| stg_fivetran_log__log | stg_fivetran_platform__log |
| stg_fivetran_log__transformation | stg_fivetran_platform__transformation |
| stg_fivetran_log__trigger_table | stg_fivetran_platform__trigger_table |
| stg_fivetran_log__usage_cost | stg_fivetran_platform__usage_cost |
| stg_fivetran_log__user | stg_fivetran_platform__user |
- Updated the prefix of all package variables from `fivetran_log_*` to `fivetran_platform_*`.

| Original variable name | New variable name | Default value (consistent) |
|---|---|---|
| fivetran_log_schema | fivetran_platform_schema | `fivetran_log` |
| fivetran_log_database | fivetran_platform_database | `target.database` |
| fivetran_log__usage_pricing | fivetran_platform__usage_pricing | Dynamically checks the source at runtime to set as either `true` or `false`. May be overridden using this variable if desired. |
| fivetran_log__credits_pricing | fivetran_platform__credits_pricing | Dynamically checks the source at runtime to set as either `true` or `false`. May be overridden using this variable if desired. |
| fivetran_log_using_sync_alert_messages | fivetran_platform_using_sync_alert_messages | `True` |
| fivetran_log_using_transformations | fivetran_platform_using_transformations | `True` |
| fivetran_log_using_triggers | fivetran_platform_using_triggers | `True` |
| fivetran_log_using_destination_membership | fivetran_platform_using_destination_membership | `True` |
| fivetran_log_using_user | fivetran_platform_using_user | `True` |
| fivetran_log_[default_table_name]_identifier | fivetran_platform_[default_table_name]_identifier | Default table name (i.e. 'connector' for `fivetran_platform_connector_identifier`) |
- Updated the default build schema suffixes of package models from `_stg_fivetran_log` and `_fivetran_log` to `_stg_fivetran_platform` and `_fivetran_platform` respectively. We recommend dropping the old schemas to remove the stale pre-name-change models from your destination.
- Updated the name of the package's source from `fivetran_log` to `fivetran_platform`.
- Updated the names of the package's schema files:
  - `src_fivetran_log.yml` -> `src_fivetran_platform.yml`
  - `stg_fivetran_log.yml` -> `stg_fivetran_platform.yml`
  - `fivetran_log.yml` -> `fivetran_platform.yml`
- Updated the freshness tests on the `fivetran_platform` source to be less stringent and more realistic. The following source tables have had their default freshness tests removed, as they will not necessarily update frequently:
  - `connector`
  - `account`
  - `destination`
  - `destination_membership`
  - `user`
- Updated the incremental strategy of the audit table model for BigQuery and Databricks users from `merge` to the more consistent `insert_overwrite` method. We have also updated the `file_format` to `parquet` and added a partition on a new `sync_start_day` field for Databricks. This field is merely a truncated version of `sync_start`.
  - Run a full refresh to capture these new changes. We recommend running a full refresh every so often regardless. See the README for more details.
- The `account_membership` source table (and any of its transformations) has been deprecated. Fivetran deprecated this table from the connector in June 2023.

⚠️ If you are overriding the `fivetran_log` source, you will need to update the `overrides` property to match the new `source` name (`fivetran_platform`).
- Added documentation for fields missing yml entries.
- Incorporated the new `fivetran_utils.drop_schemas_automation` macro into the end of each Buildkite integration test job (PR #80).
- Updated the pull request templates (PR #80).
PR #79 includes the following updates:
- The `sync_id` field from the source `log` table is added to the `stg_fivetran_log__log` model for ease of grouping events by the sync that they are associated with.
- Added the `get_log_columns` macro and included the fill staging CTEs within the `stg_fivetran_log__log` model to ensure the model succeeds even if a user does not have all the required fields.
- The `sync_id` field is added to the documentation in the `fivetran_log.yml` file.
PR #77 includes the following updates:
- The logic within the `does_table_exist` macro would run the source_relation check across all nodes. This opened `dbt compile` to erroneous failures in other (non fivetran_log) sources. This macro logic has been updated to only check the source_relation for the specific source in question.
- Adjusted the enabled variable used within the `stg_fivetran_log__credits_used` model to the more appropriate `fivetran_log__credits_pricing` name as opposed to `fivetran_log__usage_pricing`. This ensures a user may override the respective model enablements in isolation of each other.
- Added a DECISIONLOG to support the logic behind the `fivetran_log__usage_pricing` and `fivetran_log__credits_pricing` variable behaviors within the package.
- Fixed duplicated rows in `fivetran_log__mar_table_history` and set the model back to a monthly granularity for each source, destination, and table. (#74)
- Adjusted the uniqueness test within `fivetran_log__mar_table_history` to also include the `schema_name`, as the same table may exist in multiple schemas within a connector/destination. (#74)
- Modified the logic within the `fivetran_log__mar_table_history` model to no longer filter out previous historical MAR records. Previously, these fields were filtered out as the `active_volume` source (since deprecated and replaced with `incremental_mar`) produced a cumulative daily MAR total. However, the `incremental_mar` source is not cumulative and will need to include all historical records. (#72)
- Added coalesce statements to the `paid_monthly_active_rows` and `free_monthly_active_rows` fields within the `fivetran_log__mar_table_history` model to coalesce to 0. (#72)
PR #64 includes the following breaking changes:
- Dispatch update for the dbt-utils to dbt-core cross-db macros migration. Specifically, `{{ dbt_utils.<macro> }}` has been updated to `{{ dbt.<macro> }}` for the below macros:
  - `any_value`
  - `bool_or`
  - `cast_bool_to_text`
  - `concat`
  - `date_trunc`
  - `dateadd`
  - `datediff`
  - `escape_single_quotes`
  - `except`
  - `hash`
  - `intersect`
  - `last_day`
  - `length`
  - `listagg`
  - `position`
  - `replace`
  - `right`
  - `safe_cast`
  - `split_part`
  - `string_literal`
  - `type_bigint`
  - `type_float`
  - `type_int`
  - `type_numeric`
  - `type_string`
  - `type_timestamp`
  - `array_append`
  - `array_concat`
  - `array_construct`
- For the `current_timestamp` and `current_timestamp_in_utc` macros, the dispatch AND the macro names have been updated to the below, respectively:
  - `dbt.current_timestamp_backcompat`
  - `dbt.current_timestamp_in_utc_backcompat`
- `dbt_utils.surrogate_key` has also been updated to `dbt_utils.generate_surrogate_key`. Since the methods for creating surrogate keys differ, we suggest all users run a `full-refresh` for the most accurate data. For more information, please refer to the dbt-utils release notes for this update.
- `packages.yml` has been updated to reflect the new default `fivetran/fivetran_utils` version, previously `[">=0.3.0", "<0.4.0"]`, now `[">=0.4.0", "<0.5.0"]`.
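As an illustration of the dispatch migration above, a cast that previously routed through dbt-utils now routes through dbt-core's built-in cross-database macros (the column name is illustrative):

```sql
-- before the migration
cast(count_api_calls as {{ dbt_utils.type_bigint() }})

-- after the migration
cast(count_api_calls as {{ dbt.type_bigint() }})
```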
PR #68 includes the following breaking changes:
- The `active_volume` source (and accompanying `stg_fivetran_log__active_volume` model) has been deprecated from the Fivetran Log connector. In its place, the `incremental_mar` table (and accompanying `stg_fivetran_log__incremental_mar` model) has been added. The package has been swapped over to reference this new source table.
  - This new source table has enriched data behind the paid and free MAR across Fivetran connectors within your destinations.
- Removed the `monthly_active_rows` field from the `fivetran_log__mar_table_history` and `fivetran_log__usage_mar_destination_history` models. In its place the following fields have been added:
  - `free_monthly_active_rows`: Detailing the total free MAR
  - `paid_monthly_active_rows`: Detailing the total paid MAR
  - `total_monthly_active_rows`: Detailing the total free and paid MAR
- Added a second qualifying join clause to `fivetran_log__usage_mar_destination_history` in the `usage` cte. This join was failing the below test, which ensures each `destination_id` has a single `measured_month`:

  ```yml
  - dbt_utils.unique_combination_of_columns:
      combination_of_columns:
        - destination_id
        - measured_month
  ```
- BuildKite testing has been added. (#70)
- Modified the argument used for the identifier in the `get_relation` macro (used in the `does_table_exist` macro) from `name` to `identifier`. This avoids issues on Snowflake where the name of a table defined in a source yaml may be in lowercase while in Snowflake it is uppercased.
- Extended model disablement with the `config: is_enabled` setting in sources to avoid running source freshness when a model is disabled. (#58)
- Added the option to disable the `user`, `account_membership`, and `destination_membership` models that may not be available depending on your Fivetran setup. In your `dbt_project.yml` you can now change these flags to disable parts of the package. Use the below to configure your project. (#52) and (#55)

  ```yml
  fivetran_log_using_account_membership: false # Disables account membership models
  fivetran_log_using_destination_membership: false # Disables destination membership models
  fivetran_log_using_user: false # Disables user models
  ```
- This release includes updates to the `fivetran_log__credit_mar_destination_history` and `stg_fivetran_log__credits_used` models to account for the new Fivetran pricing model. These changes include: (#50)
  - `stg_fivetran_log__credits_used`
    - The field `credits_consumed` has been renamed to `credits_spent`.
  - `fivetran_log__credit_mar_destination_history`
    - The model has been renamed to `fivetran_log__usage_mar_destination_history`.
    - The field `credits_per_million_mar` has been renamed to `credits_spent_per_million_mar`.
    - The field `mar_per_credit` has been renamed to `mar_per_credit_spent`.
- README documentation updates for an easier experience leveraging the dbt package.
- Added `fivetran_log_[source_table_name]_identifier` variables to allow for easier flexibility of the package to refer to source tables with different names.
- This package now accounts for the new Fivetran pricing model. In particular, the new model accounts for the amount of dollars spent vs credits spent. Therefore, a new staging model `stg_fivetran_log__usage_cost` has been added. (#50)
  - This model relies on the `usage_cost` source table. If you do not have this source table it means you are not on the new pricing model yet. Please note, the dbt package will still generate this staging model; however, the model will be comprised of all `null` records.
- In addition to the new staging model, two new fields have been added to the `fivetran_log__usage_mar_destination_history` model. These fields mirror the credits spent fields, but account for the amount of dollars spent instead of credits. (#50)
  - `amount_spent_per_million_mar`
  - `mar_per_amount_spent`
- Introduced a new macro `does_table_exist` to be leveraged in the new pricing model updates. This macro checks the sources defined and returns either `true` or `false` depending on whether the table exists in the schema. (#50)
- The unique combination of columns test within the `fivetran_log__schema_changelog` model has been updated to also check the `message_data` field. This is needed as schema changelog events may now sync at the same time. (#51)
- The `fivetran_log__connector_status` model has been adjusted to filter out all logs that contain a `transformation_id`. Transformation logs are not always synced as a JSON object and thus the package may encounter errors on Snowflake warehouses when parsing non-JSON fields. Since transformation records are not used in this end model, they have been filtered out. (#51)
- Per the Fivetran Log December 2021 Release Notes, every sync results in a final `sync_end` event. In the previous version of this package, a successful sync was identified via a `sync_end` event while anything else was a version of broken. Since all syncs now result in a `sync_end` event, the package has been updated to account for this change within the connector.
- To account for the above fix, a new field (`last_successful_sync_completed_at`) was added to the `fivetran_log__connector_status` model. This field captures the last successful sync for the connector.
- The `fivetran_log__connector_status` model uses a date function off of `created_at` from the `stg_fivetran_log__log` model. This fails on certain Redshift destinations as the timestamp is synced as `timestamptz`. Therefore, the field within the staging model is cast using `dbt_utils.type_timestamp` to appropriately cast the field for downstream functions. Further, to future-proof, timestamps were cast within the following staging models: `account`, `account_membership`, `active_volume`, `destination_membership`, `destination`, `log`, `transformation`, and `user`. (#40)
This release just introduces Databricks compatibility! 🧱🧱
🎉 Official dbt v1.0.0 Compatibility Release 🎉
- Adjusts the `require-dbt-version` to now be within the range [">=1.0.0", "<2.0.0"]. Additionally, the package has been updated for dbt v1.0.0 compatibility. If you are using a dbt version <1.0.0, you will need to upgrade in order to leverage the latest version of the package.
  - For help upgrading your package, I recommend reviewing this GitHub repo's Release Notes on what changes have been implemented since your last upgrade.
  - For help upgrading your dbt project to dbt v1.0.0, I recommend reviewing dbt-labs' upgrading to 1.0.0 docs for more details on what changes must be made.
- Upgrades the package dependency to refer to the latest `dbt_fivetran_utils`. The latest `dbt_fivetran_utils` package also has a dependency on `dbt_utils` [">=0.8.0", "<0.9.0"].
  - Please note, if you are installing a version of `dbt_utils` in your `packages.yml` that is not in the range above, then you will encounter a package dependency error.
- Materializes the `fivetran_log__audit_table` incrementally, and employs partitioning for BigQuery users. As a non-incremental table, this model involved high runtimes for some users (#27).
  - If you would like to apply partitioning to the underlying source tables (i.e. the `LOG` table), refer to Fivetran docs on how to do so.
- Expands compatibility to Postgres!
🎉 dbt v1.0.0 Compatibility Pre Release 🎉 An official dbt v1.0.0 compatible version of the package will be released once existing feature/bug PRs are merged.
- Adjusts the `require-dbt-version` to now be within the range [">=1.0.0", "<2.0.0"]. Additionally, the package has been updated for dbt v1.0.0 compatibility. If you are using a dbt version <1.0.0, you will need to upgrade in order to leverage the latest version of the package.
  - For help upgrading your package, I recommend reviewing this GitHub repo's Release Notes on what changes have been implemented since your last upgrade.
  - For help upgrading your dbt project to dbt v1.0.0, I recommend reviewing dbt-labs' upgrading to 1.0.0 docs for more details on what changes must be made.
- Upgrades the package dependency to refer to the latest `dbt_fivetran_utils`. The latest `dbt_fivetran_utils` package also has a dependency on `dbt_utils` [">=0.8.0", "<0.9.0"].
  - Please note, if you are installing a version of `dbt_utils` in your `packages.yml` that is not in the range above, then you will encounter a package dependency error.
- n/a
- Added logic in the `fivetran_log__connector_status` model to accommodate the way priority-first syncs are currently logged. Now, each connector's `connector_health` field can be one of the following values:
  - "broken"
  - "incomplete"
  - "connected"
  - "paused"
  - "initial sync in progress"
  - "priority first sync"

  Once your connector has completed its priority-first sync and begun syncing normally, it will be marked as `connected`.
- Added this changelog to capture iterations of the package!
- n/a