The current implementation of source row count anomaly detection errors when the latest count falls below the historical average by more than a configurable threshold (default 5%). It only handles cases where source row counts are lower than average.
The average can be badly skewed if 0 rows are loaded into a table that normally has millions of records, producing false positives on subsequent runs.
What's the best way to check for true anomalies on source row counts?
- outlier detection and removal in the test
- using most recent successful run's row count
- using a statistic other than the average, such as a mode: round each row count to the nearest thousand, take the most common value, and compare against that
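As a sketch of the first option, here is one hypothetical way the outlier-removal approach could work (this is an illustration, not the package's implementation): use the median and median absolute deviation (MAD) as a robust baseline, discard historical counts more than a few MADs away, then apply the percentage threshold. A single 0-row load cannot skew the median the way it skews the mean.

```python
from statistics import median

def is_row_count_anomaly(history, latest, threshold=0.05):
    """Hypothetical robust check: True if `latest` is more than
    `threshold` (fraction) below a median-based baseline."""
    baseline = median(history)
    # MAD = median of absolute deviations from the median.
    # Fall back to 1 so a constant history doesn't divide by zero.
    mad = median(abs(c - baseline) for c in history) or 1
    # Drop historical outliers (e.g. a bad 0-row load) before
    # computing the final baseline: keep counts within 3 MADs.
    cleaned = [c for c in history if abs(c - baseline) <= 3 * mad]
    baseline = median(cleaned)
    if baseline == 0:
        return latest != 0
    return (baseline - latest) / baseline > threshold
```

With a history like `[1_000_000, 1_010_000, 990_000, 0, 1_005_000]`, the 0-row run is discarded as an outlier, so a new 0-row load is still flagged while a normal-sized load is not.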
Should we also allow for higher than average row counts via a variable that's defaulted to off?
i.e.

```jinja
{% if var('dbt_observability:error_high_row_counts', False) %} ...
```
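To make the proposed behavior concrete, here is a small sketch of a two-sided check gated by such a flag (the function name and signature are hypothetical; the flag mirrors the proposed `dbt_observability:error_high_row_counts` var): the low side always errors, while the high side only errors when opted in.

```python
def row_count_error(avg, latest, threshold=0.05, error_high=False):
    """Hypothetical two-sided threshold check on row counts."""
    if avg == 0:
        return latest != 0
    deviation = (latest - avg) / avg
    if deviation < -threshold:
        return True   # lower than average: always an error
    if error_high and deviation > threshold:
        return True   # higher than average: only when opted in
    return False
```

Defaulting `error_high` to off preserves the current behavior for existing users while letting teams that care about unexpected spikes opt in.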