# gooddata-platform
Hi everyone 👋 We’re currently validating the implementation of custom fields. In our model, we have generic fields (e.g. `custom_field_01`) that are populated based on a selection made by the customer, meaning the same field can represent different information over time. It’s also possible for the customer to change this selection, for example switching from field A to field B. While testing this scenario, we noticed that in filters GoodData does not overwrite the previous data but appends to it. Example:
• Selected the field Car Brand → value shown: FIAT
• Then changed the selection to Year of Manufacture → value: 2015
➡️ Instead of replacing FIAT with 2015, GoodData kept both: the old value remained, and the new one was added as a second filter option.
When running a full load, the behavior was as expected: old values were removed and only the new ones remained. Could you please confirm whether there is any configuration in GoodData that lets us force overwrite behavior instead of append during incremental updates? Thanks a lot! 🙏
cc: @Luis Felipe Mattos
Hi Adriane, incremental data loading appends or updates records based on the dataset’s connection point or fact-table grain, but it does not remove attribute values that no longer exist in the source data. This is why, when the meaning of a generic field changes (for example, from “Car Brand” to “Year of Manufacture”), older values such as “FIAT” remain visible in filters alongside new ones like “2015” after an incremental load. Only a full load will remove obsolete values, as you have observed. There is no configuration option in GoodData that forces an “overwrite” (automatic removal of old values) during incremental updates; incremental loads are designed to add or update data, not to delete data that has disappeared from the source. This is the expected behavior for incremental loading (see Incremental Data Loading).
Workarounds:
• Full load: Running a full load truncates the dataset and reloads only the current data, effectively removing outdated values.
• Delete during incremental load: If you prefer to keep using incremental loads but still need to remove obsolete data, you can use the `x__deleted` column. By setting it to true for specific records, you instruct GoodData to delete those records during the incremental load. This requires your ETL process to identify which records should be removed before loading (see Delete Old Data while Loading New Data to a Dataset via API). For more details, see the documentation on Delete Mode in ADD v2.
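To make the second workaround concrete, here is a minimal sketch of the ETL step that prepares an incremental load file, assuming a simple key/value model. The `record_id` column, the field name, and the helper function are illustrative placeholders, not part of your actual logical data model; only the `x__deleted` column name comes from the GoodData documentation. The idea is to diff the previous and current snapshots and flag records that disappeared from the source with `x__deleted=true`, so the incremental load removes them instead of leaving their attribute values behind in filters.

```python
import csv
import io


def build_incremental_csv(previous_values, current_values, field="custom_field_01"):
    """Build the CSV payload for an incremental load.

    previous_values / current_values: dicts mapping record_id -> field value.
    - Records present in the current snapshot are upserted (x__deleted=false).
    - Records that existed before but vanished from the source are emitted
      with x__deleted=true, so GoodData deletes them during the load.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["record_id", field, "x__deleted"], lineterminator="\n"
    )
    writer.writeheader()

    # Upsert everything that currently exists in the source.
    for record_id, value in current_values.items():
        writer.writerow({"record_id": record_id, field: value, "x__deleted": "false"})

    # Flag records that no longer exist so the load removes them.
    for record_id, value in previous_values.items():
        if record_id not in current_values:
            writer.writerow({"record_id": record_id, field: value, "x__deleted": "true"})

    return buf.getvalue()
```

In your scenario, the “previous” snapshot would hold the rows loaded while the field meant Car Brand, and the “current” one the rows after the customer switched to Year of Manufacture; the diff flags the stale Car Brand rows for deletion during the next incremental load.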