# gooddata-cloud
Hi all, I’m getting this error for any column I try to add to an insight. Do you know what could be causing this?
```json
{
  "title": "Bad Request",
  "status": 400,
  "detail": "A result cache error has occurred during the calculation of the result",
  "resultId": "bcd191c43fb797451bed42c1a68bc07b3f6bba2d",
  "reason": "The table path must be non-empty.",
  "traceId": "b44bc817cca26a13b5a6c885cc599b61"
}
```
Hi Bruno, I have not come across this error myself, but it seems to indicate an issue with the configuration or structure of your insight. Can you please check that the column paths and identifiers are correct and that they exist in the dataset? Have there been any recent data model or schema changes in your project? Renaming or removing tables or columns could break the insight, so I would suggest verifying that your data model is in a consistent state. Lastly, I would recommend reviewing any filters or metrics applied within the insight; try removing them one by one to see whether one of them is causing the conflict. Hope this helps 🤞
Check the mapping of the attribute to the table column in the dataset details in the LDM modeler (data section). Is the attribute mapped to an existing table column?
I've just hit this same error after using the Python SDK to transfer insight configurations from one GoodData instance to another.
I managed to solve it by going into the LDM, adjusting something, and forcing a save on the model, which seems to force a recache of the request data; after that, all my insights and dashboards worked as expected. It would be good if this could be triggered via the API when doing a transfer using the declarative API via the Python SDK.
Hi @Brunno Araujo, the error is caused by a missing key (the table path) in the LDM definition of the dataset. Starting 11.09.2023, LDM models require this key to compute reports. Based on the logs on our side, you are using Python SDK version 1.4.0 to maintain workspace definitions. Unfortunately, support for the above-mentioned key was only introduced in version 1.5.0 of the Python SDK, so with 1.4.0 the SDK will not store the key in your LDM definition; it disappears, and as a result, report computation fails. When you upgrade to 1.5.0 or newer, the key will be handled correctly. Be aware that your current LDM models do not have it, so you need to add it back first to the LDM definition, e.g. via the Python SDK or in the LDM modeler. The key is a list of strings pointing to the table in the database. For example,
["schema_name", "table1"]
says that the dataset is backed by the table table1 in the schema schema_name of the given data source. cc: @Cam Findlay
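To spot which datasets are affected, one way is to scan an exported declarative LDM for missing or empty table paths. Below is a minimal sketch operating on the exported model as a plain dict; the nested key names (`ldm`, `datasets`, `dataSourceTableId`, `path`) and the sample dataset ids are assumptions for illustration, so adjust them to match your actual export:

```python
# Sketch: find datasets in an exported declarative LDM (parsed into a dict)
# whose table path is missing or empty. Key names below are assumptions
# based on the declarative LDM layout; adapt them to your export.

def datasets_missing_table_path(ldm: dict) -> list:
    missing = []
    for dataset in ldm.get("ldm", {}).get("datasets", []):
        table_id = dataset.get("dataSourceTableId") or {}
        # Flag the dataset if the path key is absent, None, or an empty list.
        if not table_id.get("path"):
            missing.append(dataset.get("id", "<no id>"))
    return missing

# Example with one healthy dataset and one missing its path:
sample = {
    "ldm": {
        "datasets": [
            {"id": "orders",
             "dataSourceTableId": {"dataSourceId": "pg", "id": "orders",
                                   "path": ["schema_name", "table1"]}},
            {"id": "customers",
             "dataSourceTableId": {"dataSourceId": "pg", "id": "customers"}},
        ]
    }
}
print(datasets_missing_table_path(sample))  # -> ['customers']
```

Running this against each workspace before re-importing makes it easy to see which models still need the key set.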
Thanks @Pavel Cerny, so an upgrade to 1.5.0+ and setting the path is the fix going forward?
Yes. If your workspace LDM dataset definitions are already missing the key, you need to set it back, and you also need to upgrade the Python SDK to at least version 1.5.0 to make sure it will not be removed again by SDK operations.
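If many datasets lost the key, setting it back can be scripted on the exported declarative LDM dict before re-uploading it with SDK 1.5.0+. A sketch follows; the nested key names and the dataset-id-to-path mapping are illustrative assumptions to adapt to your own model:

```python
# Sketch: restore the table path on datasets of an exported declarative LDM
# dict before re-uploading it (e.g. with gooddata-sdk >= 1.5.0).
# Key names and the id -> [schema, table] mapping are assumptions.

TABLE_PATHS = {
    "orders": ["schema_name", "table1"],  # dataset id -> [schema, table]
}

def set_table_paths(ldm: dict, table_paths: dict) -> dict:
    for dataset in ldm.get("ldm", {}).get("datasets", []):
        path = table_paths.get(dataset.get("id"))
        if path:
            # Create the table-id object if absent, then set the path key.
            dataset.setdefault("dataSourceTableId", {})["path"] = path
    return ldm

# Example: a dataset whose path was stripped by the older SDK.
ldm = {"ldm": {"datasets": [{"id": "orders",
                             "dataSourceTableId": {"dataSourceId": "pg",
                                                   "id": "orders"}}]}}
fixed = set_table_paths(ldm, TABLE_PATHS)
print(fixed["ldm"]["datasets"][0]["dataSourceTableId"]["path"])
# -> ['schema_name', 'table1']
```

After patching the dict, re-upload it with the upgraded SDK so the key is preserved from then on.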
Thank you, guys! @Pavel Cerny @Cam Findlay I’m using the SDK to migrate workspaces and environments; I updated the SDK to 1.7 and it worked like a charm.