# gooddata-cloud
d
Hi, I'm trying to write a filter to append to my analyticalDashboards call to return only the dashboards where areRelationsValid is false. I have the header X-GDC-VALIDATE-RELATIONS set to true. Is that possible? E.g.
api/v1/entities/workspaces/:workspaceId/analyticalDashboards?origin=ALL&page=0&size=20&filter=analyticalDashboards.attributes.areRelationsValid%3D%3Dtrue
returns the following error:
{
  "detail": "Wrong RSQL: Field 'attributes' not found in 'class com.gooddata.tiger.metadata.analytics.QAnalyticalDashboard'",
  "status": 400,
  "title": "Bad Request",
  "traceId": "1c18fc452c1089cd"
}
j
Unfortunately, calculated attributes like areRelationsValid cannot be used as filters. Currently the only solution is to collect all the dashboards and filter them in your application. Question: at least temporarily, would you appreciate a new function in the GoodData Python SDK providing such functionality?
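For reference, a minimal sketch of this collect-and-filter approach in Python (plain requests, no SDK) could look like the snippet below. The host, token, and workspace id are placeholders, and the exact place where areRelationsValid appears in the entity payload is an assumption to verify against your own responses:

```python
# Sketch: collect all analyticalDashboards and filter out the invalid ones
# client-side, since areRelationsValid cannot be used in an RSQL filter.
# HOST, TOKEN, and WORKSPACE_ID are placeholders.
import requests

HOST = "https://your-instance.cloud.gooddata.com"
TOKEN = "your-api-token"
WORKSPACE_ID = "your_workspace_id"

url = f"{HOST}/api/v1/entities/workspaces/{WORKSPACE_ID}/analyticalDashboards"
headers = {
    "Authorization": f"Bearer {TOKEN}",
    # ask the backend to evaluate areRelationsValid for every entity
    "X-GDC-VALIDATE-RELATIONS": "true",
}

def relations_valid(entity: dict) -> bool:
    # Where areRelationsValid lives in the payload is an assumption here;
    # check both "attributes" and "meta" and treat a missing flag as valid.
    value = entity.get("attributes", {}).get("areRelationsValid")
    if value is None:
        value = entity.get("meta", {}).get("areRelationsValid")
    return value is not False

invalid = []
page = 0
while True:
    resp = requests.get(
        url,
        headers=headers,
        params={"origin": "ALL", "page": page, "size": 100},
    )
    resp.raise_for_status()
    entities = resp.json().get("data", [])
    if not entities:
        break
    invalid.extend(e for e in entities if not relations_valid(e))
    page += 1

for e in invalid:
    print(e["id"], e.get("attributes", {}).get("title"))
```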
d
Hi Jan, well, that's a shame. Our use case is that we want to check whether various things like analytical dashboards, visualization objects, metrics, and content filters are valid before we port them between workspaces and/or environments. Telling users which ones need to be fixed is what I am working on; in this use case I don't really need to know which ones are correct, so I'll collect them all and filter them as you suggested. We're not using the Python SDK, so it wouldn't help, sadly.
j
Thanks for this feedback! I will add it to the SDK anyway and incorporate it into our blueprint. Btw, the blueprint currently validates workspaces by executing all insights. I would recommend you do the same 😉
I will also create an internal Jira for a new API serving your case
d
"Btw the blueprint currently validates workspaces by executing all insights."...what do you mean? Invoke a specific endpoint?
j
Well, this is very easy to do with the SDK but very complicated to do with the raw APIs.
s
What call in the SDK does that?
And to be clear, the validation is challenging because it requires validating the analytics model entities, which you can't do with a single API call to the analytics model. You have to make calls to the individual components.
Right?
j
Here we are talking about two different use cases:
1. Validate analytics definitions (dashboards, insights, metrics)
2. Validate that all insights are executable
Ad 1) Yes, you have to call multiple APIs, one per object type, and find the invalid objects.
Ad 2) Here is an example of how to test insights with their executions: https://github.com/jaceksan/dbt-gooddata/blob/main/dbt_gooddata/dbt_gooddata.py#L108
I think you should do both. Even if all objects are valid, some insights may still be failing, e.g. because someone dropped an underlying column in the database.
What I can offer is:
• A new Python SDK function calling all the APIs and returning only the list of invalid objects, or even just raising an exception if there is at least one invalid object.
• A new backend API doing the same
The first is much easier for me to deliver.
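For reference, the linked example roughly boils down to the sketch below: list all insights in a workspace and try to execute each one with the GoodData Python SDK, collecting the failures. The host, token, and workspace id are placeholders, and the SDK method names (GoodDataSdk.create, insights.get_insights, tables.for_insight) are assumptions to verify against the SDK version you install:

```python
# Sketch of use case 2: try to execute every insight in a workspace and
# report the ones that fail (assumes the gooddata-sdk package; method names
# should be double-checked against your installed SDK version).
from gooddata_sdk import GoodDataSdk

HOST = "https://your-instance.cloud.gooddata.com"  # placeholder
TOKEN = "your-api-token"                           # placeholder
WORKSPACE_ID = "your_workspace_id"                 # placeholder

sdk = GoodDataSdk.create(HOST, TOKEN)

failed = []
for insight in sdk.insights.get_insights(WORKSPACE_ID):
    try:
        # Executing the insight catches runtime problems (e.g. a dropped
        # database column) that static relation checks cannot see.
        sdk.tables.for_insight(WORKSPACE_ID, insight)
    except Exception as exc:
        failed.append((insight.id, str(exc)))

for insight_id, error in failed:
    print(f"Insight {insight_id} failed: {error}")
```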
s
I would much appreciate the second one, because we're not easily able to support Python in our infrastructure. However, the first one could help us triage while we're locally verifying our operations within GoodData.
p
🎉 New note created.
🎉 New note created.
j
I created corresponding product requests.
✅ 1