# gooddata-cn
Hello, at times we get this error when loading filter values in dashboards. Often just refreshing the dashboard is enough to load the values properly. So we are wondering whether this is caused by some aggressive timeout for retrieving the available values.
Hi Vaclav, sorry to hear about your troubles. There might be a timeout in play, especially if you are choosing a huge number of filter values. Could you share the error from the DevTools with us, if there’s anything visible, please?
We get this error on console for each such failed filter.
```
Tiger backend threw an error: {status: 400, detail: 'Query timeout occurred', traceId: '11540857ac63fbd0'}
react_devtools_backend_compact.js:2367 Error while loading initial elements page: UNKNOWN_ERROR
Inner error: Error: An unexpected error has occurred
```
Hello @Václav Slováček. This comes from the SQL executor timeout. I’m looking into what the default value is and how configurable it is.
Thank you. Does insight rendering influence this somehow? For example, if there is a demanding insight in a dashboard, are the filter values retrieved before, in parallel with, or after the insights? Or randomly?
Well, this depends directly on the performance of your data source. It would be a good idea to look into why the queries take so long there.
… is the default for labelElements. For reports, the default timeout is …
For sure. But independently of that, I would assume filters should get priority, as they should be fairly easy to retrieve. People can do a lot of crazy things in metrics and reports, and we cannot control for all of them.
Here’s the feedback I got from the CN team:
If you prolong the timeout for queries, you’ll probably face timeouts at the ingress controller level (180 s). So the right approach is to examine the generated SQL and improve overall query performance (add proper DB indices, rewrite the MAQL, etc.)
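As a first step along those lines, one could capture the generated SQL, check its plan against the data source, and add an index on the column backing the slow filter. A minimal sketch using `psql`; the table and column names (`customer_dim`, `region`) and the `$DATASOURCE_URL` variable are purely illustrative assumptions, not anything from the GoodData setup itself:

```shell
# Illustrative only: inspect the plan of a labelElements-style query,
# then index the filtered column so DISTINCT scans stop timing out.
psql "$DATASOURCE_URL" <<'SQL'
EXPLAIN ANALYZE SELECT DISTINCT region FROM customer_dim;
CREATE INDEX IF NOT EXISTS idx_customer_dim_region ON customer_dim (region);
SQL
```

Re-running the `EXPLAIN ANALYZE` afterwards should show whether the planner now uses the index instead of a full scan.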
I am not sure if there’s any sort of priority for the filters; I don’t think so offhand. It could be that some of your dashboards/reports/metrics are pushing the data source hard at certain points, and that’s when the filter query fails.
I understand. I would just pass on the feedback that filter values should get priority (and ideally never be pushed out of the cache by insight data). They are often quite short lists (max low thousands). It just makes the solution look super unreliable, failing seemingly at random. We have multiple tenants and we do not control what tenants do. While I understand that complex MAQL in an insight can cause that insight to be slow or time out (so the message is shown for the insight), the collateral damage to filters is really bad, as it is very counterintuitive that an insight on the dashboard can cause it.
I understand your point. I’ll pass that feedback onwards and I’ll let you know if I get any in return.
It is true that a CN deployment is not able to prioritize labelElements queries over report queries right now. I can imagine organization configurations for which that would be very useful. The question is whether you could mitigate the issue immediately by reconfiguring the sql-executor microservice. Each sql-executor pod is able to execute 64 parallel queries by default. There is also a limit on the number of parallel queries executed against one data source; it is 6 by default. So if you have 2 pods, you can run up to 12 parallel queries against one data source. I guess the data source limit could be the issue here. If you are sure your DB is able to handle e.g. 100 connections from the CN deployment, and you have pointed one data source at this DB instance, you can set the data source limit to e.g. 45. Consider tweaking the following options using environment variables:
• DS_POOL_MAX_CONNECTIONS - max number of connections created for one data source, default 6
• SPRING_DATASOURCE_HIKARI_MAXIMUM_POOL_SIZE - if you are using caching on the data source level, set this variable to the same number as DS_POOL_MAX_CONNECTIONS. Minimum is 4.
• SQL_EXECUTOR_LABEL_ELEMENTS_TIMEOUT_MS - configures the labelElements timeout, default is 10 seconds
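As a sketch, the environment variables above could be applied to a running sql-executor with `kubectl set env`. The deployment name, namespace, and values here are assumptions for illustration, not defaults from any chart; in a Helm-managed install, setting the same values in your release values and upgrading would be the more durable route:

```shell
# Assumed deployment name and namespace; adjust to match your installation.
kubectl -n gooddata-cn set env deployment/gooddata-cn-sql-executor \
  DS_POOL_MAX_CONNECTIONS=45 \
  SPRING_DATASOURCE_HIKARI_MAXIMUM_POOL_SIZE=45 \
  SQL_EXECUTOR_LABEL_ELEMENTS_TIMEOUT_MS=30000

# Wait for the pods to restart with the new environment.
kubectl -n gooddata-cn rollout status deployment/gooddata-cn-sql-executor
```

Note the earlier caveat still applies: raising the timeout past the ingress controller's 180 s limit just moves the failure elsewhere.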