# gooddata-platform
j
Hi GD Team, I would like to bring up an issue we've encountered with our current implementation of FlexConnect as a data source. We have a pre-processing step that involves executing an aggregation pipeline query and then reading all records from the output multiple times to compute additional derived fields. The result is then written to another collection before the data is queried and returned to GD. As the original collection has grown in size, the process now seems to exceed the timeout limit, causing the task to be cancelled before completion and resulting in no data being returned. Would the team be able to advise if there’s a way to work around or resolve this? Any assistance would be greatly appreciated! cc @Alson Yap
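(For illustration, a minimal sketch of the pre-processing flow described above, assuming MongoDB accessed via pymongo; all collection and field names below are hypothetical.)
Copy code
# Sketch of the described pre-processing step: run an aggregation pipeline,
# read the output several times to add derived fields, then write the result
# to another collection that FlexConnect later serves to GoodData.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["analytics"]

# Step 1: run the aggregation pipeline and materialize its output.
pipeline = [
    {"$match": {"active": True}},
    {"$group": {"_id": "$org_unit", "headcount": {"$sum": 1}}},
]
records = list(db["employees"].aggregate(pipeline))

# Step 2: read the output multiple times to compute derived fields.
total = sum(r["headcount"] for r in records)          # pass 1
max_headcount = max(r["headcount"] for r in records)  # pass 2
for r in records:                                     # pass 3
    r["headcount_share"] = r["headcount"] / total
    r["is_largest"] = r["headcount"] == max_headcount

# Step 3: write the enriched records to the output collection.
db["org_structure_enriched"].delete_many({})
db["org_structure_enriched"].insert_many(records)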
i
Hello Jennifer, Thank you for reaching out to us. First of all, could you share a trace id from the errors you get, so we can query it in our logs and investigate further? My initial assumption here is that reading all records from the output multiple times is causing the timeout, but we can look into it later. Thank you.
j
Hi @Ismail Karafakioglu, thanks for your reply. One traceId I got was 749ffd368887154f25fa81338035b555.
i
Hello Jennifer, Thank you for providing the trace id. I have escalated this ticket to my L2 colleagues and one of them will respond here. Thank you for your patience.
d
Hello Jennifer! This is Daniela with L2. Can I please ask if you have any logs coming from your FlexConnect server?
j
Not much information to be seen, just that the task got cancelled
d
Thank you! I’ll check on this and come back here as soon as I have any findings.
🙏 1
Hello! We have implemented a fix that should provide you with a better error message, indicating what the issue is with the query used. Can you please try again and let me know what you see? Thanks!
j
Hi @Daniela Salmeron, this is the response
d
Hello, Thanks for your reply. I can see the following error on my side:
Copy code
Loading data for table 'OrgStructure' timed out.
This should have been reflected on your side with the same error message; sorry about that, we’ll fix it by next week. Regardless, with this error message I can confirm that the root cause is indeed that your FlexConnect query is too slow, and the best option would be to optimize it. We are also planning a second fix for the end of next week to give the query more time, so it can avoid the timeout.
j
Hi @Daniela Salmeron, thank you very much! Would greatly appreciate an update once these fixes are in place and we will look into optimising the query on our end as well.
d
Hello, Sure thing! As soon as the fix is in production, I will let you know.
🙏 1
Hello! We have increased the timeout. The fix is now in production. Can you please try again and let me know if this helps your query? Thanks!
Hello @Jennifer Chue, may I know if you have had time to test? Thanks!
j
Hi @Daniela Salmeron, apologies for the delayed response. This is the response now.
d
Thanks for sending the error. I’ll check on my side, my apologies. In the meantime, may I know if you have optimized the query on your side?
j
Hi @Daniela Salmeron, thanks for following up. The query optimisation hasn't been implemented yet, but it is planned for future improvements to our FC. I can see that the error reason has changed to inform us of the query timeout; I just wanted to check whether this is the intended message after the update?
d
We have extended the timeout; however, the query is still failing after almost 3 minutes. Therefore, to avoid the timeout, I would highly recommend optimizing the query on your side.
👍 1
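(For illustration, a sketch of one way such a pre-processing query could be optimized, assuming MongoDB 5.0+ via pymongo; collection and field names are hypothetical. The idea is to compute the derived fields inside the aggregation pipeline and write the output with $merge, so the data is processed in a single server-side pass instead of being read back multiple times.)
Copy code
# Sketch: push the derived-field computation into the pipeline itself.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["analytics"]

pipeline = [
    {"$match": {"active": True}},
    {"$group": {"_id": "$org_unit", "headcount": {"$sum": 1}}},
    # Derive totals/maxima over the whole result set in the same pass.
    {"$setWindowFields": {
        "output": {
            "total": {
                "$sum": "$headcount",
                "window": {"documents": ["unbounded", "unbounded"]},
            },
            "max_headcount": {
                "$max": "$headcount",
                "window": {"documents": ["unbounded", "unbounded"]},
            },
        }
    }},
    {"$addFields": {
        "headcount_share": {"$divide": ["$headcount", "$total"]},
        "is_largest": {"$eq": ["$headcount", "$max_headcount"]},
    }},
    {"$project": {"total": 0, "max_headcount": 0}},
    # Write straight into the output collection read by FlexConnect.
    {"$merge": {"into": "org_structure_enriched", "whenMatched": "replace"}},
]
db["employees"].aggregate(pipeline)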
About the logging error, that is the message on our end as it reflects the logging on your end. The logging is under your control; you can add it in the FlexConnect implementation. For example, this is how to log the incoming requests:
Copy code
_LOGGER.info(
    "report_execution",
    report_execution_context=execution_context.report_execution_request,
)
It will produce something like the following example:
Copy code
2025-08-29T13:11:00.492255Z [info     ] execution_context              [sample_flexconnect_function] execution_context=ExecutionContext(execution_type=<ExecutionType.REPORT: 'REPORT'>, organization_id='default', workspace_id='8c269ce1792242ebb795d9b2c0f49ac4', user_id='demo', timestamp='2025-08-29T13:11:00+00:00', timezone='Etc/UTC', week_start='SUNDAY', attributes=[ExecutionContextAttribute(attribute_identifier='SampleFlexConnectFunction.attribute1', attribute_title='Attribute1', label_identifier='SampleFlexConnectFunction.attribute1', label_title='Attribute1', date_granularity=None, sorting=None)], filters=[], report_execution_request=ReportExecutionRequest(attributes=[compute_model.Attribute(local_id='a_SampleFlexConnectFunction.attribute1', label='label/SampleFlexConnectFunction.attribute1', show_all_values='False')], metrics=[compute_model.SimpleMetric(item='fact/SampleFlexConnectFunction.fact1', aggregation='MEDIAN', compute_ratio='False', filters='[]')], filters=[]), label_elements_execution_request=None) fun=SampleFlexConnectFunction peer=ipv4:127.0.0.1:64005 task_id=105c7ac75e09433899a7ba273e5aa946
This shows the attributes, metrics, etc. You can then use the task_id to correlate the inputs with the errors.
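(For reference, a minimal sketch of how such a logger could be set up, assuming the structlog library, which matches the log format shown above; the function name and the way execution_context is obtained depend on your FlexConnect implementation.)
Copy code
# Minimal sketch of a structlog-based logger for a FlexConnect function.
import structlog

_LOGGER = structlog.get_logger("sample_flexconnect_function")

def log_incoming_request(execution_context):
    # Logs the report execution request so its task_id can later be
    # correlated with server-side errors.
    _LOGGER.info(
        "report_execution",
        report_execution_context=execution_context.report_execution_request,
    )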