# gooddata-platform
s
Hi All 👋 , I was trying to execute an ADD load using the API route (https://my_domain.on.gooddata.com/gdc/projects/{project_id}/schedules/{schedule_id}/executions) and observed an error (attached below) in the API response (return code: 409). When I then logged into the GoodData scheduler workspace, I saw that the ADD load was still in RUNNING status and there were no errors in the logs (screenshot attached). I re-triggered the execution endpoint to get more detail on the error from the API side, but I did not see any further errors; instead, the response indicated that the ADD load is still RUNNING, which matches what I see in the workspace. Can you please suggest what could lead to such an error?
Model: TEST_MODEL Status: 409:

```json
{
  "error": {
    "errorClass": "com.gooddata.scheduler.exception.UnfinishedLastExecutionException",
    "trace": "",
    "message": "Schedule '%s' has unfinished execution '%s' with status '%s'.",
    "component": "MSF",
    "errorId": "ca42606f-5b3b-4448-b704-7055f6e387da",
    "errorCode": "gdc.scheduler.UnfinishedLastExecution",
    "parameters": [
      "/gdc/projects/<my work space ID>/schedules/<my schedule ID>",
      "/gdc/projects/<my work space ID>/schedules/<my schedule ID>/executions/<my execution ID>",
      "RUNNING"
    ]
  }
}
```
j
The error indicates that an unfinished execution is blocking this one from starting. This is generally caused by loading more data than expected. If the process is too large, you will need to ensure that the data fits the platform limits.
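When an orchestration tool triggers these executions, it can help to treat this particular 409 as a retryable "busy" signal rather than a hard failure. A minimal sketch, assuming the 409 body looks like the error payload in this thread (the helper name and return values are illustrative, not part of the GoodData API):

```python
import json

def classify_execution_response(status_code: int, body: str) -> str:
    """Classify the response from POST .../schedules/{schedule_id}/executions.

    Returns one of:
      "started" - the execution was accepted
      "busy"    - a previous execution is still RUNNING (HTTP 409 with
                  errorCode gdc.scheduler.UnfinishedLastExecution); retry later
      "error"   - anything else; inspect the body
    """
    if status_code in (200, 201):
        return "started"
    if status_code == 409:
        try:
            err = json.loads(body).get("error", {})
        except json.JSONDecodeError:
            return "error"
        if err.get("errorCode") == "gdc.scheduler.UnfinishedLastExecution":
            return "busy"
    return "error"
```

The caller would then sleep and re-POST on `"busy"` instead of surfacing a failure to the pipeline.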
s
Hi Joseph, I just saw that the ADD load completed, taking 1 hr 28 min, which still falls below the limits described in the shared link. Is there a way we can see the root cause? The data loaded is also within the limits; I don't see much difference from the previous load.
Hi @Joseph Heun Just wanted to circle back and confirm that I did not face that failure in the latest run (last night). This was also API-invoked via an orchestration tool, and it did not fail with that 409 error. I did some research and found that 409 indicates a conflict with the current state of the resource, which here would be the previous execution still RUNNING, so even a single invocation can hit it if the prior run has not finished. Anyway, I will keep a close watch, and if I see it repeating I will share the details here.
j
Thanks for following up. If there is still a process running, then the succeeding process won't be able to start. Based on our logs, this seems to be the case.
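Given that behavior, the calling side can simply wait and re-try the executions POST until the previous run finishes. A small backoff helper along these lines keeps the retries bounded (the intervals here are assumptions for illustration, not platform guidance):

```python
def backoff_delays(initial_s: float = 30.0, factor: float = 2.0,
                   max_s: float = 600.0, attempts: int = 6):
    """Yield a capped exponential sequence of wait times (seconds)
    to sleep between successive re-tries of the executions POST."""
    delay = initial_s
    for _ in range(attempts):
        yield min(delay, max_s)
        delay *= factor
```

A loop would sleep for each yielded delay, re-POST the execution, and stop as soon as the request is accepted or the attempts run out.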