# gooddata-cloud
a
Hi all, we have a BadRequest error (HTTP code 400, "General error") when trying to scan a data source via the API. Trace ID:
ee61b3bd66bfc16b010316b6c8596e24
Can someone tell us what's going wrong based on the trace ID, please?
j
Hi Alexandre, could you please provide the body of the request as well as the headers you are sending? What exactly is the endpoint you are calling in your tool?
Is it vasco.cloud.gooddata.com/api/v1/actions/dataSources/gvs02ndjme7ivobsq3l4u014/scan? And does the user have access to the data source as well?
a
Yes, the token used to scan has access to the data source (it's an admin token). Yes, this is the endpoint involved. The body params we send:
```json
{
  "scanTables": true,
  "scanViews": true,
  "separator": "__"
}
```
The headers:
```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer ****"
}
```
Sometimes we get HTTP code 200, sometimes we get a 400 error.
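For reference, the scan call described above can be sketched with Python's standard library (instead of axios). The host, data source ID, and token are placeholders taken from this thread; this only builds the request object and makes no network call.

```python
import json
import urllib.request

def build_scan_request(host, data_source_id, token):
    """Build the POST request for the scan action (no network call here)."""
    body = {
        "scanTables": True,
        "scanViews": True,
        "separator": "__",
    }
    return urllib.request.Request(
        url=f"https://{host}/api/v1/actions/dataSources/{data_source_id}/scan",
        data=json.dumps(body).encode("utf-8"),  # valid JSON, no trailing comma
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_scan_request("vasco.cloud.gooddata.com",
                         "gvs02ndjme7ivobsq3l4u014", "****")
print(req.get_method(), req.full_url)
```

Passing it to `urllib.request.urlopen(req)` would perform the actual call.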
j
If you get a 200 sometimes and a 400 other times, it seems like it could be related to the authentication token. Is your token still valid when making the call?
a
Yes the token is still valid. I can query any path in the API.
I can provide several trace ids if you need
The error started to appear on Saturday, May 31 at 7 am UTC-04:00 (America/Toronto).
@Joseph Heun We really need your help here, please, at least to know what's going on.
j
The errors from the logs are pretty generic themselves. Could you send another trace ID and I can try to compare the entries?
```
"msg": "Error while reading data source metadata",
"action": "scanModelGetTables",
"dataSourceId": "gvs02ndjme7ivobsq3l4u014",
"exc": "errorType=com.gooddata.tiger.grpc.error.GrpcPropagatedClientException, message=General error
```
Are you making changes to the LDM or any other API calls? What exactly is the workflow when you get a 200 vs. a 400?
a
We first ask GoodData to scan the data source to create the physical data model. We apply modifications to the PDM: we translate some labels. Then we update the logical data model. The error happens while scanning the data source. Our implementation is kept in sync with the structure of our data product: whenever our data product changes its structure, we re-scan to update the logical data model.
Here is another trace ID:
18dc50700dec3a91ce3e944a63494c93
```
_header: 'POST /api/v1/actions/dataSources/rhuadm5dymyh9avy6s9cgbr4/scan HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'Content-Type: application/json\r\n' +
        'Authorization: Bearer [HIDDEN FOR DEBUG]=\r\n' +
        'User-Agent: axios/1.7.7\r\n' +
        'Content-Length: 53\r\n' +
        'Accept-Encoding: gzip, compress, deflate, br\r\n' +
        'x-datadog-trace-id: 4835643465372204768\r\n' +
        'x-datadog-parent-id: 4835643465372204768\r\n' +
        'x-datadog-sampling-priority: 2\r\n' +
        'x-datadog-tags: _dd.p.tid=683dec4900000000,_dd.p.dm=-3\r\n' +
        'Host: vasco.cloud.gooddata.com\r\n' +
        'Connection: keep-alive\r\n' +
        '\r\n',
```
I can reproduce the error from the GoodData UI as well. Trace ID:
574498be8747c1c67d85fc55ea761704
d
Hi Alexandre. Joe asked me to dig deeper into the logs. I can see BigQuery throwing an InterruptedException. I can't see whether the interrupt came from GD or from within BQ. A timeout would make sense: the calls take well over a minute. Does that make sense? Do you have a complex PDM? However, I'm not 100% sure the reason is a timeout; there are calls to the endpoint that take over 100 s and still end up successful. This is just the first investigation. I'll keep digging, but if this gives you some idea, please share.
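Since some scan calls succeed while others fail intermittently, a client-side retry with exponential backoff may work around transient interrupts while the root cause is investigated. A minimal sketch; `TransientScanError` is a hypothetical stand-in for whatever exception your client raises on the intermittent 400 "General error" responses:

```python
import time

class TransientScanError(Exception):
    """Placeholder for the intermittent 400 'General error' responses."""

def call_with_retry(call, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff on transient failures."""
    for attempt in range(attempts):
        try:
            return call()
        except TransientScanError:
            if attempt == attempts - 1:
                raise  # exhausted all attempts; surface the error
            sleep(base_delay * 2 ** attempt)  # 1 s, 2 s, 4 s, ...

# Simulated usage: fail twice, then succeed on the third attempt.
attempts_log = []
def flaky_scan():
    attempts_log.append(1)
    if len(attempts_log) < 3:
        raise TransientScanError()
    return 200

result = call_with_retry(flaky_scan, attempts=3, sleep=lambda s: None)
```

This does not fix the underlying BigQuery interrupt, but it can keep the sync pipeline running while the timeout hypothesis is checked.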