# gooddata-platform
Hi everyone, I have one more question about configuring LCM by API. I believe @Cam and I are very close to our LCM proof of concept! The issue we were facing was that when running the load data process, we received this error:
```
[ERROR]: Data distribution worker failed. Reason: The Output Stage has the x__client_id column, but no Client Identifier was provided for the current workspace.
```
This was strange, as we had specified client IDs using the API: here. I was finally able to fix this issue by going to the Data Integration Console, "Re-deploying", and entering a "Client ID" in the re-deploy modal (see attached screenshot). My question is: how is assigning a client ID in this modal different from what we were doing in the API? And how can I accomplish this using the API?
Hi Daniel, let me try to explain. There are two ways data distribution can work in LCM:
1. You create a data loading process in each client workspace separately and assign it a client ID. This tells the data loading process which rows in the source data it should load into the workspace where it's running (it's not tied to the client ID assigned in the LCM segment).
2. You load the whole segment at once. We usually use a service workspace (an empty workspace with a single purpose: to run all ETL and LCM processes). You create an Automated Data Distribution process and select the Segment (LCM) option. In this case, all client workspaces in the segment are loaded with data at the same time, and the rows that go to each client are determined by the x__client_id column in your source data, which needs to match the client ID assigned in the LCM segment.
As for how to do it via API: option 1 is probably not possible, while option 2 somewhat is. But since deploying a process is itself a one-time thing, you can do it in the UI and manage only the schedules and their executions via the API (that's the use case we have documented).
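For reference, here is a minimal sketch of triggering a schedule run over the platform's REST API, which is the documented part of the option 2 workflow. The host, project ID, schedule ID, and token values are placeholders, and the bearer-token header is an assumption (your domain may use cookie/SST authentication instead); verify the exact auth flow against your API docs.

```python
# Hypothetical sketch: run an existing ADD schedule once via the GoodData
# platform REST API. All IDs and the auth scheme below are placeholders.
import json
import urllib.request


def schedule_execution_request(host, project_id, schedule_id, token):
    """Build a POST request against the schedule executions resource."""
    url = f"{host}/gdc/projects/{project_id}/schedules/{schedule_id}/executions"
    body = json.dumps({"execution": {}}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    # Assumed auth header; swap for your domain's authentication mechanism.
    req.add_header("Authorization", f"Bearer {token}")
    return req


# Example (not executed here): send the request and check the response status.
# req = schedule_execution_request("https://secure.gooddata.com",
#                                  "<project_id>", "<schedule_id>", token)
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```

Polling the returned execution resource until it finishes would follow the same pattern with GET requests.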
Thank you @Boris! #2 works great for us