# gooddata-platform
s
Hi All 👋 I am encountering a weird issue with the LDM in the scenario explained below. Details:
• In workspace #1, I have an LDM and I added a dataset `mytable` to it, used as a reference/connection point.
• Now I am trying to add this `mytable` to another LDM in a different workspace #2.
• However, after importing I see that `mytable` is getting mapped to an ID named `mytable1` (which is probably GoodData's internal stored table name).
• My issue is that when I try to generate the output stage, GoodData forces me to change the reference name to `cp_mytable1` instead of `cp_mytable`.
1. Why is this happening and how can I avoid it?
2. As I am using different workspaces and different LDMs, why would I get `mytable1` mapped instead of `mytable`?
Note: Within my database I only have `mytable`. Can someone please review and suggest next steps?
m
Hi Shankar, just a few ideas here:
• Is there any chance there is already a dataset with the ID "mytable" in your workspace #2? It might even have a different name/title but this ID.
• As far as I understand, the system for generating IDs tries to avoid duplicates by appending a sequential number to the ID whenever the ID without it would be a duplicate.
• Under some circumstances a dataset might not be visible in the modeler but still exist. That would be the case if it existed normally before and someone made it "deprecated"/hidden with an API call.
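The duplicate-avoidance scheme described above could be sketched like this (an illustration only, assuming GoodData appends the lowest free sequential suffix; the actual internal logic may differ):

```python
def unique_dataset_id(requested_id: str, existing_ids: set) -> str:
    """Illustration of sequential-suffix ID generation: if the requested
    ID is taken, append 1, 2, ... until a free ID is found.
    Not GoodData's actual code -- just the scheme described in this thread."""
    if requested_id not in existing_ids:
        return requested_id
    n = 1
    while f"{requested_id}{n}" in existing_ids:
        n += 1
    return f"{requested_id}{n}"

# If "mytable" already exists (even hidden/deprecated), the import becomes
# "mytable1"; if "mytable1" also lingers, the next import becomes "mytable2".
print(unique_dataset_id("mytable", {"mytable"}))              # mytable1
print(unique_dataset_id("mytable", {"mytable", "mytable1"}))  # mytable2
```

This would also explain the later observation in the thread that re-importing the dataset after a removal yields `mytable2`: a hidden copy of each earlier ID may still exist.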
Also, if you are building the output stage in your own database (not the GoodData ADS data warehouse), you do not need to stick to the naming convention: in the dataset mapping view you can map the facts/attributes/references to whatever column in your table. So you could still map the `cp_mytable1` ID to a `cp_mytable` column if needed.
s
Hi Michal, initially I also thought the dataset might already exist in workspace #2, which would explain why 1 was suffixed to avoid a duplicate. But that theory was proven wrong when I renamed `mytable` to `myothertable` in the source DWH and imported it into workspace #2: its ID came out as `myothertable1`. I tried other names as well, and each time the 1 suffix was added, which was bizarre.
The naming convention is enforced during the data load process, which expects me to create the output stage table as `mytable1` and all references as `cp_mytable1` or `r__mytable1`. Renaming my DB objects to support the data load works, of course, but it limits my ability to reuse DB objects with the same name or the same relationships across workspaces, since I would have to create different output stage tables just to support the data load.
m
Hi Shankar, I was just reading through the issue you were facing, and I wanted to check if you need any further help here? 🙂
s
Hi Michael, thank you for following up. I went ahead and renamed the object/field names in the DWH to meet GoodData's output stage naming requirements in this other workspace #2. However, I am happy to collaborate on identifying the root cause of this behaviour: any new name I give that dataset `mytable`, such as `myothertable` or `mysampletable`, gets linked to an ID like `myothertable1` or `mysampletable2` when I add it to workspace #2.
Hi @Michael Ullock / @Michal Hauzírek -- I have noticed something and wanted to share. I tried removing the dataset `mytable` (which had internally assumed the ID `mytable1`), saved the LDM, then edited it again and imported the `mytable` dataset; this time it assumed the ID `mytable2`. This suggests the removal operation is not committed to GoodData ADS, and each time the same dataset is added to the workspace, it finds a previous copy and appends the next number to the ID. When I view the MAQL, it always performs a CREATE DATASET operation, never a CREATE OR REPLACE DATASET. Can you please review and suggest whether this is the cause of the problem and how to avoid it?
m
Hi Shankar, perhaps there is some discrepancy between your PDM and LDM causing this behaviour. Synchronization aligns the PDM (physical data model) with the LDM (logical data model). Can you try running a synchronization of the dataset to see if it fixes the issue? https://help.gooddata.com/doc/enterprise/en/data-integration/data-modeling-in-gooddata/data-modeling-and-maql/maql-ddl/
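For reference, the MAQL DDL statement in question is `SYNCHRONIZE`, per the help page linked above; the dataset identifier `dataset.mytable` below is a guess, so check the actual ID in your workspace. A small helper to build the statement (with `PRESERVE DATA`, which keeps data already loaded into the dataset):

```python
def synchronize_statement(dataset_id: str, preserve_data: bool = True) -> str:
    """Build a MAQL DDL SYNCHRONIZE statement for a dataset.
    SYNCHRONIZE re-aligns the physical data model (PDM) with the LDM;
    the optional PRESERVE DATA clause keeps already-loaded data.
    The dataset identifier format 'dataset.<id>' is an assumption here --
    verify the real identifier in your workspace's metadata."""
    stmt = f"SYNCHRONIZE {{dataset.{dataset_id}}}"
    if preserve_data:
        stmt += " PRESERVE DATA"
    return stmt + ";"

print(synchronize_statement("mytable"))
# SYNCHRONIZE {dataset.mytable} PRESERVE DATA;
```

The resulting statement can then be executed against the workspace (e.g. through the MAQL DDL execution interface described in the linked documentation).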