# gooddata-platform
k
This seems like confusing UI or a bug: I added a new dataset and let the system set default mappings. But there is a red X in the upper left with an error that some dataset fields are not mapped -- why would DG not map all fields when dragging a new dataset from a data source? Beyond that, the error message says to fix the problem by going to More... -> View Details -> Data Mapping. But such a menu path does not exist (see 2nd screenshot). There is a Details button at the bottom, but when I click on it, nothing happens.
m
This would depend on the fields themselves. Did you create them manually or edit them in any way? If so, it will be necessary to define the mapping manually. This can only be done with the Logical Data Model in edit mode.
I recommend checking the following resource for further guidance: https://university.gooddata.com/tutorials/data-modeling/
One last thing, please post any questions related to GoodData Cloud to our dedicated channel #C04S1MSLEAW. This way we can ensure they will reach the relevant audience. Thanks.
k
Ok, it turns out it was a bug. Of course I had it in edit mode, and I had not added the fields manually -- as mentioned in my post I had just added the dataset and let the system set default mappings. Despite trying different things yesterday the Red X problem would not go away. Now today when I first go into the LDM it is gone, so it appears to have been a transient error of some kind. Finally, I will switch to the other channel -- I'm not familiar with GoodData's various components and these channels have no descriptions...
Actually, I take it back. The Red X bug is gone, but now there's a new bug, apparently worse. It seems that this dataset, which was mapped from table policy_modification in schema 'plus' has now magically been remapped to table policy_modification in schema 'dm'. The latter table had timestamps stored as bigint, and the former was a view to convert them to timestamptz (all Postgres types). While the original dragging of plus.policy_modification into the LDM worked and mapped correctly (except for the Red X problem), now when I look at this dataset, in the Data mapping tab, the Source Column for each shows "That table is not mapped". Now if it were having some kind of trouble talking to plus.policy_modification you would expect an error along those lines, but instead, it has now become mapped to dm.policy_modification instead. If I click on Map dataset and change the Data Source back to the correct schema, then I'm back to the Red X bug. This is really very messy, not sure if I'm going to be able to recommend GoodData at the end of this evaluation.
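(For context, a view like the one described above, exposing bigint epoch columns from the `dm` schema as `timestamptz` in the `plus` schema, might look roughly like this. This is only an illustrative sketch: the column list is abbreviated, and it assumes the source stores epoch milliseconds, which the thread does not confirm.)

```sql
-- Hypothetical sketch of the "plus" view over dm.policy_modification.
-- Column names ending in _tstz match the naming mentioned in the thread;
-- the millisecond-epoch assumption is illustrative, not confirmed.
CREATE VIEW plus.policy_modification AS
SELECT
    locator,
    name,
    policy_locator,
    -- to_timestamp() takes seconds; divide by 1000.0 if the source
    -- columns hold epoch milliseconds
    to_timestamp(effective_timestamp / 1000.0) AS effective_tstz,
    to_timestamp(issued_timestamp / 1000.0)    AS issued_tstz
FROM dm.policy_modification;
```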
m
You may need to define the column explicitly as a referenced column in the target dataset so the connection is made correctly when dragging and dropping the tables onto the canvas. This is achieved via the naming convention: https://www.gooddata.com/docs/cloud/model-data/prepare-your-data/#PrepareYourData-RecommendedNamingConventions Please note the entry for "Both Primary Key and Reference". Could you please DM me your organization details? I.e., a link to the workspace you are trying to access and the email you used to sign up for GoodData. I would like to take a closer look. Thanks!
k
It was my understanding from one of these posts that in order for a PG timestamp type to show up as a key in the dataset, it needs to happen at the time the table is dragged into the LDM. If I understand this correctly, then the dataset Policy modification in the attached screenshot must have come from the schema 'plus' since the timestamp field names ending in tstz are only in that version of the table (it's actually a view that modifies the underlying table in the dm schema). So if we agree that this dataset must have been originally created from plus.policy_modification, how did it get switched to use dm.policy_modification? I will DM you.
Screenshot 2025-03-26 at 1.28.01 PM.png
m
As stated in our internal conversation, I am sending my findings over here for future reference. Hi Kurt, I want to let you know that I am able to reproduce this and have worked around it by converting the second dataset to an SQL dataset with the following statement:
```sql
select "datamart_created_timestamp", "datamart_updated_timestamp",
       "effective_timestamp", "fee_change", "flow_document",
       "flow_published_at", "gross_taxes_change", "invoice_locator",
       "issued_timestamp", "locator", "name", "policy_locator",
       "premium_change", "type"
from "policy_modification"
```
The mapping is now set correctly and I have made sure the dataset is working as expected. As to the root cause of the issue, I will have to check this one internally with our developers. From my investigation, the dataset that gets added first to the modeler gets mapped correctly, while the second one gets bugged. Alternatively, if using the SQL dataset is not optimal for you, I would recommend renaming the table in your DB, but we will aim at addressing this properly. Thank you for bringing this to our attention!