I have a question about the possibility of moving attributes between datasets.
Let's say I want to move an attribute, Account name, from the Subscription fact table to the Performance dimension table at the LDM level.
It seems possible operationally (using a “move” button), but I cannot see how I can map it properly once it lands in the Performance dimension table. It seems I cannot specify that the data source for this particular column should be a different source table (the Subscription fact table in this case). If so, what is the purpose of the “move” button?
My general context: I am testing what I can and cannot modify in my LDM when Insights/Dashboards already exist on top of it.
And is there a recommended approach for populating my workspace with data incrementally?
Best answer by Julius Kos
The best way to update your LDM is to follow the steps here:
Please note that there may be cascading consequences for existing metrics/insights that use those objects in the LDM. Some changes may cause metrics, as well as filters, to become unusable.
You can make sure your workspace is synchronized with a MAQL DDL command:
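As a minimal sketch (the dataset identifier `dataset.performance` here is hypothetical — substitute your own dataset's identifier), the synchronization command looks like this:

```sql
-- Re-synchronize the physical storage of a dataset with the LDM.
-- PRESERVE DATA keeps the data already loaded into the dataset;
-- omitting it drops the dataset's data so it must be reloaded.
SYNCHRONIZE {dataset.performance} PRESERVE DATA;
```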
In regards to incremental loading you can find a wealth of information on our page here:
Just make sure you are staying within the Workspace Loading Platform Limits
@Joseph Heun, thanks much for these details, those are really helpful.
One thing that is still unclear to me is the moving of the attribute between datasets part.
The instructions do not mention how to deal with mapping in the Load configuration tab when the source table of the new dataset differs from the initial one, because it looks like mapping is defined at the dataset level, not at the column level.
Referring to my example above: the Account name attribute is initially sourced from the Subscription fact table in our DB, which corresponds to the Subscription dataset of the LDM, while the Performance dimension table (where I intend to move Account name) is sourced from a different DB table.
So the system allows me to move Account name between datasets. But when it comes to loading data after the LDM changes, the load breaks because the new field is mapped to the wrong source and I cannot change it.
So my question is: is there a way to map fields to different data sources within one dataset?
If not, do I understand correctly that the best approach would be to load the data from only one data source (i.e., one general table in the DB) and split it into dimension/fact/date datasets only at the data modelling step?
That’s correct: the best approach is to keep one data source for a single dataset. GoodData is essentially built around the LDM, so generally speaking you need to split the data at the modeling step.
@Joseph Heun! Just to make sure we’re on the same page: we should use one table in the DB schema as a source for the LDM, and after that split the data from this one table into datasets (dimension and fact LDM tables) at the LDM level, is that correct?
You should have multiple tables/views in your source data, and each of them should represent a particular dataset in your logical data model. Can you please confirm which data source you are using? This article might be useful for you as well:
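To sketch that idea in Redshift SQL (all table, view, and column names below are hypothetical), each LDM dataset can be backed by its own view. The view layer is also where a column can effectively be “moved” between datasets without remapping anything in the LDM:

```sql
-- Hypothetical example: one view per LDM dataset.
-- The view's columns must match the fields of the dataset it maps to.
CREATE OR REPLACE VIEW analytics.performance AS
SELECT
    p.performance_id,
    p.performance_name,
    s.account_name        -- attribute pulled in from the subscription table
FROM staging.performance p
JOIN staging.subscription s
  ON s.performance_id = p.performance_id;
```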
I’m a bit confused because, if I understood correctly, using a separate source table for each dataset (as we’re doing now) won’t give us much flexibility for future modifications, like moving an attribute from one LDM dataset to another, since each dataset can be mapped to only one data source, i.e., one table.
If I misunderstood, could you please help me clarify it?
We’re using Redshift (a dedicated schema with separate tables for each dataset) atm.
I can confirm that the approach you are currently using (separate data source tables) is the correct one, and unfortunately there is no support for the type of flexibility you are describing here.
As my colleague Joe already suggested, there are always some implications when changing the Logical Data Model (LDM) in a “living” workspace: cascading consequences for existing metrics/insights/reports, which are defined by the relations within the data model.
The move button from the provided documentation is primarily meant to be used while preparing the LDM, before the actual analytics are created on top of it; otherwise it also requires a change in the data source (DS). A table/view in your data source = a dataset in the LDM. You can only map fields between a particular data source table and its corresponding dataset.
I hope that helped but feel free to follow up in case of further questions.
@Julius Kos, thanks for these clarifications!
My last question would be: what are the possible cons of using one general source table and distributing the data into dimension and fact datasets at the LDM level? Or maybe I can read about it somewhere?
Unfortunately, that won’t even be possible, because you wouldn’t be able to map the datasets to such a source table. In order to map a dataset to a source table/view successfully, the column names (headers) and the number of columns must match.
More information about mapping can be found in the documentation below:
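For illustration (all names here are hypothetical): if a dataset expects the columns account_id and account_name, a view can alias the raw source columns so the headers match exactly:

```sql
-- Alias the source columns so the headers match the dataset's fields.
CREATE OR REPLACE VIEW analytics.account AS
SELECT
    acc_id AS account_id,
    acc_nm AS account_name
FROM staging.raw_accounts;
```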
Got it, thanks much, that helps a lot!
Have a nice weekend!