Our team is about to integrate the GoodData platform into our application. I was trying to build a proof-of-concept project, but failed to scan our PostgreSQL data schema.
The UI crashes immediately in Chrome with an out-of-memory error. In Firefox it keeps running at around 9 GB of memory usage, but it is still frozen after 30 minutes. Is there a limit on the number of tables in a schema? Ours currently contains 300+ tables.
Is this a bug, or am I doing something wrong? If the latter, please suggest a workaround.
- run gooddata/gooddata-cn-ce:latest locally with Docker
- connect a custom data source via the API (POST /api/entities/dataSources)
- click "Connect data" on the workspace
- after a few seconds the Google Chrome tab crashes with OOM
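For reference, the data-source registration step above can be sketched as follows. The endpoint path comes from the repro steps; the JSON:API envelope and the attribute names (`url`, `schema`, `username`, `password`) are assumptions modeled on GoodData.CN's entity API, so verify them against the API docs for your version before using this.

```python
import json

# Hypothetical sketch of the POST /api/entities/dataSources body.
# Attribute names below are assumptions -- check the GoodData.CN
# API reference for the authoritative schema.
def build_data_source_payload(ds_id: str, jdbc_url: str, schema: str,
                              username: str, password: str) -> dict:
    """Build a JSON:API body registering a PostgreSQL data source."""
    return {
        "data": {
            "id": ds_id,
            "type": "dataSource",
            "attributes": {
                "name": ds_id,
                "type": "POSTGRESQL",
                "url": jdbc_url,          # JDBC connection string
                "schema": schema,          # schema to scan
                "username": username,
                "password": password,
            },
        }
    }

# Example values only -- host/credentials are placeholders.
payload = build_data_source_payload(
    "pg-poc", "jdbc:postgresql://host.docker.internal:5432/poc",
    "public", "demo", "demo")
print(json.dumps(payload, indent=2))
```

You would send this body with your API token as a Bearer header; the sketch only builds the payload so the structure is easy to inspect.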
If I scan only the views, everything works fine. I also tried to scan parts of the schema using different prefixes, but as soon as I specified any prefix, no tables were loaded at all.
Best answer by jacek
Our LDM Modeler is not capable of handling so many datasets generated from that many tables.
Honestly, we do not expect users to work with this many datasets in a single workspace; it would not provide a good user experience (UX) to end users when creating insights, because they would have to search for attributes and facts in a list of thousands.
Instead, we recommend generating the datasets (LDM) from a subset of tables. There are two ways to achieve that:
Do not forget to specify the "separator" correctly; the default is a double underscore (__).
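To make the prefix-based approach concrete, here is a minimal sketch of a scan request limited to one table prefix. The parameter names (`separator`, `tablePrefix`, `scanTables`, `scanViews`) are assumptions modeled on GoodData.CN's scan-model action, not confirmed by this thread, so check the API reference for your release.

```python
# Hypothetical scan-request body restricted to a subset of tables.
# Field names are assumptions -- verify against the GoodData.CN API docs.
def build_scan_request(prefix: str) -> dict:
    return {
        "separator": "__",      # default separator, as noted above
        "scanTables": True,
        "scanViews": False,     # views already scan fine; skip them here
        "tablePrefix": prefix,  # only tables starting with this prefix
    }

# Scan only tables whose names start with "sales".
req = build_scan_request("sales")
```

Scanning one prefix at a time keeps each generated LDM small enough for the modeler to handle.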
There is a filter box above the tables, which you can use to find the required tables.
It should provide a good-enough UX, including generating references once you e.g. add a second dataset connected to the first one.
Let me know if my recommendations work for you and what your UX is here - we not only appreciate the feedback but can also use it to improve the UX in upcoming releases ;-)