# gooddata-platform
j
hi there! a question, I have two databases and would like to alternate between connecting the dev workspace to the development database and the live database, how would that work? any recommendations for how to work with multiple databases to separate the development and the production? thanks!
j
Are you interested in our new CN platform? I mean https://www.gooddata.com/developers/cloud-native/doc/1.7/.
m
I believe Josefin is currently using the hosted GoodData Platform, based on the previous questions. Josefin, our typical best practice for the hosted platform is to have a whole separate DEV environment (GD domain) for development, where the development workspace lives and is connected to the development database, and then a production environment (GD domain) connected to the production database. Typically you want to name the data sources the same way in both environments; then you can easily use the Pipeline Export and Import feature to migrate processes, schedules, and data sources between the environments. If, on the other hand, you for some reason need to switch between the dev and prod databases for a single development workspace inside one environment (and you are using ADDv2 to load it), I would recommend this:
• create two data sources in your environment, one pointing to the dev database and one pointing to the prod database
• in the development workspace, create two ADDv2 processes, each using a different data source
• then manually execute whichever one you want to load the data from (see the sketch after this message)
NOTE: If you normally use incremental load, you will probably want to run a FULL load every time you load from a different database than before. With incremental load, the data from the two databases would mix in the workspace, and the last-loaded timestamp in each database will likely differ. Use a full load to avoid these issues.
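For illustration, here is a minimal Python sketch (using the `requests` library) of how the "manually execute the one you want" step could be scripted against the hosted platform's REST API. The hostname, project and process IDs, and credentials are placeholders, and the endpoint paths and the exact execution parameter for forcing a full load should be verified against the GoodData API documentation; treat this as a sketch of the idea, not a drop-in script.

```python
"""
Sketch: trigger one of the two deployed ADDv2 processes on the GoodData
hosted platform via its REST API. All identifiers below are placeholders.
"""
import requests

GD_HOST = "https://secure.gooddata.com"      # your GD domain (assumption)
PROJECT_ID = "<dev_workspace_project_id>"    # the DEV workspace (placeholder)
PROCESS_ID_DEV_DB = "<addv2_process_dev>"    # ADDv2 process using the dev data source
PROCESS_ID_PROD_DB = "<addv2_process_prod>"  # ADDv2 process using the prod data source


def gd_session(login: str, password: str) -> requests.Session:
    """Log in and obtain the temporary-token cookie used by the platform API."""
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    # SuperSecured token (SST) cookie from the login resource...
    r = s.post(
        f"{GD_HOST}/gdc/account/login",
        json={"postUserLogin": {"login": login, "password": password, "remember": 0}},
    )
    r.raise_for_status()
    # ...then the temporary token (TT) cookie used on subsequent requests.
    s.get(f"{GD_HOST}/gdc/account/token").raise_for_status()
    return s


def run_add_load(session: requests.Session, process_id: str, params: dict = None) -> dict:
    """Start an execution of a deployed (ADDv2) process and return the response body."""
    r = session.post(
        f"{GD_HOST}/gdc/projects/{PROJECT_ID}/dataload/processes/{process_id}/executions",
        json={"execution": {"params": params or {}}},
    )
    r.raise_for_status()
    return r.json()


if __name__ == "__main__":
    s = gd_session("user@example.com", "********")
    # Switching databases: run the process wired to the database you want.
    # When switching, also pass the ADDv2 parameter that forces a FULL load
    # (the parameter name is intentionally not shown here -- check the ADD docs).
    result = run_add_load(s, PROCESS_ID_PROD_DB, params={})
    print("Execution started:", result)
```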
j
That's correct, I'm using the hosted GoodData Platform. I have separate DEV and PROD environments and use ADDv2 to load and distribute the data to the workspaces, as well as LCM to manage the workspaces. Given that, is the second option the one you'd recommend?
m
Great, thank you for confirming. I would only recommend it if you for some reason need to load the DEV workspace in the dev environment with data from the PROD database and then load from the DEV database again. If that is your case, I think this approach is OK. Otherwise we usually try not to mix dev and production setups, but technically it is possible.
j
Alright! If I want to keep the environments completely separate as you first suggested, would that be possible given my setup of workspaces with LCM and ADDv2? If so, how would the data pipeline work with, or relate to, the bricks setup, the service workspace, and the data loading (ADD)? Or would the Pipeline Export and Import process replace the service workspace, the scheduled bricks (release, rollout), and ADDv2?
m
No, the pipeline export and import does not replace the service workspace. It allows you to export the service workspace configuration (all the bricks, the segment-level ADD, etc.) from one domain and then import the whole thing into another domain. This would be a typical GoodData setup with two environments, DEV and PROD:
• Your DEV GD Domain
  ◦ datasource “DS” connected to your DEV database
  ◦ other datasources (i.e. a generic datasource storing the domain name etc.)
  ◦ DEV workspace
    ▪︎ ADDv2 “this workspace” load using the “DS” datasource
  ◦ service workspace (optional, only if you want to have the clients also in dev)
    ▪︎ release brick (pointed to the DEV workspace in the DEV domain)
    ▪︎ rollout brick
    ▪︎ provisioning bricks using the “DS” datasource
    ▪︎ …other bricks if used
    ▪︎ ADDv2 segment load using the “DS” datasource
  ◦ Client workspaces within the segment (optional, only if you want to have the clients also in dev)
    ▪︎ dev client 1
    ▪︎ dev client 2
    ▪︎ …
• Your PROD GD Domain
  ◦ datasource “DS” connected to your PROD database
  ◦ other datasources (i.e. a generic datasource storing the domain name etc.)
  ◦ service workspace
    ▪︎ release brick (pointed to the DEV workspace in the DEV domain)
    ▪︎ rollout brick
    ▪︎ provisioning bricks using the “DS” datasource
    ▪︎ …other bricks if used
    ▪︎ ADDv2 segment load using the “DS” datasource
  ◦ Client workspaces within the segment
    ▪︎ prod client 1
    ▪︎ prod client 2
    ▪︎ …
So in an ideal scenario the environments are completely separated. The dev GoodData domain points to your dev database, and the prod GoodData domain to your prod database. The pipeline in the service workspace in the DEV domain and the pipeline in the service workspace in the PROD domain are identical and differ only in the values of the data sources. In that case you can use the pipeline export/import tool to deploy changes from the dev service workspace to the prod service workspace (see the sketch after this message). You do not need to use the pipeline export/import; you can handle the changes in the service workspaces on DEV and PROD manually if you prefer.
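To make the "identical pipelines, different data source values" idea concrete, here is a small conceptual sketch in plain Python. It is not the GoodData API, and the brick names, JDBC URLs, and usernames are made up for illustration; the point is only that the pipeline structure is shared while each environment supplies its own values for the same data source name “DS”.

```python
"""
Conceptual sketch: one shared pipeline definition, two environments that
differ only in the values behind the shared data source name "DS".
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class DataSource:
    name: str          # same logical name in both domains, e.g. "DS"
    jdbc_url: str      # environment-specific connection (placeholder values)
    username: str


# Shared pipeline: bricks and the ADDv2 segment load refer to the data
# source by its *name* only, never by connection details.
PIPELINE = {
    "bricks": ["release", "rollout", "provisioning"],
    "segment_load": {"type": "ADDv2", "datasource": "DS"},
}

# Environment-specific values for the same data source name.
ENVIRONMENTS = {
    "DEV":  DataSource("DS", "jdbc:postgresql://dev-db.example.com/analytics", "gd_dev"),
    "PROD": DataSource("DS", "jdbc:postgresql://prod-db.example.com/analytics", "gd_prod"),
}


def render(env: str) -> dict:
    """Combine the shared pipeline with one environment's data source values."""
    ds = ENVIRONMENTS[env]
    return {"domain": env, "pipeline": PIPELINE, "datasources": {ds.name: ds}}


if __name__ == "__main__":
    for env in ("DEV", "PROD"):
        print(env, "->", render(env))
```

Because only the data source values differ between the two renderings, migrating a change from DEV to PROD (whether via the pipeline export/import tool or manually) only has to carry the structure, not any connection details.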
j
okay, I'll give this a try, many thanks for your reply! 🙂