Hello, we're migrating our GD.CN deployments from GCP/BigQuery to AWS/Snowflake and would like to migrate the GD.CN databases from our Postgres clusters (dev/stage/prod) on GCP to our corresponding Postgres clusters on AWS, including the following objects:
- workspaces
- datasets
- visualization objects
- facts
- metrics
- filters
- any other objects that the above objects depend on (note: organizations already exist on both platforms with the same IDs)
We'd like to transfer all properties of the above objects to AWS exactly as they exist on GCP, including object IDs, so that when we refer to an object by its ID on AWS, we get an exact copy of the object as it existed on GCP, with the following exceptions:
1) our data sources on AWS are Snowflake data sources (not BigQuery)
2) our dataset and source column names on AWS follow Snowflake naming conventions (not BigQuery conventions)
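For context, this is the kind of name mapping we mean in exception 2; a minimal sketch, assuming our BigQuery names are lowercase snake_case and that we leave Snowflake identifiers unquoted (so they resolve to their uppercase form):

```python
def to_snowflake_identifier(bq_name: str) -> str:
    """Map a lowercase BigQuery-style name (e.g. 'order_lines') to the
    uppercase form that Snowflake resolves unquoted identifiers to."""
    return bq_name.upper()

assert to_snowflake_identifier("order_lines") == "ORDER_LINES"
```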
Hence, we envision the following steps:
1. upgrade our GD.CN deployment on GCP to the same version deployed on AWS so that we have identical schemas on both platforms
2. export the GD.CN Postgres database on GCP to SQL (see the export sketch after this list)
3. update the data source info, dataset names, source column names, and other BigQuery-specific values in the exported SQL file to their Snowflake equivalents (see the rewrite sketch below)
4. import the updated SQL into our RDS Postgres DB on AWS that is used by GD.CN (see the import sketch below)
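For step 2, here's roughly what we have in mind; a sketch only, with placeholder host/user/database/file names standing in for our real values:

```python
import subprocess

# Step 2 sketch: plain-SQL dump of the GD.CN metadata database on GCP,
# so the output is editable in step 3. Connection details are placeholders.
subprocess.run(
    [
        "pg_dump",
        "--format=plain",     # plain SQL so we can edit it before import
        "--no-owner",         # roles may differ between GCP and AWS
        "--no-privileges",
        "--host=gcp-postgres.example.internal",
        "--username=gooddata",
        "--dbname=gooddata",
        "--file=gdcn_dump.sql",
    ],
    check=True,
)
```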
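For step 3, we're picturing something along these lines; a rough sketch where the substitutions (the data source type token, the table/column renames) are illustrative placeholders, since we haven't yet inventoried what actually appears in the dump:

```python
import re

# Step 3 sketch: rewrite BigQuery-specific values in the exported SQL.
# These substitutions are examples, not a complete list; we'd build the
# real mapping from an inventory of our datasets and source columns.
SUBSTITUTIONS = [
    (re.compile(r'"BIGQUERY"'), '"SNOWFLAKE"'),       # data source type (exact token TBD)
    (re.compile(r"\border_lines\b"), "ORDER_LINES"),  # one entry per table/column rename
]

with open("gdcn_dump.sql", encoding="utf-8") as f:
    sql = f.read()

for pattern, replacement in SUBSTITUTIONS:
    sql = pattern.sub(replacement, sql)

with open("gdcn_dump_snowflake.sql", "w", encoding="utf-8") as f:
    f.write(sql)
```

One thing we're unsure about here: whether blanket string substitution over the dump is safe, or whether we'd be better off importing as-is and then updating the data source and naming details through the API.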
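And for step 4, a sketch of the import, again with placeholder connection details; we'd load the rewritten dump in a single transaction and stop on the first error:

```python
import subprocess

# Step 4 sketch: load the rewritten dump into the RDS Postgres DB on AWS.
subprocess.run(
    [
        "psql",
        "--host=aws-rds.example.internal",
        "--username=gooddata",
        "--dbname=gooddata",
        "--single-transaction",       # all-or-nothing import
        "--set=ON_ERROR_STOP=1",      # abort on the first error
        "--file=gdcn_dump_snowflake.sql",
    ],
    check=True,
)
```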
Would you suggest another approach? If the above approach seems reasonable, is there anything we might be missing that we should consider? Thanks for any suggestions.