This change allows the module to create an Aurora Global database acc… #237


Closed

pedrojflores wants to merge 1 commit

Conversation

pedrojflores

…ross two regions. When creating a global database credentials must not be specified for the secondary database. The current version of the module does not provide a way to not assign credentials to a secondary database. This change allows it to.

Description

This change allows the module to create an Aurora Global database across two regions. This is currently not possible with the latest version of the module, because it assigns a username and password to the secondary cluster even when 'is_primary_cluster = false' is specified. AWS does not allow this: credentials can only be set on the primary cluster.
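For context, this restriction exists at the provider level as well: a secondary regional cluster joined to a global database must be defined without credentials. A minimal sketch of the raw resource (resource and reference names here are illustrative, not from the module):

```hcl
# Sketch: a secondary regional cluster joined to a global database.
# AWS rejects master_username/master_password on this resource, so the
# module must be able to omit them when is_primary_cluster = false.
resource "aws_rds_cluster" "secondary" {
  provider                      = aws.us-west-2
  global_cluster_identifier     = aws_rds_global_cluster.this.id
  engine                        = "aurora-mysql"
  replication_source_identifier = aws_rds_cluster.primary.arn
  source_region                 = "us-east-1"
  # note: no master_username or master_password here
}
```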

Motivation and Context

This change is required so the module can create both the primary and the secondary clusters that make up an Aurora Global cluster.

# global cluster
resource "aws_rds_global_cluster" "aurora_global_cluster" {
  provider                  = aws.us-east-1
  global_cluster_identifier = local.global_cluster_identifier
  engine                    = local.engine
  engine_version            = local.engine_version
  database_name             = local.primary_database_name
  storage_encrypted         = local.storage_encrypted
}

# primary cluster
module "primary" {
  providers = {
    aws = aws.us-east-1
  }
  source                          = "terraform-aws-modules/rds-aurora/aws"
  apply_immediately               = true
  create_security_group           = false
  create_monitoring_role          = true
  db_subnet_group_name            = data.aws_db_subnet_group.private-use1.name
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
  engine                          = local.engine
  engine_version                  = local.engine_version
  global_cluster_identifier       = aws_rds_global_cluster.aurora_global_cluster.id
  instance_type                   = "db.r4.large"
  instance_type_replica           = "db.r4.large"
  is_primary_cluster              = true
  kms_key_id                      = aws_kms_key.primary.arn
  monitoring_interval             = 10
  name                            = local.primary_database_name
  performance_insights_enabled    = true
  performance_insights_kms_key_id = aws_kms_key.primary.arn
  replica_scale_cpu               = 85
  replica_scale_enabled           = true
  replica_scale_in_cooldown       = 120
  replica_scale_max               = 3
  replica_scale_min               = 2
  replica_scale_out_cooldown      = 120
  storage_encrypted               = true
  version                         = "5.2.0"
  vpc_id                          = data.aws_vpc.use1.id
  vpc_security_group_ids          = [aws_security_group.rds_access-use1.id]
  tags = merge(
    {
      application = local.primary_database_name
    },
    local.tags,
  )
  username = local.database_username
}


# secondary cluster
module "secondary" {
  providers = {
    aws = aws.us-west-2
  }
  #source                          = "./terraform-aws-rds-aurora"
  source                          = "[email protected]:q2e/it/terraform/modules/terraform-aws-rds-aurora.git"
  apply_immediately               = true
  create_security_group           = false
  create_monitoring_role          = false
  db_subnet_group_name            = data.aws_db_subnet_group.private-usw2.name
  enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
  engine                          = local.engine
  engine_version                  = local.engine_version
  global_cluster_identifier       = aws_rds_global_cluster.aurora_global_cluster.id
  instance_type                   = "db.r4.large"
  instance_type_replica           = "db.r4.large"
  is_primary_cluster              = false
  kms_key_id                      = aws_kms_key.secondary.arn
  monitoring_interval             = 10
  monitoring_role_arn             = module.primary.enhanced_monitoring_iam_role_arn
  name                            = local.secondary_database_name
  performance_insights_enabled    = true
  performance_insights_kms_key_id = aws_kms_key.secondary.arn
  replica_scale_cpu               = 85
  replica_scale_enabled           = true
  replica_scale_in_cooldown       = 120
  replica_scale_max               = 3
  replica_scale_min               = 2
  replica_scale_out_cooldown      = 120
  replication_source_identifier   = module.primary.rds_cluster_arn
  source_region                   = "us-east-1"
  storage_encrypted               = true
  vpc_id                          = data.aws_vpc.usw2.id
  vpc_security_group_ids          = [aws_security_group.rds_access-usw2.id]
  tags = merge(
    {
      application = local.secondary_database_name
    },
    local.tags,
  )
  depends_on = [module.primary]
}

Breaking Changes

None.

How Has This Been Tested?

We've been using a modified version of this module that includes the proposed changes for months without issue. We also have a suite of python testinfra tests that verify the global cluster deployed with these changes is fully functional.

@@ -61,8 +61,8 @@ resource "aws_rds_cluster" "this" {
   enable_http_endpoint = var.enable_http_endpoint
   kms_key_id           = var.kms_key_id
   database_name        = var.database_name
-  master_username      = var.username
-  master_password      = local.master_password
+  master_username      = var.is_primary_cluster == false ? null : var.username
+  master_password      = var.is_primary_cluster == false ? null : local.master_password
Member

Suggested change:
-  master_username = var.is_primary_cluster == false ? null : var.username
+  master_username = var.is_primary_cluster ? var.username : null

Member

Suggested change:
-  master_password = var.is_primary_cluster == false ? null : local.master_password
+  master_password = var.is_primary_cluster ? local.master_password : null
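
With both suggestions applied, the credential assignments in the module's aws_rds_cluster resource would read:

```hcl
master_username = var.is_primary_cluster ? var.username : null
master_password = var.is_primary_cluster ? local.master_password : null
```

This is the same logic as the original change, just written with the positive condition first for readability.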

@safakozdek

@pedrojflores Can you add the above fixes?

@bryantbiggs
Member

We'll also need to create an example for this to ensure it works functionally.

@safakozdek

@bryantbiggs I can work on it. Can I open a separate PR for that, and maybe include these changes as well? I'm not sure whether the owner of this PR is willing to implement an example for it.

@bryantbiggs
Member

@safakozdek Yes, of course — anyone is welcome to submit PRs.

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 14, 2023