
Conversation

@victorsantos-cit

No description provided.

@victorsantos-cit
Author

/gcbrun

@jlporter

jlporter commented Nov 5, 2025

Looking at this compared with the spec doc, it looks like this is just setting up the regional ALBs, and does not include the Global external ALB piece. Is that going to be a separate PR?

@victorsantos-cit
Author

Yes @jlporter, it's in another PR: terraform-google-modules/terraform-google-lb-http#545

@victorsantos-cit
Author

Please @jlporter, tell me if there is anything blocking the merge of this blueprint. I've been waiting for this a long time, haha, so please tell me if I need to change something.

@jlporter

jlporter commented Nov 5, 2025

Thanks @victorsantos-cit, LGTM overall. See just the few comments I left above for some minor issues I ran into while trying to test.

@victorsantos-cit
Author

@jlporter where did you leave the comments? I can't find them, haha.

@@ -0,0 +1,93 @@
variable "project_id" {

I think you can get away with only needing project, not project_id also.

resource "google_compute_region_security_policy" "armor" {
for_each = var.regions
name = "regional-${each.key}-armor"
region = each.key

Add type = "CLOUD_ARMOR" here as well. In my testing, not having this caused the resource to be destroyed and recreated even with no changes, because it thought the type was null.
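For reference, a minimal sketch of the change (resource and variable names taken from the snippet above):

```hcl
resource "google_compute_region_security_policy" "armor" {
  for_each = var.regions
  name     = "regional-${each.key}-armor"
  region   = each.key

  # Without an explicit type, the provider reads it back as null and
  # forces destroy/recreate on every plan.
  type = "CLOUD_ARMOR"
}
```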

description = "Public zone for ${var.regional_hostname}"
}

resource "google_dns_record_set" "regional_a" {

Something about this is causing it to get recreated on each update as well. Here's what I'm seeing:

# google_dns_record_set.regional_a[0] will be updated in-place
  ~ resource "google_dns_record_set" "regional_a" {
        id           = "projects/jud-staging/managedZones/public-regional-example-com/rrsets/regional.example.com./A"
        name         = "regional.example.com."
      ~ rrdatas      = [
          + "34.143.3.80",
          + "34.144.89.233",
        ]
        # (4 unchanged attributes hidden)

      - routing_policy {
          - enable_geo_fencing = false -> null
            # (1 unchanged attribute hidden)

          - geo {
              - location = "us-central1" -> null
              - rrdatas  = [
                  - "34.143.3.80",
                ] -> null
            }
          - geo {
              - location = "us-west1" -> null
              - rrdatas  = [
                  - "34.144.89.233",
                ] -> null
            }
        }
    }
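One possible cause (my guess, not confirmed in this thread): on google_dns_record_set, the top-level rrdatas argument conflicts with routing_policy, so setting both makes each apply flip between the two forms. A sketch of the geo-routed record without top-level rrdatas, using the values from the plan output (TTL is an assumption):

```hcl
resource "google_dns_record_set" "regional_a" {
  name         = "regional.example.com."
  type         = "A"
  ttl          = 300                      # assumed TTL
  managed_zone = "public-regional-example-com"

  # Keep only the routing policy; do not also set top-level rrdatas.
  routing_policy {
    geo {
      location = "us-central1"
      rrdatas  = ["34.143.3.80"]
    }
    geo {
      location = "us-west1"
      rrdatas  = ["34.144.89.233"]
    }
  }
}
```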

url_map = google_compute_region_url_map.um[each.key].self_link
}

resource "google_compute_region_target_https_proxy" "https" {

I'm getting an error during resource creation here, which I don't fully understand

╷
│ Error: Error creating RegionTargetHttpsProxy: googleapi: Error 400: Invalid value for field 'resource.sslCertificates': ''. The URL is malformed., invalid
│
│   with google_compute_region_target_https_proxy.https["us-central1"],
│   on main.tf line 164, in resource "google_compute_region_target_https_proxy" "https":
│  164: resource "google_compute_region_target_https_proxy" "https" {
│
╵
╷
│ Error: Error creating RegionTargetHttpsProxy: googleapi: Error 400: Invalid value for field 'resource.sslCertificates': ''. The URL is malformed., invalid
│
│   with google_compute_region_target_https_proxy.https["us-west1"],
│   on main.tf line 164, in resource "google_compute_region_target_https_proxy" "https":
│  164: resource "google_compute_region_target_https_proxy" "https" {
│
╵
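The 400 suggests the proxy is being created with an empty string in ssl_certificates. A hedged sketch of wiring a regional certificate into the proxy (the certificate resource and file paths here are illustrative, not from the PR):

```hcl
resource "google_compute_region_ssl_certificate" "cert" {
  for_each    = var.regions
  name        = "regional-${each.key}-cert"
  region      = each.key
  private_key = file("key.pem")    # illustrative paths
  certificate = file("cert.pem")
}

resource "google_compute_region_target_https_proxy" "https" {
  for_each = var.regions
  name     = "regional-${each.key}-https"
  region   = each.key
  url_map  = google_compute_region_url_map.um[each.key].self_link

  # Must be a list of full certificate URLs, never an empty string.
  ssl_certificates = [
    google_compute_region_ssl_certificate.cert[each.key].self_link,
  ]
}
```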

boot = true
}

network_interface {

I believe that without

access_config {}

The VMs won't get an external IP address. This could work, but not without more configuration: for one, I think it causes the startup script to fail, because apt-get doesn't work without Cloud NAT. Also, to allow SSH into those machines, you'd need to configure IAP tunneling. I'm not sure what we usually do with these templates, but I think you either need to make sure this works correctly without an external IP, or just enable an external IP for the VMs.
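If the simpler route is taken (external IPs), the change is small; a sketch assuming the template's existing network variables:

```hcl
network_interface {
  network    = var.network      # assumed existing variables
  subnetwork = var.subnetwork

  # An empty access_config requests an ephemeral external IP, which lets
  # apt-get in the startup script and direct SSH work without NAT/IAP.
  access_config {}
}
```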

}
}

resource "google_compute_instance_template" "tmpl" {
@jlporter jlporter Nov 6, 2025


Suggest adding

 lifecycle {
    create_before_destroy = true
  }

otherwise template modifications fail because the template is still being used by the MIG.
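In context, the suggestion looks like this; switching from name to name_prefix is a common companion change (my addition, not from the thread) so Terraform can create the replacement template under a fresh name before destroying the old one:

```hcl
resource "google_compute_instance_template" "tmpl" {
  name_prefix  = "regional-tmpl-"   # assumed prefix
  machine_type = var.machine_type   # assumed variable

  # ... disk, network_interface, etc. as in the PR ...

  lifecycle {
    create_before_destroy = true
  }
}
```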

@jlporter

jlporter commented Nov 6, 2025

@jlporter where did you leave the comments? I can't find them, haha.

Sorry about that, I didn't realize it hadn't published my comments; I'm not used to the GitHub review process. They should be visible now.

@victorsantos-cit
Author

OK @jlporter, I will take a look at this.

]
}

resource "google_compute_firewall" "allow_hc" {

I think you also need a firewall rule covering traffic from the proxy to the backend, no? For instance, the google_compute_firewall on https://docs.cloud.google.com/load-balancing/docs/l7-internal/int-https-lb-tf-examples
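A hedged sketch of such a rule, taking the source range from the proxy_only subnetwork the PR already defines (the network variable and backend port are assumptions; if proxy_only uses for_each, index it per region):

```hcl
resource "google_compute_firewall" "allow_proxy" {
  name      = "allow-proxies-to-backends"
  network   = var.network            # assumed variable
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["80"]                # assumed backend port
  }

  # The managed Envoy proxies send traffic from the proxy-only subnet.
  source_ranges = [google_compute_subnetwork.proxy_only.ip_cidr_range]
}
```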

depends_on = [google_compute_subnetwork.proxy_only]
}

resource "google_dns_record_set" "regional_geo_a" {

This resource should also have --enable-health-check set, correct? That is, health checking should be enabled so that failover from one region to the other can happen, right?
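The Terraform equivalent of --enable-health-check is, as far as I can tell, the health_checked_targets block inside routing_policy; this is only a schema sketch from the provider docs, and the load balancer field values are assumptions:

```hcl
routing_policy {
  geo {
    location = "us-central1"
    # Health-checked targets let Cloud DNS drop this region from answers
    # when its load balancer is unhealthy, enabling cross-region failover.
    health_checked_targets {
      internal_load_balancers {
        load_balancer_type = "regionalL7ilb"   # assumed type
        ip_address         = "34.143.3.80"     # LB forwarding-rule address
        port               = "443"
        ip_protocol        = "tcp"
        network_url        = var.network       # assumed
        project            = var.project
        region             = "us-central1"
      }
    }
  }
}
```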


Added lifecycle management and updated DNS record logic.
Removed redundant project_id variable definition.
@victorsantos-cit
Author

Hey @jlporter, I just ran this version and it worked fine and all the tests passed. I'm obviously using a DNS server that doesn't exist, but it's created normally and everything runs smoothly.


direction = "INGRESS"
priority = 1000
source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]


The source range here should be the source range of the proxy subnet (the proxy_only resource), since that is the address range the LBs will be using to send traffic to the backends.
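In Terraform terms that would be something like the following (assuming the proxy_only subnetwork resource from earlier in the PR; index it per region if it uses for_each):

```hcl
direction = "INGRESS"
priority  = 1000

# Range of the proxy-only subnet, not the health-check ranges.
source_ranges = [google_compute_subnetwork.proxy_only.ip_cidr_range]
```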
