Real-time GCP security alerting via Cloud Logging → Pub/Sub → Cloud Function → Slack (or any webhook).
The native GCP Security Command Center alerting has multi-minute latency. This module wires up a Cloud Logging sink that captures high-signal audit events and forwards them to a webhook within seconds.
```
GCP Audit Logs (Admin Activity)
        │
        ▼
Cloud Logging sink (filter: IAM, firewall, SA keys, GCS ACL, KMS, org policy)
        │
        ▼
Pub/Sub topic ──────────────────────────────► dead-letter topic
        │
        ▼  (CloudEvent trigger)
Cloud Function (Gen 2, Python 3.12)
        │
        ▼
POST → Slack webhook (or any HTTP endpoint)
```
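In Terraform terms, the sink-to-topic leg of the pipeline above boils down to roughly the following. This is a simplified sketch, not the module's actual source; the resource names and variable wiring are illustrative:

```hcl
# Pub/Sub topic that receives matching audit log entries
resource "google_pubsub_topic" "alerts" {
  name    = "security-alerts"
  project = var.project_id
}

# Log sink that exports matching audit entries to the topic
resource "google_logging_project_sink" "alerts" {
  name                   = "security-alerts-sink"
  project                = var.project_id
  destination            = "pubsub.googleapis.com/${google_pubsub_topic.alerts.id}"
  filter                 = var.log_filter
  unique_writer_identity = true
}

# Allow the sink's generated writer identity to publish to the topic
resource "google_pubsub_topic_iam_member" "sink_writer" {
  topic  = google_pubsub_topic.alerts.id
  role   = "roles/pubsub.publisher"
  member = google_logging_project_sink.alerts.writer_identity
}
```

`unique_writer_identity = true` gives the sink its own service account, which is what the `google_pubsub_topic_iam_member` binding grants publish rights to.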
| Category | What's watched |
|---|---|
| IAM | SetIamPolicy on any resource |
| Service Accounts | Create, delete, disable, enable, undelete, key create/delete |
| Firewall | Rule insert, patch, delete |
| Network | VPC create/delete, peering, subnetworks, routers |
| VPN / Interconnect | Tunnel and attachment create/delete (traffic exfil risk) |
| Compute Instances | Service account swap, metadata injection, IAM change |
| GCS | Bucket/object ACL and policy changes |
| KMS | Key create, destroy, disable, enable, import, update |
| Org Policy | Create, update, delete |
| Secret Manager | Secret and version lifecycle, IAM changes |
| Cloud SQL | Data export, authorized network changes |
| GKE | Cluster create, delete, update |
| Cloud Run | IAM changes (public exposure), service deletion |
| Cloud Functions | IAM changes (public exposure), function deletion |
| Logging Sinks | Sink delete/update, exclusion create/update/delete (pipeline integrity) |
| Access Context Manager | VPC service control perimeter create/update/delete |
| BigQuery | Dataset IAM and sharing changes |
| Cloud Armor | WAF security policy create, patch, delete |
| Artifact Registry | Repository and image create/delete/update |
| Binary Authorization | Policy update or delete |
| Cloud Scheduler / Tasks | Job and queue create/update/delete |
| Billing | Account updates, resource association changes |
Override with your own filter by setting `log_filter` in `terraform.tfvars`.
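For example, a narrower override that only watches IAM policy changes and service account key creation might look like this in `terraform.tfvars`. The filter below is illustrative, not the module's curated default:

```hcl
log_filter = <<-EOT
  logName:"cloudaudit.googleapis.com%2Factivity" AND (
    protoPayload.methodName="SetIamPolicy"
    OR protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey"
  )
EOT
```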
Enable the following APIs in your project before applying:

```sh
gcloud services enable \
  cloudfunctions.googleapis.com \
  cloudbuild.googleapis.com \
  run.googleapis.com \
  pubsub.googleapis.com \
  logging.googleapis.com \
  storage.googleapis.com \
  artifactregistry.googleapis.com \
  --project YOUR_PROJECT_ID
```

The identity running `terraform apply` needs the following roles (or equivalent):

- `roles/cloudfunctions.admin`
- `roles/iam.serviceAccountAdmin`
- `roles/iam.serviceAccountUser`
- `roles/pubsub.admin`
- `roles/logging.admin`
- `roles/storage.admin`
- `roles/resourcemanager.projectIamAdmin`
- Terraform >= 1.5.0
- `gcloud` authenticated: `gcloud auth application-default login`
Create `terraform.tfvars` in this directory:

```hcl
project_id  = "my-secure-project"
region      = "us-central1"
webhook_url = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```

Then apply:

```sh
terraform init
terraform apply
```

Alternatively, consume it as a module:

```hcl
module "security_alerts" {
  source      = "./gcp-security-alerts"
  project_id  = "my-secure-project"
  region      = "us-central1"
  webhook_url = var.slack_webhook_url
}
```

| Variable | Default | Description |
|---|---|---|
| `name_prefix` | `"security-alerts"` | Prefix for all resource names. Change to deploy multiple instances. |
| `log_filter` | (curated security filter) | Custom Cloud Logging filter. Set to `""` to use the default, or provide your own. |
| `function_memory_mb` | `256` | Cloud Function memory in MB. |
| `function_timeout_seconds` | `60` | Cloud Function timeout in seconds. |
After `terraform apply`, check that the log sink and function exist:

```sh
PROJECT=your-project-id

# List log sinks
gcloud logging sinks list --project=$PROJECT

# Describe the function
gcloud functions describe security-alerts-fn --region=us-central1 --project=$PROJECT
```

The fastest way to generate an event matching the default filter is to create and immediately delete an IAM service account:
```sh
gcloud iam service-accounts create test-alert-sa \
  --description="Temporary SA for alert testing" \
  --project=$PROJECT

gcloud iam service-accounts delete \
  test-alert-sa@${PROJECT}.iam.gserviceaccount.com \
  --project=$PROJECT --quiet
```

You should see a Slack alert within ~10–30 seconds.
Alternatively, publish a test message straight to the Pub/Sub topic. This bypasses the log sink and tests the function in isolation, which is the fastest feedback loop.
First, build a realistic audit log payload and base64-encode it:
```sh
TOPIC=$(terraform output -raw pubsub_topic_name)

PAYLOAD=$(python3 -c "
import base64, json
entry = {
    'protoPayload': {
        '@type': 'type.googleapis.com/google.cloud.audit.AuditLog',
        'methodName': 'google.iam.admin.v1.CreateServiceAccountKey',
        'resourceName': 'projects/test-project/serviceAccounts/[email protected]',
        'serviceName': 'iam.googleapis.com',
        'authenticationInfo': {'principalEmail': '[email protected]'},
        'requestMetadata': {'callerIp': '203.0.113.1'}
    },
    'severity': 'NOTICE',
    'timestamp': '2026-03-25T12:00:00Z',
    'resource': {'labels': {'project_id': 'test-project'}}
}
print(base64.b64encode(json.dumps(entry).encode()).decode())
")

gcloud pubsub topics publish $TOPIC \
  --message="$PAYLOAD" \
  --project=$PROJECT
```

Then read the function logs:

```sh
gcloud functions logs read security-alerts-fn \
  --region=us-central1 \
  --project=$PROJECT \
  --limit=50
```

Or via Cloud Logging:
```sh
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="security-alerts-fn"' \
  --project=$PROJECT \
  --limit=20 \
  --format=json
```

If alerts are not arriving, check whether messages landed in the dead-letter topic:
```sh
DEAD_LETTER=$(terraform output -raw dead_letter_topic_name)

gcloud pubsub subscriptions create tmp-dl-check \
  --topic=$DEAD_LETTER \
  --project=$PROJECT

gcloud pubsub subscriptions pull tmp-dl-check \
  --auto-ack \
  --project=$PROJECT

# Clean up
gcloud pubsub subscriptions delete tmp-dl-check --project=$PROJECT
```

To tear everything down:

```sh
terraform destroy
```

The GCS bucket has `force_destroy = true`, so it will be deleted along with its contents.
- The webhook URL is marked `sensitive` in Terraform, so it will not appear in plan/apply output.
- The function runs under a dedicated least-privilege service account (`roles/logging.logWriter` only).
- The function uses Cloud Functions Gen 2 (backed by Cloud Run), which supports longer timeouts and better concurrency than Gen 1.
- 4xx responses from the webhook are treated as permanent failures (not retried). 5xx and network errors are retried automatically by Cloud Functions.
- Unprocessable (permanently malformed) messages are ACKed so they do not cause infinite retries.