From d3fa73b2cc7470e568322f8010e753d3ca4e01b2 Mon Sep 17 00:00:00 2001
From: github-actions
Date: Tue, 22 Apr 2025 16:05:16 +0000
Subject: [PATCH] chore(schema): update

---
 samtranslator/schema/schema.json         | 186 +++---
 schema_source/cloudformation-docs.json   | 613 +++++++++++++++++------
 schema_source/cloudformation.schema.json | 186 +++---
 3 files changed, 642 insertions(+), 343 deletions(-)

diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json
index aed4e150d..d1e313581 100644
--- a/samtranslator/schema/schema.json
+++ b/samtranslator/schema/schema.json
@@ -565,7 +565,7 @@
 "type": "string"
 },
 "KeyStorageSecurityStandard": {
- "markdownDescription": "Specifies a cryptographic key management compliance standard used for handling CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions do not support the default. When creating a CA in these Regions, you must provide `FIPS_140_2_LEVEL_2_OR_HIGHER` as the argument for `KeyStorageSecurityStandard` . Failure to do this results in an `InvalidArgsException` with the message, \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in various Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .",
+ "markdownDescription": "Specifies a cryptographic key management compliance standard for handling and protecting CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions don't support the default value. When you create a CA in these Regions, you must use `CCPC_LEVEL_1_OR_HIGHER` for the `KeyStorageSecurityStandard` parameter. If you don't, the operation returns an `InvalidArgsException` with this message: \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in different AWS Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .",
 "title": "KeyStorageSecurityStandard",
 "type": "string"
 },
@@ -9785,7 +9785,7 @@
 "type": "string"
 },
 "Description": {
- "markdownDescription": "A description of the configuration.",
+ "markdownDescription": "A description of the configuration.\n\n> Due to HTTP limitations, this field only supports ASCII characters.",
 "title": "Description",
 "type": "string"
 },
@@ -20623,7 +20623,7 @@
 "type": "number"
 },
 "ResourceId": {
- "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name.
Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. 
Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, @@ -20633,7 +20633,7 @@ "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", + "markdownDescription": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -20804,17 +20804,17 @@ "type": "string" }, "PolicyType": { - "markdownDescription": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.", + "markdownDescription": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.\n\n`PredictiveScaling` \u2014Only supported for Amazon ECS", 
"title": "PolicyType", "type": "string" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. 
Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. 
Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", + "markdownDescription": "The scalable dimension. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -27201,7 +27201,7 @@ }, "Tags": { "additionalProperties": true, - "markdownDescription": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value-for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` . This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. 
For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.", + "markdownDescription": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value (for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` ). This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -27302,7 +27302,7 @@ "type": "number" }, "TerminateJobsOnUpdate": { - "markdownDescription": "Specifies whether jobs are automatically terminated when the computer environment infrastructure is updated. The default value is `false` .", + "markdownDescription": "Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated. The default value is `false` .", "title": "TerminateJobsOnUpdate", "type": "boolean" } @@ -28095,7 +28095,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. \n\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver. 
For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", + "markdownDescription": "The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. \n\n- **awsfirelens** - Specifies the firelens logging driver. For more information on configuring Firelens, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver. 
For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", "title": "LogDriver", "type": "string" }, @@ -28811,7 +28811,7 @@ "type": "number" }, "ShareDecaySeconds": { - "markdownDescription": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn\u2019t meet your needs.", + "markdownDescription": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn't meet your needs.", "title": "ShareDecaySeconds", "type": "number" }, @@ -31954,7 +31954,7 @@ "items": { "type": "string" }, - "markdownDescription": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.", + "markdownDescription": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.\n\nTo specify a Region [that's disabled by default](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#rande-manage-enable) , you must first enable the Region. 
For more information, see [Multi-Region replication in AWS Regions disabled by default](https://docs.aws.amazon.com/keyspaces/latest/devguide/multiRegion-replication_how-it-works.html#howitworks_mrr_opt_in) in the *Amazon Keyspaces Developer Guide* .", "title": "RegionList", "type": "array" }, @@ -32959,7 +32959,7 @@ "items": { "type": "string" }, - "markdownDescription": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS`", + "markdownDescription": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS` | `CAN_RUN_JOB`", "title": "CreatorMemberAbilities", "type": "array" }, @@ -39233,7 +39233,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. 
For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. 
To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. 
A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field available for event data stores to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. 
`resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -39556,7 +39556,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . 
To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. 
`sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . 
To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field used to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail User Guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` .
`errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -67549,7 +67549,7 @@ "items": { "type": "string" }, - "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. This limit only applies when you specify the ProjectionType of `INCLUDE` . 
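To ground the advanced event selector fields described above, here is a minimal sketch of how they might combine in an `AWS::CloudTrail::EventDataStore` resource; the logical ID, selector name, and bucket ARN are illustrative placeholders, not values taken from this schema:

```json
{
  "ExampleEventDataStore": {
    "Type": "AWS::CloudTrail::EventDataStore",
    "Properties": {
      "AdvancedEventSelectors": [
        {
          "Name": "S3 object-level data events for one bucket",
          "FieldSelectors": [
            { "Field": "eventCategory", "Equals": [ "Data" ] },
            { "Field": "resources.type", "Equals": [ "AWS::S3::Object" ] },
            { "Field": "resources.ARN", "StartsWith": [ "arn:aws:s3:::amzn-s3-demo-bucket" ] }
          ]
        }
      ]
    }
  }
}
```

This sketch follows the rules above: `eventCategory` and `resources.type` each use the `Equals` operator, and `resources.ARN` uses `StartsWith` with only the bucket ARN so that all objects in the bucket match.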
You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "title": "NonKeyAttributes", "type": "array" }, @@ -68203,7 +68203,7 @@ "items": { "type": "string" }, - "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. This limit only applies when you specify the ProjectionType of `INCLUDE` . You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "title": "NonKeyAttributes", "type": "array" }, @@ -73433,7 +73433,7 @@ "type": "string" }, "DeviceIndex": { - "markdownDescription": "The device index for the network interface attachment. If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", + "markdownDescription": "The device index for the network interface attachment. The primary network interface has a device index of 0. If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", "title": "DeviceIndex", "type": "number" }, @@ -76608,7 +76608,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::SecurityGroup.Egress" }, - "markdownDescription": "The outbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", + "markdownDescription": "The outbound rules associated with the security group.", "title": "SecurityGroupEgress", "type": "array" }, @@ -76616,7 +76616,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::SecurityGroup.Ingress" }, - "markdownDescription": "The inbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", + "markdownDescription": "The inbound rules associated with the security group.", "title": "SecurityGroupIngress", "type": "array" }, @@ -83844,7 +83844,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. 
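As a small illustration of the `NonKeyAttributes` limit just described, the following `Projection` fragment for a secondary index on an `AWS::DynamoDB::Table` (attribute names are invented) counts its two listed attributes toward the 100-attribute total because the `ProjectionType` is `INCLUDE` ; switching to `"ProjectionType": "ALL"` projects every table attribute and is exempt from the limit:

```json
"Projection": {
  "ProjectionType": "INCLUDE",
  "NonKeyAttributes": [ "CreatedAt", "Owner" ]
}
```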
Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. 
The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . 
For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries.
The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. 
Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -85014,7 +85014,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. 
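Putting the `awslogs` options above together, a minimal sketch of a container definition's `LogConfiguration` might look like the following; the log group name, Region, prefix, and buffer size are illustrative placeholders:

```json
"LogConfiguration": {
  "LogDriver": "awslogs",
  "Options": {
    "awslogs-create-group": "true",
    "awslogs-group": "/ecs/example-service",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "example-service",
    "mode": "non-blocking",
    "max-buffer-size": "25m"
  }
}
```

With this sketch, log streams take the form `example-service/container-name/ecs-task-id` , and `non-blocking` mode trades bounded log loss (capped by `max-buffer-size` ) for application availability.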
If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. 
We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. 
Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` .
The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -90848,7 +90848,7 @@ "additionalProperties": false, "properties": { "AtRestEncryptionEnabled": { - "markdownDescription": "A flag that enables encryption at rest when set to `true` .\n\nYou cannot modify the value of `AtRestEncryptionEnabled` after the replication group is created. To enable encryption at rest on a replication group you must set `AtRestEncryptionEnabled` to `true` when you create the replication group.\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", + "markdownDescription": "A flag that enables encryption at rest when set to `true` .\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", "title": "AtRestEncryptionEnabled", "type": "boolean" }, @@ -91049,7 +91049,7 @@ "type": "array" }, "TransitEncryptionEnabled": { - "markdownDescription": "A flag that enables in-transit encryption when set to `true` .\n\nYou cannot modify the value of `TransitEncryptionEnabled` after the cluster is created. To enable in-transit encryption on a cluster you must set `TransitEncryptionEnabled` to `true` when you create a cluster.\n\nThis parameter is valid only if the `Engine` parameter is `redis` , the `EngineVersion` parameter is `3.2.6` or `4.x` onward, and the cluster is being created in an Amazon VPC.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> - TransitEncryptionEnabled is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above.\n> - TransitEncryptionEnabled is required when creating a new valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", + "markdownDescription": "A flag that enables in-transit encryption when set to `true` .\n\nThis parameter is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above, and the cluster is being created in an Amazon VPC.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> TransitEncryptionEnabled is required when creating a new valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", "title": "TransitEncryptionEnabled", "type": "boolean" }, @@ -99853,7 +99853,7 @@ "type": "number" }, "StorageType": { - "markdownDescription": "Sets the storage class for the file system that you're creating. Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage. 
HDD is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", + "markdownDescription": "Sets the storage class for the file system that you're creating. Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage, which is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", "title": "StorageType", "type": "string" }, @@ -100025,7 +100025,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100108,7 +100108,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. 
For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100188,7 +100188,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100362,7 +100362,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -102917,7 +102917,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
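For the `WeeklyMaintenanceStartTime` format described above, a minimal FSx configuration fragment might look like the following; the Lustre deployment type is an assumption chosen for illustration:

```json
"LustreConfiguration": {
  "DeploymentType": "PERSISTENT_1",
  "WeeklyMaintenanceStartTime": "1:05:00"
}
```

Here `1:05:00` is the documented example value: weekday 1 (Monday) at 05:00 in the UTC time zone.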
See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -112509,7 +112509,7 @@ "items": { "$ref": "#/definitions/AWS::GroundStation::DataflowEndpointGroup.EndpointDetails" }, - "markdownDescription": "List of Endpoint Details, containing address and port for each endpoint.", + "markdownDescription": "List of Endpoint Details, containing address and port for each endpoint. All dataflow endpoints within a single dataflow endpoint group must be of the same type. You cannot mix AWS Ground Station Agent endpoints with Dataflow endpoints in the same group. If your use case requires both types of endpoints, you must create separate dataflow endpoint groups for each type.", "title": "EndpointDetails", "type": "array" }, @@ -120277,7 +120277,7 @@ }, "AuditCheckConfigurations": { "$ref": "#/definitions/AWS::IoT::AccountAuditConfiguration.AuditCheckConfigurations", - "markdownDescription": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on avialbe auidt checks see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", + "markdownDescription": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on available audit checks see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", "title": "AuditCheckConfigurations" }, "AuditNotificationTargetConfigurations": { @@ -122562,7 +122562,7 @@ "items": { "type": "string" }, - "markdownDescription": "Which checks are performed during the scheduled audit. Checks must be enabled for your account. 
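As a rough sketch of the `AuditCheckConfigurations` behavior described above (account ID and role ARN are placeholders), each check is toggled with an `Enabled` key, and flipping one to `false` deletes the data already collected for that check:

```yaml
ExampleAuditConfiguration:
  Type: AWS::IoT::AccountAuditConfiguration
  Properties:
    AccountId: "123456789012"
    RoleArn: arn:aws:iam::123456789012:role/ExampleIoTAuditRole   # placeholder
    AuditCheckConfigurations:
      LoggingDisabledCheck:
        Enabled: true
      CaCertificateExpiringCheck:
        Enabled: true
      DeviceCertificateSharedCheck:
        Enabled: false   # disabling a check deletes data collected so far for it
```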
(Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently aviable:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`", + "markdownDescription": "Which checks are performed during the scheduled audit. Checks must be enabled for your account. (Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently available:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`", "title": "TargetCheckNames", "type": "array" } @@ -134188,7 +134188,7 @@ "items": { "type": "string" }, - "markdownDescription": "The security groups for the connector.", + "markdownDescription": "The security group IDs for the connector.", "title": "SecurityGroups", "type": "array" }, @@ -149109,7 +149109,7 @@ "additionalProperties": false, "properties": { "DataSource": { - "markdownDescription": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. 
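A scheduled audit that exercises a few of the checks listed above might look like the following hypothetical fragment; the checks named in `TargetCheckNames` must already be enabled on the account:

```yaml
ExampleWeeklyAudit:
  Type: AWS::IoT::ScheduledAudit
  Properties:
    ScheduledAuditName: example-weekly-audit
    Frequency: WEEKLY
    DayOfWeek: MON
    TargetCheckNames:       # must be enabled via AccountAuditConfiguration first
      - LOGGING_DISABLED_CHECK
      - CA_CERTIFICATE_EXPIRING_CHECK
```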
For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on goecoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan. For more information, see the [AWS Service Terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information , see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) on the *Amazon Location Service Developer Guide* .", + "markdownDescription": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on geocoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan.
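Because the `DataSource` field is case-sensitive, a place index is typically declared with one of the exact provider strings above, as in this illustrative fragment (the index name is a placeholder):

```yaml
ExamplePlaceIndex:
  Type: AWS::Location::PlaceIndex
  Properties:
    IndexName: ExamplePlaceIndex
    DataSource: Esri            # exact casing matters; "ESRI" or "HERE" would be rejected
    DataSourceConfiguration:
      IntendedUse: SingleUse    # use Storage only where the provider's terms allow it
```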
For more information, see the [AWS service terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information , see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) on the *Amazon Location Service developer guide* .", "title": "DataSource", "type": "string" }, @@ -152023,12 +152023,12 @@ }, "Firehose": { "$ref": "#/definitions/AWS::MSK::Cluster.Firehose", - "markdownDescription": "", + "markdownDescription": "Details of the Kinesis Data Firehose delivery stream that is the destination for broker logs.", "title": "Firehose" }, "S3": { "$ref": "#/definitions/AWS::MSK::Cluster.S3", - "markdownDescription": "", + "markdownDescription": "Details of the Amazon S3 destination for broker logs.", "title": "S3" } }, @@ -152085,17 +152085,17 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::Cluster.Sasl", - "markdownDescription": "", + "markdownDescription": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true.", "title": "Sasl" }, "Tls": { "$ref": "#/definitions/AWS::MSK::Cluster.Tls", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using TLS. To turn on TLS access control, you must also turn on `EncryptionInTransit` by setting `inCluster` to true and `clientBroker` to `TLS` .", "title": "Tls" }, "Unauthenticated": { "$ref": "#/definitions/AWS::MSK::Cluster.Unauthenticated", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using no authentication.", "title": "Unauthenticated" } }, @@ -152105,12 +152105,12 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get sent to the specified CloudWatch Logs destination.", "title": "Enabled", "type": "boolean" }, "LogGroup": { - "markdownDescription": "", + "markdownDescription": "The CloudWatch log group that is the destination for broker logs.", "title": "LogGroup", "type": "string" } @@ -152124,12 +152124,12 @@ "additionalProperties": false, "properties": { "Arn": { - "markdownDescription": "", + "markdownDescription": "ARN of the configuration to use.", "title": "Arn", "type": "string" }, "Revision": { - "markdownDescription": "", + "markdownDescription": "The revision of the configuration to use.", "title": "Revision", "type": "number" } @@ -152145,12 +152145,12 @@ "properties": { "PublicAccess": { "$ref": "#/definitions/AWS::MSK::Cluster.PublicAccess", - "markdownDescription": "", + "markdownDescription": "Access control settings for the cluster's brokers.", "title": "PublicAccess" }, "VpcConnectivity": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivity", - "markdownDescription": "", + "markdownDescription": "VPC connection control settings for brokers.", "title": "VpcConnectivity" } }, @@ -152161,11 +152161,11 @@ "properties": { "ProvisionedThroughput": { "$ref": "#/definitions/AWS::MSK::Cluster.ProvisionedThroughput", - "markdownDescription": "", + "markdownDescription": "EBS volume provisioned throughput information.", "title": "ProvisionedThroughput" }, "VolumeSize": { - "markdownDescription": "", + "markdownDescription": "The size in GiB of the EBS volume for the data drive on each broker node.", "title": "VolumeSize", "type": 
"number" } @@ -152176,7 +152176,7 @@ "additionalProperties": false, "properties": { "DataVolumeKMSKeyId": { - "markdownDescription": "", + "markdownDescription": "The ARN of the Amazon KMS key for encrypting data at rest. If you don't specify a KMS key, MSK creates one for you and uses it.", "title": "DataVolumeKMSKeyId", "type": "string" } @@ -152207,7 +152207,7 @@ "properties": { "EncryptionAtRest": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionAtRest", - "markdownDescription": "", + "markdownDescription": "The data-volume encryption details.", "title": "EncryptionAtRest" }, "EncryptionInTransit": { @@ -152222,12 +152222,12 @@ "additionalProperties": false, "properties": { "DeliveryStream": { - "markdownDescription": "", + "markdownDescription": "The Kinesis Data Firehose delivery stream that is the destination for broker logs.", "title": "DeliveryStream", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get send to the specified Kinesis Data Firehose delivery stream.", "title": "Enabled", "type": "boolean" } @@ -152241,7 +152241,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152255,7 +152255,7 @@ "additionalProperties": false, "properties": { "EnabledInBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the JMX Exporter.", "title": "EnabledInBroker", "type": "boolean" } @@ -152270,7 +152270,7 @@ "properties": { "BrokerLogs": { "$ref": "#/definitions/AWS::MSK::Cluster.BrokerLogs", - "markdownDescription": "", + "markdownDescription": "You can configure your MSK cluster to send broker logs to different destination types. 
This configuration specifies the details of these destinations.", "title": "BrokerLogs" } }, @@ -152283,7 +152283,7 @@ "additionalProperties": false, "properties": { "EnabledInBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the Node Exporter.", "title": "EnabledInBroker", "type": "boolean" } @@ -152298,7 +152298,7 @@ "properties": { "Prometheus": { "$ref": "#/definitions/AWS::MSK::Cluster.Prometheus", - "markdownDescription": "", + "markdownDescription": "Prometheus exporter settings.", "title": "Prometheus" } }, @@ -152312,12 +152312,12 @@ "properties": { "JmxExporter": { "$ref": "#/definitions/AWS::MSK::Cluster.JmxExporter", - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the JMX Exporter.", "title": "JmxExporter" }, "NodeExporter": { "$ref": "#/definitions/AWS::MSK::Cluster.NodeExporter", - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the Node Exporter.", "title": "NodeExporter" } }, @@ -152327,12 +152327,12 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Provisioned throughput is on or off.", "title": "Enabled", "type": "boolean" }, "VolumeThroughput": { - "markdownDescription": "", + "markdownDescription": "Throughput value of the EBS volumes for the data drive on each Kafka broker node in MiB per second.", "title": "VolumeThroughput", "type": "number" } @@ -152343,7 +152343,7 @@ "additionalProperties": false, "properties": { "Type": { - "markdownDescription": "", + "markdownDescription": "DISABLED means that public access is turned off. SERVICE_PROVIDED_EIPS means that public access is turned on.", "title": "Type", "type": "string" } @@ -152354,17 +152354,17 @@ "additionalProperties": false, "properties": { "Bucket": { - "markdownDescription": "", + "markdownDescription": "The name of the S3 bucket that is the destination for broker logs.", "title": "Bucket", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get sent to the specified Amazon S3 destination.", "title": "Enabled", "type": "boolean" }, "Prefix": { - "markdownDescription": "", + "markdownDescription": "The S3 prefix that is the destination for broker logs.", "title": "Prefix", "type": "string" } @@ -152379,12 +152379,12 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::Cluster.Iam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM.", "title": "Iam" }, "Scram": { "$ref": "#/definitions/AWS::MSK::Cluster.Scram", - "markdownDescription": "", + "markdownDescription": "Details for SASL/SCRAM client authentication.", "title": "Scram" } }, @@ -152394,7 +152394,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/SCRAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152409,7 +152409,7 @@ "properties": { "EBSStorageInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.EBSStorageInfo", - "markdownDescription": "", + "markdownDescription": "EBS volume information.", "title": "EBSStorageInfo" } }, @@ -152422,12 +152422,12 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "List of AWS Private CA ARNs.", "title": "CertificateAuthorityArnList", "type": "array" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "TLS
authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152438,7 +152438,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Unauthenticated is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152453,7 +152453,7 @@ "properties": { "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityClientAuthentication", - "markdownDescription": "", + "markdownDescription": "VPC connection control settings for brokers.", "title": "ClientAuthentication" } }, @@ -152464,12 +152464,12 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivitySasl", - "markdownDescription": "", + "markdownDescription": "Details for VpcConnectivity ClientAuthentication using SASL.", "title": "Sasl" }, "Tls": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityTls", - "markdownDescription": "", + "markdownDescription": "Details for VpcConnectivity ClientAuthentication using TLS.", "title": "Tls" } }, @@ -152479,7 +152479,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152494,12 +152494,12 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityIam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM for VpcConnectivity.", "title": "Iam" }, "Scram": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityScram", - "markdownDescription": "", + "markdownDescription": "Details for SASL/SCRAM client authentication for VpcConnectivity.", "title": "Scram" } }, @@ -152509,7 +152509,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/SCRAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152523,7 +152523,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "TLS authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -153111,7 +153111,7 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.Sasl", - "markdownDescription": "", + "markdownDescription": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true.", "title": "Sasl" } }, @@ -153124,7 +153124,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -153139,7 +153139,7 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.Iam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM.", "title": "Iam" } }, @@ -153957,7 +153957,7 @@ "type": "string" }, "Status": { - "markdownDescription": "The status of Amazon Macie for the account. Valid values are: `ENABLED` , start or resume all Macie activities for the account; and, `PAUSED` , suspend all Macie activities for the account.", + "markdownDescription": "The status of Amazon Macie for the account. 
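Pulling the SASL constraints above together, a sketch of a cluster fragment that turns on SASL/IAM and therefore also enables in-cluster TLS with `clientBroker` set to `TLS` (all values illustrative):

```yaml
ClientAuthentication:
  Sasl:
    Iam:
      Enabled: true
    Scram:
      Enabled: false
  Unauthenticated:
    Enabled: false          # only needed (as true) with TLS_PLAINTEXT
EncryptionInfo:
  EncryptionInTransit:
    InCluster: true         # required when any SASL mechanism is enabled
    ClientBroker: TLS       # TLS or TLS_PLAINTEXT when SASL is on
```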
Valid values are: `ENABLED` , start or resume Macie activities for the account; and, `PAUSED` , suspend Macie activities for the account.", "title": "Status", "type": "string" } @@ -171640,7 +171640,7 @@ "type": "object" }, "StorageCapacity": { - "markdownDescription": "The default storage capacity for the workflow runs, in gibibytes.", + "markdownDescription": "The default static storage capacity (in gibibytes) for runs that use this workflow or workflow version.", "title": "StorageCapacity", "type": "number" }, @@ -234386,7 +234386,7 @@ "type": "string" }, "ResourceId": { - "markdownDescription": "The ID of the Amazon Virtual Private Cloud VPC that you're configuring Resolver for.", + "markdownDescription": "The ID of the Amazon Virtual Private Cloud VPC or a Route 53 Profile that you're configuring Resolver for.", "title": "ResourceId", "type": "string" } @@ -260580,7 +260580,7 @@ "additionalProperties": false, "properties": { "Name": { - "markdownDescription": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? *`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.", + "markdownDescription": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? *`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` , `U+FFFE-FFFF` )\n- surrogates ( `U+D800-DFFF` )\n- invalid characters ( `U+10FFFF` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.", "title": "Name", "type": "string" }, @@ -262848,7 +262848,7 @@ "additionalProperties": false, "properties": { "ActiveDate": { - "markdownDescription": "An optional date that specifies when the certificate becomes active.", + "markdownDescription": "An optional date that specifies when the certificate becomes active. If you do not specify a value, `ActiveDate` takes the same value as `NotBeforeDate` , which is specified by the CA.", "title": "ActiveDate", "type": "string" }, @@ -262868,7 +262868,7 @@ "type": "string" }, "InactiveDate": { - "markdownDescription": "An optional date that specifies when the certificate becomes inactive.", + "markdownDescription": "An optional date that specifies when the certificate becomes inactive. If you do not specify a value, `InactiveDate` takes the same value as `NotAfterDate` , which is specified by the CA.", "title": "InactiveDate", "type": "string" }, @@ -263038,7 +263038,7 @@ "type": "string" }, "MdnResponse": { - "markdownDescription": "Used for outbound requests (from an AWS Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.", + "markdownDescription": "Used for outbound requests (from an AWS Transfer Family connector to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. 
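The widened `ResourceId` wording above (a VPC or a Route 53 Profile) applies to `AWS::Route53Resolver::ResolverConfig`; a minimal hypothetical declaration with a placeholder VPC ID:

```yaml
ExampleResolverConfig:
  Type: AWS::Route53Resolver::ResolverConfig
  Properties:
    ResourceId: vpc-0123456789abcdef0   # a VPC ID or, per the update above, a Route 53 Profile
    AutodefinedReverseFlag: DISABLE
```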
Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.", "title": "MdnResponse", "type": "string" }, @@ -263072,12 +263072,12 @@ "items": { "type": "string" }, - "markdownDescription": "The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. You can use the `ssh-keyscan` command against the SFTP server to retrieve the necessary key.\n\nThe three standard SSH public key format elements are `` , `` , and an optional `` , with spaces between each element. Specify only the `` and `` : do not enter the `` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `` string is `ssh-rsa` .\n- For ECDSA keys, the `` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza... `TrustedHostKeys` is optional for `CreateConnector` . If not provided, you can use `TestConnection` to retrieve the server host key during the initial connection attempt, and subsequently update the connector with the observed host key. \n\nThe three standard SSH public key format elements are `` , `` , and an optional `` , with spaces between each element. Specify only the `` and `` : do not enter the `` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `` string is `ssh-rsa` .\n- For ECDSA keys, the `` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza... - Required when creating an SFTP connector\n> - Optional when updating an existing SFTP connector", "title": "UserSecretId", "type": "string" } @@ -263239,7 +263239,7 @@ "type": "string" }, "LoggingRole": { - "markdownDescription": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.", + "markdownDescription": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.", "title": "LoggingRole", "type": "string" }, diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 41a09c269..8763b9e44 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -97,7 +97,7 @@ "AWS::ACMPCA::CertificateAuthority": { "CsrExtensions": "Specifies information to be added to the extension section of the certificate signing request (CSR).", "KeyAlgorithm": "Type of the public key algorithm and size, in bits, of the key pair that your CA creates when it issues a certificate. 
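For the SFTP side of `AWS::Transfer::Connector`, the `TrustedHostKeys` guidance above translates into a fragment like this sketch; the key body is truncated the same way the documentation truncates it, and the ARNs are placeholders:

```yaml
ExampleSftpConnector:
  Type: AWS::Transfer::Connector
  Properties:
    AccessRole: arn:aws:iam::123456789012:role/ExampleTransferAccessRole  # placeholder
    Url: sftp://ftp.host.com
    SftpConfig:
      TrustedHostKeys:
        # <key type> and <body> from `ssh-keyscan ftp.host.com`, without the hostname
        - "ssh-rsa AAAAB3Nza..."
      UserSecretId: arn:aws:secretsmanager:us-east-1:123456789012:secret:example  # placeholder
```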
When you create a subordinate CA, you must use a key algorithm supported by the parent CA.", - "KeyStorageSecurityStandard": "Specifies a cryptographic key management compliance standard used for handling CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions do not support the default. When creating a CA in these Regions, you must provide `FIPS_140_2_LEVEL_2_OR_HIGHER` as the argument for `KeyStorageSecurityStandard` . Failure to do this results in an `InvalidArgsException` with the message, \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in various Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .", + "KeyStorageSecurityStandard": "Specifies a cryptographic key management compliance standard for handling and protecting CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions don't support the default value. When you create a CA in these Regions, you must use `CCPC_LEVEL_1_OR_HIGHER` for the `KeyStorageSecurityStandard` parameter. If you don't, the operation returns an `InvalidArgsException` with this message: \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in different AWS Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .", "RevocationConfiguration": "Information about the Online Certificate Status Protocol (OCSP) configuration or certificate revocation list (CRL) created and maintained by your private CA.", "SigningAlgorithm": "Name of the algorithm your private CA uses to sign certificate requests.\n\nThis parameter should not be confused with the `SigningAlgorithm` parameter used to sign certificates when they are issued.", "Subject": "Structure that contains X.500 distinguished name information for your private CA.", @@ -252,7 +252,19 @@ "Alias": "The alias that is assigned to this workspace to help identify it. It does not need to be unique.", "KmsKeyArn": "(optional) The ARN for a customer managed AWS KMS key to use for encrypting data within your workspace. For more information about using your own key in your workspace, see [Encryption at rest](https://docs.aws.amazon.com/prometheus/latest/userguide/encryption-at-rest-Amazon-Service-Prometheus.html) in the *Amazon Managed Service for Prometheus User Guide* .", "LoggingConfiguration": "Contains information about the logging configuration for the workspace.", - "Tags": "The list of tag keys and values that are associated with the workspace." + "Tags": "The list of tag keys and values that are associated with the workspace.", + "WorkspaceConfiguration": "Use this structure to define label sets and the ingestion limits for time series that match label sets, and to specify the retention period of the workspace." + }, + "AWS::APS::Workspace Label": { + "Name": "The name for this label.", + "Value": "The value for this label." + }, + "AWS::APS::Workspace LimitsPerLabelSet": { + "LabelSet": "This defines one label set that will have an enforced ingestion limit. 
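Read as a template, the `KeyStorageSecurityStandard` note above means a CA in one of the affected Regions is declared roughly like this sketch (subject values are placeholders):

```yaml
ExamplePrivateCA:
  Type: AWS::ACMPCA::CertificateAuthority
  Properties:
    Type: ROOT
    KeyAlgorithm: RSA_2048
    SigningAlgorithm: SHA256WITHRSA
    KeyStorageSecurityStandard: CCPC_LEVEL_1_OR_HIGHER  # required where the default isn't supported
    Subject:
      CommonName: example.com
```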
You can set ingestion limits on time series that match defined label sets, to help prevent a workspace from being overwhelmed with unexpected spikes in time series ingestion.\n\nLabel values accept all UTF-8 characters with one exception. If the label name is metric name label `__ *name* __` , then the *metric* part of the name must conform to the following pattern: `[a-zA-Z_:][a-zA-Z0-9_:]*`", + "Limits": "This structure contains the information about the limits that apply to time series that match this label set." + }, + "AWS::APS::Workspace LimitsPerLabelSetEntry": { + "MaxSeries": "The maximum number of active series that can be ingested that match this label set.\n\nSetting this to 0 causes no label set limit to be enforced, but it does cause Amazon Managed Service for Prometheus to vend label set metrics to CloudWatch." }, "AWS::APS::Workspace LoggingConfiguration": { "LogGroupArn": "The ARN of the CloudWatch log group to which the vended log data will be published. This log group must exist prior to calling this operation." }, @@ -261,6 +273,10 @@ "Key": "The key of the tag. Must not begin with `aws:` .", "Value": "The value of the tag." }, + "AWS::APS::Workspace WorkspaceConfiguration": { + "LimitsPerLabelSets": "This is an array of structures, where each structure defines a label set for the workspace, and defines the ingestion limit for active time series for each of those label sets. Each label name in a label set must be unique.", + "RetentionPeriodInDays": "Specifies how many days metrics will be retained in the workspace." + }, "AWS::ARCZonalShift::AutoshiftObserverNotificationStatus": { "Status": "" }, @@ -1125,6 +1141,7 @@ "DisableExecuteApiEndpoint": "Specifies whether clients can invoke your API by using the default `execute-api` endpoint. By default, clients can invoke your API with the default https://{api_id}.execute-api.{region}.amazonaws.com endpoint. To require that clients use a custom domain name to invoke your API, disable the default endpoint.", "DisableSchemaValidation": "Avoid validating models when creating a deployment. Supported only for WebSocket APIs.", "FailOnWarnings": "Specifies whether to rollback the API creation when a warning is encountered. By default, API creation continues if a warning is encountered.", + "IpAddressType": "The IP address types that can invoke the API. Use `ipv4` to allow only IPv4 addresses to invoke your API, or use `dualstack` to allow both IPv4 and IPv6 addresses to invoke your API.\n\nDon\u2019t use IP address type for an HTTP API based on an OpenAPI specification. Instead, specify the IP address type in the OpenAPI specification.", "Name": "The name of the API. Required unless you specify an OpenAPI definition for `Body` or `S3BodyLocation` .", "ProtocolType": "The API protocol. Valid values are `WEBSOCKET` or `HTTP` . Required unless you specify an OpenAPI definition for `Body` or `S3BodyLocation` .", "RouteKey": "This property is part of quick create. If you don't specify a `routeKey` , a default route of `$default` is created. The `$default` route acts as a catch-all for any request made to your API, for a particular stage. The `$default` route key can't be modified. You can add routes after creating the API, and you can update the route keys of additional routes. Supported only for HTTP APIs.", @@ -1223,6 +1240,7 @@ "CertificateArn": "An AWS -managed certificate that will be used by the edge-optimized endpoint for this domain name.
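Assembling the new APS workspace pieces documented above into one hypothetical fragment (the alias, label names, and limits are illustrative):

```yaml
ExampleWorkspace:
  Type: AWS::APS::Workspace
  Properties:
    Alias: example-workspace
    WorkspaceConfiguration:
      RetentionPeriodInDays: 90
      LimitsPerLabelSets:
        - LabelSet:               # label names must be unique within a set
            - Name: team
              Value: ingest
          Limits:
            MaxSeries: 100000     # 0 disables the limit but still vends metrics
```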
AWS Certificate Manager is the only supported source.", "CertificateName": "The user-friendly name of the certificate that will be used by the edge-optimized endpoint for this domain name.", "EndpointType": "The endpoint type.", + "IpAddressType": "The IP address types that can invoke the domain name. Use `ipv4` to allow only IPv4 addresses to invoke your domain name, or use `dualstack` to allow both IPv4 and IPv6 addresses to invoke your domain name.", "OwnershipVerificationCertificateArn": "The Amazon resource name (ARN) for the public certificate issued by AWS Certificate Manager . This ARN is used to validate custom domain ownership. It's required only if you configure mutual TLS and use either an ACM-imported or a private CA certificate ARN as the regionalCertificateArn.", "SecurityPolicy": "The Transport Layer Security (TLS) version of the security policy for this domain name. The valid values are `TLS_1_0` and `TLS_1_2` ." }, @@ -1450,7 +1468,7 @@ "ConfigurationProfileId": "The configuration profile ID.", "Content": "The configuration data, as bytes.\n\n> AWS AppConfig accepts any type of data, including text formats like JSON or TOML, or binary formats like protocol buffers or compressed data.", "ContentType": "A standard MIME type describing the format of the configuration content. For more information, see [Content-Type](https://docs.aws.amazon.com/https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17) .", - "Description": "A description of the configuration.", + "Description": "A description of the configuration.\n\n> Due to HTTP limitations, this field only supports ASCII characters.", "LatestVersionNumber": "An optional locking token used to prevent race conditions from overwriting configuration updates when creating a new version. To ensure your data is not overwritten when creating multiple hosted configuration versions in rapid succession, specify the version number of the latest hosted configuration version.", "VersionLabel": "A user-defined label for an AWS AppConfig hosted configuration version." }, @@ -3490,9 +3508,9 @@ "AWS::ApplicationAutoScaling::ScalableTarget": { "MaxCapacity": "The maximum value that you plan to scale out to. When a scaling policy is in effect, Application Auto Scaling can scale out (expand) as needed to the maximum capacity limit in response to changing demand.", "MinCapacity": "The minimum value that you plan to scale in to. When a scaling policy is in effect, Application Auto Scaling can scale in (contract) as needed to the minimum capacity limit in response to changing demand.", - "ResourceId": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. 
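The `IpAddressType` addition shown for the API and the domain name surfaces in templates roughly as follows (the domain and certificate ARN are placeholders, and the property placement assumes the updated schema above):

```yaml
ExampleDomainName:
  Type: AWS::ApiGatewayV2::DomainName
  Properties:
    DomainName: api.example.com
    DomainNameConfigurations:
      - CertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/example  # placeholder
        EndpointType: REGIONAL
        IpAddressType: dualstack   # ipv4 would limit callers to IPv4 only
        SecurityPolicy: TLS_1_2
```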
Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "ResourceId": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. 
Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "RoleARN": "Specify the Amazon Resource Name (ARN) of an Identity and Access Management (IAM) role that allows Application Auto Scaling to modify the scalable target on your behalf. This can be either an IAM service role that Application Auto Scaling can assume to make calls to other AWS resources on your behalf, or a service-linked role for the specified service. 
For more information, see [How Application Auto Scaling works with IAM](https://docs.aws.amazon.com/autoscaling/application/userguide/security_iam_service-with-iam.html) in the *Application Auto Scaling User Guide* .\n\nTo automatically create a service-linked role (recommended), specify the full ARN of the service-linked role in your stack template. To find the exact ARN of the service-linked role for your AWS or custom resource, see the [Service-linked roles](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) topic in the *Application Auto Scaling User Guide* . Look for the ARN in the table at the bottom of the page.", - "ScalableDimension": "The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", + "ScalableDimension": "The scalable dimension associated with the scalable target. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "ScheduledActions": "The scheduled actions for the scalable target. Duplicates aren't allowed.", "ServiceNamespace": "The namespace of the AWS service that provides the resource, or a `custom-resource` .", "SuspendedState": "An embedded object that contains attributes and attribute values that are used to suspend and resume automatic scaling. Setting the value of an attribute to `true` suspends the specified scaling activities. 
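Taken together, the `ResourceId` , `ScalableDimension` , and `SuspendedState` descriptions above compose into a scalable target such as this ECS sketch (cluster and service names are placeholders):

```yaml
ExampleScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    ServiceNamespace: ecs
    ResourceId: service/my-cluster/my-service     # resource type + unique identifier
    ScalableDimension: ecs:service:DesiredCount
    MinCapacity: 1
    MaxCapacity: 10
    SuspendedState:
      DynamicScalingInSuspended: false
      DynamicScalingOutSuspended: false
      ScheduledScalingSuspended: false
```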
Setting it to `false` (default) resumes the specified scaling activities.\n\n*Suspension Outcomes*\n\n- For `DynamicScalingInSuspended` , while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.\n- For `DynamicScalingOutSuspended` , while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.\n- For `ScheduledScalingSuspended` , while a suspension is in effect, all scaling activities that involve scheduled actions are suspended." @@ -3516,10 +3534,10 @@ }, "AWS::ApplicationAutoScaling::ScalingPolicy": { "PolicyName": "The name of the scaling policy.\n\nUpdates to the name of a target tracking scaling policy are not supported, unless you also update the metric used for scaling. To change only a target tracking scaling policy's name, first delete the policy by removing the existing `AWS::ApplicationAutoScaling::ScalingPolicy` resource from the template and updating the stack. Then, recreate the resource with the same settings and a different name.", - "PolicyType": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.", + "PolicyType": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.\n\n`PredictiveScaling` \u2014Only supported for Amazon ECS", "PredictiveScalingPolicyConfiguration": "The predictive scaling policy configuration.", - "ResourceId": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. 
More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", - "ScalableDimension": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", + "ResourceId": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. 
More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "ScalableDimension": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. 
Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "ScalingTargetId": "The CloudFormation-generated ID of an Application Auto Scaling scalable target. For more information about the ID, see the Return Value section of the `AWS::ApplicationAutoScaling::ScalableTarget` resource.\n\n> You must specify either the `ScalingTargetId` property, or the `ResourceId` , `ScalableDimension` , and `ServiceNamespace` properties, but not both.", "ServiceNamespace": "The namespace of the AWS service that provides the resource, or a `custom-resource` .", "StepScalingPolicyConfiguration": "A step scaling policy.", @@ -3757,7 +3775,7 @@ "AWS::ApplicationSignals::ServiceLevelObjective": { "BurnRateConfigurations": "Each object in this array defines the length of the look-back window used to calculate one burn rate metric for this SLO. The burn rate measures how fast the service is consuming the error budget, relative to the attainment goal of the SLO.", "Description": "An optional description for this SLO.", - "ExclusionWindows": "", + "ExclusionWindows": "The time window to be excluded from the SLO performance metrics.", "Goal": "This structure contains the attributes that determine the goal of an SLO. This includes the time period for evaluation and the attainment threshold.", "Name": "A name for this SLO.", "RequestBasedSli": "A structure containing information about the performance metric that this SLO monitors, if this is a request-based SLO.", @@ -3772,15 +3790,19 @@ "DurationUnit": "Specifies the calendar interval unit.", "StartTime": "The date and time when you want the first interval to start. Be sure to choose a time that configures the intervals the way that you want. 
For example, if you want weekly intervals starting on Mondays at 6 a.m., be sure to specify a start time that is a Monday at 6 a.m.\n\nWhen used in a raw HTTP Query API, it is formatted as epoch time in seconds. For example: `1698778057`\n\nAs soon as one calendar interval ends, another automatically begins." }, + "AWS::ApplicationSignals::ServiceLevelObjective DependencyConfig": { + "DependencyKeyAttributes": "If this SLO is related to a metric collected by Application Signals, you must use this field to specify which dependency the SLO metric is related to.\n\n- `Type` designates the type of object this is.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.", + "DependencyOperationName": "When the SLO monitors a specific operation of the dependency, this field specifies the name of that operation in the dependency." + }, "AWS::ApplicationSignals::ServiceLevelObjective Dimension": { "Name": "The name of the dimension. Dimension names must contain only ASCII characters, must include at least one non-whitespace character, and cannot start with a colon ( `:` ). ASCII control characters are not supported as part of dimension names.", "Value": "The value of the dimension. Dimension values must contain only ASCII characters and must include at least one non-whitespace character. ASCII control characters are not supported as part of dimension values." }, "AWS::ApplicationSignals::ServiceLevelObjective ExclusionWindow": { - "Reason": "A description explaining why this time period should be excluded from SLO calculations.", - "RecurrenceRule": "The recurrence rule for the SLO time window exclusion. Supports both cron and rate expressions.", - "StartTime": "The start of the SLO time window exclusion. Defaults to current time if not specified.", - "Window": "The SLO time window exclusion ." + "Reason": "The reason for the time exclusion window. For example, maintenance.", + "RecurrenceRule": "The recurrence rule for the time exclusion window.", + "StartTime": "The start time of the time exclusion window.", + "Window": "The time exclusion window." }, "AWS::ApplicationSignals::ServiceLevelObjective Goal": { "AttainmentGoal": "The threshold that determines if the goal is being met.\n\nIf this is a period-based SLO, the attainment goal is the percentage of good periods that meet the threshold requirements to the total periods within the interval. For example, an attainment goal of 99.9% means that within your interval, you are targeting 99.9% of the periods to be in healthy state.\n\nIf this is a request-based SLO, the attainment goal is the percentage of requests that must be successful to meet the attainment goal.\n\nIf you omit this parameter, 99 is used to represent 99% as the attainment goal.", @@ -3814,7 +3836,7 @@ "GoodCountMetric": "If you want to count \"good requests\" to determine the percentage of successful requests for this request-based SLO, specify the metric to use as \"good requests\" in this structure."
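The `ResourceId` , `ScalableDimension` , and `ServiceNamespace` formats documented earlier for `AWS::ApplicationAutoScaling::ScalingPolicy` compose as in the following minimal CloudFormation sketch; the cluster/service names, capacity bounds, and target value are illustrative assumptions, not values taken from this schema.

```yaml
# Minimal sketch: target tracking scaling for an ECS service.
# Cluster/service names and capacity bounds are illustrative assumptions.
Resources:
  ScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ResourceId: service/my-cluster/my-service     # <resource-type>/<unique-identifier>
      ScalableDimension: ecs:service:DesiredCount   # <namespace>:<resource-type>:<property>
      MinCapacity: 1
      MaxCapacity: 10
  ScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: cpu-target-tracking
      PolicyType: TargetTrackingScaling             # not supported for Amazon EMR
      ScalingTargetId: !Ref ScalableTarget          # the CloudFormation-generated target ID
      TargetTrackingScalingPolicyConfiguration:
        TargetValue: 60.0
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
```

Per the `ScalingTargetId` note above, referencing the scalable target this way means omitting `ResourceId` , `ScalableDimension` , and `ServiceNamespace` on the policy; you specify one form or the other, not both.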
}, "AWS::ApplicationSignals::ServiceLevelObjective RecurrenceRule": { - "Expression": "A cron or rate expression that specifies the schedule for the exclusion window." + "Expression": "The following two rules are supported:\n\n- rate(value unit) - The value must be a positive integer and the unit can be hour|day|month.\n- cron - An expression which consists of six fields separated by white spaces: (minutes hours day_of_month month day_of_week year)." }, "AWS::ApplicationSignals::ServiceLevelObjective RequestBasedSli": { "ComparisonOperator": "The arithmetic operation used when comparing the specified metric to the threshold.", @@ -3822,6 +3844,7 @@ "RequestBasedSliMetric": "A structure that contains information about the metric that the SLO monitors." }, "AWS::ApplicationSignals::ServiceLevelObjective RequestBasedSliMetric": { + "DependencyConfig": "Identifies the dependency using the `DependencyKeyAttributes` and `DependencyOperationName` .", "KeyAttributes": "This is a string-to-string map that contains information about the type of object that this SLO is related to. It can include the following fields.\n\n- `Type` designates the type of object that this SLO is related to.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.\n- `AwsAccountId` allows you to create an SLO for an object that exists in another account.", "MetricType": "If the SLO monitors either the `LATENCY` or `AVAILABILITY` metric that Application Signals collects, this field displays which of those metrics is used.", "MonitoredRequestCountMetric": "Use this structure to define the metric that you want to use as the \"good request\" or \"bad request\" value for a request-based SLO. This value observed for the metric defined in `TotalRequestCountMetric` will be divided by the number found for `MonitoredRequestCountMetric` to determine the percentage of successful requests that this SLO tracks.", @@ -3838,6 +3861,7 @@ "SliMetric": "Use this structure to specify the metric to be used for the SLO." }, "AWS::ApplicationSignals::ServiceLevelObjective SliMetric": { + "DependencyConfig": "Identifies the dependency using the `DependencyKeyAttributes` and `DependencyOperationName` .", "KeyAttributes": "If this SLO is related to a metric collected by Application Signals, you must use this field to specify which service the SLO metric is related to. To do so, you must specify at least the `Type` , `Name` , and `Environment` attributes.\n\nThis is a string-to-string map. It can include the following fields.\n\n- `Type` designates the type of object this is.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. 
This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.", "MetricDataQueries": "If this SLO monitors a CloudWatch metric or the result of a CloudWatch metric math expression, use this structure to specify that metric or expression.", "MetricType": "If the SLO is to monitor either the `LATENCY` or `AVAILABILITY` metric that Application Signals collects, use this field to specify which of those metrics is used.", @@ -3850,8 +3874,8 @@ "Value": "The value for the specified tag key." }, "AWS::ApplicationSignals::ServiceLevelObjective Window": { - "Duration": "The number of time units for the exclusion window length.", - "DurationUnit": "The unit of time for the exclusion window duration. Valid values: MINUTE, HOUR, DAY, MONTH." + "Duration": "The length of the time exclusion window.", + "DurationUnit": "The unit of time for the duration of the time exclusion window." }, "AWS::Athena::CapacityReservation": { "CapacityAssignmentConfiguration": "Assigns Athena workgroups (and hence their queries) to capacity reservations. A capacity reservation can have only one capacity assignment configuration, but the capacity assignment configuration can be made up of multiple individual assignments. Each assignment specifies how Athena queries can consume capacity from the capacity reservation that their workgroup is mapped to.", @@ -4736,7 +4760,6 @@ "RestoreTestingPlanName": "The RestoreTestingPlanName is a unique string that is the name of the restore testing plan. This cannot be changed after creation, and it must consist of only alphanumeric characters and underscores.", "ScheduleExpression": "A CRON expression in the specified timezone when a restore testing plan is executed. When no CRON expression is provided, AWS Backup will use the default expression `cron(0 5 ? * * *)` .", "ScheduleExpressionTimezone": "Optional. This is the timezone in which the schedule expression is set. By default, ScheduleExpressions are in UTC. You can modify this to a specified timezone.", - "ScheduleStatus": "This parameter is not currently supported.", "StartWindowHours": "Defaults to 24 hours.\n\nA value in hours after a restore test is scheduled before a job will be canceled if it doesn't start successfully. This value is optional. If this value is included, this parameter has a maximum value of 168 hours (one week).", "Tags": "Optional tags to include. A tag is a key-value pair you can use to manage, filter, and search for your resources. Allowed characters include UTF-8 letters, numbers, spaces, and the following characters: `+ - = . _ : /.`" }, @@ -4811,7 +4834,7 @@ "SecurityGroupIds": "The Amazon EC2 security groups that are associated with instances launched in the compute environment. This parameter is required for Fargate compute resources, where it can contain up to 5 security groups. For Fargate compute resources, providing an empty list is handled as if this parameter wasn't specified and no change is made. For Amazon EC2 compute resources, providing an empty list removes the security groups from the compute resource.\n\nWhen updating a compute environment, changing the Amazon EC2 security groups requires an infrastructure update of the compute environment. 
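Taken together, the `ExclusionWindow` , `RecurrenceRule` , and `Window` fields above shape an `ExclusionWindows` entry on an `AWS::ApplicationSignals::ServiceLevelObjective` resource roughly as in this sketch; the reason, start time, and schedule are illustrative assumptions.

```yaml
# Minimal sketch: a recurring two-hour maintenance exclusion window.
# Reason, StartTime, and the rate expression are illustrative assumptions.
ExclusionWindows:
  - Reason: maintenance
    StartTime: "2025-05-05T06:00:00Z"
    RecurrenceRule:
      Expression: rate(1 month)   # or a six-field cron expression, per RecurrenceRule above
    Window:
      Duration: 2
      DurationUnit: HOUR          # MINUTE, HOUR, DAY, and MONTH were the units documented in the prior revision
```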
For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .", "SpotIamFleetRole": "The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a `SPOT` compute environment. This role is required if the allocation strategy is set to `BEST_FIT` or if the allocation strategy isn't specified. For more information, see [Amazon EC2 spot fleet role](https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. > To tag your Spot Instances on creation, the Spot Fleet IAM role specified here must use the newer *AmazonEC2SpotFleetTaggingRole* managed policy. The previously recommended *AmazonEC2SpotFleetRole* managed policy doesn't have the required permissions to tag Spot Instances. For more information, see [Spot instances not tagged on creation](https://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#spot-instance-no-tag) in the *AWS Batch User Guide* .", "Subnets": "The VPC subnets where the compute resources are launched. Fargate compute resources can contain up to 16 subnets. For Fargate compute resources, providing an empty list will be handled as if this parameter wasn't specified and no change is made. For Amazon EC2 compute resources, providing an empty list removes the VPC subnets from the compute resource. For more information, see [VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) in the *Amazon VPC User Guide* .\n\nWhen updating a compute environment, changing the VPC subnets requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> AWS Batch on Amazon EC2 and AWS Batch on Amazon EKS support Local Zones. For more information, see [Local Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-local-zones) in the *Amazon EC2 User Guide for Linux Instances* , [Amazon EKS and AWS Local Zones](https://docs.aws.amazon.com/eks/latest/userguide/local-zones.html) in the *Amazon EKS User Guide* and [Amazon ECS clusters in Local Zones, Wavelength Zones, and AWS Outposts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-regions-zones.html#clusters-local-zones) in the *Amazon ECS Developer Guide* .\n> \n> AWS Batch on Fargate doesn't currently support Local Zones.", - "Tags": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value-for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` . This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. 
Don't specify it.", + "Tags": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value (for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` ). This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.", "Type": "The type of compute environment: `EC2` , `SPOT` , `FARGATE` , or `FARGATE_SPOT` . For more information, see [Compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide* .\n\nIf you choose `SPOT` , you must also specify an Amazon EC2 Spot Fleet role with the `spotIamFleetRole` parameter. For more information, see [Amazon EC2 spot fleet role](https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html) in the *AWS Batch User Guide* .\n\nWhen updating compute environment, changing the type of a compute environment requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\nWhen updating the type of a compute environment, changing between `EC2` and `SPOT` or between `FARGATE` and `FARGATE_SPOT` will initiate an infrastructure update, but if you switch between `EC2` and `FARGATE` , AWS CloudFormation will create a new compute environment.", "UpdateToLatestImageVersion": "Specifies whether the AMI ID is updated to the latest one that's supported by AWS Batch when the compute environment has an infrastructure update. The default value is `false` .\n\n> An AMI ID can either be specified in the `imageId` or `imageIdOverride` parameters or be determined by the launch template that's specified in the `launchTemplate` parameter. If an AMI ID is specified any of these ways, this parameter is ignored. For more information about to update AMI IDs during an infrastructure update, see [Updating the AMI ID](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html#updating-compute-environments-ami) in the *AWS Batch User Guide* . \n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* ." }, @@ -4838,7 +4861,7 @@ }, "AWS::Batch::ComputeEnvironment UpdatePolicy": { "JobExecutionTimeoutMinutes": "Specifies the job timeout (in minutes) when the compute environment infrastructure is updated. The default value is 30.", - "TerminateJobsOnUpdate": "Specifies whether jobs are automatically terminated when the computer environment infrastructure is updated. The default value is `false` ." 
+ "TerminateJobsOnUpdate": "Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated. The default value is `false` ." }, "AWS::Batch::ConsumableResource": { "ConsumableResourceName": "The name of the consumable resource.", @@ -4871,6 +4894,7 @@ }, "AWS::Batch::JobDefinition ContainerProperties": { "Command": "The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/builder/#cmd) .", + "EnableExecuteCommand": "Determines whether execute command functionality is turned on for this task. If `true` , execute command functionality is turned on all the containers in the task.", "Environment": "The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> We don't recommend using plaintext environment variables for sensitive information, such as credential data. > Environment variables cannot start with \" `AWS_BATCH` \". This naming convention is reserved for variables that AWS Batch sets.", "EphemeralStorage": "The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate .", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the execution role that AWS Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [AWS Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the *AWS Batch User Guide* .", @@ -4914,6 +4938,7 @@ }, "AWS::Batch::JobDefinition EcsTaskProperties": { "Containers": "This object is a list of containers.", + "EnableExecuteCommand": "Determines whether execute command functionality is turned on for this task. If `true` , execute command functionality is turned on all the containers in the task.", "EphemeralStorage": "The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate .", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the execution role that AWS Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [AWS Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the *AWS Batch User Guide* .", "IpcMode": "The IPC resource namespace to use for the containers in the task. 
The valid values are `host` , `task` , or `none` .\n\nIf `host` is specified, all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified `task` share the same IPC resources.\n\nIf `none` is specified, the IPC resources within the containers of a task are private, and are not shared with other containers in a task or on the container instance.\n\nIf no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see [IPC settings](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) in the Docker run reference.", @@ -5014,6 +5039,10 @@ "AWS::Batch::JobDefinition FargatePlatformConfiguration": { "PlatformVersion": "The AWS Fargate platform version where the jobs are running. A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the AWS Fargate platform for compute resources. For more information, see [AWS Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the *Amazon Elastic Container Service Developer Guide* ." }, + "AWS::Batch::JobDefinition FirelensConfiguration": { + "Options": "The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `\"options\":{\"enable-ecs-log-metadata\":\"true|false\",\"config-file-type\":\"s3|file\",\"config-file-value\":\"arn:aws:s3:::mybucket/fluent.conf|filepath\"}` . For more information, see [Creating a task definition that uses a FireLens configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html#firelens-taskdef) in the *Amazon Elastic Container Service Developer Guide* .", + "Type": "The log router to use. The valid values are `fluentd` or `fluentbit` ." + }, "AWS::Batch::JobDefinition Host": { "SourcePath": "The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.\n\n> This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs." }, @@ -5032,7 +5061,7 @@ "Tmpfs": "The container path, mount options, and size (in MiB) of the `tmpfs` mount. This parameter maps to the `--tmpfs` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type." }, "AWS::Batch::JobDefinition LogConfiguration": { - "LogDriver": "The log driver to use for the container. 
The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. \n\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", + "LogDriver": "The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. 
\n\n- **awsfirelens** - Specifies the FireLens logging driver. For more information on configuring FireLens, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com//AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", "Options": "The configuration options to send to the log driver. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", "SecretOptions": "The secrets to pass to the log configuration. 
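As a sketch of how the new `FirelensConfiguration` object and the `awsfirelens` log driver above fit together in a Batch job definition's ECS container list: the application container points its `LogConfiguration` at `awsfirelens` , and a sidecar container carries the `FirelensConfiguration` . Container names, images, and options here are illustrative assumptions, not values confirmed by this schema.

```yaml
# Minimal sketch: an app container shipping logs through a FireLens sidecar.
# Names and images are illustrative assumptions.
Containers:
  - Name: app
    Image: public.ecr.aws/docker/library/busybox:latest
    Command: ["sh", "-c", "echo hello"]
    LogConfiguration:
      LogDriver: awsfirelens              # routed through the log-router container below
  - Name: log-router
    Image: public.ecr.aws/aws-observability/aws-for-fluent-bit:stable
    FirelensConfiguration:
      Type: fluentbit
      Options:
        enable-ecs-log-metadata: "true"   # adds task/cluster metadata to each log event
```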
For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the *AWS Batch User Guide* ." }, @@ -5043,6 +5072,7 @@ }, "AWS::Batch::JobDefinition MultiNodeContainerProperties": { "Command": "The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/builder/#cmd) .", + "EnableExecuteCommand": "Determines whether execute command functionality is turned on for this task. If `true` , execute command functionality is turned on for all the containers in the task.", "Environment": "The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> We don't recommend using plaintext environment variables for sensitive information, such as credential data. > Environment variables cannot start with \" `AWS_BATCH` \". This naming convention is reserved for variables that AWS Batch sets.", "EphemeralStorage": "The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate .", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the execution role that AWS Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [AWS Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the *AWS Batch User Guide* .", @@ -5069,6 +5099,7 @@ }, "AWS::Batch::JobDefinition MultiNodeEcsTaskProperties": { "Containers": "This object is a list of containers.", + "EnableExecuteCommand": "Determines whether execute command functionality is turned on for this task. If `true` , execute command functionality is turned on for all the containers in the task.", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the execution role that AWS Batch can assume. For jobs that run on Fargate resources, you must provide an execution role. For more information, see [AWS Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the *AWS Batch User Guide* .", "IpcMode": "The IPC resource namespace to use for the containers in the task. 
The valid values are `host` , `task` , or `none` .\n\nIf `host` is specified, all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified `task` share the same IPC resources.\n\nIf `none` is specified, the IPC resources within the containers of a task are private, and are not shared with other containers in a task or on the container instance.\n\nIf no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance. For more information, see [IPC settings](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) in the Docker run reference.", "PidMode": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container. For more information, see [PID settings](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#pid-settings---pid) in the Docker run reference.", @@ -5119,6 +5150,7 @@ "DependsOn": "A list of containers that this container depends on.", "Environment": "The environment variables to pass to a container. This parameter maps to Env in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--env` parameter to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> We don't recommend using plaintext environment variables for sensitive information, such as credential data. > Environment variables cannot start with `AWS_BATCH` . This naming convention is reserved for variables that AWS Batch sets.", "Essential": "If the essential parameter of a container is marked as `true` , and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.\n\nAll jobs must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Application Architecture](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/application_architecture.html) in the *Amazon Elastic Container Service Developer Guide* .", + "FirelensConfiguration": "The FireLens configuration for the container. This is used to specify and configure a log router for container logs. 
For more information, see [Custom log routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .", "Image": "The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either `repository-url/image:tag` or `repository-url/image@digest` . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `IMAGE` parameter of the [*docker run*](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .", "LinuxParameters": "Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more information, see [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html) .", "LogConfiguration": "The log configuration specification for the container.\n\nThis parameter maps to `LogConfig` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--log-driver` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) .\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information about the options for different supported log drivers, see [Configure logging drivers](https://docs.aws.amazon.com/https://docs.docker.com/engine/admin/logging/overview/) in the *Docker documentation* .\n\n> Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the `LogConfiguration` data type). Additional log drivers may be available in future releases of the Amazon ECS container agent. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version `--format '{{.Server.APIVersion}}'`\n\n> The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. 
For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .", @@ -5173,7 +5205,7 @@ }, "AWS::Batch::SchedulingPolicy FairsharePolicy": { "ComputeReservation": "A value used to reserve some of the available maximum vCPU for share identifiers that aren't already used.\n\nThe reserved ratio is `( *computeReservation* /100)^ *ActiveFairShares*` where `*ActiveFairShares*` is the number of active share identifiers.\n\nFor example, a `computeReservation` value of 50 indicates that AWS Batch reserves 50% of the maximum available vCPU if there's only one share identifier. It reserves 25% if there are two share identifiers. It reserves 12.5% if there are three share identifiers. A `computeReservation` value of 25 indicates that AWS Batch should reserve 25% of the maximum available vCPU if there's only one share identifier, 6.25% if there are two fair share identifiers, and 1.56% if there are three share identifiers.\n\nThe minimum value is 0 and the maximum value is 99.", - "ShareDecaySeconds": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn\u2019t meet your needs.", + "ShareDecaySeconds": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn't meet your needs.", "ShareDistribution": "An array of `SharedIdentifier` objects that contain the weights for the share identifiers for the fair-share policy. Share identifiers that aren't included have a default weight of `1.0` ." }, "AWS::Batch::SchedulingPolicy ShareAttributes": { @@ -5941,7 +5973,13 @@ "WordPolicyConfig": "The word policy you configure for the guardrail." }, "AWS::Bedrock::Guardrail ContentFilterConfig": { + "InputAction": "", + "InputEnabled": "", + "InputModalities": "", "InputStrength": "The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.", + "OutputAction": "", + "OutputEnabled": "", + "OutputModalities": "", "OutputStrength": "The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.", "Type": "The harmful category that the content filter is applied to." 
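The `ContentFilterConfig` entry above gains several `Input*` / `Output*` fields whose descriptions are still empty in this revision. A hedged sketch of one filter entry follows; the `BLOCK` action values and the enabled flags are assumptions, not values confirmed by this schema.

```yaml
# Minimal sketch: one content filter in a guardrail's ContentPolicyConfig.
# InputAction/OutputAction values and the enabled flags are assumptions;
# their descriptions are still empty in this schema revision.
FiltersConfig:
  - Type: HATE
    InputStrength: HIGH
    OutputStrength: HIGH
    InputAction: BLOCK
    OutputAction: BLOCK
    InputEnabled: true
    OutputEnabled: true
```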
}, @@ -5949,6 +5987,8 @@ "FiltersConfig": "Contains the type of the content filter and how strongly it should apply to prompts and model responses." }, "AWS::Bedrock::Guardrail ContextualGroundingFilterConfig": { + "Action": "", + "Enabled": "", "Threshold": "The threshold details for the guardrails contextual grounding filter.", "Type": "The filter details for the guardrails contextual grounding filter." }, @@ -5956,16 +5996,28 @@ "FiltersConfig": "" }, "AWS::Bedrock::Guardrail ManagedWordsConfig": { + "InputAction": "", + "InputEnabled": "", + "OutputAction": "", + "OutputEnabled": "", "Type": "The managed word type to configure for the guardrail." }, "AWS::Bedrock::Guardrail PiiEntityConfig": { "Action": "Configure guardrail action when the PII entity is detected.", + "InputAction": "", + "InputEnabled": "", + "OutputAction": "", + "OutputEnabled": "", "Type": "Configure guardrail type when the PII entity is detected.\n\nThe following PIIs are used to block or mask sensitive information:\n\n- *General*\n\n- *ADDRESS*\n\nA physical address, such as \"100 Main Street, Anytown, USA\" or \"Suite #12, Building 123\". An address can include information such as the street, building, location, city, state, country, county, zip code, precinct, and neighborhood.\n- *AGE*\n\nAn individual's age, including the quantity and unit of time. For example, in the phrase \"I am 40 years old,\" Guardrails recognizes \"40 years\" as an age.\n- *NAME*\n\nAn individual's name. This entity type does not include titles, such as Dr., Mr., Mrs., or Miss. Guardrails doesn't apply this entity type to names that are part of organizations or addresses. For example, Guardrails recognizes the \"John Doe Organization\" as an organization, and it recognizes \"Jane Doe Street\" as an address.\n- *EMAIL*\n\nAn email address, such as *marymajor@email.com* .\n- *PHONE*\n\nA phone number. This entity type also includes fax and pager numbers.\n- *USERNAME*\n\nA user name that identifies an account, such as a login name, screen name, nick name, or handle.\n- *PASSWORD*\n\nAn alphanumeric string that is used as a password, such as \"*very20special#pass*\" .\n- *DRIVER_ID*\n\nThe number assigned to a driver's license, which is an official document permitting an individual to operate one or more motorized vehicles on a public road. A driver's license number consists of alphanumeric characters.\n- *LICENSE_PLATE*\n\nA license plate for a vehicle is issued by the state or country where the vehicle is registered. The format for passenger vehicles is typically five to eight digits, consisting of upper-case letters and numbers. The format varies depending on the location of the issuing state or country.\n- *VEHICLE_IDENTIFICATION_NUMBER*\n\nA Vehicle Identification Number (VIN) uniquely identifies a vehicle. VIN content and format are defined in the *ISO 3779* specification. Each country has specific codes and formats for VINs.\n- *Finance*\n\n- *CREDIT_DEBIT_CARD_CVV*\n\nA three-digit card verification code (CVV) that is present on VISA, MasterCard, and Discover credit and debit cards. For American Express credit or debit cards, the CVV is a four-digit numeric code.\n- *CREDIT_DEBIT_CARD_EXPIRY*\n\nThe expiration date for a credit or debit card. This number is usually four digits long and is often formatted as *month/year* or *MM/YY* . Guardrails recognizes expiration dates such as *01/21* , *01/2021* , and *Jan 2021* .\n- *CREDIT_DEBIT_CARD_NUMBER*\n\nThe number for a credit or debit card. 
These numbers can vary from 13 to 16 digits in length. However, Amazon Comprehend also recognizes credit or debit card numbers when only the last four digits are present.\n- *PIN*\n\nA four-digit personal identification number (PIN) with which you can access your bank account.\n- *INTERNATIONAL_BANK_ACCOUNT_NUMBER*\n\nAn International Bank Account Number has specific formats in each country. For more information, see [www.iban.com/structure](https://docs.aws.amazon.com/https://www.iban.com/structure) .\n- *SWIFT_CODE*\n\nA SWIFT code is a standard format of Bank Identifier Code (BIC) used to specify a particular bank or branch. Banks use these codes for money transfers such as international wire transfers.\n\nSWIFT codes consist of eight or 11 characters. The 11-digit codes refer to specific branches, while eight-digit codes (or 11-digit codes ending in 'XXX') refer to the head or primary office.\n- *IT*\n\n- *IP_ADDRESS*\n\nAn IPv4 address, such as *198.51.100.0* .\n- *MAC_ADDRESS*\n\nA *media access control* (MAC) address is a unique identifier assigned to a network interface controller (NIC).\n- *URL*\n\nA web address, such as *www.example.com* .\n- *AWS_ACCESS_KEY*\n\nA unique identifier that's associated with a secret access key; you use the access key ID and secret access key to sign programmatic AWS requests cryptographically.\n- *AWS_SECRET_KEY*\n\nA unique identifier that's associated with an access key. You use the access key ID and secret access key to sign programmatic AWS requests cryptographically.\n- *USA specific*\n\n- *US_BANK_ACCOUNT_NUMBER*\n\nA US bank account number, which is typically 10 to 12 digits long.\n- *US_BANK_ROUTING_NUMBER*\n\nA US bank account routing number. These are typically nine digits long.\n- *US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER*\n\nA US Individual Taxpayer Identification Number (ITIN) is a nine-digit number that starts with a \"9\" and contains a \"7\" or \"8\" as the fourth digit. An ITIN can be formatted with a space or a dash after the third and fourth digits.\n- *US_PASSPORT_NUMBER*\n\nA US passport number. Passport numbers range from six to nine alphanumeric characters.\n- *US_SOCIAL_SECURITY_NUMBER*\n\nA US Social Security Number (SSN) is a nine-digit number that is issued to US citizens, permanent residents, and temporary working residents.\n- *Canada specific*\n\n- *CA_HEALTH_NUMBER*\n\nA Canadian Health Service Number is a 10-digit unique identifier, required for individuals to access healthcare benefits.\n- *CA_SOCIAL_INSURANCE_NUMBER*\n\nA Canadian Social Insurance Number (SIN) is a nine-digit unique identifier, required for individuals to access government programs and benefits.\n\nThe SIN is formatted as three groups of three digits, such as *123-456-789* . A SIN can be validated through a simple check-digit process called the [Luhn algorithm](https://docs.aws.amazon.com/https://www.wikipedia.org/wiki/Luhn_algorithm) .\n- *UK Specific*\n\n- *UK_NATIONAL_HEALTH_SERVICE_NUMBER*\n\nA UK National Health Service Number is a 10-17 digit number, such as *485 777 3456* . The current system formats the 10-digit number with spaces after the third and sixth digits. The final digit is an error-detecting checksum.\n- *UK_NATIONAL_INSURANCE_NUMBER*\n\nA UK National Insurance Number (NINO) provides individuals with access to National Insurance (social security) benefits. It is also used for some purposes in the UK tax system.\n\nThe number is nine digits long and starts with two letters, followed by six numbers and one letter. 
A NINO can be formatted with a space or a dash after the two letters and after the second, fourth, and sixth digits.\n- *UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER*\n\nA UK Unique Taxpayer Reference (UTR) is a 10-digit number that identifies a taxpayer or a business.\n- *Custom*\n\n- *Regex filter* - You can use regular expressions to define patterns for a guardrail to recognize and act upon, such as a serial number or booking ID." }, "AWS::Bedrock::Guardrail RegexConfig": { "Action": "The guardrail action to configure when matching regular expression is detected.", "Description": "The description of the regular expression to configure for the guardrail.", + "InputAction": "", + "InputEnabled": "", "Name": "The name of the regular expression to configure for the guardrail.", + "OutputAction": "", + "OutputEnabled": "", "Pattern": "The regular expression pattern to configure for the guardrail." }, "AWS::Bedrock::Guardrail SensitiveInformationPolicyConfig": { @@ -5979,13 +6031,21 @@ "AWS::Bedrock::Guardrail TopicConfig": { "Definition": "A definition of the topic to deny.", "Examples": "A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.", + "InputAction": "", + "InputEnabled": "", "Name": "The name of the topic to deny.", + "OutputAction": "", + "OutputEnabled": "", "Type": "Specifies to deny the topic." }, "AWS::Bedrock::Guardrail TopicPolicyConfig": { "TopicsConfig": "A list of policies related to topics that the guardrail should deny." }, "AWS::Bedrock::Guardrail WordConfig": { + "InputAction": "", + "InputEnabled": "", + "OutputAction": "", + "OutputEnabled": "", "Text": "Text of the word configured for the guardrail to block." }, "AWS::Bedrock::Guardrail WordPolicyConfig": { @@ -6031,6 +6091,7 @@ "Endpoint": "The endpoint URL of your MongoDB Atlas cluster for your knowledge base.", "EndpointServiceName": "The name of the VPC endpoint service in your account that is connected to your MongoDB Atlas cluster.", "FieldMapping": "Contains the names of the fields to which to map information about the vector store.", + "TextIndexName": "The name of the text search index in the MongoDB collection. This is required for using the hybrid search feature.", "VectorIndexName": "The name of the MongoDB Atlas vector search index." }, "AWS::Bedrock::KnowledgeBase MongoDbAtlasFieldMapping": { @@ -6046,6 +6107,17 @@ "MetadataField": "The name of the field in which Amazon Bedrock stores metadata about the vector store.", "TextField": "The name of the field in which Amazon Bedrock stores the raw text from your data. The text is split according to the chunking strategy you choose." }, + "AWS::Bedrock::KnowledgeBase OpenSearchManagedClusterConfiguration": { + "DomainArn": "The Amazon Resource Name (ARN) of the OpenSearch domain.", + "DomainEndpoint": "The endpoint URL of the OpenSearch domain.", + "FieldMapping": "Contains the names of the fields to which to map information about the vector store.", + "VectorIndexName": "The name of the vector store." + }, + "AWS::Bedrock::KnowledgeBase OpenSearchManagedClusterFieldMapping": { + "MetadataField": "The name of the field in which Amazon Bedrock stores metadata about the vector store.", + "TextField": "The name of the field in which Amazon Bedrock stores the raw text from your data. The text is split according to the chunking strategy you choose.", + "VectorField": "The name of the field in which Amazon Bedrock stores the vector embeddings for your data sources." 
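To make the new storage option concrete, here is a hedged sketch of how the OpenSearch managed cluster properties above might sit inside a knowledge base's storage configuration. Property names come from the `OpenSearchManagedClusterConfiguration` and `OpenSearchManagedClusterFieldMapping` entries above; the `Type` enum value `OPENSEARCH_MANAGED_CLUSTER` and all identifiers are assumptions.

```yaml
# Hypothetical StorageConfiguration fragment for AWS::Bedrock::KnowledgeBase.
# The Type value and the ARN/endpoint/index names are placeholders.
StorageConfiguration:
  Type: OPENSEARCH_MANAGED_CLUSTER   # assumed enum value
  OpensearchManagedClusterConfiguration:
    DomainArn: arn:aws:es:us-east-1:111122223333:domain/example-domain
    DomainEndpoint: https://example-domain.us-east-1.es.amazonaws.com
    VectorIndexName: example-vector-index
    FieldMapping:
      MetadataField: metadata   # where Bedrock stores vector-store metadata
      TextField: text           # where Bedrock stores the chunked raw text
      VectorField: embedding    # where Bedrock stores the vector embeddings
```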
+ }, "AWS::Bedrock::KnowledgeBase OpenSearchServerlessConfiguration": { "CollectionArn": "The Amazon Resource Name (ARN) of the OpenSearch Service vector store.", "FieldMapping": "Contains the names of the fields to which to map information about the vector store.", @@ -6093,6 +6165,7 @@ "TableName": "The name of the table in the database." }, "AWS::Bedrock::KnowledgeBase RdsFieldMapping": { + "CustomMetadataField": "Provide a name for the universal metadata field where Amazon Bedrock will store any custom metadata from your data source.", "MetadataField": "The name of the field in which Amazon Bedrock stores metadata about the vector store.", "PrimaryKeyField": "The name of the field in which Amazon Bedrock stores the ID for each entry.", "TextField": "The name of the field in which Amazon Bedrock stores the raw text from your data. The text is split according to the chunking strategy you choose.", @@ -6146,6 +6219,7 @@ "AWS::Bedrock::KnowledgeBase StorageConfiguration": { "MongoDbAtlasConfiguration": "Contains the storage configuration of the knowledge base in MongoDB Atlas.", "NeptuneAnalyticsConfiguration": "Contains details about the Neptune Analytics configuration of the knowledge base in Amazon Neptune. For more information, see [Create a vector index in Amazon Neptune Analytics.](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup-neptune.html) .", + "OpensearchManagedClusterConfiguration": "Contains details about the storage configuration of the knowledge base in OpenSearch Managed Cluster. For more information, see [Create a vector index in Amazon OpenSearch Service](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup-osm.html) .", "OpensearchServerlessConfiguration": "Contains the storage configuration of the knowledge base in Amazon OpenSearch Service.", "PineconeConfiguration": "Contains the storage configuration of the knowledge base in Pinecone.", "RdsConfiguration": "Contains details about the storage configuration of the knowledge base in Amazon RDS. For more information, see [Create a vector index in Amazon RDS](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup-rds.html) .", @@ -6172,7 +6246,7 @@ "Variants": "A list of objects, each containing details about a variant of the prompt." }, "AWS::Bedrock::Prompt CachePointBlock": { - "Type": "Indicates that the CachePointBlock is of the default type" + "Type": "Specifies the type of cache point within the CachePointBlock." }, "AWS::Bedrock::Prompt ChatPromptTemplateConfiguration": { "InputVariables": "An array of the variables in the prompt template.", @@ -6181,7 +6255,7 @@ "ToolConfiguration": "Configuration information for the tools that the model can use when generating a response." }, "AWS::Bedrock::Prompt ContentBlock": { - "CachePoint": "Creates a cache checkpoint within a message.", + "CachePoint": "CachePoint to include in the message.", "Text": "Text to include in the message." }, "AWS::Bedrock::Prompt Message": { @@ -6228,7 +6302,7 @@ "Name": "The name of the tool that the model must request." }, "AWS::Bedrock::Prompt SystemContentBlock": { - "CachePoint": "Creates a cache checkpoint within a tool designation", + "CachePoint": "CachePoint to include in the system prompt.", "Text": "A system prompt for the model." }, "AWS::Bedrock::Prompt TextPromptTemplateConfiguration": { @@ -6243,7 +6317,7 @@ "Version": "The version of the Amazon S3 location to use." 
}, "AWS::Bedrock::Prompt Tool": { - "CachePoint": "Creates a cache checkpoint within a tool designation", + "CachePoint": "CachePoint to include in the tool configuration.", "ToolSpec": "The specification for the tool." }, "AWS::Bedrock::Prompt ToolChoice": { @@ -6269,7 +6343,7 @@ "Tags": "A map of tags attached to the prompt version and their values." }, "AWS::Bedrock::PromptVersion CachePointBlock": { - "Type": "Indicates that the CachePointBlock is of the default type" + "Type": "Specifies the type of cache point within the CachePointBlock." }, "AWS::Bedrock::PromptVersion ChatPromptTemplateConfiguration": { "InputVariables": "An array of the variables in the prompt template.", @@ -6278,7 +6352,7 @@ "ToolConfiguration": "Configuration information for the tools that the model can use when generating a response." }, "AWS::Bedrock::PromptVersion ContentBlock": { - "CachePoint": "Creates a cache checkpoint within a message.", + "CachePoint": "CachePoint to include in the message.", "Text": "Text to include in the message." }, "AWS::Bedrock::PromptVersion Message": { @@ -6325,7 +6399,7 @@ "Name": "The name of the tool that the model must request." }, "AWS::Bedrock::PromptVersion SystemContentBlock": { - "CachePoint": "Creates a cache checkpoint within a tool designation", + "CachePoint": "CachePoint to include in the system prompt.", "Text": "A system prompt for the model." }, "AWS::Bedrock::PromptVersion TextPromptTemplateConfiguration": { @@ -6334,7 +6408,7 @@ "Text": "The message for the prompt." }, "AWS::Bedrock::PromptVersion Tool": { - "CachePoint": "Creates a cache checkpoint within a tool designation", + "CachePoint": "CachePoint to include in the tool configuration.", "ToolSpec": "The specification for the tool." }, "AWS::Bedrock::PromptVersion ToolChoice": { @@ -6608,7 +6682,7 @@ "Tags": "An array of key-value pairs to apply to this resource.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." }, "AWS::Cassandra::Keyspace ReplicationSpecification": { - "RegionList": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.", + "RegionList": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.\n\nTo specify a Region [that's disabled by default](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#rande-manage-enable) , you must first enable the Region. For more information, see [Multi-Region replication in AWS Regions disabled by default](https://docs.aws.amazon.com/keyspaces/latest/devguide/multiRegion-replication_how-it-works.html#howitworks_mrr_opt_in) in the *Amazon Keyspaces Developer Guide* .", "ReplicationStrategy": "The options are:\n\n- `SINGLE_REGION` (optional)\n- `MULTI_REGION`\n\nIf no value is specified, the default is `SINGLE_REGION` . If `MULTI_REGION` is specified, `RegionList` is required." }, "AWS::Cassandra::Keyspace Tag": { @@ -6777,7 +6851,9 @@ "Format": "The format of the analysis template.", "MembershipIdentifier": "The identifier for a membership resource.", "Name": "The name of the analysis template.", + "Schema": "The entire schema object.", "Source": "The source of the analysis template.", + "SourceMetadata": "The source metadata for the analysis template.", "Tags": "An optional label that you can assign to a resource when you create it. 
Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." }, "AWS::CleanRooms::AnalysisTemplate AnalysisParameter": { @@ -6789,8 +6865,31 @@ "ReferencedTables": "The tables referenced in the analysis schema." }, "AWS::CleanRooms::AnalysisTemplate AnalysisSource": { + "Artifacts": "The artifacts of the analysis source.", "Text": "The query text." }, + "AWS::CleanRooms::AnalysisTemplate AnalysisSourceMetadata": { + "Artifacts": "The artifacts of the analysis source metadata." + }, + "AWS::CleanRooms::AnalysisTemplate AnalysisTemplateArtifact": { + "Location": "The artifact location." + }, + "AWS::CleanRooms::AnalysisTemplate AnalysisTemplateArtifactMetadata": { + "AdditionalArtifactHashes": "Additional artifact hashes for the analysis template.", + "EntryPointHash": "The hash of the entry point for the analysis template artifact metadata." + }, + "AWS::CleanRooms::AnalysisTemplate AnalysisTemplateArtifacts": { + "AdditionalArtifacts": "Additional artifacts for the analysis template.", + "EntryPoint": "The entry point for the analysis template artifacts.", + "RoleArn": "The role ARN for the analysis template artifacts." + }, + "AWS::CleanRooms::AnalysisTemplate Hash": { + "Sha256": "The SHA-256 hash value." + }, + "AWS::CleanRooms::AnalysisTemplate S3Location": { + "Bucket": "The bucket name.", + "Key": "The object key." + }, "AWS::CleanRooms::AnalysisTemplate Tag": { "Key": "The key of the tag.", "Value": "The value of the tag." @@ -6799,10 +6898,11 @@ "AnalyticsEngine": "The analytics engine for the collaboration.", "CreatorDisplayName": "A display name of the collaboration creator.", "CreatorMLMemberAbilities": "The ML member abilities for a collaboration member.", - "CreatorMemberAbilities": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS`", + "CreatorMemberAbilities": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS` | `CAN_RUN_JOB`", "CreatorPaymentConfiguration": "An object representing the collaboration member's payment responsibilities set by the collaboration creator.", "DataEncryptionMetadata": "The settings for client-side encryption for cryptographic computing.", "Description": "A description of the collaboration provided by the collaboration owner.", + "JobLogStatus": "An indicator as to whether job logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about jobs run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "Members": "A list of initial members, not including the creator. This list is immutable.", "Name": "A human-readable identifier provided by the collaboration owner. Display names are not unique.", "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. 
The default value is `DISABLED` .", @@ -6814,6 +6914,9 @@ "AllowJoinsOnColumnsWithDifferentNames": "Indicates whether Fingerprint columns can be joined on any other Fingerprint column with a different name ( `TRUE` ) or can only be joined on Fingerprint columns of the same name ( `FALSE` ).", "PreserveNulls": "Indicates whether NULL values are to be copied as NULL to encrypted tables ( `TRUE` ) or cryptographically processed ( `FALSE` )." }, + "AWS::CleanRooms::Collaboration JobComputePaymentConfig": { + "IsResponsible": "Indicates whether the collaboration creator has configured the collaboration member to pay for query and job compute costs ( `TRUE` ) or has not configured the collaboration member to pay for query and job compute costs ( `FALSE` ).\n\nExactly one member can be configured to pay for query and job compute costs. An error is returned if the collaboration creator sets a `TRUE` value for more than one member in the collaboration.\n\nAn error is returned if the collaboration creator sets a `FALSE` value for the member who can run queries and jobs." + }, "AWS::CleanRooms::Collaboration MLMemberAbilities": { "CustomMLMemberAbilities": "The custom ML member abilities for a collaboration member." }, @@ -6835,6 +6938,7 @@ "IsResponsible": "Indicates whether the collaboration creator has configured the collaboration member to pay for model training costs ( `TRUE` ) or has not configured the collaboration member to pay for model training costs ( `FALSE` ).\n\nExactly one member can be configured to pay for model training costs. An error is returned if the collaboration creator sets a `TRUE` value for more than one member in the collaboration.\n\nIf the collaboration creator hasn't specified anyone as the member paying for model training costs, then the member who can query is the default payer. An error is returned if the collaboration creator sets a `FALSE` value for the member who can query." }, "AWS::CleanRooms::Collaboration PaymentConfiguration": { + "JobCompute": "The compute configuration for the job.", "MachineLearning": "An object representing the collaboration member's machine learning payment responsibilities set by the collaboration creator.", "QueryCompute": "The collaboration member's payment responsibilities set by the collaboration creator for query compute costs." }, @@ -6851,6 +6955,7 @@ "AnalysisRules": "The analysis rule that was created for the configured table.", "Description": "A description for the configured table.", "Name": "A name for the configured table.", + "SelectedAnalysisMethods": "The selected analysis methods for the configured table.", "TableReference": "The table that this configured table represents.", "Tags": "An optional label that you can assign to a resource when you create it. Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." 
}, @@ -7023,11 +7128,16 @@ }, "AWS::CleanRooms::Membership": { "CollaborationIdentifier": "The unique ID for the associated collaboration.", + "DefaultJobResultConfiguration": "The default job result configuration for the membership.", "DefaultResultConfiguration": "The default protected query result configuration as specified by the member who can receive results.", + "JobLogStatus": "An indicator as to whether job logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about jobs run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "PaymentConfiguration": "The payment responsibilities accepted by the collaboration member.", "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the membership.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "Tags": "An optional label that you can assign to a resource when you create it. Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." }, + "AWS::CleanRooms::Membership MembershipJobComputePaymentConfig": { + "IsResponsible": "Indicates whether the collaboration member has accepted to pay for job compute costs ( `TRUE` ) or has not accepted to pay for query and job compute costs ( `FALSE` ).\n\nThere is only one member who pays for queries and jobs.\n\nAn error message is returned for the following reasons:\n\n- If you set the value to `FALSE` but you are responsible to pay for query and job compute costs.\n- If you set the value to `TRUE` but you are not responsible to pay for query and job compute costs." + }, "AWS::CleanRooms::Membership MembershipMLPaymentConfig": { "ModelInference": "The payment responsibilities accepted by the member for model inference.", "ModelTraining": "The payment responsibilities accepted by the member for model training." @@ -7039,9 +7149,17 @@ "IsResponsible": "Indicates whether the collaboration member has accepted to pay for model training costs ( `TRUE` ) or has not accepted to pay for model training costs ( `FALSE` ).\n\nIf the collaboration creator has not specified anyone to pay for model training costs, then the member who can query is the default payer.\n\nAn error message is returned for the following reasons:\n\n- If you set the value to `FALSE` but you are responsible to pay for model training costs.\n- If you set the value to `TRUE` but you are not responsible to pay for model training costs." }, "AWS::CleanRooms::Membership MembershipPaymentConfiguration": { + "JobCompute": "The payment responsibilities accepted by the collaboration member for job compute costs.", "MachineLearning": "The payment responsibilities accepted by the collaboration member for machine learning costs.", "QueryCompute": "The payment responsibilities accepted by the collaboration member for query compute costs." }, + "AWS::CleanRooms::Membership MembershipProtectedJobOutputConfiguration": { + "S3": "Contains the configuration to write the job results to S3." 
+ }, + "AWS::CleanRooms::Membership MembershipProtectedJobResultConfiguration": { + "OutputConfiguration": "The output configuration for a protected job result.", + "RoleArn": "The unique ARN for an IAM role that is used by AWS Clean Rooms to write protected job results to the result location, given by the member who can receive results." + }, "AWS::CleanRooms::Membership MembershipProtectedQueryOutputConfiguration": { "S3": "Required configuration for a protected query with an `s3` output type." }, @@ -7052,6 +7170,10 @@ "AWS::CleanRooms::Membership MembershipQueryComputePaymentConfig": { "IsResponsible": "Indicates whether the collaboration member has accepted to pay for query compute costs ( `TRUE` ) or has not accepted to pay for query compute costs ( `FALSE` ).\n\nIf the collaboration creator has not specified anyone to pay for query compute costs, then the member who can query is the default payer.\n\nAn error message is returned for the following reasons:\n\n- If you set the value to `FALSE` but you are responsible to pay for query compute costs.\n- If you set the value to `TRUE` but you are not responsible to pay for query compute costs." }, + "AWS::CleanRooms::Membership ProtectedJobS3OutputConfigurationInput": { + "Bucket": "The S3 bucket for job output.", + "KeyPrefix": "The S3 prefix to unload the protected job results." + }, "AWS::CleanRooms::Membership ProtectedQueryS3OutputConfiguration": { "Bucket": "The S3 bucket to unload the protected query results.", "KeyPrefix": "The S3 prefix to unload the protected query results.", @@ -7958,7 +8080,7 @@ "AWS::CloudTrail::EventDataStore AdvancedFieldSelector": { "EndsWith": "An operator that includes events that match the last few characters of the event record field specified as the value of `Field` .", "Equals": "An operator that includes events that match the exact value of the event record field specified as the value of `Field` . This is the only valid operator that you can use with the `readOnly` , `eventCategory` , and `resources.type` fields.", - "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. 
\n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. 
`resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. 
This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field available for event data stores to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. 
`resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "NotEndsWith": "An operator that excludes events that match the last few characters of the event record field specified as the value of `Field` .", "NotEquals": "An operator that excludes events that match the exact value of the event record field specified as the value of `Field` .", "NotStartsWith": "An operator that excludes events that match the first few characters of the event record field specified as the value of `Field` .", @@ -8000,7 +8122,7 @@ "AWS::CloudTrail::Trail AdvancedFieldSelector": { "EndsWith": "An operator that includes events that match the last few characters of the event record field specified as the value of `Field` .", "Equals": "An operator that includes events that match the exact value of the event record field specified as the value of `Field` . This is the only valid operator that you can use with the `readOnly` , `eventCategory` , and `resources.type` fields.", - "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . 
The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. 
You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . 
For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. 
For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field available for event data stores to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. 
To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "NotEndsWith": "An operator that excludes events that match the last few characters of the event record field specified as the value of `Field` .", "NotEquals": "An operator that excludes events that match the exact value of the event record field specified as the value of `Field` .", "NotStartsWith": "An operator that excludes events that match the first few characters of the event record field specified as the value of `Field` .", @@ -11496,6 +11618,7 @@ "AllowMajorVersionUpgrade": "Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage, and the change is asynchronously applied as soon as possible.\n\nThis parameter must be set to `true` when specifying a value for the `EngineVersion` parameter that is a different major version than the replication instance's current version.", "AutoMinorVersionUpgrade": "A value that indicates whether minor engine upgrades are applied automatically to the replication instance during the maintenance window. This parameter defaults to `true` .\n\nDefault: `true`", "AvailabilityZone": "The Availability Zone that the replication instance will be created in.\n\nThe default value is a random, system-chosen Availability Zone in the endpoint's AWS Region , for example `us-east-1d` .", + "DnsNameServers": "A list of custom DNS name servers supported for the replication instance to access your on-premise source or target database. This list overrides the default name servers supported by the replication instance. You can specify a comma-separated list of internet addresses for up to four on-premise DNS name servers. For example: `\"1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4\"`", "EngineVersion": "The engine version number of the replication instance.\n\nIf an engine version number is not specified when a replication instance is created, the default is the latest engine version available.", "KmsKeyId": "An AWS KMS key identifier that is used to encrypt the data on the replication instance.\n\nIf you don't specify a value for the `KmsKeyId` parameter, AWS DMS uses your default encryption key.\n\nAWS KMS creates the default encryption key for your AWS account . 
Your AWS account has a different default encryption key for each AWS Region .", "MultiAZ": "Specifies whether the replication instance is a Multi-AZ deployment. You can't set the `AvailabilityZone` parameter if the Multi-AZ parameter is set to `true` .", @@ -12529,6 +12652,7 @@ "Tags": "The tags specified for the Amazon DataZone domain." }, "AWS::DataZone::Domain SingleSignOn": { + "IdcInstanceArn": "", "Type": "The type of single sign-on in Amazon DataZone.", "UserAssignment": "The single sign-on user assignment in Amazon DataZone." }, @@ -13155,7 +13279,7 @@ "RecoveryPeriodInDays": "The number of preceding days for which continuous backups are taken and maintained. Your table data is only recoverable to any point-in-time from within the configured recovery period. This parameter is optional. If no value is provided, the value will default to 35." }, "AWS::DynamoDB::GlobalTable Projection": { - "NonKeyAttributes": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "NonKeyAttributes": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. This limit only applies when you specify the ProjectionType of `INCLUDE` . You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "ProjectionType": "The set of attributes that are projected into the index:\n\n- `KEYS_ONLY` - Only the index and primary keys are projected into the index.\n- `INCLUDE` - In addition to the attributes described in `KEYS_ONLY` , the secondary index will include other non-key attributes that you specify.\n- `ALL` - All of the table attributes are projected into the index.\n\nWhen using the DynamoDB console, `ALL` is selected by default." }, "AWS::DynamoDB::GlobalTable ReadOnDemandThroughputSettings": { @@ -13299,7 +13423,7 @@ "RecoveryPeriodInDays": "The number of preceding days for which continuous backups are taken and maintained. Your table data is only recoverable to any point-in-time from within the configured recovery period. This parameter is optional. If no value is provided, the value will default to 35." }, "AWS::DynamoDB::Table Projection": { - "NonKeyAttributes": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "NonKeyAttributes": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. 
This limit only applies when you specify the ProjectionType of `INCLUDE` . You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "ProjectionType": "The set of attributes that are projected into the index:\n\n- `KEYS_ONLY` - Only the index and primary keys are projected into the index.\n- `INCLUDE` - In addition to the attributes described in `KEYS_ONLY` , the secondary index will include other non-key attributes that you specify.\n- `ALL` - All of the table attributes are projected into the index.\n\nWhen using the DynamoDB console, `ALL` is selected by default." }, "AWS::DynamoDB::Table ProvisionedThroughput": { @@ -13924,7 +14048,7 @@ "DeleteOnTermination": "Indicates whether the network interface is deleted when the instance is terminated. Applies only if creating a network interface when launching an instance.", "Description": "The description of the network interface. Applies only if creating a network interface when launching an instance.", "DeviceIndex": "The position of the network interface in the attachment order. A primary network interface has a device index of 0.\n\nIf you create a network interface when launching an instance, you must specify the device index.", - "EnaSrdSpecification": "", + "EnaSrdSpecification": "Configures ENA Express for UDP network traffic.", "GroupSet": "The IDs of the security groups for the network interface. Applies only if creating a network interface when launching an instance.", "Ipv6AddressCount": "A number of IPv6 addresses to assign to the network interface. Amazon EC2 chooses the IPv6 addresses from the range of the subnet. You cannot specify this option and the option to assign specific IPv6 addresses in the same request. You can specify this option if you've specified a minimum number of instances to launch.", "Ipv6Addresses": "The IPv6 addresses to assign to the network interface. You cannot specify this option and the option to assign a number of IPv6 addresses in the same request. You cannot specify this option if you've specified a minimum number of instances to launch.", @@ -14185,7 +14309,7 @@ "ConnectionTrackingSpecification": "A connection tracking specification for the network interface.", "DeleteOnTermination": "Indicates whether the network interface is deleted when the instance is terminated.", "Description": "A description for the network interface.", - "DeviceIndex": "The device index for the network interface attachment. If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", + "DeviceIndex": "The device index for the network interface attachment. The primary network interface has a device index of 0. 
If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", "EnaSrdSpecification": "The ENA Express configuration for the network interface.", "Groups": "The IDs of one or more security groups.", "InterfaceType": "The type of network interface. To create an Elastic Fabric Adapter (EFA), specify `efa` or `efa-only` . For more information, see [Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html) in the *Amazon EC2 User Guide* .\n\nIf you are not creating an EFA, specify `interface` or omit this parameter.\n\nIf you specify `efa-only` , do not assign any IP addresses to the network interface. EFA-only network interfaces do not support IP addresses.\n\nValid values: `interface` | `efa` | `efa-only`", @@ -14661,6 +14785,48 @@ "VpcEndpointId": "The ID of a VPC endpoint. Supported for Gateway Load Balancer endpoints only.", "VpcPeeringConnectionId": "The ID of a VPC peering connection." }, + "AWS::EC2::RouteServer": { + "AmazonSideAsn": "The Border Gateway Protocol (BGP) Autonomous System Number (ASN) for the appliance. Valid values are from 1 to 4294967295. We recommend using a private ASN in the 64512\u201365534 (16-bit ASN) or 4200000000\u20134294967294 (32-bit ASN) range.", + "PersistRoutes": "Indicates whether routes should be persisted after all BGP sessions are terminated.", + "PersistRoutesDuration": "The number of minutes a route server will wait after BGP is re-established to unpersist the routes in the FIB and RIB. Value must be in the range of 1-5. The default value is 1. Only valid if `persistRoutesState` is 'enabled'.\n\nIf you set the duration to 1 minute, then when your network appliance re-establishes BGP with route server, it has 1 minute to relearn its adjacent network and advertise those routes to route server before route server resumes normal functionality. In most cases, 1 minute is probably sufficient. If, however, you have concerns that your BGP network may not be capable of fully re-establishing and re-learning everything in 1 minute, you can increase the duration up to 5 minutes.", + "SnsNotificationsEnabled": "Indicates whether SNS notifications are enabled for the route server. Enabling SNS notifications persists BGP status changes to an SNS topic provisioned by AWS .", + "Tags": "Any tags assigned to the route server." + }, + "AWS::EC2::RouteServer Tag": { + "Key": "The key of the tag.\n\nConstraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode characters. May not begin with `aws:` .", + "Value": "The value of the tag.\n\nConstraints: Tag values are case-sensitive and accept a maximum of 256 Unicode characters." + }, + "AWS::EC2::RouteServerAssociation": { + "RouteServerId": "The ID of the associated route server.", + "VpcId": "The ID of the associated VPC." + }, + "AWS::EC2::RouteServerEndpoint": { + "RouteServerId": "The ID of the route server associated with this endpoint.", + "SubnetId": "The ID of the subnet to place the route server endpoint into.", + "Tags": "Any tags assigned to the route server endpoint."
+ }, + "AWS::EC2::RouteServerEndpoint Tag": { + "Key": "The key of the tag.\n\nConstraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode characters. May not begin with `aws:` .", + "Value": "The value of the tag.\n\nConstraints: Tag values are case-sensitive and accept a maximum of 256 Unicode characters." + }, + "AWS::EC2::RouteServerPeer": { + "BgpOptions": "The BGP configuration options for this peer, including ASN (Autonomous System Number) and BFD (Bidirectional Forwarding Detection) settings.", + "PeerAddress": "The IPv4 address of the peer device.", + "RouteServerEndpointId": "The ID of the route server endpoint associated with this peer.", + "Tags": "Any tags assigned to the route server peer." + }, + "AWS::EC2::RouteServerPeer BgpOptions": { + "PeerAsn": "The Border Gateway Protocol (BGP) Autonomous System Number (ASN) for the appliance. Valid values are from 1 to 4294967295. We recommend using a private ASN in the 64512\u201365534 (16-bit ASN) or 4200000000\u20134294967294 (32-bit ASN) range.", + "PeerLivenessDetection": "The liveness detection protocol used for the BGP peer.\n\n- `bgp-keepalive` : The standard BGP keep alive mechanism ( [RFC4271](https://docs.aws.amazon.com/https://www.rfc-editor.org/rfc/rfc4271#page-21) ) that is stable but may take longer to fail over in cases of network impact or router failure.\n- `bfd` : An additional Bidirectional Forwarding Detection (BFD) protocol ( [RFC5880](https://docs.aws.amazon.com/https://www.rfc-editor.org/rfc/rfc5880) ) that enables fast failover by using more sensitive liveness detection.\n\nDefaults to `bgp-keepalive` ." + }, + "AWS::EC2::RouteServerPeer Tag": { + "Key": "The key of the tag.\n\nConstraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode characters. May not begin with `aws:` .", + "Value": "The value of the tag.\n\nConstraints: Tag values are case-sensitive and accept a maximum of 256 Unicode characters." + }, + "AWS::EC2::RouteServerPropagation": { + "RouteServerId": "The ID of the route server configured for route propagation.", + "RouteTableId": "The ID of the route table configured for route server propagation." + }, "AWS::EC2::RouteTable": { "Tags": "Any tags assigned to the route table.", "VpcId": "The ID of the VPC." }, @@ -14672,8 +14838,8 @@ "AWS::EC2::SecurityGroup": { "GroupDescription": "A description for the security group.\n\nConstraints: Up to 255 characters in length\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", "GroupName": "The name of the security group. Names are case-insensitive and must be unique within the VPC.\n\nConstraints: Up to 255 characters in length. Can't start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", - "SecurityGroupEgress": "The outbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", - "SecurityGroupIngress": "The inbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", + "SecurityGroupEgress": "The outbound rules associated with the security group.", + "SecurityGroupIngress": "The inbound rules associated with the security group.", "Tags": "Any tags assigned to the security group.", "VpcId": "The ID of the VPC for the security group. If you do not specify a VPC, the default is to use the default VPC for the Region.
If there's no specified VPC and no default VPC, security group creation fails." }, @@ -15234,6 +15400,7 @@ "SecurityGroupIds": "The IDs of the security groups to associate with the endpoint network interfaces. If this parameter is not specified, we use the default security group for the VPC. Security groups are supported only for interface endpoints.", "ServiceName": "The name of the endpoint service.", "ServiceNetworkArn": "The Amazon Resource Name (ARN) of the service network.", + "ServiceRegion": "The Region where the endpoint service is hosted.", "SubnetIds": "The IDs of the subnets in which to create endpoint network interfaces. You must specify this property for an interface endpoint or a Gateway Load Balancer endpoint. You can't specify this property for a gateway endpoint. For a Gateway Load Balancer endpoint, you can specify only one subnet.", "Tags": "The tags to associate with the endpoint.", "VpcEndpointType": "The type of endpoint.\n\nDefault: Gateway", @@ -15798,7 +15965,7 @@ }, "AWS::ECS::Service LogConfiguration": { "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", - "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity.
Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. 
The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. 
Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern.
The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. 
When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." }, "AWS::ECS::Service NetworkConfiguration": { @@ -16025,7 +16192,7 @@ }, "AWS::ECS::TaskDefinition LogConfiguration": { "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", - "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. 
You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. 
This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression.
A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. 
When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." }, "AWS::ECS::TaskDefinition MountPoint": { @@ -16269,7 +16436,7 @@ "Logging": "The logging configuration for your cluster.", "Name": "The unique name to give to your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can't be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you're creating the cluster in. Note that underscores can't be used in AWS CloudFormation .", "OutpostConfig": "An object representing the configuration of your local Amazon EKS cluster on an AWS Outpost. This object isn't available for clusters on the AWS cloud.", - "RemoteNetworkConfig": "The configuration in the cluster for EKS Hybrid Nodes. You can't change or update this configuration after the cluster is created.", + "RemoteNetworkConfig": "The configuration in the cluster for EKS Hybrid Nodes. You can add, change, or remove this configuration after the cluster is created.", "ResourcesVpcConfig": "The VPC configuration that's used by the cluster control plane. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the *Amazon EKS User Guide* . You must specify at least two subnets. You can specify up to five security groups, but we recommend that you use a dedicated security group for your cluster control plane.", "RoleArn": "The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf. For more information, see [Amazon EKS Service IAM Role](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html) in the **Amazon EKS User Guide** .", "StorageConfig": "Indicates the current configuration of the block storage capability on your EKS Auto Mode cluster. 
For example, if the capability is enabled or disabled. If the block storage capability is enabled, EKS Auto Mode will create and delete EBS volumes in your AWS account. For more information, see EKS Auto Mode block storage capability in the *Amazon EKS User Guide* .", @@ -17092,7 +17259,7 @@ "Value": "The tag's value. May be null." }, "AWS::ElastiCache::ReplicationGroup": { - "AtRestEncryptionEnabled": "A flag that enables encryption at rest when set to `true` .\n\nYou cannot modify the value of `AtRestEncryptionEnabled` after the replication group is created. To enable encryption at rest on a replication group you must set `AtRestEncryptionEnabled` to `true` when you create the replication group.\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", + "AtRestEncryptionEnabled": "A flag that enables encryption at rest when set to `true` .\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", "AuthToken": "*Reserved parameter.* The password used to access a password protected server.\n\n`AuthToken` can be specified only on replication groups where `TransitEncryptionEnabled` is `true` . For more information, see [Authenticating Valkey or Redis OSS users with the AUTH Command](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/auth.html) .\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` . \n\nPassword constraints:\n\n- Must be only printable ASCII characters.\n- Must be at least 16 characters and no more than 128 characters in length.\n- Nonalphanumeric characters are restricted to (!, &, #, $, ^, <, >, -, ).\n\nFor more information, see [AUTH password](https://docs.aws.amazon.com/http://redis.io/commands/AUTH) at http://redis.io/commands/AUTH.\n\n> If ADDING the AuthToken, update requires [Replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement) .", "AutoMinorVersionUpgrade": "If you are running Valkey 7.2 or later, or Redis OSS 6.0 or later, set this parameter to yes if you want to opt-in to the next minor version upgrade campaign. This parameter is disabled for previous versions.", "AutomaticFailoverEnabled": "Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails.\n\n`AutomaticFailoverEnabled` must be enabled for Valkey or Redis OSS (cluster mode enabled) replication groups.\n\nDefault: false", @@ -17128,7 +17295,7 @@ "SnapshotWindow": "The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).\n\nExample: `05:00-09:00`\n\nIf you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.", "SnapshottingClusterId": "The cluster ID that is used as the daily snapshot source for the replication group. This parameter cannot be set for Valkey or Redis OSS (cluster mode enabled) replication groups.", "Tags": "A list of tags to be added to this resource. Tags are comma-separated key,value pairs (e.g. Key= `myKey` , Value= `myKeyValue` . You can include multiple tags as shown following: Key= `myKey` , Value= `myKeyValue` Key= `mySecondKey` , Value= `mySecondKeyValue` . 
Tags on replication groups will be replicated to all nodes.", - "TransitEncryptionEnabled": "A flag that enables in-transit encryption when set to `true` .\n\nYou cannot modify the value of `TransitEncryptionEnabled` after the cluster is created. To enable in-transit encryption on a cluster you must set `TransitEncryptionEnabled` to `true` when you create a cluster.\n\nThis parameter is valid only if the `Engine` parameter is `redis` , the `EngineVersion` parameter is `3.2.6` or `4.x` onward, and the cluster is being created in an Amazon VPC.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> - TransitEncryptionEnabled is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above.\n> - TransitEncryptionEnabled is required when creating a new valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", + "TransitEncryptionEnabled": "A flag that enables in-transit encryption when set to `true` .\n\nThis parameter is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> TransitEncryptionEnabled is required when creating a new valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", "TransitEncryptionMode": "A setting that allows you to migrate your clients to use in-transit encryption, with no downtime.\n\nWhen setting `TransitEncryptionEnabled` to `true` , you can set your `TransitEncryptionMode` to `preferred` in the same request, to allow both encrypted and unencrypted connections at the same time. Once you migrate all your Valkey or Redis OSS clients to use encrypted connections you can modify the value to `required` to allow encrypted connections only.\n\nSetting `TransitEncryptionMode` to `required` is a two-step process that requires you to first set the `TransitEncryptionMode` to `preferred` , after that you can set `TransitEncryptionMode` to `required` .\n\nThis process will not trigger the replacement of the replication group.", "UserGroupIds": "The ID of the user group to associate with the replication group." }, @@ -17977,6 +18144,7 @@ "ArchiveName": "The name for the archive to create.", "Description": "A description for the archive.", "EventPattern": "An event pattern to use to filter events sent to the archive.", + "KmsKeyIdentifier": "The identifier of the AWS KMS customer managed key for EventBridge to use, if you choose to use a customer managed key to encrypt this archive.
The identifier can be the key Amazon Resource Name (ARN), KeyId, key alias, or key alias ARN.\n\nIf you do not specify a customer managed key identifier, EventBridge uses an AWS owned key to encrypt the archive.\n\nFor more information, see [Identify and view keys](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html) in the *AWS Key Management Service Developer Guide* .\n\n> If you have specified that EventBridge use a customer managed key for encrypting the source event bus, we strongly recommend you also specify a customer managed key for any archives for the event bus as well.\n> \n> For more information, see [Encrypting archives](https://docs.aws.amazon.com/eventbridge/latest/userguide/encryption-archives.html) in the *Amazon EventBridge User Guide* .", "RetentionDays": "The number of days to retain events for. Default value is 0. If set to 0, events are retained indefinitely", "SourceArn": "The ARN of the event bus that sends events to the archive." }, @@ -17985,6 +18153,7 @@ "AuthorizationType": "The type of authorization to use for the connection.\n\n> OAUTH tokens are refreshed when a 401 or 407 response is returned.", "Description": "A description for the connection to create.", "InvocationConnectivityParameters": "For connections to private APIs, the parameters to use for invoking the API.\n\nFor more information, see [Connecting to private APIs](https://docs.aws.amazon.com/eventbridge/latest/userguide/connection-private.html) in the **Amazon EventBridge User Guide** .", + "KmsKeyIdentifier": "The identifier of the AWS KMS customer managed key for EventBridge to use, if you choose to use a customer managed key to encrypt this connection. The identifier can be the key Amazon Resource Name (ARN), KeyId, key alias, or key alias ARN.\n\nIf you do not specify a customer managed key identifier, EventBridge uses an AWS owned key to encrypt the connection.\n\nFor more information, see [Identify and view keys](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html) in the *AWS Key Management Service Developer Guide* .", "Name": "The name for the connection to create." }, "AWS::Events::Connection ApiKeyAuthParameters": { @@ -18063,7 +18232,7 @@ "DeadLetterConfig": "Configuration details of the Amazon SQS queue for EventBridge to use as a dead-letter queue (DLQ).\n\nFor more information, see [Using dead-letter queues to process undelivered events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-event-delivery.html#eb-rule-dlq) in the *EventBridge User Guide* .", "Description": "The event bus description.", "EventSourceName": "If you are creating a partner event bus, this specifies the partner event source that the new event bus will be matched with.", - "KmsKeyIdentifier": "The identifier of the AWS KMS customer managed key for EventBridge to use, if you choose to use a customer managed key to encrypt events on this event bus. The identifier can be the key Amazon Resource Name (ARN), KeyId, key alias, or key alias ARN.\n\nIf you do not specify a customer managed key identifier, EventBridge uses an AWS owned key to encrypt events on the event bus.\n\nFor more information, see [Managing keys](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) in the *AWS Key Management Service Developer Guide* .\n\n> Archives and schema discovery are not supported for event buses encrypted using a customer managed key. 
EventBridge returns an error if:\n> \n> - You call `[CreateArchive](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreateArchive.html)` on an event bus set to use a customer managed key for encryption.\n> - You call `[CreateDiscoverer](https://docs.aws.amazon.com/eventbridge/latest/schema-reference/v1-discoverers.html#CreateDiscoverer)` on an event bus set to use a customer managed key for encryption.\n> - You call `[UpdatedEventBus](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_UpdatedEventBus.html)` to set a customer managed key on an event bus with an archives or schema discovery enabled.\n> \n> To enable archives or schema discovery on an event bus, choose to use an AWS owned key . For more information, see [Data encryption in EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-encryption.html) in the *Amazon EventBridge User Guide* .", + "KmsKeyIdentifier": "The identifier of the AWS KMS customer managed key for EventBridge to use, if you choose to use a customer managed key to encrypt events on this event bus. The identifier can be the key Amazon Resource Name (ARN), KeyId, key alias, or key alias ARN.\n\nIf you do not specify a customer managed key identifier, EventBridge uses an AWS owned key to encrypt events on the event bus.\n\nFor more information, see [Identify and view keys](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html) in the *AWS Key Management Service Developer Guide* .\n\n> Schema discovery is not supported for event buses encrypted using a customer managed key. EventBridge returns an error if:\n> \n> - You call `[CreateDiscoverer](https://docs.aws.amazon.com/eventbridge/latest/schema-reference/v1-discoverers.html#CreateDiscoverer)` on an event bus set to use a customer managed key for encryption.\n> - You call `[UpdateEventBus](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_UpdateEventBus.html)` to set a customer managed key on an event bus with schema discovery enabled.\n> \n> To enable schema discovery on an event bus, choose to use an AWS owned key . For more information, see [Encrypting events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-encryption-event-bus-cmkey.html) in the *Amazon EventBridge User Guide* .\n\n> If you have specified that EventBridge use a customer managed key for encrypting the source event bus, we strongly recommend you also specify a customer managed key for any archives for the event bus as well.\n> \n> For more information, see [Encrypting archives](https://docs.aws.amazon.com/eventbridge/latest/userguide/encryption-archives.html) in the *Amazon EventBridge User Guide* .", "Name": "The name of the new event bus.\n\nCustom event bus names can't contain the `/` character, but you can use the `/` character in partner event bus names. In addition, for partner event buses, the name must exactly match the name of the partner event source that this event bus is matched to.\n\nYou can't use the name `default` for a custom event bus, as this name is already used for your account's default event bus.", "Policy": "The permissions policy of the event bus, describing which other AWS accounts can write events to this event bus.", "Tags": "Tags to associate with the event bus."
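To make the updated key guidance concrete, here is a minimal CloudFormation sketch of an event bus encrypted with a customer managed key, plus an archive that follows the recommendation above to use a customer managed key as well. The bus name, archive name, and key alias are hypothetical placeholders, not values from the schema.

```yaml
Resources:
  EncryptedBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: orders-bus                            # hypothetical bus name
      KmsKeyIdentifier: alias/my-eventbridge-key  # hypothetical CMK alias
  EncryptedArchive:
    Type: AWS::Events::Archive
    Properties:
      ArchiveName: orders-archive                 # hypothetical archive name
      SourceArn: !GetAtt EncryptedBus.Arn
      # Per the note above: when the source bus uses a customer managed key,
      # specify one for the archive as well.
      KmsKeyIdentifier: alias/my-eventbridge-key
      RetentionDays: 30
```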
@@ -18563,7 +18732,7 @@ "OpenZFSConfiguration": "The Amazon FSx for OpenZFS configuration properties for the file system that you are creating.", "SecurityGroupIds": "A list of IDs specifying the security groups to apply to all network interfaces created for file system access. This list isn't returned in later requests to describe the file system.\n\n> You must specify a security group if you are creating a Multi-AZ FSx for ONTAP file system in a VPC subnet that has been shared with you.", "StorageCapacity": "Sets the storage capacity of the file system that you're creating.\n\n`StorageCapacity` is required if you are creating a new file system. It is not required if you are creating a file system by restoring a backup.\n\n*FSx for Lustre file systems* - The amount of storage capacity that you can configure depends on the value that you set for `StorageType` and the Lustre `DeploymentType` , as follows:\n\n- For `SCRATCH_2` , `PERSISTENT_2` and `PERSISTENT_1` deployment types using SSD storage type, the valid values are 1200 GiB, 2400 GiB, and increments of 2400 GiB.\n- For `PERSISTENT_1` HDD file systems, valid values are increments of 6000 GiB for 12 MB/s/TiB file systems and increments of 1800 GiB for 40 MB/s/TiB file systems.\n- For `SCRATCH_1` deployment type, valid values are 1200 GiB, 2400 GiB, and increments of 3600 GiB.\n\n*FSx for ONTAP file systems* - The amount of SSD storage capacity that you can configure depends on the value of the `HAPairs` property. The minimum value is calculated as 1,024 GiB * HAPairs and the maximum is calculated as 524,288 GiB * HAPairs, up to a maximum amount of SSD storage capacity of 1,048,576 GiB (1 pebibyte).\n\n*FSx for OpenZFS file systems* - The amount of storage capacity that you can configure is from 64 GiB up to 524,288 GiB (512 TiB). If you are creating a file system from a backup, you can specify a storage capacity equal to or greater than the original file system's storage capacity.\n\n*FSx for Windows File Server file systems* - The amount of storage capacity that you can configure depends on the value that you set for `StorageType` as follows:\n\n- For SSD storage, valid values are 32 GiB-65,536 GiB (64 TiB).\n- For HDD storage, valid values are 2000 GiB-65,536 GiB (64 TiB).", - "StorageType": "Sets the storage class for the file system that you're creating. Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage. HDD is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", + "StorageType": "Sets the storage class for the file system that you're creating. 
Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage, which is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", "SubnetIds": "Specifies the IDs of the subnets that the file system will be accessible from. For Windows and ONTAP `MULTI_AZ_1` deployment types, provide exactly two subnet IDs, one for the preferred file server and one for the standby file server. You specify one of these subnets as the preferred subnet using the `WindowsConfiguration > PreferredSubnetID` or `OntapConfiguration > PreferredSubnetID` properties. For more information about Multi-AZ file system configuration, see [Availability and durability: Single-AZ and Multi-AZ file systems](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html) in the *Amazon FSx for Windows User Guide* and [Availability and durability](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/high-availability-multiAZ.html) in the *Amazon FSx for ONTAP User Guide* .\n\nFor Windows `SINGLE_AZ_1` and `SINGLE_AZ_2` and all Lustre deployment types, provide exactly one subnet ID. The file server is launched in that subnet's Availability Zone.", "Tags": "The tags to associate with the file system. For more information, see [Tagging your Amazon FSx resources](https://docs.aws.amazon.com/fsx/latest/LustreGuide/tag-resources.html) in the *Amazon FSx for Lustre User Guide* .", "WindowsConfiguration": "The configuration object for the Microsoft Windows file system you are creating.\n\nThis value is required if `FileSystemType` is set to `WINDOWS` ." @@ -18595,7 +18764,7 @@ "ImportedFileChunkSize": "(Optional) For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.\n\nThe default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.\n\n> This parameter is not supported for Lustre file systems with a data repository association.", "MetadataConfiguration": "", "PerUnitStorageThroughput": "Required with `PERSISTENT_1` and `PERSISTENT_2` deployment types, provisions the amount of read and write throughput for each 1 tebibyte (TiB) of file system storage capacity, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the `PerUnitStorageThroughput` (MB/s/TiB). 
For a 2.4-TiB \ufb01le system, provisioning 50 MB/s/TiB of `PerUnitStorageThroughput` yields 120 MB/s of \ufb01le system throughput. You pay for the amount of throughput that you provision.\n\nValid values:\n\n- For `PERSISTENT_1` SSD storage: 50, 100, 200 MB/s/TiB.\n- For `PERSISTENT_1` HDD storage: 12, 40 MB/s/TiB.\n- For `PERSISTENT_2` SSD storage: 125, 250, 500, 1000 MB/s/TiB.", - "WeeklyMaintenanceStartTime": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." + "WeeklyMaintenanceStartTime": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." }, "AWS::FSx::FileSystem MetadataConfiguration": { "Iops": "", @@ -18616,7 +18785,7 @@ "RouteTableIds": "(Multi-AZ only) Specifies the route tables in which Amazon FSx creates the rules for routing traffic to the correct file server. You should specify all virtual private cloud (VPC) route tables associated with the subnets in which your clients are located. By default, Amazon FSx selects your VPC's default route table.\n\n> Amazon FSx manages these route tables for Multi-AZ file systems using tag-based authentication. These route tables are tagged with `Key: AmazonFSx; Value: ManagedByAmazonFSx` . When creating FSx for ONTAP Multi-AZ file systems using AWS CloudFormation we recommend that you add the `Key: AmazonFSx; Value: ManagedByAmazonFSx` tag manually.", "ThroughputCapacity": "Sets the throughput capacity for the file system that you're creating in megabytes per second (MBps). 
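As a worked illustration of the `PerUnitStorageThroughput` arithmetic and the `d:HH:MM` maintenance-window format, here is a hedged CloudFormation sketch; the subnet ID is a placeholder, and the pairing follows the `PERSISTENT_2` SSD values listed above (2,400 GiB, treated as 2.4 TiB per the doc's own convention, times 125 MB/s/TiB gives roughly 300 MB/s of total throughput).

```yaml
Resources:
  LustreFs:
    Type: AWS::FSx::FileSystem
    Properties:
      FileSystemType: LUSTRE
      StorageType: SSD
      StorageCapacity: 2400                    # GiB; a valid PERSISTENT_2 SSD increment
      SubnetIds:
        - subnet-0123456789abcdef0             # hypothetical subnet
      LustreConfiguration:
        DeploymentType: PERSISTENT_2
        PerUnitStorageThroughput: 125          # 2.4 TiB x 125 MB/s/TiB ~ 300 MB/s total
        WeeklyMaintenanceStartTime: "1:05:00"  # d:HH:MM -> Monday 05:00 UTC
```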
For more information, see [Managing throughput capacity](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-throughput-capacity.html) in the FSx for ONTAP User Guide.\n\nAmazon FSx responds with an HTTP status code 400 (Bad Request) for the following conditions:\n\n- The value of `ThroughputCapacity` and `ThroughputCapacityPerHAPair` are not the same value.\n- The value of `ThroughputCapacity` when divided by the value of `HAPairs` is outside of the valid range for `ThroughputCapacity` .", "ThroughputCapacityPerHAPair": "Use to choose the throughput capacity per HA pair, rather than the total throughput for the file system.\n\nYou can define either the `ThroughputCapacityPerHAPair` or the `ThroughputCapacity` when creating a file system, but not both.\n\nThis field and `ThroughputCapacity` are the same for file systems powered by one HA pair.\n\n- For `SINGLE_AZ_1` and `MULTI_AZ_1` file systems, valid values are 128, 256, 512, 1024, 2048, or 4096 MBps.\n- For `SINGLE_AZ_2` , valid values are 1536, 3072, or 6144 MBps.\n- For `MULTI_AZ_2` , valid values are 384, 768, 1536, 3072, or 6144 MBps.\n\nAmazon FSx responds with an HTTP status code 400 (Bad Request) for the following conditions:\n\n- The value of `ThroughputCapacity` and `ThroughputCapacityPerHAPair` are not the same value for file systems with one HA pair.\n- The value of deployment type is `SINGLE_AZ_2` and `ThroughputCapacity` / `ThroughputCapacityPerHAPair` is not a valid HA pair (a value between 1 and 12).\n- The value of `ThroughputCapacityPerHAPair` is not a valid value.", - "WeeklyMaintenanceStartTime": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." + "WeeklyMaintenanceStartTime": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." }, "AWS::FSx::FileSystem OpenZFSConfiguration": { "AutomaticBackupRetentionDays": "The number of days to retain automatic backups. Setting this property to `0` disables automatic backups. You can retain automatic backups for a maximum of 90 days. The default is `30` .", @@ -18632,7 +18801,7 @@ "RootVolumeConfiguration": "The configuration Amazon FSx uses when creating the root value of the Amazon FSx for OpenZFS file system. All volumes are children of the root volume.", "RouteTableIds": "(Multi-AZ only) Specifies the route tables in which Amazon FSx creates the rules for routing traffic to the correct file server. You should specify all virtual private cloud (VPC) route tables associated with the subnets in which your clients are located. By default, Amazon FSx selects your VPC's default route table.", "ThroughputCapacity": "Specifies the throughput of an Amazon FSx for OpenZFS file system, measured in megabytes per second (MBps). 
Valid values depend on the `DeploymentType` and `StorageType` that you choose, as follows:\n\n- For `INTELLIGENT_TIERING` , valid values are 1280, 2560, 3840, 5120, 7680, or 10240 MBps.\n- For `MULTI_AZ_1` and `SINGLE_AZ_2` , valid values are 160, 320, 640, 1280, 2560, 3840, 5120, 7680, or 10240 MBps.\n- For `SINGLE_AZ_1` , valid values are 64, 128, 256, 512, 1024, 2048, 3072, or 4096 MBps.\n\nYou pay for additional throughput capacity that you provision.", - "WeeklyMaintenanceStartTime": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." + "WeeklyMaintenanceStartTime": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." }, "AWS::FSx::FileSystem ReadCacheConfiguration": { "SizeGiB": "Required if `SizingMode` is set to `USER_PROVISIONED` . Specifies the size of the file system's SSD read cache, in gibibytes (GiB).", @@ -18675,7 +18844,7 @@ "PreferredSubnetId": "Required when `DeploymentType` is set to `MULTI_AZ_1` . This specifies the subnet in which you want the preferred file server to be located. For in-AWS applications, we recommend that you launch your clients in the same availability zone as your preferred file server to reduce cross-availability zone data transfer costs and minimize latency.", "SelfManagedActiveDirectoryConfiguration": "The configuration that Amazon FSx uses to join a FSx for Windows File Server file system or an FSx for ONTAP storage virtual machine (SVM) to a self-managed (including on-premises) Microsoft Active Directory (AD) directory. For more information, see [Using Amazon FSx for Windows with your self-managed Microsoft Active Directory](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html) or [Managing FSx for ONTAP SVMs](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-svms.html) .", "ThroughputCapacity": "Sets the throughput capacity of an Amazon FSx file system, measured in megabytes per second (MB/s), in 2 to the *n*th increments, between 2^3 (8) and 2^11 (2048).\n\n> To increase storage capacity, a file system must have a minimum throughput capacity of 16 MB/s.", - "WeeklyMaintenanceStartTime": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday." + "WeeklyMaintenanceStartTime": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday."
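The OpenZFS throughput values above pair with the same maintenance-window format. A minimal sketch, assuming a hypothetical subnet and a `SINGLE_AZ_2` deployment:

```yaml
Resources:
  OpenZfsFs:
    Type: AWS::FSx::FileSystem
    Properties:
      FileSystemType: OPENZFS
      StorageCapacity: 2048                    # GiB, within the 64-524,288 range
      SubnetIds:
        - subnet-0123456789abcdef0             # hypothetical subnet
      OpenZFSConfiguration:
        DeploymentType: SINGLE_AZ_2
        ThroughputCapacity: 160                # lowest valid SINGLE_AZ_2/MULTI_AZ_1 value
        WeeklyMaintenanceStartTime: "7:09:30"  # Sunday 09:30 UTC
```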
}, "AWS::FSx::Snapshot": { "Name": "The name of the snapshot.", @@ -19038,18 +19207,24 @@ "AWS::GameLift::Alias": { "Description": "A human-readable description of the alias.", "Name": "A descriptive label that is associated with an alias. Alias names do not need to be unique.", - "RoutingStrategy": "The routing configuration, including routing type and fleet target, for the alias." + "RoutingStrategy": "The routing configuration, including routing type and fleet target, for the alias.", + "Tags": "" }, "AWS::GameLift::Alias RoutingStrategy": { "FleetId": "A unique identifier for a fleet that the alias points to. If you specify `SIMPLE` for the `Type` property, you must specify this property.", "Message": "The message text to be used with a terminal routing strategy. If you specify `TERMINAL` for the `Type` property, you must specify this property.", "Type": "A type of routing strategy.\n\nPossible routing types include the following:\n\n- *SIMPLE* - The alias resolves to one specific fleet. Use this type when routing to active fleets.\n- *TERMINAL* - The alias does not resolve to a fleet but instead can be used to display a message to the user. A terminal alias throws a `TerminalRoutingStrategyException` with the message that you specified in the `Message` property." }, + "AWS::GameLift::Alias Tag": { + "Key": "The key for a developer-defined key value pair for tagging an AWS resource.", + "Value": "The value for a developer-defined key value pair for tagging an AWS resource." + }, "AWS::GameLift::Build": { "Name": "A descriptive label that is associated with a build. Build names do not need to be unique.", "OperatingSystem": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "ServerSdkVersion": "A server SDK version you used when integrating your game server build with Amazon GameLift Servers. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift Servers sets this value to `4.0.2` .", "StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift Servers to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift Servers will report a `SizeOnDisk` of 0.", + "Tags": "", "Version": "Version information that is associated with this build. 
Version strings do not need to be unique." }, "AWS::GameLift::Build StorageLocation": { @@ -19058,6 +19233,10 @@ "ObjectVersion": "A version of a stored file to retrieve, if the object versioning feature is turned on for the S3 bucket. Use this parameter to specify a specific version. If this parameter isn't set, Amazon GameLift Servers retrieves the latest version of the file.", "RoleArn": "The ARN for an IAM role that allows Amazon GameLift to access the S3 bucket." }, + "AWS::GameLift::Build Tag": { + "Key": "The key for a developer-defined key value pair for tagging an AWS resource.", + "Value": "The value for a developer-defined key value pair for tagging an AWS resource." + }, "AWS::GameLift::ContainerFleet": { "BillingType": "Indicates whether the fleet uses On-Demand or Spot instances for this fleet. Learn more about when to use [On-Demand versus Spot Instances](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-ec2-instances.html#gamelift-ec2-instances-spot) . You can't update this fleet property.\n\nBy default, this property is set to `ON_DEMAND` .", "DeploymentConfiguration": "Set of rules for processing a deployment for a container fleet update.", @@ -19135,7 +19314,7 @@ "ContainerGroupType": "The type of container group. Container group type determines how Amazon GameLift Servers deploys the container group on each fleet instance.", "GameServerContainerDefinition": "The definition for the game server container in this group. This property is used only when the container group type is `GAME_SERVER` . This container definition specifies a container image with the game server build.", "Name": "A descriptive identifier for the container group definition. The name value is unique in an AWS Region.", - "OperatingSystem": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "OperatingSystem": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "SourceVersionNumber": "", "SupportContainerDefinitions": "The set of definitions for support containers in this group. A container group definition might have zero support container definitions. Support containers can be used in any type of container group.", "Tags": "", @@ -20696,7 +20875,7 @@ "AWS::GroundStation::DataflowEndpointGroup": { "ContactPostPassDurationSeconds": "Amount of time, in seconds, after a contact ends that the Ground Station Dataflow Endpoint Group will be in a `POSTPASS` state. 
A Ground Station Dataflow Endpoint Group State Change event will be emitted when the Dataflow Endpoint Group enters and exits the `POSTPASS` state.", "ContactPrePassDurationSeconds": "Amount of time, in seconds, before a contact starts that the Ground Station Dataflow Endpoint Group will be in a `PREPASS` state. A Ground Station Dataflow Endpoint Group State Change event will be emitted when the Dataflow Endpoint Group enters and exits the `PREPASS` state.", - "EndpointDetails": "List of Endpoint Details, containing address and port for each endpoint.", + "EndpointDetails": "List of Endpoint Details, containing address and port for each endpoint. All dataflow endpoints within a single dataflow endpoint group must be of the same type. You cannot mix AWS Ground Station Agent endpoints with Dataflow endpoints in the same group. If your use case requires both types of endpoints, you must create separate dataflow endpoint groups for each type.", "Tags": "Tags assigned to a resource." }, "AWS::GroundStation::DataflowEndpointGroup AwsGroundStationAgentEndpoint": { @@ -21822,7 +22001,7 @@ }, "AWS::IoT::AccountAuditConfiguration": { "AccountId": "The ID of the account. You can use the expression `!Sub \"${AWS::AccountId}\"` to use your account ID.", - "AuditCheckConfigurations": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on avialbe auidt checks see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", + "AuditCheckConfigurations": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on available audit checks, see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", "AuditNotificationTargetConfigurations": "Information about the targets to which audit notifications are sent.", "RoleArn": "The Amazon Resource Name (ARN) of the role that grants permission to AWS IoT to access information about your devices, policies, certificates, and other items as required when performing an audit."
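The `Enabled: false` behavior described above (disabling a check deletes its collected data) looks like this in template form. The role ARN is a placeholder, and the check property names (`LoggingDisabledCheck`, `DeviceCertificateExpiringCheck`) are assumed from the linked `AuditCheckConfigurations` reference, so treat this as a sketch rather than a definitive configuration.

```yaml
Resources:
  AuditConfig:
    Type: AWS::IoT::AccountAuditConfiguration
    Properties:
      AccountId: !Sub "${AWS::AccountId}"   # expression suggested by the docs above
      RoleArn: arn:aws:iam::123456789012:role/IoTAuditRole   # hypothetical role
      AuditCheckConfigurations:
        LoggingDisabledCheck:
          Enabled: true
        DeviceCertificateExpiringCheck:
          Enabled: false   # disabling deletes data already collected for this check
```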
}, @@ -22202,7 +22381,7 @@ "Frequency": "How often the scheduled audit occurs.", "ScheduledAuditName": "The name of the scheduled audit.", "Tags": "Metadata that can be used to manage the scheduled audit.", - "TargetCheckNames": "Which checks are performed during the scheduled audit. Checks must be enabled for your account. (Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently aviable:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`" + "TargetCheckNames": "Which checks are performed during the scheduled audit. Checks must be enabled for your account. (Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently available:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`" }, "AWS::IoT::ScheduledAudit Tag": { "Key": "The tag's key.", @@ -24293,7 +24472,7 @@ "Value": "" }, "AWS::KafkaConnect::Connector Vpc": { - "SecurityGroups": "The security groups for the connector.", + "SecurityGroups": "The security group IDs for the connector.", "Subnets": "The subnets for the connector." }, "AWS::KafkaConnect::Connector WorkerConfiguration": { @@ -24748,6 +24927,7 @@ "ResourcePolicy": "This is the description for the resource policy." }, "AWS::Kinesis::Stream": { + "DesiredShardLevelMetrics": "A list of shard-level metrics to enable for enhanced monitoring mode.", "Name": "The name of the Kinesis stream. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the stream name. For more information, see [Name Type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\nIf you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.", "RetentionPeriodHours": "The number of hours for the data records that are stored in shards to remain accessible. The default value is 24. 
For more information about the stream retention period, see [Changing the Data Retention Period](https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html) in the Amazon Kinesis Developer Guide.", "ShardCount": "The number of shards that the stream uses. For greater provisioned throughput, increase the number of shards.", @@ -26055,6 +26235,18 @@ "AWS::Lex::Bot BKBExactResponseFields": { "AnswerField": "" }, + "AWS::Lex::Bot BedrockAgentConfiguration": { + "BedrockAgentAliasId": "", + "BedrockAgentId": "" + }, + "AWS::Lex::Bot BedrockAgentIntentConfiguration": { + "BedrockAgentConfiguration": "", + "BedrockAgentIntentKnowledgeBaseConfiguration": "" + }, + "AWS::Lex::Bot BedrockAgentIntentKnowledgeBaseConfiguration": { + "BedrockKnowledgeBaseArn": "", + "BedrockModelConfiguration": "" + }, "AWS::Lex::Bot BedrockGuardrailConfiguration": { "BedrockGuardrailIdentifier": "", "BedrockGuardrailVersion": "" @@ -26098,6 +26290,9 @@ "AWS::Lex::Bot CodeHookSpecification": { "LambdaCodeHook": "Specifies a Lambda function that verifies requests to a bot or fulfills the user's request to a bot." }, + "AWS::Lex::Bot CompositeSlotTypeSetting": { + "SubSlots": "Subslots in the composite slot." + }, "AWS::Lex::Bot Condition": { "ExpressionString": "The expression string that is evaluated." }, @@ -26221,6 +26416,7 @@ "Name": "The name of the context." }, "AWS::Lex::Bot Intent": { + "BedrockAgentIntentConfiguration": "", "Description": "A description of the intent. Use the description to help identify the intent in lists.", "DialogCodeHook": "Specifies that Amazon Lex invokes the alias Lambda function for each user input. You can invoke this Lambda function to personalize user interaction.", "FulfillmentCodeHook": "Specifies that Amazon Lex invokes the alias Lambda function when the intent is ready for fulfillment. You can invoke this function to complete the bot's transaction with the user.", @@ -26232,6 +26428,7 @@ "Name": "The name of the intent. Intent names must be unique within the locale that contains the intent and can't match the name of any built-in intent.", "OutputContexts": "A list of contexts that the intent activates when it is fulfilled.", "ParentIntentSignature": "A unique identifier for the built-in intent to base this intent on.", + "QInConnectIntentConfiguration": "", "QnAIntentConfiguration": "", "SampleUtterances": "A list of utterances that a user might say to signal the intent.", "SlotPriorities": "Indicates the priority for slots. Amazon Lex prompts the user for slot values in priority order.", @@ -26337,6 +26534,12 @@ "MessageSelectionStrategy": "Indicates how a message is selected from a message group among retries.", "PromptAttemptsSpecification": "Specifies the advanced settings on each attempt of the prompt." }, + "AWS::Lex::Bot QInConnectAssistantConfiguration": { + "AssistantArn": "" + }, + "AWS::Lex::Bot QInConnectIntentConfiguration": { + "QInConnectAssistantConfiguration": "" + }, "AWS::Lex::Bot QnAIntentConfiguration": { "BedrockModelConfiguration": "", "DataSourceConfiguration": "Contains details about the configuration of the data source used for the `AMAZON.QnAIntent` ." @@ -26409,6 +26612,7 @@ "SlotName": "The name of the slot." }, "AWS::Lex::Bot SlotType": { + "CompositeSlotTypeSetting": "", "Description": "A description of the slot type. Use the description to help identify the slot type in lists.", "ExternalSourceSetting": "Sets the type of external information used to create the slot type.", "Name": "The name of the slot type. 
A slot type name must be unique within the account.", @@ -26454,6 +26658,10 @@ "MessageGroupsList": "One or more message groups, each containing one or more messages, that define the prompts that Amazon Lex sends to the user.", "TimeoutInSeconds": "If Amazon Lex waits longer than this length of time for a response, it will stop sending messages." }, + "AWS::Lex::Bot SubSlotTypeComposition": { + "Name": "Name of a constituent sub slot inside a composite slot.", + "SlotTypeId": "The unique identifier assigned to a slot type. This refers to either a built-in slot type or the unique slotTypeId of a custom slot type." + }, "AWS::Lex::Bot Tag": { "Key": "", "Value": "" }, @@ -26954,7 +27162,7 @@ "Value": "The key of the value that is associated with the specified map." }, "AWS::Location::PlaceIndex": { - "DataSource": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on goecoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan. For more information, see the [AWS Service Terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information , see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) on the *Amazon Location Service Developer Guide* .", + "DataSource": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. 
For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on geocoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan. For more information, see the [AWS service terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information, see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) on the *Amazon Location Service developer guide* .", "DataSourceConfiguration": "Specifies the data storage option requesting Places.", "Description": "The optional description for the place index resource.", "IndexName": "The name of the place index resource.\n\nRequirements:\n\n- Contain only alphanumeric characters (A\u2013Z, a\u2013z, 0\u20139), hyphens (-), periods (.), and underscores (_).\n- Must be a unique place index resource name.\n- No spaces allowed. For example, `ExamplePlaceIndex` .", @@ -27489,8 +27697,8 @@ }, "AWS::MSK::Cluster BrokerLogs": { "CloudWatchLogs": "", - "Firehose": "", - "S3": "" + "Firehose": "Details of the Kinesis Data Firehose delivery stream that is the destination for broker logs.", + "S3": "Details of the Amazon S3 destination for broker logs." }, "AWS::MSK::Cluster BrokerNodeGroupInfo": { "BrokerAZDistribution": "This parameter is currently not in use.", "StorageInfo": "Contains information about storage volumes attached to Amazon MSK broker nodes." }, "AWS::MSK::Cluster ClientAuthentication": { - "Sasl": "", - "Tls": "", - "Unauthenticated": "" + "Sasl": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true.", + "Tls": "Details for ClientAuthentication using TLS. To turn on TLS access control, you must also turn on `EncryptionInTransit` by setting `inCluster` to true and `clientBroker` to `TLS` .", + "Unauthenticated": "Details for ClientAuthentication using no authentication." }, "AWS::MSK::Cluster CloudWatchLogs": { - "Enabled": "", - "LogGroup": "" + "Enabled": "Specifies whether broker logs get sent to the specified CloudWatch Logs destination.", + "LogGroup": "The CloudWatch log group that is the destination for broker logs." }, "AWS::MSK::Cluster ConfigurationInfo": { - "Arn": "", - "Revision": "" + "Arn": "ARN of the configuration to use.", + "Revision": "The revision of the configuration to use." }, "AWS::MSK::Cluster ConnectivityInfo": { - "PublicAccess": "", - "VpcConnectivity": "" + "PublicAccess": "Access control settings for the cluster's brokers.", + "VpcConnectivity": "VPC connection control settings for brokers." 
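The SASL note above couples three settings: `Sasl` turned on, in-cluster encryption set to true, and `clientBroker` set to `TLS` or `TLS_PLAINTEXT`. A hedged fragment showing that combination; the cluster name, Kafka version, and subnet IDs are placeholders:

```yaml
Resources:
  MskCluster:
    Type: AWS::MSK::Cluster
    Properties:
      ClusterName: demo-cluster
      KafkaVersion: "3.6.0"             # hypothetical version
      NumberOfBrokerNodes: 3
      BrokerNodeGroupInfo:
        InstanceType: kafka.m5.large
        ClientSubnets:                  # hypothetical subnets
          - subnet-0aaaaaaaaaaaaaaa0
          - subnet-0bbbbbbbbbbbbbbb0
          - subnet-0ccccccccccccccc0
      ClientAuthentication:
        Sasl:
          Iam:
            Enabled: true
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS             # SASL requires TLS or TLS_PLAINTEXT
          InCluster: true               # in-cluster encryption must be on
```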
}, "AWS::MSK::Cluster EBSStorageInfo": { - "ProvisionedThroughput": "", - "VolumeSize": "" + "ProvisionedThroughput": "EBS volume provisioned throughput information.", + "VolumeSize": "The size in GiB of the EBS volume for the data drive on each broker node." }, "AWS::MSK::Cluster EncryptionAtRest": { - "DataVolumeKMSKeyId": "" + "DataVolumeKMSKeyId": "The ARN of the Amazon KMS key for encrypting data at rest. If you don't specify a KMS key, MSK creates one for you and uses it." }, "AWS::MSK::Cluster EncryptionInTransit": { "ClientBroker": "Indicates the encryption setting for data in transit between clients and brokers. You must set it to one of the following values.\n\n- `TLS` : Indicates that client-broker communication is enabled with TLS only.\n- `TLS_PLAINTEXT` : Indicates that client-broker communication is enabled for both TLS-encrypted, as well as plaintext data.\n- `PLAINTEXT` : Indicates that client-broker communication is enabled in plaintext only.\n\nThe default value is `TLS` .", "InCluster": "When set to true, it indicates that data communication among the broker nodes of the cluster is encrypted. When set to false, the communication happens in plaintext.\n\nThe default value is true." }, "AWS::MSK::Cluster EncryptionInfo": { - "EncryptionAtRest": "", + "EncryptionAtRest": "The data-volume encryption details.", "EncryptionInTransit": "The details for encryption in transit." }, "AWS::MSK::Cluster Firehose": { - "DeliveryStream": "", - "Enabled": "" + "DeliveryStream": "The Kinesis Data Firehose delivery stream that is the destination for broker logs.", + "Enabled": "Specifies whether broker logs get send to the specified Kinesis Data Firehose delivery stream." }, "AWS::MSK::Cluster Iam": { - "Enabled": "" + "Enabled": "SASL/IAM authentication is enabled or not." }, "AWS::MSK::Cluster JmxExporter": { - "EnabledInBroker": "" + "EnabledInBroker": "Indicates whether you want to enable or disable the JMX Exporter." }, "AWS::MSK::Cluster LoggingInfo": { - "BrokerLogs": "" + "BrokerLogs": "You can configure your MSK cluster to send broker logs to different destination types. This configuration specifies the details of these destinations." }, "AWS::MSK::Cluster NodeExporter": { - "EnabledInBroker": "" + "EnabledInBroker": "Indicates whether you want to enable or disable the Node Exporter." }, "AWS::MSK::Cluster OpenMonitoring": { - "Prometheus": "" + "Prometheus": "Prometheus exporter settings." }, "AWS::MSK::Cluster Prometheus": { - "JmxExporter": "", - "NodeExporter": "" + "JmxExporter": "Indicates whether you want to enable or disable the JMX Exporter.", + "NodeExporter": "Indicates whether you want to enable or disable the Node Exporter." }, "AWS::MSK::Cluster ProvisionedThroughput": { - "Enabled": "", - "VolumeThroughput": "" + "Enabled": "Provisioned throughput is on or off.", + "VolumeThroughput": "Throughput value of the EBS volumes for the data drive on each kafka broker node in MiB per second." }, "AWS::MSK::Cluster PublicAccess": { - "Type": "" + "Type": "DISABLED means that public access is turned off. SERVICE_PROVIDED_EIPS means that public access is turned on." }, "AWS::MSK::Cluster S3": { - "Bucket": "", - "Enabled": "", - "Prefix": "" + "Bucket": "The name of the S3 bucket that is the destination for broker logs.", + "Enabled": "Specifies whether broker logs get sent to the specified Amazon S3 destination.", + "Prefix": "The S3 prefix that is the destination for broker logs." 
}, "AWS::MSK::Cluster Sasl": { - "Iam": "", - "Scram": "" + "Iam": "Details for ClientAuthentication using IAM.", + "Scram": "Details for SASL/SCRAM client authentication." }, "AWS::MSK::Cluster Scram": { - "Enabled": "" + "Enabled": "SASL/SCRAM authentication is enabled or not." }, "AWS::MSK::Cluster StorageInfo": { - "EBSStorageInfo": "" + "EBSStorageInfo": "EBS volume information." }, "AWS::MSK::Cluster Tls": { - "CertificateAuthorityArnList": "", - "Enabled": "" + "CertificateAuthorityArnList": "List of AWS Private CA ARNs.", + "Enabled": "TLS authentication is enabled or not." }, "AWS::MSK::Cluster Unauthenticated": { - "Enabled": "" + "Enabled": "Unauthenticated is enabled or not." }, "AWS::MSK::Cluster VpcConnectivity": { - "ClientAuthentication": "" + "ClientAuthentication": "VPC connection control settings for brokers." }, "AWS::MSK::Cluster VpcConnectivityClientAuthentication": { - "Sasl": "", - "Tls": "" + "Sasl": "Details for VpcConnectivity ClientAuthentication using SASL.", + "Tls": "Details for VpcConnectivity ClientAuthentication using TLS." }, "AWS::MSK::Cluster VpcConnectivityIam": { - "Enabled": "" + "Enabled": "SASL/IAM authentication is enabled or not." }, "AWS::MSK::Cluster VpcConnectivitySasl": { - "Iam": "", - "Scram": "" + "Iam": "Details for ClientAuthentication using IAM for VpcConnectivity.", + "Scram": "Details for SASL/SCRAM client authentication for VpcConnectivity." }, "AWS::MSK::Cluster VpcConnectivityScram": { - "Enabled": "" + "Enabled": "SASL/SCRAM authentication is enabled or not." }, "AWS::MSK::Cluster VpcConnectivityTls": { - "Enabled": "" + "Enabled": "TLS authentication is enabled or not." }, "AWS::MSK::ClusterPolicy": { "ClusterArn": "The Amazon Resource Name (ARN) that uniquely identifies the cluster.", @@ -27678,13 +27886,13 @@ "VpcConfigs": "VPC configuration information for the serverless cluster." }, "AWS::MSK::ServerlessCluster ClientAuthentication": { - "Sasl": "" + "Sasl": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true." }, "AWS::MSK::ServerlessCluster Iam": { - "Enabled": "" + "Enabled": "SASL/IAM authentication is enabled or not." }, "AWS::MSK::ServerlessCluster Sasl": { - "Iam": "" + "Iam": "Details for ClientAuthentication using IAM." }, "AWS::MSK::ServerlessCluster VpcConfig": { "SecurityGroups": "", @@ -27797,7 +28005,7 @@ }, "AWS::Macie::Session": { "FindingPublishingFrequency": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to AWS Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", - "Status": "The status of Amazon Macie for the account. Valid values are: `ENABLED` , start or resume all Macie activities for the account; and, `PAUSED` , suspend all Macie activities for the account." + "Status": "The status of Amazon Macie for the account. Valid values are: `ENABLED` , start or resume Macie activities for the account; and, `PAUSED` , suspend Macie activities for the account." 
}, "AWS::ManagedBlockchain::Accessor": { "AccessorType": "The type of the accessor.\n\n> Currently, accessor type is restricted to `BILLING_TOKEN` .", @@ -30244,9 +30452,11 @@ "Engine": "The name of the engine used by the cluster.", "EngineVersion": "The Redis engine version used by the cluster .", "FinalSnapshotName": "The user-supplied name of a final cluster snapshot. This is the unique name that identifies the snapshot. MemoryDB creates the snapshot, and then deletes the cluster immediately afterward.", + "IpDiscovery": "The mechanism that the cluster uses to discover IP addresses. Returns 'ipv4' when DNS endpoints resolve to IPv4 addresses, or 'ipv6' when DNS endpoints resolve to IPv6 addresses.", "KmsKeyId": "The ID of the KMS key used to encrypt the cluster .", "MaintenanceWindow": "Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period.\n\n*Pattern* : `ddd:hh24:mi-ddd:hh24:mi`", "MultiRegionClusterName": "The name of the multi-Region cluster that this cluster belongs to.", + "NetworkType": "The IP address type for the cluster. Returns 'ipv4' for IPv4 only, 'ipv6' for IPv6 only, or 'dual-stack' if the cluster supports both IPv4 and IPv6 addressing.", "NodeType": "The cluster 's node type.", "NumReplicasPerShard": "The number of replicas to apply to each shard.\n\n*Default value* : `1`\n\n*Maximum value* : `5`", "NumShards": "The number of shards in the cluster .", @@ -30278,7 +30488,7 @@ "MultiRegionClusterNameSuffix": "A suffix to be added to the Multi-Region cluster name. Amazon MemoryDB automatically applies a prefix to the Multi-Region cluster Name when it is created. Each Amazon Region has its own prefix. For instance, a Multi-Region cluster Name created in the US-West-1 region will begin with \"virxk\", along with the suffix name you provide. The suffix guarantees uniqueness of the Multi-Region cluster name across multiple regions.", "MultiRegionParameterGroupName": "The name of the multi-Region parameter group associated with the cluster.", "NodeType": "The node type used by the multi-Region cluster.", - "NumShards": "TBD", + "NumShards": "The number of shards in the multi-Region cluster.", "TLSEnabled": "Indiciates if the multi-Region cluster is TLS enabled.", "Tags": "A list of tags to be applied to the multi-Region cluster.", "UpdateStrategy": "The strategy to use for the update operation. Supported values are \"coordinated\" or \"uncoordinated\"." @@ -31264,7 +31474,7 @@ "Main": "The path of the main definition file for the workflow.", "Name": "The workflow's name.", "ParameterTemplate": "The workflow's parameter template.", - "StorageCapacity": "The default storage capacity for the workflow runs, in gibibytes.", + "StorageCapacity": "The default static storage capacity (in gibibytes) for runs that use this workflow or workflow version.", "Tags": "Tags for the workflow." 
}, "AWS::Omics::Workflow WorkflowParameter": { @@ -31366,23 +31576,23 @@ "AWS::OpenSearchService::Application": { "AppConfigs": "", "DataSources": "", - "Endpoint": "Endpoint URL of an OpenSearch Application.", - "IamIdentityCenterOptions": "Container for IAM Identity Center Options settings.", - "Name": "Name of an OpenSearch Application.", + "Endpoint": "The endpoint URL of an OpenSearch application.", + "IamIdentityCenterOptions": "Settings container for integrating IAM Identity Center with OpenSearch UI applications, which enables enabling secure user authentication and access control across multiple data sources. This setup supports single sign-on (SSO) through IAM Identity Center, allowing centralized user management.", + "Name": "The name of an OpenSearch application.", "Tags": "" }, "AWS::OpenSearchService::Application AppConfig": { - "Key": "Specify the item to configure, such as admin role for the OpenSearch Application.", - "Value": "Specifies the value to configure for the key, such as an IAM user ARN." + "Key": "The configuration item to set, such as the admin role for the OpenSearch application.", + "Value": "The value assigned to the configuration key, such as an IAM user ARN." }, "AWS::OpenSearchService::Application DataSource": { "DataSourceArn": "", "DataSourceDescription": "Detailed description of a data source." }, "AWS::OpenSearchService::Application IamIdentityCenterOptions": { - "Enabled": "IAM Identity Center is enabled for the OpenSearch Application.", + "Enabled": "Indicates whether IAM Identity Center is enabled for the OpenSearch application.", "IamIdentityCenterInstanceArn": "", - "IamRoleForIdentityCenterApplicationArn": "Amazon Resource Name of the IAM Identity Center's Application created for the OpenSearch Application after enabling IAM Identity Center." + "IamRoleForIdentityCenterApplicationArn": "The Amazon Resource Name (ARN) of the IAM role assigned to the IAM Identity Center application for the OpenSearch application." }, "AWS::OpenSearchService::Application Tag": { "Key": "The tag key. Tag keys must be unique for the domain to which they are attached.", @@ -31400,7 +31610,7 @@ "EncryptionAtRestOptions": "Whether the domain should encrypt data at rest, and if so, the AWS KMS key to use. See [Encryption of data at rest for Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/encryption-at-rest.html) .\n\nIf no encryption at rest options were initially specified in the template, updating this property by adding it causes no interruption. However, if you change this property after it's already been set within a template, the domain is deleted and recreated in order to modify the property.", "EngineVersion": "The version of OpenSearch to use. The value must be in the format `OpenSearch_X.Y` or `Elasticsearch_X.Y` . If not specified, the latest version of OpenSearch is used. For information about the versions that OpenSearch Service supports, see [Supported versions of OpenSearch and Elasticsearch](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html#choosing-version) in the *Amazon OpenSearch Service Developer Guide* .\n\nIf you set the [EnableVersionUpgrade](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-upgradeopensearchdomain) update policy to `true` , you can update `EngineVersion` without interruption. 
When `EnableVersionUpgrade` is set to `false` , or is not specified, updating `EngineVersion` results in [replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement) .", "IPAddressType": "Choose either dual stack or IPv4 as your IP address type. Dual stack allows you to share domain resources across IPv4 and IPv6 address types, and is the recommended option. If you set your IP address type to dual stack, you can't change your address type later.", - "IdentityCenterOptions": "Container for IAM Identity Center Option control for the domain.", + "IdentityCenterOptions": "Configuration options for controlling IAM Identity Center integration within a domain.", "LogPublishingOptions": "An object with one or more of the following keys: `SEARCH_SLOW_LOGS` , `ES_APPLICATION_LOGS` , `INDEX_SLOW_LOGS` , `AUDIT_LOGS` , depending on the types of logs you want to publish. Each key needs a valid `LogPublishingOption` value. For the full syntax, see the [examples](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-opensearchservice-domain.html#aws-resource-opensearchservice-domain--examples) .", "NodeToNodeEncryptionOptions": "Specifies whether node-to-node encryption is enabled. See [Node-to-node encryption for Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ntn.html) .", "OffPeakWindowOptions": "Options for a domain's off-peak window, during which OpenSearch Service can perform mandatory configuration changes on the domain.", @@ -31462,12 +31672,12 @@ "KmsKeyId": "The KMS key ID. Takes the form `1a2a3a4-1a2a-3a4a-5a6a-1a2a3a4a5a6a` . Required if you enable encryption at rest.\n\nYou can also use `keyAlias` as a value.\n\nIf no encryption at rest options were initially specified in the template, updating this property by adding it causes no interruption. However, if you change this property after it's already been set within a template, the domain is deleted and recreated in order to modify the property." }, "AWS::OpenSearchService::Domain IdentityCenterOptions": { - "EnabledAPIAccess": "True to enable IAM Identity Center for API access in Amazon OpenSearch Service.", - "IdentityCenterApplicationARN": "The ARN for IAM Identity Center Application which will integrate with Amazon OpenSearch Service.", - "IdentityCenterInstanceARN": "The ARN for IAM Identity Center Instance.", - "IdentityStoreId": "The ID of IAM Identity Store.", - "RolesKey": "Specify the attribute that contains the backend role (groupName, groupID) of IAM Identity Center", - "SubjectKey": "Specify the attribute that contains the subject (username, userID, email) of IAM Identity Center." + "EnabledAPIAccess": "Indicates whether IAM Identity Center is enabled for the application.", + "IdentityCenterApplicationARN": "The ARN of the IAM Identity Center application that integrates with Amazon OpenSearch Service.", + "IdentityCenterInstanceARN": "The Amazon Resource Name (ARN) of the IAM Identity Center instance.", + "IdentityStoreId": "The identifier of the IAM Identity Store.", + "RolesKey": "Specifies the attribute that contains the backend role identifier (such as group name or group ID) in IAM Identity Center.", + "SubjectKey": "Specifies the attribute that contains the subject identifier (such as username, user ID, or email) in IAM Identity Center." 
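A hedged sketch of the domain-level `IdentityCenterOptions` described above; the domain name, engine version, and instance ARN are placeholders, and `GroupId`/`UserName` are assumed attribute values for `RolesKey`/`SubjectKey`:

```yaml
Resources:
  SearchDomain:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: demo-domain
      EngineVersion: OpenSearch_2.17   # hypothetical version
      IdentityCenterOptions:
        EnabledAPIAccess: true
        IdentityCenterInstanceARN: arn:aws:sso:::instance/ssoins-0123456789abcdef   # placeholder
        RolesKey: GroupId              # assumed backend-role attribute
        SubjectKey: UserName           # assumed subject attribute
```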
}, "AWS::OpenSearchService::Domain Idp": { "EntityId": "The unique entity ID of the application in the SAML identity provider.", @@ -31489,13 +31699,13 @@ "MasterUserPassword": "Password for the master user. Only specify if `InternalUserDatabaseEnabled` is true in [AdvancedSecurityOptionsInput](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-opensearchservice-domain-advancedsecurityoptionsinput.html) .\n\nIf you don't want to specify this value directly within the template, you can use a [dynamic reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html) instead." }, "AWS::OpenSearchService::Domain NodeConfig": { - "Count": "The number of nodes of a particular node type in the cluster.", - "Enabled": "A boolean that indicates whether a particular node type is enabled or not.", - "Type": "The instance type of a particular node type in the cluster." + "Count": "The number of nodes of a specific type within the cluster.", + "Enabled": "A boolean value indicating whether a specific node type is active or inactive.", + "Type": "The instance type of a particular node within the cluster." }, "AWS::OpenSearchService::Domain NodeOption": { - "NodeConfig": "Container for specifying configuration of any node type.", - "NodeType": "Container for node type like coordinating." + "NodeConfig": "Configuration options for defining the setup of any node type.", + "NodeType": "Defines the type of node, such as coordinating nodes." }, "AWS::OpenSearchService::Domain NodeToNodeEncryptionOptions": { "Enabled": "Specifies to enable or disable node-to-node encryption on the domain. Required if you enable fine-grained access control in [AdvancedSecurityOptionsInput](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-opensearchservice-domain-advancedsecurityoptionsinput.html) ." @@ -32224,6 +32434,7 @@ "KeyArn": "The `KeyARN` of the key associated with the alias." }, "AWS::PaymentCryptography::Key": { + "DeriveKeyUsage": "The cryptographic usage of an ECDH derived key as de\ufb01ned in section A.5.2 of the TR-31 spec.", "Enabled": "Specifies whether the key is enabled.", "Exportable": "Specifies whether the key is exportable. This data is immutable after the key is created.", "KeyAttributes": "The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the key is created.", @@ -33309,6 +33520,9 @@ "Tags": "A list of key-value pairs that identify or categorize the data source connector. You can also use tags to help control access to the data source connector. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.", "VpcConfiguration": "Configuration information for an Amazon VPC (Virtual Private Cloud) to connect to your data source. For more information, see [Using Amazon VPC with Amazon Q Business connectors](https://docs.aws.amazon.com/amazonq/latest/business-use-dg/connector-vpc.html) ." }, + "AWS::QBusiness::DataSource AudioExtractionConfiguration": { + "AudioExtractionStatus": "The status of audio extraction (ENABLED or DISABLED) for processing audio content from files." + }, "AWS::QBusiness::DataSource DataSourceVpcConfiguration": { "SecurityGroupIds": "A list of identifiers of security groups within your Amazon VPC. The security groups should enable Amazon Q Business to connect to the data source.", "SubnetIds": "A list of identifiers for subnets within your Amazon VPC. 
The subnets should be able to connect to each other in the VPC, and they should have outgoing access to the Internet through a NAT device." @@ -33336,7 +33550,7 @@ }, "AWS::QBusiness::DataSource HookConfiguration": { "InvocationCondition": "The condition used for when a Lambda function should be invoked.\n\nFor example, you can specify a condition that if there are empty date-time values, then Amazon Q Business should invoke a function that inserts the current date-time.", - "LambdaArn": "The Amazon Resource Name (ARN) of a role with permission to run a Lambda function during ingestion. For more information, see [IAM roles for Custom Document Enrichment (CDE)](https://docs.aws.amazon.com/amazonq/latest/business-use-dg/iam-roles.html#cde-iam-role) .", + "LambdaArn": "The Amazon Resource Name (ARN) of the Lambda function to run during ingestion. For more information, see [Using Lambda functions for Amazon Q Business document enrichment](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/cde-lambda-operations.html) .", "RoleArn": "The Amazon Resource Name (ARN) of a role with permission to run `PreExtractionHookConfiguration` and `PostExtractionHookConfiguration` for altering document metadata and content during the document ingestion process.", "S3BucketName": "Stores the original, raw documents or the structured, parsed documents before and after altering them. For more information, see [Data contracts for Lambda functions](https://docs.aws.amazon.com/amazonq/latest/business-use-dg/cde-lambda-operations.html#cde-lambda-operations-data-contracts) ." }, @@ -33349,12 +33563,17 @@ "Target": "Configuration of the target document attribute or metadata field when ingesting documents into Amazon Q Business . You can also include a value." }, "AWS::QBusiness::DataSource MediaExtractionConfiguration": { - "ImageExtractionConfiguration": "The configuration for extracting semantic meaning from images in documents. For more information, see [Extracting semantic meaning from images and visuals](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/extracting-meaning-from-images.html) ." + "AudioExtractionConfiguration": "Configuration settings for extracting and processing audio content from media files.", + "ImageExtractionConfiguration": "The configuration for extracting semantic meaning from images in documents. For more information, see [Extracting semantic meaning from images and visuals](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/extracting-meaning-from-images.html) .", + "VideoExtractionConfiguration": "Configuration settings for extracting and processing video content from media files." }, "AWS::QBusiness::DataSource Tag": { "Key": "The key for the tag. Keys are not case sensitive and must be unique for the Amazon Q Business application or data source.", "Value": "The value associated with the tag. The value may be an empty string but it can't be null." }, + "AWS::QBusiness::DataSource VideoExtractionConfiguration": { + "VideoExtractionStatus": "The status of video extraction (ENABLED or DISABLED) for processing video content from files." + }, "AWS::QBusiness::Index": { "ApplicationId": "The identifier of the Amazon Q Business application using the index.", "CapacityConfiguration": "The capacity units you want to provision for your index.
You can add and remove capacity to fit your usage needs." @@ -36092,7 +36311,8 @@ "AWS::QuickSight::Analysis TableFieldOptions": { "Order": "The order of the field IDs that are configured as field options for a table visual.", "PinnedFieldOptions": "The settings for the pinned columns of a table visual.", - "SelectedFieldOptions": "The field options to be configured to a table." + "SelectedFieldOptions": "The field options to be configured to a table.", + "TransposedTableOptions": "The `TableOptions` of a transposed table." }, "AWS::QuickSight::Analysis TableFieldURLConfiguration": { "ImageConfiguration": "The image configuration of a table field URL.", @@ -36272,6 +36492,11 @@ "TotalCellStyle": "Cell styling options for the total cells.", "TotalsVisibility": "The visibility configuration for the total cells." }, + "AWS::QuickSight::Analysis TransposedTableOption": { + "ColumnIndex": "The index of a column in a transposed table. The index range is 0-9999.", + "ColumnType": "The column type of the column in a transposed table. Choose one of the following options:\n\n- `ROW_HEADER_COLUMN` : Refers to the leftmost column of the row header in the transposed table.\n- `VALUE_COLUMN` : Refers to all value columns in the transposed table.", + "ColumnWidth": "The width of a column in a transposed table." + }, "AWS::QuickSight::Analysis TreeMapAggregatedFieldWells": { "Colors": "The color field well of a tree map. Values are grouped by aggregations based on group by fields.", "Groups": "The group by field well of a tree map. Values are grouped based on group by fields.", @@ -39134,7 +39359,8 @@ "AWS::QuickSight::Dashboard TableFieldOptions": { "Order": "The order of the field IDs that are configured as field options for a table visual.", "PinnedFieldOptions": "The settings for the pinned columns of a table visual.", - "SelectedFieldOptions": "The field options to be configured to a table." + "SelectedFieldOptions": "The field options to be configured to a table.", + "TransposedTableOptions": "The `TableOptions` of a transposed table." }, "AWS::QuickSight::Dashboard TableFieldURLConfiguration": { "ImageConfiguration": "The image configuration of a table field URL.", @@ -39314,6 +39540,11 @@ "TotalCellStyle": "Cell styling options for the total cells.", "TotalsVisibility": "The visibility configuration for the total cells." }, + "AWS::QuickSight::Dashboard TransposedTableOption": { + "ColumnIndex": "The index of a column in a transposed table. The index range is 0-9999.", + "ColumnType": "The column type of the column in a transposed table. Choose one of the following options:\n\n- `ROW_HEADER_COLUMN` : Refers to the leftmost column of the row header in the transposed table.\n- `VALUE_COLUMN` : Refers to all value columns in the transposed table.", + "ColumnWidth": "The width of a column in a transposed table." + }, "AWS::QuickSight::Dashboard TreeMapAggregatedFieldWells": { "Colors": "The color field well of a tree map. Values are grouped by aggregations based on group by fields.", "Groups": "The group by field well of a tree map. Values are grouped based on group by fields.", @@ -42381,7 +42612,8 @@ "AWS::QuickSight::Template TableFieldOptions": { "Order": "The order of the field IDs that are configured as field options for a table visual.", "PinnedFieldOptions": "The settings for the pinned columns of a table visual.", - "SelectedFieldOptions": "The field options to be configured to a table."
+ "SelectedFieldOptions": "The field options to be configured to a table.", + "TransposedTableOptions": "The `TableOptions` of a transposed table." }, "AWS::QuickSight::Template TableFieldURLConfiguration": { "ImageConfiguration": "The image configuration of a table field URL.", @@ -42599,6 +42831,11 @@ "TotalCellStyle": "Cell styling options for the total cells.", "TotalsVisibility": "The visibility configuration for the total cells." }, + "AWS::QuickSight::Template TransposedTableOption": { + "ColumnIndex": "The index of a columns in a transposed table. The index range is 0-9999.", + "ColumnType": "The column type of the column in a transposed table. Choose one of the following options:\n\n- `ROW_HEADER_COLUMN` : Refers to the leftmost column of the row header in the transposed table.\n- `VALUE_COLUMN` : Refers to all value columns in the transposed table.", + "ColumnWidth": "The width of a column in a transposed table." + }, "AWS::QuickSight::Template TreeMapAggregatedFieldWells": { "Colors": "The color field well of a tree map. Values are grouped by aggregations based on group by fields.", "Groups": "The group by field well of a tree map. Values are grouped based on group by fields.", @@ -43266,6 +43503,7 @@ "DBSnapshotIdentifier": "The name or Amazon Resource Name (ARN) of the DB snapshot that's used to restore the DB instance. If you're restoring from a shared manual DB snapshot, you must specify the ARN of the snapshot.\n\nBy specifying this property, you can create a DB instance from the specified DB snapshot. If the `DBSnapshotIdentifier` property is an empty string or the `AWS::RDS::DBInstance` declaration has no `DBSnapshotIdentifier` property, AWS CloudFormation creates a new database. If the property contains a value (other than an empty string), AWS CloudFormation creates a database from the specified snapshot. If a snapshot with the specified name doesn't exist, AWS CloudFormation can't create the database and it rolls back the stack.\n\nSome DB instance properties aren't valid when you restore from a snapshot, such as the `MasterUsername` and `MasterUserPassword` properties, and the point-in-time recovery properties `RestoreTime` and `UseLatestRestorableTime` . For information about the properties that you can specify, see the [`RestoreDBInstanceFromDBSnapshot`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html) action in the *Amazon RDS API Reference* .\n\nAfter you restore a DB instance with a `DBSnapshotIdentifier` property, you must specify the same `DBSnapshotIdentifier` property for any future updates to the DB instance. When you specify this property for an update, the DB instance is not restored from the DB snapshot again, and the data in the database is not changed. However, if you don't specify the `DBSnapshotIdentifier` property, an empty DB instance is created, and the original DB instance is deleted. 
If you specify a property that is different from the previous snapshot restore property, a new DB instance is restored from the specified `DBSnapshotIdentifier` property, and the original DB instance is deleted.\n\nIf you specify the `DBSnapshotIdentifier` property to restore a DB instance (as opposed to specifying it for DB instance updates), then don't specify the following properties:\n\n- `CharacterSetName`\n- `DBClusterIdentifier`\n- `DBName`\n- `KmsKeyId`\n- `MasterUsername`\n- `MasterUserPassword`\n- `PromotionTier`\n- `SourceDBInstanceIdentifier`\n- `SourceRegion`\n- `StorageEncrypted` (for an unencrypted snapshot)\n- `Timezone`\n\n*Amazon Aurora*\n\nNot applicable. Snapshot restore is managed by the DB cluster.", "DBSubnetGroupName": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf you don't specify a DB subnet group, RDS uses the default DB subnet group if one exists. If a default DB subnet group does not exist, and you don't specify a `DBSubnetGroupName` , the DB instance fails to launch.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", "DBSystemId": "The Oracle system identifier (SID), which is the name of the Oracle database instance that manages your database files. In this context, the term \"Oracle database instance\" refers exclusively to the system global area (SGA) and Oracle background processes. If you don't specify a SID, the value defaults to `RDSCDB` . The Oracle SID is also the name of your CDB.", + "DatabaseInsightsMode": "The mode of Database Insights to enable for the DB instance.\n\n> Aurora DB instances inherit this value from the DB cluster, so you can't change this value.", "DedicatedLogVolume": "Indicates whether the DB instance has a dedicated log volume (DLV) enabled.", "DeleteAutomatedBackups": "A value that indicates whether to remove automated backups immediately after the DB instance is deleted. This parameter isn't case-sensitive. The default is to remove automated backups immediately after the DB instance is deleted.\n\n*Amazon Aurora*\n\nNot applicable. When you delete a DB cluster, all automated backups for that DB cluster are deleted and can't be recovered. Manual DB cluster snapshots of the DB cluster are not deleted.", "DeletionProtection": "Specifies whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection isn't enabled. For more information, see [Deleting a DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html) .\n\nThis setting doesn't apply to Amazon Aurora DB instances. You can enable or disable deletion protection for the DB cluster. For more information, see `CreateDBCluster` . DB instances in a DB cluster can be deleted even when deletion protection is enabled for the DB cluster.", @@ -43838,7 +44076,11 @@ "Port": "The custom port to use when connecting to a workgroup. Valid port ranges are 5431-5455 and 8191-8215. 
The default is 5439.", "PricePerformanceTarget": "An object that represents the price performance target settings for the workgroup.", "PubliclyAccessible": "A value that specifies whether the workgroup can be accessible from a public network.", + "RecoveryPointId": "The ID of the recovery point to restore from.", "SecurityGroupIds": "A list of security group IDs to associate with the workgroup.", + "SnapshotArn": "The Amazon Resource Name (ARN) of the snapshot to restore from.", + "SnapshotName": "The name of the snapshot to restore from.", + "SnapshotOwnerAccount": "The Amazon Web Services account that owns the snapshot.", "SubnetIds": "A list of subnet IDs the workgroup is associated with.", "Tags": "The map of the key-value pairs used to tag the workgroup.", "TrackName": "An optional parameter for the name of the track for the workgroup. If you don't provide a track name, the workgroup is assigned to the current track.", @@ -44617,7 +44859,7 @@ }, "AWS::Route53Resolver::ResolverConfig": { "AutodefinedReverseFlag": "Represents the desired status of `AutodefinedReverse` . The only supported value on creation is `DISABLE` . Deletion of this resource will return `AutodefinedReverse` to its default value of `ENABLED` .", - "ResourceId": "The ID of the Amazon Virtual Private Cloud VPC that you're configuring Resolver for." + "ResourceId": "The ID of the Amazon Virtual Private Cloud VPC or a Route 53 Profile that you're configuring Resolver for." }, "AWS::Route53Resolver::ResolverDNSSECConfig": { "ResourceId": "The ID of the virtual private cloud (VPC) that you're configuring the DNSSEC validation status for." @@ -44644,7 +44886,12 @@ }, "AWS::Route53Resolver::ResolverQueryLoggingConfig": { "DestinationArn": "The ARN of the resource that you want Resolver to send query logs: an Amazon S3 bucket, a CloudWatch Logs log group, or a Kinesis Data Firehose delivery stream.", - "Name": "The name of the query logging configuration." + "Name": "The name of the query logging configuration.", + "Tags": "" + }, + "AWS::Route53Resolver::ResolverQueryLoggingConfig Tag": { "Key": "The name for the tag. For example, if you want to associate Resolver resources with the account IDs of your customers for billing purposes, the value of `Key` might be `account-id` .", "Value": "The value for the tag. For example, if `Key` is `account-id` , then `Value` might be the ID of the customer account that you're creating the resource for." }, "AWS::Route53Resolver::ResolverQueryLoggingConfigAssociation": { "ResolverQueryLogConfigId": "The ID of the query logging configuration that a VPC is associated with.", @@ -44739,7 +44986,7 @@ "InventoryConfigurations": "Specifies the inventory configuration for an Amazon S3 bucket. For more information, see [GET Bucket inventory](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html) in the *Amazon S3 API Reference* .", "LifecycleConfiguration": "Specifies the lifecycle configuration for objects in an Amazon S3 bucket. For more information, see [Object Lifecycle Management](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) in the *Amazon S3 User Guide* .", "LoggingConfiguration": "Settings that define where logs are stored.", - "MetadataTableConfiguration": "The metadata table configuration of an Amazon S3 general purpose bucket.
For more information, see [Accelerating data discovery with S3 Metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-overview.html) and [Setting up permissions for configuring metadata tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-permissions.html) .", + "MetadataTableConfiguration": "The metadata table configuration of an Amazon S3 general purpose bucket.", "MetricsConfigurations": "Specifies a metrics configuration for the CloudWatch request metrics (specified by the metrics configuration ID) from an Amazon S3 bucket. If you're updating an existing metrics configuration, note that this is a full replacement of the existing metrics configuration. If you don't include the elements you want to keep, they are erased. For more information, see [PutBucketMetricsConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html) .", "NotificationConfiguration": "Configuration that defines how Amazon S3 handles bucket notifications.", "ObjectLockConfiguration": "> This operation is not supported for directory buckets. \n\nPlaces an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see [Locking Objects](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html) .\n\n> - The `DefaultRetention` settings require both a mode and a period.\n> - The `DefaultRetention` period can be either `Days` or `Years` but you must select one. You cannot specify `Days` and `Years` at the same time.\n> - You can enable Object Lock for new or existing buckets. For more information, see [Configuring Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-configure.html) .", @@ -45196,6 +45443,28 @@ "Key": "Name of the object key.", "Value": "Value of the tag." }, + "AWS::S3Express::AccessPoint": { + "Bucket": "The name of the bucket that you want to associate the access point with.", + "BucketAccountId": "The AWS account ID that owns the bucket associated with this access point.", + "Name": "An access point name consists of a base name that you provide, followed by the zone ID (the AWS Local Zone ID), followed by the suffix `--xa-s3` . For example, accesspointname--zoneID--xa-s3.", + "Policy": "The access point policy associated with the specified access point.", + "PublicAccessBlockConfiguration": "Public access is blocked by default to access points for directory buckets.", + "Scope": "You can use the access point scope to restrict access to specific prefixes, API operations, or a combination of both.\n\nFor more information, see [Manage the scope of your access points for directory buckets.](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-directory-buckets-manage-scope.html)", + "VpcConfiguration": "If you include this field, Amazon S3 restricts access to this access point to requests from the specified virtual private cloud (VPC)." + }, + "AWS::S3Express::AccessPoint PublicAccessBlockConfiguration": { + "BlockPublicAcls": "Specifies whether Amazon S3 should block public access control lists (ACLs) for this bucket and objects in this bucket.
Setting this element to `TRUE` causes the following behavior:\n\n- PUT Bucket ACL and PUT Object ACL calls fail if the specified ACL is public.\n- PUT Object calls fail if the request includes a public ACL.\n- PUT Bucket calls fail if the request includes a public ACL.\n\nEnabling this setting doesn't affect existing policies or ACLs.", + "BlockPublicPolicy": "Specifies whether Amazon S3 should block public bucket policies for this bucket. Setting this element to `TRUE` causes Amazon S3 to reject calls to PUT Bucket policy if the specified bucket policy allows public access.\n\nEnabling this setting doesn't affect existing bucket policies.", + "IgnorePublicAcls": "Specifies whether Amazon S3 should ignore public ACLs for this bucket and objects in this bucket. Setting this element to `TRUE` causes Amazon S3 to ignore all public ACLs on this bucket and objects in this bucket.\n\nEnabling this setting doesn't affect the persistence of any existing ACLs and doesn't prevent new public ACLs from being set.", + "RestrictPublicBuckets": "Specifies whether Amazon S3 should restrict public bucket policies for this bucket. Setting this element to `TRUE` restricts access to this bucket to only AWS service principals and authorized users within this account if the bucket has a public policy.\n\nEnabling this setting doesn't affect previously stored bucket policies, except that public and cross-account access within any public bucket policy, including non-public delegation to specific accounts, is blocked." + }, + "AWS::S3Express::AccessPoint Scope": { + "Permissions": "You can include one or more API operations as permissions.", + "Prefixes": "You can specify any number of prefixes, but the total length of all prefixes must be less than 256 bytes." + }, + "AWS::S3Express::AccessPoint VpcConfiguration": { + "VpcId": "If this field is specified, this access point will only allow connections from the specified VPC ID." + }, "AWS::S3Express::BucketPolicy": { "Bucket": "The name of the S3 directory bucket to which the policy applies.", "PolicyDocument": "A policy document containing permissions to add to the specified bucket. In IAM, you must provide policy documents in JSON format. However, in CloudFormation you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to IAM. For more information, see the AWS::IAM::Policy [PolicyDocument](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html#cfn-iam-policy-policydocument) resource description in this guide and [Policies and Permissions in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html) in the *Amazon S3 User Guide* ." @@ -45334,9 +45603,14 @@ "NetworkInterfaceId": "The ID for the network interface." }, "AWS::S3Tables::TableBucket": { + "EncryptionConfiguration": "Configuration specifying how data should be encrypted. This structure defines the encryption algorithm and optional KMS key to be used for server-side encryption.", "TableBucketName": "The name for the table bucket.", "UnreferencedFileRemoval": "The unreferenced file removal settings for your table bucket. Unreferenced file removal identifies and deletes all objects that are not referenced by any table snapshots. For more information, see the [*Amazon S3 User Guide*](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-table-buckets-maintenance.html) ."
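As a rough sketch of the `AWS::S3Express::AccessPoint` scope described above: the bucket name, access point base name, prefixes, and the exact permission strings are illustrative placeholders under the property names given in these entries, not values confirmed by this schema:

```json
{
  "Resources": {
    "DirectoryBucketAccessPoint": {
      "Type": "AWS::S3Express::AccessPoint",
      "Properties": {
        "Name": "reports-ap",
        "Bucket": "my-bucket--usw2-az1--x-s3",
        "Scope": {
          "Permissions": ["GetObject", "PutObject"],
          "Prefixes": ["reports/", "logs/2025/"]
        }
      }
    }
  }
}
```

Note that the two prefixes here total well under the 256-byte limit that the `Prefixes` entry calls out.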
}, + "AWS::S3Tables::TableBucket EncryptionConfiguration": { + "KMSKeyArn": "The Amazon Resource Name (ARN) of the KMS key to use for encryption. This field is required only when `sseAlgorithm` is set to `aws:kms` .", + "SSEAlgorithm": "The server-side encryption algorithm to use. Valid values are `AES256` for S3-managed encryption keys, or `aws:kms` for AWS KMS-managed encryption keys. If you choose SSE-KMS encryption you must grant the S3 Tables maintenance principal access to your KMS key. For more information, see [Permissions requirements for S3 Tables SSE-KMS encryption](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-tables-kms-permissions.html) ." + }, "AWS::S3Tables::TableBucket UnreferencedFileRemoval": { "NoncurrentDays": "The number of days an object can be noncurrent before Amazon S3 deletes it.", "Status": "The status of the unreferenced file removal configuration for your table bucket.", @@ -45495,6 +45769,7 @@ "AWS::SES::MailManagerIngressPoint": { "IngressPointConfiguration": "The configuration of the ingress endpoint resource.", "IngressPointName": "A user friendly name for an ingress endpoint resource.", + "NetworkConfiguration": "The network type (IPv4-only, Dual-Stack, PrivateLink) of the ingress endpoint resource.", "RuleSetId": "The identifier of an existing rule set that you attach to an ingress endpoint resource.", "StatusToUpdate": "The update status of an ingress endpoint.", "Tags": "The tags used to organize, track, or control access for the resource. For example, { \"tags\": {\"key1\":\"value1\", \"key2\":\"value2\"} }.", @@ -45505,6 +45780,16 @@ "SecretArn": "The SecretsManager::Secret ARN of the ingress endpoint resource.", "SmtpPassword": "The password of the ingress endpoint resource." }, + "AWS::SES::MailManagerIngressPoint NetworkConfiguration": { + "PrivateNetworkConfiguration": "Specifies the network configuration for the private ingress point.", + "PublicNetworkConfiguration": "Specifies the network configuration for the public ingress point." + }, + "AWS::SES::MailManagerIngressPoint PrivateNetworkConfiguration": { + "VpcEndpointId": "The identifier of the VPC endpoint to associate with this private ingress point." + }, + "AWS::SES::MailManagerIngressPoint PublicNetworkConfiguration": { + "IpType": "The IP address type for the public ingress point. Valid values are IPV4 and DUAL_STACK." + }, "AWS::SES::MailManagerIngressPoint Tag": { "Key": "The key of the key-value tag.", "Value": "The value of the key-value tag." @@ -45582,6 +45867,7 @@ "Operator": "The matching operator for a boolean condition expression." }, "AWS::SES::MailManagerRuleSet RuleBooleanToEvaluate": { + "Analysis": "The Add On ARN and its returned value to evaluate in a boolean condition expression.", "Attribute": "The boolean type representing the allowed attribute types for an email." }, "AWS::SES::MailManagerRuleSet RuleCondition": { @@ -45618,6 +45904,7 @@ "Values": "The string(s) to be evaluated in a string condition expression. For all operators, except for NOT_EQUALS, if multiple values are given, the values are processed as an OR. That is, if any of the values match the email's string using the given operator, the condition is deemed to match. However, for NOT_EQUALS, the condition is only deemed to match if none of the given strings match the email's string." 
}, "AWS::SES::MailManagerRuleSet RuleStringToEvaluate": { + "Analysis": "The Add On ARN and its returned value to evaluate in a string condition expression.", "Attribute": "The email attribute to evaluate in a string condition expression.", "MimeHeaderAttribute": "The email MIME X-Header attribute to evaluate in a string condition expression." }, @@ -45671,12 +45958,21 @@ "Operator": "The matching operator for an IP condition expression.", "Values": "The right hand side argument of an IP condition expression." }, + "AWS::SES::MailManagerTrafficPolicy IngressIpv6Expression": { + "Evaluate": "", + "Operator": "", + "Values": "" + }, + "AWS::SES::MailManagerTrafficPolicy IngressIpv6ToEvaluate": { + "Attribute": "" + }, "AWS::SES::MailManagerTrafficPolicy IngressStringExpression": { "Evaluate": "The left hand side argument of a string condition expression.", "Operator": "", "Values": "The right hand side argument of a string condition expression." }, "AWS::SES::MailManagerTrafficPolicy IngressStringToEvaluate": { + "Analysis": "", "Attribute": "The enum type representing the allowed attribute types for a string condition." }, "AWS::SES::MailManagerTrafficPolicy IngressTlsProtocolExpression": { @@ -45690,6 +45986,7 @@ "AWS::SES::MailManagerTrafficPolicy PolicyCondition": { "BooleanExpression": "This represents a boolean type condition matching on the incoming mail. It performs the boolean operation configured in 'Operator' and evaluates the 'Protocol' object against the 'Value'.", "IpExpression": "This represents an IP based condition matching on the incoming mail. It performs the operation configured in 'Operator' and evaluates the 'Protocol' object against the 'Value'.", + "Ipv6Expression": "", "StringExpression": "This represents a string based condition matching on the incoming mail. It performs the string operation configured in 'Operator' and evaluates the 'Protocol' object against the 'Value'.", "TlsExpression": "This represents a TLS based condition matching on the incoming mail. It performs the operation configured in 'Operator' and evaluates the 'Protocol' object against the 'Value'." }, @@ -46285,7 +46582,7 @@ "AWS::SSMQuickSetup::ConfigurationManager ConfigurationDefinition": { "LocalDeploymentAdministrationRoleArn": "The ARN of the IAM role used to administrate local configuration deployments.", "LocalDeploymentExecutionRoleName": "The name of the IAM role used to deploy local configurations.", - "Parameters": "The parameters for the configuration definition type. Parameters for configuration definitions vary based the configuration type. The following lists outline the parameters for each configuration type.\n\n- **AWS Config Recording (Type: AWS QuickSetupType-CFGRecording)** - - `RecordAllResources`\n\n- Description: (Optional) A boolean value that determines whether all supported resources are recorded. The default value is \" `true` \".\n- `ResourceTypesToRecord`\n\n- Description: (Optional) A comma separated list of resource types you want to record.\n- `RecordGlobalResourceTypes`\n\n- Description: (Optional) A boolean value that determines whether global resources are recorded with all resource configurations. The default value is \" `false` \".\n- `GlobalResourceTypesRegion`\n\n- Description: (Optional) Determines the AWS Region where global resources are recorded.\n- `UseCustomBucket`\n\n- Description: (Optional) A boolean value that determines whether a custom Amazon S3 bucket is used for delivery. 
The default value is \" `false` \".\n- `DeliveryBucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want AWS Config to deliver configuration snapshots and configuration history files to.\n- `DeliveryBucketPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `NotificationOptions`\n\n- Description: (Optional) Determines the notification configuration for the recorder. The valid values are `NoStreaming` , `UseExistingTopic` , and `CreateTopic` . The default value is `NoStreaming` .\n- `CustomDeliveryTopicAccountId`\n\n- Description: (Optional) The ID of the AWS account where the Amazon SNS topic you want to use for notifications resides. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `CustomDeliveryTopicName`\n\n- Description: (Optional) The name of the Amazon SNS topic you want to use for notifications. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(7 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Change Manager (Type: AWS QuickSetupType-SSMChangeMgr)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `JobFunction`\n\n- Description: (Required) The name for the Change Manager job function.\n- `PermissionType`\n\n- Description: (Optional) Specifies whether you want to use default administrator permissions for the job function role, or provide a custom IAM policy. The valid values are `CustomPermissions` and `AdminPermissions` . The default value for the parameter is `CustomerPermissions` .\n- `CustomPermissions`\n\n- Description: (Optional) A JSON string containing the IAM policy you want your job function to use. You must provide a value for this parameter if you specify `CustomPermissions` for the `PermissionType` parameter.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Conformance Packs (Type: AWS QuickSetupType-CFGCPacks)** - - `DelegatedAccountId`\n\n- Description: (Optional) The ID of the delegated administrator account. This parameter is required for Organization deployments.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `none` \".\n- `CPackNames`\n\n- Description: (Required) A comma separated list of AWS Config conformance packs.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Default Host Management Configuration (Type: AWS QuickSetupType-DHMC)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) The AWS Regions to deploy the configuration to. For this type, the parameter only accepts a value of `AllRegions` .\n- **DevOps\u00a0Guru (Type: AWS QuickSetupType-DevOpsGuru)** - - `AnalyseAllResources`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru analyzes all AWS CloudFormation stacks in the account. The default value is \" `false` \".\n- `EnableSnsNotifications`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru sends notifications when an insight is created. The default value is \" `true` \".\n- `EnableSsmOpsItems`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru creates an OpsCenter OpsItem when an insight is created. The default value is \" `true` \".\n- `EnableDriftRemediation`\n\n- Description: (Optional) A boolean value that determines whether a drift remediation schedule is used. The default value is \" `false` \".\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Distributor (Type: AWS QuickSetupType-Distributor)** - - `PackagesToInstall`\n\n- Description: (Required) A comma separated list of packages you want to install on the target instances. The valid values are `AWSEFSTools` , `AWSCWAgent` , and `AWSEC2LaunchAgent` .\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `rate(30 days)` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Host Management (Type: AWS QuickSetupType-SSMHostMgmt)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `UpdateEc2LaunchAgent`\n\n- Description: (Optional) A boolean value that determines whether the EC2 Launch agent is updated on the target instances every month. The default value is \" `false` \".\n- `CollectInventory`\n\n- Description: (Optional) A boolean value that determines whether instance metadata is collected on the target instances every 30 minutes. The default value is \" `true` \".\n- `ScanInstances`\n\n- Description: (Optional) A boolean value that determines whether the target instances are scanned daily for available patches. The default value is \" `true` \".\n- `InstallCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is installed on the target instances. The default value is \" `false` \".\n- `UpdateCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is updated on the target instances every month. The default value is \" `false` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. 
The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Optional) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Optional) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Optional) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **OpsCenter (Type: AWS QuickSetupType-SSMOpsCenter)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Patch Policy (Type: AWS QuickSetupType-PatchPolicy)** - - `PatchPolicyName`\n\n- Description: (Required) A name for the patch policy. The value you provide is applied to target Amazon EC2 instances as a tag.\n- `SelectedPatchBaselines`\n\n- Description: (Required) An array of JSON objects containing the information for the patch baselines to include in your patch policy.\n- `PatchBaselineUseDefault`\n\n- Description: (Optional) A value that determines whether the selected patch baselines are all AWS provided. Supported values are `default` and `custom` .\n- `PatchBaselineRegion`\n\n- Description: (Required) The AWS Region where the patch baseline exist.\n- `ConfigurationOptionsPatchOperation`\n\n- Description: (Optional) Determines whether target instances scan for available patches, or scan and install available patches. The valid values are `Scan` and `ScanAndInstall` . 
The default value for the parameter is `Scan` .\n- `ConfigurationOptionsScanValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances scan for available patches.\n- `ConfigurationOptionsInstallValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances install available patches.\n- `ConfigurationOptionsScanNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `ConfigurationOptionsInstallNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `RebootOption`\n\n- Description: (Optional) Determines whether instances are rebooted after patches are installed. Valid values are `RebootIfNeeded` and `NoReboot` .\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `OutputLogEnableS3`\n\n- Description: (Optional) A boolean value that determines whether command output logs are sent to Amazon S3.\n- `OutputS3Location`\n\n- Description: (Optional) A JSON string containing information about the Amazon S3 bucket where you want to store the output details of the request.\n\n- `OutputS3BucketRegion`\n\n- Description: (Optional) The AWS Region where the Amazon S3 bucket you want to deliver command output to is located.\n- `OutputS3BucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want to deliver command output to.\n- `OutputS3KeyPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. 
A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Explorer (Type: AWS QuickSetupType-ResourceExplorer)** - - `SelectedAggregatorRegion`\n\n- Description: (Required) The AWS Region where you want to create the aggregator index.\n- `ReplaceExistingAggregator`\n\n- Description: (Required) A boolean value that determines whether to demote an existing aggregator if it is in a Region that differs from the value you specify for the `SelectedAggregatorRegion` .\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Scheduler (Type: AWS QuickSetupType-Scheduler)** - - `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target.\n- `ICalendarString`\n\n- Description: (Required) An iCalendar formatted string containing the schedule you want Change Manager to use.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.", + "Parameters": "The parameters for the configuration definition type. Parameters for configuration definitions vary based on the configuration type. The following lists outline the parameters for each configuration type.\n\n- **AWS Config Recording (Type: AWS QuickSetupType-CFGRecording)** - - `RecordAllResources`\n\n- Description: (Optional) A boolean value that determines whether all supported resources are recorded. The default value is \" `true` \".\n- `ResourceTypesToRecord`\n\n- Description: (Optional) A comma separated list of resource types you want to record.\n- `RecordGlobalResourceTypes`\n\n- Description: (Optional) A boolean value that determines whether global resources are recorded with all resource configurations. The default value is \" `false` \".\n- `GlobalResourceTypesRegion`\n\n- Description: (Optional) Determines the AWS Region where global resources are recorded.\n- `UseCustomBucket`\n\n- Description: (Optional) A boolean value that determines whether a custom Amazon S3 bucket is used for delivery.
The default value is \" `false` \".\n- `DeliveryBucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want AWS Config to deliver configuration snapshots and configuration history files to.\n- `DeliveryBucketPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `NotificationOptions`\n\n- Description: (Optional) Determines the notification configuration for the recorder. The valid values are `NoStreaming` , `UseExistingTopic` , and `CreateTopic` . The default value is `NoStreaming` .\n- `CustomDeliveryTopicAccountId`\n\n- Description: (Optional) The ID of the AWS account where the Amazon SNS topic you want to use for notifications resides. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `CustomDeliveryTopicName`\n\n- Description: (Optional) The name of the Amazon SNS topic you want to use for notifications. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(7 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Change Manager (Type: AWS QuickSetupType-SSMChangeMgr)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `JobFunction`\n\n- Description: (Required) The name for the Change Manager job function.\n- `PermissionType`\n\n- Description: (Optional) Specifies whether you want to use default administrator permissions for the job function role, or provide a custom IAM policy. The valid values are `CustomPermissions` and `AdminPermissions` . The default value for the parameter is `CustomerPermissions` .\n- `CustomPermissions`\n\n- Description: (Optional) A JSON string containing the IAM policy you want your job function to use. You must provide a value for this parameter if you specify `CustomPermissions` for the `PermissionType` parameter.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Conformance Packs (Type: AWS QuickSetupType-CFGCPacks)** - - `DelegatedAccountId`\n\n- Description: (Optional) The ID of the delegated administrator account. This parameter is required for Organization deployments.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `none` \".\n- `CPackNames`\n\n- Description: (Required) A comma separated list of AWS Config conformance packs.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Default Host Management Configuration (Type: AWS QuickSetupType-DHMC)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) The AWS Regions to deploy the configuration to. For this type, the parameter only accepts a value of `AllRegions` .\n- **DevOps\u00a0Guru (Type: AWS QuickSetupType-DevOpsGuru)** - - `AnalyseAllResources`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru analyzes all AWS CloudFormation stacks in the account. The default value is \" `false` \".\n- `EnableSnsNotifications`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru sends notifications when an insight is created. The default value is \" `true` \".\n- `EnableSsmOpsItems`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru creates an OpsCenter OpsItem when an insight is created. The default value is \" `true` \".\n- `EnableDriftRemediation`\n\n- Description: (Optional) A boolean value that determines whether a drift remediation schedule is used. The default value is \" `false` \".\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Distributor (Type: AWS QuickSetupType-Distributor)** - - `PackagesToInstall`\n\n- Description: (Required) A comma separated list of packages you want to install on the target instances. The valid values are `AWSEFSTools` , `AWSCWAgent` , and `AWSEC2LaunchAgent` .\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `rate(30 days)` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instance profiles already associated with the target instances. The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Host Management (Type: AWS QuickSetupType-SSMHostMgmt)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `UpdateEc2LaunchAgent`\n\n- Description: (Optional) A boolean value that determines whether the EC2 Launch agent is updated on the target instances every month. The default value is \" `false` \".\n- `CollectInventory`\n\n- Description: (Optional) A boolean value that determines whether instance metadata is collected on the target instances every 30 minutes. The default value is \" `true` \".\n- `ScanInstances`\n\n- Description: (Optional) A boolean value that determines whether the target instances are scanned daily for available patches. The default value is \" `true` \".\n- `InstallCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is installed on the target instances. The default value is \" `false` \".\n- `UpdateCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is updated on the target instances every month. The default value is \" `false` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instance profiles already associated with the target instances. 
The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Optional) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Optional) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Optional) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **OpsCenter (Type: AWS QuickSetupType-SSMOpsCenter)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Patch Policy (Type: AWS QuickSetupType-PatchPolicy)** - - `PatchPolicyName`\n\n- Description: (Required) A name for the patch policy. The value you provide is applied to target Amazon EC2 instances as a tag.\n- `SelectedPatchBaselines`\n\n- Description: (Required) An array of JSON objects containing the information for the patch baselines to include in your patch policy.\n- `PatchBaselineUseDefault`\n\n- Description: (Optional) A value that determines whether the selected patch baselines are all AWS provided. Supported values are `default` and `custom` .\n- `PatchBaselineRegion`\n\n- Description: (Required) The AWS Region where the patch baseline exists.\n- `ConfigurationOptionsPatchOperation`\n\n- Description: (Optional) Determines whether target instances scan for available patches, or scan and install available patches. The valid values are `Scan` and `ScanAndInstall` . 
The default value for the parameter is `Scan` .\n- `ConfigurationOptionsScanValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances scan for available patches.\n- `ConfigurationOptionsInstallValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances install available patches.\n- `ConfigurationOptionsScanNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `ConfigurationOptionsInstallNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should install available patches at the next cron interval. The default value is \" `false` \".\n- `RebootOption`\n\n- Description: (Optional) Determines whether instances are rebooted after patches are installed. Valid values are `RebootIfNeeded` and `NoReboot` .\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instance profiles already associated with the target instances. The default value is \" `false` \".\n- `OutputLogEnableS3`\n\n- Description: (Optional) A boolean value that determines whether command output logs are sent to Amazon S3.\n- `OutputS3Location`\n\n- Description: (Optional) Information about the Amazon S3 bucket where you want to store the output details of the request.\n\n- `OutputBucketRegion`\n\n- Description: (Optional) The AWS Region where the Amazon S3 bucket you want to deliver command output to is located.\n- `OutputS3BucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want to deliver command output to.\n- `OutputS3KeyPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. 
A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Explorer (Type: AWS QuickSetupType-ResourceExplorer)** - - `SelectedAggregatorRegion`\n\n- Description: (Required) The AWS Region where you want to create the aggregator index.\n- `ReplaceExistingAggregator`\n\n- Description: (Required) A boolean value that determines whether to demote an existing aggregator if it is in a Region that differs from the value you specify for the `SelectedAggregatorRegion` .\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Scheduler (Type: AWS QuickSetupType-Scheduler)** - - `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target.\n- `ICalendarString`\n\n- Description: (Required) An iCalendar formatted string containing the schedule you want Change Manager to use.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.", "Type": "The type of the Quick Setup configuration.", "TypeVersion": "The version of the Quick Setup type used.", "id": "The ID of the configuration definition." @@ -46451,7 +46748,7 @@ "InstanceType": "The instance type of the instance group of a SageMaker HyperPod cluster.", "LifeCycleConfig": "The lifecycle configuration for a SageMaker HyperPod cluster.", "OnStartDeepHealthChecks": "A flag indicating whether deep health checks should be performed when the HyperPod cluster instance group is created or updated. Deep health checks are comprehensive, invasive tests that validate the health of the underlying hardware and infrastructure components.", - "OverrideVpcConfig": "", + "OverrideVpcConfig": "The customized Amazon VPC configuration at the instance group level that overrides the default Amazon VPC configuration of the SageMaker HyperPod cluster.", "ThreadsPerCore": "The number of threads per CPU core you specified under `CreateCluster` ." }, "AWS::SageMaker::Cluster ClusterInstanceStorageConfig": { @@ -48017,22 +48314,23 @@ "Content": "A base64-encoded string that contains a shell script for a notebook instance lifecycle configuration." }, "AWS::SageMaker::PartnerApp": { - "ApplicationConfig": "", - "AuthType": "", - "EnableIamSessionBasedIdentity": "", - "ExecutionRoleArn": "", - "MaintenanceConfig": "", - "Name": "The name of the SageMaker Partner AI App.", - "Tags": "", - "Tier": "", - "Type": "The type of SageMaker Partner AI App to create. 
Must be one of the following: `lakera-guard` , `comet` , `deepchecks-llm-evaluation` , or `fiddler` ."
+    "ApplicationConfig": "Configuration settings for the Partner AI App.",
+    "AuthType": "Defines the authentication type used for the Partner AI App.",
+    "EnableIamSessionBasedIdentity": "Enables IAM session-based identity for the PartnerApp.",
+    "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the IAM role of the user.",
+    "KmsKeyId": "The AWS KMS customer managed key used to encrypt the data associated with the PartnerApp.",
+    "MaintenanceConfig": "A collection of settings that specify the maintenance schedule for the PartnerApp.",
+    "Name": "The name of the Partner AI App. This name must be unique within your account and AWS Region.",
+    "Tags": "A list of tags to apply to the PartnerApp.",
+    "Tier": "Specifies the tier or level of the Partner AI App. The tier size impacts the speed and capabilities of the application. For more information, see [Set up Partner AI Apps](https://docs.aws.amazon.com/sagemaker/latest/dg/partner-app-onboard.html) .",
+    "Type": "Specifies the type of Partner AI App being created."
 },
 "AWS::SageMaker::PartnerApp PartnerAppConfig": {
-    "AdminUsers": "The list of users that are given admin access to the SageMaker Partner AI App.",
-    "Arguments": "This is a map of required inputs for a SageMaker Partner AI App. Based on the application type, the map is populated with a key and value pair that is specific to the user and application."
+    "AdminUsers": "A list of users that will have administrative access to the Partner AI App.",
+    "Arguments": "Additional arguments passed to the Partner AI App during initialization or runtime."
 },
 "AWS::SageMaker::PartnerApp PartnerAppMaintenanceConfig": {
-    "MaintenanceWindowStart": "The day and time of the week in Coordinated Universal Time (UTC) 24-hour standard time that weekly maintenance updates are scheduled. This value must take the following format: `3-letter-day:24-h-hour:minute` . For example: `TUE:03:30` ."
+    "MaintenanceWindowStart": "The maintenance window start day and time for the PartnerApp."
 },
 "AWS::SageMaker::PartnerApp Tag": {
 "Key": "The tag key. Tag keys must be unique per resource.",
@@ -49244,7 +49542,7 @@
 },
 "AWS::StepFunctions::Activity": {
 "EncryptionConfiguration": "Encryption configuration for the activity.\n\nActivity configuration is immutable, and resource names must be unique. To set customer managed keys for encryption, you must create a *new Activity* . If you attempt to change the configuration in your CFN template for an existing activity, you will receive an `ActivityAlreadyExists` exception.\n\nTo update your activity to include customer managed keys, set a new activity name within your AWS CloudFormation template.",
-    "Name": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? *`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.",
+    "Name": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? 
*`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` , `U+FFFE-FFFF` )\n- surrogates ( `U+D800-DFFF` )\n- invalid characters ( `U+10FFFF` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.",
 "Tags": "The list of tags to add to a resource.\n\nTags may only contain Unicode letters, digits, white space, or these symbols: `_ . : / = + - @` ."
 },
 "AWS::StepFunctions::Activity EncryptionConfiguration": {
@@ -49597,11 +49895,11 @@
 "Value": "Contains one or more values that you assigned to the key name you create."
 },
 "AWS::Transfer::Certificate": {
-    "ActiveDate": "An optional date that specifies when the certificate becomes active.",
+    "ActiveDate": "An optional date that specifies when the certificate becomes active. If you do not specify a value, `ActiveDate` takes the same value as `NotBeforeDate` , which is specified by the CA.",
 "Certificate": "The file name for the certificate.",
 "CertificateChain": "The list of certificates that make up the chain for the certificate.",
 "Description": "The name or description that's used to identify the certificate.",
-    "InactiveDate": "An optional date that specifies when the certificate becomes inactive.",
+    "InactiveDate": "An optional date that specifies when the certificate becomes inactive. If you do not specify a value, `InactiveDate` takes the same value as `NotAfterDate` , which is specified by the CA.",
 "PrivateKey": "The file that contains the private key for the certificate that's being imported.",
 "Tags": "Key-value pairs that can be used to group and search for certificates.",
 "Usage": "Specifies how this certificate is used. It can be used in the following ways:\n\n- `SIGNING` : For signing AS2 messages\n- `ENCRYPTION` : For encrypting AS2 messages\n- `TLS` : For securing AS2 communications sent over HTTPS"
 },
@@ -49624,7 +49922,7 @@
 "Compression": "Specifies whether the AS2 file is compressed.",
 "EncryptionAlgorithm": "The algorithm that is used to encrypt the file.\n\nNote the following:\n\n- Do not use the `DES_EDE3_CBC` algorithm unless you must support a legacy client that requires it, as it is a weak encryption algorithm.\n- You can only specify `NONE` if the URL for your connector uses HTTPS. Using HTTPS ensures that no traffic is sent in clear text.",
 "LocalProfileId": "A unique identifier for the AS2 local profile.",
-    "MdnResponse": "Used for outbound requests (from an AWS Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.",
+    "MdnResponse": "Used for outbound requests (from an AWS Transfer Family connector to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. 
Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.",
 "MdnSigningAlgorithm": "The signing algorithm for the MDN response.\n\n> If set to DEFAULT (or not set at all), the value for `SigningAlgorithm` is used.",
 "MessageSubject": "Used as the `Subject` HTTP header attribute in AS2 messages that are being sent with the connector.",
 "PartnerProfileId": "A unique identifier for the partner profile for the connector.",
@@ -49632,8 +49930,8 @@
 "SigningAlgorithm": "The algorithm that is used to sign the AS2 messages sent with the connector."
 },
 "AWS::Transfer::Connector SftpConfig": {
-    "TrustedHostKeys": "The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. You can use the `ssh-keyscan` command against the SFTP server to retrieve the necessary key.\n\nThe three standard SSH public key format elements are `<key type>` , `<body base64>` , and an optional `<comment>` , with spaces between each element. Specify only the `<key type>` and `<body base64>` : do not enter the `<comment>` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `<key type>` string is `ssh-rsa` .\n- For ECDSA keys, the `<key type>` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza...",
+    "TrustedHostKeys": "The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. You can use the `ssh-keyscan` command against the SFTP server to retrieve the necessary key.\n\n> `TrustedHostKeys` is optional for `CreateConnector` . If not provided, you can use `TestConnection` to retrieve the server host key during the initial connection attempt, and subsequently update the connector with the observed host key. \n\nThe three standard SSH public key format elements are `<key type>` , `<body base64>` , and an optional `<comment>` , with spaces between each element. Specify only the `<key type>` and `<body base64>` : do not enter the `<comment>` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `<key type>` string is `ssh-rsa` .\n- For ECDSA keys, the `<key type>` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza...\n\n> - Required when creating an SFTP connector\n> - Optional when updating an existing SFTP connector"
 },
 "AWS::Transfer::Connector Tag": {
 "Key": "The name assigned to the tag that you create.",
@@ -49656,7 +49954,7 @@
 "EndpointType": "The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.\n\n> After May 19, 2021, you won't be able to create a server using `EndpointType=VPC_ENDPOINT` in your AWS account if your account hasn't already done so before May 19, 2021. 
If you have already created servers with `EndpointType=VPC_ENDPOINT` in your AWS account on or before May 19, 2021, you will not be affected. After this date, use `EndpointType` = `VPC` .\n> \n> For more information, see [Discontinuing the use of VPC_ENDPOINT](https://docs.aws.amazon.com//transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint) .\n> \n> It is recommended that you use `VPC` as the `EndpointType` . With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with `EndpointType` set to `VPC_ENDPOINT` .",
 "IdentityProviderDetails": "Required when `IdentityProviderType` is set to `AWS_DIRECTORY_SERVICE` , `AWS_LAMBDA` or `API_GATEWAY` . Accepts an array containing all of the information required to use a directory in `AWS_DIRECTORY_SERVICE` or invoke a customer-supplied authentication API, including the API Gateway URL. Cannot be specified when `IdentityProviderType` is set to `SERVICE_MANAGED` .",
 "IdentityProviderType": "The mode of authentication for a server. The default value is `SERVICE_MANAGED` , which allows you to store and access user credentials within the AWS Transfer Family service.\n\nUse `AWS_DIRECTORY_SERVICE` to provide access to Active Directory groups in AWS Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in AWS using AD Connector. This option also requires you to provide a Directory ID by using the `IdentityProviderDetails` parameter.\n\nUse the `API_GATEWAY` value to integrate with an identity provider of your choosing. The `API_GATEWAY` setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the `IdentityProviderDetails` parameter.\n\nUse the `AWS_LAMBDA` value to directly use an AWS Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the `Function` parameter for the `IdentityProviderDetails` data type.",
-    "LoggingRole": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.",
+    "LoggingRole": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.",
 "PostAuthenticationLoginBanner": "Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.\n\n> The SFTP protocol does not support post-authentication display banners.",
 "PreAuthenticationLoginBanner": "Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:\n\n`This system is for the use of authorized users only. 
Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.`", "ProtocolDetails": "The protocol settings that are configured for your server.\n\n- To indicate passive mode (for FTP and FTPS protocols), use the `PassiveIp` parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer.\n- To ignore the error that is generated when the client attempts to use the `SETSTAT` command on a file that you are uploading to an Amazon S3 bucket, use the `SetStatOption` parameter. To have the AWS Transfer Family server ignore the `SETSTAT` command and upload files without needing to make any changes to your SFTP client, set the value to `ENABLE_NO_OP` . If you set the `SetStatOption` parameter to `ENABLE_NO_OP` , Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a `SETSTAT` call.\n- To determine whether your AWS Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the `TlsSessionResumptionMode` parameter.\n- `As2Transports` indicates the transport method for the AS2 messages. Currently, only HTTP is supported.\n\nThe `Protocols` parameter is an array of strings.\n\n*Allowed values* : One or more of `SFTP` , `FTPS` , `FTP` , `AS2`", @@ -49733,6 +50031,7 @@ "IdentityProviderDetails": "You can provide a structure that contains the details for the identity provider to use with your web app.\n\nFor more details about this parameter, see [Configure your identity provider for Transfer Family web apps](https://docs.aws.amazon.com//transfer/latest/userguide/webapp-identity-center.html) .", "Tags": "Key-value pairs that can be used to group and search for web apps. Tags are metadata attached to web apps for any purpose.", "WebAppCustomization": "A structure that contains the customization fields for the web app. You can provide a title, logo, and icon to customize the appearance of your web app.", + "WebAppEndpointPolicy": "Setting for the type of endpoint policy for the web app. The default value is `STANDARD` .\n\nIf your web app was created in an AWS GovCloud (US) Region , the value of this parameter can be `FIPS` , which indicates the web app endpoint is FIPS-compliant.", "WebAppUnits": "A union that contains the value for number of concurrent connections or the user sessions on your web app." }, "AWS::Transfer::WebApp IdentityProviderDetails": { @@ -51203,7 +51502,7 @@ "ApiFormat": "The API format used for this AI Prompt.", "AssistantId": "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN.", "Description": "The description of the AI Prompt.", - "ModelId": "The identifier of the model used for this AI Prompt. Model Ids supported are: `anthropic.claude-3-haiku-20240307-v1:0` .", + "ModelId": "The identifier of the model used for this AI Prompt. 
The following model Ids are supported:\n\n- `anthropic.claude-3-haiku-20240307-v1:0`\n- `apac.amazon.nova-lite-v1:0`\n- `apac.amazon.nova-micro-v1:0`\n- `apac.amazon.nova-pro-v1:0`\n- `apac.anthropic.claude-3-5-sonnet-20241022-v2:0`\n- `apac.anthropic.claude-3-haiku-20240307-v1:0`\n- `eu.amazon.nova-lite-v1:0`\n- `eu.amazon.nova-micro-v1:0`\n- `eu.amazon.nova-pro-v1:0`\n- `eu.anthropic.claude-3-7-sonnet-20250219-v1:0`\n- `eu.anthropic.claude-3-haiku-20240307-v1:0`\n- `us.amazon.nova-lite-v1:0`\n- `us.amazon.nova-micro-v1:0`\n- `us.amazon.nova-pro-v1:0`\n- `us.anthropic.claude-3-5-haiku-20241022-v1:0`\n- `us.anthropic.claude-3-7-sonnet-20250219-v1:0`\n- `us.anthropic.claude-3-haiku-20240307-v1:0`",
 "Name": "The name of the AI Prompt",
 "Tags": "The tags used to organize, track, or control access for this resource.",
 "TemplateConfiguration": "The configuration of the prompt template for this AI Prompt.",
diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json
index daa71469a..2d19845b5 100644
--- a/schema_source/cloudformation.schema.json
+++ b/schema_source/cloudformation.schema.json
@@ -565,7 +565,7 @@
 "type": "string"
 },
 "KeyStorageSecurityStandard": {
-      "markdownDescription": "Specifies a cryptographic key management compliance standard used for handling CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions do not support the default. When creating a CA in these Regions, you must provide `FIPS_140_2_LEVEL_2_OR_HIGHER` as the argument for `KeyStorageSecurityStandard` . Failure to do this results in an `InvalidArgsException` with the message, \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in various Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .",
+      "markdownDescription": "Specifies a cryptographic key management compliance standard for handling and protecting CA keys.\n\nDefault: FIPS_140_2_LEVEL_3_OR_HIGHER\n\n> Some AWS Regions don't support the default value. When you create a CA in these Regions, you must use `CCPC_LEVEL_1_OR_HIGHER` for the `KeyStorageSecurityStandard` parameter. If you don't, the operation returns an `InvalidArgsException` with this message: \"A certificate authority cannot be created in this region with the specified security standard.\"\n> \n> For information about security standard support in different AWS Regions, see [Storage and security compliance of AWS Private CA private keys](https://docs.aws.amazon.com/privateca/latest/userguide/data-protection.html#private-keys) .",
 "title": "KeyStorageSecurityStandard",
 "type": "string"
 },
@@ -9771,7 +9771,7 @@
 "type": "string"
 },
 "Description": {
-      "markdownDescription": "A description of the configuration.",
+      "markdownDescription": "A description of the configuration.\n\n> Due to HTTP limitations, this field only supports ASCII characters.",
 "title": "Description",
 "type": "string"
 },
@@ -20595,7 +20595,7 @@
 "type": "number"
 },
 "ResourceId": {
-      "markdownDescription": "The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. 
Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "markdownDescription": "The identifier of the resource associated with the scalable target. 
This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. 
Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .",
 "title": "ResourceId",
 "type": "string"
 },
@@ -20605,7 +20605,7 @@
 "type": "string"
 },
 "ScalableDimension": {
-      "markdownDescription": "The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.",
+      "markdownDescription": "The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.",
 "title": "ScalableDimension",
 "type": "string"
 },
@@ -20776,17 +20776,17 @@
 "type": "string"
 },
 "PolicyType": {
-      "markdownDescription": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.",
+      "markdownDescription": "The scaling policy type.\n\nThe following policy types are supported:\n\n`TargetTrackingScaling` \u2014Not supported for Amazon EMR\n\n`StepScaling` \u2014Not supported for DynamoDB, Amazon Comprehend, Lambda, Amazon Keyspaces, Amazon MSK, Amazon ElastiCache, or Neptune.\n\n`PredictiveScaling` \u2014Only supported for Amazon ECS",
"title": "PolicyType", "type": "string" }, "ResourceId": { - "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. 
Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", + "markdownDescription": "The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.\n\n- ECS service - The resource type is `service` and the unique identifier is the cluster name and service name. Example: `service/my-cluster/my-service` .\n- Spot Fleet - The resource type is `spot-fleet-request` and the unique identifier is the Spot Fleet request ID. Example: `spot-fleet-request/sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE` .\n- EMR cluster - The resource type is `instancegroup` and the unique identifier is the cluster ID and instance group ID. Example: `instancegroup/j-2EEZNYKUA1NTV/ig-1791Y4E1L8YI0` .\n- AppStream 2.0 fleet - The resource type is `fleet` and the unique identifier is the fleet name. Example: `fleet/sample-fleet` .\n- DynamoDB table - The resource type is `table` and the unique identifier is the table name. Example: `table/my-table` .\n- DynamoDB global secondary index - The resource type is `index` and the unique identifier is the index name. Example: `table/my-table/index/my-table-index` .\n- Aurora DB cluster - The resource type is `cluster` and the unique identifier is the cluster name. Example: `cluster:my-db-cluster` .\n- SageMaker endpoint variant - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- Custom resources are not supported with a resource type. This parameter must specify the `OutputValue` from the CloudFormation template stack used to access the resources. The unique identifier is defined by the service provider. More information is available in our [GitHub repository](https://docs.aws.amazon.com/https://github.com/aws/aws-auto-scaling-custom-resource) .\n- Amazon Comprehend document classification endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:document-classifier-endpoint/EXAMPLE` .\n- Amazon Comprehend entity recognizer endpoint - The resource type and unique identifier are specified using the endpoint ARN. Example: `arn:aws:comprehend:us-west-2:123456789012:entity-recognizer-endpoint/EXAMPLE` .\n- Lambda provisioned concurrency - The resource type is `function` and the unique identifier is the function name with a function version or alias name suffix that is not `$LATEST` . Example: `function:my-function:prod` or `function:my-function:1` .\n- Amazon Keyspaces table - The resource type is `table` and the unique identifier is the table name. Example: `keyspace/mykeyspace/table/mytable` .\n- Amazon MSK cluster - The resource type and unique identifier are specified using the cluster ARN. Example: `arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster-1/6357e0b2-0e6a-4b86-a0b4-70df934c2e31-5` .\n- Amazon ElastiCache replication group - The resource type is `replication-group` and the unique identifier is the replication group name. Example: `replication-group/mycluster` .\n- Amazon ElastiCache cache cluster - The resource type is `cache-cluster` and the unique identifier is the cache cluster name. Example: `cache-cluster/mycluster` .\n- Neptune cluster - The resource type is `cluster` and the unique identifier is the cluster name. 
Example: `cluster:mycluster` .\n- SageMaker serverless endpoint - The resource type is `variant` and the unique identifier is the resource ID. Example: `endpoint/my-end-point/variant/KMeansClustering` .\n- SageMaker inference component - The resource type is `inference-component` and the unique identifier is the resource ID. Example: `inference-component/my-inference-component` .\n- Pool of WorkSpaces - The resource type is `workspacespool` and the unique identifier is the pool ID. Example: `workspacespool/wspool-123456` .", "title": "ResourceId", "type": "string" }, "ScalableDimension": { - "markdownDescription": "The scalable dimension. This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", + "markdownDescription": "The scalable dimension. 
This string consists of the service namespace, resource type, and scaling property.\n\n- `ecs:service:DesiredCount` - The task count of an ECS service.\n- `elasticmapreduce:instancegroup:InstanceCount` - The instance count of an EMR Instance Group.\n- `ec2:spot-fleet-request:TargetCapacity` - The target capacity of a Spot Fleet.\n- `appstream:fleet:DesiredCapacity` - The capacity of an AppStream 2.0 fleet.\n- `dynamodb:table:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB table.\n- `dynamodb:table:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB table.\n- `dynamodb:index:ReadCapacityUnits` - The provisioned read capacity for a DynamoDB global secondary index.\n- `dynamodb:index:WriteCapacityUnits` - The provisioned write capacity for a DynamoDB global secondary index.\n- `rds:cluster:ReadReplicaCount` - The count of Aurora Replicas in an Aurora DB cluster. Available for Aurora MySQL-compatible edition and Aurora PostgreSQL-compatible edition.\n- `sagemaker:variant:DesiredInstanceCount` - The number of EC2 instances for a SageMaker model endpoint variant.\n- `custom-resource:ResourceType:Property` - The scalable dimension for a custom resource provided by your own application or service.\n- `comprehend:document-classifier-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend document classification endpoint.\n- `comprehend:entity-recognizer-endpoint:DesiredInferenceUnits` - The number of inference units for an Amazon Comprehend entity recognizer endpoint.\n- `lambda:function:ProvisionedConcurrency` - The provisioned concurrency for a Lambda function.\n- `cassandra:table:ReadCapacityUnits` - The provisioned read capacity for an Amazon Keyspaces table.\n- `cassandra:table:WriteCapacityUnits` - The provisioned write capacity for an Amazon Keyspaces table.\n- `kafka:broker-storage:VolumeSize` - The provisioned volume size (in GiB) for brokers in an Amazon MSK cluster.\n- `elasticache:cache-cluster:Nodes` - The number of nodes for an Amazon ElastiCache cache cluster.\n- `elasticache:replication-group:NodeGroups` - The number of node groups for an Amazon ElastiCache replication group.\n- `elasticache:replication-group:Replicas` - The number of replicas per node group for an Amazon ElastiCache replication group.\n- `neptune:cluster:ReadReplicaCount` - The count of read replicas in an Amazon Neptune DB cluster.\n- `sagemaker:variant:DesiredProvisionedConcurrency` - The provisioned concurrency for a SageMaker serverless endpoint.\n- `sagemaker:inference-component:DesiredCopyCount` - The number of copies across an endpoint for a SageMaker inference component.\n- `workspaces:workspacespool:DesiredUserSessions` - The number of user sessions for the WorkSpaces in the pool.", "title": "ScalableDimension", "type": "string" }, @@ -27173,7 +27173,7 @@ }, "Tags": { "additionalProperties": true, - "markdownDescription": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value-for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` . This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. 
For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.", + "markdownDescription": "Key-value pair tags to be applied to Amazon EC2 resources that are launched in the compute environment. For AWS Batch , these take the form of `\"String1\": \"String2\"` , where `String1` is the tag key and `String2` is the tag value (for example, `{ \"Name\": \"Batch Instance - C4OnDemand\" }` ). This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.\n\nWhen updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .\n\n> This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -27274,7 +27274,7 @@ "type": "number" }, "TerminateJobsOnUpdate": { - "markdownDescription": "Specifies whether jobs are automatically terminated when the computer environment infrastructure is updated. The default value is `false` .", + "markdownDescription": "Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated. The default value is `false` .", "title": "TerminateJobsOnUpdate", "type": "boolean" } @@ -28067,7 +28067,7 @@ "additionalProperties": false, "properties": { "LogDriver": { - "markdownDescription": "The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. \n\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver. 
For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", + "markdownDescription": "The log driver to use for the container. The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.\n\nThe supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .\n\n> Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers. \n\n- **awsfirelens** - Specifies the firelens logging driver. For more information on configuring FireLens, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) in the *Amazon Elastic Container Service Developer Guide* .\n- **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.\n- **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .\n- **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .\n- **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .\n- **json-file** - Specifies the JSON file logging driver.
For more information including usage and options, see [JSON File logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .\n- **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .\n- **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .\n\n> If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software. \n\nThis parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep \"Server API version\"`", "title": "LogDriver", "type": "string" }, @@ -28783,7 +28783,7 @@ "type": "number" }, "ShareDecaySeconds": { - "markdownDescription": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn\u2019t meet your needs.", + "markdownDescription": "The amount of time (in seconds) to use to calculate a fair-share percentage for each share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds). The maximum supported value is 604800 (1 week).\n\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, or a large difference in job count or job run times between share identifiers, and the allocation of resources doesn't meet your needs.", "title": "ShareDecaySeconds", "type": "number" }, @@ -31926,7 +31926,7 @@ "items": { "type": "string" }, - "markdownDescription": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.", + "markdownDescription": "Specifies the AWS Regions that the keyspace is replicated in. You must specify at least two Regions, including the Region that the keyspace is being created in.\n\nTo specify a Region [that's disabled by default](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#rande-manage-enable) , you must first enable the Region. 
For more information, see [Multi-Region replication in AWS Regions disabled by default](https://docs.aws.amazon.com/keyspaces/latest/devguide/multiRegion-replication_how-it-works.html#howitworks_mrr_opt_in) in the *Amazon Keyspaces Developer Guide* .", "title": "RegionList", "type": "array" }, @@ -32931,7 +32931,7 @@ "items": { "type": "string" }, - "markdownDescription": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS`", + "markdownDescription": "The abilities granted to the collaboration creator.\n\n*Allowed values* `CAN_QUERY` | `CAN_RECEIVE_RESULTS` | `CAN_RUN_JOB`", "title": "CreatorMemberAbilities", "type": "array" }, @@ -39205,7 +39205,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. 
For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. 
To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. 
A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field used to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events.
`resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -39528,7 +39528,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . 
To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. 
`sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `eventName` , `eventSource` , `eventType` , `resources.type` (required), `readOnly` , `resources.ARN` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . 
To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events, and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor data events for trails, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor management and data events for event data stores, this is an optional field that you can use to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source. For a list of services supporting network activity events, see [Logging network activity events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-network-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide* .\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - For event data stores, this is an optional field used to filter management and data events on the event type. For trails, this is an optional field to filter data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` .
`errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - For event data stores, this is an optional field used to filter management and data events based on whether the events originated from an AWS Management Console session. For trails, this is an optional field used to filter data events. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - For event data stores, this is an optional field used to filter management and data events for actions taken by specific IAM identities. For trails, this is an optional field used to filter data events. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -67521,7 +67521,7 @@ "items": { "type": "string" }, - "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. This limit only applies when you specify the ProjectionType of `INCLUDE` . 
You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "title": "NonKeyAttributes", "type": "array" }, @@ -68168,7 +68168,7 @@ "items": { "type": "string" }, - "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor local secondary indexes, the total count of `NonKeyAttributes` summed across all of the local secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total.", + "markdownDescription": "Represents the non-key attribute names which will be projected into the index.\n\nFor global and local secondary indexes, the total count of `NonKeyAttributes` summed across all of the secondary indexes, must not exceed 100. If you project the same attribute into two different indexes, this counts as two distinct attributes when determining the total. This limit only applies when you specify the ProjectionType of `INCLUDE` . You still can specify the ProjectionType of `ALL` to project all attributes from the source table, even if the table has more than 100 attributes.", "title": "NonKeyAttributes", "type": "array" }, @@ -73398,7 +73398,7 @@ "type": "string" }, "DeviceIndex": { - "markdownDescription": "The device index for the network interface attachment. If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", + "markdownDescription": "The device index for the network interface attachment. The primary network interface has a device index of 0. If the network interface is of type `interface` , you must specify a device index.\n\nIf you create a launch template that includes secondary network interfaces but no primary network interface, and you specify it using the `LaunchTemplate` property of `AWS::EC2::Instance` , then you must include a primary network interface using the `NetworkInterfaces` property of `AWS::EC2::Instance` .", "title": "DeviceIndex", "type": "number" }, @@ -76573,7 +76573,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::SecurityGroup.Egress" }, - "markdownDescription": "The outbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", + "markdownDescription": "The outbound rules associated with the security group.", "title": "SecurityGroupEgress", "type": "array" }, @@ -76581,7 +76581,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::SecurityGroup.Ingress" }, - "markdownDescription": "The inbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", + "markdownDescription": "The inbound rules associated with the security group.", "title": "SecurityGroupIngress", "type": "array" }, @@ -83809,7 +83809,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. 
Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. 
The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . 
For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", +    "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries.
The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored.
Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" } }, @@ -84979,7 +84979,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to.
If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.\n\nIf you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. 
We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. 
Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` .
The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance.
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" } }, @@ -90813,7 +90813,7 @@ "additionalProperties": false, "properties": { "AtRestEncryptionEnabled": { - "markdownDescription": "A flag that enables encryption at rest when set to `true` .\n\nYou cannot modify the value of `AtRestEncryptionEnabled` after the replication group is created. To enable encryption at rest on a replication group you must set `AtRestEncryptionEnabled` to `true` when you create the replication group.\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", + "markdownDescription": "A flag that enables encryption at rest when set to `true` .\n\n*Required:* Only available when creating a replication group in an Amazon VPC using Redis OSS version `3.2.6` or `4.x` onward.\n\nDefault: `false`", "title": "AtRestEncryptionEnabled", "type": "boolean" }, @@ -91014,7 +91014,7 @@ "type": "array" }, "TransitEncryptionEnabled": { - "markdownDescription": "A flag that enables in-transit encryption when set to `true` .\n\nYou cannot modify the value of `TransitEncryptionEnabled` after the cluster is created. To enable in-transit encryption on a cluster you must set `TransitEncryptionEnabled` to `true` when you create a cluster.\n\nThis parameter is valid only if the `Engine` parameter is `redis` , the `EngineVersion` parameter is `3.2.6` or `4.x` onward, and the cluster is being created in an Amazon VPC.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> - TransitEncryptionEnabled is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above.\n> - TransitEncryptionEnabled is required when creating a new valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", + "markdownDescription": "A flag that enables in-transit encryption when set to `true` .\n\nThis parameter is only available when creating a replication group in an Amazon VPC using Valkey version `7.2` and above, Redis OSS version `3.2.6` , or Redis OSS version `4.x` and above.\n\nIf you enable in-transit encryption, you must also specify a value for `CacheSubnetGroup` .\n\n> TransitEncryptionEnabled is required when creating a new Valkey replication group. \n\nDefault: `false`\n\n> For HIPAA compliance, you must specify `TransitEncryptionEnabled` as `true` , an `AuthToken` , and a `CacheSubnetGroup` .", "title": "TransitEncryptionEnabled", "type": "boolean" }, @@ -99811,7 +99811,7 @@ "type": "number" }, "StorageType": { - "markdownDescription": "Sets the storage class for the file system that you're creating. Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage.
HDD is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", + "markdownDescription": "Sets the storage class for the file system that you're creating. Valid values are `SSD` , `HDD` , and `INTELLIGENT_TIERING` .\n\n- Set to `SSD` to use solid state drive storage. SSD is supported on all Windows, Lustre, ONTAP, and OpenZFS deployment types.\n- Set to `HDD` to use hard disk drive storage, which is supported on `SINGLE_AZ_2` and `MULTI_AZ_1` Windows file system deployment types, and on `PERSISTENT_1` Lustre file system deployment types.\n- Set to `INTELLIGENT_TIERING` to use fully elastic, intelligently-tiered storage. Intelligent-Tiering is only available for OpenZFS file systems with the Multi-AZ deployment type.\n\nDefault value is `SSD` . For more information, see [Storage type options](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *FSx for Windows File Server User Guide* , [Multiple storage options](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html#storage-options) in the *FSx for Lustre User Guide* , and [Working with Intelligent-Tiering](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/performance-intelligent-tiering) in the *Amazon FSx for OpenZFS User Guide* .", "title": "StorageType", "type": "string" }, @@ -99983,7 +99983,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100066,7 +100066,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. 
For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100146,7 +100146,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -100320,7 +100320,7 @@ "type": "number" }, "WeeklyMaintenanceStartTime": { - "markdownDescription": "A recurring weekly time, in the format `D:HH:MM` .\n\n`D` is the day of the week, for which 1 represents Monday and 7 represents Sunday. For further details, see [the ISO-8601 spec as described on Wikipedia](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/ISO_week_date) .\n\n`HH` is the zero-padded hour of the day (0-23), and `MM` is the zero-padded minute of the hour.\n\nFor example, `1:05:00` specifies maintenance at 5 AM Monday.", + "markdownDescription": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday.", "title": "WeeklyMaintenanceStartTime", "type": "string" } @@ -102875,7 +102875,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -112467,7 +112467,7 @@ "items": { "$ref": "#/definitions/AWS::GroundStation::DataflowEndpointGroup.EndpointDetails" }, - "markdownDescription": "List of Endpoint Details, containing address and port for each endpoint.", + "markdownDescription": "List of Endpoint Details, containing address and port for each endpoint. All dataflow endpoints within a single dataflow endpoint group must be of the same type. You cannot mix AWS Ground Station Agent endpoints with Dataflow endpoints in the same group. If your use case requires both types of endpoints, you must create separate dataflow endpoint groups for each type.", "title": "EndpointDetails", "type": "array" }, @@ -120235,7 +120235,7 @@ }, "AuditCheckConfigurations": { "$ref": "#/definitions/AWS::IoT::AccountAuditConfiguration.AuditCheckConfigurations", - "markdownDescription": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on avialbe auidt checks see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", + "markdownDescription": "Specifies which audit checks are enabled and disabled for this account.\n\nSome data collection might start immediately when certain checks are enabled. When a check is disabled, any data collected so far in relation to the check is deleted. To disable a check, set the value of the `Enabled:` key to `false` .\n\nIf an enabled check is removed from the template, it will also be disabled.\n\nYou can't disable a check if it's used by any scheduled audit. You must delete the check from the scheduled audit or delete the scheduled audit itself to disable the check.\n\nFor more information on available audit checks, see [AWS::IoT::AccountAuditConfiguration AuditCheckConfigurations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iot-accountauditconfiguration-auditcheckconfigurations.html)", "title": "AuditCheckConfigurations" }, "AuditNotificationTargetConfigurations": { @@ -122520,7 +122520,7 @@ "items": { "type": "string" }, - "markdownDescription": "Which checks are performed during the scheduled audit. Checks must be enabled for your account.
(Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently aviable:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`", + "markdownDescription": "Which checks are performed during the scheduled audit. Checks must be enabled for your account. (Use `DescribeAccountAuditConfiguration` to see the list of all checks, including those that are enabled or use `UpdateAccountAuditConfiguration` to select which checks are enabled.)\n\nThe following checks are currently available:\n\n- `AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`\n- `CA_CERTIFICATE_EXPIRING_CHECK`\n- `CA_CERTIFICATE_KEY_QUALITY_CHECK`\n- `CONFLICTING_CLIENT_IDS_CHECK`\n- `DEVICE_CERTIFICATE_EXPIRING_CHECK`\n- `DEVICE_CERTIFICATE_KEY_QUALITY_CHECK`\n- `DEVICE_CERTIFICATE_SHARED_CHECK`\n- `IOT_POLICY_OVERLY_PERMISSIVE_CHECK`\n- `IOT_ROLE_ALIAS_ALLOWS_ACCESS_TO_UNUSED_SERVICES_CHECK`\n- `IOT_ROLE_ALIAS_OVERLY_PERMISSIVE_CHECK`\n- `LOGGING_DISABLED_CHECK`\n- `REVOKED_CA_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `REVOKED_DEVICE_CERTIFICATE_STILL_ACTIVE_CHECK`\n- `UNAUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK`", "title": "TargetCheckNames", "type": "array" } @@ -134146,7 +134146,7 @@ "items": { "type": "string" }, - "markdownDescription": "The security groups for the connector.", + "markdownDescription": "The security group IDs for the connector.", "title": "SecurityGroups", "type": "array" }, @@ -149060,7 +149060,7 @@ "additionalProperties": false, "properties": { "DataSource": { - "markdownDescription": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. 
For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on goecoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan. For more information, see the [AWS Service Terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information , see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) on the *Amazon Location Service Developer Guide* .", + "markdownDescription": "Specifies the geospatial data provider for the new place index.\n\n> This field is case-sensitive. Enter the valid values as shown. For example, entering `HERE` returns an error. \n\nValid values include:\n\n- `Esri` \u2013 For additional information about [Esri](https://docs.aws.amazon.com/location/previous/developerguide/esri.html) 's coverage in your region of interest, see [Esri details on geocoding coverage](https://docs.aws.amazon.com/https://developers.arcgis.com/rest/geocode/api-reference/geocode-coverage.htm) .\n- `Grab` \u2013 Grab provides place index functionality for Southeast Asia. For additional information about [GrabMaps](https://docs.aws.amazon.com/location/previous/developerguide/grab.html) ' coverage, see [GrabMaps countries and areas covered](https://docs.aws.amazon.com/location/previous/developerguide/grab.html#grab-coverage-area) .\n- `Here` \u2013 For additional information about [HERE Technologies](https://docs.aws.amazon.com/location/previous/developerguide/HERE.html) ' coverage in your region of interest, see [HERE details on geocoding coverage](https://docs.aws.amazon.com/https://developer.here.com/documentation/geocoder/dev_guide/topics/coverage-geocoder.html) .\n\n> If you specify HERE Technologies ( `Here` ) as the data provider, you may not [store results](https://docs.aws.amazon.com//location-places/latest/APIReference/API_DataSourceConfiguration.html) for locations in Japan.
For more information, see the [AWS Service Terms](https://docs.aws.amazon.com/service-terms/) for Amazon Location Service.\n\nFor additional information, see [Data providers](https://docs.aws.amazon.com/location/previous/developerguide/what-is-data-provider.html) in the *Amazon Location Service Developer Guide* .", "title": "DataSource", "type": "string" }, @@ -151974,12 +151974,12 @@ }, "Firehose": { "$ref": "#/definitions/AWS::MSK::Cluster.Firehose", - "markdownDescription": "", + "markdownDescription": "Details of the Kinesis Data Firehose delivery stream that is the destination for broker logs.", "title": "Firehose" }, "S3": { "$ref": "#/definitions/AWS::MSK::Cluster.S3", - "markdownDescription": "", + "markdownDescription": "Details of the Amazon S3 destination for broker logs.", "title": "S3" } }, @@ -152036,17 +152036,17 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::Cluster.Sasl", - "markdownDescription": "", + "markdownDescription": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true.", "title": "Sasl" }, "Tls": { "$ref": "#/definitions/AWS::MSK::Cluster.Tls", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using TLS. To turn on TLS access control, you must also turn on `EncryptionInTransit` by setting `inCluster` to true and `clientBroker` to `TLS` .", "title": "Tls" }, "Unauthenticated": { "$ref": "#/definitions/AWS::MSK::Cluster.Unauthenticated", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using no authentication.", "title": "Unauthenticated" } }, @@ -152056,12 +152056,12 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get sent to the specified CloudWatch Logs destination.", "title": "Enabled", "type": "boolean" }, "LogGroup": { - "markdownDescription": "", + "markdownDescription": "The CloudWatch log group that is the destination for broker logs.", "title": "LogGroup", "type": "string" } @@ -152075,12 +152075,12 @@ "additionalProperties": false, "properties": { "Arn": { - "markdownDescription": "", + "markdownDescription": "ARN of the configuration to use.", "title": "Arn", "type": "string" }, "Revision": { - "markdownDescription": "", + "markdownDescription": "The revision of the configuration to use.", "title": "Revision", "type": "number" } @@ -152096,12 +152096,12 @@ "properties": { "PublicAccess": { "$ref": "#/definitions/AWS::MSK::Cluster.PublicAccess", - "markdownDescription": "", + "markdownDescription": "Access control settings for the cluster's brokers.", "title": "PublicAccess" }, "VpcConnectivity": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivity", - "markdownDescription": "", + "markdownDescription": "VPC connection control settings for brokers.", "title": "VpcConnectivity" } }, @@ -152112,11 +152112,11 @@ "properties": { "ProvisionedThroughput": { "$ref": "#/definitions/AWS::MSK::Cluster.ProvisionedThroughput", - "markdownDescription": "", + "markdownDescription": "EBS volume provisioned throughput information.", "title": "ProvisionedThroughput" }, "VolumeSize": { - "markdownDescription": "", + "markdownDescription": "The size in GiB of the EBS volume for the data drive on each broker node.", "title": "VolumeSize", "type":
"number" } @@ -152127,7 +152127,7 @@ "additionalProperties": false, "properties": { "DataVolumeKMSKeyId": { - "markdownDescription": "", + "markdownDescription": "The ARN of the Amazon KMS key for encrypting data at rest. If you don't specify a KMS key, MSK creates one for you and uses it.", "title": "DataVolumeKMSKeyId", "type": "string" } @@ -152158,7 +152158,7 @@ "properties": { "EncryptionAtRest": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionAtRest", - "markdownDescription": "", + "markdownDescription": "The data-volume encryption details.", "title": "EncryptionAtRest" }, "EncryptionInTransit": { @@ -152173,12 +152173,12 @@ "additionalProperties": false, "properties": { "DeliveryStream": { - "markdownDescription": "", + "markdownDescription": "The Kinesis Data Firehose delivery stream that is the destination for broker logs.", "title": "DeliveryStream", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get sent to the specified Kinesis Data Firehose delivery stream.", "title": "Enabled", "type": "boolean" } @@ -152192,7 +152192,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152206,7 +152206,7 @@ "additionalProperties": false, "properties": { "EnabledInBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the JMX Exporter.", "title": "EnabledInBroker", "type": "boolean" } @@ -152221,7 +152221,7 @@ "properties": { "BrokerLogs": { "$ref": "#/definitions/AWS::MSK::Cluster.BrokerLogs", - "markdownDescription": "", + "markdownDescription": "You can configure your MSK cluster to send broker logs to different destination types.
This configuration specifies the details of these destinations.", "title": "BrokerLogs" } }, @@ -152234,7 +152234,7 @@ "additionalProperties": false, "properties": { "EnabledInBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the Node Exporter.", "title": "EnabledInBroker", "type": "boolean" } @@ -152249,7 +152249,7 @@ "properties": { "Prometheus": { "$ref": "#/definitions/AWS::MSK::Cluster.Prometheus", - "markdownDescription": "", + "markdownDescription": "Prometheus exporter settings.", "title": "Prometheus" } }, @@ -152263,12 +152263,12 @@ "properties": { "JmxExporter": { "$ref": "#/definitions/AWS::MSK::Cluster.JmxExporter", - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the JMX Exporter.", "title": "JmxExporter" }, "NodeExporter": { "$ref": "#/definitions/AWS::MSK::Cluster.NodeExporter", - "markdownDescription": "", + "markdownDescription": "Indicates whether you want to enable or disable the Node Exporter.", "title": "NodeExporter" } }, @@ -152278,12 +152278,12 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Provisioned throughput is on or off.", "title": "Enabled", "type": "boolean" }, "VolumeThroughput": { - "markdownDescription": "", + "markdownDescription": "Throughput value of the EBS volumes for the data drive on each Kafka broker node in MiB per second.", "title": "VolumeThroughput", "type": "number" } @@ -152294,7 +152294,7 @@ "additionalProperties": false, "properties": { "Type": { - "markdownDescription": "", + "markdownDescription": "DISABLED means that public access is turned off. SERVICE_PROVIDED_EIPS means that public access is turned on.", "title": "Type", "type": "string" } @@ -152305,17 +152305,17 @@ "additionalProperties": false, "properties": { "Bucket": { - "markdownDescription": "", + "markdownDescription": "The name of the S3 bucket that is the destination for broker logs.", "title": "Bucket", "type": "string" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "Specifies whether broker logs get sent to the specified Amazon S3 destination.", "title": "Enabled", "type": "boolean" }, "Prefix": { - "markdownDescription": "", + "markdownDescription": "The S3 prefix that is the destination for broker logs.", "title": "Prefix", "type": "string" } @@ -152330,12 +152330,12 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::Cluster.Iam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM.", "title": "Iam" }, "Scram": { "$ref": "#/definitions/AWS::MSK::Cluster.Scram", - "markdownDescription": "", + "markdownDescription": "Details for SASL/SCRAM client authentication.", "title": "Scram" } }, @@ -152345,7 +152345,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/SCRAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152360,7 +152360,7 @@ "properties": { "EBSStorageInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.EBSStorageInfo", - "markdownDescription": "", + "markdownDescription": "EBS volume information.", "title": "EBSStorageInfo" } }, @@ -152373,12 +152373,12 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "List of AWS Private CA ARNs.", "title": "CertificateAuthorityArnList", "type": "array" }, "Enabled": { - "markdownDescription": "", + "markdownDescription": "TLS
authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152389,7 +152389,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "Unauthenticated is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152404,7 +152404,7 @@ "properties": { "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityClientAuthentication", - "markdownDescription": "", + "markdownDescription": "VPC connection control settings for brokers.", "title": "ClientAuthentication" } }, @@ -152415,12 +152415,12 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivitySasl", - "markdownDescription": "", + "markdownDescription": "Details for VpcConnectivity ClientAuthentication using SASL.", "title": "Sasl" }, "Tls": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityTls", - "markdownDescription": "", + "markdownDescription": "Details for VpcConnectivity ClientAuthentication using TLS.", "title": "Tls" } }, @@ -152430,7 +152430,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152445,12 +152445,12 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityIam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM for VpcConnectivity.", "title": "Iam" }, "Scram": { "$ref": "#/definitions/AWS::MSK::Cluster.VpcConnectivityScram", - "markdownDescription": "", + "markdownDescription": "Details for SASL/SCRAM client authentication for VpcConnectivity.", "title": "Scram" } }, @@ -152460,7 +152460,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/SCRAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -152474,7 +152474,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "TLS authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -153062,7 +153062,7 @@ "properties": { "Sasl": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.Sasl", - "markdownDescription": "", + "markdownDescription": "Details for client authentication using SASL. To turn on SASL, you must also turn on `EncryptionInTransit` by setting `inCluster` to true. You must set `clientBroker` to either `TLS` or `TLS_PLAINTEXT` . If you choose `TLS_PLAINTEXT` , then you must also set `unauthenticated` to true.", "title": "Sasl" } }, @@ -153075,7 +153075,7 @@ "additionalProperties": false, "properties": { "Enabled": { - "markdownDescription": "", + "markdownDescription": "SASL/IAM authentication is enabled or not.", "title": "Enabled", "type": "boolean" } @@ -153090,7 +153090,7 @@ "properties": { "Iam": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.Iam", - "markdownDescription": "", + "markdownDescription": "Details for ClientAuthentication using IAM.", "title": "Iam" } }, @@ -153908,7 +153908,7 @@ "type": "string" }, "Status": { - "markdownDescription": "The status of Amazon Macie for the account. Valid values are: `ENABLED` , start or resume all Macie activities for the account; and, `PAUSED` , suspend all Macie activities for the account.", + "markdownDescription": "The status of Amazon Macie for the account. 
Valid values are: `ENABLED` , start or resume Macie activities for the account; and, `PAUSED` , suspend Macie activities for the account.", "title": "Status", "type": "string" } @@ -171591,7 +171591,7 @@ "type": "object" }, "StorageCapacity": { - "markdownDescription": "The default storage capacity for the workflow runs, in gibibytes.", + "markdownDescription": "The default static storage capacity (in gibibytes) for runs that use this workflow or workflow version.", "title": "StorageCapacity", "type": "number" }, @@ -234337,7 +234337,7 @@ "type": "string" }, "ResourceId": { - "markdownDescription": "The ID of the Amazon Virtual Private Cloud VPC that you're configuring Resolver for.", + "markdownDescription": "The ID of the Amazon Virtual Private Cloud VPC or a Route 53 Profile that you're configuring Resolver for.", "title": "ResourceId", "type": "string" } @@ -260510,7 +260510,7 @@ "additionalProperties": false, "properties": { "Name": { - "markdownDescription": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? *`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.", + "markdownDescription": "The name of the activity.\n\nA name must *not* contain:\n\n- white space\n- brackets `< > { } [ ]`\n- wildcard characters `? *`\n- special characters `\" # % \\ ^ | ~ ` $ & , ; : /`\n- control characters ( `U+0000-001F` , `U+007F-009F` , `U+FFFE-FFFF` )\n- surrogates ( `U+D800-DFFF` )\n- invalid characters ( `U+10FFFF` )\n\nTo enable logging with CloudWatch Logs, the name should only contain 0-9, A-Z, a-z, - and _.", "title": "Name", "type": "string" }, @@ -262771,7 +262771,7 @@ "additionalProperties": false, "properties": { "ActiveDate": { - "markdownDescription": "An optional date that specifies when the certificate becomes active.", + "markdownDescription": "An optional date that specifies when the certificate becomes active. If you do not specify a value, `ActiveDate` takes the same value as `NotBeforeDate` , which is specified by the CA.", "title": "ActiveDate", "type": "string" }, @@ -262791,7 +262791,7 @@ "type": "string" }, "InactiveDate": { - "markdownDescription": "An optional date that specifies when the certificate becomes inactive.", + "markdownDescription": "An optional date that specifies when the certificate becomes inactive. If you do not specify a value, `InactiveDate` takes the same value as `NotAfterDate` , which is specified by the CA.", "title": "InactiveDate", "type": "string" }, @@ -262961,7 +262961,7 @@ "type": "string" }, "MdnResponse": { - "markdownDescription": "Used for outbound requests (from an AWS Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.", + "markdownDescription": "Used for outbound requests (from an AWS Transfer Family connector to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. 
Specify either of the following values:\n\n- `SYNC` : The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).\n- `NONE` : Specifies that no MDN response is required.", "title": "MdnResponse", "type": "string" }, @@ -262995,12 +262995,12 @@ "items": { "type": "string" }, - "markdownDescription": "The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. You can use the `ssh-keyscan` command against the SFTP server to retrieve the necessary key.\n\nThe three standard SSH public key format elements are `` , `` , and an optional `` , with spaces between each element. Specify only the `` and `` : do not enter the `` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `` string is `ssh-rsa` .\n- For ECDSA keys, the `` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza... `TrustedHostKeys` is optional for `CreateConnector` . If not provided, you can use `TestConnection` to retrieve the server host key during the initial connection attempt, and subsequently update the connector with the observed host key. \n\nThe three standard SSH public key format elements are `` , `` , and an optional `` , with spaces between each element. Specify only the `` and `` : do not enter the `` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `` string is `ssh-rsa` .\n- For ECDSA keys, the `` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza... - Required when creating an SFTP connector\n> - Optional when updating an existing SFTP connector", "title": "UserSecretId", "type": "string" } @@ -263162,7 +263162,7 @@ "type": "string" }, "LoggingRole": { - "markdownDescription": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.", + "markdownDescription": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.", "title": "LoggingRole", "type": "string" },
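To make the ECS `awslogs` options described in the `LogConfiguration` hunks above concrete, here is a minimal container-definition fragment. It is an illustrative sketch, not part of the schema change: the Region, log group, prefix, and the `25m` buffer size are placeholder values, and `awslogs-create-group` assumes the task's IAM policy already grants `logs:CreateLogGroup` as the description requires. Note that every value under `Options` is a string, matching the `patternProperties` type in the schema.

```json
{
  "LogConfiguration": {
    "LogDriver": "awslogs",
    "Options": {
      "awslogs-create-group": "true",
      "awslogs-region": "us-east-1",
      "awslogs-group": "/ecs/sample-app",
      "awslogs-stream-prefix": "sample-service",
      "mode": "non-blocking",
      "max-buffer-size": "25m"
    }
  }
}
```

With this prefix, log streams take the documented `prefix-name/container-name/ecs-task-id` form; if `mode` were omitted, the `defaultLogDriverMode` account setting (or `blocking` ) would apply, per the description above.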
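For the `awsfirelens` case, a similar hedged sketch routing to Amazon Data Firehose with the `region` , `delivery_stream` , and `log-driver-buffer-limit` options named in the descriptions; the `Name` key selecting the Fluent Bit output plugin, the plugin name `firehose` , and the buffer value are assumptions based on common FireLens usage, not taken from this schema.

```json
{
  "LogConfiguration": {
    "LogDriver": "awsfirelens",
    "Options": {
      "Name": "firehose",
      "region": "us-east-1",
      "delivery_stream": "sample-delivery-stream",
      "log-driver-buffer-limit": "2097152"
    }
  }
}
```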
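The ElastiCache flags above ( `AtRestEncryptionEnabled` , `TransitEncryptionEnabled` , and the `AuthToken` / `CacheSubnetGroup` pairing called out for HIPAA) fit together as in this sketch; the group name, engine version, node type, and secret reference are placeholders.

```json
{
  "SampleReplicationGroup": {
    "Type": "AWS::ElastiCache::ReplicationGroup",
    "Properties": {
      "ReplicationGroupDescription": "Replication group with both encryption flags set at creation",
      "Engine": "redis",
      "EngineVersion": "7.1",
      "CacheNodeType": "cache.t3.micro",
      "NumCacheClusters": 2,
      "CacheSubnetGroupName": "sample-subnet-group",
      "AtRestEncryptionEnabled": true,
      "TransitEncryptionEnabled": true,
      "AuthToken": "{{resolve:secretsmanager:SampleAuthToken}}"
    }
  }
}
```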
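The FSx `WeeklyMaintenanceStartTime` format ( `d:HH:MM` in UTC, weekday 1 through 7 starting Monday) and a `StorageType` choice can be read together in a sketch like the following; the capacity, subnet ID, and throughput values are illustrative only.

```json
{
  "SampleFileSystem": {
    "Type": "AWS::FSx::FileSystem",
    "Properties": {
      "FileSystemType": "LUSTRE",
      "StorageType": "SSD",
      "StorageCapacity": 1200,
      "SubnetIds": ["subnet-0123456789abcdef0"],
      "LustreConfiguration": {
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 50,
        "WeeklyMaintenanceStartTime": "1:05:00"
      }
    }
  }
}
```

Here `1:05:00` is the 5 AM Monday example used in the descriptions above.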
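The IoT `TargetCheckNames` list above pairs naturally with a scheduled audit. A sketch follows; the audit name, frequency, day, and the particular checks are arbitrary choices from the documented list, and each check must already be enabled through `AWS::IoT::AccountAuditConfiguration` as the description states.

```json
{
  "SampleScheduledAudit": {
    "Type": "AWS::IoT::ScheduledAudit",
    "Properties": {
      "ScheduledAuditName": "WeeklyCertificateAudit",
      "Frequency": "WEEKLY",
      "DayOfWeek": "MON",
      "TargetCheckNames": [
        "CA_CERTIFICATE_EXPIRING_CHECK",
        "DEVICE_CERTIFICATE_EXPIRING_CHECK",
        "LOGGING_DISABLED_CHECK"
      ]
    }
  }
}
```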
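Finally, the MSK `ClientAuthentication` , `EncryptionInfo` , and `BrokerLogs` descriptions filled in above compose as in this sketch. It follows the stated SASL constraint ( `inCluster` true with `clientBroker` set to `TLS` ); the cluster name, Kafka version, subnets, volume size, and log destinations are placeholders, not values from the schema change.

```json
{
  "SampleMskCluster": {
    "Type": "AWS::MSK::Cluster",
    "Properties": {
      "ClusterName": "sample-cluster",
      "KafkaVersion": "3.6.0",
      "NumberOfBrokerNodes": 3,
      "BrokerNodeGroupInfo": {
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
        "StorageInfo": {
          "EBSStorageInfo": { "VolumeSize": 100 }
        }
      },
      "ClientAuthentication": {
        "Sasl": { "Iam": { "Enabled": true } },
        "Unauthenticated": { "Enabled": false }
      },
      "EncryptionInfo": {
        "EncryptionInTransit": { "ClientBroker": "TLS", "InCluster": true }
      },
      "LoggingInfo": {
        "BrokerLogs": {
          "CloudWatchLogs": { "Enabled": true, "LogGroup": "/msk/sample-cluster" },
          "S3": { "Enabled": true, "Bucket": "sample-log-bucket", "Prefix": "msk/" }
        }
      }
    }
  }
}
```

Enabling only SASL/IAM while setting `Unauthenticated` to false mirrors the intent of the new `ClientAuthentication` descriptions: one authenticated path, with unauthenticated access explicitly turned off.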