The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
\n
Accepted characters include lowercase letters, numbers, and hyphens.
An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
The date and time that a backup index was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that a backup index was deleted, in Unix format and Coordinated\n Universal Time (UTC). This value is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that a backup index finished creation, in Unix format and Coordinated\n Universal Time (UTC). This value is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45\n
The date and time that a backup was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that a backup index was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
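The timestamp fields above can be sanity-checked with a short conversion. This is a generic sketch (not part of the service API) that reproduces the documented example value:

```python
from datetime import datetime, timezone

def to_utc(epoch_seconds: float) -> datetime:
    """Convert a Unix timestamp (seconds, millisecond precision) to a UTC datetime."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

# The documented example value:
print(to_utc(1516925490.087).strftime("%Y-%m-%d %H:%M:%S"))  # 2018-01-26 00:11:30
```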
The next item following a partial list of returned recovery points.
\n
For example, if a request\n is made to return MaxResults number of indexed recovery points, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned recovery points.
\n
For example, if a request\n is made to return MaxResults number of indexed recovery points, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
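The NextToken flow described above can be sketched as a simple drain loop. Here `list_page` is a hypothetical stand-in for any boto3-style list operation, not a real client method:

```python
def list_all(list_page, **params):
    """Follow NextToken until the service stops returning one.

    `list_page` is a hypothetical callable standing in for any paginated
    operation that accepts NextToken/MaxResults and returns a dict with
    "Items" and an optional "NextToken".
    """
    items, token = [], None
    while True:
        kwargs = dict(params)
        if token:
            kwargs["NextToken"] = token
        page = list_page(**kwargs)
        items.extend(page.get("Items", []))
        token = page.get("NextToken")
        if not token:
            return items
```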
The backup option for a selected resource. This option is only available for\n Windows Volume Shadow Copy Service (VSS) backup jobs.
\n
Valid values: Set to \"WindowsVSS\":\"enabled\" to enable the\n WindowsVSS backup option and create a Windows VSS backup. Set to\n \"WindowsVSS\":\"disabled\" to create a regular backup. The\n WindowsVSS option is not enabled by default.
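As a sketch, the option is passed as a key-value map; in boto3 this map would go in the `BackupOptions` parameter of `StartBackupJob` (usage assumed from the description above):

```python
def vss_backup_options(enable_vss: bool) -> dict:
    """Build the backup-options map described above.

    "disabled" is the default behavior, so omitting the option and passing
    an explicit "disabled" are equivalent.
    """
    return {"WindowsVSS": "enabled" if enable_vss else "disabled"}

# e.g. backup.start_backup_job(..., BackupOptions=vss_backup_options(True))
```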
The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
\n
Accepted characters include lowercase letters, numbers, and hyphens.
An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
This exception occurs when a conflict with a previous successful\n operation is detected. This generally occurs when the previous \n operation has not had time to propagate to the host serving the \n current request.
\n
A retry (with appropriate backoff logic) is the recommended \n response to this exception.
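A minimal retry-with-backoff sketch for that conflict case; `ConflictError` here is a local stand-in for the service's conflict exception, not a real SDK class:

```python
import random
import time

class ConflictError(Exception):
    """Stand-in for the service's conflict exception."""

def call_with_backoff(op, retries=5, base=0.5, cap=8.0):
    """Retry `op` with jittered exponential backoff on conflicts."""
    for attempt in range(retries):
        try:
            return op()
        except ConflictError:
            if attempt == retries - 1:
                raise
            # Full jitter: sleep a random fraction of the capped exponential delay.
            time.sleep(min(cap, base * 2 ** attempt) * random.random())
```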
This contains arrays of objects, which may include \n CreationTimes time condition objects, FilePaths \n string objects, and LastModificationTimes time \n condition objects.
These are one or more items in the \n results that match values for the Amazon Resource \n Name (ARN) of recovery points returned in a search \n of Amazon EBS backup metadata.
These are one or more items in the \n results that match values for the Amazon Resource \n Name (ARN) of source resources returned in a search \n of Amazon EBS backup metadata.
The date and time that a search job completed, in Unix format and Coordinated\n Universal Time (UTC). The value of CompletionTime is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that a search job was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationTime is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that an export job was created, in Unix format and Coordinated Universal\n Time (UTC). The value of CreationTime is accurate to milliseconds. For\n example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
The date and time that an export job completed, in Unix format and Coordinated Universal\n Time (UTC). The value of CompletionTime is accurate to milliseconds. For\n example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
This operation returns a paginated list of all backups \n (recovery points) that were included in the search job.
\n
If a search does not display an expected backup in \n the results, you can call this operation to display each \n backup included in the search. Any backups that were \n excluded because they have a FAILED status \n resulting from a permissions issue are displayed, along with a \n status message.
\n
Only recovery points with a backup index that has \n a status of ACTIVE will be included in search results. \n If the index has any other status, its status will be \n displayed along with a status message.
The next item following a partial list of returned backups \n included in a search job.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned backups \n included in a search job.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned \n search job results.
\n
For example, if a request\n is made to return MaxResults number of \n search job results, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of \n search job results.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned \n search jobs.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned backups \n included in a search job.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned backups \n included in a search job.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
The next item following a partial list of returned backups \n included in a search job.
\n
For example, if a request\n is made to return MaxResults number of backups, NextToken\n allows you to return more items in your list starting at the location pointed to by the\n next token.
A string that defines what values will be \n returned.
\n
If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both EQUALS_TO \n and NOT_EQUALS_TO with a value of 4 \n will return all values.
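The warning above can be demonstrated with a tiny evaluator; treating the conditions as OR-ed together is an assumption made purely for illustration:

```python
def matches(value, conditions):
    """Evaluate (operator, operand) conditions against a value.

    Conditions are assumed to be OR-ed together, which is why pairing
    EQUALS_TO and NOT_EQUALS_TO on the same operand matches every value.
    """
    ops = {
        "EQUALS_TO": lambda v, x: v == x,
        "NOT_EQUALS_TO": lambda v, x: v != x,
    }
    return any(ops[op](value, x) for op, x in conditions)

both = [("EQUALS_TO", 4), ("NOT_EQUALS_TO", 4)]
# Every value matches, so the filter selects everything:
print(matches(4, both), matches(99, both))  # True True
```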
These are items in the returned results that match \n recovery point Amazon Resource Names (ARN) input during \n a search of Amazon S3 backup metadata.
Include this parameter to allow multiple identical \n calls for idempotency.
\n
A client token is valid for 8 hours after the first \n request that uses it is completed. After this time,\n any request with the same token is treated as a \n new request.
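A client token is typically just a unique string generated once per logical request and reused on every retry of that request; a UUID works, as sketched here (the call shape in the comments is hypothetical):

```python
import uuid

def make_client_token() -> str:
    """Generate a token once per logical request; reuse the SAME token when
    retrying that request so the service treats the retries as idempotent
    (the docs above give the token an 8-hour validity window)."""
    return str(uuid.uuid4())

token = make_client_token()
# start_search_job(..., ClientToken=token)   # hypothetical call shape
# start_search_job(..., ClientToken=token)   # safe retry: same token
```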
This object can contain BackupResourceTypes, \n BackupResourceArns, BackupResourceCreationTime, \n BackupResourceTags, and SourceResourceArns to \n filter the recovery points returned by the search \n job.
The date and time that a job was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationTime is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
Include this parameter to allow multiple identical \n calls for idempotency.
\n
A client token is valid for 8 hours after the first \n request that uses it is completed. After this time,\n any request with the same token is treated as a \n new request.
Optional tags to include. A tag is a key-value pair you can use to manage, \n filter, and search for your resources. Allowed characters include UTF-8 letters, \n numbers, spaces, and the following characters: + - = . _ : /.
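The tag character rule can be checked client-side before tagging. This regex is one interpretation of the rule above ("letters" read as Unicode letters), not an official validator:

```python
import re

# \w covers Unicode letters, digits, and the underscore; the rest of the
# class is the documented punctuation (+ - = . : /) plus the space.
_TAG_CHARS = re.compile(r"[\w +=.:/-]*")

def valid_tag_string(s: str) -> bool:
    """True if every character in s is allowed in a tag key or value."""
    return _TAG_CHARS.fullmatch(s) is not None
```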
A string that defines what values will be \n returned.
\n
If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both EQUALS_TO \n and NOT_EQUALS_TO with a value of 4 \n will return all values.
Required tags to include. A tag is a key-value pair you can use to manage, \n filter, and search for your resources. Allowed characters include UTF-8 letters, \n numbers, spaces, and the following characters: + - = . _ : /.
A string that defines what values will be \n returned.
\n
If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both EQUALS_TO \n and NOT_EQUALS_TO with a value of 4 \n will return all values.
The input fails to satisfy the constraints specified by a service.
",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 400
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/aws-models/batch.json b/aws-models/batch.json
index b7de037903c3..7319b98ad97b 100644
--- a/aws-models/batch.json
+++ b/aws-models/batch.json
@@ -1810,27 +1810,27 @@
"allocationStrategy": {
"target": "com.amazonaws.batch#CRAllocationStrategy",
"traits": {
- "smithy.api#documentation": "
The allocation strategy to use for the compute resource if not enough instances of the best\n fitting instance type can be allocated. This might be because of availability of the instance\n type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n
\n
BEST_FIT (default)
\n
\n
Batch selects an instance type that best fits the needs of the jobs with a preference\n for the lowest-cost instance type. If additional instances of the selected instance type\n aren't available, Batch waits for the additional instances to be available. If there aren't\n enough instances available or the user is reaching Amazon EC2 service limits,\n additional jobs aren't run until the currently running jobs are completed. This allocation\n strategy keeps costs lower but can limit scaling. If you're using Spot Fleets with\n BEST_FIT, the Spot Fleet IAM Role must be specified. Compute resources that use\n a BEST_FIT allocation strategy don't support infrastructure updates and can't\n update some parameters. For more information, see Updating compute environments in\n the Batch User Guide.
\n
\n
BEST_FIT_PROGRESSIVE
\n
\n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n
\n
SPOT_CAPACITY_OPTIMIZED
\n
\n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
SPOT_PRICE_CAPACITY_OPTIMIZED
\n
\n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
\n
With BEST_FIT_PROGRESSIVE,SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
"
+ "smithy.api#documentation": "
The allocation strategy to use for the compute resource if not enough instances of the best\n fitting instance type can be allocated. This might be because of availability of the instance\n type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n
\n
BEST_FIT (default)
\n
\n
Batch selects an instance type that best fits the needs of the jobs with a preference\n for the lowest-cost instance type. If additional instances of the selected instance type\n aren't available, Batch waits for the additional instances to be available. If there aren't\n enough instances available or the user is reaching Amazon EC2 service limits,\n additional jobs aren't run until the currently running jobs are completed. This allocation\n strategy keeps costs lower but can limit scaling. If you're using Spot Fleets with\n BEST_FIT, the Spot Fleet IAM Role must be specified. Compute resources that use\n a BEST_FIT allocation strategy don't support infrastructure updates and can't\n update some parameters. For more information, see Updating compute environments in\n the Batch User Guide.
\n
\n
BEST_FIT_PROGRESSIVE
\n
\n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n
\n
SPOT_CAPACITY_OPTIMIZED
\n
\n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
SPOT_PRICE_CAPACITY_OPTIMIZED
\n
\n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
\n
With BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED (recommended) strategies using On-Demand or Spot \n Instances, and the BEST_FIT strategy using Spot Instances, Batch might need to \n exceed maxvCpus to meet your capacity requirements. In this event, Batch never \n exceeds maxvCpus by more than a single instance.
The maximum number of\n vCPUs that a\n compute environment can\n support.
\n \n
With BEST_FIT_PROGRESSIVE,SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
\n ",
+ "smithy.api#documentation": "
The maximum number of vCPUs that a compute environment can support.
\n \n
With BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED (recommended) strategies using On-Demand or Spot Instances, \n and the BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
The desired number of\n vCPUS in the\n compute environment. Batch modifies this value between the minimum and maximum values based on\n job queue demand.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "
The desired number of vCPUS in the compute environment. Batch modifies this value between \n the minimum and maximum values based on job queue demand.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, then the Spot price must be less than 20% of the current On-Demand price for\n that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. For most use cases,\n we recommend leaving this field empty.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "
The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, then the Spot price must be less than 20% of the current On-Demand price for\n that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. For most use cases, we recommend leaving this field empty.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
The maximum number of Amazon EC2 vCPUs that an environment can reach.
\n \n
With BEST_FIT_PROGRESSIVE,SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
\n "
+ "smithy.api#documentation": "
The maximum number of Amazon EC2 vCPUs that an environment can reach.
\n \n
With BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED (recommended) strategies using On-Demand or Spot \n Instances, and the BEST_FIT strategy using Spot Instances, Batch might need to \n exceed maxvCpus to meet your capacity requirements. In this event, Batch never \n exceeds maxvCpus by more than a single instance.
The desired number of\n vCPUS in the\n compute environment. Batch modifies this value between the minimum and maximum values based on\n job queue demand.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n
Batch doesn't support changing the desired number of vCPUs of an existing compute\n environment. Don't specify this parameter for compute environments using Amazon EKS clusters.
\n \n \n
When you update the desiredvCpus setting, the value must be between the\n minvCpus and maxvCpus values.
\n
Additionally, the updated desiredvCpus value must be greater than or equal to\n the current desiredvCpus value. For more information, see Troubleshooting\n Batch in the Batch User Guide.
\n "
+ "smithy.api#documentation": "
The desired number of vCPUS in the compute environment. Batch modifies this value between \n the minimum and maximum values based on job queue demand.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n
Batch doesn't support changing the desired number of vCPUs of an existing compute\n environment. Don't specify this parameter for compute environments using Amazon EKS clusters.
\n \n \n
When you update the desiredvCpus setting, the value must be between the\n minvCpus and maxvCpus values.
\n
Additionally, the updated desiredvCpus value must be greater than or equal to\n the current desiredvCpus value. For more information, see Troubleshooting\n Batch in the Batch User Guide.
The allocation strategy to use for the compute resource if there's not enough instances of\n the best fitting instance type that can be allocated. This might be because of availability of\n the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n
When updating a compute environment, changing the allocation strategy requires an\n infrastructure update of the compute environment. For more information, see Updating compute\n environments in the Batch User Guide. BEST_FIT isn't\n supported when updating a compute environment.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n
\n
BEST_FIT_PROGRESSIVE
\n
\n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n
\n
SPOT_CAPACITY_OPTIMIZED
\n
\n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
SPOT_PRICE_CAPACITY_OPTIMIZED
\n
\n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
\n
With BEST_FIT_PROGRESSIVE,SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
"
+ "smithy.api#documentation": "
The allocation strategy to use for the compute resource if there's not enough instances of\n the best fitting instance type that can be allocated. This might be because of availability of\n the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n
When updating a compute environment, changing the allocation strategy requires an\n infrastructure update of the compute environment. For more information, see Updating compute\n environments in the Batch User Guide. BEST_FIT isn't\n supported when updating a compute environment.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n
\n
BEST_FIT_PROGRESSIVE
\n
\n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n
\n
SPOT_CAPACITY_OPTIMIZED
\n
\n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
SPOT_PRICE_CAPACITY_OPTIMIZED
\n
\n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n
\n
\n
With BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED and\n SPOT_PRICE_CAPACITY_OPTIMIZED (recommended) strategies using On-Demand or Spot Instances, \n and the BEST_FIT strategy using Spot Instances, Batch might need to exceed\n maxvCpus to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus by more than a single instance.
The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment.\n Required for Amazon EC2\n instances. You can specify the short name or full Amazon Resource Name (ARN) of an instance\n profile. For example, \n ecsInstanceRole\n or\n arn:aws:iam:::instance-profile/ecsInstanceRole\n .\n For more information, see Amazon ECS instance role in the Batch User Guide.
\n
When updating a compute environment, changing this setting requires an infrastructure update\n of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "
The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment.\n Required for Amazon EC2 instances. You can specify the short name or full Amazon Resource Name (ARN) of an instance\n profile. For example, \n ecsInstanceRole\n or\n arn:aws:iam:::instance-profile/ecsInstanceRole\n .\n For more information, see Amazon ECS instance role in the Batch User Guide.
\n
When updating a compute environment, changing this setting requires an infrastructure update\n of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, the Spot price must be less than 20% of the current On-Demand price for that\n Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. For most use\n cases, we recommend leaving this field empty.
\n
When updating a compute environment, changing the bid percentage requires an infrastructure\n update of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "
The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, the Spot price must be less than 20% of the current On-Demand price for that\n Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. For most use cases, we recommend leaving this field empty.
\n
When updating a compute environment, changing the bid percentage requires an infrastructure\n update of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n
This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
The Amazon Resource Name (ARN) of the\n execution\n role that Batch can assume. For more information,\n see Batch execution IAM\n role in the Batch User Guide.
"
+ "smithy.api#documentation": "
The Amazon Resource Name (ARN) of the execution role that Batch can assume. For more information,\n see Batch execution IAM\n role in the Batch User Guide.
Required.\n The image used to start a container. This string is passed directly to the\n Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are\n specified with\n \n repository-url/image:tag\n .\n It can be 255 characters long. It can contain uppercase and lowercase letters, numbers,\n hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to Image in the\n Create a container section of the Docker Remote API and the IMAGE\n parameter of docker run.
\n \n
Docker image architecture must match the processor architecture of the compute resources\n that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based\n compute resources.
\n \n
\n
\n
Images in Amazon ECR Public repositories use the full registry/repository[:tag] or\n registry/repository[@digest] naming conventions. For example,\n public.ecr.aws/registry_alias/my-web-app:latest\n .
\n
\n
\n
Images in Amazon ECR repositories use the full registry and repository URI (for example,\n 123456789012.dkr.ecr..amazonaws.com/).
\n
\n
\n
Images in official repositories on Docker Hub use a single name (for example,\n ubuntu or mongo).
\n
\n
\n
Images in other repositories on Docker Hub are qualified with an organization name (for\n example, amazon/amazon-ecs-agent).
\n
\n
\n
Images in other online repositories are qualified further by a domain name (for example,\n quay.io/assemblyline/ubuntu).
\n
\n
"
+ "smithy.api#documentation": "
Required. The image used to start a container. This string is passed directly to the\n Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are\n specified with\n \n repository-url/image:tag\n .\n It can be 255 characters long. It can contain uppercase and lowercase letters, numbers,\n hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to Image in the\n Create a container section of the Docker Remote API and the IMAGE\n parameter of docker run.
\n \n
Docker image architecture must match the processor architecture of the compute resources\n that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based\n compute resources.
\n \n
\n
\n
Images in Amazon ECR Public repositories use the full registry/repository[:tag] or\n registry/repository[@digest] naming conventions. For example,\n public.ecr.aws/registry_alias/my-web-app:latest\n .
\n
\n
\n
Images in Amazon ECR repositories use the full registry and repository URI (for example,\n 123456789012.dkr.ecr..amazonaws.com/).
\n
\n
\n
Images in official repositories on Docker Hub use a single name (for example,\n ubuntu or mongo).
\n
\n
\n
Images in other repositories on Docker Hub are qualified with an organization name (for\n example, amazon/amazon-ecs-agent).
\n
\n
\n
Images in other online repositories are qualified further by a domain name (for example,\n quay.io/assemblyline/ubuntu).
Creates a Batch compute environment. You can create MANAGED or\n UNMANAGED compute environments. MANAGED compute environments can\n use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use\n EC2 resources.
\n
In a managed compute environment, Batch manages the capacity and instance types of the\n compute resources within the environment. This is based on the compute resource specification\n that you define or the launch template that you\n specify when you create the compute environment. You can choose to use EC2 On-Demand\n Instances and EC2 Spot Instances, or you can use Fargate and Fargate Spot capacity in\n your managed compute environment. You can optionally set a maximum price so that Spot\n Instances only launch when the Spot Instance price is less than a specified percentage of the\n On-Demand price.
\n \n
Multi-node parallel jobs aren't supported on Spot Instances.
\n \n
In an unmanaged compute environment, you can manage your own EC2 compute resources and\n have flexibility with how you configure your compute resources. For example, you can use\n custom AMIs. However, you must verify that each of your AMIs meets the Amazon ECS container instance\n AMI specification. For more information, see container instance AMIs in the\n Amazon Elastic Container Service Developer Guide. After you create your unmanaged compute environment,\n you can use the DescribeComputeEnvironments operation to find the Amazon ECS\n cluster that's associated with it. Then, launch your container instances into that Amazon ECS\n cluster. For more information, see Launching an Amazon ECS container\n instance in the Amazon Elastic Container Service Developer Guide.
\n \n
To create a compute environment that uses EKS resources, the caller must have\n permissions to call eks:DescribeCluster.
\n \n \n
Batch doesn't automatically upgrade the AMIs in a compute environment after it's\n created. For example, it doesn't update the AMIs in your compute environment when a\n newer version of the Amazon ECS optimized AMI is available. You're responsible for the management\n of the guest operating system. This includes any updates and security patches. You're also\n responsible for any additional application software or utilities that you install on the\n compute resources. There are two ways to use a new AMI for your Batch jobs. The original\n method is to complete these steps:
\n \n
\n
Create a new compute environment with the new AMI.
\n
\n
\n
Add the compute environment to an existing job queue.
\n
\n
\n
Remove the earlier compute environment from your job queue.
\n
\n
\n
Delete the earlier compute environment.
\n
\n \n
In April 2022, Batch added enhanced support for updating compute environments. For\n more information, see Updating compute environments.\n To use the enhanced updating of compute environments to update AMIs, follow these\n rules:
\n
\n
\n
Either don't set the service role (serviceRole) parameter or set it to\n the AWSBatchServiceRole service-linked role.
\n
\n
\n
Set the allocation strategy (allocationStrategy) parameter to\n BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, or\n SPOT_PRICE_CAPACITY_OPTIMIZED.
\n
\n
\n
Set the update to latest image version (updateToLatestImageVersion)\n parameter to\n true.\n The updateToLatestImageVersion parameter is used when you update a compute\n environment. This parameter is ignored when you create a compute\n environment.
\n
\n
\n
Don't specify an AMI ID in imageId, imageIdOverride (in\n \n ec2Configuration\n ), or in the launch template\n (launchTemplate). In that case, Batch selects the latest Amazon ECS\n optimized AMI that's supported by Batch at the time the infrastructure update is\n initiated. Alternatively, you can specify the AMI ID in the imageId or\n imageIdOverride parameters, or the launch template identified by the\n LaunchTemplate properties. Changing any of these properties starts an\n infrastructure update. If the AMI ID is specified in the launch template, it can't be\n replaced by specifying an AMI ID in either the imageId or\n imageIdOverride parameters. It can only be replaced by specifying a\n different launch template, or if the launch template version is set to\n $Default or $Latest, by setting either a new default version\n for the launch template (if $Default) or by adding a new version to the\n launch template (if $Latest).
\n
\n
\n
If these rules are followed, any update that starts an infrastructure update causes the\n AMI ID to be re-selected. If the version setting in the launch template\n (launchTemplate) is set to $Latest or $Default, the\n latest or default version of the launch template is evaluated at the time of the\n infrastructure update, even if the launchTemplate wasn't updated.
\n ",
+ "smithy.api#documentation": "
Creates a Batch compute environment. You can create MANAGED or\n UNMANAGED compute environments. MANAGED compute environments can\n use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use\n EC2 resources.
\n
In a managed compute environment, Batch manages the capacity and instance types of the\n compute resources within the environment. This is based on the compute resource specification\n that you define or the launch template that you\n specify when you create the compute environment. You can choose to use EC2 On-Demand\n Instances and EC2 Spot Instances, or you can use Fargate and Fargate Spot capacity in\n your managed compute environment. You can optionally set a maximum price so that Spot\n Instances only launch when the Spot Instance price is less than a specified percentage of the\n On-Demand price.
\n \n
Multi-node parallel jobs aren't supported on Spot Instances.
\n \n
In an unmanaged compute environment, you can manage your own EC2 compute resources and\n have flexibility with how you configure your compute resources. For example, you can use\n custom AMIs. However, you must verify that each of your AMIs meets the Amazon ECS container instance\n AMI specification. For more information, see container instance AMIs in the\n Amazon Elastic Container Service Developer Guide. After you create your unmanaged compute environment,\n you can use the DescribeComputeEnvironments operation to find the Amazon ECS\n cluster that's associated with it. Then, launch your container instances into that Amazon ECS\n cluster. For more information, see Launching an Amazon ECS container\n instance in the Amazon Elastic Container Service Developer Guide.
\n \n
To create a compute environment that uses EKS resources, the caller must have\n permissions to call eks:DescribeCluster.
\n \n \n
Batch doesn't automatically upgrade the AMIs in a compute environment after it's\n created. For example, it doesn't update the AMIs in your compute environment when a\n newer version of the Amazon ECS optimized AMI is available. You're responsible for the management\n of the guest operating system. This includes any updates and security patches. You're also\n responsible for any additional application software or utilities that you install on the\n compute resources. There are two ways to use a new AMI for your Batch jobs. The original\n method is to complete these steps:
\n \n
\n
Create a new compute environment with the new AMI.
\n
\n
\n
Add the compute environment to an existing job queue.
\n
\n
\n
Remove the earlier compute environment from your job queue.
\n
\n
\n
Delete the earlier compute environment.
\n
\n \n
In April 2022, Batch added enhanced support for updating compute environments. For\n more information, see Updating compute environments.\n To use the enhanced updating of compute environments to update AMIs, follow these\n rules:
\n
\n
\n
Either don't set the service role (serviceRole) parameter or set it to\n the AWSBatchServiceRole service-linked role.
\n
\n
\n
Set the allocation strategy (allocationStrategy) parameter to\n BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, or\n SPOT_PRICE_CAPACITY_OPTIMIZED.
\n
\n
\n
Set the update to latest image version (updateToLatestImageVersion)\n parameter to true. The updateToLatestImageVersion parameter \n is used when you update a compute environment. This parameter is ignored when you create \n a compute environment.
\n
\n
\n
Don't specify an AMI ID in imageId, imageIdOverride (in\n \n ec2Configuration\n ), or in the launch template\n (launchTemplate). In that case, Batch selects the latest Amazon ECS\n optimized AMI that's supported by Batch at the time the infrastructure update is\n initiated. Alternatively, you can specify the AMI ID in the imageId or\n imageIdOverride parameters, or the launch template identified by the\n LaunchTemplate properties. Changing any of these properties starts an\n infrastructure update. If the AMI ID is specified in the launch template, it can't be\n replaced by specifying an AMI ID in either the imageId or\n imageIdOverride parameters. It can only be replaced by specifying a\n different launch template, or if the launch template version is set to\n $Default or $Latest, by setting either a new default version\n for the launch template (if $Default) or by adding a new version to the\n launch template (if $Latest).
\n
\n
\n
If these rules are followed, any update that starts an infrastructure update causes the\n AMI ID to be re-selected. If the version setting in the launch template\n (launchTemplate) is set to $Latest or $Default, the\n latest or default version of the launch template is evaluated at the time of the\n infrastructure update, even if the launchTemplate wasn't updated.
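The three rules above can be checked up front with a hypothetical pre-flight helper (not a Batch API call); the parameter names mirror the request fields serviceRole, allocationStrategy, and updateToLatestImageVersion:

```python
from typing import Optional

# Hypothetical pre-flight check (illustration only, not part of the Batch API)
# for the enhanced AMI-update rules listed above.
def supports_enhanced_ami_update(service_role: Optional[str],
                                 allocation_strategy: str,
                                 update_to_latest_image_version: bool) -> bool:
    # Rule 1: either no service role, or the AWSBatchServiceRole service-linked role.
    role_ok = service_role is None or service_role.endswith("AWSBatchServiceRole")
    # Rule 2: one of the three supported allocation strategies.
    strategy_ok = allocation_strategy in (
        "BEST_FIT_PROGRESSIVE",
        "SPOT_CAPACITY_OPTIMIZED",
        "SPOT_PRICE_CAPACITY_OPTIMIZED",
    )
    # Rule 3: updateToLatestImageVersion must be set to true.
    return role_ok and strategy_ok and update_to_latest_image_version
```

The AMI-ID rule (the fourth bullet) is omitted here because it depends on launch-template contents rather than a single flag.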
The properties for a task definition that describes the container and volume definitions of\n an Amazon ECS task. You can specify which Docker images to use, the required resources, and other\n configurations related to launching the task definition through an Amazon ECS service or task.
Key-value pairs used to identify, sort, and organize Kubernetes resources. Each label can contain up to 63\n characters: uppercase letters, lowercase letters, numbers, hyphens (-), and underscores (_). Labels can be\n added or modified at any time. Each resource can have multiple labels, but each key must be\n unique for a given object.
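The label character-set and length constraints described above can be expressed as a single regular expression; this hypothetical validator is an illustration, not part of the Batch API:

```python
import re

# Hypothetical validator (illustration only) for the label constraints above:
# at most 63 characters drawn from letters, numbers, hyphens (-), and underscores (_).
_LABEL_RE = re.compile(r"^[A-Za-z0-9_-]{1,63}$")

def is_valid_label(value: str) -> bool:
    return bool(_LABEL_RE.match(value))
```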
Key-value pairs used to attach arbitrary, non-identifying metadata to Kubernetes objects. \n Valid annotation keys have two segments: an optional prefix and a name, separated by a \n slash (/).
\n
\n
\n
The prefix is optional and must be 253 characters or less. If specified, the prefix \n must be a DNS subdomain: a series of DNS labels separated by dots (.), and it must \n end with a slash (/).
\n
\n
\n
The name segment is required and must be 63 characters or less. It can include alphanumeric \n characters ([a-z0-9A-Z]), dashes (-), underscores (_), and dots (.), but must begin and end \n with an alphanumeric character.
\n
\n
\n \n
Annotation values must be 255 characters or less.
\n \n
Annotations can be added or modified at any time. Each resource can have multiple annotations.
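The annotation-key rules above (optional DNS-subdomain prefix of 253 characters or less ending in a slash, plus a name segment of 63 characters or less that begins and ends with an alphanumeric character) can be sketched as a hypothetical validator; it is an illustration, not part of the Batch API:

```python
import re

# Hypothetical validator (illustration only) for the annotation-key rules above.
_NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")   # name segment
_DNS_LABEL_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")         # one DNS label

def is_valid_annotation_key(key: str) -> bool:
    # The optional prefix is everything before the last slash.
    prefix, _, name = key.rpartition("/")
    if prefix:
        if len(prefix) > 253:
            return False
        # A DNS subdomain is a dot-separated series of DNS labels.
        if not all(_DNS_LABEL_RE.match(label) for label in prefix.split(".")):
            return False
    return 1 <= len(name) <= 63 and bool(_NAME_RE.match(name))
```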
The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a mechanism for isolating \n groups of resources within a single cluster. Names of resources need to be unique within a namespace, \n but not across namespaces. Batch places Batch Job pods in this namespace. If this field is provided, \n the value can't be empty or null. It must meet the following requirements:
\n
\n
\n
1-63 characters long
\n
\n
\n
Can't be set to default
\n
\n
\n
Can't start with kube\n
\n
\n
\n
Must match the following regular expression:\n ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$\n
\n
\n
\n
\n For more information, see \n Namespaces in the Kubernetes documentation. This namespace can be \n different from the kubernetesNamespace set in the compute environment's \n EksConfiguration, but must have identical role-based access control (RBAC) roles as \n the compute environment's kubernetesNamespace. For multi-node parallel jobs,\n the same value must be provided across all the node ranges.
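The namespace requirements listed above, including the stated regular expression, can be combined into one check; this helper is a hypothetical illustration, not part of the Batch API:

```python
import re

# Hypothetical validator (illustration only) for the namespace requirements above.
_NS_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_batch_namespace(ns: str) -> bool:
    return (1 <= len(ns) <= 63          # 1-63 characters long
            and ns != "default"          # can't be set to default
            and not ns.startswith("kube")  # can't start with kube
            and bool(_NS_RE.match(ns)))  # must match the stated pattern
```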
Describes and uniquely identifies Kubernetes resources. For example, the compute environment that\n a pod runs in or the jobID for a job running in the pod. For more information, see\n Understanding Kubernetes Objects in the Kubernetes documentation.
"
+ "smithy.api#documentation": "
Describes and uniquely identifies Kubernetes resources. For example, the compute environment that\n a pod runs in or the jobID for a job running in the pod. For more information, see\n \n Understanding Kubernetes Objects in the Kubernetes documentation.
The name of the persistentVolumeClaim bound to a persistentVolume. \n For more information, see \n Persistent Volume Claims in the Kubernetes documentation.
An optional boolean value indicating if the mount is read only. Default is false. For more\n information, see \n Read Only Mounts in the Kubernetes documentation.
A persistentVolumeClaim volume is used to mount a PersistentVolume\n into a Pod. PersistentVolumeClaims are a way for users to \"claim\" durable storage without knowing \n the details of the particular cloud environment. See the information about PersistentVolumes\n in the Kubernetes documentation.
Specifies the configuration of a Kubernetes persistentVolumeClaim bound to a \n persistentVolume. For more information, see \n Persistent Volume Claims in the Kubernetes documentation.
Contains a glob pattern to match against the StatusReason returned for a job.\n The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.),\n colons (:), and white spaces (including spaces or tabs).\n It can\n optionally end with an asterisk (*) so that only the start of the string needs to be an exact\n match.
"
+ "smithy.api#documentation": "
Contains a glob pattern to match against the StatusReason returned for a job.\n The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.),\n colons (:), and white spaces (including spaces or tabs). It can optionally end with an asterisk (*) \n so that only the start of the string needs to be an exact match.
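The trailing-asterisk behavior described above (prefix match with a trailing *, exact match otherwise) can be sketched as a hypothetical matcher; it is an illustration, not the service's implementation:

```python
# Hypothetical matcher (illustration only) for the glob behavior described above:
# a trailing asterisk (*) makes the pattern a prefix match; otherwise the
# StatusReason must match exactly.
def status_reason_matches(pattern: str, status_reason: str) -> bool:
    if pattern.endswith("*"):
        return status_reason.startswith(pattern[:-1])
    return status_reason == pattern
```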
The operating system for the compute environment.\n Valid values are:\n LINUX (default), WINDOWS_SERVER_2019_CORE,\n WINDOWS_SERVER_2019_FULL, WINDOWS_SERVER_2022_CORE, and\n WINDOWS_SERVER_2022_FULL.
\n \n
The following parameters can’t be set for Windows containers: linuxParameters,\n privileged, user, ulimits,\n readonlyRootFilesystem,\n and efsVolumeConfiguration.
\n \n \n
The Batch Scheduler checks\n the compute environments\n that are attached to the job queue before registering a task definition with\n Fargate. In this\n scenario, the job queue is where the job is submitted. If the job requires a\n Windows container and the first compute environment is LINUX, the compute\n environment is skipped and the next compute environment is checked until a Windows-based compute\n environment is found.
\n \n \n
Fargate Spot is not supported for\n ARM64 and\n Windows-based containers on Fargate. A job queue will be blocked if a\n Fargate\n ARM64 or\n Windows job is submitted to a job queue with only Fargate Spot compute environments.\n However, you can attach both FARGATE and\n FARGATE_SPOT compute environments to the same job queue.
\n "
+ "smithy.api#documentation": "
The operating system for the compute environment. Valid values are:\n LINUX (default), WINDOWS_SERVER_2019_CORE,\n WINDOWS_SERVER_2019_FULL, WINDOWS_SERVER_2022_CORE, and\n WINDOWS_SERVER_2022_FULL.
\n \n
The following parameters can’t be set for Windows containers: linuxParameters,\n privileged, user, ulimits,\n readonlyRootFilesystem, and efsVolumeConfiguration.
\n \n \n
The Batch Scheduler checks the compute environments that are attached to the job queue before \n registering a task definition with Fargate. In this scenario, the job queue is where the job is \n submitted. If the job requires a Windows container and the first compute environment is LINUX, \n the compute environment is skipped and the next compute environment is checked until a Windows-based \n compute environment is found.
\n \n \n
Fargate Spot is not supported for ARM64 and Windows-based containers on Fargate. \n A job queue will be blocked if a Fargate ARM64 or Windows job is submitted to a job \n queue with only Fargate Spot compute environments. However, you can attach both FARGATE and\n FARGATE_SPOT compute environments to the same job queue.
The vCPU architecture. The default value is X86_64. Valid values are\n X86_64 and ARM64.
\n \n
This parameter must be set to\n X86_64\n for Windows containers.
\n \n \n
Fargate Spot is not supported for ARM64 and Windows-based containers on\n Fargate. A job queue will be blocked if a Fargate ARM64 or Windows job is\n submitted to a job queue with only Fargate Spot compute environments. However, you can attach\n both FARGATE and FARGATE_SPOT compute environments to the same job\n queue.
\n "
+ "smithy.api#documentation": "
The vCPU architecture. The default value is X86_64. Valid values are\n X86_64 and ARM64.
\n \n
This parameter must be set to X86_64 for Windows containers.
\n \n \n
Fargate Spot is not supported for ARM64 and Windows-based containers on\n Fargate. A job queue will be blocked if a Fargate ARM64 or Windows job is\n submitted to a job queue with only Fargate Spot compute environments. However, you can attach\n both FARGATE and FARGATE_SPOT compute environments to the same job\n queue.
The scheduling priority for the job. This only affects jobs in job queues with a fair\n share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower\n scheduling priority.\n This\n overrides any scheduling priority in the job definition and works only within a single share\n identifier.
\n
The minimum supported value is 0 and the maximum supported value is 9999.
"
+ "smithy.api#documentation": "
The scheduling priority for the job. This only affects jobs in job queues with a fair\n share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower\n scheduling priority. This overrides any scheduling priority in the job definition and works only \n within a single share identifier.
\n
The minimum supported value is 0 and the maximum supported value is 9999.
Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
\n
For more information, see Origin Response Timeout in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
+ "smithy.api#documentation": "
Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
\n
For more information, see Origin Keep-alive Timeout in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
+ "smithy.api#documentation": "
Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
The object that you want CloudFront to request from your origin (for example,\n\t\t\t\tindex.html) when a viewer requests the root URL for your distribution\n\t\t\t\t(https://www.example.com) instead of an object in your distribution\n\t\t\t\t(https://www.example.com/product-description.html). Specifying a\n\t\t\tdefault root object avoids exposing the contents of your distribution.
\n
Specify only the object name, for example, index.html. Don't add a\n\t\t\t\t/ before the object name.
\n
If you don't want to specify a default root object when you create a distribution,\n\t\t\tinclude an empty DefaultRootObject element.
\n
To delete the default root object from an existing distribution, update the\n\t\t\tdistribution configuration and include an empty DefaultRootObject\n\t\t\telement.
\n
To replace the default root object, update the distribution configuration and specify\n\t\t\tthe new object.
When a viewer requests the root URL for your distribution, the default root object is the\n\t\t\tobject that you want CloudFront to request from your origin. For example, if your root URL is\n\t\t\t\thttps://www.example.com, you can configure CloudFront to return the\n\t\t\t\tindex.html file as the default root object. You can specify a default\n\t\t\troot object so that viewers see a specific file or object, instead of another object in\n\t\t\tyour distribution (for example,\n\t\t\t\thttps://www.example.com/product-description.html). A default root\n\t\t\tobject avoids exposing the contents of your distribution.
\n
You can specify the object name or a path to the object name (for example,\n\t\t\t\tindex.html or exampleFolderName/index.html). Your string\n\t\t\tcan't begin with a forward slash (/). Only specify the object name or the\n\t\t\tpath to the object.
\n
If you don't want to specify a default root object when you create a distribution,\n\t\t\tinclude an empty DefaultRootObject element.
\n
To delete the default root object from an existing distribution, update the\n\t\t\tdistribution configuration and include an empty DefaultRootObject\n\t\t\telement.
\n
To replace the default root object, update the distribution configuration and specify\n\t\t\tthe new object.
\n
For more information about the default root object, see Specify a default root object in the Amazon CloudFront Developer Guide.
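The DefaultRootObject rule above (an object name or path that must not begin with a forward slash, with an empty element meaning no default root object) can be sketched as a hypothetical check, not a CloudFront API call:

```python
# Hypothetical check (illustration only) for the DefaultRootObject rule above.
def is_valid_default_root_object(obj: str) -> bool:
    if obj == "":
        return True  # an empty element omits or deletes the default root object
    return not obj.startswith("/")  # the string can't begin with a forward slash
```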
Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
A category defines what kind of action can be taken in the stage, and constrains\n the provider type for the action. Valid categories are limited to one of the following\n values.
\n
\n
\n
Source
\n
\n
\n
Build
\n
\n
\n
Test
\n
\n
\n
Deploy
\n
\n
\n
Invoke
\n
\n
\n
Approval
\n
\n
",
+ "smithy.api#documentation": "
A category defines what kind of action can be taken in the stage, and constrains\n the provider type for the action. Valid categories are limited to one of the following\n values.
The condition for the stage. A condition is made up of the rules and the result for\n the condition.
"
+ "smithy.api#documentation": "
The condition for the stage. A condition is made up of the rules and the result for\n the condition. For more information about conditions, see Stage conditions.\n For more information about rules, see the CodePipeline rule\n reference.
The shell commands to run with your commands rule in CodePipeline. All commands\n are supported except multi-line formats. Although CodeBuild logs and permissions\n are used, you do not need to create any resources in CodeBuild.
\n \n
Using compute time for this action will incur separate charges in CodeBuild.
Represents information about the rule to be created for an associated condition. An\n example would be creating a new rule for an entry condition, such as a rule that checks\n for a test result before allowing the run to enter the deployment stage.
"
+ "smithy.api#documentation": "
Represents information about the rule to be created for an associated condition. An\n example would be creating a new rule for an entry condition, such as a rule that checks\n for a test result before allowing the run to enter the deployment stage. For more\n information about conditions, see Stage conditions.\n For more information about rules, see the CodePipeline rule\n reference.
If a service is using the rolling update (ECS) deployment type, the\n\t\t\t\tmaximumPercent parameter represents an upper limit on the number of\n\t\t\tyour service's tasks that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment, as a percentage of the\n\t\t\t\tdesiredCount (rounded down to the nearest integer). This parameter\n\t\t\tenables you to define the deployment batch size. For example, if your service is using\n\t\t\tthe REPLICA service scheduler and has a desiredCount of four\n\t\t\ttasks and a maximumPercent value of 200%, the scheduler may start four new\n\t\t\ttasks before stopping the four older tasks (provided that the cluster resources required\n\t\t\tto do this are available). The default maximumPercent value for a service\n\t\t\tusing the REPLICA service scheduler is 200%.
\n
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting\n\t\t\treplacement tasks first and then stopping the unhealthy tasks, as long as cluster\n\t\t\tresources for starting replacement tasks are available. For more information about how\n\t\t\tthe scheduler replaces unhealthy tasks, see Amazon ECS\n\t\t\tservices.
\n
If a service is using either the blue/green (CODE_DEPLOY) or\n\t\t\t\tEXTERNAL deployment types, and tasks in the service use the\n\t\t\tEC2 launch type, the maximum percent\n\t\t\tvalue is set to the default value. The maximum percent\n\t\t\tvalue is used to define the upper limit on the number of the tasks in the service that\n\t\t\tremain in the RUNNING state while the container instances are in the\n\t\t\t\tDRAINING state.
\n \n
You can't specify a custom maximumPercent value for a service that\n\t\t\t\tuses either the blue/green (CODE_DEPLOY) or EXTERNAL\n\t\t\t\tdeployment types and has tasks that use the EC2 launch type.
\n \n
If the tasks in the service use the Fargate launch type, the maximum\n\t\t\tpercent value is not used, although it is returned when describing your service.
"
+ "smithy.api#documentation": "
If a service is using the rolling update (ECS) deployment type, the\n\t\t\t\tmaximumPercent parameter represents an upper limit on the number of\n\t\t\tyour service's tasks that are allowed in the RUNNING or\n\t\t\t\tPENDING state during a deployment, as a percentage of the\n\t\t\t\tdesiredCount (rounded down to the nearest integer). This parameter\n\t\t\tenables you to define the deployment batch size. For example, if your service is using\n\t\t\tthe REPLICA service scheduler and has a desiredCount of four\n\t\t\ttasks and a maximumPercent value of 200%, the scheduler may start four new\n\t\t\ttasks before stopping the four older tasks (provided that the cluster resources required\n\t\t\tto do this are available). The default maximumPercent value for a service\n\t\t\tusing the REPLICA service scheduler is 200%.
\n
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting\n\t\t\treplacement tasks first and then stopping the unhealthy tasks, as long as cluster\n\t\t\tresources for starting replacement tasks are available. For more information about how\n\t\t\tthe scheduler replaces unhealthy tasks, see Amazon ECS\n\t\t\tservices.
\n
If a service is using either the blue/green (CODE_DEPLOY) or\n\t\t\t\tEXTERNAL deployment types, and tasks in the service use the\n\t\t\tEC2 launch type, the maximum percent\n\t\t\tvalue is set to the default value. The maximum percent\n\t\t\tvalue is used to define the upper limit on the number of the tasks in the service that\n\t\t\tremain in the RUNNING state while the container instances are in the\n\t\t\t\tDRAINING state.
\n \n
You can't specify a custom maximumPercent value for a service that\n\t\t\t\tuses either the blue/green (CODE_DEPLOY) or EXTERNAL\n\t\t\t\tdeployment types and has tasks that use the EC2 launch type.
\n \n
If the service uses either the blue/green (CODE_DEPLOY) or EXTERNAL\n\t\t\tdeployment types, and the tasks in the service use the Fargate launch type, the maximum\n\t\t\tpercent value is not used. The value is still returned when describing your service.
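The maximumPercent arithmetic described above (an upper limit of desiredCount multiplied by maximumPercent/100, rounded down to the nearest integer) can be shown as a short worked example:

```python
import math

# Worked example of the maximumPercent math described above: the upper limit
# on RUNNING/PENDING tasks during a rolling deployment is
# desiredCount * maximumPercent / 100, rounded down to the nearest integer.
def max_tasks_during_deployment(desired_count: int, maximum_percent: int = 200) -> int:
    return math.floor(desired_count * maximum_percent / 100)
```

With a desiredCount of four tasks and the default maximumPercent of 200%, up to eight tasks may be running or pending during the deployment.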
The cluster that hosts the service. This can either be the cluster name or ARN.\n\t\t\tStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon\n\t\t\tElastic Inference (EI), and will help current customers migrate their workloads to\n\t\t\toptions that offer better price and performanceIf you don't specify a cluster,\n\t\t\t\tdefault is used.
"
+ "smithy.api#documentation": "
The cluster that hosts the service. This can either be the cluster name or ARN.\n\t\t\tStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon\n\t\t\tElastic Inference (EI), and will help current customers migrate their workloads to\n\t\t\toptions that offer better price and performance. If you don't specify a cluster,\n\t\t\t\tdefault is used.
The configuration options to send to the log driver.
\n
The options you can specify depend on the log driver. Some of the options you can\n\t\t\tspecify when you use the awslogs log driver to route logs to Amazon CloudWatch\n\t\t\tinclude the following:
\n
\n
awslogs-create-group
\n
\n
Required: No
\n
Specify whether you want the log group to be created automatically. If\n\t\t\t\t\t\tthis option isn't specified, it defaults to false.
\n \n
Your IAM policy must include the logs:CreateLogGroup\n\t\t\t\t\t\t\tpermission before you attempt to use\n\t\t\t\t\t\t\tawslogs-create-group.
\n \n
\n
awslogs-region
\n
\n
Required: Yes
\n
Specify the Amazon Web Services Region that the awslogs log driver is to\n\t\t\t\t\t\tsend your Docker logs to. You can choose to send all of your logs from\n\t\t\t\t\t\tclusters in different Regions to a single region in CloudWatch Logs. This is so that\n\t\t\t\t\t\tthey're all visible in one location. Otherwise, you can separate them by\n\t\t\t\t\t\tRegion for more granularity. Make sure that the specified log group exists\n\t\t\t\t\t\tin the Region that you specify with this option.
\n
\n
awslogs-group
\n
\n
Required: Yes
\n
Make sure to specify a log group that the awslogs log driver\n\t\t\t\t\t\tsends its log streams to.
\n
\n
awslogs-stream-prefix
\n
\n
Required: Optional for the EC2 launch type,\n\t\t\t\t\t\t\trequired for the Fargate launch type.
\n
Use the awslogs-stream-prefix option to associate a log\n\t\t\t\t\t\tstream with the specified prefix, the container name, and the ID of the\n\t\t\t\t\t\tAmazon ECS task that the container belongs to. If you specify a prefix with this\n\t\t\t\t\t\toption, then the log stream takes the format\n\t\t\t\t\t\t\tprefix-name/container-name/ecs-task-id.
\n
If you don't specify a prefix with this option, then the log stream is\n\t\t\t\t\t\tnamed after the container ID that's assigned by the Docker daemon on the\n\t\t\t\t\t\tcontainer instance. Because it's difficult to trace logs back to the\n\t\t\t\t\t\tcontainer that sent them with just the Docker container ID (which is only\n\t\t\t\t\t\tavailable on the container instance), we recommend that you specify a prefix\n\t\t\t\t\t\twith this option.
\n
For Amazon ECS services, you can use the service name as the prefix. This lets you\n\t\t\t\t\t\ttrace log streams to the service that the container belongs to, the\n\t\t\t\t\t\tname of the container that sent them, and the ID of the task that the\n\t\t\t\t\t\tcontainer belongs to.
\n
You must specify a stream prefix for your logs to appear in\n\t\t\t\t\t\tthe Log pane when using the Amazon ECS console.
\n
\n
awslogs-datetime-format
\n
\n
Required: No
\n
This option defines a multiline start pattern in Python\n\t\t\t\t\t\t\tstrftime format. A log message consists of a line that\n\t\t\t\t\t\tmatches the pattern and any following lines that don’t match the pattern.\n\t\t\t\t\t\tThe matched line is the delimiter between log messages.
\n
One example of a use case for using this format is for parsing output such\n\t\t\t\t\t\tas a stack dump, which might otherwise be logged in multiple entries. The\n\t\t\t\t\t\tcorrect pattern allows it to be captured in a single entry.
You cannot configure both the awslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern options.
\n \n
Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n
\n
awslogs-multiline-pattern
\n
\n
Required: No
\n
This option defines a multiline start pattern that uses a regular\n\t\t\t\t\t\texpression. A log message consists of a line that matches the pattern and\n\t\t\t\t\t\tany following lines that don’t match the pattern. The matched line is the\n\t\t\t\t\t\tdelimiter between log messages.
This option is ignored if awslogs-datetime-format is also\n\t\t\t\t\t\tconfigured.
\n
You cannot configure both the awslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern options.
\n \n
Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n
\n
mode
\n
\n
Required: No
\n
Valid values: non-blocking | blocking\n
\n
This option defines the delivery mode of log messages from the container\n\t\t\t\t\t\tto CloudWatch Logs. The delivery mode you choose affects application availability when\n\t\t\t\t\t\tthe flow of logs from container to CloudWatch is interrupted.
\n
If you use the blocking mode and the flow of logs to CloudWatch is\n\t\t\t\t\t\tinterrupted, calls from container code to write to the stdout\n\t\t\t\t\t\tand stderr streams will block. The logging thread of the\n\t\t\t\t\t\tapplication will block as a result. This may cause the application to become\n\t\t\t\t\t\tunresponsive and lead to container healthcheck failure.
\n
If you use the non-blocking mode, the container's logs are\n\t\t\t\t\t\tinstead stored in an in-memory intermediate buffer configured with the\n\t\t\t\t\t\t\tmax-buffer-size option. This prevents the application from\n\t\t\t\t\t\tbecoming unresponsive when logs cannot be sent to CloudWatch. We recommend using\n\t\t\t\t\t\tthis mode if you want to ensure service availability and are okay with some\n\t\t\t\t\t\tlog loss. For more information, see Preventing log loss with non-blocking mode in the awslogs\n\t\t\t\t\t\t\tcontainer log driver.
\n
\n
max-buffer-size
\n
\n
Required: No
\n
Default value: 1m\n
\n
When non-blocking mode is used, the\n\t\t\t\t\t\t\tmax-buffer-size log option controls the size of the buffer\n\t\t\t\t\t\tthat's used for intermediate message storage. Make sure to specify an\n\t\t\t\t\t\tadequate buffer size based on your application. When the buffer fills up,\n\t\t\t\t\t\tfurther logs cannot be stored. Logs that cannot be stored are lost.
\n
\n
\n
To route logs using the splunk log router, you need to specify a\n\t\t\t\tsplunk-token and a splunk-url.
\n
When you use the awsfirelens log router to route logs to an Amazon Web Services Service\n\t\t\tor Amazon Web Services Partner Network destination for log storage and analytics, you can set the\n\t\t\t\tlog-driver-buffer-limit option to limit the number of events that are\n\t\t\tbuffered in memory, before being sent to the log router container. It can help to\n\t\t\tresolve potential log loss issue because high throughput might result in memory running\n\t\t\tout for the buffer inside of Docker.
\n
Other options you can specify when using awsfirelens to route logs depend\n\t\t\ton the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region\n\t\t\twith region and a name for the log stream with\n\t\t\tdelivery_stream.
\n
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with\n\t\t\t\tregion and a data stream name with stream.
\n
When you export logs to Amazon OpenSearch Service, you can specify options like Name,\n\t\t\t\tHost (OpenSearch Service endpoint without protocol), Port,\n\t\t\t\tIndex, Type, Aws_auth,\n\t\t\t\tAws_region, Suppress_Type_Name, and\n\t\t\ttls.
\n
When you export logs to Amazon S3, you can specify the bucket using the bucket\n\t\t\toption. You can also specify region, total_file_size,\n\t\t\t\tupload_timeout, and use_put_object as options.
\n
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
"
+ "smithy.api#documentation": "
The configuration options to send to the log driver.
\n
The options you can specify depend on the log driver. Some of the options you can\n\t\t\tspecify when you use the awslogs log driver to route logs to Amazon CloudWatch\n\t\t\tinclude the following:
\n
\n
awslogs-create-group
\n
\n
Required: No
\n
Specify whether you want the log group to be created automatically. If\n\t\t\t\t\t\tthis option isn't specified, it defaults to false.
\n \n
Your IAM policy must include the logs:CreateLogGroup\n\t\t\t\t\t\t\tpermission before you attempt to use\n\t\t\t\t\t\t\tawslogs-create-group.
\n \n
\n
awslogs-region
\n
\n
Required: Yes
\n
Specify the Amazon Web Services Region that the awslogs log driver is to\n\t\t\t\t\t\tsend your Docker logs to. You can choose to send all of your logs from\n\t\t\t\t\t\tclusters in different Regions to a single region in CloudWatch Logs. This is so that\n\t\t\t\t\t\tthey're all visible in one location. Otherwise, you can separate them by\n\t\t\t\t\t\tRegion for more granularity. Make sure that the specified log group exists\n\t\t\t\t\t\tin the Region that you specify with this option.
\n
\n
awslogs-group
\n
\n
Required: Yes
\n
Make sure to specify a log group that the awslogs log driver\n\t\t\t\t\t\tsends its log streams to.
\n
\n
awslogs-stream-prefix
\n
\n
Required: Optional for the EC2 launch type,\n\t\t\t\t\t\t\trequired for the Fargate launch type.
\n
Use the awslogs-stream-prefix option to associate a log\n\t\t\t\t\t\tstream with the specified prefix, the container name, and the ID of the\n\t\t\t\t\t\tAmazon ECS task that the container belongs to. If you specify a prefix with this\n\t\t\t\t\t\toption, then the log stream takes the format\n\t\t\t\t\t\t\tprefix-name/container-name/ecs-task-id.
\n
If you don't specify a prefix with this option, then the log stream is\n\t\t\t\t\t\tnamed after the container ID that's assigned by the Docker daemon on the\n\t\t\t\t\t\tcontainer instance. Because it's difficult to trace logs back to the\n\t\t\t\t\t\tcontainer that sent them with just the Docker container ID (which is only\n\t\t\t\t\t\tavailable on the container instance), we recommend that you specify a prefix\n\t\t\t\t\t\twith this option.
\n
For Amazon ECS services, you can use the service name as the prefix. Doing so,\n\t\t\t\t\t\tyou can trace log streams to the service that the container belongs to, the\n\t\t\t\t\t\tname of the container that sent them, and the ID of the task that the\n\t\t\t\t\t\tcontainer belongs to.
\n
You must specify a stream-prefix for your logs so that they appear in\n\t\t\t\t\t\tthe Log pane when using the Amazon ECS console.
\n
\n
awslogs-datetime-format
\n
\n
Required: No
\n
This option defines a multiline start pattern in Python\n\t\t\t\t\t\t\tstrftime format. A log message consists of a line that\n\t\t\t\t\t\tmatches the pattern and any following lines that don’t match the pattern.\n\t\t\t\t\t\tThe matched line is the delimiter between log messages.
\n
One example of a use case for using this format is for parsing output such\n\t\t\t\t\t\tas a stack dump, which might otherwise be logged in multiple entries. The\n\t\t\t\t\t\tcorrect pattern allows it to be captured in a single entry.
You cannot configure both the awslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern options.
\n \n
Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n
\n
awslogs-multiline-pattern
\n
\n
Required: No
\n
This option defines a multiline start pattern that uses a regular\n\t\t\t\t\t\texpression. A log message consists of a line that matches the pattern and\n\t\t\t\t\t\tany following lines that don’t match the pattern. The matched line is the\n\t\t\t\t\t\tdelimiter between log messages.
This option is ignored if awslogs-datetime-format is also\n\t\t\t\t\t\tconfigured.
\n
You cannot configure both the awslogs-datetime-format and\n\t\t\t\t\t\t\tawslogs-multiline-pattern options.
\n \n
Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n
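The grouping rule that both awslogs-datetime-format and awslogs-multiline-pattern describe can be sketched in a few lines of Rust. This is an illustration of the semantics only, not the driver's code; a hard-coded `YYYY-MM-DD` prefix check stands in for the configured strftime or regex pattern, and the function names are made up:

```rust
// Stand-in for the configured start pattern: does the line begin with a
// `YYYY-MM-DD` date? (The real driver matches strftime/regex patterns.)
fn starts_new_message(line: &str) -> bool {
    let b = line.as_bytes();
    b.len() >= 10
        && b[0..4].iter().all(|c| c.is_ascii_digit())
        && b[4] == b'-'
        && b[5..7].iter().all(|c| c.is_ascii_digit())
        && b[7] == b'-'
        && b[8..10].iter().all(|c| c.is_ascii_digit())
}

/// A line that matches the pattern starts a new message; lines that don't
/// match are appended to the current message (e.g. a stack trace).
fn group_messages(lines: &[&str]) -> Vec<String> {
    let mut messages: Vec<String> = Vec::new();
    for line in lines {
        if starts_new_message(line) || messages.is_empty() {
            messages.push((*line).to_string());
        } else {
            let last = messages.last_mut().unwrap();
            last.push('\n');
            last.push_str(line);
        }
    }
    messages
}

fn main() {
    let lines = [
        "2018-01-26 00:11:30 Exception in thread \"main\"",
        "    at com.example.App.run(App.java:12)",
        "    at com.example.App.main(App.java:4)",
        "2018-01-26 00:11:31 Request handled",
    ];
    // The three-line stack dump collapses into a single log message.
    assert_eq!(group_messages(&lines).len(), 2);
}
```

This is why a stack dump ends up in one CloudWatch Logs entry: only the dated first line matches, so the trace lines ride along with it.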
\n
mode
\n
\n
Required: No
\n
Valid values: non-blocking | blocking\n
\n
This option defines the delivery mode of log messages from the container\n\t\t\t\t\t\tto CloudWatch Logs. The delivery mode you choose affects application availability when\n\t\t\t\t\t\tthe flow of logs from container to CloudWatch is interrupted.
\n
If you use the blocking mode and the flow of logs to CloudWatch is\n\t\t\t\t\t\tinterrupted, calls from container code to write to the stdout\n\t\t\t\t\t\tand stderr streams will block. The logging thread of the\n\t\t\t\t\t\tapplication will block as a result. This may cause the application to become\n\t\t\t\t\t\tunresponsive and lead to container health check failures.
\n
If you use the non-blocking mode, the container's logs are\n\t\t\t\t\t\tinstead stored in an in-memory intermediate buffer configured with the\n\t\t\t\t\t\t\tmax-buffer-size option. This prevents the application from\n\t\t\t\t\t\tbecoming unresponsive when logs cannot be sent to CloudWatch. We recommend using\n\t\t\t\t\t\tthis mode if you want to ensure service availability and are okay with some\n\t\t\t\t\t\tlog loss. For more information, see Preventing log loss with non-blocking mode in the awslogs\n\t\t\t\t\t\t\tcontainer log driver.
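The difference between the two modes can be sketched with a bounded channel standing in for the in-memory buffer. This illustrates the semantics only, not the actual driver; the function name is made up:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

/// Attempt to buffer a log line without blocking; returns false when the
/// buffer is full and the line is dropped (non-blocking mode's trade-off).
fn log_non_blocking(tx: &SyncSender<String>, msg: &str) -> bool {
    match tx.try_send(msg.to_string()) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) | Err(TrySendError::Disconnected(_)) => false,
    }
}

fn main() {
    // Capacity of 2 plays the role of max-buffer-size; keeping `_rx` alive
    // but never reading from it models an interrupted flow to CloudWatch.
    let (tx, _rx) = sync_channel::<String>(2);
    assert!(log_non_blocking(&tx, "log 1"));
    assert!(log_non_blocking(&tx, "log 2"));
    // Buffer full: non-blocking mode drops the line instead of stalling.
    assert!(!log_non_blocking(&tx, "log 3"));
    // Blocking mode corresponds to `tx.send(...)`, which would park the
    // calling thread here until the consumer drains the buffer.
}
```

The design choice is the same as `try_send` versus `send`: fail fast and lose a line, or wait and risk an unresponsive application.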
\n
\n
max-buffer-size
\n
\n
Required: No
\n
Default value: 1m\n
\n
When non-blocking mode is used, the\n\t\t\t\t\t\t\tmax-buffer-size log option controls the size of the buffer\n\t\t\t\t\t\tthat's used for intermediate message storage. Make sure to specify an\n\t\t\t\t\t\tadequate buffer size based on your application. When the buffer fills up,\n\t\t\t\t\t\tfurther logs cannot be stored. Logs that cannot be stored are lost.
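Values like the 1m default are Docker-style size strings. A minimal sketch of reading one, assuming 1024-based k/m/g suffixes; the helper name is illustrative:

```rust
// Parse a Docker-style size string such as "1m" into a byte count.
// Assumes 1024-based k/m/g suffixes; a bare number is taken as bytes.
fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim().to_ascii_lowercase();
    let (num, mult) = match s.chars().last()? {
        'k' => (&s[..s.len() - 1], 1024u64),
        'm' => (&s[..s.len() - 1], 1024 * 1024),
        'g' => (&s[..s.len() - 1], 1024 * 1024 * 1024),
        _ => (&s[..], 1),
    };
    num.parse::<u64>().ok().map(|n| n * mult)
}

fn main() {
    // The default buffer size, 1m, is one mebibyte.
    assert_eq!(parse_size("1m"), Some(1_048_576));
    assert_eq!(parse_size("512k"), Some(524_288));
}
```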
\n
\n
\n
To route logs using the splunk log router, you need to specify a\n\t\t\t\tsplunk-token and a splunk-url.
\n
When you use the awsfirelens log router to route logs to an Amazon Web Services Service\n\t\t\tor Amazon Web Services Partner Network destination for log storage and analytics, you can set the\n\t\t\t\tlog-driver-buffer-limit option to limit the number of events that are\n\t\t\tbuffered in memory before being sent to the log router container. This can help\n\t\t\tresolve potential log loss issues, because high throughput might cause the buffer\n\t\t\tinside Docker to run out of memory.
\n
Other options you can specify when using awsfirelens to route logs depend\n\t\t\ton the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region\n\t\t\twith region and the delivery stream name with\n\t\t\tdelivery_stream.
\n
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with\n\t\t\t\tregion and a data stream name with stream.
\n
When you export logs to Amazon OpenSearch Service, you can specify options like Name,\n\t\t\t\tHost (OpenSearch Service endpoint without protocol), Port,\n\t\t\t\tIndex, Type, Aws_auth,\n\t\t\t\tAws_region, Suppress_Type_Name, and\n\t\t\ttls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
\n
When you export logs to Amazon S3, you can specify the bucket using the bucket\n\t\t\toption. You can also specify region, total_file_size,\n\t\t\t\tupload_timeout, and use_put_object as options.
\n
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'\n
Enables fault injection when you register your task definition and allows for fault injection requests \n\t\t\tto be accepted from the task's containers. The default value is false.
Configures the maintenance window that you want for the runtime environment. The maintenance window must have the format ddd:hh24:mi-ddd:hh24:mi and must be less than 24 hours. The following two examples are valid maintenance windows: sun:23:45-mon:00:15 or sat:01:00-sat:03:00.
\n
If you do not provide a value, a random system-generated value will be assigned.
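A sketch of validating the ddd:hh24:mi-ddd:hh24:mi shape described above. This is illustrative only: it checks the format, not the additional constraint that the window be less than 24 hours, and the function names are made up:

```rust
// Validate one `ddd:hh24:mi` half, e.g. "sun:23:45". Assumes days are
// three-letter lowercase abbreviations and times are zero-padded.
fn valid_half(s: &str) -> bool {
    const DAYS: [&str; 7] = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"];
    let parts: Vec<&str> = s.split(':').collect();
    if parts.len() != 3 {
        return false;
    }
    let day_ok = DAYS.contains(&parts[0]);
    let hour_ok = parts[1].len() == 2 && parts[1].parse::<u8>().map_or(false, |h| h < 24);
    let min_ok = parts[2].len() == 2 && parts[2].parse::<u8>().map_or(false, |m| m < 60);
    day_ok && hour_ok && min_ok
}

// Validate the full `ddd:hh24:mi-ddd:hh24:mi` window string.
fn valid_window(s: &str) -> bool {
    match s.split_once('-') {
        Some((start, end)) => valid_half(start) && valid_half(end),
        None => false,
    }
}

fn main() {
    // Both examples from the documentation pass.
    assert!(valid_window("sun:23:45-mon:00:15"));
    assert!(valid_window("sat:01:00-sat:03:00"));
    // An out-of-range hour fails.
    assert!(!valid_window("sat:25:00-sat:03:00"));
}
```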
Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are \n nextrun to use the screenshots from the next run after this update is made, lastrun to use the screenshots from the most recent run \n before this update was made, or the value of Id in the \n CanaryRun from any past run of this canary.
",
+ "smithy.api#documentation": "
Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are \n nextrun to use the screenshots from the next run after this update is made, lastrun to use the screenshots from the most recent run \n before this update was made, or the value of Id in the \n CanaryRun from a run of this canary in the past 31 days. If you specify the Id of a canary run older than 31 days, \n the operation returns a 400 validation exception error.
A string in the form of a detailed message explaining the status of a backup index associated with the recovery point.
/// - On failure, responds with [`SdkError`](crate::operation::describe_recovery_point::DescribeRecoveryPointError)
pub fn describe_recovery_point(&self) -> crate::operation::describe_recovery_point::builders::DescribeRecoveryPointFluentBuilder {
crate::operation::describe_recovery_point::builders::DescribeRecoveryPointFluentBuilder::new(self.handle.clone())
diff --git a/sdk/backup/src/client/get_recovery_point_index_details.rs b/sdk/backup/src/client/get_recovery_point_index_details.rs
new file mode 100644
index 000000000000..96b0b3832975
--- /dev/null
+++ b/sdk/backup/src/client/get_recovery_point_index_details.rs
@@ -0,0 +1,24 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+impl super::Client {
+ /// Constructs a fluent builder for the [`GetRecoveryPointIndexDetails`](crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsFluentBuilder) operation.
+ ///
+ /// - The fluent builder is configurable:
+ /// - [`backup_vault_name(impl Into<String>)`](crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsFluentBuilder::backup_vault_name) / [`set_backup_vault_name(Option<String>)`](crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsFluentBuilder::set_backup_vault_name): required: **true**
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ /// Accepted characters include lowercase letters, numbers, and hyphens.
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// - On success, responds with [`GetRecoveryPointIndexDetailsOutput`](crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput) with field(s):
+ /// - [`recovery_point_arn(Option<String>)`](crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput::recovery_point_arn):
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// The date and time that a backup index was created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
+ /// The date and time that a backup index was deleted, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
+ /// The date and time that a backup index finished creation, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
+ /// Count of items within the backup index associated with the recovery point.
+ /// - On failure, responds with [`SdkError`](crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError)
+ pub fn get_recovery_point_index_details(
+ &self,
+ ) -> crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsFluentBuilder {
+ crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsFluentBuilder::new(self.handle.clone())
+ }
+}
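The CreationDate-style values in the docs above are fractional Unix seconds accurate to milliseconds (1516925490.087 is January 26, 2018 12:11:30.087 AM UTC). A sketch of splitting such a value into whole seconds and milliseconds; the helper name is illustrative, not part of the SDK:

```rust
use std::time::{Duration, UNIX_EPOCH};

// Split a fractional Unix timestamp into (whole seconds, milliseconds).
fn unix_millis(ts: f64) -> (u64, u32) {
    let millis = (ts * 1000.0).round() as u64;
    (millis / 1000, (millis % 1000) as u32)
}

fn main() {
    // The example value from the documentation.
    let (secs, ms) = unix_millis(1516925490.087);
    assert_eq!((secs, ms), (1516925490, 87));

    // The same instant as a SystemTime, for use with std APIs.
    let when = UNIX_EPOCH + Duration::from_millis(secs * 1000 + ms as u64);
    assert_eq!(when.duration_since(UNIX_EPOCH).unwrap().subsec_millis(), 87);
}
```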
diff --git a/sdk/backup/src/client/list_indexed_recovery_points.rs b/sdk/backup/src/client/list_indexed_recovery_points.rs
new file mode 100644
index 000000000000..d7d596605d37
--- /dev/null
+++ b/sdk/backup/src/client/list_indexed_recovery_points.rs
@@ -0,0 +1,21 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+impl super::Client {
+ /// Constructs a fluent builder for the [`ListIndexedRecoveryPoints`](crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsFluentBuilder) operation.
+ /// This operation supports pagination; See [`into_paginator()`](crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsFluentBuilder::into_paginator).
+ ///
+ /// - The fluent builder is configurable:
+ /// - [`next_token(impl Into<String>)`](crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsFluentBuilder::next_token) / [`set_next_token(Option<String>)`](crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsFluentBuilder::set_next_token): required: **false**
+ /// The next item following a partial list of returned recovery points.
+ /// For example, if a request is made to return MaxResults number of indexed recovery points, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
+ /// Include this parameter to filter the returned list by the indicated statuses.
+ /// Accepted values: PENDING | ACTIVE | FAILED | DELETING
+ /// A recovery point with an index that has the status of ACTIVE can be included in a search.
+ /// - On success, responds with [`ListIndexedRecoveryPointsOutput`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput) with field(s):
+ /// - [`indexed_recovery_points(Option<Vec::<crate::types::IndexedRecoveryPoint>>)`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput::indexed_recovery_points):
+ /// This is a list of recovery points that have an associated index, belonging to the specified account.
+ /// The next item following a partial list of returned recovery points.
+ /// For example, if a request is made to return MaxResults number of indexed recovery points, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. Backup will transition and expire backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “retention” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
Resource types that can transition to cold storage are listed in the Feature availability by resource table. Backup ignores this expression for other resource types.
This parameter has a maximum value of 100 years (36,500 days).
The backup option for a selected resource. This option is only available for Windows Volume Shadow Copy Service (VSS) backup jobs.
Valid values: Set to "WindowsVSS":"enabled" to enable the WindowsVSS backup option and create a Windows VSS backup. Set to "WindowsVSS":"disabled" to create a regular backup. The WindowsVSS option is not enabled by default.
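The backup option is a string-to-string map; a sketch of assembling it, with the key and values taken from the text above and the function name illustrative:

```rust
use std::collections::HashMap;

// Build the BackupOptions map for a Windows VSS backup. The key
// "WindowsVSS" and the "enabled"/"disabled" values come from the docs.
fn vss_options(enabled: bool) -> HashMap<String, String> {
    let mut options = HashMap::new();
    options.insert(
        "WindowsVSS".to_string(),
        if enabled { "enabled" } else { "disabled" }.to_string(),
    );
    options
}

fn main() {
    let opts = vss_options(true);
    assert_eq!(opts.get("WindowsVSS").map(String::as_str), Some("enabled"));
}
```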
Include this parameter to enable index creation if your backup job has a resource type that supports backup indexes.
Resource types that support backup indexes include:
EBS for Amazon Elastic Block Store
S3 for Amazon Simple Storage Service (Amazon S3)
Index can have 1 of 2 possible values, either ENABLED or DISABLED.
To create a backup index for an eligible ACTIVE recovery point that does not yet have a backup index, set value to ENABLED.
To delete a backup index, set value to DISABLED.
/// - On success, responds with [`StartBackupJobOutput`](crate::operation::start_backup_job::StartBackupJobOutput) with field(s):
/// - [`backup_job_id(Option<String>)`](crate::operation::start_backup_job::StartBackupJobOutput::backup_job_id):
Uniquely identifies a request to Backup to back up a resource.
Note: This field is only returned for Amazon EFS and Advanced DynamoDB resources.
An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
diff --git a/sdk/backup/src/client/update_recovery_point_index_settings.rs b/sdk/backup/src/client/update_recovery_point_index_settings.rs
new file mode 100644
index 000000000000..84eb26384ea0
--- /dev/null
+++ b/sdk/backup/src/client/update_recovery_point_index_settings.rs
@@ -0,0 +1,21 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+impl super::Client {
+ /// Constructs a fluent builder for the [`UpdateRecoveryPointIndexSettings`](crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsFluentBuilder) operation.
+ ///
+ /// - The fluent builder is configurable:
+ /// - [`backup_vault_name(impl Into<String>)`](crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsFluentBuilder::backup_vault_name) / [`set_backup_vault_name(Option<String>)`](crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsFluentBuilder::set_backup_vault_name): required: **true**
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ /// Accepted characters include lowercase letters, numbers, and hyphens.
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// Index can have 1 of 2 possible values, either ENABLED or DISABLED.
+ /// To create a backup index for an eligible ACTIVE recovery point that does not yet have a backup index, set value to ENABLED.
+ /// To delete a backup index, set value to DISABLED.
+ /// - On success, responds with [`UpdateRecoveryPointIndexSettingsOutput`](crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput) with field(s):
+ /// - [`backup_vault_name(Option<String>)`](crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput::backup_vault_name):
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// The request failed due to a temporary failure of the server.
+ ServiceUnavailableException(crate::types::error::ServiceUnavailableException),
+ /// An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
+ #[deprecated(note = "Matching `Unhandled` directly is not forwards compatible. Instead, match using a \
+ variable wildcard pattern and check `.code()`:
+ \
+ `err if err.code() == Some(\"SpecificExceptionCode\") => { /* handle the error */ }`
+ \
+ See [`ProvideErrorMetadata`](#impl-ProvideErrorMetadata-for-GetRecoveryPointIndexDetailsError) for what information is available for the error.")]
+ Unhandled(crate::error::sealed_unhandled::Unhandled),
+}
+impl GetRecoveryPointIndexDetailsError {
+ /// Creates the `GetRecoveryPointIndexDetailsError::Unhandled` variant from any error type.
+ pub fn unhandled(
+ err: impl ::std::convert::Into<::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.into(),
+ meta: ::std::default::Default::default(),
+ })
+ }
+
+ /// Creates the `GetRecoveryPointIndexDetailsError::Unhandled` variant from an [`ErrorMetadata`](::aws_smithy_types::error::ErrorMetadata).
+ pub fn generic(err: ::aws_smithy_types::error::ErrorMetadata) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.clone().into(),
+ meta: err,
+ })
+ }
+ ///
+ /// Returns error metadata, which includes the error code, message,
+ /// request ID, and potentially additional information.
+ ///
+ pub fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::MissingParameterValueException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ResourceNotFoundException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ServiceUnavailableException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::Unhandled(e) => &e.meta,
+ }
+ }
+ /// Returns `true` if the error kind is `GetRecoveryPointIndexDetailsError::InvalidParameterValueException`.
+ pub fn is_invalid_parameter_value_exception(&self) -> bool {
+ matches!(self, Self::InvalidParameterValueException(_))
+ }
+ /// Returns `true` if the error kind is `GetRecoveryPointIndexDetailsError::MissingParameterValueException`.
+ pub fn is_missing_parameter_value_exception(&self) -> bool {
+ matches!(self, Self::MissingParameterValueException(_))
+ }
+ /// Returns `true` if the error kind is `GetRecoveryPointIndexDetailsError::ResourceNotFoundException`.
+ pub fn is_resource_not_found_exception(&self) -> bool {
+ matches!(self, Self::ResourceNotFoundException(_))
+ }
+ /// Returns `true` if the error kind is `GetRecoveryPointIndexDetailsError::ServiceUnavailableException`.
+ pub fn is_service_unavailable_exception(&self) -> bool {
+ matches!(self, Self::ServiceUnavailableException(_))
+ }
+}
+impl ::std::error::Error for GetRecoveryPointIndexDetailsError {
+ fn source(&self) -> ::std::option::Option<&(dyn ::std::error::Error + 'static)> {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::std::option::Option::Some(_inner),
+ Self::MissingParameterValueException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ResourceNotFoundException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ServiceUnavailableException(_inner) => ::std::option::Option::Some(_inner),
+ Self::Unhandled(_inner) => ::std::option::Option::Some(&*_inner.source),
+ }
+ }
+}
+impl ::std::fmt::Display for GetRecoveryPointIndexDetailsError {
+ fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
+ match self {
+ Self::InvalidParameterValueException(_inner) => _inner.fmt(f),
+ Self::MissingParameterValueException(_inner) => _inner.fmt(f),
+ Self::ResourceNotFoundException(_inner) => _inner.fmt(f),
+ Self::ServiceUnavailableException(_inner) => _inner.fmt(f),
+ Self::Unhandled(_inner) => {
+ if let ::std::option::Option::Some(code) = ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self) {
+ write!(f, "unhandled error ({code})")
+ } else {
+ f.write_str("unhandled error")
+ }
+ }
+ }
+ }
+}
+impl ::aws_smithy_types::retry::ProvideErrorKind for GetRecoveryPointIndexDetailsError {
+ fn code(&self) -> ::std::option::Option<&str> {
+ ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self)
+ }
+ fn retryable_error_kind(&self) -> ::std::option::Option<::aws_smithy_types::retry::ErrorKind> {
+ ::std::option::Option::None
+ }
+}
+impl ::aws_smithy_types::error::metadata::ProvideErrorMetadata for GetRecoveryPointIndexDetailsError {
+ fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::MissingParameterValueException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ResourceNotFoundException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ServiceUnavailableException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::Unhandled(_inner) => &_inner.meta,
+ }
+ }
+}
+impl ::aws_smithy_runtime_api::client::result::CreateUnhandledError for GetRecoveryPointIndexDetailsError {
+ fn create_unhandled_error(
+ source: ::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>,
+ meta: ::std::option::Option<::aws_smithy_types::error::ErrorMetadata>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source,
+ meta: meta.unwrap_or_default(),
+ })
+ }
+}
+impl ::aws_types::request_id::RequestId for crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError {
+ fn request_id(&self) -> Option<&str> {
+ self.meta().request_id()
+ }
+}
+
+pub use crate::operation::get_recovery_point_index_details::_get_recovery_point_index_details_output::GetRecoveryPointIndexDetailsOutput;
+
+pub use crate::operation::get_recovery_point_index_details::_get_recovery_point_index_details_input::GetRecoveryPointIndexDetailsInput;
+
+mod _get_recovery_point_index_details_input;
+
+mod _get_recovery_point_index_details_output;
+
+/// Builders
+pub mod builders;
diff --git a/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_input.rs b/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_input.rs
new file mode 100644
index 000000000000..7123648e9216
--- /dev/null
+++ b/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_input.rs
@@ -0,0 +1,83 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+#[allow(missing_docs)] // documentation missing in model
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
+pub struct GetRecoveryPointIndexDetailsInput {
+ ///
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ ///
+ /// Accepted characters include lowercase letters, numbers, and hyphens.
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ ///
+ /// Accepted characters include lowercase letters, numbers, and hyphens.
+ /// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+ /// The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.
+ ///
+ /// Accepted characters include lowercase letters, numbers, and hyphens.
+ /// This field is required.
+ pub fn backup_vault_name(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+ self.backup_vault_name = ::std::option::Option::Some(input.into());
+ self
+ }
+    /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+    /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+    pub fn set_backup_vault_name(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.backup_vault_name = input;
+        self
+    }
+    /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+    /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+    pub fn get_backup_vault_name(&self) -> &::std::option::Option<::std::string::String> {
+        &self.backup_vault_name
+    }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+ /// This field is required.
+ pub fn recovery_point_arn(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+ self.recovery_point_arn = ::std::option::Option::Some(input.into());
+ self
+ }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub fn set_recovery_point_arn(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.recovery_point_arn = input;
+        self
+    }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+ pub fn get_recovery_point_arn(&self) -> &::std::option::Option<::std::string::String> {
+ &self.recovery_point_arn
+ }
+ /// Consumes the builder and constructs a [`GetRecoveryPointIndexDetailsInput`](crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsInput).
+ pub fn build(
+ self,
+ ) -> ::std::result::Result<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsInput,
+ ::aws_smithy_types::error::operation::BuildError,
+ > {
+ ::std::result::Result::Ok(crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsInput {
+ backup_vault_name: self.backup_vault_name,
+ recovery_point_arn: self.recovery_point_arn,
+ })
+ }
+}
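As an aside for readers of this diff: the generated input type above follows smithy-rs's standard consuming-builder pattern. Below is a simplified, dependency-free sketch of that pattern; names such as `InputBuilder` and the `String` error type are illustrative stand-ins, not the SDK's actual types (the real builder uses `BuildError` and defers most required-field validation to request serialization).

```rust
// Illustrative sketch of the smithy-rs consuming-builder pattern.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct InputBuilder {
    backup_vault_name: Option<String>,
    recovery_point_arn: Option<String>,
}

#[derive(Debug, PartialEq)]
pub struct Input {
    pub backup_vault_name: Option<String>,
    pub recovery_point_arn: Option<String>,
}

impl InputBuilder {
    // Setters take `impl Into<String>` so both &str and String work.
    pub fn backup_vault_name(mut self, input: impl Into<String>) -> Self {
        self.backup_vault_name = Some(input.into());
        self
    }
    pub fn recovery_point_arn(mut self, input: impl Into<String>) -> Self {
        self.recovery_point_arn = Some(input.into());
        self
    }
    // Like the generated build(), this consumes the builder and returns a
    // Result; here the error type is a plain String for simplicity.
    pub fn build(self) -> Result<Input, String> {
        Ok(Input {
            backup_vault_name: self.backup_vault_name,
            recovery_point_arn: self.recovery_point_arn,
        })
    }
}
```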
diff --git a/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_output.rs b/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_output.rs
new file mode 100644
index 000000000000..ddeb2123f7d7
--- /dev/null
+++ b/sdk/backup/src/operation/get_recovery_point_index_details/_get_recovery_point_index_details_output.rs
@@ -0,0 +1,257 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+#[allow(missing_docs)] // documentation missing in model
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
+pub struct GetRecoveryPointIndexDetailsOutput {
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub recovery_point_arn: ::std::option::Option<::std::string::String>,
+    /// <p>The date and time that a backup index was created, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub index_creation_date: ::std::option::Option<::aws_smithy_types::DateTime>,
+    /// <p>The date and time that a backup index was deleted, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub index_deletion_date: ::std::option::Option<::aws_smithy_types::DateTime>,
+    /// <p>The date and time that a backup index finished creation, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub index_completion_date: ::std::option::Option<::aws_smithy_types::DateTime>,
+    _request_id: Option<String>,
+}
+impl GetRecoveryPointIndexDetailsOutput {
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub fn recovery_point_arn(&self) -> ::std::option::Option<&str> {
+        self.recovery_point_arn.as_deref()
+    }
+    /// <p>The date and time that a backup index was created, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub fn index_creation_date(&self) -> ::std::option::Option<&::aws_smithy_types::DateTime> {
+        self.index_creation_date.as_ref()
+    }
+    /// <p>The date and time that a backup index was deleted, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub fn index_deletion_date(&self) -> ::std::option::Option<&::aws_smithy_types::DateTime> {
+        self.index_deletion_date.as_ref()
+    }
+    /// <p>The date and time that a backup index finished creation, in Unix format and Coordinated Universal Time (UTC). The value of <code>CreationDate</code> is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.</p>
+    pub fn index_completion_date(&self) -> ::std::option::Option<&::aws_smithy_types::DateTime> {
+        self.index_completion_date.as_ref()
+    }
+}
diff --git a/sdk/backup/src/operation/get_recovery_point_index_details/builders.rs b/sdk/backup/src/operation/get_recovery_point_index_details/builders.rs
new file mode 100644
--- /dev/null
+++ b/sdk/backup/src/operation/get_recovery_point_index_details/builders.rs
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+/// Fluent builder constructing a request to `GetRecoveryPointIndexDetails`.
+///
+/// <p>This operation returns the metadata and details specific to the backup index associated with the specified recovery point.</p>
+#[derive(::std::clone::Clone, ::std::fmt::Debug)]
+pub struct GetRecoveryPointIndexDetailsFluentBuilder {
+    handle: ::std::sync::Arc<crate::client::Handle>,
+ inner: crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsInputBuilder,
+    config_override: ::std::option::Option<crate::config::Builder>,
+}
+impl
+ crate::client::customize::internal::CustomizableSend<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput,
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError,
+ > for GetRecoveryPointIndexDetailsFluentBuilder
+{
+ fn send(
+ self,
+ config_override: crate::config::Builder,
+ ) -> crate::client::customize::internal::BoxFuture<
+ crate::client::customize::internal::SendResult<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput,
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError,
+ >,
+ > {
+ ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
+ }
+}
+impl GetRecoveryPointIndexDetailsFluentBuilder {
+ /// Creates a new `GetRecoveryPointIndexDetailsFluentBuilder`.
+    pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
+ Self {
+ handle,
+ inner: ::std::default::Default::default(),
+ config_override: ::std::option::Option::None,
+ }
+ }
+ /// Access the GetRecoveryPointIndexDetails as a reference.
+ pub fn as_input(&self) -> &crate::operation::get_recovery_point_index_details::builders::GetRecoveryPointIndexDetailsInputBuilder {
+ &self.inner
+ }
+ /// Sends the request and returns the response.
+ ///
+ /// If an error occurs, an `SdkError` will be returned with additional details that
+ /// can be matched against.
+ ///
+ /// By default, any retryable failures will be retried twice. Retry behavior
+ /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
+ /// set when configuring the client.
+ pub async fn send(
+ self,
+ ) -> ::std::result::Result<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let input = self
+ .inner
+ .build()
+ .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
+ let runtime_plugins = crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetails::operation_runtime_plugins(
+ self.handle.runtime_plugins.clone(),
+ &self.handle.conf,
+ self.config_override,
+ );
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetails::orchestrate(&runtime_plugins, input).await
+ }
+
+ /// Consumes this builder, creating a customizable operation that can be modified before being sent.
+ pub fn customize(
+ self,
+ ) -> crate::client::customize::CustomizableOperation<
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsOutput,
+ crate::operation::get_recovery_point_index_details::GetRecoveryPointIndexDetailsError,
+ Self,
+ > {
+ crate::client::customize::CustomizableOperation::new(self)
+ }
+    pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into<crate::config::Builder>) -> Self {
+ self.set_config_override(::std::option::Option::Some(config_override.into()));
+ self
+ }
+
+    pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option<crate::config::Builder>) -> &mut Self {
+ self.config_override = config_override;
+ self
+ }
+    /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+    /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+    pub fn backup_vault_name(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+        self.inner = self.inner.backup_vault_name(input.into());
+        self
+    }
+    /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+    /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+    pub fn set_backup_vault_name(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.inner = self.inner.set_backup_vault_name(input);
+        self
+    }
+    /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+    /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+    pub fn get_backup_vault_name(&self) -> &::std::option::Option<::std::string::String> {
+        self.inner.get_backup_vault_name()
+    }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub fn recovery_point_arn(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+        self.inner = self.inner.recovery_point_arn(input.into());
+        self
+    }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub fn set_recovery_point_arn(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.inner = self.inner.set_recovery_point_arn(input);
+        self
+    }
+    /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+    pub fn get_recovery_point_arn(&self) -> &::std::option::Option<::std::string::String> {
+        self.inner.get_recovery_point_arn()
+    }
+}
diff --git a/sdk/backup/src/operation/list_indexed_recovery_points.rs b/sdk/backup/src/operation/list_indexed_recovery_points.rs
new file mode 100644
--- /dev/null
+++ b/sdk/backup/src/operation/list_indexed_recovery_points.rs
+/// Error type for the `ListIndexedRecoveryPoints` operation.
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::fmt::Debug)]
+pub enum ListIndexedRecoveryPointsError {
+    /// <p>Indicates that something is wrong with a parameter's value. For example, the value is out of range.</p>
+    InvalidParameterValueException(crate::types::error::InvalidParameterValueException),
+    /// <p>A resource that is required for the action doesn't exist.</p>
+    ResourceNotFoundException(crate::types::error::ResourceNotFoundException),
+    /// <p>The request failed due to a temporary failure of the server.</p>
+ ServiceUnavailableException(crate::types::error::ServiceUnavailableException),
+ /// An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
+ #[deprecated(note = "Matching `Unhandled` directly is not forwards compatible. Instead, match using a \
+ variable wildcard pattern and check `.code()`:
+ \
+ `err if err.code() == Some(\"SpecificExceptionCode\") => { /* handle the error */ }`
+ \
+ See [`ProvideErrorMetadata`](#impl-ProvideErrorMetadata-for-ListIndexedRecoveryPointsError) for what information is available for the error.")]
+ Unhandled(crate::error::sealed_unhandled::Unhandled),
+}
+impl ListIndexedRecoveryPointsError {
+ /// Creates the `ListIndexedRecoveryPointsError::Unhandled` variant from any error type.
+ pub fn unhandled(
+        err: impl ::std::convert::Into<::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.into(),
+ meta: ::std::default::Default::default(),
+ })
+ }
+
+ /// Creates the `ListIndexedRecoveryPointsError::Unhandled` variant from an [`ErrorMetadata`](::aws_smithy_types::error::ErrorMetadata).
+ pub fn generic(err: ::aws_smithy_types::error::ErrorMetadata) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.clone().into(),
+ meta: err,
+ })
+ }
+ ///
+ /// Returns error metadata, which includes the error code, message,
+ /// request ID, and potentially additional information.
+ ///
+ pub fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ResourceNotFoundException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ServiceUnavailableException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::Unhandled(e) => &e.meta,
+ }
+ }
+ /// Returns `true` if the error kind is `ListIndexedRecoveryPointsError::InvalidParameterValueException`.
+ pub fn is_invalid_parameter_value_exception(&self) -> bool {
+ matches!(self, Self::InvalidParameterValueException(_))
+ }
+ /// Returns `true` if the error kind is `ListIndexedRecoveryPointsError::ResourceNotFoundException`.
+ pub fn is_resource_not_found_exception(&self) -> bool {
+ matches!(self, Self::ResourceNotFoundException(_))
+ }
+ /// Returns `true` if the error kind is `ListIndexedRecoveryPointsError::ServiceUnavailableException`.
+ pub fn is_service_unavailable_exception(&self) -> bool {
+ matches!(self, Self::ServiceUnavailableException(_))
+ }
+}
+impl ::std::error::Error for ListIndexedRecoveryPointsError {
+ fn source(&self) -> ::std::option::Option<&(dyn ::std::error::Error + 'static)> {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ResourceNotFoundException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ServiceUnavailableException(_inner) => ::std::option::Option::Some(_inner),
+ Self::Unhandled(_inner) => ::std::option::Option::Some(&*_inner.source),
+ }
+ }
+}
+impl ::std::fmt::Display for ListIndexedRecoveryPointsError {
+ fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
+ match self {
+ Self::InvalidParameterValueException(_inner) => _inner.fmt(f),
+ Self::ResourceNotFoundException(_inner) => _inner.fmt(f),
+ Self::ServiceUnavailableException(_inner) => _inner.fmt(f),
+ Self::Unhandled(_inner) => {
+ if let ::std::option::Option::Some(code) = ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self) {
+ write!(f, "unhandled error ({code})")
+ } else {
+ f.write_str("unhandled error")
+ }
+ }
+ }
+ }
+}
+impl ::aws_smithy_types::retry::ProvideErrorKind for ListIndexedRecoveryPointsError {
+ fn code(&self) -> ::std::option::Option<&str> {
+ ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self)
+ }
+ fn retryable_error_kind(&self) -> ::std::option::Option<::aws_smithy_types::retry::ErrorKind> {
+ ::std::option::Option::None
+ }
+}
+impl ::aws_smithy_types::error::metadata::ProvideErrorMetadata for ListIndexedRecoveryPointsError {
+ fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ResourceNotFoundException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ServiceUnavailableException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::Unhandled(_inner) => &_inner.meta,
+ }
+ }
+}
+impl ::aws_smithy_runtime_api::client::result::CreateUnhandledError for ListIndexedRecoveryPointsError {
+ fn create_unhandled_error(
+        source: ::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>,
+ meta: ::std::option::Option<::aws_smithy_types::error::ErrorMetadata>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source,
+ meta: meta.unwrap_or_default(),
+ })
+ }
+}
+impl ::aws_types::request_id::RequestId for crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError {
+ fn request_id(&self) -> Option<&str> {
+ self.meta().request_id()
+ }
+}
+
+pub use crate::operation::list_indexed_recovery_points::_list_indexed_recovery_points_output::ListIndexedRecoveryPointsOutput;
+
+pub use crate::operation::list_indexed_recovery_points::_list_indexed_recovery_points_input::ListIndexedRecoveryPointsInput;
+
+mod _list_indexed_recovery_points_input;
+
+mod _list_indexed_recovery_points_output;
+
+/// Builders
+pub mod builders;
+
+/// Paginator for this operation
+pub mod paginator;
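As an aside: the deprecation note on the `Unhandled` variant above recommends matching unknown errors with a wildcard arm and checking `.code()` rather than matching `Unhandled` by name. The following is a self-contained sketch of that pattern; `DemoError` and `classify` are illustrative stand-ins, not SDK types.

```rust
// Illustrative sketch of forward-compatible error matching.
#[derive(Debug)]
enum DemoError {
    ResourceNotFoundException,
    // Stands in for the sealed `Unhandled` variant; carries an error code.
    Unhandled { code: Option<String> },
}

impl DemoError {
    // Mirrors ProvideErrorMetadata::code(): the service-reported error code.
    fn code(&self) -> Option<&str> {
        match self {
            DemoError::ResourceNotFoundException => Some("ResourceNotFoundException"),
            DemoError::Unhandled { code } => code.as_deref(),
        }
    }
}

// Known variants are matched directly; everything else falls through to a
// guard that inspects the code, so new variants added later still work.
fn classify(err: &DemoError) -> &'static str {
    match err {
        DemoError::ResourceNotFoundException => "not found",
        e if e.code() == Some("ThrottlingException") => "throttled",
        _ => "other",
    }
}
```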
diff --git a/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_input.rs b/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_input.rs
new file mode 100644
index 000000000000..9fc7e0df55aa
--- /dev/null
+++ b/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_input.rs
@@ -0,0 +1,236 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+#[allow(missing_docs)] // documentation missing in model
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
+pub struct ListIndexedRecoveryPointsInput {
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub next_token: ::std::option::Option<::std::string::String>,
+}
+impl ListIndexedRecoveryPointsInput {
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn next_token(&self) -> ::std::option::Option<&str> {
+        self.next_token.as_deref()
+    }
+}
diff --git a/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_output.rs b/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_output.rs
new file mode 100644
--- /dev/null
+++ b/sdk/backup/src/operation/list_indexed_recovery_points/_list_indexed_recovery_points_output.rs
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+#[allow(missing_docs)] // documentation missing in model
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
+pub struct ListIndexedRecoveryPointsOutput {
+    /// <p>This is a list of recovery points that have an associated index, belonging to the specified account.</p>
+    pub indexed_recovery_points: ::std::option::Option<::std::vec::Vec<crate::types::IndexedRecoveryPoint>>,
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub next_token: ::std::option::Option<::std::string::String>,
+    _request_id: Option<String>,
+}
+impl ListIndexedRecoveryPointsOutput {
+    /// <p>This is a list of recovery points that have an associated index, belonging to the specified account.</p>
+ ///
+ /// If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use `.indexed_recovery_points.is_none()`.
+ pub fn indexed_recovery_points(&self) -> &[crate::types::IndexedRecoveryPoint] {
+ self.indexed_recovery_points.as_deref().unwrap_or_default()
+ }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+ pub fn next_token(&self) -> ::std::option::Option<&str> {
+ self.next_token.as_deref()
+ }
+}
+impl ::aws_types::request_id::RequestId for ListIndexedRecoveryPointsOutput {
+ fn request_id(&self) -> Option<&str> {
+ self._request_id.as_deref()
+ }
+}
+impl ListIndexedRecoveryPointsOutput {
+ /// Creates a new builder-style object to manufacture [`ListIndexedRecoveryPointsOutput`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput).
+ pub fn builder() -> crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsOutputBuilder {
+ crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsOutputBuilder::default()
+ }
+}
+
+/// A builder for [`ListIndexedRecoveryPointsOutput`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput).
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::default::Default, ::std::fmt::Debug)]
+#[non_exhaustive]
+pub struct ListIndexedRecoveryPointsOutputBuilder {
+    pub(crate) indexed_recovery_points: ::std::option::Option<::std::vec::Vec<crate::types::IndexedRecoveryPoint>>,
+ pub(crate) next_token: ::std::option::Option<::std::string::String>,
+    _request_id: Option<String>,
+}
+impl ListIndexedRecoveryPointsOutputBuilder {
+ /// Appends an item to `indexed_recovery_points`.
+ ///
+ /// To override the contents of this collection use [`set_indexed_recovery_points`](Self::set_indexed_recovery_points).
+ ///
+    /// <p>This is a list of recovery points that have an associated index, belonging to the specified account.</p>
+    pub fn indexed_recovery_points(mut self, input: crate::types::IndexedRecoveryPoint) -> Self {
+        let mut v = self.indexed_recovery_points.unwrap_or_default();
+        v.push(input);
+        self.indexed_recovery_points = ::std::option::Option::Some(v);
+        self
+    }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn next_token(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+        self.next_token = ::std::option::Option::Some(input.into());
+        self
+    }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn set_next_token(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.next_token = input;
+        self
+    }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+ pub fn get_next_token(&self) -> &::std::option::Option<::std::string::String> {
+ &self.next_token
+ }
+    pub(crate) fn _request_id(mut self, request_id: impl Into<String>) -> Self {
+ self._request_id = Some(request_id.into());
+ self
+ }
+
+    pub(crate) fn _set_request_id(&mut self, request_id: Option<String>) -> &mut Self {
+ self._request_id = request_id;
+ self
+ }
+ /// Consumes the builder and constructs a [`ListIndexedRecoveryPointsOutput`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput).
+ pub fn build(self) -> crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput {
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput {
+ indexed_recovery_points: self.indexed_recovery_points,
+ next_token: self.next_token,
+ _request_id: self._request_id,
+ }
+ }
+}
diff --git a/sdk/backup/src/operation/list_indexed_recovery_points/builders.rs b/sdk/backup/src/operation/list_indexed_recovery_points/builders.rs
new file mode 100644
index 000000000000..c92fb6ac0a46
--- /dev/null
+++ b/sdk/backup/src/operation/list_indexed_recovery_points/builders.rs
@@ -0,0 +1,246 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+pub use crate::operation::list_indexed_recovery_points::_list_indexed_recovery_points_output::ListIndexedRecoveryPointsOutputBuilder;
+
+pub use crate::operation::list_indexed_recovery_points::_list_indexed_recovery_points_input::ListIndexedRecoveryPointsInputBuilder;
+
+impl crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsInputBuilder {
+ /// Sends a request with this input using the given client.
+ pub async fn send_with(
+ self,
+ client: &crate::Client,
+ ) -> ::std::result::Result<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let mut fluent_builder = client.list_indexed_recovery_points();
+ fluent_builder.inner = self;
+ fluent_builder.send().await
+ }
+}
+/// Fluent builder constructing a request to `ListIndexedRecoveryPoints`.
+///
+/// <p>This operation returns a list of recovery points that have an associated index, belonging to the specified account.</p>
+/// <p>Optional parameters you can include are: MaxResults; NextToken; SourceResourceArns; CreatedBefore; CreatedAfter; and ResourceType.</p>
+#[derive(::std::clone::Clone, ::std::fmt::Debug)]
+pub struct ListIndexedRecoveryPointsFluentBuilder {
+    handle: ::std::sync::Arc<crate::client::Handle>,
+ inner: crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsInputBuilder,
+    config_override: ::std::option::Option<crate::config::Builder>,
+}
+impl
+ crate::client::customize::internal::CustomizableSend<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ > for ListIndexedRecoveryPointsFluentBuilder
+{
+ fn send(
+ self,
+ config_override: crate::config::Builder,
+ ) -> crate::client::customize::internal::BoxFuture<
+ crate::client::customize::internal::SendResult<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ >,
+ > {
+ ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
+ }
+}
+impl ListIndexedRecoveryPointsFluentBuilder {
+ /// Creates a new `ListIndexedRecoveryPointsFluentBuilder`.
+    pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
+ Self {
+ handle,
+ inner: ::std::default::Default::default(),
+ config_override: ::std::option::Option::None,
+ }
+ }
+ /// Access the ListIndexedRecoveryPoints as a reference.
+ pub fn as_input(&self) -> &crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsInputBuilder {
+ &self.inner
+ }
+ /// Sends the request and returns the response.
+ ///
+ /// If an error occurs, an `SdkError` will be returned with additional details that
+ /// can be matched against.
+ ///
+ /// By default, any retryable failures will be retried twice. Retry behavior
+ /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
+ /// set when configuring the client.
+ pub async fn send(
+ self,
+ ) -> ::std::result::Result<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let input = self
+ .inner
+ .build()
+ .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
+ let runtime_plugins = crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPoints::operation_runtime_plugins(
+ self.handle.runtime_plugins.clone(),
+ &self.handle.conf,
+ self.config_override,
+ );
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPoints::orchestrate(&runtime_plugins, input).await
+ }
+
+ /// Consumes this builder, creating a customizable operation that can be modified before being sent.
+ pub fn customize(
+ self,
+ ) -> crate::client::customize::CustomizableOperation<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ Self,
+ > {
+ crate::client::customize::CustomizableOperation::new(self)
+ }
+    pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into<crate::config::Builder>) -> Self {
+ self.set_config_override(::std::option::Option::Some(config_override.into()));
+ self
+ }
+
+    pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option<crate::config::Builder>) -> &mut Self {
+ self.config_override = config_override;
+ self
+ }
+ /// Create a paginator for this request
+ ///
+ /// Paginators are used by calling [`send().await`](crate::operation::list_indexed_recovery_points::paginator::ListIndexedRecoveryPointsPaginator::send) which returns a [`PaginationStream`](aws_smithy_async::future::pagination_stream::PaginationStream).
+ pub fn into_paginator(self) -> crate::operation::list_indexed_recovery_points::paginator::ListIndexedRecoveryPointsPaginator {
+ crate::operation::list_indexed_recovery_points::paginator::ListIndexedRecoveryPointsPaginator::new(self.handle, self.inner)
+ }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn next_token(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+        self.inner = self.inner.next_token(input.into());
+        self
+    }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn set_next_token(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
+        self.inner = self.inner.set_next_token(input);
+        self
+    }
+    /// <p>The next item following a partial list of returned recovery points.</p>
+    /// <p>For example, if a request is made to return <code>MaxResults</code> number of indexed recovery points, <code>NextToken</code> allows you to return more items in your list starting at the location pointed to by the next token.</p>
+    pub fn get_next_token(&self) -> &::std::option::Option<::std::string::String> {
+        self.inner.get_next_token()
+    }
+    /// <p>Include this parameter to filter the returned list by the indicated statuses.</p>
+    /// <p>Accepted values: PENDING | ACTIVE | FAILED | DELETING</p>
+    /// <p>A recovery point with an index that has the status of ACTIVE can be included in a search.</p>
+    pub fn get_index_status(&self) -> &::std::option::Option<crate::types::IndexStatus> {
+ self.inner.get_index_status()
+ }
+}
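As an aside: the `into_paginator()` entry point above drives repeated calls using `NextToken`. Below is a dependency-free sketch of the flattened pagination loop it implements; the `fetch` closure and `collect_items` function are illustrative stand-ins for one `ListIndexedRecoveryPoints` call and the paginator's item stream, respectively.

```rust
// Illustrative sketch of token-driven pagination with the paginator's
// default stop-on-duplicate-token behavior.
// `fetch` simulates one service call: it takes the previous NextToken and
// returns (items for this page, the next token).
fn collect_items(
    mut fetch: impl FnMut(Option<String>) -> (Vec<String>, Option<String>),
    stop_on_duplicate_token: bool,
) -> Vec<String> {
    let mut out = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let (items, next) = fetch(token.clone());
        out.extend(items);
        // Stop when the service signals the end (no token), or, by default,
        // when it returns the same token twice in a row.
        if next.is_none() || (stop_on_duplicate_token && next == token) {
            break;
        }
        token = next;
    }
    out
}
```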
diff --git a/sdk/backup/src/operation/list_indexed_recovery_points/paginator.rs b/sdk/backup/src/operation/list_indexed_recovery_points/paginator.rs
new file mode 100644
index 000000000000..43008cc6b28b
--- /dev/null
+++ b/sdk/backup/src/operation/list_indexed_recovery_points/paginator.rs
@@ -0,0 +1,150 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+/// Paginator for [`ListIndexedRecoveryPoints`](crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPoints)
+pub struct ListIndexedRecoveryPointsPaginator {
+ handle: std::sync::Arc<crate::client::Handle>,
+ builder: crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsInputBuilder,
+ stop_on_duplicate_token: bool,
+}
+
+impl ListIndexedRecoveryPointsPaginator {
+ /// Create a new paginator-wrapper
+ pub(crate) fn new(
+ handle: std::sync::Arc<crate::client::Handle>,
+ builder: crate::operation::list_indexed_recovery_points::builders::ListIndexedRecoveryPointsInputBuilder,
+ ) -> Self {
+ Self {
+ handle,
+ builder,
+ stop_on_duplicate_token: true,
+ }
+ }
+
+ /// Set the page size
+ ///
+ /// _Note: this method will override any previously set value for `max_results`_
+ pub fn page_size(mut self, limit: i32) -> Self {
+ self.builder.max_results = ::std::option::Option::Some(limit);
+ self
+ }
+
+ /// Create a flattened paginator
+ ///
+ /// This paginator automatically flattens results using `indexed_recovery_points`. Queries to the underlying service
+ /// are dispatched lazily.
+ pub fn items(self) -> crate::operation::list_indexed_recovery_points::paginator::ListIndexedRecoveryPointsPaginatorItems {
+ crate::operation::list_indexed_recovery_points::paginator::ListIndexedRecoveryPointsPaginatorItems(self)
+ }
+
+ /// Stop paginating when the service returns the same pagination token twice in a row.
+ ///
+ /// Defaults to true.
+ ///
+ /// For certain operations, it may be useful to continue on duplicate token. For example,
+ /// if an operation is for tailing a log file in real-time, then continuing may be desired.
+ /// This option can be set to `false` to accommodate these use cases.
+ pub fn stop_on_duplicate_token(mut self, stop_on_duplicate_token: bool) -> Self {
+ self.stop_on_duplicate_token = stop_on_duplicate_token;
+ self
+ }
+
+ /// Create the pagination stream
+ ///
+ /// _Note:_ No requests will be dispatched until the stream is used
+ /// (e.g. with the [`.next().await`](aws_smithy_async::future::pagination_stream::PaginationStream::next) method).
+ pub fn send(
+ self,
+ ) -> ::aws_smithy_async::future::pagination_stream::PaginationStream<
+ ::std::result::Result<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ >,
+ > {
+ // Move individual fields out of self for the borrow checker
+ let builder = self.builder;
+ let handle = self.handle;
+ let runtime_plugins = crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPoints::operation_runtime_plugins(
+ handle.runtime_plugins.clone(),
+ &handle.conf,
+ ::std::option::Option::None,
+ )
+ .with_operation_plugin(crate::sdk_feature_tracker::paginator::PaginatorFeatureTrackerRuntimePlugin::new());
+ ::aws_smithy_async::future::pagination_stream::PaginationStream::new(::aws_smithy_async::future::pagination_stream::fn_stream::FnStream::new(
+ move |tx| {
+ ::std::boxed::Box::pin(async move {
+ // Build the input for the first time. If required fields are missing, this is where we'll produce an early error.
+ let mut input = match builder
+ .build()
+ .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)
+ {
+ ::std::result::Result::Ok(input) => input,
+ ::std::result::Result::Err(e) => {
+ let _ = tx.send(::std::result::Result::Err(e)).await;
+ return;
+ }
+ };
+ loop {
+ let resp =
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPoints::orchestrate(&runtime_plugins, input.clone())
+ .await;
+ // If the input member is None or it was an error
+ let done = match resp {
+ ::std::result::Result::Ok(ref resp) => {
+ let new_token = crate::lens::reflens_list_indexed_recovery_points_output_output_next_token(resp);
+ // Pagination is exhausted when the next token is an empty string
+ let is_empty = new_token.map(|token| token.is_empty()).unwrap_or(true);
+ if !is_empty && new_token == input.next_token.as_ref() && self.stop_on_duplicate_token {
+ true
+ } else {
+ input.next_token = new_token.cloned();
+ is_empty
+ }
+ }
+ ::std::result::Result::Err(_) => true,
+ };
+ if tx.send(resp).await.is_err() {
+ // receiving end was dropped
+ return;
+ }
+ if done {
+ return;
+ }
+ }
+ })
+ },
+ ))
+ }
+}
+
+/// Flattened paginator for `ListIndexedRecoveryPointsPaginator`
+///
+/// This is created with [`.items()`](ListIndexedRecoveryPointsPaginator::items)
+pub struct ListIndexedRecoveryPointsPaginatorItems(ListIndexedRecoveryPointsPaginator);
+
+impl ListIndexedRecoveryPointsPaginatorItems {
+ /// Create the pagination stream
+ ///
+ /// _Note_: No requests will be dispatched until the stream is used
+ /// (e.g. with the [`.next().await`](aws_smithy_async::future::pagination_stream::PaginationStream::next) method).
+ ///
+ /// To read the entirety of the paginator, use [`.collect::<Result<Vec<_>, _>()`](aws_smithy_async::future::pagination_stream::PaginationStream::collect).
+ pub fn send(
+ self,
+ ) -> ::aws_smithy_async::future::pagination_stream::PaginationStream<
+ ::std::result::Result<
+ crate::types::IndexedRecoveryPoint,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::list_indexed_recovery_points::ListIndexedRecoveryPointsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ >,
+ > {
+ ::aws_smithy_async::future::pagination_stream::TryFlatMap::new(self.0.send()).flat_map(|page| {
+ crate::lens::lens_list_indexed_recovery_points_output_output_indexed_recovery_points(page)
+ .unwrap_or_default()
+ .into_iter()
+ })
+ }
+}
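The generated `send` loop above stops when the next token is empty, or when the same non-empty token is returned twice in a row while `stop_on_duplicate_token` is set. A standalone sketch of that same stop logic (plain Rust, no SDK or async machinery; `Page` and `fetch_page` are hypothetical stand-ins simulating a three-page result set):

```rust
// Sketch of the token-driven pagination loop from the generated code.
// Only the stop conditions mirror the codegen; the fetch is simulated.
struct Page {
    items: Vec<u32>,
    next_token: Option<String>,
}

fn fetch_page(token: Option<&str>) -> Page {
    // Simulated service: three pages, the last of which returns no token.
    match token {
        None => Page { items: vec![1, 2], next_token: Some("t1".into()) },
        Some("t1") => Page { items: vec![3], next_token: Some("t2".into()) },
        _ => Page { items: vec![4], next_token: None },
    }
}

fn collect_all(stop_on_duplicate_token: bool) -> Vec<u32> {
    let mut out = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let page = fetch_page(token.as_deref());
        out.extend(&page.items);
        let new_token = page.next_token;
        // Pagination is exhausted when the next token is absent or empty.
        let is_empty = new_token.as_deref().map_or(true, |t| t.is_empty());
        // A repeated non-empty token would otherwise loop forever.
        let is_duplicate = !is_empty && new_token == token;
        if is_empty || (is_duplicate && stop_on_duplicate_token) {
            return out;
        }
        token = new_token;
    }
}

fn main() {
    assert_eq!(collect_all(true), vec![1, 2, 3, 4]);
}
```

Disabling `stop_on_duplicate_token` only matters for services that intentionally re-issue the same token (e.g. log-tailing APIs); for ListIndexedRecoveryPoints the default of `true` is the safe choice.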
diff --git a/sdk/backup/src/operation/start_backup_job/_start_backup_job_input.rs b/sdk/backup/src/operation/start_backup_job/_start_backup_job_input.rs
index af235cd33054..a59a545ce910 100644
--- a/sdk/backup/src/operation/start_backup_job/_start_backup_job_input.rs
+++ b/sdk/backup/src/operation/start_backup_job/_start_backup_job_input.rs
@@ -28,6 +28,18 @@ pub struct StartBackupJobInput {
+ /// <p>The backup option for a selected resource. This option is only available for Windows Volume Shadow Copy Service (VSS) backup jobs.</p>
+ /// <p>Valid values: Set to <code>"WindowsVSS":"enabled"</code> to enable the <code>WindowsVSS</code> backup option and create a Windows VSS backup. Set to <code>"WindowsVSS":"disabled"</code> to create a regular backup. The <code>WindowsVSS</code> option is not enabled by default.</p>
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Amazon Web Services Region where they are created.</p>
+ /// <p>The request failed due to a temporary failure of the server.</p>
+ ServiceUnavailableException(crate::types::error::ServiceUnavailableException),
+ /// An unexpected error occurred (e.g., invalid JSON returned by the service or an unknown error code).
+ #[deprecated(note = "Matching `Unhandled` directly is not forwards compatible. Instead, match using a \
+ variable wildcard pattern and check `.code()`:
+ \
+ `err if err.code() == Some(\"SpecificExceptionCode\") => { /* handle the error */ }`
+ \
+ See [`ProvideErrorMetadata`](#impl-ProvideErrorMetadata-for-UpdateRecoveryPointIndexSettingsError) for what information is available for the error.")]
+ Unhandled(crate::error::sealed_unhandled::Unhandled),
+}
+impl UpdateRecoveryPointIndexSettingsError {
+ /// Creates the `UpdateRecoveryPointIndexSettingsError::Unhandled` variant from any error type.
+ pub fn unhandled(
+ err: impl ::std::convert::Into<::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.into(),
+ meta: ::std::default::Default::default(),
+ })
+ }
+
+ /// Creates the `UpdateRecoveryPointIndexSettingsError::Unhandled` variant from an [`ErrorMetadata`](::aws_smithy_types::error::ErrorMetadata).
+ pub fn generic(err: ::aws_smithy_types::error::ErrorMetadata) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source: err.clone().into(),
+ meta: err,
+ })
+ }
+ ///
+ /// Returns error metadata, which includes the error code, message,
+ /// request ID, and potentially additional information.
+ ///
+ pub fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::InvalidRequestException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::MissingParameterValueException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ResourceNotFoundException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::ServiceUnavailableException(e) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(e),
+ Self::Unhandled(e) => &e.meta,
+ }
+ }
+ /// Returns `true` if the error kind is `UpdateRecoveryPointIndexSettingsError::InvalidParameterValueException`.
+ pub fn is_invalid_parameter_value_exception(&self) -> bool {
+ matches!(self, Self::InvalidParameterValueException(_))
+ }
+ /// Returns `true` if the error kind is `UpdateRecoveryPointIndexSettingsError::InvalidRequestException`.
+ pub fn is_invalid_request_exception(&self) -> bool {
+ matches!(self, Self::InvalidRequestException(_))
+ }
+ /// Returns `true` if the error kind is `UpdateRecoveryPointIndexSettingsError::MissingParameterValueException`.
+ pub fn is_missing_parameter_value_exception(&self) -> bool {
+ matches!(self, Self::MissingParameterValueException(_))
+ }
+ /// Returns `true` if the error kind is `UpdateRecoveryPointIndexSettingsError::ResourceNotFoundException`.
+ pub fn is_resource_not_found_exception(&self) -> bool {
+ matches!(self, Self::ResourceNotFoundException(_))
+ }
+ /// Returns `true` if the error kind is `UpdateRecoveryPointIndexSettingsError::ServiceUnavailableException`.
+ pub fn is_service_unavailable_exception(&self) -> bool {
+ matches!(self, Self::ServiceUnavailableException(_))
+ }
+}
+impl ::std::error::Error for UpdateRecoveryPointIndexSettingsError {
+ fn source(&self) -> ::std::option::Option<&(dyn ::std::error::Error + 'static)> {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::std::option::Option::Some(_inner),
+ Self::InvalidRequestException(_inner) => ::std::option::Option::Some(_inner),
+ Self::MissingParameterValueException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ResourceNotFoundException(_inner) => ::std::option::Option::Some(_inner),
+ Self::ServiceUnavailableException(_inner) => ::std::option::Option::Some(_inner),
+ Self::Unhandled(_inner) => ::std::option::Option::Some(&*_inner.source),
+ }
+ }
+}
+impl ::std::fmt::Display for UpdateRecoveryPointIndexSettingsError {
+ fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
+ match self {
+ Self::InvalidParameterValueException(_inner) => _inner.fmt(f),
+ Self::InvalidRequestException(_inner) => _inner.fmt(f),
+ Self::MissingParameterValueException(_inner) => _inner.fmt(f),
+ Self::ResourceNotFoundException(_inner) => _inner.fmt(f),
+ Self::ServiceUnavailableException(_inner) => _inner.fmt(f),
+ Self::Unhandled(_inner) => {
+ if let ::std::option::Option::Some(code) = ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self) {
+ write!(f, "unhandled error ({code})")
+ } else {
+ f.write_str("unhandled error")
+ }
+ }
+ }
+ }
+}
+impl ::aws_smithy_types::retry::ProvideErrorKind for UpdateRecoveryPointIndexSettingsError {
+ fn code(&self) -> ::std::option::Option<&str> {
+ ::aws_smithy_types::error::metadata::ProvideErrorMetadata::code(self)
+ }
+ fn retryable_error_kind(&self) -> ::std::option::Option<::aws_smithy_types::retry::ErrorKind> {
+ ::std::option::Option::None
+ }
+}
+impl ::aws_smithy_types::error::metadata::ProvideErrorMetadata for UpdateRecoveryPointIndexSettingsError {
+ fn meta(&self) -> &::aws_smithy_types::error::ErrorMetadata {
+ match self {
+ Self::InvalidParameterValueException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::InvalidRequestException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::MissingParameterValueException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ResourceNotFoundException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::ServiceUnavailableException(_inner) => ::aws_smithy_types::error::metadata::ProvideErrorMetadata::meta(_inner),
+ Self::Unhandled(_inner) => &_inner.meta,
+ }
+ }
+}
+impl ::aws_smithy_runtime_api::client::result::CreateUnhandledError for UpdateRecoveryPointIndexSettingsError {
+ fn create_unhandled_error(
+ source: ::std::boxed::Box<dyn ::std::error::Error + ::std::marker::Send + ::std::marker::Sync + 'static>,
+ meta: ::std::option::Option<::aws_smithy_types::error::ErrorMetadata>,
+ ) -> Self {
+ Self::Unhandled(crate::error::sealed_unhandled::Unhandled {
+ source,
+ meta: meta.unwrap_or_default(),
+ })
+ }
+}
+impl ::aws_types::request_id::RequestId for crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError {
+ fn request_id(&self) -> Option<&str> {
+ self.meta().request_id()
+ }
+}
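The deprecation note on the `Unhandled` variant above recommends matching with a wildcard guard on `.code()` instead of matching `Unhandled` directly, so code stays correct when the service adds error variants. A standalone sketch of that pattern (simplified enum and `code()` method as stand-ins for the generated error type, not the SDK's actual definitions):

```rust
// Sketch of forward-compatible error matching: known variants get
// dedicated arms, everything else falls through to a code() check.
#[derive(Debug)]
enum UpdateError {
    ResourceNotFound,
    Other { code: Option<String> },
}

impl UpdateError {
    fn code(&self) -> Option<&str> {
        match self {
            UpdateError::ResourceNotFound => Some("ResourceNotFoundException"),
            UpdateError::Other { code } => code.as_deref(),
        }
    }
}

fn describe(err: &UpdateError) -> &'static str {
    match err {
        UpdateError::ResourceNotFound => "recovery point does not exist",
        // Wildcard + code check stays valid even if new variants appear.
        e if e.code() == Some("ThrottlingException") => "retry later",
        _ => "unhandled error",
    }
}

fn main() {
    assert_eq!(describe(&UpdateError::ResourceNotFound), "recovery point does not exist");
    let throttled = UpdateError::Other { code: Some("ThrottlingException".into()) };
    assert_eq!(describe(&throttled), "retry later");
}
```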
+
+pub use crate::operation::update_recovery_point_index_settings::_update_recovery_point_index_settings_output::UpdateRecoveryPointIndexSettingsOutput;
+
+pub use crate::operation::update_recovery_point_index_settings::_update_recovery_point_index_settings_input::UpdateRecoveryPointIndexSettingsInput;
+
+mod _update_recovery_point_index_settings_input;
+
+mod _update_recovery_point_index_settings_output;
+
+/// Builders
+pub mod builders;
diff --git a/sdk/backup/src/operation/update_recovery_point_index_settings/_update_recovery_point_index_settings_input.rs b/sdk/backup/src/operation/update_recovery_point_index_settings/_update_recovery_point_index_settings_input.rs
new file mode 100644
index 000000000000..a5a99af6e78e
--- /dev/null
+++ b/sdk/backup/src/operation/update_recovery_point_index_settings/_update_recovery_point_index_settings_input.rs
@@ -0,0 +1,145 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+#[allow(missing_docs)] // documentation missing in model
+#[non_exhaustive]
+#[derive(::std::clone::Clone, ::std::cmp::PartialEq, ::std::fmt::Debug)]
+pub struct UpdateRecoveryPointIndexSettingsInput {
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+ /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+ /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+ /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+ /// This field is required.
+ pub fn backup_vault_name(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+ self.backup_vault_name = ::std::option::Option::Some(input.into());
+ self
+ }
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+ /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+ /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+ /// This field is required.
+ pub fn recovery_point_arn(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
+ self.recovery_point_arn = ::std::option::Option::Some(input.into());
+ self
+ }
+ /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+ /// <p>Index can have 1 of 2 possible values, either <code>ENABLED</code> or <code>DISABLED</code>.</p>
+ /// <p>A value of <code>ENABLED</code> means a backup index for an eligible <code>ACTIVE</code> recovery point has been created.</p>
+ /// <p>A value of <code>DISABLED</code> means a backup index was deleted.</p>
+ pub fn get_index(&self) -> &::std::option::Option<crate::types::Index> {
+ &self.index
+ }
+ pub(crate) fn _request_id(mut self, request_id: impl Into<String>) -> Self {
+ self._request_id = Some(request_id.into());
+ self
+ }
+
+ pub(crate) fn _set_request_id(&mut self, request_id: Option<String>) -> &mut Self {
+ self._request_id = request_id;
+ self
+ }
+ /// Consumes the builder and constructs a [`UpdateRecoveryPointIndexSettingsOutput`](crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput).
+ pub fn build(self) -> crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput {
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput {
+ backup_vault_name: self.backup_vault_name,
+ recovery_point_arn: self.recovery_point_arn,
+ index_status: self.index_status,
+ index: self.index,
+ _request_id: self._request_id,
+ }
+ }
+}
diff --git a/sdk/backup/src/operation/update_recovery_point_index_settings/builders.rs b/sdk/backup/src/operation/update_recovery_point_index_settings/builders.rs
new file mode 100644
index 000000000000..fea6110b635b
--- /dev/null
+++ b/sdk/backup/src/operation/update_recovery_point_index_settings/builders.rs
@@ -0,0 +1,180 @@
+// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
+pub use crate::operation::update_recovery_point_index_settings::_update_recovery_point_index_settings_output::UpdateRecoveryPointIndexSettingsOutputBuilder;
+
+pub use crate::operation::update_recovery_point_index_settings::_update_recovery_point_index_settings_input::UpdateRecoveryPointIndexSettingsInputBuilder;
+
+impl crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsInputBuilder {
+ /// Sends a request with this input using the given client.
+ pub async fn send_with(
+ self,
+ client: &crate::Client,
+ ) -> ::std::result::Result<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let mut fluent_builder = client.update_recovery_point_index_settings();
+ fluent_builder.inner = self;
+ fluent_builder.send().await
+ }
+}
+/// Fluent builder constructing a request to `UpdateRecoveryPointIndexSettings`.
+///
+///
This operation updates the settings of a recovery point index.
+///
Required: BackupVaultName, RecoveryPointArn, and IAMRoleArn
+#[derive(::std::clone::Clone, ::std::fmt::Debug)]
+pub struct UpdateRecoveryPointIndexSettingsFluentBuilder {
+ handle: ::std::sync::Arc<crate::client::Handle>,
+ inner: crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsInputBuilder,
+ config_override: ::std::option::Option<crate::config::Builder>,
+}
+impl
+ crate::client::customize::internal::CustomizableSend<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput,
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError,
+ > for UpdateRecoveryPointIndexSettingsFluentBuilder
+{
+ fn send(
+ self,
+ config_override: crate::config::Builder,
+ ) -> crate::client::customize::internal::BoxFuture<
+ crate::client::customize::internal::SendResult<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput,
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError,
+ >,
+ > {
+ ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
+ }
+}
+impl UpdateRecoveryPointIndexSettingsFluentBuilder {
+ /// Creates a new `UpdateRecoveryPointIndexSettingsFluentBuilder`.
+ pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
+ Self {
+ handle,
+ inner: ::std::default::Default::default(),
+ config_override: ::std::option::Option::None,
+ }
+ }
+ /// Access the UpdateRecoveryPointIndexSettings as a reference.
+ pub fn as_input(&self) -> &crate::operation::update_recovery_point_index_settings::builders::UpdateRecoveryPointIndexSettingsInputBuilder {
+ &self.inner
+ }
+ /// Sends the request and returns the response.
+ ///
+ /// If an error occurs, an `SdkError` will be returned with additional details that
+ /// can be matched against.
+ ///
+ /// By default, any retryable failures will be retried twice. Retry behavior
+ /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
+ /// set when configuring the client.
+ pub async fn send(
+ self,
+ ) -> ::std::result::Result<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput,
+ ::aws_smithy_runtime_api::client::result::SdkError<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError,
+ ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
+ >,
+ > {
+ let input = self
+ .inner
+ .build()
+ .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
+ let runtime_plugins = crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettings::operation_runtime_plugins(
+ self.handle.runtime_plugins.clone(),
+ &self.handle.conf,
+ self.config_override,
+ );
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettings::orchestrate(&runtime_plugins, input).await
+ }
+
+ /// Consumes this builder, creating a customizable operation that can be modified before being sent.
+ pub fn customize(
+ self,
+ ) -> crate::client::customize::CustomizableOperation<
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsOutput,
+ crate::operation::update_recovery_point_index_settings::UpdateRecoveryPointIndexSettingsError,
+ Self,
+ > {
+ crate::client::customize::CustomizableOperation::new(self)
+ }
+ pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into<crate::config::Builder>) -> Self {
+ self.set_config_override(::std::option::Option::Some(config_override.into()));
+ self
+ }
+
+ pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option<crate::config::Builder>) -> &mut Self {
+ self.config_override = config_override;
+ self
+ }
+ /// <p>The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created.</p>
+ /// <p>Accepted characters include lowercase letters, numbers, and hyphens.</p>
+ /// <p>An ARN that uniquely identifies a recovery point; for example, <code>arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45</code>.</p>