
Commit c1629de: Merge pull request #7206 from jddocs/rc-v1.366.0
[Release Candidate] v1.366.0
2 parents: 5dc78e8 + b04871d

File tree

23 files changed: +50, -50 lines
  • docs
    • guides
      • akamai/solutions
        • iot-firmware-upgrades-with-obj-and-cdn
        • observability-with-datastream-and-multiplexing
        • observability-with-datastream-and-trafficpeak
      • applications
        • big-data/getting-started-with-pytorch-lightning
        • configuration-management/terraform/secrets-management-with-terraform
        • messaging/linode-object-storage-with-mastodon
      • development/version-control/using-gitlab-runners-with-linode-object-storage
      • kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible
      • platform
        • migrate-to-linode
          • migrate-from-aws-s3-to-linode-object-storage
          • migrate-from-azure-blob-storage-to-linode-object-storage
          • migrate-from-google-cloud-storage-to-linode-object-storage
        • object-storage
          • backing-up-compute-instance-to-object-storage
          • how-to-configure-nextcloud-to-use-linode-object-storage-as-an-external-storage-mount
          • how-to-move-objects-between-buckets
          • working-with-cors-linode-object-storage
      • tools-reference/tools/rclone-object-storage-file-sync
      • uptime/monitoring/migrating-from-aws-cloudwatch-to-prometheus-and-grafana-on-akamai
    • marketplace-docs/guides
    • reference-architecture


docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ This solution creates a streamlined delivery pipeline that allows developers to
 
 ### Systems and Components
 
-- **Linode Object Storage:** An S3 compatible Object Storage bucket
+- **Linode Object Storage:** An Amazon S3-compatible Object Storage bucket
 
 - **Linode VM:** A Dedicated 16GB Linode virtual machine
 

docs/guides/akamai/solutions/observability-with-datastream-and-multiplexing/index.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ Coupling cloud-based multiplexing with DataStream edge logging allows you to con
 
 ### Integration and Migration Effort
 
-The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. s3 buckets, Google Cloud Storage, etc.).
+The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. Amazon S3-compatible buckets, Google Cloud Storage, etc.).
 
 Using the following example, you can reduce your overall egress costs by pointing your cloud multiplexing architecture to Akamai’s Object Storage rather than a third-party object storage solution.
 

docs/guides/akamai/solutions/observability-with-datastream-and-trafficpeak/index.md

Lines changed: 1 addition & 1 deletion
@@ -87,4 +87,4 @@ Below is a high-level diagram and walkthrough of a DataStream and TrafficPeak ar
 
 - **VMs:** Compute Instances used to run TrafficPeak’s log ingest and processing software. Managed by Akamai.
 
-- **Object Storage:** S3 compatible object storage used to store log data from TrafficPeak. Managed by Akamai.
+- **Object Storage:** Amazon S3-compatible object storage used to store log data from TrafficPeak. Managed by Akamai.

docs/guides/applications/big-data/getting-started-with-pytorch-lightning/index.md

Lines changed: 2 additions & 2 deletions
@@ -51,7 +51,7 @@ An optimized pipeline consists of a set of one or more "gold images". These beco
 
 Lightning code is configured to include multiple data loader steps to train neural networks. Depending on the desired training iterations and epochs, configured code can optionally store numerous intermediate storage objects and spaces. This allows for the isolation of training and validation steps for further testing, validation, and feedback loops.
 
-Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other S3 buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired.
+Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other Amazon S3-compatible buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired.
 
 This introduces an additional stage in the pipeline between and among instances for high-volume or large tensor data source research.
 
@@ -91,7 +91,7 @@ Several storage profiles work for the needs of modeling research, including:
 
 - **Mounted Linode Volumes**: Up to eight logical disk volumes ranging from 10 GB to 80 TB can be optionally added to any Linode. Volumes are mounted and unmounted either manually or programmatically. Volumes may be added, deleted, and/or backed-up during the research cycle. Volume storage costs are optional.
 
-- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so S3 objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional.
+- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so Amazon S3-compatible objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional.
 
 - **External URL Code Calls**: External networked data sources are subject to the data flow charges associated with the Linode GPU or other instance cost.
 
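The storage profiles this guide changes all reduce to the same pattern: training code reads objects from an Amazon S3-compatible endpoint. As a minimal sketch of that pattern (not part of this commit), assuming boto3 and PyTorch are installed and using hypothetical endpoint, bucket, and key names:

```python
import io

import boto3
import torch
from torch.utils.data import Dataset


class S3TensorDataset(Dataset):
    """Fetch serialized tensors on demand from an S3-compatible bucket."""

    def __init__(self, endpoint_url, bucket, keys):
        # Any Amazon S3-compatible endpoint works: AWS, Linode Object Storage, Minio, etc.
        # Credentials are picked up from the environment or ~/.aws/credentials.
        self.s3 = boto3.client("s3", endpoint_url=endpoint_url)
        self.bucket = bucket
        self.keys = keys

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        obj = self.s3.get_object(Bucket=self.bucket, Key=self.keys[idx])
        return torch.load(io.BytesIO(obj["Body"].read()))


# Hypothetical names, for illustration only.
dataset = S3TensorDataset(
    endpoint_url="https://us-east-1.linodeobjects.com",
    bucket="training-data",
    keys=["batches/batch-000.pt", "batches/batch-001.pt"],
)
```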

docs/guides/applications/configuration-management/terraform/secrets-management-with-terraform/index.md

Lines changed: 1 addition & 1 deletion
@@ -173,7 +173,7 @@ As of the writing of this guide, **sensitive information used to generate your T
 
 ### Remote Backends
 
-Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an S3 compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text.
+Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an Amazon S3-compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text.
 
 Some backends, like Consul, also allow for state locking. If one user is applying a state, another user cannot make any changes.
 

docs/guides/applications/messaging/linode-object-storage-with-mastodon/index.md

Lines changed: 4 additions & 4 deletions
@@ -34,13 +34,13 @@ Mastodon by default stores its media attachments locally. Every upload is saved
 
 If your Mastodon instance stays below a certain size and traffic level, these image uploads might not cause issues. But as your Mastodon instance grows, the local storage approach can cause difficulties. Media stored in this way is often difficult to manage and a burden on your server.
 
-But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately.
+But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An Amazon S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately.
 
 To learn more about the features of object storage generally and Linode Object Storage more particularly, take a look at our [Linode Object Storage overview](/docs/products/storage/object-storage/).
 
 ## How to Use Linode Object Storage with Mastodon
 
-The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other S3-compatible object storage buckets with Mastodon.
+The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other Amazon S3-compatible object storage buckets with Mastodon.
 
 The tutorial gives instructions for creating a new Mastodon instance, but the instructions should also work for most existing Mastodon instances regardless of whether it was installed on Docker or from source. Additionally, the tutorial includes steps for migrating existing, locally-stored Mastodon media to the object storage instance.
 
@@ -195,7 +195,7 @@ At this point, your Mastodon instance is ready to start storing media on your Li
 
 If you are adding object storage to an existing Mastodon instance, likely already have content stored locally. And likely you want to migrate that content to your new Linode Object Storage bucket.
 
-To do so, you can use a tool for managing S3 storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage.
+To do so, you can use a tool for managing Amazon S3-compatible storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage.
 
 However, this guide uses the powerful and flexible [rclone](https://rclone.org/s3/). `rclone` operates on a wide range of storage devices and platforms, not just S3, and it is exceptional for syncing across storage mediums.
 
@@ -239,6 +239,6 @@ Perhaps the simplest way to verify your Mastodon configuration is by making a po
 
 You Mastodon instance now has its media storage needs being handled by object storage. And with that your server has become more scalable and prepared for an expanding user base.
 
-The links below provide additional information on how the setup between Mastodon and an S3-compatible storage works.
+The links below provide additional information on how the setup between Mastodon and Amazon S3-compatible storage works.
 
 To keep learning about Mastodon, be sure to take a look at the official [Mastodon blog](https://blog.joinmastodon.org/) and the [Mastodon discussion board](https://discourse.joinmastodon.org/).
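The migration step this guide describes is a local-to-bucket copy performed with rclone. The same idea can be sketched with the Boto3 SDK (named in this release's Azure guide); this is illustration only, not the guide's method, and the media path, endpoint, and bucket below are hypothetical:

```python
from pathlib import Path

import boto3

# Hypothetical values; source installs typically keep media under live/public/system.
MEDIA_ROOT = Path("/home/mastodon/live/public/system")
s3 = boto3.client("s3", endpoint_url="https://us-east-1.linodeobjects.com")

for path in MEDIA_ROOT.rglob("*"):
    if path.is_file():
        # Preserve the relative layout so Mastodon's media URLs keep resolving.
        s3.upload_file(str(path), "mastodon-media", str(path.relative_to(MEDIA_ROOT)))
```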

docs/guides/development/version-control/using-gitlab-runners-with-linode-object-storage/index.md

Lines changed: 1 addition & 1 deletion
@@ -395,7 +395,7 @@ test-job-1:
 
 By default, cached files are stored locally alongside your GitLab Runner Manager. But that option may not be the most efficient, especially as your GitLab pipelines become more complicated and your projects' storage needs expand.
 
-To remedy this, you can adjust your GitLab Runner configuration to use an S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/).
+To remedy this, you can adjust your GitLab Runner configuration to use an Amazon S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/).
 
 These next steps show you how you can integrate a Linode Object Storage bucket with your GitLab Runner to store cached resources from CI/CD jobs.
 

docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ external_resources:
 
 ## What is Minio?
 
-Minio is an open source, S3 compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service.
+Minio is an open source, Amazon S3-compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service.
 
 ## Before You Begin
 
@@ -411,6 +411,6 @@ Persistent Volumes(PV) are an abstraction in Kubernetes that represents a unit o
 
 ![Minio Login Screen](minio-login-screen.png)
 
-1. Minio has similar functionality to S3: file uploads, creating buckets, and storing other data.
+1. Minio has similar functionality to Amazon S3: file uploads, creating buckets, and storing other data.
 
 ![Minio Browser](minio-browser.png)
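That similarity is what lets standard S3 tooling drive Minio. A minimal sketch (not from this commit), assuming a Minio service reachable at a hypothetical address with placeholder credentials:

```python
import boto3

# Minio speaks the Amazon S3 API, so the standard client works once pointed at it.
minio = boto3.client(
    "s3",
    endpoint_url="http://minio.example.local:9000",  # hypothetical service address
    aws_access_key_id="minio-access-key",            # placeholder credentials
    aws_secret_access_key="minio-secret-key",
)

minio.create_bucket(Bucket="example-bucket")
minio.upload_file("backup.tar.gz", "example-bucket", "backup.tar.gz")
```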

docs/guides/platform/migrate-to-linode/migrate-from-aws-s3-to-linode-object-storage/index.md

Lines changed: 5 additions & 5 deletions
@@ -13,7 +13,7 @@ external_resources:
 - '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)'
 ---
 
-Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
+Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
 
 ## Migration Considerations
 
@@ -37,7 +37,7 @@ Linode Object Storage is an S3-compatible service used for storing large amounts
 
 There are two architecture options for completing a data migration from AWS S3 to Linode Object Storage. One of these architectures is required to be in place prior to initiating the data migration:
 
-**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source S3 bucket. Data is transferred internally from the S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket.
+**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source AWS S3 bucket. Data is transferred internally from the AWS S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket.
 
 - **Recommended for:** speed of transfer, users with AWS platform familiarity
 
@@ -53,7 +53,7 @@ Rclone generally performs better when placed closer to the source data being cop
 
 1. A source AWS S3 bucket with the content to be transferred.
 
-1. An AWS EC2 instance running rclone in the same region as the source S3 bucket. The S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. Your IAM policy should allow S3 access only via your VPC Endpoint.
+1. An AWS EC2 instance running rclone in the same region as the source AWS S3 bucket. The AWS S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. Your IAM policy should allow S3 access only via your VPC Endpoint.
 
 1. Data is copied across the public internet from the AWS EC2 instance to a target Linode Object Storage bucket. This results in egress (outbound traffic) being calculated by AWS.
 
@@ -93,7 +93,7 @@ Rclone generally performs better when placed closer to the source data being cop
 - Secret key
 - Region ID
 
-- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your S3 bucket.
+- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your AWS S3 bucket.
 
 - An **existing Linode Object Storage bucket** with:
 
@@ -194,7 +194,7 @@ Rclone generally performs better when placed closer to the source data being cop
 
 #### Rclone Copy Command Breakdown
 
-- `aws:aws-bucket-name/`: The AWS remote provider and source S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket.
+- `aws:aws-bucket-name/`: The AWS remote provider and source AWS S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket.
 
 - `linode:linode-bucket-name/`: The Linode remote provider and target Object Storage bucket.
 
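Judging from the two arguments in the breakdown above, the full command being described is presumably `rclone copy aws:aws-bucket-name/ linode:linode-bucket-name/`. For readers who prefer an SDK, a rough Boto3 equivalent of what the intermediary host does in Architecture 1 (read from the source bucket, write to the target) might look like the sketch below; the bucket names are the guide's placeholders, the Linode endpoint is an assumed example region, and this is an illustration rather than the guide's method:

```python
import boto3

# Source uses the default AWS endpoint; target points at Linode Object Storage.
source = boto3.client("s3", region_name="us-east-1")
target = boto3.client("s3", endpoint_url="https://us-east-1.linodeobjects.com")

paginator = source.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="aws-bucket-name"):
    for item in page.get("Contents", []):
        # Stream each object through this host, as the EC2 instance does in Architecture 1.
        body = source.get_object(Bucket="aws-bucket-name", Key=item["Key"])["Body"]
        target.upload_fileobj(body, "linode-bucket-name", item["Key"])
```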

docs/guides/platform/migrate-to-linode/migrate-from-azure-blob-storage-to-linode-object-storage/index.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ external_resources:
 - '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)'
 ---
 
-Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
+Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
 
 ## Migration Considerations
 
@@ -305,4 +305,4 @@ There are several next steps to consider after a successful object storage migra
 
 - **Confirm the changeover is functioning as expected.** Allow some time to make sure your updated workloads and jobs are interacting successfully with Linode Object Storage. Once you confirm everything is working as expected, you can safely delete the original source bucket and its contents.
 
-- **Take any additional steps to update your system for S3 compatibility.** Since the Azure Blob Storage API is not S3 compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage.
+- **Take any additional steps to update your system for Amazon S3 compatibility.** Since the Azure Blob Storage API is not Amazon S3-compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an Amazon S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or Amazon S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage.
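On that last point, switching code from the Azure SDK to Boto3 usually comes down to creating a client with an explicit endpoint; a minimal sketch, assuming a hypothetical example region and bucket name, with credentials supplied via the environment:

```python
import boto3

# The endpoint_url is the only Linode-specific setting; credentials come from
# the environment or ~/.aws/credentials, as with any Boto3 client.
s3 = boto3.client("s3", endpoint_url="https://us-east-1.linodeobjects.com")

s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello")
keys = [obj["Key"] for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", [])]
print(keys)
```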
