diff --git a/_partials/_livesync-configure-source-database-awsrds.md b/_partials/_livesync-configure-source-database-awsrds.md index 0cb2d35809..9dbefb57fc 100644 --- a/_partials/_livesync-configure-source-database-awsrds.md +++ b/_partials/_livesync-configure-source-database-awsrds.md @@ -31,7 +31,7 @@ Updating parameters on a PostgreSQL instance will cause an outage. Choose a time Changing parameters will cause an outage. Wait for the database instance to reboot before continuing. 1. Verify that the settings are live in your database. -1. **Create a user for livesync and assign permissions** +1. **Create a user for $LIVESYNC and assign permissions** 1. Create ``: @@ -63,7 +63,7 @@ Updating parameters on a PostgreSQL instance will cause an outage. Choose a time EOF ``` - If the tables you are syncing are not in the `public` schema, grant the user permissions for each schema you are syncing.: + If the tables you are syncing are not in the `public` schema, grant the user permissions for each schema you are syncing: ```sql psql $SOURCE < TO ; diff --git a/_partials/_livesync-configure-source-database.md b/_partials/_livesync-configure-source-database.md index 3ce73d7a4e..f4fc65ffef 100644 --- a/_partials/_livesync-configure-source-database.md +++ b/_partials/_livesync-configure-source-database.md @@ -15,7 +15,7 @@ import EnableReplication from "versionContent/_partials/_migrate_live_setup_enab This will require a restart of the PostgreSQL source database. -1. **Create a user for livesync and assign permissions** +1. **Create a user for $LIVESYNC and assign permissions** 1. 
Create ``: @@ -47,7 +47,7 @@ import EnableReplication from "versionContent/_partials/_migrate_live_setup_enab EOF ``` - If the tables you are syncing are not in the `public` schema, grant the user permissions for each schema you are syncing.: + If the tables you are syncing are not in the `public` schema, grant the user permissions for each schema you are syncing: ```sql psql $SOURCE < TO ; diff --git a/_partials/_livesync-console.md b/_partials/_livesync-console.md index d9ca0192ef..e15f9b37b6 100644 --- a/_partials/_livesync-console.md +++ b/_partials/_livesync-console.md @@ -11,16 +11,16 @@ import TuneSourceDatabaseAWSRDS from "versionContent/_partials/_livesync-configu - Ensure that the source $PG instance and the target $SERVICE_LONG have the same extensions installed. - LiveSync does not create extensions on the target. If the table uses column types from an extension, + $LIVESYNC_CAP does not create extensions on the target. If the table uses column types from an extension, first create the extension on the target $SERVICE_LONG before syncing the table. ## Limitations -* Indexes(including Primary Key and Unique constraints) are not migrated by $SERVICE_LONG. +* Indexes (including Primary Key and Unique constraints) are not migrated by $SERVICE_LONG. - We recommend that you create only necessary indexes on the target $SERVICE_LONG depending on your query patterns. + We recommend that you create only the necessary indexes on the target $SERVICE_LONG depending on your query patterns. -* Tables with user defined types are not migrated by $SERVICE_LONG. +* Tables with user-defined types are not migrated by $SERVICE_LONG. You need to create the user-defined types on the target $SERVICE_LONG before syncing the table. @@ -74,41 +74,42 @@ To sync data from your PostgreSQL database to your $SERVICE_LONG using $CONSOLE: 1. **Connect to your $SERVICE_LONG** In [$CONSOLE][portal-ops-mode], select the service to sync live data to. -1. **Start livesync** - 1. 
Click `Actions` > `livesync for PostgreSQL`. +1. **Start $LIVESYNC** + 1. Click `Actions` > `Livesync for PostgreSQL`. 1. **Connect the source database and target $SERVICE_SHORT** ![Livesync wizard](https://assets.timescale.com/docs/images/livesync-wizard.png) - In `livesync for PostgreSQL`: + In `Livesync for Postgres`: 1. Set the `Livesync Name`. - 1. Set the` PostgreSQL Connection String` to point to the source database you want to sync to Timescale. + 1. Set the `PostgreSQL Connection String` to point to the source database you want to sync to Timescale. This is the connection string for [``][livesync-tune-source-db]. - 1. Press `Continue`. + 1. Click `Continue`. $CONSOLE connects to the source database and retrieves the schema information. 1. **Optimize the data to synchronize in hypertables** ![livesync start](https://assets.timescale.com/docs/images/livesync-start.png) - 1. Select the table to sync, and press `+`. - $CONSOLE checks the table schema and, if possible suggests the column to use as the time dimension in a hypertable. + 1. Select the table to sync and click `+`. + + $CONSOLE checks the table schema and, if possible, suggests the column to use as the time dimension in a hypertable. 1. Repeat this step for each table you want to sync. - 1. Press `Start Livesync`. + 1. Click `Start Livesync`. - $CONSOLE starts livesync between the source database and the target $SERVICE_SHORT and displays the progress. + $CONSOLE starts $LIVESYNC between the source database and the target $SERVICE_SHORT and displays the progress. 1. **Monitor synchronization** - 1. To view the progress of the livesync, click the name of the livesync process: + 1. To view the progress of the $LIVESYNC, click the name of the $LIVESYNC process: ![livesync view status](https://assets.timescale.com/docs/images/livesync-view-status.png) - 1. To pause and restart livesync, click the buttons on the right of the livesync process and select an action: + 1. 
To pause and restart $LIVESYNC, click the buttons on the right of the $LIVESYNC process and select an action: ![livesync start stop](https://assets.timescale.com/docs/images/livesync-start-stop.png) -And that is it, you are using Livesync to synchronize all the data, or specific tables, from a PostgreSQL database -instance to your $SERVICE_LONG in real-time. +And that is it: you are using $LIVESYNC to synchronize all the data, or specific tables, from a PostgreSQL database +instance to your $SERVICE_LONG in real time. [install-psql]: /integrations/:currentVersion:/psql/ [portal-ops-mode]: https://console.cloud.timescale.com/dashboard/services diff --git a/_partials/_livesync-limitations.md b/_partials/_livesync-limitations.md index 69949ec0c0..dfbeaed8e7 100644 --- a/_partials/_livesync-limitations.md +++ b/_partials/_livesync-limitations.md @@ -4,7 +4,7 @@ the same changes to the source PostgreSQL instance. * Ensure that the source $PG instance and the target $SERVICE_LONG have the same extensions installed. - LiveSync does not create extensions on the target. If the table uses column types from an extension, + $LIVESYNC_CAP does not create extensions on the target. If the table uses column types from an extension, first create the extension on the target $SERVICE_LONG before syncing the table. * There is WAL volume growth on the source PostgreSQL instance during large table copy. * This works only for PostgreSQL databases as the source. TimescaleDB is not yet supported. diff --git a/_partials/_livesync-terminal.md b/_partials/_livesync-terminal.md index d133cd86f1..3596a824e6 100644 --- a/_partials/_livesync-terminal.md +++ b/_partials/_livesync-terminal.md @@ -10,12 +10,12 @@ import TuneSourceDatabaseAWSRDS from "versionContent/_partials/_migrate_live_tun - Ensure that the source $PG instance and the target $SERVICE_LONG have the same extensions installed. - LiveSync does not create extensions on the target. 
If the table uses column types from an extension, + $LIVESYNC_CAP does not create extensions on the target. If the table uses column types from an extension, first create the extension on the target $SERVICE_LONG before syncing the table. - [Install Docker][install-docker] on your sync machine. - You need a minimum of a 4 CPU/16GB EC2 instance to run Livesync. + You need a minimum of a 4 CPU/16GB EC2 instance to run $LIVESYNC. - Install the [PostgreSQL client tools][install-psql] on your sync machine. @@ -26,7 +26,7 @@ import TuneSourceDatabaseAWSRDS from "versionContent/_partials/_migrate_live_tun -- The Schema is not migrated by Livesync, you use pg_dump/restore to migrate schema +- The schema is not migrated by $LIVESYNC; use `pg_dump`/`pg_restore` to migrate it. ## Set your connection strings @@ -61,7 +61,7 @@ The `` in the `SOURCE` connection must have the replication role granted i ## Migrate the table schema to the $SERVICE_LONG -Use pg_dump to: +Use `pg_dump` to: @@ -129,14 +129,14 @@ events data, and tables that are already partitioned using PostgreSQL declarativ ## Synchronize data to your $SERVICE_LONG -You use the Livesync docker image to synchronize changes in real-time from a PostgreSQL database +You use the $LIVESYNC Docker image to synchronize changes in real time from a PostgreSQL database instance to a $SERVICE_LONG: -1. **Start Livesync** +1. **Start $LIVESYNC** - As you run Livesync continuously, best practice is to run it as a background process. + As you run $LIVESYNC continuously, best practice is to run it as a background process. ```shell docker run -d --rm --name livesync timescale/live-sync:v0.1.11 run --publication analytics --subscription livesync --source $SOURCE --target $TARGET ``` 1. 
**Trace progress** - Once Livesync is running as a docker daemon, you can also capture the logs: + Once $LIVESYNC is running as a detached Docker container, you can also capture the logs: ```shell docker logs -f livesync ``` @@ -168,7 +168,7 @@ instance to a $SERVICE_LONG: - r: table is ready, syncing live changes -1. **Stop Livesync** +1. **Stop $LIVESYNC** ```shell docker stop livesync @@ -191,9 +191,9 @@ instance to a $SERVICE_LONG: ## Specify the tables to synchronize -After the Livesync docker is up and running, you [`CREATE PUBLICATION`][create-publication] on the SOURCE database to +After the $LIVESYNC Docker container is up and running, you [`CREATE PUBLICATION`][create-publication] on the SOURCE database to specify the list of tables which you intend to synchronize. Once you create a PUBLICATION, it is -automatically picked by Livesync, which starts synching the tables expressed as part of it. +automatically picked up by $LIVESYNC, which starts syncing the tables it includes. For example: @@ -223,7 +223,7 @@ For example: ALTER PUBLICATION analytics SET(publish_via_partition_root=true); ``` -1. **Stop synching a table in the `PUBLICATION` with a call to `DROP TABLE`** +1. **Stop syncing a table in the `PUBLICATION` with a call to `DROP TABLE`** ```sql ALTER PUBLICATION analytics DROP TABLE tags; diff --git a/_partials/_migrate_import_prerequisites.md b/_partials/_migrate_import_prerequisites.md index 31f1a00185..e08c5daecc 100644 --- a/_partials/_migrate_import_prerequisites.md +++ b/_partials/_migrate_import_prerequisites.md @@ -10,7 +10,7 @@ Before you migrate your data: Each $SERVICE_LONG has a single database that supports the [most popular extensions][all-available-extensions]. $SERVICE_LONGs do not support tablespaces, and there is no superuser associated with a $SERVICE_SHORT. - Best practice is to create a $SERVICE_LONGs with at least 8 CPUs for a smoother experience. 
A higher-spec instance + Best practice is to create a $SERVICE_LONG with at least 8 CPUs for a smoother experience. A higher-spec instance can significantly reduce the overall migration window. - To ensure that maintenance does not run during the process, [adjust the maintenance window][adjust-maintenance-window]. diff --git a/_partials/_migrate_live_setup_enable_replication.md b/_partials/_migrate_live_setup_enable_replication.md index 73e6b0b717..b402257d17 100644 --- a/_partials/_migrate_live_setup_enable_replication.md +++ b/_partials/_migrate_live_setup_enable_replication.md @@ -1,9 +1,9 @@ Replica identity assists data replication by identifying the rows being modified. Your options are that each table and hypertable in the source database should either have: -- **A primary key**: Data replication defaults to the primary key of the table being replicated. +- **A primary key**: data replication defaults to the primary key of the table being replicated. Nothing to do. - **A viable unique index**: each table has a unique, non-partial, non-deferrable index that includes only columns - marked as `NOT NULL`. If a UNIQUE index does not exists, create one to assist the migration. You can delete if after + marked as `NOT NULL`. If a UNIQUE index does not exist, create one to assist the migration. You can delete it after migration. For each table, set `REPLICA IDENTITY` to the viable unique index: @@ -19,4 +19,4 @@ ``` For each `UPDATE` or `DELETE` statement, PostgreSQL reads the whole table to find all matching rows. This results in significantly slower replication. If you are expecting a large number of `UPDATE` or `DELETE` operations on the table, - best practice is to not use `FULL` + best practice is to not use `FULL`. 
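To make the viable-unique-index option above concrete, here is a minimal SQL sketch. The `metrics` table, its columns, and the index name are hypothetical examples, not names from the migrated schema:

```sql
-- Hypothetical table: device_id and "time" must be NOT NULL for the index to qualify
-- as a replica identity (unique, non-partial, non-deferrable).
CREATE UNIQUE INDEX metrics_device_time_idx ON metrics (device_id, "time");

-- Point logical replication at the index so modified rows can be identified.
ALTER TABLE metrics REPLICA IDENTITY USING INDEX metrics_device_time_idx;
```

If the index was created only to assist the migration, it can be dropped afterwards.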
diff --git a/_partials/_migrate_live_tune_source_database_awsrds.md b/_partials/_migrate_live_tune_source_database_awsrds.md index 118885fe04..cd54ee0b88 100644 --- a/_partials/_migrate_live_tune_source_database_awsrds.md +++ b/_partials/_migrate_live_tune_source_database_awsrds.md @@ -7,7 +7,7 @@ Updating parameters on a PostgreSQL instance will cause an outage. Choose a time 1. In [https://console.aws.amazon.com/rds/home#databases:][databases], select the RDS instance to migrate. - 1. Click `Configuration`, scroll down and note the `DB instance parameter group`, then click `Parameter Groups` + 1. Click `Configuration`, scroll down and note the `DB instance parameter group`, then click `Parameter groups` March 21, 2025 -You can now set up an active data ingestion pipeline with Livesync for PostgreSQL in Timescale Console. This tool enables you to replicate your source database tables into Timescale's hypertables indefinitely. Yes, you heard that right—keep Livesync running for as long as you need, ensuring that your existing source PostgreSQL tables stay in sync with Timescale Cloud. Read more about setting up and using [Livesync for PostgreSQL](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/). +You can now set up an active data ingestion pipeline with livesync for PostgreSQL in Timescale Console. This tool enables you to replicate your source database tables into Timescale's hypertables indefinitely. Yes, you heard that right—keep livesync running for as long as you need, ensuring that your existing source PostgreSQL tables stay in sync with Timescale Cloud. Read more about setting up and using [Livesync for PostgreSQL](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/). 
![Livesync in Timescale Console](https://assets.timescale.com/docs/images/timescale-cloud-livesync-tile.png) @@ -350,7 +350,7 @@ We have built a new solution that helps you continuously replicate all or some o [Livesync](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/) allows you to keep a current Postgres instance such as RDS as your primary database, and easily offload your real-time analytical queries to Timescale Cloud to boost their performance. If you have any questions or feedback, talk to us in [#livesync in Timescale Community](https://app.slack.com/client/T4GT3N2JK/C086NU9EZ88). -This is just the beginning—you'll see more from Livesync in 2025! +This is just the beginning—you'll see more from livesync in 2025! ## In-Console import from S3, I/O Boost, and Jobs Explorer diff --git a/migrate/index.md b/migrate/index.md index dfe76d523e..e8ab82d04d 100644 --- a/migrate/index.md +++ b/migrate/index.md @@ -36,14 +36,14 @@ see [Ingest data from other sources][data-ingest]. ## Livesync your data -You use $LIVESYNC to synchronize all or some of your data to your $SERVICE_LONG in real-time. You run $LIVESYNC +You use $LIVESYNC to synchronize all or some of your data to your $SERVICE_LONG in real time. You run $LIVESYNC continuously, using your data as a primary database and your $SERVICE_LONG as a logical replica. This enables you to leverage $CLOUD_LONG’s real-time analytics capabilities on your replica data. 
-| $LIVESYNC options | Downtime requirements | -|----------------------------------------|-----------------------| -| [$LIVESYNC for $PG][livesync-postgres] | None | -| [$LIVESYNC for S3][livesync-s3] | None | +| $LIVESYNC_CAP options | Downtime requirements | +|--------------------------------------------|-----------------------| +| [$LIVESYNC_CAP for $PG][livesync-postgres] | None | +| [$LIVESYNC_CAP for S3][livesync-s3] | None | diff --git a/migrate/livesync-for-postgresql.md index 4135b81ab0..e007617cd9 100644 --- a/migrate/livesync-for-postgresql.md +++ b/migrate/livesync-for-postgresql.md @@ -15,7 +15,7 @@ import EarlyAccessNoRelease from "versionContent/_partials/_early_access.mdx"; # Livesync from PostgreSQL to Timescale Cloud You use $LIVESYNC to synchronize all the data, or specific tables, from a PostgreSQL database instance to your -$SERVICE_LONG in real-time. You run $LIVESYNC continuously, turning PostgreSQL into a primary database with your +$SERVICE_LONG in real time. You run $LIVESYNC continuously, turning PostgreSQL into a primary database with your $SERVICE_LONG as a logical replica. This enables you to leverage $CLOUD_LONG’s real-time analytics capabilities on your replica data. @@ -25,7 +25,7 @@ $LIVESYNC_CAP leverages the well-established PostgreSQL logical replication prot $LIVESYNC ensures compatibility, familiarity, and a broader knowledge base, making it easier for you to adopt $LIVESYNC and integrate your data. -You use $LIVESYNC for data synchronization, rather than migration. Livesync can: +You use $LIVESYNC for data synchronization, rather than migration: * Copy existing data from a PostgreSQL instance to a $SERVICE_LONG: - Copy data at up to 150 GB/hr. @@ -38,14 +38,16 @@ You use $LIVESYNC for data synchronization, rather than migration. Livesync can: $LIVESYNC_CAP disables foreign key validation during the sync. 
For example, if a `metrics` table refers to the `id` column on the `tags` table, you can still sync only the `metrics` table without worrying about their foreign key relationships. - - Track progress. PostgreSQL exposes `COPY` progress under `pg_stat_progress_copy`. + - Track progress. + + PostgreSQL exposes `COPY` progress under `pg_stat_progress_copy`. * Synchronize real-time changes from a PostgreSQL instance to a $SERVICE_LONG. * Add and remove tables on demand using the [PostgreSQL PUBLICATION interface][postgres-publication-interface]. * Enable features such as [hypertables][about-hypertables], [columnstore][compression], and [continuous aggregates][caggs] on your logical replica. -: livesync is not supported for production use. If you have an questions or feedback, talk to us in #livesync in Timescale Community. +: livesync is not supported for production use. If you have any questions or feedback, talk to us in #livesync in Timescale Community. diff --git a/migrate/livesync-for-s3.md b/migrate/livesync-for-s3.md index feade22d0e..bd2af7d868 100644 --- a/migrate/livesync-for-s3.md +++ b/migrate/livesync-for-s3.md @@ -9,7 +9,7 @@ tags: [recovery, logical backup, replication] import PrereqCloud from "versionContent/_partials/_prereqs-cloud-only.mdx"; import EarlyAccessNoRelease from "versionContent/_partials/_early_access.mdx"; -# Livesync from S3 to Timescale Cloud +# $LIVESYNC_CAP from S3 to Timescale Cloud You use $LIVESYNC to synchronize CSV and Parquet files from an S3 bucket to your $SERVICE_LONG in real time. Livesync runs continuously, enabling you to leverage $CLOUD_LONG as your analytics database with data constantly synced from S3. This lets you take full advantage of $CLOUD_LONG's real-time analytics capabilities without having to develop or manage custom ETL solutions between S3 and $CLOUD_LONG. @@ -24,16 +24,16 @@ You can use $LIVESYNC to synchronize your existing and new data. 
Here's what $LI - For large backlogs, $LIVESYNC checks every minute until caught up. * Sync data from multiple file formats: - - CSV: files are checked for compression in `.gz` and `.zip` format, then processed using [timescaledb-parallel-copy][parallel-copy] - - Parquet: files are converted to CSV, then processed using [timescaledb-parallel-copy][parallel-copy] + - CSV: files are checked for compression in `.gz` and `.zip` format, then processed using [timescaledb-parallel-copy][parallel-copy]. + - Parquet: files are converted to CSV, then processed using [timescaledb-parallel-copy][parallel-copy]. -* Livesync offers an option to enable an [hypertable][about-hypertables] during the file-to-table schema mapping setup. You can enable [columnstore][compression] and [continuous aggregates][caggs] through the SQL editor once $LIVESYNC has started. +* $LIVESYNC_CAP offers an option to enable a [hypertable][about-hypertables] during the file-to-table schema mapping setup. You can enable [columnstore][compression] and [continuous aggregates][caggs] through the SQL editor once $LIVESYNC has started. -* Livesync offers a default 1-minute polling interval. This means that $CLOUD_LONG checks the S3 source every minute for new data. You can customize this interval by setting up a cron expression. +* $LIVESYNC_CAP offers a default 1-minute polling interval. This means that $CLOUD_LONG checks the S3 source every minute for new data. You can customize this interval by setting up a cron expression. -Livesync for S3 continuously imports data from an Amazon S3 bucket into your database. It monitors your S3 bucket for new files matching a specified pattern and automatically imports them into your designated database table. +$LIVESYNC_CAP for S3 continuously imports data from an Amazon S3 bucket into your database. It monitors your S3 bucket for new files matching a specified pattern and automatically imports them into your designated database table. 
-**Note**: Livesync for S3 currently only syncs existing and new files—it does not support updating or deleting records based on updates and deletes from S3 to tables in a $SERVICE_LONG. +**Note**: $LIVESYNC_CAP for S3 currently only syncs existing and new files—it does not support updating or deleting records based on updates and deletes from S3 to tables in a $SERVICE_LONG. : livesync is not supported for production use. If you have any questions or feedback, talk to us in #livesync in Timescale Community. @@ -41,9 +41,10 @@ Livesync for S3 continuously imports data from an Amazon S3 bucket into your dat -- Access to a standard Amazon S3 bucket containing your data files. +- Ensure access to a standard Amazon S3 bucket containing your data files. + Directory buckets are not supported. -- Access credentials for the S3 bucket. +- Configure access credentials for the S3 bucket. - The following credentials are supported: - [IAM Role][credentials-iam]. @@ -64,20 +65,22 @@ Livesync for S3 continuously imports data from an Amazon S3 bucket into your dat ## Limitations - **CSV**: - - Maximum file size: 1GB - To increase these limits, contact sales@timescale.com - - Maximum row size: 2MB + - Maximum file size: 1 GB + + To increase this limit, contact sales@timescale.com + - Maximum row size: 2 MB - Supported compressed formats: - `.gz` - `.zip` - Advanced settings: - Delimiter: the default character is `,`, you can choose a different delimiter - - Skip Header: skip the first row if your file has headers + - Skip header: skip the first row if your file has headers - **Parquet**: - - Maximum file size: 1GB - - Maximum row group uncompressed size: 200MB - - Maximum row size: 2MB + - Maximum file size: 1 GB + - Maximum row group uncompressed size: 200 MB + - Maximum row size: 2 MB - **Sync iteration**: + To prevent system overload, $LIVESYNC tracks up to 100 files for each sync iteration. Additional checks only fill empty queue slots. 
@@ -90,9 +93,9 @@ To sync data from your S3 bucket to your $SERVICE_LONG using $CONSOLE: 1. **Connect to your $SERVICE_LONG** In [$CONSOLE][portal-ops-mode], select the service to sync live data to. -1. **Start livesync** - 1. Click `Actions` > `livesync for S3`. - 2. Click `New Livesync for S3` +1. **Start $LIVESYNC** + 1. Click `Actions` > `Livesync for S3`. + 2. Click `New livesync for S3`. 1. **Connect the source S3 bucket to the target $SERVICE_SHORT** @@ -110,7 +113,7 @@ To sync data from your S3 bucket to your $SERVICE_LONG using $CONSOLE: - `/**`: match all recursively. - `/**/*.csv`: match a specific file type. - $LIVESYNC uses prefix filters where possible, place patterns carefully at the end of your glob expression. + $LIVESYNC_CAP uses prefix filters where possible; place patterns carefully at the end of your glob expression. AWS S3 doesn't support complex filtering. If your expression filters too many files, the list operation may time out. 1. Click the search icon; you see files to sync. Click `Continue`. @@ -123,17 +126,19 @@ To sync data from your S3 bucket to your $SERVICE_LONG using $CONSOLE: ![Livesync choose table](https://assets.timescale.com/docs/images/livesync-s3-create-tables.png) 1. Choose the `Data type` for each column, then click `Continue`. - 1. Choose the interval. This can be a minute, an hour or use a [cron expression][cron-expression]. + 1. Choose the interval. This can be a minute, an hour, or a [cron expression][cron-expression]. 1. Repeat this step for each table you want to sync. - 1. Press `Start Livesync`. + 1. Click `Start Livesync`. $CONSOLE starts $LIVESYNC between the source database and the target $SERVICE_SHORT and displays the progress. 1. **Monitor synchronization** - 1. To view the progress of the livesync, click the name of the $LIVESYNC process: + 1. To view the progress of the $LIVESYNC, click the name of the $LIVESYNC process. + You see the status of the file being synced. Only one file runs at a time. 
![livesync view status](https://assets.timescale.com/docs/images/livesync-s3-view-status.png) - 1. To pause and restart livesync, click the buttons on the right of the $LIVESYNC process and select an action: + 1. To pause and restart $LIVESYNC, click the buttons on the right of the $LIVESYNC process and select an action. + During pauses, you can edit the configuration before resuming. ![livesync start stop](https://assets.timescale.com/docs/images/livesync-s3-start-stop.png)
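As a sketch of the follow-up step the S3 guide mentions (enabling columnstore and continuous aggregates through the SQL editor once $LIVESYNC has started), here is a minimal continuous aggregate over a hypothetical synced hypertable. The `events` table and its `time` column are assumptions, not names $LIVESYNC creates:

```sql
-- Hypothetical hypertable "events" populated by livesync from S3.
-- Rolls synced rows up into hourly buckets for faster analytics queries.
CREATE MATERIALIZED VIEW events_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', "time") AS bucket,
       count(*) AS events_count
FROM events
GROUP BY bucket;
```

Run this in the service's SQL editor after the first sync completes, so the view has data to materialize.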