import cloud_service_action_menu from '@site/static/images/_snippets/cloud-service-actions-menu.png';
- Select your service, followed by `Data souces` -> `Predefined sample data`.
+ Select your service, followed by `Data sources` -> `Predefined sample data`.
<Image size="md" img={cloud_service_action_menu} alt="ClickHouse Cloud service Actions menu showing Data sources and Predefined sample data options" border />
@@ -132,7 +132,7 @@ For now, we can run the embedding of a random LEGO set picture as `target`.
10 rows in set. Elapsed: 4.605 sec. Processed 100.38 million rows, 309.98 GB (21.80 million rows/s., 67.31 GB/s.)
```

- ## Run an approximate vector similarity search with a vector simialrity index {#run-an-approximate-vector-similarity-search-with-a-vector-similarity-index}
+ ## Run an approximate vector similarity search with a vector similarity index {#run-an-approximate-vector-similarity-search-with-a-vector-similarity-index}

Let's now define two vector similarity indexes on the table.
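The hunk above only fixes the heading's spelling, so the two index definitions the sentence introduces are not shown. As a rough sketch only — the table and column names below are hypothetical, and the exact `vector_similarity` arguments and required settings vary by ClickHouse release — defining two such indexes could look like:

```sql
-- May be required on releases where the index type is still experimental.
SET allow_experimental_vector_similarity_index = 1;

-- Hypothetical table/column names; the guide's real schema is not visible in this hunk.
-- vector_similarity(method, distance_function, dimensions) builds an HNSW index.
ALTER TABLE lego_sets
    ADD INDEX idx_l2 image_embedding TYPE vector_similarity('hnsw', 'L2Distance', 768);

ALTER TABLE lego_sets
    ADD INDEX idx_cosine image_embedding TYPE vector_similarity('hnsw', 'cosineDistance', 768);
```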
docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md (+1 -1)
@@ -42,7 +42,7 @@ If ClickPipes tries to resume replication and the required binlog files have bee
By default, Aurora MySQL purges the binary log as soon as possible (i.e., _lazy purging_). We recommend increasing the binlog retention interval to at least **72 hours** to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention ([`binlog retention hours`](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)), use the [`mysql.rds_set_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration) procedure:
- [//]: # "NOTE Most CDC providers recommend the maximum retention period for Aurora RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a mininum of 3 days/72 hours."
+ [//]: # "NOTE Most CDC providers recommend the maximum retention period for Aurora RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a minimum of 3 days/72 hours."
docs/integrations/data-ingestion/clickpipes/mysql/source/rds.md (+1 -1)
@@ -42,7 +42,7 @@ If ClickPipes tries to resume replication and the required binlog files have bee
By default, Amazon RDS purges the binary log as soon as possible (i.e., _lazy purging_). We recommend increasing the binlog retention interval to at least **72 hours** to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention ([`binlog retention hours`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours)), use the [`mysql.rds_set_configuration`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html#mysql_rds_set_configuration) procedure:
- [//]: # "NOTE Most CDC providers recommend the maximum retention period for RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a mininum of 3 days/72 hours."
+ [//]: # "NOTE Most CDC providers recommend the maximum retention period for RDS (7 days/168 hours). Since this has an impact on disk usage, we conservatively recommend a minimum of 3 days/72 hours."
docs/integrations/data-ingestion/s3/index.md (+1 -1)
@@ -1027,7 +1027,7 @@ ClickHouse Keeper is responsible for coordinating the replication of data across
See the [network ports](../../../guides/sre/network-ports.md) list when you configure the security settings in AWS so that your servers can communicate with each other, and you can communicate with them.
- All three servers must listen for network connections so that they can communicate between the servers and with S3. By default, ClickHouse listens ony on the loopback address, so this must be changed. This is configured in `/etc/clickhouse-server/config.d/`. Here is a sample that configures ClickHouse and ClickHouse Keeper to listen on all IP v4 interfaces. see the documentation or the default configuration file `/etc/clickhouse/config.xml` for more information.
+ All three servers must listen for network connections so that they can communicate between the servers and with S3. By default, ClickHouse listens only on the loopback address, so this must be changed. This is configured in `/etc/clickhouse-server/config.d/`. Here is a sample that configures ClickHouse and ClickHouse Keeper to listen on all IP v4 interfaces. see the documentation or the default configuration file `/etc/clickhouse/config.xml` for more information.
docs/integrations/index.mdx (+2 -2)
@@ -224,7 +224,7 @@ We are actively compiling this list of ClickHouse integrations below, so it's no
|Google Cloud Storage|<Gcssvg alt="GCS Logo" style={{width: '3rem', 'height': '3rem'}}/>|Data ingestion|Import from, export to, and transform GCS data in flight with ClickHouse built-in `S3` functions.|[Documentation](/integrations/data-ingestion/s3/index.md)|
|Golang|<Golangsvg alt="Golang logo" style={{width: '3rem' }}/>|Language client|The Go client uses the native interface for a performant, low-overhead means of connecting to ClickHouse.|[Documentation](/integrations/language-clients/go/index.md)|
|HDFS|<Hdfssvg alt="HDFS logo" style={{width: '3rem'}}/>|Data ingestion|Provides integration with the [Apache Hadoop](https://en.wikipedia.org/wiki/Apache_Hadoop) ecosystem by allowing to manage data on [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) via ClickHouse.|[Documentation](/engines/table-engines/integrations/hdfs)|
- |Hive|<Hivesvg alt="Hive logo" style={{width: '3rem'}}/>|Data ingestionn|The Hive engine allows you to perform `SELECT` quries on HDFS Hive table.|[Documentation](/engines/table-engines/integrations/hive)|
+ |Hive|<Hivesvg alt="Hive logo" style={{width: '3rem'}}/>|Data ingestionn|The Hive engine allows you to perform `SELECT` queries on HDFS Hive table.|[Documentation](/engines/table-engines/integrations/hive)|
|Hudi|<Image img={hudi} size="logo" alt="Apache Hudi logo"/>|Data ingestion| provides a read-only integration with existing Apache [Hudi](https://hudi.apache.org/) tables in Amazon S3.|[Documentation](/engines/table-engines/integrations/hudi)|
|Iceberg|<Image img={iceberg} size="logo" alt="Apache Iceberg logo"/>|Data ingestion|Provides a read-only integration with existing Apache [Iceberg](https://iceberg.apache.org/) tables in Amazon S3.|[Documentation](/engines/table-engines/integrations/iceberg)|
|Java, JDBC|<Javasvg alt="Java logo" style={{width: '3rem'}}/>|Language client|The Java client and JDBC driver.|[Documentation](/integrations/language-clients/java/index.md)|
@@ -327,7 +327,7 @@ We are actively compiling this list of ClickHouse integrations below, so it's no
|SiSense|<Image img={sisense_logo} size="logo" alt="SiSense logo"/>|Data visualization|Embed analytics into any application or workflow|[Website](https://www.sisense.com/data-connectors/)|
|Snappy Flow|<Image img={snappy_flow_logo} size="logo" alt="Snappy Flow logo"/>|Data management|Collects ClickHouse database metrics via plugin.|[Documentation](https://docs.snappyflow.io/docs/Integrations/clickhouse/instance)|
- |Soda|<Image img={soda_logo} size="logo" alt="Soda logo"/>|Data quality|Soda integration makes it easy for organziations to detect, resolve, and prevent data quality issues by running data quality checks on data before it is loaded into the database.|[Website](https://www.soda.io/integrations/clickhouse)|
+ |Soda|<Image img={soda_logo} size="logo" alt="Soda logo"/>|Data quality|Soda integration makes it easy for organizations to detect, resolve, and prevent data quality issues by running data quality checks on data before it is loaded into the database.|[Website](https://www.soda.io/integrations/clickhouse)|
|Splunk|<Image img={splunk_logo} size="logo" alt="Splunk logo"/>|Data integration|Splunk modular input to import to Splunk the ClickHouse Cloud Audit logs.|[Website](https://splunkbase.splunk.com/app/7709),<br/>[Documentation](/integrations/tools/data-integration/splunk/index.md)|
|StreamingFast|<Image img={streamingfast_logo} size="logo" alt="StreamingFast logo"/>|Data ingestion| Blockchain-agnostic, parallelized and streaming-first data engine. |[Website](https://www.streamingfast.io/)|
|Streamkap|<Image img={streamkap_logo} size="logo" alt="Streamkap logo"/>|Data ingestion|Setup real-time CDC (Change Data Capture) streaming to ClickHouse with high throughput in minutes.|[Documentation](https://docs.streamkap.com/docs/clickhouse)|