diff --git a/.cargo/audit.toml b/.cargo/audit.toml index 7d6b59cb81d3f..15bd81b363b64 100644 --- a/.cargo/audit.toml +++ b/.cargo/audit.toml @@ -5,7 +5,7 @@ ignore = [ # We depends on `chrono`, but not `time`, and `chrono` is not affected by `RUSTSEC-2020-0071` # (see https://github.com/time-rs/time/issues/293#issuecomment-946382614). - # + # # `chrono` also suffers from a similar vulnerability ([`RUSTSEC-2020-0159`](https://rustsec.org/advisories/RUSTSEC-2020-0159), # but it's already patched in `0.4.20` by rewriting vulnerable C function in Rust). "RUSTSEC-2020-0071", diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 7177c380db498..a4ccb66e8a0fd 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -17,12 +17,12 @@ body: description: Steps to reproduce the behavior, including the SQLs you run and/or the operations you have done to trigger the bug. placeholder: | First create the tables/sources and materialized views with - + ```sql CREATE TABLE ... CREATE MATERIALIZED VIEW ... ``` - + Then the bug is triggered after ... - type: textarea attributes: @@ -30,7 +30,7 @@ body: description: A clear and concise description of what you expected to happen. placeholder: | I expected to see this happen: *explanation* - + Instead, this happened: *explanation* - type: textarea attributes: @@ -58,4 +58,4 @@ body: attributes: label: Additional context description: Add any other context about the problem here. e.g., the full log files. - + diff --git a/.github/ISSUE_TEMPLATE/design-rfc.yml b/.github/ISSUE_TEMPLATE/design-rfc.yml index bae7dfaf0d08d..4f35a84f53def 100644 --- a/.github/ISSUE_TEMPLATE/design-rfc.yml +++ b/.github/ISSUE_TEMPLATE/design-rfc.yml @@ -28,5 +28,5 @@ body: label: Q&A description: Here's where the doc readers can leave the questions and suggestions placeholder: | - * Why do you need ... + * Why do you need ... * What will happen if ... diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index db71f14296967..1c4cf6490a35b 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -31,7 +31,7 @@ Please explain **IN DETAIL** what the changes are in this PR and why they are ne - [ ] My PR contains user-facing changes. - ## Overview -As a cloud-neutral database, RisingWave supports running on different (object) storage backends. Currently, these storage products include +As a cloud-neutral database, RisingWave supports running on different (object) storage backends. Currently, these storage products include - [S3](https://aws.amazon.com/s3/) - [GCS](https://cloud.google.com/storage) - [COS](https://cloud.tencent.com/product/cos) @@ -22,7 +22,7 @@ If an object store declares that it is s3-compatible, it means that it can be di Currently for COS and Lyvecloud Storage, we use s3 compatible mode. To use these two object storage products, you need to overwrite s3 environmrnt with the corresponding `access_key`, `secret_key`, `region` and `bueket_name`, and config `endpoint` as well. ### OpenDAL object store -For those (object) storage products that are not compatible with s3 (or compatible but some interfaces are unstable), we use [OpenDAL](https://github.com/apache/incubator-opendal) to access them. OpenDAL is the Open Data Access Layer to freely access data, which supports several different storage backends. 
We implemented a [`OpenDALObjectStore`](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/src/object_store/src/object/opendal_engine/opendal_object_store.rs#L61) to support the interface for accessing object store in RisingWave. +For those (object) storage products that are not compatible with s3 (or compatible but some interfaces are unstable), we use [OpenDAL](https://github.com/apache/incubator-opendal) to access them. OpenDAL is the Open Data Access Layer to freely access data, which supports several different storage backends. We implemented a [`OpenDALObjectStore`](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/src/object_store/src/object/opendal_engine/opendal_object_store.rs#L61) to support the interface for accessing object store in RisingWave. All of these object stores are supported in risedev, you can use the risedev command to start RisingWave on these storage backends. ## How to build RisingWave with multiple object store @@ -32,7 +32,7 @@ To use COS or Lyvecloud Storage, you need to overwrite the aws default `access_k export AWS_REGION=your_region export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_key -export RW_S3_ENDPOINT=your_endpoint +export RW_S3_ENDPOINT=your_endpoint ``` then in `risedev.yml`, set the bucket name, starting RisingWave with ridedev. Then you can successfully run RisingWave on these two storage backends. @@ -43,7 +43,7 @@ To use GCS, you need to [enable OpenDAL](https://github.com/risingwavelabs/risin Once these configurations are set, run `./risedev d gcs` and then you can run RisingWave on GCS. ### OSS -To use OSS, you need to [enable OpenDAL](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/risedev.yml#L167-L170) in `risedev.yml`, set `engine = oss`, `bucket_name` and `root` as well. +To use OSS, you need to [enable OpenDAL](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/risedev.yml#L167-L170) in `risedev.yml`, set `engine = oss`, `bucket_name` and `root` as well. For authentication, set the identity information in the environment variable: ```shell @@ -69,10 +69,10 @@ export AZBLOB_ACCOUNT_KEY="your_account_key" Once these configurations are set, run `./risedev d azblob` and then you can run RisingWave on Azure Blob Storage. ### HDFS -HDFS requairs complete hadoop environment and java environment, which are very heavy. Thus, RisingWave does not open the hdfs feature by default. To compile RisingWave with hdfs backend, [turn on this feature](https://github.com/risingwavelabs/risingwave/blob/5aca4d9ac382259db42aa26c814f19640fbdf83a/src/object_store/Cargo.toml#L46-L47) first, and enable hdfs for risedev tools. +HDFS requairs complete hadoop environment and java environment, which are very heavy. Thus, RisingWave does not open the hdfs feature by default. To compile RisingWave with hdfs backend, [turn on this feature](https://github.com/risingwavelabs/risingwave/blob/5aca4d9ac382259db42aa26c814f19640fbdf83a/src/object_store/Cargo.toml#L46-L47) first, and enable hdfs for risedev tools. Run `./risedev configure`, and enable `[Component] Hummock: Hdfs Backend`. -After that, you need to [enable OpenDAL](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/risedev.yml#L123-L126) in `risedev.yml`, set `engine = hdfs`, `namenode` and `root` as well. 
+After that, you need to [enable OpenDAL](https://github.com/risingwavelabs/risingwave/blob/1fd0394980fd713459df8076283bb1a1f46fef9a/risedev.yml#L123-L126) in `risedev.yml`, set `engine = hdfs`, `namenode` and `root` as well. You can also use WebHDFS as a lightweight alternative to HDFS. Hdfs is powered by HDFS’s native java client. Users need to setup the hdfs services correctly. But webhdfs can access from HTTP API and no extra setup needed. The way to start WebHDFS is basically the same as hdfs, but its default name_node is `127.0.0.1:9870`. diff --git a/docs/relational_table/relational-table-schema.md b/docs/relational_table/relational-table-schema.md index 805f84c7a02a2..64cd615feda25 100644 --- a/docs/relational_table/relational-table-schema.md +++ b/docs/relational_table/relational-table-schema.md @@ -12,7 +12,7 @@ In this doc, we will take HashAgg with extreme state (`max`, `min`) or value sta ## Value State (Sum, Count) Query example: ```sql -select sum(v2), count(v3) from t group by v1 +select sum(v2), count(v3) from t group by v1 ``` This query will need to initiate 2 Relational Tables. The schema is `table_id/group_key`. @@ -20,12 +20,12 @@ This query will need to initiate 2 Relational Tables. The schema is `table_id/gr ## Extreme State (Max, Min) Query example: ```sql -select max(v2), min(v3) from t group by v1 +select max(v2), min(v3) from t group by v1 ``` -This query will need to initiate 2 Relational Tables. If the upstream is not append-only, the schema becomes `table_id/group_key/sort_key/upstream_pk`. +This query will need to initiate 2 Relational Tables. If the upstream is not append-only, the schema becomes `table_id/group_key/sort_key/upstream_pk`. -The order of `sort_key` depends on the agg call kind. For example, if it's `max()`, `sort_key` will order with `Ascending`. if it's `min()`, `sort_key` will order with `Descending`. +The order of `sort_key` depends on the agg call kind. For example, if it's `max()`, `sort_key` will order with `Ascending`. if it's `min()`, `sort_key` will order with `Descending`. The `upstream_pk` is also appended to ensure the uniqueness of the key. This design allows the streaming executor not to read all the data from the storage when the cache fails, but only a part of it. The streaming executor will try to write all streaming data to storage, because there may be `update` or `delete` operations in the stream, it's impossible to always guarantee correct results without storing all data. diff --git a/docs/relational_table/storing-state-using-relational-table.md b/docs/relational_table/storing-state-using-relational-table.md index 09854c80aa7ac..c5bb5a89e2bbe 100644 --- a/docs/relational_table/storing-state-using-relational-table.md +++ b/docs/relational_table/storing-state-using-relational-table.md @@ -6,13 +6,13 @@ - [Write Path](#write-path) - [Read Path](#read-path) - + ## Row-based Encoding -RisingWave adapts a relational data model. Relational tables, including tables and materialized views, consist of a list of named, strong-typed columns. All streaming executors store their data into a KV state store, which is backed by a service called Hummock. There are two choices to save a relational row into key-value pairs: cell-based format and row-based format. We choose row-based format because internal states always read and write the whole row, and don't need to partially update some fields in a row. Row-based encoding has better performance than cell-based encoding, which reduces the number of read and write kv pairs. 
+RisingWave adapts a relational data model. Relational tables, including tables and materialized views, consist of a list of named, strong-typed columns. All streaming executors store their data into a KV state store, which is backed by a service called Hummock. There are two choices to save a relational row into key-value pairs: cell-based format and row-based format. We choose row-based format because internal states always read and write the whole row, and don't need to partially update some fields in a row. Row-based encoding has better performance than cell-based encoding, which reduces the number of read and write kv pairs. We implement a relational table layer as the bridge between executors and KV state store, which provides the interfaces accessing KV data in relational semantics. As the executor state's encoding is very similar to a row-based table, each kind of state is stored as a row-based relational table first. In short, one row is stored as a key-value pair. For example, encoding of some stateful executors in row-based format is as follows: | state | key | value | @@ -30,20 +30,20 @@ For the detailed schema, please check [doc](relational-table-schema.md) In this part, we will introduce how stateful executors interact with KV state store through the relational table layer. -Relational table layer consists of State Table, Mem Table and Storage Table. The State Table and MemTable is used in streaming mode, and Storage Table is used in batch mode. +Relational table layer consists of State Table, Mem Table and Storage Table. The State Table and MemTable is used in streaming mode, and Storage Table is used in batch mode. State Table provides the table operations by these APIs: `get_row`, `scan`, `insert_row`, `delete_row` and `update_row`, which are the read and write interfaces for streaming executors. The Mem Table is an in-memory buffer for caching table operations during one epoch. The Storage Table is read only, and will output the partial columns upper level needs. ![Overview of Relational Table](../images/relational-table-layer/relational-table-01.svg) ### Write Path -To write into KV state store, executors first perform operations on State Table, and these operations will be cached in Mem Table. Once a barrier flows through one executor, executor will flush the cached operations into state store. At this moment, State Table will covert these operations into kv pairs and write to state store with specific epoch. +To write into KV state store, executors first perform operations on State Table, and these operations will be cached in Mem Table. Once a barrier flows through one executor, executor will flush the cached operations into state store. At this moment, State Table will covert these operations into kv pairs and write to state store with specific epoch. For example, an executor performs `insert(a, b, c)` and `delete(d, e, f)` through the State Table APIs, Mem Table first caches these two operations in memory. After receiving new barrier, State Table converts these two operations into KV operations by row-based format, and writes these KV operations into state store (Hummock). ![write example](../images/relational-table-layer/relational-table-03.svg) ### Read Path -In streaming mode, executors should be able to read the latest written data, which means uncommitted data is visible. The data in Mem Table (memory) is fresher than that in shared storage (state store). 
State Table provides both point-get and scan to read from state store by merging data from Mem Table and Storage Table. +In streaming mode, executors should be able to read the latest written data, which means uncommitted data is visible. The data in Mem Table (memory) is fresher than that in shared storage (state store). State Table provides both point-get and scan to read from state store by merging data from Mem Table and Storage Table. #### Get For example, let's assume that the first column is the pk of relational table, and the following operations are performed. ``` diff --git a/docs/rustdoc/rust.css b/docs/rustdoc/rust.css index c8ddda2bbc0d2..71cf5e3df0004 100644 --- a/docs/rustdoc/rust.css +++ b/docs/rustdoc/rust.css @@ -1,4 +1,4 @@ -/* This file is copied from the Rust Project, which is dual-licensed under +/* This file is copied from the Rust Project, which is dual-licensed under Apache 2.0 and MIT terms. */ /* General structure */ diff --git a/docs/streaming-overview.md b/docs/streaming-overview.md index 811a17aa5e4ad..e8a6302c3e89e 100644 --- a/docs/streaming-overview.md +++ b/docs/streaming-overview.md @@ -16,26 +16,26 @@ RisingWave provides real-time analytics to serve user’s need. This is done by defining materialized views (MV). All materialized views will be automatically refreshed according to recent updates, such that querying materialized views will reflect real-time analytical results. Such refreshing is carried out by our RisingWave streaming engine. -The core design principles of the RisingWave streaming engine are summarized as follows. +The core design principles of the RisingWave streaming engine are summarized as follows. * **Actor model based execution engine.** We create a set of actors such that each actor reacts to its own input message, including both data update and control signal. In this way we build a highly concurrent and efficient streaming engine. * **Shared storage for states.** The backbone of the state storage is based on shared cloud object storage (currently AWS S3), which gives us computational elasticity, cheap and infinite storage capacity, and simplicity during configuration change. * **Everything is a table, everything is a state.** We treat every object in our internal storage as both a logical table and an internal state. Therefore, they can be effectively managed by catalog, and be updated in a unified streaming engine with consistency guarantee. -In this document we give an overview of the RisingWave streaming engine. +In this document we give an overview of the RisingWave streaming engine. ## Architecture ![streaming-architecture](./images/streaming-overview/streaming-architecture.svg) -The overall architecture of RisingWave is depicted in the figure above. In brief, RisingWave streaming engine consists of three sets of nodes: frontend, compute nodes, and meta service. The frontend node consists of the serving layer, handling users’ SQL requests concurrently. Underlying is the processing layer. Each compute node hosts a collection of long-running actors for stream processing. All actors access a shared persistence layer of storage (currently AWS S3) as its state storage. The meta service maintains all meta-information and coordinates the whole cluster. +The overall architecture of RisingWave is depicted in the figure above. In brief, RisingWave streaming engine consists of three sets of nodes: frontend, compute nodes, and meta service. The frontend node consists of the serving layer, handling users’ SQL requests concurrently. 
Underlying is the processing layer. Each compute node hosts a collection of long-running actors for stream processing. All actors access a shared persistence layer of storage (currently AWS S3) as its state storage. The meta service maintains all meta-information and coordinates the whole cluster. When receiving a create materialized view statement at the frontend, a materialized view and the corresponding streaming pipeline are built in the following steps. 1. Building a stream plan. Here a stream plan is a logical plan which consists of logical operators encoding the dataflow. This is carried out by the streaming planner at the frontend. 2. Fragmentation. The stream fragmenter at the meta service breaks the generated logical stream plan into stream fragments, and duplicates such fragments when necessary. Here a stream fragment holds partial nodes from the stream plan, and each fragment can be parallelized by building multiple actors for data parallelization. -3. Scheduling plan fragments. The meta service distributes different fragments into different compute nodes and let all compute nodes build their local actors. -4. Initializing the job at the backend. The meta service notifies all compute nodes to start serving streaming pipelines. +3. Scheduling plan fragments. The meta service distributes different fragments into different compute nodes and let all compute nodes build their local actors. +4. Initializing the job at the backend. The meta service notifies all compute nodes to start serving streaming pipelines. ## Actors, executors, and states ![streaming-executor](./images/streaming-overview/streaming-executor-and-compute-node.svg) @@ -44,35 +44,35 @@ When receiving a create materialized view statement at the frontend, a materiali Actors are the minimal unit to be scheduled in the RisingWave streaming engine, such that there is no parallelism inside each actor. The typical structure of an actor is depicted on the right of the figure above. An actor consists of three parts. -* Merger (optional). Each merger merges the messages from different upstream actors into one channel, such that the executors can handle messages sequentially. The merger is also in charge of aligning barriers to support checkpoints (details described later). -* A chain of executors. Each executor is the basic unit of delta computation (details described later). +* Merger (optional). Each merger merges the messages from different upstream actors into one channel, such that the executors can handle messages sequentially. The merger is also in charge of aligning barriers to support checkpoints (details described later). +* A chain of executors. Each executor is the basic unit of delta computation (details described later). * Dispatcher (optional). Each dispatcher will send its received messages to different downstream actors according to certain strategies, e.g. hash shuffling or round-robin. -The execution of actors is carried out by tokio async runtime. After an actor starts running, it runs an infinite loop in which it continuously runs async functions to generate outputs, until it receives a stop message. +The execution of actors is carried out by tokio async runtime. After an actor starts running, it runs an infinite loop in which it continuously runs async functions to generate outputs, until it receives a stop message. -Messages between two local actors are transferred via channels. For two actors located on different compute nodes, messages are re-directed to an exchange service. 
The exchange service will continuously exchange messages with each other via RPC requests. +Messages between two local actors are transferred via channels. For two actors located on different compute nodes, messages are re-directed to an exchange service. The exchange service will continuously exchange messages with each other via RPC requests. ### Executors -Executors are the basic computational units in the streaming engine. Each executor responds to its received messages and computes an output message atomically, i.e the computation inside each executor will not be broken down. +Executors are the basic computational units in the streaming engine. Each executor responds to its received messages and computes an output message atomically, i.e the computation inside each executor will not be broken down. The underlying algorithmic framework of the RisingWave streaming system is the traditional change propagation framework. Given a materialized view to be maintained, we build a set of executors where each executor corresponds to a relational operator (including base table). When any of the base tables receive an update, the streaming engine computes the changes to each of the materialized views by recursively computing the update from the leaf to the root. Each node receives an update from one of its children, computes the local update, and propagates the update to its parents. By guaranteeing the correctness of every single executor, we get a composable framework for maintaining arbitrary SQL queries. ## Checkpoint, Consistency, and Fault tolerance -We use the term consistency to denote the model of the *completeness and correctness* of querying materialized views. We follow the consistency model introduced in [Materialize](https://materialize.com/blog/consistency/). More specifically, the system assures that the query result is always a consistent snapshot of a certain timestamp t before the query issued a timestamp. Also, later queries always get consistent snapshots from a later timestamp. A consistent snapshot at t requires that all messages no later than t are reflected in the snapshot exactly once while all messages after t are not reflected. +We use the term consistency to denote the model of the *completeness and correctness* of querying materialized views. We follow the consistency model introduced in [Materialize](https://materialize.com/blog/consistency/). More specifically, the system assures that the query result is always a consistent snapshot of a certain timestamp t before the query issued a timestamp. Also, later queries always get consistent snapshots from a later timestamp. A consistent snapshot at t requires that all messages no later than t are reflected in the snapshot exactly once while all messages after t are not reflected. ### Barrier based checkpoint -To guarantee consistency, RisingWave introduces a Chandy-Lamport style consistent snapshot algorithm as its checkpoint scheme. +To guarantee consistency, RisingWave introduces a Chandy-Lamport style consistent snapshot algorithm as its checkpoint scheme. This procedure guarantees that every state to be flushed into the storage is consistent (matching a certain barrier at the source). Therefore, when querying materialized views, consistency is naturally guaranteed when the batch engine reads a consistent snapshot (of views and tables) on the storage. We also call each barrier an epoch and sometimes use both terms interchangeably as data streams are cut into epochs. 
In other words, the write to the database is visible only after it has been committed to the storage via the checkpoint. -To improve the efficiency, all dirty states on the same compute node are gathered to a shared buffer, and the compute node asynchronously flushes the whole shared buffer into a single SST file in the storage, such that the checkpoint procedure shall not block stream processing. +To improve the efficiency, all dirty states on the same compute node are gathered to a shared buffer, and the compute node asynchronously flushes the whole shared buffer into a single SST file in the storage, such that the checkpoint procedure shall not block stream processing. See more detailed descriptions on [Checkpoint](./checkpoint.md). ### Fault tolerance -When the streaming engine crashes down, the system must globally rollback to a previous consistent snapshot. To achieve this, whenever the meta detects the failover of some certain compute node or any undergoing checkpoint procedure, it triggers a recovery process. After rebuilding the streaming pipeline, each executor will reset its local state from a consistent snapshot on the storage and recover its computation. +When the streaming engine crashes down, the system must globally rollback to a previous consistent snapshot. To achieve this, whenever the meta detects the failover of some certain compute node or any undergoing checkpoint procedure, it triggers a recovery process. After rebuilding the streaming pipeline, each executor will reset its local state from a consistent snapshot on the storage and recover its computation. diff --git a/e2e_test/README.md b/e2e_test/README.md index b07601775d2f3..e97b6cc73c358 100644 --- a/e2e_test/README.md +++ b/e2e_test/README.md @@ -14,6 +14,6 @@ Refer to risingwave [developer guide](../docs/developer-guide.md#end-to-end-test > **Note** > -> Usually you will just need to run either batch tests or streaming tests. Other tests may need to be run under some specific settings, e.g., ddl tests need to be run on a fresh instance, and database tests need to first create a database and then connect to that database to run tests. +> Usually you will just need to run either batch tests or streaming tests. Other tests may need to be run under some specific settings, e.g., ddl tests need to be run on a fresh instance, and database tests need to first create a database and then connect to that database to run tests. > > You will never want to run all tests using `./e2e_test/**/*.slt`. You may refer to the [ci script](../ci/scripts/run-e2e-test.sh) to see how to run all tests. 
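The e2e README hunk above refers to batch and streaming tests written as sqllogictest `.slt` files, which are exactly the files touched by the hunks that follow. As a minimal sketch of that format (the table and values here are hypothetical, not taken from the repository), a self-contained test uses `statement ok` to assert that a statement succeeds, `query` plus a type signature to declare the expected columns, and the lines after `----` as the expected output:

```
# Hypothetical minimal .slt test, illustrating the directives used by the files below.
statement ok
create table t_example (v int);

statement ok
insert into t_example values (1), (2), (3);

# Flush so the inserted rows are visible to the following query.
statement ok
flush;

# `query I` expects a single integer column; expected rows follow `----`.
query I
select sum(v) from t_example;
----
6

statement ok
drop table t_example;
```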
diff --git a/e2e_test/batch/aggregate/sum.slt.part b/e2e_test/batch/aggregate/sum.slt.part index dd5529e3f8bfa..825b684975986 100644 --- a/e2e_test/batch/aggregate/sum.slt.part +++ b/e2e_test/batch/aggregate/sum.slt.part @@ -60,13 +60,13 @@ statement ok create table t(d decimal); statement ok -insert into t values (9000000000000000000000000000), -(9000000000000000000000000000), -(9000000000000000000000000000), -(9000000000000000000000000000), -(9000000000000000000000000000), -(9000000000000000000000000000), -(9000000000000000000000000000), +insert into t values (9000000000000000000000000000), +(9000000000000000000000000000), +(9000000000000000000000000000), +(9000000000000000000000000000), +(9000000000000000000000000000), +(9000000000000000000000000000), +(9000000000000000000000000000), (9000000000000000000000000000); query T diff --git a/e2e_test/batch/basic/array.slt.part b/e2e_test/batch/basic/array.slt.part index 5a1a759ef9f69..31b4b08e0f925 100644 --- a/e2e_test/batch/basic/array.slt.part +++ b/e2e_test/batch/basic/array.slt.part @@ -158,9 +158,9 @@ integer 2 # Test multiple castings of the same input. query TTI -select +select (arr::varchar[][])[1][2] as double_varchar, - (arr::varchar[][][])[1][2][3] as triple_varchar, + (arr::varchar[][][])[1][2][3] as triple_varchar, (arr::integer[][][])[1][2][3] as triple_integer from (values ('{{{1, 2, 3}, {44, 55, 66}}}')) as t(arr); ---- diff --git a/e2e_test/batch/basic/escape_string.slt.part b/e2e_test/batch/basic/escape_string.slt.part index 7f26e3442243c..0e0495dbd366a 100644 --- a/e2e_test/batch/basic/escape_string.slt.part +++ b/e2e_test/batch/basic/escape_string.slt.part @@ -16,7 +16,7 @@ select e'\u003f' query T select e'\55p' ---- --p +-p query T select e'\pp' @@ -38,8 +38,8 @@ select e'\\' ---- \ -statement error +statement error select e'\x80' -statement error +statement error select e'\200' diff --git a/e2e_test/batch/basic/range_scan.slt.part b/e2e_test/batch/basic/range_scan.slt.part index 197cad39c4fa6..fcb2bc633c08c 100644 --- a/e2e_test/batch/basic/range_scan.slt.part +++ b/e2e_test/batch/basic/range_scan.slt.part @@ -9,31 +9,31 @@ CREATE TABLE orders ( statement ok CREATE MATERIALIZED VIEW orders_count_by_user AS - SELECT user_id, date, count(*) AS orders_count - FROM orders + SELECT user_id, date, count(*) AS orders_count + FROM orders GROUP BY user_id, date; statement ok CREATE MATERIALIZED VIEW orders_count_by_user_1 AS - SELECT user_id, date, count(*) AS orders_count - FROM orders + SELECT user_id, date, count(*) AS orders_count + FROM orders GROUP BY user_id, date ORDER BY user_id desc, date desc; statement ok CREATE MATERIALIZED VIEW orders_count_by_user_2 AS - SELECT user_id, date, count(*) AS orders_count - FROM orders + SELECT user_id, date, count(*) AS orders_count + FROM orders GROUP BY user_id, date ORDER BY user_id asc, date desc; statement ok CREATE MATERIALIZED VIEW orders_count_by_user_3 AS - SELECT user_id, date, count(*) AS orders_count - FROM orders + SELECT user_id, date, count(*) AS orders_count + FROM orders GROUP BY user_id, date ORDER BY user_id desc, date asc; statement ok -insert into orders values - (0, 42, 1111), +insert into orders values + (0, 42, 1111), (1, 42, 2222), (2, 42, 2222), (3, 43, 1111), @@ -227,9 +227,9 @@ drop materialized view orders_count_by_user_3; statement ok CREATE MATERIALIZED VIEW orders_count_by_user AS - SELECT user_id, date, count(*) AS orders_count - FROM orders - GROUP BY user_id, date + SELECT user_id, date, count(*) AS orders_count + FROM orders + GROUP BY user_id, 
date ORDER BY orders_count; query III rowsort diff --git a/e2e_test/batch/basic/subquery.slt.part b/e2e_test/batch/basic/subquery.slt.part index dc27649a90b6d..59a832c41126a 100644 --- a/e2e_test/batch/basic/subquery.slt.part +++ b/e2e_test/batch/basic/subquery.slt.part @@ -71,7 +71,7 @@ NULL 1 NULL 2 NULL NULL -query II +query II select * except (b,d) from (select t1.x as a, t1.y as b, t2.x as c, t2.y as d from t1 join t2 on t1.x = t2.x where t1.x=1); ---- 1 1 @@ -79,7 +79,7 @@ select * except (b,d) from (select t1.x as a, t1.y as b, t2.x as c, t2.y as d fr 1 1 1 1 -query II +query II select * except (t1.x, t2.y), * except (t1.y, t2.x) from t1 join t2 on t1.y = t2.y where exists(select * from t3 where t1.x = t3.x and t2.y = t3.y) order by t2.x; ---- 2 1 2 2 diff --git a/e2e_test/batch/catalog/pg_attribute.slt.part b/e2e_test/batch/catalog/pg_attribute.slt.part index b1e2b44181a10..8bd43485c3ebe 100644 --- a/e2e_test/batch/catalog/pg_attribute.slt.part +++ b/e2e_test/batch/catalog/pg_attribute.slt.part @@ -3,7 +3,7 @@ create table tmp(id1 int, id2 int); query TIII select a.attname, a.atttypid, a.attlen, a.attnum from pg_catalog.pg_class t - join pg_catalog.pg_attribute a on t.oid = a.attrelid + join pg_catalog.pg_attribute a on t.oid = a.attrelid where t.relname = 'tmp' order by a.attnum; ---- id1 23 4 1 @@ -14,7 +14,7 @@ create view view1 as select id2 from tmp; query TIII select a.attname, a.atttypid, a.attlen, a.attnum from pg_catalog.pg_class t - join pg_catalog.pg_attribute a on t.oid = a.attrelid + join pg_catalog.pg_attribute a on t.oid = a.attrelid where t.relname = 'view1'; ---- id2 23 4 1 @@ -32,7 +32,7 @@ statement ok create index tmp_idx on tmp(id2) include(id1, id3); query TT -select i.relname, a.attname, ix.indkey from pg_catalog.pg_class t +select i.relname, a.attname, ix.indkey from pg_catalog.pg_class t join pg_catalog.pg_index ix on t.oid = ix.indrelid join pg_catalog.pg_class i on i.oid = ix.indexrelid join pg_catalog.pg_attribute a on t.oid = a.attrelid and a.attnum = ANY(ix.indkey) diff --git a/e2e_test/batch/catalog/pg_index.slt.part b/e2e_test/batch/catalog/pg_index.slt.part index 271553f13a621..3ebace06f207c 100644 --- a/e2e_test/batch/catalog/pg_index.slt.part +++ b/e2e_test/batch/catalog/pg_index.slt.part @@ -5,7 +5,7 @@ statement ok create index tmp_id2_idx on tmp(id2) include(id2); query IT -select ix.indnatts, ix.indkey from pg_catalog.pg_class t +select ix.indnatts, ix.indkey from pg_catalog.pg_class t join pg_catalog.pg_index ix on t.oid = ix.indrelid join pg_catalog.pg_class i on i.oid = ix.indexrelid where t.relname = 'tmp' and i.relname = 'tmp_id2_idx'; @@ -16,7 +16,7 @@ statement ok create index tmp_id2_idx_include_id1 on tmp(id2) include(id1); query IT -select ix.indnatts, ix.indkey from pg_catalog.pg_class t +select ix.indnatts, ix.indkey from pg_catalog.pg_class t join pg_catalog.pg_index ix on t.oid = ix.indrelid join pg_catalog.pg_class i on i.oid = ix.indexrelid where t.relname = 'tmp' and i.relname = 'tmp_id2_idx_include_id1'; @@ -27,7 +27,7 @@ statement ok create index tmp_id1_id2_idx on tmp(id1, id2); query IT -select ix.indnatts, ix.indkey from pg_catalog.pg_class t +select ix.indnatts, ix.indkey from pg_catalog.pg_class t join pg_catalog.pg_index ix on t.oid = ix.indrelid join pg_catalog.pg_class i on i.oid = ix.indexrelid where t.relname = 'tmp' and i.relname = 'tmp_id1_id2_idx'; diff --git a/e2e_test/batch/duckdb/aggregate/aggregates/test_minmax.slt.part b/e2e_test/batch/duckdb/aggregate/aggregates/test_minmax.slt.part index 
3047d5b4c54f0..f047dbcfbba02 100644 --- a/e2e_test/batch/duckdb/aggregate/aggregates/test_minmax.slt.part +++ b/e2e_test/batch/duckdb/aggregate/aggregates/test_minmax.slt.part @@ -5,7 +5,7 @@ statement ok CREATE TABLE lists(l int[]); statement ok -INSERT INTO lists VALUES ('{0, 10}'), ('{1, 11}'), ('{2, 12}'), ('{3, 13}'), ('{4, 14}'), ('{5, 15}'); +INSERT INTO lists VALUES ('{0, 10}'), ('{1, 11}'), ('{2, 12}'), ('{3, 13}'), ('{4, 14}'), ('{5, 15}'); statement ok FLUSH; diff --git a/e2e_test/batch/functions/abs.slt.part b/e2e_test/batch/functions/abs.slt.part index 19420710443f3..a1e64f5ed65c5 100644 --- a/e2e_test/batch/functions/abs.slt.part +++ b/e2e_test/batch/functions/abs.slt.part @@ -1,20 +1,20 @@ query I -SELECT abs(-1000) +SELECT abs(-1000) ---- 1000 query I -SELECT abs(-1000.2131293210382103821) +SELECT abs(-1000.2131293210382103821) ---- 1000.2131293210382103821 query I -SELECT abs(-10002131293210382103821) +SELECT abs(-10002131293210382103821) ---- 10002131293210382103821 query I -SELECT abs(2134) +SELECT abs(2134) ---- 2134 diff --git a/e2e_test/batch/functions/pi.slt.part b/e2e_test/batch/functions/pi.slt.part index 2f19b5132420e..e1c0dd881e0ae 100644 --- a/e2e_test/batch/functions/pi.slt.part +++ b/e2e_test/batch/functions/pi.slt.part @@ -14,7 +14,7 @@ statement ok insert into f32_table values(pi()); query I -SELECT pi() +SELECT pi() ---- 3.141592653589793 diff --git a/e2e_test/batch/functions/sqrt.slt.part b/e2e_test/batch/functions/sqrt.slt.part index f00587914d7fa..b10eaf9edd794 100644 --- a/e2e_test/batch/functions/sqrt.slt.part +++ b/e2e_test/batch/functions/sqrt.slt.part @@ -1,4 +1,4 @@ -# testing sqrt(double precision) +# testing sqrt(double precision) query T SELECT abs(sqrt('1004.3') - '31.690692639953454') < 1e-12; ---- diff --git a/e2e_test/batch/functions/trigonometric_funcs.slt.part b/e2e_test/batch/functions/trigonometric_funcs.slt.part index 6d90d8c8b68c6..ae30880a155ba 100644 --- a/e2e_test/batch/functions/trigonometric_funcs.slt.part +++ b/e2e_test/batch/functions/trigonometric_funcs.slt.part @@ -306,12 +306,12 @@ t query R SELECT abs(cosd(85) - 0.08715574274765817) < 1e-14; ---- -t +t query R SELECT abs(cosd(90) - 0.0) < 1e-14; ---- -t +t query R SELECT cosd(('Inf')) diff --git a/e2e_test/batch/top_n/group_top_n.slt b/e2e_test/batch/top_n/group_top_n.slt index 71904968200cf..40502a5711b01 100644 --- a/e2e_test/batch/top_n/group_top_n.slt +++ b/e2e_test/batch/top_n/group_top_n.slt @@ -5,7 +5,7 @@ statement ok create table t(x int, y int); statement ok -insert into t values +insert into t values (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), diff --git a/e2e_test/batch/types/list/multi-dimentional_list_cast.slt.part b/e2e_test/batch/types/list/multi-dimentional_list_cast.slt.part index a69e52ad22872..8a67840a6c205 100644 --- a/e2e_test/batch/types/list/multi-dimentional_list_cast.slt.part +++ b/e2e_test/batch/types/list/multi-dimentional_list_cast.slt.part @@ -1,25 +1,25 @@ -query I -select array[array[1, 2], array[3, 4]]; ----- -{{1,2},{3,4}} - -query I -select array[[1, 2], [3, 4]]; ----- -{{1,2},{3,4}} - -query I -select array[[array[1, 2]], [[3, 4]]]; ----- -{{{1,2}},{{3,4}}} - -query I -select array[[[1, 2]], [array[3, 4]]]; ----- -{{{1,2}},{{3,4}}} - -statement error syntax error at or near -select array[array[1, 2], [3, 4]]; - -statement error syntax error at or near +query I +select array[array[1, 2], array[3, 4]]; +---- +{{1,2},{3,4}} + +query I +select array[[1, 2], [3, 4]]; 
+---- +{{1,2},{3,4}} + +query I +select array[[array[1, 2]], [[3, 4]]]; +---- +{{{1,2}},{{3,4}}} + +query I +select array[[[1, 2]], [array[3, 4]]]; +---- +{{{1,2}},{{3,4}}} + +statement error syntax error at or near +select array[array[1, 2], [3, 4]]; + +statement error syntax error at or near select array[[1, 2], array[3, 4]]; \ No newline at end of file diff --git a/e2e_test/batch/types/struct/nested_structs.slt.part b/e2e_test/batch/types/struct/nested_structs.slt.part index 85e4d1b75ad25..9ba4b0f2718b6 100644 --- a/e2e_test/batch/types/struct/nested_structs.slt.part +++ b/e2e_test/batch/types/struct/nested_structs.slt.part @@ -1,5 +1,5 @@ # Copied from https://github.com/duckdb/duckdb (MIT licensed). -# Copyright 2018-2022 Stichting DuckDB Foundation +# Copyright 2018-2022 Stichting DuckDB Foundation statement ok SET RW_IMPLICIT_FLUSH TO true; diff --git a/e2e_test/batch/types/struct/struct_case.slt.part b/e2e_test/batch/types/struct/struct_case.slt.part index a6626386c38ca..3200d7b05b35d 100644 --- a/e2e_test/batch/types/struct/struct_case.slt.part +++ b/e2e_test/batch/types/struct/struct_case.slt.part @@ -1,5 +1,5 @@ # Copied from https://github.com/duckdb/duckdb (MIT licensed). -# Copyright 2018-2022 Stichting DuckDB Foundation +# Copyright 2018-2022 Stichting DuckDB Foundation statement ok SET RW_IMPLICIT_FLUSH TO true; diff --git a/e2e_test/batch/types/struct/struct_cast.slt.part b/e2e_test/batch/types/struct/struct_cast.slt.part index f4cb3959c7072..77ac022dd9e09 100644 --- a/e2e_test/batch/types/struct/struct_cast.slt.part +++ b/e2e_test/batch/types/struct/struct_cast.slt.part @@ -1,5 +1,5 @@ # Copied from https://github.com/duckdb/duckdb (MIT licensed). -# Copyright 2018-2022 Stichting DuckDB Foundation +# Copyright 2018-2022 Stichting DuckDB Foundation statement ok SET RW_IMPLICIT_FLUSH TO true; diff --git a/e2e_test/batch/types/struct/struct_cross_product.slt.part b/e2e_test/batch/types/struct/struct_cross_product.slt.part index 547916a65f3c7..2c45edf96785a 100644 --- a/e2e_test/batch/types/struct/struct_cross_product.slt.part +++ b/e2e_test/batch/types/struct/struct_cross_product.slt.part @@ -1,5 +1,5 @@ # Copied from https://github.com/duckdb/duckdb (MIT licensed). -# Copyright 2018-2022 Stichting DuckDB Foundation +# Copyright 2018-2022 Stichting DuckDB Foundation statement ok SET RW_IMPLICIT_FLUSH TO true; diff --git a/e2e_test/batch/types/struct/struct_operation.slt.part b/e2e_test/batch/types/struct/struct_operation.slt.part index e966a5126edd6..b441e47a01e26 100644 --- a/e2e_test/batch/types/struct/struct_operation.slt.part +++ b/e2e_test/batch/types/struct/struct_operation.slt.part @@ -1,5 +1,5 @@ # Copied from https://github.com/duckdb/duckdb (MIT licensed). 
-# Copyright 2018-2022 Stichting DuckDB Foundation +# Copyright 2018-2022 Stichting DuckDB Foundation statement ok SET RW_IMPLICIT_FLUSH TO true; diff --git a/e2e_test/ch_benchmark/batch/q3.slt.part b/e2e_test/ch_benchmark/batch/q3.slt.part index 99d3c3718479c..00d56c653c26a 100644 --- a/e2e_test/ch_benchmark/batch/q3.slt.part +++ b/e2e_test/ch_benchmark/batch/q3.slt.part @@ -1,19 +1,19 @@ query IIIIT -select ol_o_id, ol_w_id, ol_d_id, round(sum(ol_amount)::decimal, 0) as revenue, o_entry_d -from customer, neworder, orders, orderline -where - c_id = o_c_id - and c_w_id = o_w_id - and c_d_id = o_d_id - and no_w_id = o_w_id - and no_d_id = o_d_id - and no_o_id = o_id - and ol_w_id = o_w_id - and ol_d_id = o_d_id - and ol_o_id = o_id - and o_entry_d > '2007-01-02 00:00:00.000000' - -- and c_state like '%a%' -group by ol_o_id, ol_w_id, ol_d_id, o_entry_d +select ol_o_id, ol_w_id, ol_d_id, round(sum(ol_amount)::decimal, 0) as revenue, o_entry_d +from customer, neworder, orders, orderline +where + c_id = o_c_id + and c_w_id = o_w_id + and c_d_id = o_d_id + and no_w_id = o_w_id + and no_d_id = o_d_id + and no_o_id = o_id + and ol_w_id = o_w_id + and ol_d_id = o_d_id + and ol_o_id = o_id + and o_entry_d > '2007-01-02 00:00:00.000000' + -- and c_state like '%a%' +group by ol_o_id, ol_w_id, ol_d_id, o_entry_d order by revenue desc, o_entry_d; ---- 23 1 6 81327 2022-12-12 19:29:02 diff --git a/e2e_test/ch_benchmark/batch/q8.slt.part b/e2e_test/ch_benchmark/batch/q8.slt.part index b34eceb893728..3606cb7822d3c 100644 --- a/e2e_test/ch_benchmark/batch/q8.slt.part +++ b/e2e_test/ch_benchmark/batch/q8.slt.part @@ -1,5 +1,5 @@ query TR -select +select extract(year from o_entry_d::timestamp) as l_year, round((sum(case when n2.n_name = 'GERMANY' or n2.n_name = 'UNITED STATES' then ol_amount else 0 end) / sum(ol_amount))::decimal, 2) as mkt_share from item, supplier, stock, orderline, orders, customer, nation n1, nation n2, region diff --git a/e2e_test/ch_benchmark/batch/q9.slt.part b/e2e_test/ch_benchmark/batch/q9.slt.part index 9288012b6d240..4385ebe6f1fe4 100644 --- a/e2e_test/ch_benchmark/batch/q9.slt.part +++ b/e2e_test/ch_benchmark/batch/q9.slt.part @@ -1,6 +1,6 @@ query TTR -select n_name, - extract(year from o_entry_d::timestamp) as l_year, +select n_name, + extract(year from o_entry_d::timestamp) as l_year, round(sum(ol_amount)::decimal, 0) as sum_profit from item, stock, supplier, orderline, orders, nation where ol_i_id = s_i_id diff --git a/e2e_test/ch_benchmark/streaming/views/q3.slt.part b/e2e_test/ch_benchmark/streaming/views/q3.slt.part index d9d6e7fce18d1..73894814ea307 100644 --- a/e2e_test/ch_benchmark/streaming/views/q3.slt.part +++ b/e2e_test/ch_benchmark/streaming/views/q3.slt.part @@ -1,17 +1,17 @@ statement ok create materialized view ch_benchmark_q3 as -select ol_o_id, ol_w_id, ol_d_id, round(sum(ol_amount)::decimal, 0) as revenue, o_entry_d -from customer, neworder, orders, orderline -where - c_id = o_c_id - and c_w_id = o_w_id - and c_d_id = o_d_id - and no_w_id = o_w_id - and no_d_id = o_d_id - and no_o_id = o_id - and ol_w_id = o_w_id - and ol_d_id = o_d_id - and ol_o_id = o_id - and o_entry_d > '2007-01-02 00:00:00.000000' - -- and c_state like '%a%' +select ol_o_id, ol_w_id, ol_d_id, round(sum(ol_amount)::decimal, 0) as revenue, o_entry_d +from customer, neworder, orders, orderline +where + c_id = o_c_id + and c_w_id = o_w_id + and c_d_id = o_d_id + and no_w_id = o_w_id + and no_d_id = o_d_id + and no_o_id = o_id + and ol_w_id = o_w_id + and ol_d_id = o_d_id + and ol_o_id = o_id 
+ and o_entry_d > '2007-01-02 00:00:00.000000' + -- and c_state like '%a%' group by ol_o_id, ol_w_id, ol_d_id, o_entry_d ; diff --git a/e2e_test/ch_benchmark/streaming/views/q8.slt.part b/e2e_test/ch_benchmark/streaming/views/q8.slt.part index 879d3e08f1cee..371c3d6149084 100644 --- a/e2e_test/ch_benchmark/streaming/views/q8.slt.part +++ b/e2e_test/ch_benchmark/streaming/views/q8.slt.part @@ -1,6 +1,6 @@ statement ok create materialized view ch_benchmark_q8 as -select +select extract(year from o_entry_d::timestamp) as l_year, round((sum(case when n2.n_name = 'GERMANY' or n2.n_name = 'UNITED STATES' then ol_amount else 0 end) / sum(ol_amount))::decimal, 2) as mkt_share from item, supplier, stock, orderline, orders, customer, nation n1, nation n2, region diff --git a/e2e_test/ch_benchmark/streaming/views/q9.slt.part b/e2e_test/ch_benchmark/streaming/views/q9.slt.part index 7cd30cb2d9d64..da77ed21f49a4 100644 --- a/e2e_test/ch_benchmark/streaming/views/q9.slt.part +++ b/e2e_test/ch_benchmark/streaming/views/q9.slt.part @@ -1,7 +1,7 @@ statement ok create materialized view ch_benchmark_q9 as -select n_name, - extract(year from o_entry_d::timestamp) as l_year, +select n_name, + extract(year from o_entry_d::timestamp) as l_year, round(sum(ol_amount)::decimal, 0) as sum_profit from item, stock, supplier, orderline, orders, nation where ol_i_id = s_i_id diff --git a/e2e_test/database/prepare.slt b/e2e_test/database/prepare.slt index e5b167c409f69..a2b37011e9325 100644 --- a/e2e_test/database/prepare.slt +++ b/e2e_test/database/prepare.slt @@ -1,5 +1,5 @@ -# Create a database for test.slt to use. -# A new connection will be created when we switch to a different database, +# Create a database for test.slt to use. +# A new connection will be created when we switch to a different database, # so this cannot be tested in a single .slt file. # Create a test database. 
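The `prepare.slt` hunk above notes that the database test must create its database in a separate file, because switching databases opens a new connection and therefore cannot be done inside a single `.slt` file. A sketch of that preparation step, with a hypothetical database name (the actual name used by `test.slt` is not shown in this hunk), is simply:

```
# Hypothetical preparation step: create the database first, so that the
# follow-up test file can connect to it over a fresh connection.
statement ok
create database e2e_test_db;
```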
diff --git a/e2e_test/ddl/explain_no_duplicate_check.slt b/e2e_test/ddl/explain_no_duplicate_check.slt index ee374f7260bbc..f3f116974ee3e 100644 --- a/e2e_test/ddl/explain_no_duplicate_check.slt +++ b/e2e_test/ddl/explain_no_duplicate_check.slt @@ -12,7 +12,7 @@ explain create table test_explain_table (v int); -# Create materialized view on it +# Create materialized view on it statement ok create materialized view mv as select v from test_explain_table order by v limit 10; diff --git a/e2e_test/ddl/invalid_operation.slt b/e2e_test/ddl/invalid_operation.slt index 00eec4eb19529..a86506e8cf587 100644 --- a/e2e_test/ddl/invalid_operation.slt +++ b/e2e_test/ddl/invalid_operation.slt @@ -74,7 +74,7 @@ statement error Use `DROP TABLE` drop source t; # FIXME: improve the error message -statement error not found +statement error not found drop sink t; # FIXME: improve the error message @@ -182,7 +182,7 @@ drop sink src; statement error not found drop view src; -# 4.6 sink +# 4.6 sink statement ok CREATE SINK sink FROM mv WITH (connector='blackhole'); diff --git a/e2e_test/ddl/table/table.slt.part b/e2e_test/ddl/table/table.slt.part index 525982f2c579d..2e7c744ba2536 100644 --- a/e2e_test/ddl/table/table.slt.part +++ b/e2e_test/ddl/table/table.slt.part @@ -132,7 +132,7 @@ create table t (v1 int not null); statement error create table t (v1 varchar collate "en_US"); -# Test create-table-as +# Test create-table-as statement ok create table t as select 1; @@ -148,7 +148,7 @@ create table t as select 1 as a, 2 as b; statement ok drop table t; -statement ok +statement ok create table t(v1) as select 1; statement ok @@ -188,11 +188,11 @@ drop table t1; statement ok drop table t; -statement ok +statement ok create table t AS SELECT * FROM generate_series(0, 5,1) tbl(i); statement ok -flush; +flush; query I select * from t order by i; @@ -242,7 +242,7 @@ drop table n1; statement ok drop table t; -statement ok +statement ok create table t (v1 int,v2 int); statement ok @@ -251,7 +251,7 @@ create table t1(a,b) as select v1,v2 from t; statement ok create table t2(a) as select v1,v2 from t; -statement ok +statement ok drop table t; statement ok diff --git a/e2e_test/generated/README.md b/e2e_test/generated/README.md index 2c7b7aa7b3d39..a23d70f071a17 100644 --- a/e2e_test/generated/README.md +++ b/e2e_test/generated/README.md @@ -2,7 +2,7 @@ ## docslt -Generated by +Generated by ```bash cargo run --bin risedev-docslt diff --git a/e2e_test/nexmark/insert_bid.slt.part b/e2e_test/nexmark/insert_bid.slt.part index 7f2b945d84fbf..a227152005e45 100644 --- a/e2e_test/nexmark/insert_bid.slt.part +++ b/e2e_test/nexmark/insert_bid.slt.part @@ -8,53 +8,53 @@ INSERT INTO bid ( date_time, extra ) VALUES -(1000, 1001, 73134520, 'channel-7568', 'https://www.nexmark.com/rswp/ygi/_gwv/item.htm?query=1&channel_id=163053568', '2015-07-15 00:00:01', 'bxkfohfuvlkvjarjgrngycoibaooinpatxmrmhotgsqtdarhxlbrgroteageapilufrwznnvea'), -(1000, 1001, 499920, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:01.001', 'gwqpevotazgnmxipaopgadzjhnmoxnyxdslcqchawppgliuqntlnjqztzdlaooms'), -(1000, 1001, 1940, 'channel-9319', 'https://www.nexmark.com/myw/ifm/m_dd/item.htm?query=1&channel_id=433848320', '2015-07-15 00:00:02.001', 'gxhdccxtkafmnxiwlfrrvflfxsutdumuejnuvekmzvvagkagfnfkiniwrmssgdhgxyowzykkwnd'), -(1000, 1007, 12655, 'channel-5136', 'https://www.nexmark.com/xzzn/qfz/_kk/item.htm?query=1&channel_id=136839168', '2015-07-15 00:00:02.001', 
'cnipfyemyybafidbftraixbzwzfuiqotzjhwebelxuettusmmofqypfsrtivrqtvghzvdtqhqxc'), -(1004, 1001, 3992, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:03.001', 'ylaavyeydxlbzycbvvaeiqflxtxvjhqzaiaxmfictkgqxuykbkyacvhvhhdwxfzzvrmbdfsxyybugxmq'), -(1012, 1003, 19269, 'channel-5775', 'https://www.nexmark.com/g_oh/jlh/q_cf/item.htm?query=1&channel_id=244842496', '2015-07-15 00:00:03.001', 'zvhixblfatttovhezudfnaqkzhngltvwoclmbtiyeikjxqgmcvvhhclaudqyauwat'), -(1000, 1001, 2419091, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:04.001', 'roiowjqmoknpaewchvciudpvrchvmeahktnreghhzoumjkqzibdabshabwqg'), -(1000, 1001, 43672280, 'channel-9550', 'https://www.nexmark.com/_nq/nslg/wvuo/item.htm?query=1&channel_id=1923350528', '2015-07-15 00:00:04.001', 'ckqgmlisexypszjgjbudoejevhpdvdwdvwcwgkyosfhungfqrvtgaw'), -(1008, 1002, 28953332, 'channel-5698', 'https://www.nexmark.com/_ft/bwy/hh_/item.htm?query=1&channel_id=1114112000', '2015-07-15 00:00:04.001', 'vcwipbxmlunhuydoptqjecuqoinqioxibjdwtfxapnmuzzsjpwitcfexncgnxyaistbstubeuotsgs'), -(1000, 1010, 235, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:04.001', 'xsslxlqwhaglqtqrdvspopxkmpvdjinwtglewwvzdwgulmguzhqlqfjsehjaljtkbehlvttbmxuvfhf'), -(1001, 1004, 96533552, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:05.001', 'tbxkmcsgxetgedefplmecyqssamffgwrvmcgxzjsilauvxiyesrmlphkxik'), -(1007, 1001, 242, 'channel-3148', 'https://www.nexmark.com/jgy/pdmx/h_yl/item.htm?query=1&channel_id=842006528', '2015-07-15 00:00:05.002', 'yzzgchwigbfzxrrlpmlclcdqipozeqibwntzwrhofvbewxuamxbftdgqauhjeiiqzyt'), -(1000, 1001, 373, 'channel-426', 'https://www.nexmark.com/qhfk/hhg/ti_b/item.htm?query=1&channel_id=1434451968', '2015-07-15 00:00:05.002', 'aalbeoflneqqncxbvwszsqcrtqcdytiyljzxpivkzpgwupsoyuaaexjapqsuh'), -(1012, 1001, 71083760, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:06.002', 'zatpcgnqcbcwrtglrjgugqffyabwbteckqxlomjknkkotrbarpwhzi'), -(1000, 1003, 647009, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:06.002', 'aipwmsfmwzjviutftnkrndbjisxryazydrfjgruwkppllqypvivjjuwrzdipb'), -(1009, 1001, 209960, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:06.002', 'jmxarvgscfnkcdenoazhfygexpljyjtbprldrppfgyhlmysgpibrmohmaroansnruycxasekoufmfcrp'), -(1000, 1001, 522439, 'channel-1217', 'https://www.nexmark.com/r_d/hry/ebdc/item.htm?query=1&channel_id=2095054848', '2015-07-15 00:00:07.002', 'ucmakczqkkcthyxthcvlpziafettollqfgqjgrfhvhpfusvgeahiydux'), -(1003, 1001, 89027544, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:07.002', 'wpsrfruszjuaqzroxiukljjbdrlehlyxnbztiaoutlhmehhojgwoncidsq'), -(1001, 1008, 34546, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:08.002', 'royyfdziwhoefjfckzoxckpooqmbtqxwltmcypaotuqzuebexxkwdnfctcooei'), -(1000, 1002, 322, 'channel-2691', 'https://www.nexmark.com/dyg/ozd/gp_/item.htm?query=1&channel_id=1051721728', '2015-07-15 00:00:08.002', 'rmfblzbgfonebnxejptccmvhakgnagewhbolwyiygsyyfsuwjiwuohrkxavxuhxzycjiprahpyygej'), -(1006, 1001, 333841, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:09.002', 'unhedkuzyhxtjwcclvxyqvlyyxugtcxwzdvxszyppxqlsvxqzzwtmdxzvvawculjwtegvqhky'), -(1000, 1004, 2349, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 
00:00:09.003', 'rwmjnxidcqagbkqwshiuzcoeyqkznwptzxsmpzxhymnrfhzuxdqfxjblywhastfcqjffyrrygdmv'), -(1003, 1001, 42953, 'channel-7914', 'https://www.nexmark.com/fkb/zfwu/tcq/item.htm?query=1&channel_id=1467482112', '2015-07-15 00:00:09.003', 'ibhvqzzjihezplknyuirszjxwzjmjufyrotaiaskermnpxxuznzmpqxactmhuvzglf'), -(1011, 1001, 5255, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:10.003', 'jwqpxghlxenjdndnxxesigkexntlplwjewvocnlygtedoybqjtcgdlxrsjwk'), -(1000, 1001, 14909, 'channel-6340', 'https://www.nexmark.com/ykkn/wnmu/iytk/item.htm?query=1&channel_id=588775424', '2015-07-15 00:00:10.003', 'qflzyadlrncvltfbgdgecgnyrrgiiaaczikpoyvqypnhqpvhvlbupdzsplrhhrtotwxfqpbwt'), -(1000, 1001, 13982877, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:11.003', 'yrllwmdvzsapjmvllehrwumuzkcnswoezmszyawjrbiinwamqhubrkwoegpfmsiwawofsdxq'), -(1000, 1003, 9353, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:11.003', 'qnxoeqlppooteqhxfzeoqlmkwforwtzyqyhjfzpgbaucbbxieuvvpfeubalvrgdexgfilcdkjuh'), -(1001, 1001, 84935, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:11.003', 'rvawpznicmrupxpoyegtkmirvotiiuleklxbufoceleihfzpkmmbzrngmogqeaysfjy'), -(1000, 1006, 1433650, 'channel-5311', 'https://www.nexmark.com/__du/lcuf/nhcz/item.htm?query=1&channel_id=47710208', '2015-07-15 00:00:12.003', 'xjflgxfiltfokisttnrueiyejuesecuwhxulwpkqnisqkfnyjbtdpmeimlyrphctf'), -(1000, 1001, 28676694, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:13.003', 'rbuxqhlpgrxcudngtiqtmuucohowqxczvtpfnennefebshmuoidyeinfhmkzemivg'), -(1011, 1009, 12201, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:13.003', 'itfogghkclpdkraffmwdsyblmwwaxxxgembvwyvzcmxjqejwjjsudqezwbhonfydwubtbnqgct'), -(1000, 1001, 1930913, 'channel-497', 'https://www.nexmark.com/_auz/ypry/xeka/item.htm?query=1&channel_id=1887436800', '2015-07-15 00:00:14.004', 'rpvaomoeslugunnfhqsryqzxhbywymppdpipkjzxzjnwzxpipnsfdghqm'), -(1004, 1001, 2062, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:14.004', 'gknwvwczpduarvvbzkjgyxipbnzjbnhcneurucjqfitazyhdylgqqqlsskojl'), -(1000, 1001, 567581, 'channel-4450', 'https://www.nexmark.com/l__/kjxu/onl/item.htm?query=1&channel_id=1183318016', '2015-07-15 00:00:15.004', 'alulhecnzcixssbdjgrqvblnozzozbtaaerktjzaxzncjbcxglmdwcqwpbgjbfujtrvtjksco'), -(1000, 1001, 3999, 'channel-7314', 'https://www.nexmark.com/zdr/zhjp/ryr/item.htm?query=1&channel_id=1228406784', '2015-07-15 00:00:15.004', 'ryktfqvddqxzmvybagmkromvmwornpysycoingrjvwygkxzvikzmffmfdoskyrsqrhwfnjkjtb'), -(1010, 1001, 213, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:15.004', 'xlyazoncjinqftzslwzpbwsfwjvuvytifuctjzzhiawdoxgzispzdwritryvkj'), -(1006, 1001, 3414043, 'channel-8070', 'https://www.nexmark.com/kyop/avga/kkq/item.htm?query=1&channel_id=1643642880', '2015-07-15 00:00:16.004', 'aiuubtfwalgmjsgiqkpkovqheatbweauormiveilvgbrymereqlinivuwgcwigzmyhcdvypc'), -(1006, 1001, 21659600, 'channel-5594', 'https://www.nexmark.com/hwl/iqr/gi__/item.htm?query=1&channel_id=1537736704', '2015-07-15 00:00:16.004', 'qthsicqelorumfjrbjacwshuchufyinikyqtnctalyktdujiwqhlofkxwvu'), -(1009, 1001, 806, 'channel-2421', 'https://www.nexmark.com/vicj/m_a/vop/item.htm?query=1&channel_id=1366294528', '2015-07-15 00:00:17.004', 'efpmkkcrybqeodykzswczvmslauxgasmtvhbjswnjlhwlyuicaawprcp'), -(1012, 1001, 
170060, 'channel-4175', 'https://www.nexmark.com/aa_k/kwna/dfo_/item.htm?query=1&channel_id=234356736', '2015-07-15 00:00:18.004', 'zuwaklfhmtsmhsisdbochookjfedsmkhuuufdmomntxynjwtmohcmpqmvjtdgqwnfuqxwvg'), -(1001, 1001, 222, 'channel-4639', 'https://www.nexmark.com/_lwa/jepz/slr/item.htm?query=1&channel_id=129499136', '2015-07-15 00:00:19.004', 'srnrswlrlfjgsglzbhhtypluvykcygqjvrgihflqrlzohptveqnshbshbankokgoooqttrgskh'), -(1010, 1002, 2092, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:19.005', 'xsnnlcwqvsvtykjjjytqdarszxiunbpikhlltzcbzgtyrczlujqaxptncekuigacyszuoggizexd'), -(1012, 1001, 3378476, 'channel-5845', 'https://www.nexmark.com/ph_/qho/awfo/item.htm?query=1', '2015-07-15 00:00:20.005', 'pzekasaqfyqeykupxciyhmstbecetphgcsqpjywqqtttrzbnitwvrkxefbq'), -(1012, 1001, 3633428, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:20.005', 'tkwhxqjbnsfzczgrdffzdlsqwryirkyygzdttpdutcsogqiteblxoyjcmgwbjprlmvekufhq'), -(1004, 1000, 783536, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:21.005', 'gjzqqtmvzsrbrnkxtnlfyodjwqvingsihakejqkidyyzemiindosrxcjbjaekihncddiit'), -(1002, 1001, 190, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:21.005', 'rvcnfijweawzxscxasktbfnjhxmhwprafafcizoythlvpkbgutnvqbthfgqsgcvhip'), -(1000, 1001, 10465998, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:22.005', 'iyvfsixnocijwqeqzesiugoiyppmaansyqvsruqngwsfgksjxsbudeunwadjftqsbojxbvuraba'), -(1000, 1001, 506, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:23.006', 'dgvalsxtsvhizqfyfuskwvymyonrsrysvsncacmbwdxtqpcmcldjztjsdqzjeedzhsz'), -(1000, 1004, 92441, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:24.006', 'vigdixojctlzzlhdczfojxzrvrrfqbtnstozecfukalltorefhjucxsjbqnyyjvumn'), +(1000, 1001, 73134520, 'channel-7568', 'https://www.nexmark.com/rswp/ygi/_gwv/item.htm?query=1&channel_id=163053568', '2015-07-15 00:00:01', 'bxkfohfuvlkvjarjgrngycoibaooinpatxmrmhotgsqtdarhxlbrgroteageapilufrwznnvea'), +(1000, 1001, 499920, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:01.001', 'gwqpevotazgnmxipaopgadzjhnmoxnyxdslcqchawppgliuqntlnjqztzdlaooms'), +(1000, 1001, 1940, 'channel-9319', 'https://www.nexmark.com/myw/ifm/m_dd/item.htm?query=1&channel_id=433848320', '2015-07-15 00:00:02.001', 'gxhdccxtkafmnxiwlfrrvflfxsutdumuejnuvekmzvvagkagfnfkiniwrmssgdhgxyowzykkwnd'), +(1000, 1007, 12655, 'channel-5136', 'https://www.nexmark.com/xzzn/qfz/_kk/item.htm?query=1&channel_id=136839168', '2015-07-15 00:00:02.001', 'cnipfyemyybafidbftraixbzwzfuiqotzjhwebelxuettusmmofqypfsrtivrqtvghzvdtqhqxc'), +(1004, 1001, 3992, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:03.001', 'ylaavyeydxlbzycbvvaeiqflxtxvjhqzaiaxmfictkgqxuykbkyacvhvhhdwxfzzvrmbdfsxyybugxmq'), +(1012, 1003, 19269, 'channel-5775', 'https://www.nexmark.com/g_oh/jlh/q_cf/item.htm?query=1&channel_id=244842496', '2015-07-15 00:00:03.001', 'zvhixblfatttovhezudfnaqkzhngltvwoclmbtiyeikjxqgmcvvhhclaudqyauwat'), +(1000, 1001, 2419091, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:04.001', 'roiowjqmoknpaewchvciudpvrchvmeahktnreghhzoumjkqzibdabshabwqg'), +(1000, 1001, 43672280, 'channel-9550', 'https://www.nexmark.com/_nq/nslg/wvuo/item.htm?query=1&channel_id=1923350528', '2015-07-15 00:00:04.001', 
'ckqgmlisexypszjgjbudoejevhpdvdwdvwcwgkyosfhungfqrvtgaw'), +(1008, 1002, 28953332, 'channel-5698', 'https://www.nexmark.com/_ft/bwy/hh_/item.htm?query=1&channel_id=1114112000', '2015-07-15 00:00:04.001', 'vcwipbxmlunhuydoptqjecuqoinqioxibjdwtfxapnmuzzsjpwitcfexncgnxyaistbstubeuotsgs'), +(1000, 1010, 235, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:04.001', 'xsslxlqwhaglqtqrdvspopxkmpvdjinwtglewwvzdwgulmguzhqlqfjsehjaljtkbehlvttbmxuvfhf'), +(1001, 1004, 96533552, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:05.001', 'tbxkmcsgxetgedefplmecyqssamffgwrvmcgxzjsilauvxiyesrmlphkxik'), +(1007, 1001, 242, 'channel-3148', 'https://www.nexmark.com/jgy/pdmx/h_yl/item.htm?query=1&channel_id=842006528', '2015-07-15 00:00:05.002', 'yzzgchwigbfzxrrlpmlclcdqipozeqibwntzwrhofvbewxuamxbftdgqauhjeiiqzyt'), +(1000, 1001, 373, 'channel-426', 'https://www.nexmark.com/qhfk/hhg/ti_b/item.htm?query=1&channel_id=1434451968', '2015-07-15 00:00:05.002', 'aalbeoflneqqncxbvwszsqcrtqcdytiyljzxpivkzpgwupsoyuaaexjapqsuh'), +(1012, 1001, 71083760, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:06.002', 'zatpcgnqcbcwrtglrjgugqffyabwbteckqxlomjknkkotrbarpwhzi'), +(1000, 1003, 647009, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:06.002', 'aipwmsfmwzjviutftnkrndbjisxryazydrfjgruwkppllqypvivjjuwrzdipb'), +(1009, 1001, 209960, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:06.002', 'jmxarvgscfnkcdenoazhfygexpljyjtbprldrppfgyhlmysgpibrmohmaroansnruycxasekoufmfcrp'), +(1000, 1001, 522439, 'channel-1217', 'https://www.nexmark.com/r_d/hry/ebdc/item.htm?query=1&channel_id=2095054848', '2015-07-15 00:00:07.002', 'ucmakczqkkcthyxthcvlpziafettollqfgqjgrfhvhpfusvgeahiydux'), +(1003, 1001, 89027544, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:07.002', 'wpsrfruszjuaqzroxiukljjbdrlehlyxnbztiaoutlhmehhojgwoncidsq'), +(1001, 1008, 34546, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:08.002', 'royyfdziwhoefjfckzoxckpooqmbtqxwltmcypaotuqzuebexxkwdnfctcooei'), +(1000, 1002, 322, 'channel-2691', 'https://www.nexmark.com/dyg/ozd/gp_/item.htm?query=1&channel_id=1051721728', '2015-07-15 00:00:08.002', 'rmfblzbgfonebnxejptccmvhakgnagewhbolwyiygsyyfsuwjiwuohrkxavxuhxzycjiprahpyygej'), +(1006, 1001, 333841, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:09.002', 'unhedkuzyhxtjwcclvxyqvlyyxugtcxwzdvxszyppxqlsvxqzzwtmdxzvvawculjwtegvqhky'), +(1000, 1004, 2349, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:09.003', 'rwmjnxidcqagbkqwshiuzcoeyqkznwptzxsmpzxhymnrfhzuxdqfxjblywhastfcqjffyrrygdmv'), +(1003, 1001, 42953, 'channel-7914', 'https://www.nexmark.com/fkb/zfwu/tcq/item.htm?query=1&channel_id=1467482112', '2015-07-15 00:00:09.003', 'ibhvqzzjihezplknyuirszjxwzjmjufyrotaiaskermnpxxuznzmpqxactmhuvzglf'), +(1011, 1001, 5255, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:10.003', 'jwqpxghlxenjdndnxxesigkexntlplwjewvocnlygtedoybqjtcgdlxrsjwk'), +(1000, 1001, 14909, 'channel-6340', 'https://www.nexmark.com/ykkn/wnmu/iytk/item.htm?query=1&channel_id=588775424', '2015-07-15 00:00:10.003', 'qflzyadlrncvltfbgdgecgnyrrgiiaaczikpoyvqypnhqpvhvlbupdzsplrhhrtotwxfqpbwt'), +(1000, 1001, 13982877, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:11.003', 
'yrllwmdvzsapjmvllehrwumuzkcnswoezmszyawjrbiinwamqhubrkwoegpfmsiwawofsdxq'), +(1000, 1003, 9353, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:11.003', 'qnxoeqlppooteqhxfzeoqlmkwforwtzyqyhjfzpgbaucbbxieuvvpfeubalvrgdexgfilcdkjuh'), +(1001, 1001, 84935, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:11.003', 'rvawpznicmrupxpoyegtkmirvotiiuleklxbufoceleihfzpkmmbzrngmogqeaysfjy'), +(1000, 1006, 1433650, 'channel-5311', 'https://www.nexmark.com/__du/lcuf/nhcz/item.htm?query=1&channel_id=47710208', '2015-07-15 00:00:12.003', 'xjflgxfiltfokisttnrueiyejuesecuwhxulwpkqnisqkfnyjbtdpmeimlyrphctf'), +(1000, 1001, 28676694, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:13.003', 'rbuxqhlpgrxcudngtiqtmuucohowqxczvtpfnennefebshmuoidyeinfhmkzemivg'), +(1011, 1009, 12201, 'Apple', 'https://www.nexmark.com/rxa/bnn/fl_/item.htm?query=1', '2015-07-15 00:00:13.003', 'itfogghkclpdkraffmwdsyblmwwaxxxgembvwyvzcmxjqejwjjsudqezwbhonfydwubtbnqgct'), +(1000, 1001, 1930913, 'channel-497', 'https://www.nexmark.com/_auz/ypry/xeka/item.htm?query=1&channel_id=1887436800', '2015-07-15 00:00:14.004', 'rpvaomoeslugunnfhqsryqzxhbywymppdpipkjzxzjnwzxpipnsfdghqm'), +(1004, 1001, 2062, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:14.004', 'gknwvwczpduarvvbzkjgyxipbnzjbnhcneurucjqfitazyhdylgqqqlsskojl'), +(1000, 1001, 567581, 'channel-4450', 'https://www.nexmark.com/l__/kjxu/onl/item.htm?query=1&channel_id=1183318016', '2015-07-15 00:00:15.004', 'alulhecnzcixssbdjgrqvblnozzozbtaaerktjzaxzncjbcxglmdwcqwpbgjbfujtrvtjksco'), +(1000, 1001, 3999, 'channel-7314', 'https://www.nexmark.com/zdr/zhjp/ryr/item.htm?query=1&channel_id=1228406784', '2015-07-15 00:00:15.004', 'ryktfqvddqxzmvybagmkromvmwornpysycoingrjvwygkxzvikzmffmfdoskyrsqrhwfnjkjtb'), +(1010, 1001, 213, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:15.004', 'xlyazoncjinqftzslwzpbwsfwjvuvytifuctjzzhiawdoxgzispzdwritryvkj'), +(1006, 1001, 3414043, 'channel-8070', 'https://www.nexmark.com/kyop/avga/kkq/item.htm?query=1&channel_id=1643642880', '2015-07-15 00:00:16.004', 'aiuubtfwalgmjsgiqkpkovqheatbweauormiveilvgbrymereqlinivuwgcwigzmyhcdvypc'), +(1006, 1001, 21659600, 'channel-5594', 'https://www.nexmark.com/hwl/iqr/gi__/item.htm?query=1&channel_id=1537736704', '2015-07-15 00:00:16.004', 'qthsicqelorumfjrbjacwshuchufyinikyqtnctalyktdujiwqhlofkxwvu'), +(1009, 1001, 806, 'channel-2421', 'https://www.nexmark.com/vicj/m_a/vop/item.htm?query=1&channel_id=1366294528', '2015-07-15 00:00:17.004', 'efpmkkcrybqeodykzswczvmslauxgasmtvhbjswnjlhwlyuicaawprcp'), +(1012, 1001, 170060, 'channel-4175', 'https://www.nexmark.com/aa_k/kwna/dfo_/item.htm?query=1&channel_id=234356736', '2015-07-15 00:00:18.004', 'zuwaklfhmtsmhsisdbochookjfedsmkhuuufdmomntxynjwtmohcmpqmvjtdgqwnfuqxwvg'), +(1001, 1001, 222, 'channel-4639', 'https://www.nexmark.com/_lwa/jepz/slr/item.htm?query=1&channel_id=129499136', '2015-07-15 00:00:19.004', 'srnrswlrlfjgsglzbhhtypluvykcygqjvrgihflqrlzohptveqnshbshbankokgoooqttrgskh'), +(1010, 1002, 2092, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:19.005', 'xsnnlcwqvsvtykjjjytqdarszxiunbpikhlltzcbzgtyrczlujqaxptncekuigacyszuoggizexd'), +(1012, 1001, 3378476, 'channel-5845', 'https://www.nexmark.com/ph_/qho/awfo/item.htm?query=1', '2015-07-15 00:00:20.005', 'pzekasaqfyqeykupxciyhmstbecetphgcsqpjywqqtttrzbnitwvrkxefbq'), +(1012, 1001, 3633428, 
'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:20.005', 'tkwhxqjbnsfzczgrdffzdlsqwryirkyygzdttpdutcsogqiteblxoyjcmgwbjprlmvekufhq'), +(1004, 1000, 783536, 'Baidu', 'https://www.nexmark.com/pd_/a_y/p_f_/item.htm?query=1', '2015-07-15 00:00:21.005', 'gjzqqtmvzsrbrnkxtnlfyodjwqvingsihakejqkidyyzemiindosrxcjbjaekihncddiit'), +(1002, 1001, 190, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:21.005', 'rvcnfijweawzxscxasktbfnjhxmhwprafafcizoythlvpkbgutnvqbthfgqsgcvhip'), +(1000, 1001, 10465998, 'Facebook', 'https://www.nexmark.com/_mks/ppeq/sic/item.htm?query=1', '2015-07-15 00:00:22.005', 'iyvfsixnocijwqeqzesiugoiyppmaansyqvsruqngwsfgksjxsbudeunwadjftqsbojxbvuraba'), +(1000, 1001, 506, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:23.006', 'dgvalsxtsvhizqfyfuskwvymyonrsrysvsncacmbwdxtqpcmcldjztjsdqzjeedzhsz'), +(1000, 1004, 92441, 'Google', 'https://www.nexmark.com/vzbh/gkz_/yha/item.htm?query=1', '2015-07-15 00:00:24.006', 'vigdixojctlzzlhdczfojxzrvrrfqbtnstozecfukalltorefhjucxsjbqnyyjvumn'), (1000, 1001, 366, 'channel-9891', 'https://www.nexmark.com/dhv/smk/gfok/item.htm?query=1', '2015-07-15 00:00:25.006', 'zwtgfachqjpffrplonfttcxixellixjbejdxrpslceydbcjnlhnzwxdks'); diff --git a/e2e_test/nexmark/insert_person.slt.part b/e2e_test/nexmark/insert_person.slt.part index 49dc74c9a6c1a..dd51babb80bd8 100644 --- a/e2e_test/nexmark/insert_person.slt.part +++ b/e2e_test/nexmark/insert_person.slt.part @@ -9,23 +9,23 @@ INSERT INTO person ( date_time, extra ) VALUES -(1000, 'vicky noris', 'vzbhp@wxv.com', '4355 0142 3460 9324', 'boise', 'ca', '2015-07-15 00:00:00', 'cllnesmssnthtljklifqbqcyhcjwiuoaudxxwcnnwgmsmwgqelplzyckqzuoaitfpxubgpkjtqjhktelmbskvjkxrhziyowxibbgnqneuaiazqduhkynvgeisbxtknbxmqmzbgnptlrcyigjginataks'), -(1001, 'peter smith', 'lf@sas.com', '1932 7149 7430 9595', 'cheyenne', 'ca', '2015-07-15 00:00:00.005', 'cbynrrhwzdyweandiyjtakwwchkmoqtsewodpcbefvfkljvxxysswqkyhhpjipvrnmuvlvqjsiavcdpiinxtetxvwiuxjinmmhnultqqozgbkuoqhdezgvdorrpufstltihbflmsmuemgthxjthbjyfsp'), -(1002, 'kate jones', 'x bm@sje.com', '0622 7698 7127 2976', 'kent', 'id', '2015-07-15 00:00:00.010', 'nocxojhpzqonitqwnvqwgfzipfacacnotrfcjksoelmcjuuuxrfjwjbrccfrtahdghdzpgpwldthwufmlvigmgumoqgogscksamvbncvjnqbsbvsoivnpkrimphygtpvqesvmzpvg'), -(1003, 'peter jones', 'ov qzc@fn h.com', '9822 1971 1718 5783', 'bend', 'wy', '2015-07-15 00:00:00.015', 'dnughosupjijpigthihwcqwndobafqjggnjdavuwtkjqwfnxbjqpishlmpuughttanwnglcjrdkouvtbnszjxoiuinsqhdnobnphxinfsuxybemgcvkzmnzylxggozkyjesojxzh'), -(1004, 'john white', 'hizlim@qqs.com', '3610 2274 8951 4507', 'los angeles', 'ca', '2015-07-15 00:00:00.020', 'xlfehaxvuccmllwgszomtoykcpsnaacqlacaomqnpfxdcgemhmsexjbkiidtkqiiyfgxioqqmkftpaxsrctvwvvahumgjwyyqwpeidquduhayyrxoyewgqxdqaubxunqldzkdsmezcgeyvlefljjcegkipx'), -(1005, 'vicky smith', 'eo qy@xhiq.com', '2954 5455 4164 4668', 'san francisco', 'id', '2015-07-15 00:00:00.025', 'gevdqvvarwgznzsobebcjfhtkpfzoukybrbxahvnijjsouvsqkijcncpdqeknkmerpyndfibitqjuwcsfdvddzneuqhpxqssoqvfucdpqqikqgfnuhutuzbnkoagvgeszjsrytgymdqswecjxqcqawzoovfpux'), -(1006, 'saul abrams', 'dbcm@yyah.com', '9226 2808 4889 3647', 'seattle', 'wa', '2015-07-15 00:00:00.030', 'jenqrdmgfrnwdffdkziekvaqxlljtepvydosarkghfnskoybjthxgagddsrgoljwhxgokedjgsrwlpscciguuhbrnanbzggtizgrucrxsdfsqpycisjtozszbrnnzylbluxcmkheyoebnhuqowarmobbbwislo'), -(1007, 'paul noris', 'l wejw@jru.com', '0124 8785 8561 5810', 'redmond', 'or', '2015-07-15 00:00:00.035', 
'ooqvtcqzxzgkeslzvxsavosnzwvkewqwvaprunlcdukzqduaihlsjtggzaoxqddxormrnizxlcoiynquvnbranuxltbcamsallfrsykzmhxrftodjdhjwxmffjdileiajzjwjvbc'), -(1008, 'vicky white', 'hjao@yew.com', '8945 4702 5692 5322', 'cheyenne', 'wa', '2015-07-15 00:00:00.040', 'aairhjbnogbhzsvywileymywjilolrtsbeipjvvoenqzulcvjarednvxkfwquunixkwmlhqrmjjxclgqzmgdjmkrodoxvcuqnygfprbdqazamsuvpbmihbwqtwrgunbblvmoysrxqgultdvpythl'), -(1009, 'peter noris', 'vcz@mmno.com', '5906 8272 6022 9313', 'los angeles', 'or', '2015-07-15 00:00:00.045', 'urwxlspnclirksgagjckumwbkcouzgnkiwhyinqlbjeptbrjhfqsfqdoyyhtebsqlfiaxkyiqlsbvuycojruxiomtbnqkbsxhudhrsolelrlyhhexompmeyqrtocemooqcnimqptnbtwhehvnnvazcjppmgxrmkwb'), -(1010, 'luke white', 'cop@lt.com', '7769 7103 9009 1978', 'san francisco', 'id', '2015-07-15 00:00:00.050', 'wlvauvfajkpgvqttborsopgcxmqxigiuakpfnadgnvovkuiunnzucjiwcgbfeghiyqjkzbzjgtnsargvzlcyxpjkvodajkvjzwdjidwcaewwlgogburasfiwjyqxqsbqgwvmmheotwskwlervbcaqobcuklvr'), -(1011, 'julie white', 'w ged@eds.com', '1501 2413 6493 0580', 'los angeles', 'az', '2015-07-15 00:00:00.055', 'ydmgqottebzooqvskprggbseizxerbsaczenhlruhwngnewtltzkhzexqtclwjgrvkvonjtwrxxhjncmbhkmpbgbwopfbukjmeywctnhxdlpqtcibpvnwwxdf'), -(1012, 'kate white', 'mckq@scuk.com', '3067 7855 5104 7101', 'cheyenne', 'wy', '2015-07-15 00:00:00.060', 'hxmgejvnpfydycndxjgbyccqqalevyhdcjfmsfnuxppndkuoynejhpbdheoqjdzhbypuopuwxvcghszuakyufxmgryimbgzhmctltfbnetpeqyqaauzzlrdnnfsdjmddezhhfpqsfgnabneqcmlrw'), -(1013, 'peter white', 'zm je@yvo.com', '4069 8328 0295 7946', 'kent', 'id', '2015-07-15 00:00:00.065', 'zkjtikggavldkdxruxkxuuxeopisrursclzlywdhfiwhjxzpbsabtlztbsffyujuddwukwgchgjeyqiloptlffumjnhaaguhfhmesnccqtsxstceyprrphsdjervbipl'), -(1014, 'paul abrams', 'lsgd@caoz.com', '2357 7007 8185 6409', 'seattle', 'az', '2015-07-15 00:00:00.070', 'hegyytofihlztapmzuzfcxepughovadkrwwatwjjoczxkjqqskkxuyihxipiznviyouyejcwfceocjtvaambzkgcwejqkqdsmgixuslacenknajqrjploayukumdzkasrckmpieowbcgxklygecgq'), -(1015, 'peter shultz', 'hc pb@mcj.com', '3055 7843 5530 7072', 'cheyenne', 'wy', '2015-07-15 00:00:00.075', 'ysqxddshusxlqfvhuawjanqkqcnmtvqqmtecgjwhsxwfwkydnppvxuwnfemnrhpioxomyiewndwfbphehbrcoggcxsdmlrjjwyzuozwgjpjfcrmrmawcsvcsl'), -(1016, 'kate walton', 'iq lq@pay.com', '1208 6365 1565 6332', 'seattle', 'id', '2015-07-15 00:00:00.080', 'xtlsarejqxbmypkmkpqmpwllztkdgkqibwtlszuqlekotqxvkcmkbdbgribiajjmiqgzvumcvbeeysjduvtdpiyqgisxjjysambyltcmjycnxyeeqegiygehknktfdnnqsspluo'), -(1017, 'julie abrams', 'oxirwj@quy.com', '7592 3742 7289 7528', 'san francisco', 'wa', '2015-07-15 00:00:00.085', 'zsskforiwmtqizsbcwgvlewyfepcyimgfrjvlrjboedjhxttfrdtpveylhebgsrdpqbqmletxfqktbvnqgtuiigcxjvfljkxbvbogxtiwviytvhckqbmvxxbkcqphxgfahqglm'), -(1018, 'peter jones', 'ottl@o op.com', '2452 9230 1682 0211', 'seattle', 'or', '2015-07-15 00:00:00.090', 'epfgfsaqjfeitqmkrjxleuodgsmoggfbvqxdrgttlkfmoinbfrfuswxmndvczvtugklpkdoyzgwiohagkjoepdfjaqwdskybszgqruiskrofzzlewjosucfxuznqsfchbwvwtehzecho'), +(1000, 'vicky noris', 'vzbhp@wxv.com', '4355 0142 3460 9324', 'boise', 'ca', '2015-07-15 00:00:00', 'cllnesmssnthtljklifqbqcyhcjwiuoaudxxwcnnwgmsmwgqelplzyckqzuoaitfpxubgpkjtqjhktelmbskvjkxrhziyowxibbgnqneuaiazqduhkynvgeisbxtknbxmqmzbgnptlrcyigjginataks'), +(1001, 'peter smith', 'lf@sas.com', '1932 7149 7430 9595', 'cheyenne', 'ca', '2015-07-15 00:00:00.005', 'cbynrrhwzdyweandiyjtakwwchkmoqtsewodpcbefvfkljvxxysswqkyhhpjipvrnmuvlvqjsiavcdpiinxtetxvwiuxjinmmhnultqqozgbkuoqhdezgvdorrpufstltihbflmsmuemgthxjthbjyfsp'), +(1002, 'kate jones', 'x bm@sje.com', '0622 7698 7127 2976', 'kent', 'id', '2015-07-15 00:00:00.010', 
'nocxojhpzqonitqwnvqwgfzipfacacnotrfcjksoelmcjuuuxrfjwjbrccfrtahdghdzpgpwldthwufmlvigmgumoqgogscksamvbncvjnqbsbvsoivnpkrimphygtpvqesvmzpvg'), +(1003, 'peter jones', 'ov qzc@fn h.com', '9822 1971 1718 5783', 'bend', 'wy', '2015-07-15 00:00:00.015', 'dnughosupjijpigthihwcqwndobafqjggnjdavuwtkjqwfnxbjqpishlmpuughttanwnglcjrdkouvtbnszjxoiuinsqhdnobnphxinfsuxybemgcvkzmnzylxggozkyjesojxzh'), +(1004, 'john white', 'hizlim@qqs.com', '3610 2274 8951 4507', 'los angeles', 'ca', '2015-07-15 00:00:00.020', 'xlfehaxvuccmllwgszomtoykcpsnaacqlacaomqnpfxdcgemhmsexjbkiidtkqiiyfgxioqqmkftpaxsrctvwvvahumgjwyyqwpeidquduhayyrxoyewgqxdqaubxunqldzkdsmezcgeyvlefljjcegkipx'), +(1005, 'vicky smith', 'eo qy@xhiq.com', '2954 5455 4164 4668', 'san francisco', 'id', '2015-07-15 00:00:00.025', 'gevdqvvarwgznzsobebcjfhtkpfzoukybrbxahvnijjsouvsqkijcncpdqeknkmerpyndfibitqjuwcsfdvddzneuqhpxqssoqvfucdpqqikqgfnuhutuzbnkoagvgeszjsrytgymdqswecjxqcqawzoovfpux'), +(1006, 'saul abrams', 'dbcm@yyah.com', '9226 2808 4889 3647', 'seattle', 'wa', '2015-07-15 00:00:00.030', 'jenqrdmgfrnwdffdkziekvaqxlljtepvydosarkghfnskoybjthxgagddsrgoljwhxgokedjgsrwlpscciguuhbrnanbzggtizgrucrxsdfsqpycisjtozszbrnnzylbluxcmkheyoebnhuqowarmobbbwislo'), +(1007, 'paul noris', 'l wejw@jru.com', '0124 8785 8561 5810', 'redmond', 'or', '2015-07-15 00:00:00.035', 'ooqvtcqzxzgkeslzvxsavosnzwvkewqwvaprunlcdukzqduaihlsjtggzaoxqddxormrnizxlcoiynquvnbranuxltbcamsallfrsykzmhxrftodjdhjwxmffjdileiajzjwjvbc'), +(1008, 'vicky white', 'hjao@yew.com', '8945 4702 5692 5322', 'cheyenne', 'wa', '2015-07-15 00:00:00.040', 'aairhjbnogbhzsvywileymywjilolrtsbeipjvvoenqzulcvjarednvxkfwquunixkwmlhqrmjjxclgqzmgdjmkrodoxvcuqnygfprbdqazamsuvpbmihbwqtwrgunbblvmoysrxqgultdvpythl'), +(1009, 'peter noris', 'vcz@mmno.com', '5906 8272 6022 9313', 'los angeles', 'or', '2015-07-15 00:00:00.045', 'urwxlspnclirksgagjckumwbkcouzgnkiwhyinqlbjeptbrjhfqsfqdoyyhtebsqlfiaxkyiqlsbvuycojruxiomtbnqkbsxhudhrsolelrlyhhexompmeyqrtocemooqcnimqptnbtwhehvnnvazcjppmgxrmkwb'), +(1010, 'luke white', 'cop@lt.com', '7769 7103 9009 1978', 'san francisco', 'id', '2015-07-15 00:00:00.050', 'wlvauvfajkpgvqttborsopgcxmqxigiuakpfnadgnvovkuiunnzucjiwcgbfeghiyqjkzbzjgtnsargvzlcyxpjkvodajkvjzwdjidwcaewwlgogburasfiwjyqxqsbqgwvmmheotwskwlervbcaqobcuklvr'), +(1011, 'julie white', 'w ged@eds.com', '1501 2413 6493 0580', 'los angeles', 'az', '2015-07-15 00:00:00.055', 'ydmgqottebzooqvskprggbseizxerbsaczenhlruhwngnewtltzkhzexqtclwjgrvkvonjtwrxxhjncmbhkmpbgbwopfbukjmeywctnhxdlpqtcibpvnwwxdf'), +(1012, 'kate white', 'mckq@scuk.com', '3067 7855 5104 7101', 'cheyenne', 'wy', '2015-07-15 00:00:00.060', 'hxmgejvnpfydycndxjgbyccqqalevyhdcjfmsfnuxppndkuoynejhpbdheoqjdzhbypuopuwxvcghszuakyufxmgryimbgzhmctltfbnetpeqyqaauzzlrdnnfsdjmddezhhfpqsfgnabneqcmlrw'), +(1013, 'peter white', 'zm je@yvo.com', '4069 8328 0295 7946', 'kent', 'id', '2015-07-15 00:00:00.065', 'zkjtikggavldkdxruxkxuuxeopisrursclzlywdhfiwhjxzpbsabtlztbsffyujuddwukwgchgjeyqiloptlffumjnhaaguhfhmesnccqtsxstceyprrphsdjervbipl'), +(1014, 'paul abrams', 'lsgd@caoz.com', '2357 7007 8185 6409', 'seattle', 'az', '2015-07-15 00:00:00.070', 'hegyytofihlztapmzuzfcxepughovadkrwwatwjjoczxkjqqskkxuyihxipiznviyouyejcwfceocjtvaambzkgcwejqkqdsmgixuslacenknajqrjploayukumdzkasrckmpieowbcgxklygecgq'), +(1015, 'peter shultz', 'hc pb@mcj.com', '3055 7843 5530 7072', 'cheyenne', 'wy', '2015-07-15 00:00:00.075', 'ysqxddshusxlqfvhuawjanqkqcnmtvqqmtecgjwhsxwfwkydnppvxuwnfemnrhpioxomyiewndwfbphehbrcoggcxsdmlrjjwyzuozwgjpjfcrmrmawcsvcsl'), +(1016, 'kate walton', 'iq lq@pay.com', '1208 6365 1565 
6332', 'seattle', 'id', '2015-07-15 00:00:00.080', 'xtlsarejqxbmypkmkpqmpwllztkdgkqibwtlszuqlekotqxvkcmkbdbgribiajjmiqgzvumcvbeeysjduvtdpiyqgisxjjysambyltcmjycnxyeeqegiygehknktfdnnqsspluo'), +(1017, 'julie abrams', 'oxirwj@quy.com', '7592 3742 7289 7528', 'san francisco', 'wa', '2015-07-15 00:00:00.085', 'zsskforiwmtqizsbcwgvlewyfepcyimgfrjvlrjboedjhxttfrdtpveylhebgsrdpqbqmletxfqktbvnqgtuiigcxjvfljkxbvbogxtiwviytvhckqbmvxxbkcqphxgfahqglm'), +(1018, 'peter jones', 'ottl@o op.com', '2452 9230 1682 0211', 'seattle', 'or', '2015-07-15 00:00:00.090', 'epfgfsaqjfeitqmkrjxleuodgsmoggfbvqxdrgttlkfmoinbfrfuswxmndvczvtugklpkdoyzgwiohagkjoepdfjaqwdskybszgqruiskrofzzlewjosucfxuznqsfchbwvwtehzecho'), (1019, 'deiter white', 'ejb@owf.com', '1807 5157 8942 6763', 'phoenix', 'az', '2015-07-15 00:00:00.095', 'jfqsfzzolpbcpwpdorfdodwupokxvrhpwnowcowyezlrpibupikowcpjuduehavglpcxyibofhdrxlpeghgonfffkagkgzlbbcqqolbcrprttwytvwqcmmsvvywmpxbxbyrbhywrvkulyafhiejymgxndz'); diff --git a/e2e_test/s3/json_file.py b/e2e_test/s3/json_file.py index e3e3c4850bd1b..585f44a7ce825 100644 --- a/e2e_test/s3/json_file.py +++ b/e2e_test/s3/json_file.py @@ -20,7 +20,7 @@ def do_test(client, config, N, prefix): cur = conn.cursor() # Execute a SELECT statement - cur.execute(f'''CREATE TABLE s3_test_jsonfile( + cur.execute(f'''CREATE TABLE s3_test_jsonfile( id int, name TEXT, sex int, diff --git a/e2e_test/s3/run.py b/e2e_test/s3/run.py index e3492923783b8..58e3c5765c0ef 100644 --- a/e2e_test/s3/run.py +++ b/e2e_test/s3/run.py @@ -19,7 +19,7 @@ def do_test(config, N, n, prefix): cur = conn.cursor() # Execute a SELECT statement - cur.execute(f'''CREATE TABLE s3_test( + cur.execute(f'''CREATE TABLE s3_test( id int, name TEXT, sex int, @@ -103,7 +103,7 @@ def do_test(config, N, n, prefix): config["S3_BUCKET"], f"{run_id}_data_{i}.ndjson", f"data_{i}.ndjson" - + ) print(f"Uploaded {run_id}_data_{i}.ndjson to S3") os.remove(f"data_{i}.ndjson") diff --git a/e2e_test/s3/run_csv.py b/e2e_test/s3/run_csv.py index 2b3a6fbb0d493..b721e3c796066 100644 --- a/e2e_test/s3/run_csv.py +++ b/e2e_test/s3/run_csv.py @@ -18,7 +18,7 @@ def do_test(config, N, n, prefix): # Open a cursor to execute SQL statements cur = conn.cursor() - cur.execute(f'''CREATE TABLE s3_test_csv_without_headers( + cur.execute(f'''CREATE TABLE s3_test_csv_without_headers( a int, b int, c int, @@ -32,7 +32,7 @@ def do_test(config, N, n, prefix): s3.endpoint_url = 'https://{config['S3_ENDPOINT']}' ) FORMAT PLAIN ENCODE CSV (delimiter = ',', without_header = true);''') - cur.execute(f'''CREATE TABLE s3_test_csv_with_headers( + cur.execute(f'''CREATE TABLE s3_test_csv_with_headers( a int, b int, c int, diff --git a/e2e_test/sink/kafka/create_sink.slt b/e2e_test/sink/kafka/create_sink.slt index d9c52d963b468..a97e1df50aec4 100644 --- a/e2e_test/sink/kafka/create_sink.slt +++ b/e2e_test/sink/kafka/create_sink.slt @@ -1,5 +1,5 @@ statement ok -create table t_kafka ( +create table t_kafka ( id integer primary key, v_varchar varchar, v_smallint smallint, diff --git a/e2e_test/source/basic/ddl.slt b/e2e_test/source/basic/ddl.slt index 13b6a1d73d6f5..c1941d4697ffa 100644 --- a/e2e_test/source/basic/ddl.slt +++ b/e2e_test/source/basic/ddl.slt @@ -29,7 +29,7 @@ create source invalid_startup_timestamp ( ) FORMAT PLAIN ENCODE JSON; statement error db error: ERROR: QueryError: Invalid input syntax: schema definition is required for ENCODE JSON -create source invalid_schema_definition +create source invalid_schema_definition with ( connector = 'kafka', topic = 'kafka_1_partition_topic', @@ 
-153,7 +153,7 @@ statement ok drop table s # Test create source with connection -statement ok +statement ok CREATE CONNECTION mock WITH (type = 'privatelink', provider = 'mock'); # Reference to non-existant connection diff --git a/e2e_test/source/basic/kafka.slt b/e2e_test/source/basic/kafka.slt index da4e0d5dbd458..6c38b31779c14 100644 --- a/e2e_test/source/basic/kafka.slt +++ b/e2e_test/source/basic/kafka.slt @@ -157,7 +157,7 @@ create table s8_no_schema_field ( statement ok create table s9 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -165,7 +165,7 @@ create table s9 with ( statement ok create table s10 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -173,7 +173,7 @@ create table s10 with ( statement ok create table s11 with ( - connector = 'kafka', + connector = 'kafka', topic = 'proto_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest', @@ -183,8 +183,8 @@ create table s11 with ( statement ok CREATE TABLE s12( id int, - code string, - timestamp bigint, + code string, + timestamp bigint, xfas struct[], contacts struct, jsonb jsonb) @@ -257,7 +257,7 @@ create table s16 (v1 int, v2 varchar) with ( statement ok create source s17 with ( - connector = 'kafka', + connector = 'kafka', topic = 'proto_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest', @@ -266,7 +266,7 @@ create source s17 with ( statement ok create source s18 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -274,7 +274,7 @@ create source s18 with ( # we cannot use confluent schema registry when connector is not kafka statement error -create table s19 +create table s19 with ( connector = 'kinesis', topic = 'topic', @@ -330,7 +330,7 @@ create source s24 (id bytea) with ( ) FORMAT PLAIN ENCODE BYTES # bytes format only accept one column -statement error +statement error create source s25 (v1 bytea, v2 int) with ( connector = 'kafka', topic = 'kafka_source_format_bytes', @@ -339,7 +339,7 @@ create source s25 (v1 bytea, v2 int) with ( ) FORMAT PLAIN ENCODE BYTES # bytes format only accept bytea type -statement error +statement error create source s26 (id int) with ( connector = 'kafka', topic = 'kafka_source_format_bytes', @@ -528,7 +528,7 @@ create materialized view source_mv1 as select * from s6; statement ok create materialized view source_mv2 as select sum(v1) as sum_v1, count(v2) as count_v2 from s6 where v1 > 3; -statement ok +statement ok create materialized view source_mv3 as select * from s24; # Wait for source @@ -646,7 +646,7 @@ select * from s23; \x31324344 \xdeadbeef -query II +query II SELECT * FROM @@ -659,7 +659,7 @@ ORDER BY 1003 {"_id": {"$numberLong": "1003"}, "email": "ed@walker.com", "first_name": "Edward", "last_name": "Walker"} 1004 {"_id": {"$numberLong": "1004"}, "email": "annek@noanswer.org", "first_name": "Anne", "last_name": "Kretchmar"} -query II +query II SELECT * FROM @@ -673,7 +673,7 @@ ORDER BY 1004 {"_id": {"$numberLong": "1004"}, "email": "annek@noanswer.org", "first_name": "Anne", "last_name": "Kretchmar"} -query II +query II SELECT * FROM @@ -702,7 +702,7 @@ order by 56166 1 56166 2 -query I +query I SELECT * FROM source_mv3 ORDER BY id; ---- \x6b6b @@ -719,7 +719,7 @@ drop 
materialized view source_mv1 statement ok drop materialized view source_mv2 -statement ok +statement ok drop materialized view source_mv3 statement ok diff --git a/e2e_test/source/basic/kafka_batch.slt b/e2e_test/source/basic/kafka_batch.slt index 4b8c2e8cd2028..8d8d454c7c977 100644 --- a/e2e_test/source/basic/kafka_batch.slt +++ b/e2e_test/source/basic/kafka_batch.slt @@ -49,8 +49,8 @@ create source s6 (v1 int, v2 varchar) with ( statement ok CREATE SOURCE s7( id int, - code string, - timestamp bigint, + code string, + timestamp bigint, xfas struct[], contacts struct) WITH ( @@ -105,7 +105,7 @@ t t query B -select _rw_kafka_timestamp < now() from s1 +select _rw_kafka_timestamp < now() from s1 ---- t t @@ -191,8 +191,8 @@ select count(*) from s8 ---- 0 -query I -select * from s9 order by id +query I +select * from s9 order by id ---- \x6b6b \x776561776566776566 diff --git a/e2e_test/source/basic/nosim_kafka.slt b/e2e_test/source/basic/nosim_kafka.slt index f293a90544cd1..bc398be748625 100644 --- a/e2e_test/source/basic/nosim_kafka.slt +++ b/e2e_test/source/basic/nosim_kafka.slt @@ -141,7 +141,7 @@ select count(*) from debezium_compact; ---- 2 -statement ok +statement ok DROP TABLE upsert_avro_json_default_key; statement ok diff --git a/e2e_test/source/basic/old_row_format_syntax/ddl.slt b/e2e_test/source/basic/old_row_format_syntax/ddl.slt index 7ed0daf0148c9..d0a8cd9ba08ea 100644 --- a/e2e_test/source/basic/old_row_format_syntax/ddl.slt +++ b/e2e_test/source/basic/old_row_format_syntax/ddl.slt @@ -29,7 +29,7 @@ create source invalid_startup_timestamp ( ) ROW FORMAT JSON; statement error db error: ERROR: QueryError: Invalid input syntax: schema definition is required for ENCODE JSON -create source invalid_schema_definition +create source invalid_schema_definition with ( connector = 'kafka', topic = 'kafka_1_partition_topic', @@ -153,7 +153,7 @@ statement ok drop table s # Test create source with connection -statement ok +statement ok CREATE CONNECTION mock WITH (type = 'privatelink', provider = 'mock'); # Reference to non-existant connection diff --git a/e2e_test/source/basic/old_row_format_syntax/kafka.slt b/e2e_test/source/basic/old_row_format_syntax/kafka.slt index 7a5222e93b500..be96c7266f00d 100644 --- a/e2e_test/source/basic/old_row_format_syntax/kafka.slt +++ b/e2e_test/source/basic/old_row_format_syntax/kafka.slt @@ -157,7 +157,7 @@ create table s8_no_schema_field ( statement ok create table s9 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -165,7 +165,7 @@ create table s9 with ( statement ok create table s10 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -173,7 +173,7 @@ create table s10 with ( statement ok create table s11 with ( - connector = 'kafka', + connector = 'kafka', topic = 'proto_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest', @@ -183,8 +183,8 @@ create table s11 with ( statement ok CREATE TABLE s12( id int, - code string, - timestamp bigint, + code string, + timestamp bigint, xfas struct[], contacts struct, jsonb jsonb) @@ -257,7 +257,7 @@ create table s16 (v1 int, v2 varchar) with ( statement ok create source s17 with ( - connector = 'kafka', + connector = 'kafka', topic = 'proto_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest', @@ -266,7 +266,7 @@ create 
source s17 with ( statement ok create source s18 with ( - connector = 'kafka', + connector = 'kafka', topic = 'avro_c_bin', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' @@ -274,7 +274,7 @@ create source s18 with ( # we cannot use confluent schema registry when connector is not kafka statement error -create table s19 +create table s19 with ( connector = 'kinesis', topic = 'topic', @@ -331,7 +331,7 @@ create source s24 (id bytea) with ( ) ROW FORMAT BYTES # bytes format only accept one column -statement error +statement error create source s25 (v1 bytea, v2 int) with ( connector = 'kafka', topic = 'kafka_source_format_bytes', @@ -340,7 +340,7 @@ create source s25 (v1 bytea, v2 int) with ( ) ROW FORMAT BYTES # bytes format only accept bytea type -statement error +statement error create source s26 (id int) with ( connector = 'kafka', topic = 'kafka_source_format_bytes', @@ -519,7 +519,7 @@ create materialized view source_mv1 as select * from s6; statement ok create materialized view source_mv2 as select sum(v1) as sum_v1, count(v2) as count_v2 from s6 where v1 > 3; -statement ok +statement ok create materialized view source_mv3 as select * from s24; # Wait for source @@ -637,7 +637,7 @@ select * from s23; \x31324344 \xdeadbeef -query II +query II SELECT * FROM @@ -650,7 +650,7 @@ ORDER BY 1003 {"_id": {"$numberLong": "1003"}, "email": "ed@walker.com", "first_name": "Edward", "last_name": "Walker"} 1004 {"_id": {"$numberLong": "1004"}, "email": "annek@noanswer.org", "first_name": "Anne", "last_name": "Kretchmar"} -query II +query II SELECT * FROM @@ -664,7 +664,7 @@ ORDER BY 1004 {"_id": {"$numberLong": "1004"}, "email": "annek@noanswer.org", "first_name": "Anne", "last_name": "Kretchmar"} -query II +query II SELECT * FROM @@ -693,7 +693,7 @@ order by 56166 1 56166 2 -query I +query I SELECT * FROM source_mv3 ORDER BY id; ---- \x6b6b @@ -705,7 +705,7 @@ drop materialized view source_mv1 statement ok drop materialized view source_mv2 -statement ok +statement ok drop materialized view source_mv3 statement ok diff --git a/e2e_test/source/basic/old_row_format_syntax/kafka_batch.slt b/e2e_test/source/basic/old_row_format_syntax/kafka_batch.slt index 6bde8e273b9ce..5ab2f2dbce15f 100644 --- a/e2e_test/source/basic/old_row_format_syntax/kafka_batch.slt +++ b/e2e_test/source/basic/old_row_format_syntax/kafka_batch.slt @@ -49,8 +49,8 @@ create source s6 (v1 int, v2 varchar) with ( statement ok CREATE SOURCE s7( id int, - code string, - timestamp bigint, + code string, + timestamp bigint, xfas struct[], contacts struct) WITH ( @@ -105,7 +105,7 @@ t t query B -select _rw_kafka_timestamp < now() from s1 +select _rw_kafka_timestamp < now() from s1 ---- t t @@ -191,8 +191,8 @@ select count(*) from s8 ---- 0 -query I -select * from s9 order by id +query I +select * from s9 order by id ---- \x6b6b \x776561776566776566 diff --git a/e2e_test/source/basic/old_row_format_syntax/nosim_kafka.slt b/e2e_test/source/basic/old_row_format_syntax/nosim_kafka.slt index 27a1b7dfd87ea..582aff7d958fb 100644 --- a/e2e_test/source/basic/old_row_format_syntax/nosim_kafka.slt +++ b/e2e_test/source/basic/old_row_format_syntax/nosim_kafka.slt @@ -8,7 +8,7 @@ WITH ( connector = 'kafka', properties.bootstrap.server = 'message_queue:29092', topic = 'upsert_avro_json') -ROW FORMAT UPSERT_AVRO +ROW FORMAT UPSERT_AVRO row schema location confluent schema registry 'http://message_queue:8081' @@ -19,7 +19,7 @@ WITH ( connector = 'kafka', properties.bootstrap.server = 'message_queue:29092', topic = 
'upsert_student_key_not_subset_of_value_avro_json') -ROW FORMAT UPSERT_AVRO +ROW FORMAT UPSERT_AVRO row schema location confluent schema registry 'http://message_queue:8081' @@ -29,7 +29,7 @@ WITH ( connector = 'kafka', properties.bootstrap.server = 'message_queue:29092', topic = 'upsert_student_avro_json') -ROW FORMAT UPSERT_AVRO +ROW FORMAT UPSERT_AVRO row schema location confluent schema registry 'http://message_queue:8081' @@ -41,7 +41,7 @@ WITH ( connector = 'kafka', properties.bootstrap.server = 'message_queue:29092', topic = 'upsert_avro_json') -ROW FORMAT UPSERT_AVRO +ROW FORMAT UPSERT_AVRO row schema location confluent schema registry 'http://message_queue:8081' # Just ignore the kafka key, it works @@ -53,7 +53,7 @@ WITH ( connector = 'kafka', properties.bootstrap.server = 'message_queue:29092', topic = 'upsert_avro_json') -ROW FORMAT UPSERT_AVRO +ROW FORMAT UPSERT_AVRO row schema location confluent schema registry 'http://message_queue:8081' statement ok @@ -147,7 +147,7 @@ select count(*) from debezium_compact; ---- 2 -statement ok +statement ok DROP TABLE upsert_avro_json_default_key; statement ok diff --git a/e2e_test/streaming/append_only.slt b/e2e_test/streaming/append_only.slt index a5c944d303f17..c06523a7371b1 100644 --- a/e2e_test/streaming/append_only.slt +++ b/e2e_test/streaming/append_only.slt @@ -66,7 +66,7 @@ select * from mv4; ## Group TopN statement ok -create materialized view mv4_1 as +create materialized view mv4_1 as select v1, v3 from ( select *, ROW_NUMBER() OVER (PARTITION BY v3 ORDER BY v1) as rank from t4 ) diff --git a/e2e_test/streaming/basic.slt b/e2e_test/streaming/basic.slt index c11618ca5788e..3ebb210ef94d3 100644 --- a/e2e_test/streaming/basic.slt +++ b/e2e_test/streaming/basic.slt @@ -127,7 +127,7 @@ statement ok create materialized view mv(a,b) as select * from t; statement ok -drop materialized view mv +drop materialized view mv statement ok drop table t diff --git a/e2e_test/streaming/bug_fixes/issue_8084.slt b/e2e_test/streaming/bug_fixes/issue_8084.slt index 446620cd57c4b..c4452f885321e 100644 --- a/e2e_test/streaming/bug_fixes/issue_8084.slt +++ b/e2e_test/streaming/bug_fixes/issue_8084.slt @@ -12,7 +12,7 @@ create materialized view mv as select t1.* from t as t1 full join t as t2 on t1. 
statement ok insert into t values(null); -# TODO: https://github.com/risingwavelabs/risingwave/issues/8084 +# TODO: https://github.com/risingwavelabs/risingwave/issues/8084 query I select * from mv; ---- diff --git a/e2e_test/streaming/demo/ad_ctr.slt b/e2e_test/streaming/demo/ad_ctr.slt index 0645fae3f999f..da7a0620d596d 100644 --- a/e2e_test/streaming/demo/ad_ctr.slt +++ b/e2e_test/streaming/demo/ad_ctr.slt @@ -18,14 +18,14 @@ CREATE TABLE ad_click ( ); statement ok -INSERT INTO ad_impression VALUES +INSERT INTO ad_impression VALUES ('8821808526777993777', '7', '2022-06-10 12:20:04.858173'), ('7151244365040293409', '7', '2022-06-10 12:20:06.409411'), ('6925263822026025842', '7', '2022-06-10 12:20:06.420565'), ('3665010658430074808', '8', '2022-06-10 12:20:06.911027'); statement ok -INSERT INTO ad_click VALUES +INSERT INTO ad_click VALUES ('8821808526777993777', '2022-06-10 12:20:04.923066'), ('3665010658430074808', '2022-06-10 12:20:07.651162'); diff --git a/e2e_test/streaming/group_top_n.slt b/e2e_test/streaming/group_top_n.slt index 469fbf21b3b7d..16d3187e16d7f 100644 --- a/e2e_test/streaming/group_top_n.slt +++ b/e2e_test/streaming/group_top_n.slt @@ -5,49 +5,49 @@ statement ok create table t(x int, y int); statement ok -create materialized view mv as +create materialized view mv as select x, y from ( select *, ROW_NUMBER() OVER (PARTITION BY x ORDER BY y) as rank from t ) where rank <= 3; statement ok -create materialized view mv_with_expr_in_window as +create materialized view mv_with_expr_in_window as select x, y from ( select *, ROW_NUMBER() OVER (PARTITION BY x/2 ORDER BY 6-y) as rank from t ) where rank <= 3; statement ok -create materialized view mv_with_lb as +create materialized view mv_with_lb as select x, y from ( select *, ROW_NUMBER() OVER (PARTITION BY x ORDER BY y) as rank from t ) where rank <= 3 AND rank > 1; statement ok -create materialized view mv_rank_order_group_same_key as +create materialized view mv_rank_order_group_same_key as SELECT x, y FROM ( SELECT *, rank() over (partition by x order by x) as rank FROM t ) WHERE rank <=1; statement ok -create materialized view mv_rank_no_group as +create materialized view mv_rank_no_group as select x, y from ( select *, RANK() OVER (ORDER BY y) as rank from t ) where rank <= 4; statement ok -create materialized view mv_rank as +create materialized view mv_rank as select x, y from ( select *, RANK() OVER (PARTITION BY x ORDER BY y) as rank from t ) where rank <= 3; statement ok -insert into t values +insert into t values (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), @@ -185,7 +185,7 @@ CREATE TABLE bid ( ); statement ok -insert into bid values +insert into bid values ('2020-04-15 08:05', 4, 'A', 'supplier1'), ('2020-04-15 08:06', 4, 'C', 'supplier2'), ('2020-04-15 08:07', 2, 'G', 'supplier1'), @@ -199,7 +199,7 @@ insert into bid values # Window Top-N follows directly after Window TVF # Top 3 items which have the highest price for every tumbling 10 minutes window statement ok -CREATE MATERIALIZED VIEW mv as +CREATE MATERIALIZED VIEW mv as SELECT window_start, window_end, item, price FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY window_start, window_end ORDER BY price DESC) as rownum @@ -223,7 +223,7 @@ drop materialized view mv; # Window Top-N which follows after Window Aggregation # Top 3 suppliers who have the highest sales for every tumbling 10 minutes window. 
statement ok -CREATE MATERIALIZED VIEW mv as +CREATE MATERIALIZED VIEW mv as SELECT window_start, window_end, supplier_id, price, cnt FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY window_start, window_end ORDER BY price DESC) as rownum diff --git a/e2e_test/streaming/natural_and_cross_join.slt b/e2e_test/streaming/natural_and_cross_join.slt index 4407244623cf3..e673b953e7e1f 100644 --- a/e2e_test/streaming/natural_and_cross_join.slt +++ b/e2e_test/streaming/natural_and_cross_join.slt @@ -31,7 +31,7 @@ FROM employees e NATURAL JOIN departments d ORDER BY employee_name; # Nested-loop joins are not supported in the streaming mode. -statement error +statement error CREATE MATERIALIZED VIEW employee_department_cross_join AS SELECT e.employee_name, d.department_name FROM employees e CROSS JOIN departments d diff --git a/e2e_test/streaming/nexmark/sinks/q18.slt.part b/e2e_test/streaming/nexmark/sinks/q18.slt.part index 46d2ad3b1c83b..d1412ff9be351 100644 --- a/e2e_test/streaming/nexmark/sinks/q18.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q18.slt.part @@ -2,9 +2,9 @@ statement ok CREATE SINK nexmark_q18 AS SELECT auction, bidder, price, channel, url, date_time, extra FROM ( - SELECT *, + SELECT *, ROW_NUMBER() OVER ( - PARTITION BY bidder, auction + PARTITION BY bidder, auction ORDER BY date_time DESC, extra -- extra is addtionally added here to make the result deterministic ) AS rank_number diff --git a/e2e_test/streaming/nexmark/sinks/q2.slt.part b/e2e_test/streaming/nexmark/sinks/q2.slt.part index 55a1e697fb222..d6f67645c332b 100644 --- a/e2e_test/streaming/nexmark/sinks/q2.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q2.slt.part @@ -1,6 +1,6 @@ statement ok CREATE SINK nexmark_q2 AS -SELECT auction, price FROM bid +SELECT auction, price FROM bid WHERE auction = 1007 OR auction = 1020 OR auction = 2001 OR auction = 2019 OR auction = 2087 WITH ( connector = 'blackhole', type = 'append-only', force_append_only = 'true'); diff --git a/e2e_test/streaming/nexmark/sinks/q21.slt.part b/e2e_test/streaming/nexmark/sinks/q21.slt.part index b1bfc2aadadab..49821b148a069 100644 --- a/e2e_test/streaming/nexmark/sinks/q21.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q21.slt.part @@ -9,9 +9,9 @@ SELECT WHEN LOWER(channel) = 'baidu' THEN '3' ELSE (regexp_match(url, '(&|^)channel_id=([^&]*)'))[2] END - AS channel_id -FROM + AS channel_id +FROM bid -WHERE +WHERE (regexp_match(url, '(&|^)channel_id=([^&]*)'))[2] is not null or LOWER(channel) in ('apple', 'google', 'facebook', 'baidu') WITH ( connector = 'blackhole', type = 'append-only', force_append_only = 'true'); diff --git a/e2e_test/streaming/nexmark/sinks/q22.slt.part b/e2e_test/streaming/nexmark/sinks/q22.slt.part index b6086b6a9ee0f..bfb67f3d22adb 100644 --- a/e2e_test/streaming/nexmark/sinks/q22.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q22.slt.part @@ -7,7 +7,7 @@ SELECT auction, bidder, price, channel, split_part(url, '/', 4) as dir1, split_part(url, '/', 5) as dir2, - split_part(url, '/', 6) as dir3 -FROM + split_part(url, '/', 6) as dir3 +FROM bid WITH ( connector = 'blackhole', type = 'append-only', force_append_only = 'true'); diff --git a/e2e_test/streaming/nexmark/sinks/q4.slt.part b/e2e_test/streaming/nexmark/sinks/q4.slt.part index 86b443554cd32..f6ad88e685318 100644 --- a/e2e_test/streaming/nexmark/sinks/q4.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q4.slt.part @@ -5,17 +5,17 @@ SELECT Q.category, AVG(Q.final) as avg FROM ( - SELECT + SELECT MAX(B.price) AS final, A.category - FROM - auction A, + FROM + auction A, bid 
B - WHERE - A.id = B.auction AND + WHERE + A.id = B.auction AND B.date_time BETWEEN A.date_time AND A.expires - GROUP BY + GROUP BY A.id, A.category ) Q -GROUP BY +GROUP BY Q.category WITH ( connector = 'blackhole', type = 'append-only', force_append_only = 'true'); diff --git a/e2e_test/streaming/nexmark/sinks/q5.slt.part b/e2e_test/streaming/nexmark/sinks/q5.slt.part index 271f231ab7c08..7ed6dc5d249c0 100644 --- a/e2e_test/streaming/nexmark/sinks/q5.slt.part +++ b/e2e_test/streaming/nexmark/sinks/q5.slt.part @@ -6,9 +6,9 @@ SELECT FROM ( SELECT bid.auction, - count(*) AS num, + count(*) AS num, window_start AS starttime - FROM + FROM HOP(bid, date_time, INTERVAL '2' SECOND, INTERVAL '10' SECOND) GROUP BY bid.auction, @@ -27,10 +27,10 @@ JOIN ( bid.auction, window_start ) AS CountBids - GROUP BY + GROUP BY CountBids.starttime_c ) AS MaxBids -ON +ON AuctionBids.starttime = MaxBids.starttime_c AND AuctionBids.num >= MaxBids.maxn WITH ( connector = 'blackhole', type = 'append-only', force_append_only = 'true'); diff --git a/e2e_test/streaming/nexmark/views/q18.slt.part b/e2e_test/streaming/nexmark/views/q18.slt.part index a6435ef9f61a6..7ad26fad29a41 100644 --- a/e2e_test/streaming/nexmark/views/q18.slt.part +++ b/e2e_test/streaming/nexmark/views/q18.slt.part @@ -2,9 +2,9 @@ statement ok CREATE MATERIALIZED VIEW nexmark_q18 AS SELECT auction, bidder, price, channel, url, date_time, extra FROM ( - SELECT *, + SELECT *, ROW_NUMBER() OVER ( - PARTITION BY bidder, auction + PARTITION BY bidder, auction ORDER BY date_time DESC, extra -- extra is addtionally added here to make the result deterministic ) AS rank_number diff --git a/e2e_test/streaming/nexmark/views/q2.slt.part b/e2e_test/streaming/nexmark/views/q2.slt.part index 95e50c96fb13f..2c20c6dc7f5ed 100644 --- a/e2e_test/streaming/nexmark/views/q2.slt.part +++ b/e2e_test/streaming/nexmark/views/q2.slt.part @@ -1,5 +1,5 @@ statement ok CREATE MATERIALIZED VIEW nexmark_q2 AS -SELECT auction, price FROM bid +SELECT auction, price FROM bid WHERE auction = 1007 OR auction = 1020 OR auction = 2001 OR auction = 2019 OR auction = 2087; diff --git a/e2e_test/streaming/nexmark/views/q21.slt.part b/e2e_test/streaming/nexmark/views/q21.slt.part index 630ea01ea2ada..5910c9d86c82d 100644 --- a/e2e_test/streaming/nexmark/views/q21.slt.part +++ b/e2e_test/streaming/nexmark/views/q21.slt.part @@ -9,8 +9,8 @@ SELECT WHEN LOWER(channel) = 'baidu' THEN '3' ELSE (regexp_match(url, '(&|^)channel_id=([^&]*)'))[2] END - AS channel_id -FROM + AS channel_id +FROM bid -WHERE +WHERE (regexp_match(url, '(&|^)channel_id=([^&]*)'))[2] is not null or LOWER(channel) in ('apple', 'google', 'facebook', 'baidu'); diff --git a/e2e_test/streaming/nexmark/views/q22.slt.part b/e2e_test/streaming/nexmark/views/q22.slt.part index 63aaa1b0960f7..4dfe725c5ef70 100644 --- a/e2e_test/streaming/nexmark/views/q22.slt.part +++ b/e2e_test/streaming/nexmark/views/q22.slt.part @@ -7,6 +7,6 @@ SELECT auction, bidder, price, channel, split_part(url, '/', 4) as dir1, split_part(url, '/', 5) as dir2, - split_part(url, '/', 6) as dir3 -FROM + split_part(url, '/', 6) as dir3 +FROM bid; diff --git a/e2e_test/streaming/nexmark/views/q4.slt.part b/e2e_test/streaming/nexmark/views/q4.slt.part index f083bca2d4323..11fb8ccd93b6e 100644 --- a/e2e_test/streaming/nexmark/views/q4.slt.part +++ b/e2e_test/streaming/nexmark/views/q4.slt.part @@ -5,16 +5,16 @@ SELECT Q.category, AVG(Q.final) as avg FROM ( - SELECT + SELECT MAX(B.price) AS final, A.category - FROM - auction A, + FROM + auction A, bid B - WHERE - 
A.id = B.auction AND + WHERE + A.id = B.auction AND B.date_time BETWEEN A.date_time AND A.expires - GROUP BY + GROUP BY A.id, A.category ) Q -GROUP BY +GROUP BY Q.category; diff --git a/e2e_test/streaming/nexmark/views/q5.slt.part b/e2e_test/streaming/nexmark/views/q5.slt.part index c2c1b0b91976b..a38cc7c29e644 100644 --- a/e2e_test/streaming/nexmark/views/q5.slt.part +++ b/e2e_test/streaming/nexmark/views/q5.slt.part @@ -6,9 +6,9 @@ SELECT FROM ( SELECT bid.auction, - count(*) AS num, + count(*) AS num, window_start AS starttime - FROM + FROM HOP(bid, date_time, INTERVAL '2' SECOND, INTERVAL '10' SECOND) GROUP BY window_start, @@ -27,9 +27,9 @@ JOIN ( bid.auction, window_start ) AS CountBids - GROUP BY + GROUP BY CountBids.starttime_c ) AS MaxBids -ON +ON AuctionBids.starttime = MaxBids.starttime_c AND AuctionBids.num >= MaxBids.maxn; diff --git a/e2e_test/streaming/project_set.slt b/e2e_test/streaming/project_set.slt index 3a95f7ec73a1b..959c75ebebefc 100644 --- a/e2e_test/streaming/project_set.slt +++ b/e2e_test/streaming/project_set.slt @@ -34,7 +34,7 @@ statement ok insert into tweet values ('#1 #2 abaaba'), ('ss #1 ggg #risingwave'); statement ok -create materialized view mv as +create materialized view mv as with tags as (select unnest(regexp_matches(text, '#\w+', 'g')) as tag, text from tweet) select tag, count(*) as cnt from tags group by tag; diff --git a/e2e_test/streaming/temporal_filter.slt b/e2e_test/streaming/temporal_filter.slt index aaf9041586fb3..e021ecb3aa981 100644 --- a/e2e_test/streaming/temporal_filter.slt +++ b/e2e_test/streaming/temporal_filter.slt @@ -32,7 +32,7 @@ update t1 set v1 = v1 + interval '1 hour' where v1 = '3031-01-01 20:00:00' or v1 0001-01-01 22:00:00 3031-01-01 21:00:00 - + query I select * from mv1 order by v1; ---- diff --git a/e2e_test/streaming/tpch/q10.slt.part b/e2e_test/streaming/tpch/q10.slt.part index 0c9d09841112b..d0c77c19f24f8 100644 --- a/e2e_test/streaming/tpch/q10.slt.part +++ b/e2e_test/streaming/tpch/q10.slt.part @@ -7,17 +7,17 @@ select * from tpch_q10; 128 Customer#000000128 74326.7714 -986.96 EGYPT AmKUMlJf2NRHcKGmKjLS 14-280-874-8044 ing packages integrate across the slyly unusual dugouts. blithely silent ideas sublate carefully. blithely expr 130 Customer#000000130 31499.4100 5073.58 INDONESIA RKPx2OfZy0Vn 8wGWZ7F2EAvmMORl1k8iH 19-190-993-9281 ix slowly. express packages along the furiously ironic requests integrate daringly deposits. fur 136 Customer#000000136 359159.4900 -842.39 GERMANY QoLsJ0v5C1IQbh,DS1 17-501-210-4726 ackages sleep ironic, final courts. even requests above the blithely bold requests g -139 Customer#000000139 102799.6791 7897.78 INDONESIA 3ElvBwudHKL02732YexGVFVt 19-140-352-1403 nstructions. quickly ironic ideas are carefully. bold, +139 Customer#000000139 102799.6791 7897.78 INDONESIA 3ElvBwudHKL02732YexGVFVt 19-140-352-1403 nstructions. quickly ironic ideas are carefully. bold, 14 Customer#000000014 63310.3940 5266.3 ARGENTINA KXkletMlL2JQEA 11-845-129-3851 , ironic packages across the unus 142 Customer#000000142 129732.1688 2209.81 INDONESIA AnJ5lxtLjioClr2khl9pb8NLxG2, 19-407-425-2584 . even, express theodolites upo 19 Customer#000000019 35730.0870 8914.71 CHINA uc,3bHIx84H,wdrmLOjVsiqXCq2tr 28-396-526-5053 nag. furiously careful packages are slyly at the accounts. furiously regular in -23 Customer#000000023 46655.3250 3332.02 CANADA OdY W13N7Be3OC5MpgfmcYss0Wn6TKT 13-312-472-8245 deposits. special deposits cajole slyly. 
fluffily special deposits about the furiously +23 Customer#000000023 46655.3250 3332.02 CANADA OdY W13N7Be3OC5MpgfmcYss0Wn6TKT 13-312-472-8245 deposits. special deposits cajole slyly. fluffily special deposits about the furiously 25 Customer#000000025 80561.6934 7133.7 JAPAN Hp8GyFQgGHFYSilH5tBfe 22-603-468-3533 y. accounts sleep ruthlessly according to the regular theodolites. unusual instructions sleep. ironic, final 4 Customer#000000004 68199.2151 2866.83 EGYPT XxVSJsLAGtn 14-128-190-5944 requests. final, regular ideas sleep final accou -40 Customer#000000040 44161.9506 1335.3 CANADA gOnGWAyhSV1ofv 13-652-915-8939 rges impress after the slyly ironic courts. foxes are. blithely +40 Customer#000000040 44161.9506 1335.3 CANADA gOnGWAyhSV1ofv 13-652-915-8939 rges impress after the slyly ironic courts. foxes are. blithely 46 Customer#000000046 46956.6000 5744.59 FRANCE eaTXWWm10L9 16-357-681-2007 ctions. accounts sleep furiously even requests. regular, regular accounts cajole blithely around the final pa 55 Customer#000000055 54376.8534 4572.11 IRAN zIRBR4KNEl HzaiV3a i9n6elrxzDEh8r8pDom 20-180-440-8525 ully unusual packages wake bravely bold packages. unusual requests boost deposits! blithely ironic packages ab -61 Customer#000000061 45467.8872 1536.24 PERU 9kndve4EAJxhg3veF BfXr7AqOsT39o gtqjaYE 27-626-559-8599 egular packages shall have to impress along the +61 Customer#000000061 45467.8872 1536.24 PERU 9kndve4EAJxhg3veF BfXr7AqOsT39o gtqjaYE 27-626-559-8599 egular packages shall have to impress along the 70 Customer#000000070 152637.1018 4867.52 RUSSIA mFowIuhnHjp2GjCiYYavkW kUwOjIaTCQ 32-828-107-2832 fter the special asymptotes. ideas after the unusual frets cajole quickly regular pinto be 85 Customer#000000085 27823.5600 3386.64 ETHIOPIA siRerlDwiolhYR 8FgksoezycLj 15-745-585-8219 ronic ideas use above the slowly pendin 88 Customer#000000088 29032.1268 8031.44 MOZAMBIQUE wtkjBN9eyrFuENSMmMFlJ3e7jE5KXcg 26-516-273-2566 s are quickly above the quickly ironic instructions; even requests about the carefully final deposi diff --git a/e2e_test/streaming/tpch/q15.slt.part b/e2e_test/streaming/tpch/q15.slt.part index 1c160de5e1d20..848658e302c99 100644 --- a/e2e_test/streaming/tpch/q15.slt.part +++ b/e2e_test/streaming/tpch/q15.slt.part @@ -1,5 +1,5 @@ query ITTTR -select +select s_suppkey, s_name, s_address, diff --git a/e2e_test/streaming/tpch/q2.slt.part b/e2e_test/streaming/tpch/q2.slt.part index 8ea70a1550ce0..9934b0fb0f7f2 100644 --- a/e2e_test/streaming/tpch/q2.slt.part +++ b/e2e_test/streaming/tpch/q2.slt.part @@ -1,6 +1,6 @@ # scan MV with ORDER BY isn't guaranteed to be ordered query RTTITTTT rowsort -select +select s_acctbal, s_name, n_name, @@ -11,5 +11,5 @@ select s_comment from tpch_q2; ---- -1365.79 Supplier#000000006 KENYA 185 Manufacturer#4 tQxuVm7s7CnK 24-696-997-4969 final accounts. regular dolphins use against the furiously ironic decoys. +1365.79 Supplier#000000006 KENYA 185 Manufacturer#4 tQxuVm7s7CnK 24-696-997-4969 final accounts. regular dolphins use against the furiously ironic decoys. 
4641.08 Supplier#000000004 MOROCCO 100 Manufacturer#3 Bk7ah4CK8SYQTepEmvMkkgMwg 25-843-787-7479 riously even requests above the exp diff --git a/e2e_test/tpch/insert_customer.slt.part b/e2e_test/tpch/insert_customer.slt.part index 88638d4cbb711..0d60ddc9a8514 100644 --- a/e2e_test/tpch/insert_customer.slt.part +++ b/e2e_test/tpch/insert_customer.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO customer (c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, c_mktsegment, c_comment) VALUES +INSERT INTO customer (c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, c_mktsegment, c_comment) VALUES (1,'Customer#000000001','IVhzIApeRb ot,c,E',15,'25-989-741-2988',711.56,'BUILDING','to the even, regular platelets. regular, ironic epitaphs nag e'), (2,'Customer#000000002','XSTf4,NCwDVaWNe6tEgvwfmRchLXak',13,'23-768-687-3665',121.65,'AUTOMOBILE','l accounts. blithely ironic theodolites integrate boldly: caref'), (3,'Customer#000000003','MG9kdTD2WBHm',1,'11-719-748-3364',7498.12,'AUTOMOBILE',' deposits eat slyly ironic, even instructions. express foxes detect slyly. blithely even accounts abov'), diff --git a/e2e_test/tpch/insert_lineitem.slt.part b/e2e_test/tpch/insert_lineitem.slt.part index 22f0a6e90f835..6ad6e95be01b6 100644 --- a/e2e_test/tpch/insert_lineitem.slt.part +++ b/e2e_test/tpch/insert_lineitem.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO lineitem (l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, l_comment) VALUES +INSERT INTO lineitem (l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, l_comment) VALUES (1,156,4,1,17,17954.55,0.04,0.02,'N','O','1996-03-13','1996-02-12','1996-03-22','DELIVER IN PERSON','TRUCK','egular courts above the'), (1,68,9,2,36,34850.16,0.09,0.06,'N','O','1996-04-12','1996-02-28','1996-04-20','TAKE BACK RETURN','MAIL','ly final dependencies: slyly bold '), (1,64,5,3,8,7712.48,0.10,0.02,'N','O','1996-01-29','1996-03-05','1996-01-31','TAKE BACK RETURN','REG AIR','riously. regular, express dep'), diff --git a/e2e_test/tpch/insert_nation.slt.part b/e2e_test/tpch/insert_nation.slt.part index 36b691150ab7a..bf0c8afac10f0 100644 --- a/e2e_test/tpch/insert_nation.slt.part +++ b/e2e_test/tpch/insert_nation.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO nation (n_nationkey, n_name, n_regionkey, n_comment) VALUES +INSERT INTO nation (n_nationkey, n_name, n_regionkey, n_comment) VALUES (0,'ALGERIA',0,' haggle. carefully final deposits detect slyly agai'), (1,'ARGENTINA',1,'al foxes promise slyly according to the regular accounts. bold requests alon'), (2,'BRAZIL',1,'y alongside of the pending deposits. carefully special packages are about the ironic forges. 
slyly special '), diff --git a/e2e_test/tpch/insert_orders.slt.part b/e2e_test/tpch/insert_orders.slt.part index cbfd63d44cbd1..d5f8fcb5d8e4d 100644 --- a/e2e_test/tpch/insert_orders.slt.part +++ b/e2e_test/tpch/insert_orders.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO orders (o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment) VALUES +INSERT INTO orders (o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment) VALUES (1,37,'O',131251.81,'1996-01-02','5-LOW','Clerk#000000951',0,'nstructions sleep furiously among '), (2,79,'O',40183.29,'1996-12-01','1-URGENT','Clerk#000000880',0,' foxes. pending accounts at the pending, silent asymptot'), (3,124,'F',160882.76,'1993-10-14','5-LOW','Clerk#000000955',0,'sly final accounts boost. carefully regular ideas cajole carefully. depos'), diff --git a/e2e_test/tpch/insert_part.slt.part b/e2e_test/tpch/insert_part.slt.part index 7f3b84298bb35..a3fd9919f8548 100644 --- a/e2e_test/tpch/insert_part.slt.part +++ b/e2e_test/tpch/insert_part.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO part (p_partkey, p_name, p_mfgr, p_brand, p_type, p_size, p_container, p_retailprice, p_comment) VALUES +INSERT INTO part (p_partkey, p_name, p_mfgr, p_brand, p_type, p_size, p_container, p_retailprice, p_comment) VALUES (1,'goldenrod lavender spring chocolate lace','Manufacturer#1','Brand#13','PROMO BURNISHED COPPER',7,'JUMBO PKG',901.00,'ly. slyly ironi'), (2,'blush thistle blue yellow saddle','Manufacturer#1','Brand#13','LARGE BRUSHED BRASS',1,'LG CASE',902.00,'lar accounts amo'), (3,'spring green yellow purple cornsilk','Manufacturer#4','Brand#42','STANDARD POLISHED BRASS',21,'WRAP CASE',903.00,'egular deposits hag'), diff --git a/e2e_test/tpch/insert_partsupp.slt.part b/e2e_test/tpch/insert_partsupp.slt.part index b397ad35437d0..c765f31d278cb 100644 --- a/e2e_test/tpch/insert_partsupp.slt.part +++ b/e2e_test/tpch/insert_partsupp.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO partsupp (ps_partkey, ps_suppkey, ps_availqty, ps_supplycost, ps_comment) VALUES +INSERT INTO partsupp (ps_partkey, ps_suppkey, ps_availqty, ps_supplycost, ps_comment) VALUES (1,2,3325,771.64,', even theodolites. regular, final theodolites eat after the carefully pending foxes. furiously regular deposits sleep slyly. carefully bold realms above the ironic dependencies haggle careful'), (1,4,8076,993.49,'ven ideas. quickly even packages print. pending multipliers must have to are fluff'), (1,6,3956,337.09,'after the fluffily ironic deposits? blithely special dependencies integrate furiously even excuses. blithely silent theodolites could have to haggle pending, express requests; fu'), diff --git a/e2e_test/tpch/insert_region.slt.part b/e2e_test/tpch/insert_region.slt.part index 23961db226dbb..58879a4e1b55c 100644 --- a/e2e_test/tpch/insert_region.slt.part +++ b/e2e_test/tpch/insert_region.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO region (r_regionkey, r_name, r_comment) VALUES +INSERT INTO region (r_regionkey, r_name, r_comment) VALUES (0,'AFRICA','lar deposits. blithely final packages cajole. regular waters are final requests. regular accounts are according to '), (1,'AMERICA','hs use ironic, even requests. s'), (2,'ASIA','ges. 
thinly even pinto beans ca'), diff --git a/e2e_test/tpch/insert_supplier.slt.part b/e2e_test/tpch/insert_supplier.slt.part index d367b29a4f2a7..511653bb203bb 100644 --- a/e2e_test/tpch/insert_supplier.slt.part +++ b/e2e_test/tpch/insert_supplier.slt.part @@ -1,5 +1,5 @@ statement ok -INSERT INTO supplier (s_suppkey, s_name, s_address, s_nationkey, s_phone, s_acctbal, s_comment) VALUES +INSERT INTO supplier (s_suppkey, s_name, s_address, s_nationkey, s_phone, s_acctbal, s_comment) VALUES (1,'Supplier#000000001',' N kD4on9OM Ipw3,gf0JBoQDd7tgrzrddZ',17,'27-918-335-1736',5755.94,'each slyly above the careful'), (2,'Supplier#000000002','89eJ5ksX3ImxJQBvxObC,',5,'15-679-861-2259',4032.68,' slyly bold instructions. idle dependen'), (3,'Supplier#000000003','q1,G3Pj6OjIuUYfUoH18BFTKP5aU9bEV3',1,'11-383-516-1199',4192.40,'blithely silent requests after the express dependencies are sl'), diff --git a/e2e_test/user_doc/user_doc.slt b/e2e_test/user_doc/user_doc.slt index bfbb0b48558b5..6021db78f4a7b 100644 --- a/e2e_test/user_doc/user_doc.slt +++ b/e2e_test/user_doc/user_doc.slt @@ -54,14 +54,14 @@ INSERT INTO taxi VALUES ( '1001', ARRAY['ABCD1234', 'ABCD1235', 'ABCD1236', 'ABCD1237'], - 'N5432N', - 'FAST TAXI', - '2030-12-31', + 'N5432N', + 'FAST TAXI', + '2030-12-31', 'DAVID WANG' ); statement ok -SELECT trip_id[1] +SELECT trip_id[1] FROM taxi; statement ok @@ -110,17 +110,17 @@ CREATE TABLE trip ( initial_charge DOUBLE PRECISION, subsequent_charge DOUBLE PRECISION, surcharge DOUBLE PRECISION, - tolls DOUBLE PRECISION - > + tolls DOUBLE PRECISION + > ); statement ok -INSERT INTO trip VALUES +INSERT INTO trip VALUES ( - '1234ABCD', - '2022-07-28 11:04:05', - '2022-07-28 11:15:22', - 6.1, + '1234ABCD', + '2022-07-28 11:04:05', + '2022-07-28 11:15:22', + 6.1, ROW(1.0, 4.0, 1.5, 2.0) ); @@ -149,10 +149,10 @@ drop table if exists taxi_trips; statement ok create table taxi_trips (trip_id varchar, taxi_id varchar, completed_at timestamp, distance double precision, duration double precision); -statement ok -insert into taxi_trips values -(1, 1001, '2022-07-01 22:00:00', 4, 6), -(2, 1002, '2022-07-01 22:01:00', 6, 9), +statement ok +insert into taxi_trips values +(1, 1001, '2022-07-01 22:00:00', 4, 6), +(2, 1002, '2022-07-01 22:01:00', 6, 9), (3, 1003, '2022-07-01 22:02:00', 3, 5), (4, 1004, '2022-07-01 22:03:00', 7, 15), (5, 1005, '2022-07-01 22:05:00', 2, 4), @@ -168,15 +168,15 @@ FROM HOP (taxi_trips, completed_at, INTERVAL '1 MINUTE', INTERVAL '2 MINUTES') ORDER BY window_start; statement ok -SELECT window_start, window_end, count(trip_id) as no_of_trips, sum(distance) as total_distance -FROM TUMBLE (taxi_trips, completed_at, INTERVAL '2 MINUTES') -GROUP BY window_start, window_end +SELECT window_start, window_end, count(trip_id) as no_of_trips, sum(distance) as total_distance +FROM TUMBLE (taxi_trips, completed_at, INTERVAL '2 MINUTES') +GROUP BY window_start, window_end ORDER BY window_start ASC; statement ok -SELECT window_start, window_end, count(trip_id) as no_of_trips, sum(distance) as total_distance -FROM HOP (taxi_trips, completed_at, INTERVAL '1 MINUTES', INTERVAL '2 MINUTES') -GROUP BY window_start, window_end +SELECT window_start, window_end, count(trip_id) as no_of_trips, sum(distance) as total_distance +FROM HOP (taxi_trips, completed_at, INTERVAL '1 MINUTES', INTERVAL '2 MINUTES') +GROUP BY window_start, window_end ORDER BY window_start ASC; @@ -187,12 +187,12 @@ statement ok create table taxi_simple (taxi_id varchar, company varchar); statement ok -insert into taxi_simple values -(1001, 
'SAFE TAXI'), -(1002, 'SUPER TAXI'), -(1003, 'FAST TAXI'), -(1004, 'BEST TAXI'), -(1005, 'WEST TAXI'), +insert into taxi_simple values +(1001, 'SAFE TAXI'), +(1002, 'SUPER TAXI'), +(1003, 'FAST TAXI'), +(1004, 'BEST TAXI'), +(1005, 'WEST TAXI'), (1006, 'EAST TAXI'); statement ok @@ -202,11 +202,11 @@ statement ok CREATE TABLE taxi_fare (trip_id VARCHAR, completed_at TIMESTAMP, total_fare DOUBLE PRECISION, payment_status VARCHAR); statement ok -INSERT INTO taxi_fare VALUES -(1, '2022-07-01 22:00:00', 8, 'COMPLETED'), -(2, '2022-07-01 22:01:00', 12, 'PROCESSING'), -(3, '2022-07-01 22:02:10', 5, 'COMPLETED'), -(4, '2022-07-01 22:03:00', 15, 'COMPLETED'), +INSERT INTO taxi_fare VALUES +(1, '2022-07-01 22:00:00', 8, 'COMPLETED'), +(2, '2022-07-01 22:01:00', 12, 'PROCESSING'), +(3, '2022-07-01 22:02:10', 5, 'COMPLETED'), +(4, '2022-07-01 22:03:00', 15, 'COMPLETED'), (5, '2022-07-01 22:06:00', 5, 'REJECTED'), (6, '2022-07-01 22:06:00', 20, 'COMPLETED'); diff --git a/grafana/README.md b/grafana/README.md index 12d905f78ac09..ca3d4da00f9cc 100644 --- a/grafana/README.md +++ b/grafana/README.md @@ -1,6 +1,6 @@ # RisingWave Grafana Dashboard -The Grafana dashboard is generated with grafanalib. You'll need +The Grafana dashboard is generated with grafanalib. You'll need - Python - grafanalib @@ -31,7 +31,7 @@ And don't forget to include the generated `risingwave--dashboard.json` in t ## Advanced Usage -We can specify the source uid, dashboard uid, dashboard version, enable namespace filter and enable risingwave_name filter(used in multi-cluster deployment) via env variables. +We can specify the source uid, dashboard uid, dashboard version, enable namespace filter and enable risingwave_name filter(used in multi-cluster deployment) via env variables. For example, we can use the following query to generate dashboard json used in our benchmark cluster: diff --git a/grafana/risingwave-dev-dashboard.dashboard.py b/grafana/risingwave-dev-dashboard.dashboard.py index 9bfb7a78d2c33..758a981379cc2 100644 --- a/grafana/risingwave-dev-dashboard.dashboard.py +++ b/grafana/risingwave-dev-dashboard.dashboard.py @@ -1140,7 +1140,7 @@ def section_streaming_actors(outer_panels): f"rate({metric('stream_group_top_n_appendonly_cache_miss_count')}[$__rate_interval])", "Group top n appendonly cache miss - table {{table_id}} actor {{actor_id}}", ), - + panels.target( f"rate({metric('stream_agg_lookup_total_count')}[$__rate_interval])", "stream agg total lookups - table {{table_id}} actor {{actor_id}}", @@ -1192,7 +1192,7 @@ def section_streaming_actors(outer_panels): [ panels.target(f"{metric('stream_temporal_join_cached_entry_count')}", "Temporal Join cached count | table {{table_id}} actor {{actor_id}}"), - + ], ), @@ -1202,7 +1202,7 @@ def section_streaming_actors(outer_panels): [ panels.target(f"{metric('stream_lookup_cached_entry_count')}", "lookup cached count | table {{table_id}} actor {{actor_id}}"), - + ], ), ], @@ -2306,7 +2306,7 @@ def section_hummock_manager(outer_panels): ], ), - + panels.timeseries_count( "Table KV Count", "", diff --git a/grafana/risingwave-user-dashboard.dashboard.py b/grafana/risingwave-user-dashboard.dashboard.py index 3cc0759cd7962..8a15174d150b5 100644 --- a/grafana/risingwave-user-dashboard.dashboard.py +++ b/grafana/risingwave-user-dashboard.dashboard.py @@ -387,7 +387,7 @@ def section_memory(outer_panels): f"(sum(rate({metric('stream_temporal_join_cache_miss_count')}[$__rate_interval])) by (table_id, actor_id) ) / 
(sum(rate({metric('stream_temporal_join_total_query_cache_count')}[$__rate_interval])) by (table_id, actor_id))", "Stream temporal join cache miss ratio - table {{table_id}} actor {{actor_id}} ", ), - + panels.target( f"1 - (sum(rate({metric('stream_materialize_cache_hit_count')}[$__rate_interval])) by (table_id, actor_id) ) / (sum(rate({metric('stream_materialize_cache_total_count')}[$__rate_interval])) by (table_id, actor_id))", "materialize executor cache miss ratio - table {{table_id}} - actor {{actor_id}} {{instance}}", diff --git a/grafana/update.sh b/grafana/update.sh index a2ddebb946afd..a41148ed1e79c 100755 --- a/grafana/update.sh +++ b/grafana/update.sh @@ -7,7 +7,7 @@ set -euo pipefail echo "$(tput setaf 4)Upload dashboard to localhost:3001$(tput sgr0)" for dashboard in "risingwave-user-dashboard.json" "risingwave-dev-dashboard.json"; do - payload="{\"dashboard\": $(jq . $dashboard), \"overwrite\": true}" + payload="{\"dashboard\": $(jq . $dashboard), \"overwrite\": true}" echo "$payload" > payload.txt curl -X POST \ -H 'Content-Type: application/json' \ @@ -15,7 +15,7 @@ for dashboard in "risingwave-user-dashboard.json" "risingwave-dev-dashboard.json "http://admin:admin@localhost:3001/api/dashboards/db" rm payload.txt -done +done diff --git a/integration_tests/citus-cdc/docker-compose.yml b/integration_tests/citus-cdc/docker-compose.yml index 5a1672dca2c2b..9721a180a45f3 100644 --- a/integration_tests/citus-cdc/docker-compose.yml +++ b/integration_tests/citus-cdc/docker-compose.yml @@ -83,7 +83,7 @@ services: citus-prepare: container_name: citus_prepare image: "citusdata/citus:10.2.5" - depends_on: + depends_on: - citus-master - citus-manager - citus-worker-1 diff --git a/integration_tests/clickhouse-sink/README.md b/integration_tests/clickhouse-sink/README.md index 776d3fea9e5dd..607621faefeae 100644 --- a/integration_tests/clickhouse-sink/README.md +++ b/integration_tests/clickhouse-sink/README.md @@ -14,7 +14,7 @@ The cluster contains a RisingWave cluster and its necessary dependencies, a data 2. Create the ClickHouse table: ```sh -docker compose exec clickhouse-server bash /opt/clickhouse/clickhouse-sql/run-sql-file.sh create_clickhouse_table +docker compose exec clickhouse-server bash /opt/clickhouse/clickhouse-sql/run-sql-file.sh create_clickhouse_table ``` 3. Execute the SQL queries in sequence: @@ -26,7 +26,7 @@ docker compose exec clickhouse-server bash /opt/clickhouse/clickhouse-sql/run-sq 4. 
Execute a simple query: ```sh -docker compose exec clickhouse-server bash /opt/clickhouse/clickhouse-sql/run-sql-file.sh clickhouse_query +docker compose exec clickhouse-server bash /opt/clickhouse/clickhouse-sql/run-sql-file.sh clickhouse_query ``` diff --git a/integration_tests/datagen/twitter/avro.go b/integration_tests/datagen/twitter/avro.go index df20780c3e0bd..4d993d783fa91 100644 --- a/integration_tests/datagen/twitter/avro.go +++ b/integration_tests/datagen/twitter/avro.go @@ -31,7 +31,7 @@ var AvroSchema string = ` ] } ] -} +} ` var AvroCodec *goavro.Codec = nil diff --git a/integration_tests/debezium-postgres/docker-compose.yml b/integration_tests/debezium-postgres/docker-compose.yml index 4f35d907a8f46..03b5fa0491e33 100644 --- a/integration_tests/debezium-postgres/docker-compose.yml +++ b/integration_tests/debezium-postgres/docker-compose.yml @@ -75,7 +75,7 @@ services: postgres: { condition: service_healthy } command: - /bin/sh - - -c + - -c - psql "postgresql://postgresuser:postgrespw@postgres:5432/mydb" -f ./postgres_prepare.sql volumes: - "./postgres_prepare.sql:/postgres_prepare.sql" @@ -90,7 +90,7 @@ services: environment: CONNECT_URL: http://debezium:8083 container_name: kafka-connect-ui - depends_on: + depends_on: message_queue: { condition: service_healthy } volumes: @@ -98,4 +98,3 @@ volumes: external: false name: risingwave-compose - \ No newline at end of file diff --git a/integration_tests/iceberg-sink/airflow_dags/remove_iceberg_orphan_files.py b/integration_tests/iceberg-sink/airflow_dags/remove_iceberg_orphan_files.py index ec70d5bf1425f..472a658f51bc1 100644 --- a/integration_tests/iceberg-sink/airflow_dags/remove_iceberg_orphan_files.py +++ b/integration_tests/iceberg-sink/airflow_dags/remove_iceberg_orphan_files.py @@ -19,7 +19,7 @@ spark_sql_remove_files = SparkSubmitOperator( application=PYSPARK_APPLICATION_PATH, - task_id="spark_sql_remove_files", - packages=SPARK_PACKAGES, + task_id="spark_sql_remove_files", + packages=SPARK_PACKAGES, conn_id = "spark_local", ) \ No newline at end of file diff --git a/integration_tests/iceberg-sink/airflow_dags/rewrite_iceberg_small_files.py b/integration_tests/iceberg-sink/airflow_dags/rewrite_iceberg_small_files.py index 462d22831c704..d8585a8ec1bce 100644 --- a/integration_tests/iceberg-sink/airflow_dags/rewrite_iceberg_small_files.py +++ b/integration_tests/iceberg-sink/airflow_dags/rewrite_iceberg_small_files.py @@ -19,8 +19,8 @@ spark_sql_rewrite_files = SparkSubmitOperator( application=PYSPARK_APPLICATION_PATH, - task_id="spark_sql_rewrite_files", - packages=SPARK_PACKAGES, - conn_id = "spark_local" + task_id="spark_sql_rewrite_files", + packages=SPARK_PACKAGES, + conn_id = "spark_local" ) diff --git a/integration_tests/iceberg-sink/docker-compose.yml b/integration_tests/iceberg-sink/docker-compose.yml index 41268dcb3d883..c91f57a4fb41c 100644 --- a/integration_tests/iceberg-sink/docker-compose.yml +++ b/integration_tests/iceberg-sink/docker-compose.yml @@ -3,7 +3,7 @@ version: "3" x-airflow-common: &airflow-common image: apache/airflow:2.6.2-python3.10 - build: + build: context: . target: airflow environment: @@ -26,9 +26,9 @@ x-airflow-common: condition: service_healthy x-spark-common: &spark-air - build: + build: context: . 
- target: spark + target: spark services: spark: @@ -102,14 +102,14 @@ services: service: connector-node prepare_mysql: image: mysql:8.0 - depends_on: + depends_on: - mysql command: - /bin/sh - -c - "mysql -p123456 -h mysql mydb < mysql_prepare.sql" volumes: - - "./mysql_prepare.sql:/mysql_prepare.sql" + - "./mysql_prepare.sql:/mysql_prepare.sql" container_name: prepare_mysql restart: on-failure datagen: diff --git a/integration_tests/kafka-cdc-sink/docker-compose.yml b/integration_tests/kafka-cdc-sink/docker-compose.yml index 204904a870be3..61cb1b2744554 100644 --- a/integration_tests/kafka-cdc-sink/docker-compose.yml +++ b/integration_tests/kafka-cdc-sink/docker-compose.yml @@ -70,7 +70,7 @@ services: timeout: 5s retries: 5 container_name: mysql - + flink-jobmanager: image: flink build: ./flink @@ -80,7 +80,7 @@ services: environment: - | FLINK_PROPERTIES= - jobmanager.rpc.address: flink-jobmanager + jobmanager.rpc.address: flink-jobmanager container_name: flink-jobmanager flink-taskmanager: @@ -107,7 +107,7 @@ services: - | FLINK_PROPERTIES= jobmanager.rpc.address: flink-jobmanager - rest.address: flink-jobmanager + rest.address: flink-jobmanager container_name: flink-sql-client connect: diff --git a/integration_tests/mysql-cdc/docker-compose.yml b/integration_tests/mysql-cdc/docker-compose.yml index 50401ae163b01..5207536520823 100644 --- a/integration_tests/mysql-cdc/docker-compose.yml +++ b/integration_tests/mysql-cdc/docker-compose.yml @@ -54,7 +54,7 @@ services: service: connector-node datagen_tpch: image: ghcr.io/risingwavelabs/go-tpc:v0.1 - depends_on: + depends_on: - mysql command: tpch prepare --sf 1 --threads 4 -H mysql -U root -p '123456' -D mydb -P 3306 container_name: datagen_tpch diff --git a/integration_tests/pinot-sink/README.md b/integration_tests/pinot-sink/README.md index dff67418fbf88..bc1a38091f2a6 100644 --- a/integration_tests/pinot-sink/README.md +++ b/integration_tests/pinot-sink/README.md @@ -139,15 +139,15 @@ pinot-broker -brokerPort 8099 -query "SELECT * FROM orders" ] } ``` -From the query result, we can see that the update on RisingWave table +From the query result, we can see that the update on RisingWave table has been reflected on the pinot table. -By now, the demo has finished. +By now, the demo has finished. ## Kafka Payload Format -In the demo, there will be 4 upsert events in the kafka topic. +In the demo, there will be 4 upsert events in the kafka topic. 
The payload is like the following: ```json {"created_at":1685421033000,"id":1,"product_id":100,"quantity":1,"status":"INIT","total":1.0,"updated_at":1685421033000,"user_id":10} diff --git a/integration_tests/postgres-cdc/docker-compose.yml b/integration_tests/postgres-cdc/docker-compose.yml index c68035b33feb8..3c417545272fa 100644 --- a/integration_tests/postgres-cdc/docker-compose.yml +++ b/integration_tests/postgres-cdc/docker-compose.yml @@ -57,14 +57,14 @@ services: service: connector-node postgres_prepare: image: postgres - depends_on: + depends_on: - postgres command: - /bin/sh - -c - "psql postgresql://myuser:123456@postgres:5432/mydb < postgres_prepare.sql" volumes: - - "./postgres_prepare.sql:/postgres_prepare.sql" + - "./postgres_prepare.sql:/postgres_prepare.sql" container_name: postgres_prepare restart: on-failure datagen_tpch: diff --git a/integration_tests/postgres-sink/docker-compose.yml b/integration_tests/postgres-sink/docker-compose.yml index cd8033ad2221b..e59c8a143bc35 100644 --- a/integration_tests/postgres-sink/docker-compose.yml +++ b/integration_tests/postgres-sink/docker-compose.yml @@ -70,14 +70,14 @@ services: service: connector-node prepare_postgres: image: postgres - depends_on: + depends_on: - postgres command: - /bin/sh - -c - "psql postgresql://myuser:123456@postgres:5432/mydb < postgres_prepare.sql" volumes: - - "./postgres_prepare.sql:/postgres_prepare.sql" + - "./postgres_prepare.sql:/postgres_prepare.sql" container_name: prepare_postgres restart: on-failure volumes: diff --git a/integration_tests/presto-trino/README.md b/integration_tests/presto-trino/README.md index b475c87ed6aa3..fe3ca48f3a92d 100644 --- a/integration_tests/presto-trino/README.md +++ b/integration_tests/presto-trino/README.md @@ -2,7 +2,7 @@ ## Run the demo -1. Start the cluster with `docker compose up -d` command. +1. Start the cluster with `docker compose up -d` command. The command will start a RisingWave cluster together with a integrated trino and presto instance. 2. Connect the RisingWave frontend via the psql client. Create and insert data into the RisingWave table. 
```shell @@ -25,14 +25,14 @@ docker compose run presto-client # within the trino/presto client trino:public> show tables; - Table + Table ------------ - test_table + test_table (1 row) trino:public> select * from test_table; - id + id ---- - 1 + 1 (1 row) ``` \ No newline at end of file diff --git a/integration_tests/prometheus/prometheus.yaml b/integration_tests/prometheus/prometheus.yaml index 8fef25a62dc17..4c0c115e03be7 100644 --- a/integration_tests/prometheus/prometheus.yaml +++ b/integration_tests/prometheus/prometheus.yaml @@ -15,7 +15,7 @@ scrape_configs: - job_name: meta static_configs: - targets: ["meta-node-0:1250"] - + - job_name: minio metrics_path: /minio/v2/metrics/cluster static_configs: diff --git a/integration_tests/schema-registry/create_source.sql b/integration_tests/schema-registry/create_source.sql index df1595b285406..76c5a856d1c9a 100644 --- a/integration_tests/schema-registry/create_source.sql +++ b/integration_tests/schema-registry/create_source.sql @@ -3,5 +3,5 @@ CREATE SOURCE student WITH ( topic = 'sr-test', properties.bootstrap.server = 'message_queue:29092', scan.startup.mode = 'earliest' -) +) FORMAT PLAIN ENCODE AVRO (schema.registry = 'http://message_queue:8081'); \ No newline at end of file diff --git a/integration_tests/tidb-cdc-sink/docker-compose.yml b/integration_tests/tidb-cdc-sink/docker-compose.yml index 4f5d6653c2dfd..5da30c496c907 100644 --- a/integration_tests/tidb-cdc-sink/docker-compose.yml +++ b/integration_tests/tidb-cdc-sink/docker-compose.yml @@ -195,7 +195,7 @@ services: #===================== Others =================== datagen: build: ../datagen - depends_on: + depends_on: - tidb command: - /bin/sh @@ -206,14 +206,14 @@ services: init_tidb: image: mysql:8.0 - depends_on: + depends_on: - tidb command: - /bin/sh - -c - "mysql --password= -h tidb --port 4000 -u root test < tidb_create_tables.sql" volumes: - - "./tidb_create_tables.sql:/tidb_create_tables.sql" + - "./tidb_create_tables.sql:/tidb_create_tables.sql" container_name: init_tidb restart: on-failure diff --git a/java/connector-node/README.md b/java/connector-node/README.md index 8c21d68dc4b19..1651315fe0331 100644 --- a/java/connector-node/README.md +++ b/java/connector-node/README.md @@ -30,7 +30,7 @@ This will create a `.tar.gz` file with the Connector Node and all its dependenci ``` # unpack the tar file, the file name might vary depending on the version -cd java/connector-node/assembly/target && tar xvf risingwave-connector-1.0.0.tar.gz +cd java/connector-node/assembly/target && tar xvf risingwave-connector-1.0.0.tar.gz # launch connector node service java -classpath "./libs/*" com.risingwave.connector.ConnectorService ``` @@ -65,17 +65,17 @@ bash gen-stub.sh PYTHONPATH=proto python3 integration_tests.py ``` -Or you can use conda and install the necessary package `grpcio grpcio-tools psycopg2 psycopg2-binary`. +Or you can use conda and install the necessary package `grpcio grpcio-tools psycopg2 psycopg2-binary`. The connector service is the server and Python integration test is a client, which will send gRPC request and get response from the connector server. So when running integration_tests, remember to launch the connector service in advance. You can get the gRPC response and check messages or errors in client part. And check the detailed exception information on server side. ### Python file format -We use `black` as the python file formatter. We can run `format-python.sh` to format the python files. +We use `black` as the python file formatter. 
We can run `format-python.sh` to format the python files. ### JDBC test -We have integration tests that involve the use of several sinks, including file sink, jdbc sink, iceberg sink, and deltalake sink. If you wish to run these tests locally, you will need to configure both MinIO and PostgreSQL. +We have integration tests that involve the use of several sinks, including file sink, jdbc sink, iceberg sink, and deltalake sink. If you wish to run these tests locally, you will need to configure both MinIO and PostgreSQL. Downloading and launching MinIO is a straightforward process. For PostgreSQL, I recommend launching it using Docker. When setting up PostgreSQL, please ensure that the values for `POSTGRES_PASSWORD`, `POSTGRES_DB`, and `POSTGRES_USER` match the corresponding settings in the `integration_tests.py` file. ```shell @@ -110,7 +110,7 @@ Currently, the following external sources and sinks depends on the connector nod ### Sources - CDC -Creating a sink with external connectors above will check for the connector node service. If the service is not running, the creation will fail. +Creating a sink with external connectors above will check for the connector node service. If the service is not running, the creation will fail. ```sql CREATE SINK s1 FROM mv1 WITH ( diff --git a/java/connector-node/python-client/proto/.gitignore b/java/connector-node/python-client/proto/.gitignore index 9956645154d07..ef3a5b1a75ed6 100644 --- a/java/connector-node/python-client/proto/.gitignore +++ b/java/connector-node/python-client/proto/.gitignore @@ -1,3 +1,3 @@ __pycache__ -*.py +*.py !__init__.py \ No newline at end of file diff --git a/java/udf/README.md b/java/udf/README.md index 0cbf20ff126d9..f963fa6b368e0 100644 --- a/java/udf/README.md +++ b/java/udf/README.md @@ -68,7 +68,7 @@ The `--add-opens` flag must be added when running unit tests through Maven: ## Scalar Functions -A user-defined scalar function maps zero, one, or multiple scalar values to a new scalar value. +A user-defined scalar function maps zero, one, or multiple scalar values to a new scalar value. In order to define a scalar function, one has to create a new class that implements the `ScalarFunction` interface in `com.risingwave.functions` and implement exactly one evaluation method named `eval(...)`. @@ -101,10 +101,10 @@ public class Gcd implements ScalarFunction { ## Table Functions A user-defined table function maps zero, one, or multiple scalar values to one or multiple -rows (structured types). +rows (structured types). In order to define a table function, one has to create a new class that implements the `TableFunction` -interface in `com.risingwave.functions` and implement exactly one evaluation method named `eval(...)`. +interface in `com.risingwave.functions` and implement exactly one evaluation method named `eval(...)`. This method must be declared public and non-static. The return type must be an `Iterator` of any [data type](#data-types) listed in the data types section. diff --git a/scripts/check/check-trailing-spaces.sh b/scripts/check/check-trailing-spaces.sh new file mode 100755 index 0000000000000..2a77660a8a988 --- /dev/null +++ b/scripts/check/check-trailing-spaces.sh @@ -0,0 +1,64 @@ +#!/usr/bin/env bash + +# Exits as soon as any line fails. 
+set -euo pipefail + +# Shell colors +RED='\033[0;31m' +BLUE='\033[0;34m' +GREEN='\033[0;32m' +ORANGE='\033[0;33m' +BOLD='\033[1m' +NONE='\033[0m' + +_echo_err() { + echo -e "${RED}$@${NONE}" +} + +fix=false +while [ $# -gt 0 ]; do + case $1 in + -f | --fix) + fix=true + ;; + *) + _echo_err "$self: invalid option \`$1\`\n" + exit 1 + ;; + esac + shift +done + +# The following is modified from https://github.com/raisedevs/find-trailing-whitespace/blob/restrict-to-plaintext-only/entrypoint.sh. + +has_trailing_spaces=false + +for file in $(git grep --cached -Il '' -- ':!src/tests/regress/data'); do + lines=$(egrep -rnIH "[[:space:]]+$" "$file" | cut -f-2 -d ":" || echo "") + if [ ! -z "$lines" ]; then + if [[ $has_trailing_spaces == false ]]; then + echo -e "\nLines containing trailing whitespace:\n" + has_trailing_spaces=true + fi + if [[ $fix == true ]]; then + sed -i '' -e's/[[:space:]]*$//' "$file" + fi + echo -e "${BLUE}$lines${NONE}" + fi +done + +if [[ $has_trailing_spaces == true ]]; then + if [[ $fix == false ]]; then + echo + echo -e "${RED}${BOLD}Please clean all the trailing spaces.${NONE}" + echo -e "${BOLD}You can run 'scripts/check-trailing-spaces.sh --fix' for convenience.${NONE}" + exit 1 + else + echo + echo -e "${GREEN}${BOLD}All trailing spaces have been cleaned.${NONE}" + exit 0 + fi +else + echo -e "${GREEN}${BOLD}No trailing spaces found.${NONE}" + exit 0 +fi diff --git a/scripts/source/prepare_ci_kafka.sh b/scripts/source/prepare_ci_kafka.sh index 24b900a2a0e18..f1ec3e7d9c903 100755 --- a/scripts/source/prepare_ci_kafka.sh +++ b/scripts/source/prepare_ci_kafka.sh @@ -64,7 +64,7 @@ for filename in $kafka_data_files; do # binary data, one message a file, filename/topic ends with "bin" if [[ "$topic" = *bin ]]; then ${KCAT_BIN} -P -b message_queue:29092 -t "$topic" "$filename" - elif [[ "$topic" = *avro_json ]]; then + elif [[ "$topic" = *avro_json ]]; then python3 source/avro_producer.py "message_queue:29092" "http://message_queue:8081" "$filename" else cat "$filename" | ${KCAT_BIN} -P -K ^ -b message_queue:29092 -t "$topic" diff --git a/src/batch/benches/README.md b/src/batch/benches/README.md index cae645187016f..cc1c75b225ca4 100644 --- a/src/batch/benches/README.md +++ b/src/batch/benches/README.md @@ -16,7 +16,7 @@ Run a specific benchmark cargo bench -p risingwave_batch -- ``` -where `` is a regular expression matching the benchmark ID, e.g., +where `` is a regular expression matching the benchmark ID, e.g., `top_n.rs` uses `BenchmarkId::new("TopNExecutor", params)` , so we can run TopN benchmarks with ```bash diff --git a/src/batch/benches/nested_loop_join.rs b/src/batch/benches/nested_loop_join.rs index 0fbed6a3aedfe..b5fc33307c0ef 100644 --- a/src/batch/benches/nested_loop_join.rs +++ b/src/batch/benches/nested_loop_join.rs @@ -45,7 +45,7 @@ fn create_nested_loop_join_executor( Box::new(NestedLoopJoinExecutor::new( build_from_pretty( "(equal:boolean - (modulus:int8 $0:int8 2:int8) + (modulus:int8 $0:int8 2:int8) (modulus:int8 $1:int8 3:int8))", ), join_type, diff --git a/src/bench/file_cache_bench/bpf.rs b/src/bench/file_cache_bench/bpf.rs index 1fbb9d440f298..25f0cfff66293 100644 --- a/src/bench/file_cache_bench/bpf.rs +++ b/src/bench/file_cache_bench/bpf.rs @@ -108,12 +108,12 @@ int vfs_read_enter(struct pt_regs *ctx, struct file *file, char *buf, size_t cou u64 ts = bpf_ktime_get_ns(); if ((u64)file->f_op != (u64)EXT4_FILE_OPERATIONS) return 0; - + if (!scmp(&file->f_path.dentry->d_iname[0], &target[0])) return 0; u64 magic = *((u64 *)buf); u64 sid = *(((u64 
*)buf) + 1); - + struct data_t data = {0}; data.vfs_read_enter_ts = ts; data.magic = magic; @@ -128,7 +128,7 @@ int vfs_read_leave(struct pt_regs *ctx, struct file *file, char *buf, size_t cou u64 id = bpf_get_current_pid_tgid(); u64 ts = bpf_ktime_get_ns(); - + struct data_t *data = tss.lookup(&id); if (data == 0) return 0; data->vfs_read_leave_ts = ts; @@ -136,13 +136,13 @@ int vfs_read_leave(struct pt_regs *ctx, struct file *file, char *buf, size_t cou events.perf_submit(ctx, data, sizeof(*data)); tss.delete(&id); - + return 0; } int ext4_file_read_iter_enter(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter *to) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); struct data_t *data = tss.lookup(&id); @@ -154,19 +154,19 @@ int ext4_file_read_iter_enter(struct pt_regs *ctx, struct kiocb *iocb, struct io int ext4_file_read_iter_leave(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter *to) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); - + struct data_t *data = tss.lookup(&id); if (data == 0) return 0; data->ext4_file_read_iter_leave_ts = ts; - + return 0; } int iomap_dio_rw_enter(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter *iter) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); struct data_t *data = tss.lookup(&id); @@ -178,7 +178,7 @@ int iomap_dio_rw_enter(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter int iomap_dio_rw_leave(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter *iter) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); struct data_t *data = tss.lookup(&id); @@ -190,7 +190,7 @@ int iomap_dio_rw_leave(struct pt_regs *ctx, struct kiocb *iocb, struct iov_iter int filemap_write_and_wait_range_enter(struct pt_regs *ctx, struct address_space *mapping, long long lstart, long long lend) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); struct data_t *data = tss.lookup(&id); @@ -202,13 +202,13 @@ int filemap_write_and_wait_range_enter(struct pt_regs *ctx, struct address_space int filemap_write_and_wait_range_leave(struct pt_regs *ctx, struct address_space *mapping, long long lstart, long long lend) { u64 id = bpf_get_current_pid_tgid(); - + u64 ts = bpf_ktime_get_ns(); struct data_t *data = tss.lookup(&id); if (data == 0) return 0; data->filemap_write_and_wait_range_leave_ts = ts; - + return 0; } "#; diff --git a/src/common/src/array/num256_array.rs b/src/common/src/array/num256_array.rs index 9c508f7048d83..65b7daf784979 100644 --- a/src/common/src/array/num256_array.rs +++ b/src/common/src/array/num256_array.rs @@ -106,7 +106,7 @@ macro_rules! impl_array_for_num256 { data: Vec::with_capacity(capacity), } } - + fn with_type(capacity: usize, ty: DataType) -> Self { assert_eq!(ty, DataType::$variant_name); Self::new(capacity) diff --git a/src/connector/src/error.rs b/src/connector/src/error.rs index 23a24f3768ae9..155f0c248d012 100644 --- a/src/connector/src/error.rs +++ b/src/connector/src/error.rs @@ -1,37 +1,37 @@ -// Copyright 2023 RisingWave Labs -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-// See the License for the specific language governing permissions and -// limitations under the License. - -use risingwave_common::error::{ErrorCode, RwError}; -use thiserror::Error; - -#[derive(Error, Debug)] -pub enum ConnectorError { - #[error("Parse error: {0}")] - Parse(&'static str), - - #[error("Invalid parameter {name}: {reason}")] - InvalidParam { name: &'static str, reason: String }, - - #[error("Kafka error: {0}")] - Kafka(#[from] rdkafka::error::KafkaError), - - #[error(transparent)] - Internal(#[from] anyhow::Error), -} - -impl From for RwError { - fn from(s: ConnectorError) -> Self { - ErrorCode::ConnectorError(Box::new(s)).into() - } -} +// Copyright 2023 RisingWave Labs +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +use risingwave_common::error::{ErrorCode, RwError}; +use thiserror::Error; + +#[derive(Error, Debug)] +pub enum ConnectorError { + #[error("Parse error: {0}")] + Parse(&'static str), + + #[error("Invalid parameter {name}: {reason}")] + InvalidParam { name: &'static str, reason: String }, + + #[error("Kafka error: {0}")] + Kafka(#[from] rdkafka::error::KafkaError), + + #[error(transparent)] + Internal(#[from] anyhow::Error), +} + +impl From for RwError { + fn from(s: ConnectorError) -> Self { + ErrorCode::ConnectorError(Box::new(s)).into() + } +} diff --git a/src/connector/src/parser/debezium/avro_parser.rs b/src/connector/src/parser/debezium/avro_parser.rs index f7841f69f71a1..5281ba63533c7 100644 --- a/src/connector/src/parser/debezium/avro_parser.rs +++ b/src/connector/src/parser/debezium/avro_parser.rs @@ -245,7 +245,7 @@ mod tests { "type": "int" }], "connect.name": "dbserver1.inventory.customers.Key" -} +} "#; let key_schema = Schema::parse_str(key_schema_str).unwrap(); let names: Vec = avro_schema_to_column_descs(&key_schema) diff --git a/src/docs/development/how-to-write-a-rpc-service.md b/src/docs/development/how-to-write-a-rpc-service.md index 8314f2e47ee89..bbb12e50eae8c 100644 --- a/src/docs/development/how-to-write-a-rpc-service.md +++ b/src/docs/development/how-to-write-a-rpc-service.md @@ -6,7 +6,7 @@ Quick example on how to write a service. Add your service definition under `src//src/rcp/service/.rs`, e.g. `src/meta/src/rpc/service/health_service.rs`. -```rust +```rust pub struct HealthServiceImpl {} impl HealthServiceImpl { pub fn new() -> Self { @@ -56,10 +56,10 @@ service Health { Make sure to lint your file using [buf](https://docs.buf.build/installation). -## Use service +## Use service -Add your module in `src/prost/src/lib.rs`, like so: +Add your module in `src/prost/src/lib.rs`, like so: ```rust #[rustfmt::skip] @@ -71,7 +71,7 @@ Add your proto file `"health"` in `src/prost/build.rs`. Add the module in `src/meta/src/rpc/service/mod.rs`. 
-Use your service in `src/meta/src/rpc/server.rs`, like +Use your service in `src/meta/src/rpc/server.rs`, like ```rust let health_srv = HealthServiceImpl::new(); diff --git a/src/frontend/planner_test/tests/testdata/input/ch_benchmark.yaml b/src/frontend/planner_test/tests/testdata/input/ch_benchmark.yaml index 1873b5e44c8a0..61c72961b0df8 100644 --- a/src/frontend/planner_test/tests/testdata/input/ch_benchmark.yaml +++ b/src/frontend/planner_test/tests/testdata/input/ch_benchmark.yaml @@ -82,7 +82,7 @@ o_all_local INT, PRIMARY KEY(o_w_id, o_d_id, o_id) ); - + create table order_line ( ol_o_id INT, ol_d_id INT, diff --git a/src/frontend/planner_test/tests/testdata/input/emit_on_window_close.yaml b/src/frontend/planner_test/tests/testdata/input/emit_on_window_close.yaml index 04f86ac1c7d79..23ac9768a11a1 100644 --- a/src/frontend/planner_test/tests/testdata/input/emit_on_window_close.yaml +++ b/src/frontend/planner_test/tests/testdata/input/emit_on_window_close.yaml @@ -37,4 +37,3 @@ from t WITH (connector = 'blackhole'); expected_outputs: - explain_output - \ No newline at end of file diff --git a/src/frontend/src/handler/alter_system.rs b/src/frontend/src/handler/alter_system.rs index 9b9a0d6955d2f..91f8cff23c0d4 100644 --- a/src/frontend/src/handler/alter_system.rs +++ b/src/frontend/src/handler/alter_system.rs @@ -44,12 +44,12 @@ pub async fn handle_alter_system( if let Some(params) = params { if params.barrier_interval_ms() >= NOTICE_BARRIER_INTERVAL_MS { builder = builder.notice( - format!("Barrier interval is set to {} ms >= {} ms. This can hurt freshness and potentially cause OOM.", + format!("Barrier interval is set to {} ms >= {} ms. This can hurt freshness and potentially cause OOM.", params.barrier_interval_ms(), NOTICE_BARRIER_INTERVAL_MS)); } if params.checkpoint_frequency() >= NOTICE_CHECKPOINT_FREQUENCY { builder = builder.notice( - format!("Checkpoint frequency is set to {} >= {}. This can hurt freshness and potentially cause OOM.", + format!("Checkpoint frequency is set to {} >= {}. This can hurt freshness and potentially cause OOM.", params.checkpoint_frequency(), NOTICE_CHECKPOINT_FREQUENCY)); } } diff --git a/src/frontend/src/optimizer/plan_node/predicate_pushdown.rs b/src/frontend/src/optimizer/plan_node/predicate_pushdown.rs index a7c0036d7da2c..165d4ade25297 100644 --- a/src/frontend/src/optimizer/plan_node/predicate_pushdown.rs +++ b/src/frontend/src/optimizer/plan_node/predicate_pushdown.rs @@ -31,7 +31,7 @@ pub trait PredicatePushdown { /// /// 1. those can't be pushed down. We just create a `LogicalFilter` for them above the current /// `PlanNode`. i.e., - /// + /// /// ```ignore /// LogicalFilter::create(self.clone().into(), predicate) /// ``` diff --git a/src/meta/src/backup_restore/README.md b/src/meta/src/backup_restore/README.md index 02eb2f6ef6b38..9fee884ba54fc 100644 --- a/src/meta/src/backup_restore/README.md +++ b/src/meta/src/backup_restore/README.md @@ -16,6 +16,6 @@ backup-restore 4. Config meta service cluster to use the new meta store. ### Caveat -The meta service backup/recovery procedure **doesn't** replicate SSTs in object store. +The meta service backup/recovery procedure **doesn't** replicate SSTs in object store. So always make sure the underlying SST object store is writable to at most one running cluster at any time. Otherwise, the SST object store will face the risk of data corruption. 
\ No newline at end of file diff --git a/src/meta/src/hummock/manager/mod.rs b/src/meta/src/hummock/manager/mod.rs index 0bd943680720a..d4b7e9c605f9e 100644 --- a/src/meta/src/hummock/manager/mod.rs +++ b/src/meta/src/hummock/manager/mod.rs @@ -2740,7 +2740,7 @@ async fn write_exclusive_cluster_id( Ok(metadata) => metadata, Err(_) => { return Err(ObjectError::internal( - "Fail to access remote object storage, + "Fail to access remote object storage, please check if your Access Key and Secret Key are configured correctly. ", ) .into()) diff --git a/src/meta/src/rpc/server.rs b/src/meta/src/rpc/server.rs index 8504cc6eef88d..d846b2af8f30d 100644 --- a/src/meta/src/rpc/server.rs +++ b/src/meta/src/rpc/server.rs @@ -348,8 +348,8 @@ pub async fn start_service_as_election_leader( let data_directory = system_params_reader.data_directory(); if !is_correct_data_directory(data_directory) { return Err(MetaError::system_param(format!( - "The data directory {:?} is misconfigured. - Please use a combination of uppercase and lowercase letters and numbers, i.e. [a-z, A-Z, 0-9]. + "The data directory {:?} is misconfigured. + Please use a combination of uppercase and lowercase letters and numbers, i.e. [a-z, A-Z, 0-9]. The string cannot start or end with '/', and consecutive '/' are not allowed. The data directory cannot be empty and its length should not exceed 800 characters.", data_directory diff --git a/src/risedevtool/src/config_gen/grafana_gen.rs b/src/risedevtool/src/config_gen/grafana_gen.rs index 4683bc045f061..2eded78ad6bc9 100644 --- a/src/risedevtool/src/config_gen/grafana_gen.rs +++ b/src/risedevtool/src/config_gen/grafana_gen.rs @@ -186,7 +186,7 @@ providers: folderUid: '' type: file disableDeletion: false - updateIntervalSeconds: 60 + updateIntervalSeconds: 60 allowUiUpdates: true options: path: {s3_dashboard_path} diff --git a/src/risedevtool/src/config_gen/kafka_gen.rs b/src/risedevtool/src/config_gen/kafka_gen.rs index 5755442a56141..472a7a43b5a77 100644 --- a/src/risedevtool/src/config_gen/kafka_gen.rs +++ b/src/risedevtool/src/config_gen/kafka_gen.rs @@ -59,7 +59,7 @@ broker.id={kafka_broker_id} ############################# Socket Server Settings ############################# -# The address the socket server listens on. It will get the value returned from +# The address the socket server listens on. It will get the value returned from # java.net.InetAddress.getCanonicalHostName() if not configured. # FORMAT: # listeners = listener_name://host_name:port @@ -67,7 +67,7 @@ broker.id={kafka_broker_id} # listeners = PLAINTEXT://your.host.name:9092 listeners=PLAINTEXT://{kafka_listen_host}:{kafka_port} -# Hostname and port the broker will advertise to producers and consumers. If not set, +# Hostname and port the broker will advertise to producers and consumers. If not set, # it uses the value for "listeners" if configured. Otherwise, it will use the value # returned from java.net.InetAddress.getCanonicalHostName(). 
advertised.listeners=PLAINTEXT://{kafka_advertise_host}:{kafka_port} diff --git a/src/risedevtool/src/config_gen/prometheus_gen.rs b/src/risedevtool/src/config_gen/prometheus_gen.rs index 01caa0ca1aa34..aa6422416a31f 100644 --- a/src/risedevtool/src/config_gen/prometheus_gen.rs +++ b/src/risedevtool/src/config_gen/prometheus_gen.rs @@ -130,7 +130,7 @@ scrape_configs: - job_name: meta static_configs: - targets: [{meta_node_targets}] - + - job_name: minio metrics_path: /minio/v2/metrics/cluster static_configs: diff --git a/src/risedevtool/src/config_gen/zookeeper_gen.rs b/src/risedevtool/src/config_gen/zookeeper_gen.rs index 89b695ffd90de..90518bb9623e0 100644 --- a/src/risedevtool/src/config_gen/zookeeper_gen.rs +++ b/src/risedevtool/src/config_gen/zookeeper_gen.rs @@ -34,9 +34,9 @@ impl ZooKeeperGen { # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at -# +# # http://www.apache.org/licenses/LICENSE-2.0 -# +# # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. diff --git a/src/risedevtool/welcome.sh b/src/risedevtool/welcome.sh index 56379ea9454f9..bf7402b224c71 100755 --- a/src/risedevtool/welcome.sh +++ b/src/risedevtool/welcome.sh @@ -1,12 +1,12 @@ #!/usr/bin/env bash cat < HashJoinExecutor, ) -> impl Stream> + '_ { Self::eq_join_oneside::<{ SideType::Left }>(args) } - /// Used to forward `eq_join_oneside` to show join side in stack. + /// Used to forward `eq_join_oneside` to show join side in stack. fn eq_join_right( args: EqJoinArgs<'_, K, S>, ) -> impl Stream> + '_ { diff --git a/src/stream/src/executor/wrapper/update_check.rs b/src/stream/src/executor/wrapper/update_check.rs index f14366d55550d..f525a7329c7de 100644 --- a/src/stream/src/executor/wrapper/update_check.rs +++ b/src/stream/src/executor/wrapper/update_check.rs @@ -70,9 +70,9 @@ mod tests { let (mut tx, source) = MockSource::channel(Default::default(), vec![]); tx.push_chunk(StreamChunk::from_pretty( " I - U- 114 + U- 114 U- 514 - U+ 1919 + U+ 1919 U+ 810", )); @@ -103,7 +103,7 @@ mod tests { let (mut tx, source) = MockSource::channel(Default::default(), vec![]); tx.push_chunk(StreamChunk::from_pretty( " I - U- 114 + U- 114 U+ 514 U- 1919810", )); diff --git a/src/tests/e2e_extended_mode/README.md b/src/tests/e2e_extended_mode/README.md index a4b5f81840368..07e67d68e3a0c 100644 --- a/src/tests/e2e_extended_mode/README.md +++ b/src/tests/e2e_extended_mode/README.md @@ -3,12 +3,12 @@ This is a program used for e2e test in extended mode. ## What is difference between it and extended_mode/*.slt in e2e_test For e2e test in extended query mode, there are two thing we can't test in sqllogitest -1. bind parameter -2. max row number +1. bind parameter +2. max row number 3. cancel query See [detail](https://www.postgresql.org/docs/15/protocol-flow.html#PROTOCOL-FLOW-PIPELINING:~:text=Once%20a%20portal,count%20is%20ignored) -So before sqllogictest supporting these, we test these function in this program. +So before sqllogictest supporting these, we test these function in this program. 
In the future, we may merge it to e2e_text/extended_query diff --git a/src/tests/regress/README.md b/src/tests/regress/README.md index a9c90fc778529..49e5b2fad22ba 100644 --- a/src/tests/regress/README.md +++ b/src/tests/regress/README.md @@ -1,13 +1,13 @@ -This program is a rewrite of [postgres regress test framework](https://github.com/postgres/postgres/tree/master/src/test/regress) +This program is a rewrite of [postgres regress test framework](https://github.com/postgres/postgres/tree/master/src/test/regress) in rust. # How it works -* When it starts up, it will do some initialization work, e.g. setting up environment variables, creating output +* When it starts up, it will do some initialization work, e.g. setting up environment variables, creating output directories. * After initialization, it reads a schedule file, where each line describes a parallel schedule, e.g. test cases that run in parallel. You can find an example [here](https://github.com/postgres/postgres/blob/master/src/test/regress/parallel_schedule). -* For each test case, it starts a psql process which executes sqls in input file, and compares outputs of psql with +* For each test case, it starts a psql process which executes sqls in input file, and compares outputs of psql with expected output. For example, for a test case named `boolean`, here is its [input file](data/sql/boolean.sql) and [expected out](data/expected/boolean.out). @@ -67,5 +67,5 @@ The `data` folder contains test cases migrated from [postgres](https://github.co # Caveat -This regress test is executed for both Postgres and RisingWave. As the result set of a query without `order by` -is order-unaware, we need to interpret the output file by ourselves. +This regress test is executed for both Postgres and RisingWave. As the result set of a query without `order by` +is order-unaware, we need to interpret the output file by ourselves. diff --git a/src/tests/sqlsmith/README.md b/src/tests/sqlsmith/README.md index 7e97c47e17ff5..495276b096585 100644 --- a/src/tests/sqlsmith/README.md +++ b/src/tests/sqlsmith/README.md @@ -24,7 +24,7 @@ This test will be run as a unit test: ## Running with Madsim -You can check [`ci/scripts/build-simulation.sh`](../../../ci/scripts/build-simulation.sh) +You can check [`ci/scripts/build-simulation.sh`](../../../ci/scripts/build-simulation.sh) for the latest madsim build instructions. You can adjust the sample size. Below `100` batch and stream queries are generated (`--sqlsmith 100`). diff --git a/src/tests/sqlsmith/develop.md b/src/tests/sqlsmith/develop.md index 2c017242ed924..ad48e5b3ad322 100644 --- a/src/tests/sqlsmith/develop.md +++ b/src/tests/sqlsmith/develop.md @@ -68,7 +68,7 @@ Query execution and generation happen in step. Here's an overview of it. 4. Generate `UPDATE / DELETE` statements and Update base tables with them. If no PK we will just do `DELETE` for some rows and `INSERT` back statements. 5. Generate and run batch queries e.g. `SELECT * FROM t`, `WITH w AS ... SELECT * FROM w`. -6. Generate and run stream queries. +6. Generate and run stream queries. These are immediately removed after they are successfully created. 7. Drop base materialized views. 8. Drop base tables. @@ -104,7 +104,7 @@ This generates either: 4. Aggregates. 5. Casts. 6. Other kinds of expressions e.g. `CASE ... WHEN`. - + We mentioned that we call `gen_expr` with a **specific type**. That should be the return type of calling functions and aggregates. It should also be the cast target type. 
diff --git a/src/utils/workspace-config/README.md b/src/utils/workspace-config/README.md index 4efb5ed45d0d1..8cfb70c36e1e5 100644 --- a/src/utils/workspace-config/README.md +++ b/src/utils/workspace-config/README.md @@ -1,6 +1,6 @@ # How this magic works -This crate is to configure the features of some dependencies: +This crate is to configure the features of some dependencies: - [static log verbosity level](https://docs.rs/tracing/latest/tracing/level_filters/index.html#compile-time-filters). This is forced. - static link some dependencies e.g., OpenSSL. This is optional and controlled by feature flag `rw-static-link`
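
As a quick reference (not part of the patch itself), the sketch below shows how the new trailing-whitespace check introduced in this diff might be invoked locally. The path and the `-f`/`--fix` flag are taken from the script added under `scripts/check/` above; note that the in-place `sed -i ''` form the script uses is the BSD/macOS variant of `sed`.

```shell
# Report lines with trailing whitespace in git-tracked text files
# (the script exits 1 if any are found).
./scripts/check/check-trailing-spaces.sh

# Strip trailing whitespace in place (relies on BSD/macOS `sed -i ''` as written).
./scripts/check/check-trailing-spaces.sh --fix
```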