
Iceberg-rust Delete support #735

Open
2 tasks
Fokko opened this issue Nov 28, 2024 · 2 comments

Comments

@Fokko
Contributor

Fokko commented Nov 28, 2024

For deletes, we need a broader discussion of where the responsibilities lie between iceberg-rust and the query engine.

On the read side, Tasks are passed to the query engine. I like this nice, clean boundary between the engine and the library, and I would love to have a similar API for deletes. As on the read path, the library would come up with a set of tasks that are passed to the query engine, which writes out the files and returns the DataFile with all the statistics.

The current focus of #700 is adding DataFiles, which is reasonable for engines to take control over. As a next step, we need to add delete operations. Here it gets more complicated: a delete can sometimes be performed purely on Iceberg metadata (e.g. dropping a partition), but it can also require rewriting certain Parquet files. In that case, the old DataFile is dropped, and one or more DataFiles are added once the engine has rewritten the Parquet files, excluding the rows that need to be dropped.
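
To make that boundary concrete, here is a minimal sketch of what such a task-based delete API could look like. All of these types are hypothetical — nothing like `DeleteTask` exists in iceberg-rust today; only `DataFile` and `BoundPredicate` are assumed to be the existing `iceberg` crate types, and their usage here is purely illustrative.

```rust
// Hypothetical sketch only: none of these enums exist in iceberg-rust.
// DataFile and BoundPredicate are assumed to be the existing
// iceberg::spec::DataFile and iceberg::expr::BoundPredicate types.
use iceberg::expr::BoundPredicate;
use iceberg::spec::DataFile;

/// A unit of delete work the library hands to the query engine,
/// mirroring how scan tasks are handed out on the read path.
enum DeleteTask {
    /// The delete resolves purely on Iceberg metadata: the file
    /// (or the whole partition it belongs to) is simply dropped.
    MetadataOnly { data_file: DataFile },
    /// The engine must rewrite the Parquet file without the
    /// matching rows and report the result back.
    RewriteFile {
        data_file: DataFile,
        predicate: BoundPredicate,
    },
}

/// What the engine reports back after handling a rewrite task.
enum RewriteOutcome {
    /// No rows actually matched; the original file is kept.
    Unchanged,
    /// The old DataFile is dropped and these new files (with fresh
    /// statistics) are added in its place.
    Rewritten(Vec<DataFile>),
}
```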

When doing a delete, the following steps are taken (a sketch of the flow follows the list):

  • First, based on the partition predicates, it is determined whether a whole partition can be dropped. If so, the whole manifest is read and marked as deleted.
  • Second, the manifest is opened, and based on the statistics of each manifest entry we determine whether the whole file can be deleted; if so, it is marked as deleted.
  • Third, we pass the file to the query engine to check whether it needs to be rewritten. The engine can leverage the Parquet bloom filters to see if a rewrite is needed; if so, it can go over the row groups to check which of them are affected, and then start rewriting the file. It may turn out that the original file can be kept (because no rows are deleted); otherwise we drop the old manifest entry and add a new one that excludes the records we want to drop.
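
A rough sketch of that three-step planning loop, reusing the hypothetical `DeleteTask` from above. The evaluator helpers (`strict_partition_eval`, `strict_metrics_eval`, `might_match`) and the manifest accessors are illustrative stand-ins, not existing iceberg-rust APIs:

```rust
// Hypothetical planning loop; all helper functions are illustrative
// stand-ins, not existing iceberg-rust APIs.
fn plan_delete(manifests: &[ManifestFile], predicate: &BoundPredicate) -> Vec<DeleteTask> {
    let mut tasks = Vec::new();
    for manifest in manifests {
        // Step 1: if the partition-level check proves that every row
        // in the partition matches the delete predicate, the manifest
        // is read and all of its entries are marked as deleted.
        if strict_partition_eval(manifest, predicate) {
            mark_manifest_deleted(manifest);
            continue;
        }
        // Step 2: open the manifest and use each entry's column
        // statistics to decide per file.
        for entry in read_manifest(manifest) {
            if strict_metrics_eval(entry.data_file(), predicate) {
                // Every row matches: drop the file on metadata alone.
                tasks.push(DeleteTask::MetadataOnly {
                    data_file: entry.data_file().clone(),
                });
            } else if might_match(entry.data_file(), predicate) {
                // Step 3: hand the file to the engine, which can use
                // Parquet bloom filters and row-group stats to decide
                // whether a rewrite is actually needed.
                tasks.push(DeleteTask::RewriteFile {
                    data_file: entry.data_file().clone(),
                    predicate: predicate.clone(),
                });
            }
            // Otherwise no row can match and the file stays untouched.
        }
    }
    tasks
}
```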

As you might notice, this is pretty similar to the read path, except that we need to invert the evaluators. For the read path, we check for ROWS_MIGHT_MATCH to include a file in the query plan. For the delete use case, we need to determine the opposite, namely ROWS_CANNOT_MATCH. Therefore we need to extend the evaluators (illustrated after the list):

  • Strict projection needs to be added to the transforms.
  • A Strict Metrics Evaluator needs to be added to determine whether the predicate cannot match.
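
To illustrate the inversion on a single predicate, `col < literal`, using min/max column statistics (and ignoring null counts for brevity): the inclusive read-path check answers "might any row match?", its negation gives ROWS_CANNOT_MATCH (the file is untouched by the delete), and the strict check answers "must every row match?" (the whole file can be dropped). The function names here are hypothetical:

```rust
// Hypothetical min/max checks for the predicate `col < literal`.
struct ColumnStats {
    min: i64,
    max: i64,
}

// Read path (inclusive): some row MIGHT match if even the smallest
// value is below the literal, so the file goes into the scan plan.
fn rows_might_match_lt(stats: &ColumnStats, literal: i64) -> bool {
    stats.min < literal
}

// Delete path, negated inclusive result: no row CAN match, so the
// file is untouched by the delete.
fn rows_cannot_match_lt(stats: &ColumnStats, literal: i64) -> bool {
    !rows_might_match_lt(stats, literal)
}

// Delete path, strict: every row MUST match only if even the largest
// value is below the literal; then the whole file can be dropped on
// metadata alone.
fn rows_must_match_lt(stats: &ColumnStats, literal: i64) -> bool {
    stats.max < literal
}
```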

Once this is ready, we can incorporate it into the write path, and also easily add the update operation (append + delete); a sketch of that composition follows.
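
For completeness, a hypothetical sketch of how an update could compose the two primitives inside one atomic commit. None of these transaction methods are real iceberg-rust APIs; all names are illustrative:

```rust
// Hypothetical: an update is a delete of the matching rows plus an
// append of their rewritten versions, committed as a single snapshot.
fn update(table: &mut Table, predicate: BoundPredicate, rewritten: Vec<DataFile>) {
    let mut txn = table.new_transaction();
    txn.delete(predicate);   // plans DeleteTasks as described above
    txn.append(rewritten);   // adds the rewritten DataFiles
    txn.commit();            // one atomic snapshot for the update
}
```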

Fokko pinned this issue Nov 28, 2024
@ZENOTME
Contributor

ZENOTME commented Dec 4, 2024

I'd like to help with this. I will send a PR about Strict projection later.

@jonathanc-n
Contributor

I would also like to look into this; I will probably be working on the Strict Metrics Evaluator.
