
Initial draft for timestamping DLO generation and compaction. #182

Open · wants to merge 4 commits into main
Conversation

anjagruenheid (Contributor)

Summary

[Issue] Briefly discuss the summary of the changes made in this pull request in 2-3 lines.

Changes

  • Client-facing API Changes
  • Internal API Changes
  • Bug Fixes
  • New Features
  • Performance Improvements
  • Code Style
  • Refactoring
  • Documentation
  • Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

Testing Done

  • Manually Tested on local docker setup. Please include commands run, and their output.
  • Added new tests for the changes made.
  • Updated existing tests to reflect the changes made.
  • No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Additional Information

  • Breaking Changes
  • Deprecations
  • Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

@anjagruenheid (Contributor, Author)

As discussed, the timestamp for the DLO strategy is added as part of the strategy in the table properties. For the compaction app, I've added a separate property ('write.data-layout.compaction') and used the same mechanism to alter the table properties. Let me know what you think.

@sumedhsakdeo (Collaborator) left a comment

Code looks great. One clarifying question: should we skip compaction when the strategy computation timestamp is smaller than the last compaction timestamp, and should that go into this PR or the next one?
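The skip check being discussed could look roughly like this; a minimal sketch assuming both timestamps are epoch millis already read from table properties (`CompactionGate` and `shouldSkip` are illustrative names, not from the PR):

```java
// Hedged sketch, not the PR's code: skip compaction when the current strategy
// was computed before the last compaction finished, i.e. the strategy is stale.
public final class CompactionGate {
  private CompactionGate() {}

  /** Returns true if compaction should be skipped because the strategy is stale. */
  public static boolean shouldSkip(long strategyComputedAtMillis, long lastCompactionMillis) {
    return strategyComputedAtMillis < lastCompactionMillis;
  }

  public static void main(String[] args) {
    System.out.println(CompactionGate.shouldSkip(1_000L, 2_000L)); // stale strategy -> true
    System.out.println(CompactionGate.shouldSkip(3_000L, 2_000L)); // fresh strategy -> false
  }
}
```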

@@ -58,6 +65,23 @@ protected void runInner(Operations ops) {
fileGroupRewriteResult.rewrittenDataFilesCount(),
fileGroupRewriteResult.rewrittenBytesCount());
}
// Add compaction timestamp to table properties.
CompactionEvent event =
Collaborator

@teamurko can you confirm if this will be run after all partitions are compacted in Table or after each partition is compacted in a Table?

Collaborator
it will run after compaction is done on the whole table

@teamurko (Collaborator) left a comment

Thank you for the quick PR, and sorry for the delayed response @anjagruenheid.

Let's wait until we iterate on the e2e design; I hope we can land on something quickly that clarifies how we support these timestamps.

The idea behind these timestamps is to tell the generator when to regenerate a new strategy, and to tell the scheduler whether the strategy has been applied yet. Based on that, I thought the apply timestamp should be kept in the same entity as the create timestamp, but I'm not sure how that would work when we have multiple strategies per table, e.g. at partition (range) scope.
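The entity shape being proposed might look like the following; this is an illustrative sketch of the discussion only (all names are assumptions, not code from the PR), keeping the apply timestamp next to the create timestamp with an explicit scope so multiple strategies per table can coexist:

```java
// Illustrative data shape, not the PR's code: one strategy entity carries both
// the create (generation) timestamp and the apply timestamp, plus a scope so
// table-level and partition-range strategies can coexist on the same table.
public final class StrategyTimestamps {
  private final String scope;          // e.g. "table" or a partition-range id
  private final long createdAtMillis;  // when the generator produced the strategy
  private final Long appliedAtMillis;  // null until the scheduler applies it

  public StrategyTimestamps(String scope, long createdAtMillis, Long appliedAtMillis) {
    this.scope = scope;
    this.createdAtMillis = createdAtMillis;
    this.appliedAtMillis = appliedAtMillis;
  }

  /** True while the strategy has been generated but not yet applied. */
  public boolean isPending() {
    return appliedAtMillis == null;
  }

  public String scope() {
    return scope;
  }

  public long createdAtMillis() {
    return createdAtMillis;
  }
}
```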

Comment on lines +76 to +84
Gson gson = new GsonBuilder().create();
Type type = new TypeToken<CompactionEvent>() {}.getType();
ops.spark()
.sql(
String.format(
"ALTER TABLE %s SET TBLPROPERTIES ('%s' = '%s')",
ops.getTable(fqtn),
DATA_LAYOUT_COMPACTION_PROPERTY_KEY,
StringEscapeUtils.escapeJava(gson.toJson(event, type))));
anjagruenheid (Contributor, Author)

Hmm, but it's not related to the strategy, right? It's a compaction event. Should I push this functionality into a separate util?
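If it were pulled into a separate util, one option is to isolate just the statement-building half so it can be unit-tested without Spark; a hedged sketch under that assumption (`TablePropsUtil` and `buildSetPropertySql` are hypothetical names, and the caller would still serialize and escape the event JSON, e.g. with Gson and `StringEscapeUtils.escapeJava`, before passing it in):

```java
import java.util.Locale;

// Hypothetical util sketch, not the PR's code: builds the ALTER TABLE statement
// that persists a serialized event under a table property key. The caller is
// expected to pass JSON that is already escaped for embedding in a SQL string.
public final class TablePropsUtil {
  private TablePropsUtil() {}

  public static String buildSetPropertySql(String fqtn, String key, String escapedJson) {
    return String.format(
        Locale.ROOT, "ALTER TABLE %s SET TBLPROPERTIES ('%s' = '%s')", fqtn, key, escapedJson);
  }

  public static void main(String[] args) {
    System.out.println(
        buildSetPropertySql("db.tbl", "write.data-layout.compaction", "{\\\"ts\\\":123}"));
  }
}
```

The Spark-facing caller would then just run `ops.spark().sql(...)` on the returned string, keeping the serialization and SQL-construction logic testable in isolation.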
