PyIceberg appending data creates snapshots incompatible with Athena/Spark #1424

Closed
@Samreay

Description

Apache Iceberg version

0.8.0

Please describe the bug 🐞

We append data to our Iceberg table using the Table.overwrite function, and this writes snapshots whose IDs cannot be parsed by Athena's OPTIMIZE command or by Spark:

java.lang.IllegalArgumentException: Cannot parse to a long value: snapshot-id: 9223372036854775808
        at org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkArgument(Preconditions.java:446)
        at org.apache.iceberg.util.JsonUtil.getLong(JsonUtil.java:139)
        at org.apache.iceberg.SnapshotParser.fromJson(SnapshotParser.java:116)
        at org.apache.iceberg.TableMetadataParser.fromJson(TableMetadataParser.java:478)

Java's long max value is 9223372036854775807.
PyIceberg (or something under the hood; it might not be PyIceberg itself) has created a snapshot with ID 9223372036854775808, literally 1 + MAX_VALUE.
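The offending value is not arbitrary: it is exactly 2**63. A minimal sketch of the arithmetic (an illustration of how such a value can arise when generating IDs in Python, not a confirmed diagnosis of the PyIceberg code path):

```python
# Java's long range is [-2**63, 2**63 - 1]; Python ints are unbounded.
JAVA_LONG_MAX = 2**63 - 1   # 9223372036854775807
JAVA_LONG_MIN = -(2**63)

# The snapshot ID rejected by Athena/Spark is exactly one past Long.MAX_VALUE:
bad_id = 9223372036854775808
print(bad_id == JAVA_LONG_MAX + 1)   # True

# One way to land on precisely this value: taking abs() of a random
# signed 64-bit integer. In Python, abs(-2**63) == 2**63, which is
# representable as a Python int but overflows a Java long.
print(bad_id == abs(JAVA_LONG_MIN))  # True
```

A generator that must stay in range could mask the random value to 63 bits (e.g. `snapshot_id & JAVA_LONG_MAX`) rather than negating it; that is a hypothetical mitigation, not the project's actual fix.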

Willingness to contribute

  • I can contribute a fix for this bug independently
  • I would be willing to contribute a fix for this bug with guidance from the Iceberg community
  • I cannot contribute a fix for this bug at this time
