Conversation

@greyp9 (Contributor) commented Oct 2, 2025

Summary

NIFI-15027

Tracking

Please complete the following tracking steps prior to pull request creation.

Issue Tracking

Pull Request Tracking

  • Pull Request title starts with Apache NiFi Jira issue number, such as NIFI-00000
  • Pull Request commit message starts with Apache NiFi Jira issue number, such as NIFI-00000

Pull Request Formatting

  • Pull Request based on current revision of the main branch
  • Pull Request refers to a feature branch with one commit containing changes

Verification

Please indicate the verification steps performed prior to pull request creation.

Build

  • Build completed using ./mvnw clean install -P contrib-check
    • JDK 21
    • JDK 25

Licensing

  • New dependencies are compatible with the Apache License 2.0 according to the License Policy
  • New dependencies are documented in applicable LICENSE and NOTICE files

Documentation

  • Documentation formatting appears as expected in rendered files

@pvillard31 pvillard31 changed the title CDPDFX-15027 - adjust AvroWriter handling of invalid payloads; ConsumeKafka impact NIFI-15027 - adjust AvroWriter handling of invalid payloads; ConsumeKafka impact Oct 2, 2025
try {
    dataFileWriter.append(rec);
} catch (final DataFileWriter.AppendWriteException e) {
    throw new IOException(e);
}
@jrsteinebrey (Contributor) commented Oct 2, 2025

Thanks for working on this ticket.
This changed line breaks other writeRecord() callers that explicitly catch DataFileWriter.AppendWriteException, such as this example:
https://github.com/jrsteinebrey/nifi/blob/b0f29ef94e95be8160ec2cd5fbdfbef373451f90/nifi-extension-bundles/nifi-extension-utils/nifi-database-utils/src/main/java/org/apache/nifi/util/db/JdbcCommon.java#L466
Those callers would need to be changed to catch IOException instead of AppendWriteException.
Instead of this change here in WriteAvroResultWithSchema.java,
I suggest that you consider changing the Kafka code here
https://github.com/apache/nifi/blob/1457950040d0fe86ade53770def6c5a95b6f0252/nifi-extension-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/kafka/processors/consumer/convert/AbstractRecordStreamKafkaMessageConverter.java#L112-L120
to catch (Exception) instead of specific exception classes. Then the ticket is resolved, and any future exception classes also route to failure.
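The broad-catch suggestion above can be sketched as follows. The class and method names here are simplified stand-ins, not NiFi's actual converter or Avro's API; the point is only that catch (Exception) routes both checked IOException wrappers and writer-specific runtime exceptions (current or future) to failure.

```java
public class BroadCatchSketch {

    // Stand-in for a writer-specific runtime exception such as
    // Avro's DataFileWriter.AppendWriteException.
    static class AppendWriteException extends RuntimeException {
        AppendWriteException(final String message) {
            super(message);
        }
    }

    // Stand-in for the record-writing step inside the converter loop;
    // an empty payload simulates invalid data.
    static void writeRecord(final byte[] payload) {
        if (payload.length == 0) {
            throw new AppendWriteException("invalid payload");
        }
    }

    // With catch (Exception), any exception class thrown by the writer
    // routes the message to the failure path.
    static String convert(final byte[] payload) {
        try {
            writeRecord(payload);
            return "success";
        } catch (final Exception e) {
            return "failure";
        }
    }

    public static void main(final String[] args) {
        System.out.println(convert(new byte[0]));    // failure
        System.out.println(convert(new byte[] {1})); // success
    }
}
```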

@greyp9 (Author) replied:

That's reasonable; thanks.

I'm not familiar with the reason for the "catch all" in AbstractRecordStreamKafkaMessageConverter.

To me, the problem seems to be that the Avro writer implementation throws a particular exception (class) that is not visible in the classpath of the Kafka implementation. So we can't act based on that particular exception.

Another variation would be for AvroWriter to throw MalformedRecordException instead of IOException, as that better conveys the particular problem (bad data).

There are potential side effects to either of these potential paths forward; hopefully others in the community will chime in.
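The MalformedRecordException variation mentioned above can be sketched like this. MalformedRecordException here is a local stand-in for org.apache.nifi.serialization.MalformedRecordException, and the writer shape is a simplified assumption, not NiFi's actual AvroWriter: the idea is that the writer translates the implementation-specific runtime exception into a checked exception that names the real problem (bad data), so callers can distinguish it from I/O failures.

```java
import java.io.IOException;

public class MalformedRecordSketch {

    // Stand-in for org.apache.nifi.serialization.MalformedRecordException.
    static class MalformedRecordException extends Exception {
        MalformedRecordException(final String message, final Throwable cause) {
            super(message, cause);
        }
    }

    // Stand-in for the Avro append step; an empty payload simulates
    // data the serializer cannot handle.
    static void append(final byte[] payload) {
        if (payload.length == 0) {
            throw new IllegalStateException("cannot serialize empty payload");
        }
    }

    // Writer step: wrap the implementation-specific runtime exception in a
    // checked exception that conveys "bad data" rather than "I/O problem".
    static void writeRecord(final byte[] payload) throws IOException, MalformedRecordException {
        try {
            append(payload);
        } catch (final RuntimeException e) {
            throw new MalformedRecordException("Invalid record payload", e);
        }
    }

    public static void main(final String[] args) {
        try {
            writeRecord(new byte[0]);
            System.out.println("written");
        } catch (final MalformedRecordException e) {
            // Callers can now route bad data differently from I/O failures.
            System.out.println("malformed: " + e.getMessage());
        } catch (final IOException e) {
            System.out.println("io failure");
        }
    }
}
```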
