
ticdc v4.0.9

Released by @sre-bot on 18 Dec 09:22 (commit 04e0284)

Improvements

  • Add an alert that fires when TiKV's Hibernate Region feature is enabled #1120
  • Reduce memory usage in the schema storage #1127
  • Add the unified sorter feature, which accelerates replication when the data size of the incremental scan is large (experimental) #1122
  • Support configuring the maximum message size and the maximum batch size of TiCDC Open Protocol messages (Kafka sink only) #1079; see the example after this list
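
For reference, a minimal sketch of how these two options are typically passed when creating a changefeed, assuming the standard cdc cli flags and Kafka sink URI parameters (--sort-engine, max-message-bytes, max-batch-size); the PD address, broker address, topic name, and numeric values below are placeholders, not recommendations.

    cdc cli changefeed create \
        --pd=http://127.0.0.1:2379 \
        --sort-engine=unified \
        --sink-uri="kafka://127.0.0.1:9092/ticdc-topic?max-message-bytes=1048576&max-batch-size=16"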

Bug Fixes

  • Fix the issue that multiple owners might exist when the owner campaign key is deleted #1104
  • Fix a bug that TiCDC might fail to continue replicating data when a TiKV node crashes or recovers from a crash. This bug only exists in v4.0.8. #1198
  • Fix the issue that the metadata is repeatedly flushed to etcd before a table is initialized #1191
  • Fix a replication interruption caused by early GC or delayed TableInfo updates when the schema storage caches TiDB tables #1114
  • Fix the issue that the schema storage costs too much memory when DDL operations are frequent #1127
  • Fix a goroutine leak when a changefeed is paused or stopped #1075
  • Increase the maximum retry timeout of the Kafka producer to 60 seconds to prevent replication interruption caused by service or network jitter in the downstream Kafka #1118
  • Fix a bug that the Kafka batch size does not take effect #1112
  • Fix a bug that row changes of some tables might be lost when the network between TiCDC and PD jitters and paused changefeeds are resumed at the same time #1213
  • Fix a bug that the TiCDC process might exit when the network between TiCDC and PD is not stable #1218
  • Use a singleton PD client in TiCDC and fix a bug that TiCDC accidentally closes the PD client, which blocks replication #1217
  • Fix a bug that the cdc owner might consume too much memory in the etcd watch client