PHOENIX-7566 ZK to SystemTable Sync and Event Reactor for Failover #2302
Conversation
Force-pushed ccdad34 to cf86560
    * @param fromState the previous state before the transition
    *                  can be null for initial state.
    *                  Also, can be inaccurate in case there is
    *                  connection loss to ZK and multiple state changes happen in between.
Is this true? Aren't we using persistent watchers?
Yes, in case of connection loss some events can be missed. Although this doesn't affect failover IMO, clients need to be aware of this limitation.
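To illustrate the limitation being discussed, a minimal sketch, assuming the client is built on Apache Curator and using a hypothetical refresh callback: a persistent watcher survives reconnects, but notifications that fired while the session was disconnected are not replayed, so the cache can only converge on the latest state after a RECONNECTED signal.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;

public final class ReconnectRefresher {

    /**
     * Persistent watchers are re-registered automatically after a reconnect,
     * but notifications that fired while the session was disconnected are NOT
     * replayed. Re-reading the ZNode on RECONNECTED lets the local cache
     * converge on the latest state, even though the intermediate transitions
     * (and therefore an accurate fromState) may be lost.
     */
    public static void installReconnectRefresh(CuratorFramework client, Runnable refresh) {
        client.getConnectionStateListenable().addListener((c, newState) -> {
            if (newState == ConnectionState.RECONNECTED) {
                refresh.run(); // e.g. re-read the HA group ZNode, rebuild the cached record
            }
        });
    }

    private ReconnectRefresher() { }
}
```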
    * Syncs data from ZooKeeper (source of truth) to the system table.
    * This method is called periodically to ensure consistency.
    */
    private void syncZKToSystemTable() {
Instead of syncing every time, can we check whether a sync is needed by reading the HA record from the HA group system table first, and then sync only if needed?
Done, and also added an IT (integration test) to verify this.
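For reference, a sketch of the check-before-write shape this lands on; readFromZooKeeper, readFromSystemTable, and writeToSystemTable are hypothetical stand-ins for the actual Phoenix accessors:

```java
// Hypothetical helpers stand in for the actual Phoenix accessors.
private void syncZKToSystemTable() throws SQLException {
    HAGroupStoreRecord zkRecord = readFromZooKeeper(haGroupName);     // source of truth
    HAGroupStoreRecord tableRecord = readFromSystemTable(haGroupName);
    // Write only when the system table has drifted from ZK, keeping the
    // periodic job cheap in the steady state.
    if (tableRecord == null || !tableRecord.equals(zkRecord)) {
        writeToSystemTable(zkRecord);
    }
}
```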
…checking system table record before writing
            = systemTableRecord.getClusterRole().getDefaultHAGroupState();
    Long lastSyncTimeInMs
            = defaultHAGroupState.equals(HAGroupStoreRecord.HAGroupState.ACTIVE_NOT_IN_SYNC)
                    ? System.currentTimeMillis()
If the ZNode for the HA group is not present, should we always set lastSyncTimeInMs to zero? I do not think it is safe to set it to System.currentTimeMillis(). Also, what does setting it to null mean here? Is it equivalent to setting it to zero?
This is because the ACTIVE cluster will start in the ANIS (ACTIVE_NOT_IN_SYNC) state. The writer might not be active, so we set this to the current timestamp.
Also, setting null means we are current; it's more of a convention that we adopt. @Himanshu-g81 LMK if you have any strong preference, since you are the primary user of this field.
> This is because the ACTIVE cluster will start in the ANIS state. The writer might not be active, so we set this to the current timestamp.

I think in that case it would be 0, i.e. the cluster is not in sync but we don't have an exact timestamp for when it was last in sync; we will not purge anything during compaction on the standby side, and as soon as it's in sync, this timestamp will be updated.

> Also, setting null means we are current; it's more of a convention that we adopt.

@ritegarg if the cluster is in sync, can we keep it as currentTime (and not null)? I.e. the last timestamp when the cluster was in sync is the time when the API is called, which is the current time. On the reader it's handled with the same assumption.
> i.e. the last timestamp when the cluster was in sync is the time when the API is called

This might be hard to maintain, as some time can elapse between calling the API and returning the response. It is better to keep it as 0.
I propose we adopt this convention:
- 0 (long default value) -> in sync
- -1 (explicitly added when the HAGroup ZNode is initialized) -> unknown, as the cluster started in this mode
- any other timestamp (long) -> last known sync time

Standby will observe any updates sent from Active (via the ZK listener) and will just copy the timestamp into its HAGroupStoreRecord on the standby ZK cluster. For example, when the cluster moves from AIS to ANIS, standby will listen to this change and update the local HAGroupStoreRecord in standby ZK with the same value. When the record moves back from ANIS to AIS, the standby peer will detect the transition and update the local ZK with the same value. WDYT?
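A sketch of the copy-through behavior proposed here (all method names are hypothetical stand-ins, not the actual API):

```java
// All names hypothetical; sketch of the proposed copy-through on the standby.
void onPeerRecordChanged(HAGroupStoreRecord peerRecord) {
    HAGroupStoreRecord local = readLocalRecord();
    // Mirror the active cluster's timestamp as-is into the standby's local
    // record; the standby never reinterprets or recomputes it.
    HAGroupStoreRecord updated =
            local.withLastSyncTimeInMs(peerRecord.getLastSyncTimeInMs());
    writeLocalRecord(updated);
}
```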
I am not sure I understand the value of setting anything other than zero when we do not know or cannot determine the value for lastSyncTimeInMs. What are consumers supposed to do when they see the value is -1 (instead of zero)? Regardless of whether it is zero or -1, the behavior will be the same. So let us set it to zero in all such cases.
Discussed offline; we finally decided to use 0 as the default value when the ZNode is initialized.
When we move from AIS -> ANIS, the value is updated to the beginning of the last round.
When we move from ANIS -> AIS after that, the value is retained.
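The decided convention, expressed as a sketch (the enum constants are inferred from the AIS/ANIS shorthand; actual names may differ):

```java
// AIS = ACTIVE_IN_SYNC, ANIS = ACTIVE_NOT_IN_SYNC (inferred from shorthand).
private long nextLastSyncTimeInMs(HAGroupState from, HAGroupState to,
                                  long currentValueMs, long lastRoundStartMs) {
    if (from == HAGroupState.ACTIVE_IN_SYNC
            && to == HAGroupState.ACTIVE_NOT_IN_SYNC) {
        // AIS -> ANIS: record the beginning of the last sync round.
        return lastRoundStartMs;
    }
    // ANIS -> AIS (and everything else): retain the existing value.
    // 0 remains the default written when the ZNode is initialized.
    return currentValueMs;
}
```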
    * check the state again and retry if the use case still needs it.
    * @throws SQLException when there is an error with the database operation
    */
    public void initiateFailoverOnActiveCluster(final String haGroupName)
@ritegarg are you planning to add the API to mark the cluster from STANDBY_TO_ACTIVE to ACTIVE in this PR?
@Himanshu-g81 you can use setHAGroupStatusToSync
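For anyone following along, a hypothetical usage sketch; the accessor and exact signature are assumptions, not the final API:

```java
// Assumed accessor and signature; see the PR for the actual API.
HAGroupStoreManager manager = HAGroupStoreManager.getInstance(conf);
// After the standby has caught up, flip the group back to sync; this covers
// the STANDBY_TO_ACTIVE -> ACTIVE transition asked about above.
manager.setHAGroupStatusToSync(haGroupName);
```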
        throw new IOException("HAGroupStoreManager is null "
                + "for current cluster, check configuration");
    }
    String tableName
I assume the slowness will be observed for the very first mutation on the specific HA group. When do we initialize an HA group store client? I am not worried about slowness in updating the HA group system table; however, this would be a concern if the slowness is also observed for user mutations. I think we need to initialize an HA group store client before receiving the first mutation. This may mean initiating initialization when the region server level Phoenix coproc starts, possibly asynchronously.
Created https://issues.apache.org/jira/browse/PHOENIX-7719 for this.
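A sketch of the asynchronous warm-up direction PHOENIX-7719 tracks (the class and hook names here are assumptions, not code from the PR):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Warm the HA group store client when the region-server-level coproc starts,
// so the first user mutation does not pay the initialization cost.
public final class HAGroupClientWarmup {
    private static final ExecutorService WARMUP_POOL =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "ha-group-client-warmup");
                t.setDaemon(true);
                return t;
            });

    /** Kicks off initialization without blocking coproc startup. */
    public static void warmUpAsync(Runnable initHAGroupStoreClient) {
        WARMUP_POOL.submit(initHAGroupStoreClient);
    }

    private HAGroupClientWarmup() { }
}
```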
kadirozde left a comment:
+1. Thanks!
PHOENIX-7566 ZK to SystemTable Sync and Event Reactor for Failover (apache#2302)
Summary of changes
1. Event Reactor for Failover: this reactor observes state transitions and makes state updates accordingly.
   a. Add a peer event reactor (reacts to peer state transitions)
   b. Add a local event reactor (reacts to local state transitions)
2. New flow to sync between ZK and the HA system table:
   a. ZK is the source of truth for both config and state, but only local state is maintained in ZK.
   b. Updating the system table is best effort.
   c. Regular job with jitter to sync from ZK to the system table in case of a missed update (a sketch of the jittered scheduling appears after this list).
   d. Introduced adminVersion to handle any admin updates to config.
3. Removed Degraded for reader/writer in favor of Degraded Standby.
4. In update calls, check whether the state is already updated, double checking once with the local cache and once directly against ZK. This is needed because all the HAGroupStoreManagers will try to update the state on the same trigger.
5. Added fromState to the event subscription callback. Note that fromState can be null when the ZNode is initialized.
6. Not blocking any transitions to the HA_GROUP table in IndexRegionObserver, even if we are in ACTIVE_TO_STANDBY state.
Testing
   a. E2E failover happy path
   b. E2E failover with abort
   c. Peer reaction when ACTIVE moves from Sync to Store&Forward mode and back
   d. Existing tests passing
   e. Added a test for the regular job with jitter to sync from ZK to the system table in case of a missed update.
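As referenced in item 2c above, a sketch of the jittered scheduling pattern (the interval constants are illustrative, not the PR's actual values):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Illustrative constants; the PR's actual interval and jitter bounds differ.
public final class JitteredSyncJob {
    private static final long BASE_INTERVAL_MS = 60_000L;
    private static final long MAX_JITTER_MS = 10_000L;

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /**
     * Schedules each run BASE_INTERVAL_MS plus or minus a fresh random
     * jitter, so region servers do not hit the system table in lockstep.
     */
    public void scheduleNext(Runnable syncZkToSystemTable) {
        long jitter = ThreadLocalRandom.current()
                .nextLong(-MAX_JITTER_MS, MAX_JITTER_MS + 1);
        scheduler.schedule(() -> {
            try {
                syncZkToSystemTable.run();
            } finally {
                scheduleNext(syncZkToSystemTable); // reschedule with fresh jitter
            }
        }, BASE_INTERVAL_MS + jitter, TimeUnit.MILLISECONDS);
    }
}
```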