@@ -46,7 +46,7 @@ To implement clustering, the deployment considerations are as follows:

* Every device that belongs to a cluster needs to have an identifier.
OpenDaylight uses the node's ``role`` for this purpose. After you define the
- first node's role as *member-1* in the ``akka.conf`` file, OpenDaylight uses
+ first node's role as *member-1* in the ``pekko.conf`` file, OpenDaylight uses
*member-1* to identify that node.

* Data shards are used to contain all or a certain segment of OpenDaylight's
@@ -102,7 +102,7 @@ OpenDaylight includes some scripts to help with the clustering configuration.
Configure Cluster Script
^^^^^^^^^^^^^^^^^^^^^^^^

- This script is used to configure the cluster parameters (e.g. ``akka.conf``,
+ This script is used to configure the cluster parameters (e.g. ``pekko.conf``,
``module-shards.conf``) on a member of the controller cluster. The user should
restart the node to apply the changes.

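For reference, a minimal sketch of the kind of entry such a script manages in
``module-shards.conf``, assuming the three-member layout used later in this guide::

  module-shards = [
      {
          name = "default"
          shards = [
              {
                  name = "default"
                  replicas = ["member-1",
                              "member-2",
                              "member-3"]
              }
          ]
      }
  ]
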
@@ -157,7 +157,7 @@ do the following on each machine:

#. Open the following configuration files:

- * ``configuration/initial/akka.conf``
+ * ``configuration/initial/pekko.conf``
* ``configuration/initial/module-shards.conf``

#. In each configuration file, make the following changes:
@@ -176,7 +176,7 @@ do the following on each machine:
address of any of the machines that will be part of the cluster::

cluster {
- seed-nodes = ["akka://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
+ seed-nodes = ["pekko://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
<url-to-cluster-member-2>,
<url-to-cluster-member-3>]

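For example, after the placeholders are substituted for a three-member cluster whose machines
are reachable at 10.0.2.10, 10.0.2.11 and 10.0.2.12 (illustrative addresses), the list would
read::

  seed-nodes = ["pekko://[email protected]:2550",
                "pekko://[email protected]:2550",
                "pekko://[email protected]:2550"]
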
@@ -211,10 +211,10 @@ the three member nodes to access the data residing in the datastore.
Sample Config Files
"""""""""""""""""""

- Sample ``akka.conf`` file::
+ Sample ``pekko.conf`` file::

odl-cluster-data {
- akka {
+ pekko {
remote {
artery {
enabled = on
@@ -226,9 +226,9 @@ Sample ``akka.conf`` file::

cluster {
# Using artery.
- seed-nodes = ["akka://[email protected]:2550",
- "akka://[email protected]:2550",
- "akka://[email protected]:2550"]
+ seed-nodes = ["pekko://[email protected]:2550",
+ "pekko://[email protected]:2550",
+ "pekko://[email protected]:2550"]

roles = [
"member-1"
@@ -381,7 +381,7 @@ on a particular shard. An example output for the
"LastApplied": 5,
"LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
"LastLogIndex": 5,
- "PeerAddresses": "member-3-shard-default-operational: akka://opendaylight-cluster-data@<member-3-ip>:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka://opendaylight-cluster-data@<member-2-ip>:2550/user/shardmanager-operational/member-2-shard-default-operational",
+ "PeerAddresses": "member-3-shard-default-operational: pekko://opendaylight-cluster-data@<member-3-ip>:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: pekko://opendaylight-cluster-data@<member-2-ip>:2550/user/shardmanager-operational/member-2-shard-default-operational",
"WriteOnlyTransactionCount": 0,
"FollowerInitialSyncStatus": false,
"FollowerInfo": [
@@ -469,7 +469,7 @@ Split Brain Resolver
You need to enable the Split Brain Resolver by configuring it as the downing
provider in the configuration::

- akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+ pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"

You should also consider different downing strategies, described below.

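For example, a minimal sketch that enables the resolver together with one of the strategies
described below (``keep-majority`` is used here purely as an illustration)::

  pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
  pekko.cluster.split-brain-resolver.active-strategy = keep-majority
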
@@ -481,7 +481,7 @@ more nodes while there is a network partition does not influence this timeout, s
the status of those nodes will not be changed to Up while there are unreachable nodes.
Joining nodes are not counted in the logic of the strategies.

- Setting ``akka.cluster.split-brain-resolver.stable-after`` to a shorter duration for having
+ Setting ``pekko.cluster.split-brain-resolver.stable-after`` to a shorter duration for having
quicker removal of crashed nodes can be done at the price of risking acting too early on
transient network partitions that otherwise would have healed. Do not set this to a shorter
duration than the membership dissemination time in the cluster, which depends on the cluster size.
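
A minimal sketch of tuning this setting (the value below is illustrative, not a recommendation)::

  pekko.cluster.split-brain-resolver {
    # cluster membership must be unchanged for this long before a downing decision is taken
    stable-after = 20s
  }
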
@@ -513,7 +513,7 @@ removed, or if there are no changes within stable-after * 2.

Configuration::

- akka.cluster.split-brain-resolver {
+ pekko.cluster.split-brain-resolver {
# Time margin after which shards or singletons that belonged to a downed/removed
# partition are created in surviving partition. The purpose of this margin is that
# in case of a network partition the persistent actors in the non-surviving partitions
@@ -574,11 +574,11 @@ than others.

Configuration::

- akka.cluster.split-brain-resolver.active-strategy=keep-majority
+ pekko.cluster.split-brain-resolver.active-strategy=keep-majority

::

- akka.cluster.split-brain-resolver.keep-majority {
+ pekko.cluster.split-brain-resolver.keep-majority {
# if the 'role' is defined the decision is based only on members with that 'role'
role = ""
}
@@ -597,7 +597,7 @@ cluster, or when you can define a fixed number of nodes with a certain role.
* If there are unreachable nodes when starting up the cluster, before reaching
this limit, the cluster may shut itself down immediately.
This is not an issue if you start all nodes at approximately the same time or
- use ``akka.cluster.min-nr-of-members`` to define the required number of
+ use ``pekko.cluster.min-nr-of-members`` to define the required number of
members before the leader changes member status of ‘Joining’ members to ‘Up’.
You can tune the timeout after which downing decisions are made using the
stable-after setting.
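
For example, for the three-node cluster used throughout this guide, the leader can be told to
wait for all three members before moving them to ‘Up’ (illustrative sketch)::

  pekko.cluster.min-nr-of-members = 3
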
@@ -628,11 +628,11 @@ splitting the cluster into two separate clusters, i.e. a split brain.

Configuration::

- akka.cluster.split-brain-resolver.active-strategy=static-quorum
+ pekko.cluster.split-brain-resolver.active-strategy=static-quorum

::

- akka.cluster.split-brain-resolver.static-quorum {
+ pekko.cluster.split-brain-resolver.static-quorum {
# minimum number of nodes that the cluster must have
quorum-size = undefined

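As a worked example, a cluster planned for three member nodes needs a quorum of two
(3 / 2 + 1 = 2), so a sketch of the corresponding settings would be::

  pekko.cluster.split-brain-resolver {
    active-strategy = static-quorum
    static-quorum {
      # majority of a cluster planned for 3 nodes
      quorum-size = 2
    }
  }
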
@@ -669,12 +669,12 @@ in the cluster.

Configuration::

- akka.cluster.split-brain-resolver.active-strategy=keep-oldest
+ pekko.cluster.split-brain-resolver.active-strategy=keep-oldest


::

- akka.cluster.split-brain-resolver.keep-oldest {
+ pekko.cluster.split-brain-resolver.keep-oldest {
# Enable downing of the oldest node when it is partitioned from all other nodes
down-if-alone = on

@@ -701,7 +701,7 @@ to shutdown all nodes and start up a new fresh cluster.

Configuration::

- akka.cluster.split-brain-resolver.active-strategy=down-all
+ pekko.cluster.split-brain-resolver.active-strategy=down-all

Lease
^^^^^
@@ -719,21 +719,21 @@ This strategy is very safe since coordination is added by an external arbiter.

Configuration::

- akka {
+ pekko {
cluster {
- downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+ downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
split-brain-resolver {
active-strategy = "lease-majority"
lease-majority {
- lease-implementation = "akka.coordination.lease.kubernetes"
+ lease-implementation = "pekko.coordination.lease.kubernetes"
}
}
}
}

::

- akka.cluster.split-brain-resolver.lease-majority {
+ pekko.cluster.split-brain-resolver.lease-majority {
lease-implementation = ""

# This delay is used on the minority side before trying to acquire the lease,
@@ -765,7 +765,7 @@ An OpenDaylight cluster has an ability to run on multiple data centers in a way,
that tolerates network partitions among them.

Nodes can be assigned to a group of nodes by setting the
- ``akka.cluster.multi-data-center.self-data-center`` configuration property.
+ ``pekko.cluster.multi-data-center.self-data-center`` configuration property.
A node can only belong to one data center and, if nothing is specified, a node will
belong to the default data center.

@@ -783,14 +783,14 @@ nodes in the same data center than across data centers.

Two different failure detectors can be configured for these two purposes:

- * ``akka.cluster.failure-detector`` for failure detection within its own data center
+ * ``pekko.cluster.failure-detector`` for failure detection within its own data center

- * ``akka.cluster.multi-data-center.failure-detector`` for failure detection across
+ * ``pekko.cluster.multi-data-center.failure-detector`` for failure detection across
different data centers

Heartbeat messages for failure detection across data centers are only exchanged
between a number of the oldest nodes on each side. The number of nodes is configured
- with ``akka.cluster.multi-data-center.cross-data-center-connections``.
+ with ``pekko.cluster.multi-data-center.cross-data-center-connections``.

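A minimal sketch of tuning that setting (illustrative value)::

  pekko.cluster.multi-data-center {
    # how many of the oldest nodes in each data center exchange cross-data-center heartbeats
    cross-data-center-connections = 5
  }
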
This influences how rolling updates should be performed. Don't stop all of the oldest nodes
that are used for gossip at the same time. Stop one or a few at a time so that new
@@ -819,10 +819,10 @@ Configuration::

failure-detector {
# FQCN of the failure detector implementation.
- # It must implement akka.remote.FailureDetector and have
+ # It must implement org.apache.pekko.remote.FailureDetector and have
# a public constructor with a com.typesafe.config.Config and
- # akka.actor.EventStream parameter.
- implementation-class = "akka.remote.DeadlineFailureDetector"
+ # pekko.actor.EventStream parameter.
+ implementation-class = "org.apache.pekko.remote.DeadlineFailureDetector"

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
@@ -863,13 +863,13 @@ nodes in two locations) such configuration is used:

* for member-1, member-2 and member-3 (active data center)::

- akka.cluster.multi-data-center {
+ pekko.cluster.multi-data-center {
self-data-center = "main"
}

* for member-4, member-5, member-6 (backup data center)::

- akka.cluster.multi-data-center {
+ pekko.cluster.multi-data-center {
self-data-center = "backup"
}

@@ -1006,7 +1006,7 @@ shard-transaction-idle-timeout-in-minutes uint32 (1..max) 10 The max
shard-snapshot-batch-count uint32 (1..max) 20000 The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.
shard-snapshot-data-threshold-percentage uint8 (1..100) 12 The percentage of ``Runtime.totalMemory()`` used by the in-memory journal log before a snapshot is to be taken.
shard-heartbeat-interval-in-millis uint16 (100..max) 500 The interval at which a shard will send a heartbeat message to its remote shard.
- operation-timeout-in-seconds uint16 (5..max) 5 The maximum amount of time for akka operations (remote or local) to complete before failing.
+ operation-timeout-in-seconds uint16 (5..max) 5 The maximum amount of time for pekko operations (remote or local) to complete before failing.
shard-journal-recovery-log-batch-size uint32 (1..max) 5000 The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.
shard-transaction-commit-timeout-in-seconds uint32 (1..max) 30 The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next messages before it aborts the transaction.
shard-transaction-commit-queue-capacity uint32 (1..max) 20000 The maximum allowed capacity for each shard's transaction commit queue.
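
These parameters are normally adjusted in the clustered datastore configuration file shipped
with the distribution (commonly ``etc/org.opendaylight.controller.cluster.datastore.cfg``;
treat the exact path as an assumption for your distribution), for example::

  operation-timeout-in-seconds=5
  shard-heartbeat-interval-in-millis=500
  shard-snapshot-batch-count=20000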