Commit 848f8f2

ihrasko authored and Gerrit Code Review committed
Merge "Update docs for Pekko"
2 parents: a68bdce + cb6eb73

2 files changed: +36 -35 lines changed

docs/getting-started-guide/clustering.rst

+35 -35
@@ -46,7 +46,7 @@ To implement clustering, the deployment considerations are as follows:
 
 * Every device that belongs to a cluster needs to have an identifier.
   OpenDaylight uses the node's ``role`` for this purpose. After you define the
-  first node's role as *member-1* in the ``akka.conf`` file, OpenDaylight uses
+  first node's role as *member-1* in the ``pekko.conf`` file, OpenDaylight uses
   *member-1* to identify that node.
 
 * Data shards are used to contain all or a certain segment of a OpenDaylight's
@@ -102,7 +102,7 @@ OpenDaylight includes some scripts to help with the clustering configuration.
 Configure Cluster Script
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-This script is used to configure the cluster parameters (e.g. ``akka.conf``,
+This script is used to configure the cluster parameters (e.g. ``pekko.conf``,
 ``module-shards.conf``) on a member of the controller cluster. The user should
 restart the node to apply the changes.
 
@@ -157,7 +157,7 @@ do the following on each machine:
 
 #. Open the following configuration files:
 
-   * ``configuration/initial/akka.conf``
+   * ``configuration/initial/pekko.conf``
    * ``configuration/initial/module-shards.conf``
 
 #. In each configuration file, make the following changes:
@@ -176,7 +176,7 @@ do the following on each machine:
    address of any of the machines that will be part of the cluster::
 
      cluster {
-       seed-nodes = ["akka://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
+       seed-nodes = ["pekko://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
                      <url-to-cluster-member-2>,
                      <url-to-cluster-member-3>]
 
@@ -211,10 +211,10 @@ the three member nodes to access the data residing in the datastore.
 Sample Config Files
 """""""""""""""""""
 
-Sample ``akka.conf`` file::
+Sample ``pekko.conf`` file::
 
   odl-cluster-data {
-    akka {
+    pekko {
       remote {
         artery {
           enabled = on
@@ -226,9 +226,9 @@ Sample ``akka.conf`` file::
 
       cluster {
         # Using artery.
-        seed-nodes = ["akka://opendaylight-cluster-data@<ip-of-member-1>:2550",
-                      "akka://opendaylight-cluster-data@<ip-of-member-2>:2550",
-                      "akka://opendaylight-cluster-data@<ip-of-member-3>:2550"]
+        seed-nodes = ["pekko://opendaylight-cluster-data@<ip-of-member-1>:2550",
+                      "pekko://opendaylight-cluster-data@<ip-of-member-2>:2550",
+                      "pekko://opendaylight-cluster-data@<ip-of-member-3>:2550"]
 
         roles = [
           "member-1"
@@ -381,7 +381,7 @@ on a particular shard. An example output for the
   "LastApplied": 5,
   "LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
   "LastLogIndex": 5,
-  "PeerAddresses": "member-3-shard-default-operational: akka://opendaylight-cluster-data@<ip-of-member-3>:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka://opendaylight-cluster-data@<ip-of-member-2>:2550/user/shardmanager-operational/member-2-shard-default-operational",
+  "PeerAddresses": "member-3-shard-default-operational: pekko://opendaylight-cluster-data@<ip-of-member-3>:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: pekko://opendaylight-cluster-data@<ip-of-member-2>:2550/user/shardmanager-operational/member-2-shard-default-operational",
   "WriteOnlyTransactionCount": 0,
   "FollowerInitialSyncStatus": false,
   "FollowerInfo": [
@@ -469,7 +469,7 @@ Split Brain Resolver
 You need to enable the Split Brain Resolver by configuring it as downing
 provider in the configuration::
 
-  akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+  pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
 
 You should also consider different downing strategies, described below.
 
@@ -481,7 +481,7 @@ more nodes while there is a network partition does not influence this timeout, s
 the status of those nodes will not be changed to Up while there are unreachable nodes.
 Joining nodes are not counted in the logic of the strategies.
 
-Setting ``akka.cluster.split-brain-resolver.stable-after`` to a shorter duration for having
+Setting ``pekko.cluster.split-brain-resolver.stable-after`` to a shorter duration for having
 quicker removal of crashed nodes can be done at the price of risking a too early action on
 transient network partitions that otherwise would have healed. Do not set this to a shorter
 duration than the membership dissemination time in the cluster, which depends on the cluster size.
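
A hedged illustration of the tuning described in the hunk above; the value is a placeholder for this sketch, not a recommendation taken from these docs:

  pekko.cluster.split-brain-resolver {
    # Placeholder value: keep this longer than the cluster's membership
    # dissemination time, which grows with cluster size.
    stable-after = 10s
  }
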
@@ -513,7 +513,7 @@ removed, or if there are no changes within stable-after * 2.
 
 Configuration::
 
-  akka.cluster.split-brain-resolver {
+  pekko.cluster.split-brain-resolver {
     # Time margin after which shards or singletons that belonged to a downed/removed
     # partition are created in surviving partition. The purpose of this margin is that
     # in case of a network partition the persistent actors in the non-surviving partitions
@@ -574,11 +574,11 @@ than others.
 
 Configuration::
 
-  akka.cluster.split-brain-resolver.active-strategy=keep-majority
+  pekko.cluster.split-brain-resolver.active-strategy=keep-majority
 
 ::
 
-  akka.cluster.split-brain-resolver.keep-majority {
+  pekko.cluster.split-brain-resolver.keep-majority {
     # if the 'role' is defined the decision is based only on members with that 'role'
     role = ""
   }
@@ -597,7 +597,7 @@ cluster, or when you can define a fixed number of nodes with a certain role.
 * If there are unreachable nodes when starting up the cluster, before reaching
   this limit, the cluster may shut itself down immediately.
   This is not an issue if you start all nodes at approximately the same time or
-  use the ``akka.cluster.min-nr-of-members`` to define required number of
+  use the ``pekko.cluster.min-nr-of-members`` to define required number of
   members before the leader changes member status of ‘Joining’ members to ‘Up’.
   You can tune the timeout after which downing decisions are made using the
   stable-after setting.
@@ -628,11 +628,11 @@ splitting the cluster into two separate clusters, i.e. a split brain.
 
 Configuration::
 
-  akka.cluster.split-brain-resolver.active-strategy=static-quorum
+  pekko.cluster.split-brain-resolver.active-strategy=static-quorum
 
 ::
 
-  akka.cluster.split-brain-resolver.static-quorum {
+  pekko.cluster.split-brain-resolver.static-quorum {
     # minimum number of nodes that the cluster must have
     quorum-size = undefined
 
@@ -669,12 +669,12 @@ in the cluster.
 
 Configuration::
 
-  akka.cluster.split-brain-resolver.active-strategy=keep-oldest
+  pekko.cluster.split-brain-resolver.active-strategy=keep-oldest
 
 
 ::
 
-  akka.cluster.split-brain-resolver.keep-oldest {
+  pekko.cluster.split-brain-resolver.keep-oldest {
     # Enable downing of the oldest node when it is partitioned from all other nodes
     down-if-alone = on
 
@@ -701,7 +701,7 @@ to shutdown all nodes and start up a new fresh cluster.
 
 Configuration::
 
-  akka.cluster.split-brain-resolver.active-strategy=down-all
+  pekko.cluster.split-brain-resolver.active-strategy=down-all
 
 Lease
 ^^^^^
@@ -719,21 +719,21 @@ This strategy is very safe since coordination is added by an external arbiter.
 
 Configuration::
 
-  akka {
+  pekko {
     cluster {
-      downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
+      downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
       split-brain-resolver {
         active-strategy = "lease-majority"
         lease-majority {
-          lease-implementation = "akka.coordination.lease.kubernetes"
+          lease-implementation = "pekko.coordination.lease.kubernetes"
         }
       }
     }
   }
 
 ::
 
-  akka.cluster.split-brain-resolver.lease-majority {
+  pekko.cluster.split-brain-resolver.lease-majority {
     lease-implementation = ""
 
     # This delay is used on the minority side before trying to acquire the lease,
@@ -765,7 +765,7 @@ An OpenDaylight cluster has an ability to run on multiple data centers in a way,
 that tolerates network partitions among them.
 
 Nodes can be assigned to group of nodes by setting the
-``akka.cluster.multi-data-center.self-data-center`` configuration property.
+``pekko.cluster.multi-data-center.self-data-center`` configuration property.
 A node can only belong to one data center and if nothing is specified a node will
 belong to the default data center.
 
@@ -783,14 +783,14 @@ nodes in the same data center than across data centers.
 
 Two different failure detectors can be configured for these two purposes:
 
-* ``akka.cluster.failure-detector`` for failure detection within own data center
+* ``pekko.cluster.failure-detector`` for failure detection within own data center
 
-* ``akka.cluster.multi-data-center.failure-detector`` for failure detection across
+* ``pekko.cluster.multi-data-center.failure-detector`` for failure detection across
   different data centers
 
 Heartbeat messages for failure detection across data centers are only performed
 between a number of the oldest nodes on each side. The number of nodes is configured
-with ``akka.cluster.multi-data-center.cross-data-center-connections``.
+with ``pekko.cluster.multi-data-center.cross-data-center-connections``.
 
 This influences how rolling updates should be performed. Don’t stop all of the oldest nodes
 that are used for gossip at the same time. Stop one or a few at a time so that new
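
A sketch of how the two detector scopes and the cross-data-center fan-out described in the hunk above fit together in one block; the timeout values are illustrative assumptions, not defaults quoted from these docs:

  pekko.cluster {
    # Failure detection within the node's own data center.
    failure-detector {
      acceptable-heartbeat-pause = 3s
    }
    multi-data-center {
      # Heartbeats across data centers run only between this many
      # of the oldest nodes on each side.
      cross-data-center-connections = 5
      # Failure detection across data centers; typically more lenient,
      # since cross-DC links are slower and less reliable.
      failure-detector {
        acceptable-heartbeat-pause = 10s
      }
    }
  }
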
@@ -819,10 +819,10 @@ Configuration::
 
   failure-detector {
     # FQCN of the failure detector implementation.
-    # It must implement akka.remote.FailureDetector and have
+    # It must implement org.apache.pekko.remote.FailureDetector and have
     # a public constructor with a com.typesafe.config.Config and
-    # akka.actor.EventStream parameter.
-    implementation-class = "akka.remote.DeadlineFailureDetector"
+    # pekko.actor.EventStream parameter.
+    implementation-class = "org.apache.pekko.remote.DeadlineFailureDetector"
 
     # Number of potentially lost/delayed heartbeats that will be
     # accepted before considering it to be an anomaly.
@@ -863,13 +863,13 @@ nodes in two locations) such configuration is used:
 
 * for member-1, member-2 and member-3 (active data center)::
 
-    akka.cluster.multi-data-center {
+    pekko.cluster.multi-data-center {
      self-data-center = "main"
    }
 
 * for member-4, member-5, member-6 (backup data center)::
 
-    akka.cluster.multi-data-center {
+    pekko.cluster.multi-data-center {
      self-data-center = "backup"
    }
 
@@ -1006,7 +1006,7 @@ shard-transaction-idle-timeout-in-minutes uint32 (1..max) 10 The max
 shard-snapshot-batch-count uint32 (1..max) 20000 The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.
 shard-snapshot-data-threshold-percentage uint8 (1..100) 12 The percentage of ``Runtime.totalMemory()`` used by the in-memory journal log before a snapshot is to be taken
 shard-heartbeat-interval-in-millis uint16 (100..max) 500 The interval at which a shard will send a heart beat message to its remote shard.
-operation-timeout-in-seconds uint16 (5..max) 5 The maximum amount of time for akka operations (remote or local) to complete before failing.
+operation-timeout-in-seconds uint16 (5..max) 5 The maximum amount of time for pekko operations (remote or local) to complete before failing.
 shard-journal-recovery-log-batch-size uint32 (1..max) 5000 The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.
 shard-transaction-commit-timeout-in-seconds uint32 (1..max) 30 The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next messages before it aborts the transaction
 shard-transaction-commit-queue-capacity uint32 (1..max) 20000 The maximum allowed capacity for each shard's transaction commit queue.
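
Taken together, the renames in this file amount to three mechanical changes: ``akka.conf`` becomes ``pekko.conf``, the ``akka://`` URI scheme and ``akka.*`` configuration prefix become ``pekko://`` and ``pekko.*``, and fully qualified class names move to the ``org.apache.pekko`` package. A minimal sketch of a migrated ``pekko.conf`` combining those pieces (the IP address and role are placeholders):

  odl-cluster-data {
    pekko {
      remote {
        artery {
          enabled = on
        }
      }
      cluster {
        # Actor-system URIs now use the pekko:// scheme.
        seed-nodes = ["pekko://opendaylight-cluster-data@<ip-of-member-1>:2550"]
        roles = ["member-1"]
        # FQCNs move to the org.apache.pekko package.
        downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
      }
    }
  }
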

docs/spelling_wordlist.txt

+1
@@ -121,6 +121,7 @@ ovsdb
 parameterized
 Pax
 PCE
+pekko
 Powermock
 powermock
 pre
