From c91a86d938c91cad605cb860eb9cf6f7f4ec0fdc Mon Sep 17 00:00:00 2001
From: Cheng Guoliang
Date: Thu, 4 Aug 2022 15:00:51 +0800
Subject: [PATCH 01/25] docs(global_distributed): add global tensor distributed

---
 cn/docs/cookies/global_tensor_distributed.md | 167 +++++++++++++++++++
 1 file changed, 167 insertions(+)
 create mode 100644 cn/docs/cookies/global_tensor_distributed.md

diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md
new file mode 100644
index 00000000..c2bdc4c0
--- /dev/null
+++ b/cn/docs/cookies/global_tensor_distributed.md
@@ -0,0 +1,167 @@
+# 使用 Global Tensor 进行多机多设备编程:分布式并行策略
+
+> 首先简单介绍分布式训练的重要性
+
+深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。
+
+随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成度越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。
+
+为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。
+
+
+## 并行策略
+
+值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。
+
+> 对三种并行方式进行简要概括
+
+常见的并行策略包括**数据并行**、**模型并行**和**流水并行**,特点如下:
+
+- 数据并行:对**数据**进行切分,不同设备数据不同,但模型相同
+- 模型并行:对**模型**进行切分,不同设备数据相同,但模型不同
+- 流水并行:将**模型**分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练
+
+除上述三种策略外,**混合并行**也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。
+
+> 这里考虑加一段 Global Tensor 实现并行的优势介绍(简单、高效、……)
+
+待定
+
+> matmul 基础代码和示例,后续所有示例都以这个为基础修改
+
+本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 Global Tensor 实现不同的并行方式。
+
+假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。
+
+基础代码:
+
+```python
+import oneflow as flow
+
+x = flow.randn(4, 5)
+w = flow.randn(5, 8)
+out = flow.matmul(x, w)
+print(out.shape) # (4, 8)
+```
+
+示意图如下:
+
+![matmul](../parallelism/imgs/matmul_logical.png)
+
+单设备的训练中,以上矩阵乘法计算得到 $out$ 后会传递到下一层,并最终计算得到 $loss$。然后,在反向传播过程中,得到 $\frac{\partial loss}{\partial w}$,用于更新 $w$。
+
+### 数据并行
+
+> 接下来以例子和图片结合的方式,分别介绍各种并行策略
+
+数据并行是将数据进行切分输入不同设备,而每个设备上的模型保持完整和一致。
+
+OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 
表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。
+
+以两卡数据并行为例,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单:
+
+1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+2. 模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+
+修改后,完整代码如下:
+
+```python
+import oneflow as flow
+
+placement = flow.placement(type="cuda", ranks=[0, 1])
+x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.split(dim=0))
+w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.broadcast)
+out = flow.matmul(x, w)
+print(out.shape) # (4, 8)
+```
+
+数据并行示意图:
+
+![Data Parallelism](../parallelism/imgs/matmul_data_paralelism.png)
+
+
+> 这里在考虑要不要放数据并行的其他介绍,例如:
+>> 数据并行策略下,在反向传播过程中,需要对各个设备上的梯度进行 AllReduce,以确保各个设备上的模型始终保持一致
+
+>> 当数据集较大,模型较小时,由于反向过程中为同步梯度产生的通信代价较小,此时选择数据并行一般比较有优势,常见的视觉分类模型,如 ResNet50,比较适合采用数据并行。
+
+### 模型并行
+
+当神经网络非常巨大时,数据并行同步梯度的代价很大,此时可以考虑采用模型并行策略。
+
+与数据并行相反,模型并行是将模型进行切分输入不同设备,而每个设备上的数据保持完整和一致。
+
+同样以两卡为例,模型并行修改方式为:
+
+1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+2. 
模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+
+修改后,完整代码如下:
+
+```python
+import oneflow as flow
+
+placement = flow.placement(type="cuda", ranks=[0, 1])
+x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.broadcast)
+w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.split(dim=1))
+out = flow.matmul(x, w)
+print(out.shape) # (4, 8)
+```
+
+模型并行示意图:
+
+![Data Paralelism](../parallelism/imgs/matmul_model_paralelism.png)
+
+### 流水并行
+
+当神经网络过于巨大,无法在一个设备上存放时,可以选择流水并行策略。 流水并行将网络切分为多个阶段,并分发到不同的计算设备上,各个计算设备之间以“流水线”的方式完成训练。
+
+以两卡流水并行为例,构造两阶段示例程序:
+
+```python
+import oneflow as flow
+
+P0 = flow.placement(type="cuda", ranks=[0])
+P1 = flow.placement(type="cuda", ranks=[1])
+BROADCAST = flow.sbp.broadcast
+
+# 模型第一阶段分布在第 0 卡
+w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST)
+# 模型第二阶段分布在第 1 卡
+w1 = flow.randn(8, 3, placement=P1, sbp=BROADCAST)
+
+# 随机生成数据模拟输入
+x = flow.randn(4, 5)
+
+# 利用 to_global 将第一阶段的数据分布在第 0 卡
+in_stage0 = x.to_global(placement=P0, sbp=BROADCAST)
+out_stage0 = flow.matmul(in_stage0, w0)
+print(out_stage0.shape) # (4, 8)
+
+# 利用 to_global 将第二阶段的数据分布在第 1 卡
+in_stage1 = out_stage0.to_global(placement=P1, sbp=BROADCAST)
+out_stage1 = flow.matmul(in_stage1, w1)
+print(out_stage1.shape) # (4, 3)
+```
+
+以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。
+
+Global Tensor 的设计,使得计算过程中,只需通过 `to_global` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。
+
+> 这里要不要写“ Stage ID 及梯度累积设置”
+
+### 混合并行
+
+> 这里想的是直接放 GPT-3 示例
+
+在网络的训练中,也可以将多种并行策略混用,以 GPT-3 为例,以下是它训练时的设备并行方案:
+
+首先将模型分为 64 个阶段,进行流水并行。每个阶段都运行在 6 台 DGX-A100 主机上。在 6 台主机之间,进行的是数据并行训练;每台主机有 8 张 GPU 显卡,同一台机器上的 8 张 GPU 显卡之间是进行模型并行训练。
+
+![gpt-3](../parallelism/imgs/gpt3-overview.png)
+
+## 结语
+
+并行策略的选择影响着训练效率,框架对并行训练的接口支持程度,决定了算法工程师的开发效率。
+
+本文介绍了数据并行、模型并行、流水并行以及混合并行这些分布式并行策略,通过示例展示了 OneFlow 针对分布式训练所做的系统级设计和创新,以便于用户轻松上手分布式训练。

From d54086b9bf779bd7cf3d2b8813055a61894da26b Mon Sep 17 00:00:00 2001
From: Cheng Guoliang
Date: 
Thu, 4 Aug 2022 17:25:53 +0800 Subject: [PATCH 02/25] docs(distributed_outline): write a new outline --- cn/docs/cookies/distributed_outline.md | 38 ++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) create mode 100644 cn/docs/cookies/distributed_outline.md diff --git a/cn/docs/cookies/distributed_outline.md b/cn/docs/cookies/distributed_outline.md new file mode 100644 index 00000000..2e453659 --- /dev/null +++ b/cn/docs/cookies/distributed_outline.md @@ -0,0 +1,38 @@ +# 使用 Global Tensor 进行多机多设备编程:分布式并行策略 + +简单介绍分布式训练的重要性 + +## 并行策略 + +对三种并行方式进行简要概括 + +## 示例 + +思路:用一个完整的网络模型进行不同并行策略演示 + +### 单卡基础示例 + +模型可以用韩老师提供的这个[示例](https://github.com/Oneflow-Inc/oneflow-documentation/issues/481#issuecomment-1109771017),但是我觉得一些训练相关的 loss, optimizer 可以去掉,只保留输入,模型和输出。 + +单卡示例不采用 `.cuda()` 或者 `to(device)` 的写法,而是直接写 `placement=flow.placement(type="cuda", ranks=[0])` 和 `sbp=flow.sbp.broadcast`,便于与后续改动做对比 + +### 如何在两卡上进行数据并行 + +1. 描述代码需要改变的部分 +2. 给出完整可运行代码以及运行方式 + +### 如何在两卡上进行模型并行 + +1. 描述代码需要改变的部分 +2. 给出完整可运行代码以及运行方式 + +### 如何在两卡上进行流水并行 + +1. 描述代码需要改变的部分 +2. 
给出完整可运行代码以及运行方式 + +### 混合并行 + +这里我想的还是只展示 GPT-3 示意图做简要介绍 + +## 结语 From b95ba9a26758386bd0cb955979a6bfd4c3d24b74 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Fri, 5 Aug 2022 16:47:12 +0800 Subject: [PATCH 03/25] docs(distributed): modify global_tensor_distributed --- cn/docs/cookies/global_tensor_distributed.md | 81 ++++++++++--------- cn/docs/cookies/imgs/hybrid-parallel.png | Bin 0 -> 60030 bytes 2 files changed, 44 insertions(+), 37 deletions(-) create mode 100644 cn/docs/cookies/imgs/hybrid-parallel.png diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index c2bdc4c0..af3f7945 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -1,20 +1,15 @@ # 使用 Global Tensor 进行多机多设备编程:分布式并行策略 -> 首先简单介绍分布式训练的重要性 - 深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。 - ## 并行策略 值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 -> 对三种并行方式进行简要概括 - 常见的并行策略包括**数据并行**、**模型并行**和**流水并行**,特点如下: - 数据并行:对**数据**进行切分,不同设备数据不同,但模型相同 @@ -23,12 +18,6 @@ 除上述三种策略外,**混合并行**也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 -> 这里考虑加一段 Global Tensor 实现并行的优势介绍(简单、高效、……) - -待定 - -> matmul 基础代码和示例,后续所有示例都以这个为基础修改 - 本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 Global Tensor 实现不同的并行方式。 假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。 @@ -52,18 +41,11 @@ print(out.shape) # (4, 8) ### 数据并行 -> 接下来以以例子和图片结合的方式,分别介绍各种并行策略 - 数据并行是将数据进行切分输入不同设备,而每个设备上的模型保持完整和一致。 OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。 -以两卡数据并行为例,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单: - -1. 
数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -2. 模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) - -修改后,完整代码如下: +以两卡并行为例,矩阵乘法案例的数据并行程序如下: ```python import oneflow as flow @@ -79,11 +61,10 @@ print(out.shape) # (4, 8) ![Data Paralelism](../parallelism/imgs/matmul_data_paralelism.png) +可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将: -> 这里在考虑要不要放数据并行的其他介绍,例如: ->> 数据并行策略下,在反向传播过程中,需要对各个设备上的梯度进行 AllReduce,以确保各个设备上的模型始终保持一致 - ->> 当数据集较大,模型较小时,由于反向过程中为同步梯度产生的通信代价较小,此时选择数据并行一般比较有优势,常见的视觉分类模型,如 ResNet50,比较适合采用数据并行。 +1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) +2. 模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) ### 模型并行 @@ -91,12 +72,7 @@ print(out.shape) # (4, 8) 与数据并行相反,模型并行是将模型进行切分输入不同设备,而每个设备上的数据保持完整和一致。 -同样以两卡为例,模型并行修改方式为: - -1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -2. 模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) - -修改后,完整代码如下: +同样以两卡为例,矩阵乘法的模型并行程序如下: ```python import oneflow as flow @@ -110,7 +86,12 @@ print(out.shape) # (4, 8) 模型并行示意图: -![Data Paralelism](../parallelism/imgs/matmul_model_paralelism.png) +![Data Parallelism](../parallelism/imgs/matmul_model_paralelism.png) + +同样只需要修改以下两部分: + +1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) +2. 
模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) ### 流水并行 @@ -146,19 +127,45 @@ print(out_stage1.shape) # (4, 3) 以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。 -Global Tensor 的设计,使得计算过程中,只需通过 `to_global` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。 - -> 这里要不要写“ Stage ID 及梯度累积设置” +Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。 ### 混合并行 -> 这里想的是直接放 GPT-3 示例 +混合并行是结合使用以上两种或三种策略的并行策略。 + +以下程序为 `2 机 2 卡` 混合并行示例: + +```python +import oneflow as flow + +P01 = flow.placement(type="cuda", ranks=[0, 1]) +P23 = flow.placement(type="cuda", ranks=[2, 3]) + +# 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 +w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) +# 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 +w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + +# 随机生成数据模拟输入 +x = flow.randn(4, 5) + +# 第一阶段需要将输入数据切分,用于数据并行 +in_stage0 = x.to_global(placement=P01, sbp=flow.sbp.split(dim=0)) +out_stage0 = flow.matmul(in_stage0, w0) +print(out_stage0.shape) # (4, 8) + +# 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 +in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) +out_stage1 = flow.matmul(in_stage1, w1) +print(out_stage1.shape) # (4, 3) +``` + +以上程序构建了一个两阶段网络,其并行方式如下图所示: -在网络的训练中,也可以将多种并行策略混用,以 GPT-3 为例,以下是它训练时的设备并行方案: + -首先将模型分为 64 个阶段,进行流水并行。每个阶段都运行在 6 台 DGX-A100 主机上。在 6 台主机之间,进行的是数据并行训练;每台主机有 8 张 GPU 显卡,同一台机器上的 8 张 GPU 显卡之间是进行模型并行训练。 +模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 -![gpt-3](../parallelism/imgs/gpt3-overview.png) ## 结语 diff --git a/cn/docs/cookies/imgs/hybrid-parallel.png b/cn/docs/cookies/imgs/hybrid-parallel.png new file mode 100644 index 0000000000000000000000000000000000000000..eeed27f79a9d723b176bc59b12415772028a8ac5 GIT binary patch literal 60030 zcmd?Rbx@UW7&dr7Q3MG|Man`#LL{U^knU~~K^keLLqsG+LXhq*X(UAukS>V>DqV-} z+WY-|vpf6Ge&5dQ>>oSpj5D0~Jm{d)TTYxtoSMO?(%+CXe|}`xTQl#SSQqCwpD;hU@GGJBcsD-lQCEHU zFHfUJt3v|~lAu-~VKya_!J2>7mKH 
z-cg^YtcA+?KJ@#Imqu#2W8hPro$g;&2q9lpfaJKaLn5;z5jn-wP-_)|1z*)ArrXT_ zt|V)jlujfcwy;=ay~uXnX+EMuQakD44)|Y8c?E&}aObepg#c&pKY2@ISm*PgbDsss zg|@tEuM1$$Fs#16@V|%gjxwJhGdi;EFELqX5PvkKa3=^$$f-ES>agmMp zo->dO6J2ZLNTgxNe(DRobPvp32H=4VX$hluQc}ZshT5vx2bs~qX~tKyJTzSo98TZw zK7h{q?jbI)9LlH0`>bbKO5PolCth%Y7*Hqb$OpzS9|9~H2abf$O1$>0yT4$>F6UPx z>Co36H9&LP@do&hyQ>jHLjuEKn>2pODSPVWt9f20C<&RF- zkn>zHUI9M}bhflWgml;=jleT&9(hj#yZ8O@!KbD2Xp4oB+CexF)$;TSqEEJ(aDpGN zn048+1oovt$RV?XfUf%wIM_-uF6RZe-MpC}35V}~DA<}wC~qp%>ov{5k8ZlRLV@#B zjp9nm;;!)WwVZ@Qvnxk?my3l5%rs+^4!|J$@zoLK!#bQs%8bt#1wQ(Nl`4+Jdt<&3 z@j&47VtV3T;bGq&hxz@|BqMK%O5*||Af;HvnMQFr0ZY}@B?tAFy@55=w_$dRg z1I&p<@jJ@!lf^S!QZ-K5U(0wSFN-NI*#6*o5Z$0)07kGwn}WK8io}cH+Wh2%Ysho2 zI-sWM-|+zRA*ZYewj;d+U7^9o8>I?0q~YMu^JDK_3t2Z5`~AQxA7>N(?Y14cs7=rj z(ATKc&o@Ix@0&&;n}uuj`4AE_F%|gNTQFP@vh1D@F$ME5sK9Mxyh4SSi0*FQ_FVli zepLSNhr2Ui7#}?j)=Fidk5?iOi$3d#!VPL+8C~3gJcU76MB;5R47E8zQGs-m^&xlU zK}=pkI-2lK3*!nlPLn$>evrC2SmmBAq!iv|Ws9dsOS1iax#5Y~Cy*^)edr9;o4j~e zr+Y5J5Re!XV^%2rzbDF#?xl^LI_~3%4?*fuR8*Ei<=p|XLvb_Xw(VKGCRI0zj>wNE zFLX0T2IlGF=WWy4(YGi41PmIL#eH9QI_~mkzYT7X6ONYvle(Ke{^E zI_x1w79NF8-@W^DAGs?R zc@fPr8%hcs`*|m~QTN9eqsNl8ACR;hYYwenfR@eB>+1v>gVJu5uIq#i-8aL)FmYPi zJGfDN+bjNG4hPO?h$MvT+Ro<)Px2VM-k6?Mp_`n|5XJX9qEGueRE?&S86BxtSq4}D zdqro1&EiDX&V%8$K==t51xXEnp3z~GeIi{Idj9nAK*PL6su3% zK0z;1tMzLYP_H4YeHZN)5sQH0Jy+aKsY1sAk4fi4LRXLv*uO;@@TJIeNB zLQ263<^?9@{;)O`)m{gotk&!#-Mc+>1VrL5Y?bWxcZyeFlT!d4Xv=zQqC3?E3K}*` z;J4${3*lqvUM<_&as>nnL{VfS-In`HUrv=%G=H_C4*XHH@nUtp1@rYi3m;LHTU*ue z2Anr#TR1MmDSnty|B|8Y2@U)2sv8-CA{^RXRg}k08vt6TT_xmo zO2@CS%}?pz<=#>pwi2B^;M8IDZy(WsAHhbS0B&^2yk^zd`v6|`}_$@S+7{mS;IA{$WnSJ5-p0L_*fJly>RAy z&?1yh#&D1s7FHpU<6y@lu!kC-ob^rS2KBK2csZtmIWs^&ly@uHPveWFm56kwR@z6su=5E?fgjM)tpK=tUCia< z#az*6FZA4=UHNG6LszrpJBqKcZ$S?jch6-i;1F3*0ofxRnoGiTq5mv3;m-XZ4$%C3 zqq}k+vpz8V(5hva}zZz!k8u{_(!o^ zAp{~k?S~clsYyMk4p7EP=js6pn%QGpLGRv5Y$px3IMx8`gdvnu3%~&LFzxC&IcSW` z=H~mm{yBg{L*rCT#(!iVZIFpAKumnZE9l*-JSCXVo2#BkV4^wzAKMQ&y8~>-h{LYr 
zM1-U6E)ct|P}nwv6(X%3SEXoAP&hpCL4a~1E&f4W6O4A4l*y}c{f7k&ize{b$(by{ zoAWy!nzF>2d??N~5Knk8%?rBibPJzgp^3Dxhp!KqIA(XW2?#=yb@r#n00l=03`THB za|E;0SA&PhUO?V#lW#qk(3~Qsi;GBi;lCxyCC13Wd+9C%dX4PV0rhl~#zCZStUKr~ zqB4d{?e)J6+J6vj!EOVP3vHglP}W4~2#Qdap=wG651=d{L`6vZ@{C1n(eD7+c*CtP z0IWdC>6maA)Jp?RlQW%isqm;@@fys88=&)aFu>4Cs*|Pm#IV1M))FZczwiP`PZDaP zSLHTSvejrO$LNV}YoB>0NrguhFK=yW>+Gngb@zeZDeZ9vZEA(!zf zLL6R=y9O783MFbmXrGV!K`{b2Rg#zuWE|}3w4$JlYDXDi!%ja49FhM2KZ4?MsWKHO z)PfF3K4a?n^N|#I9Yf$j?LuGSrxD*xQz8fI^%epk&<`=x+?(2WSK+XCg_B#?n&xkr z5at0079uR3!xfp?SK!cAJr6GrdyohM(T?xYXnEy*50{*dcJm}a zb^HyO8R6>xml5z`U(q-)o%JY}Jjyc`rLP7o7X^M80%8h9TvVp+42tR^qIeu)DDK@@S^{QKL%QKN;oZK0G~a#rJqslSjE(b5 zq&JbcQ4fTWq@$eZ=EVlb1#=$3trZ`b`pUzl{f_3ts{ff)Z)k+`)NrV(U|ZmpqZpXj z59R}k%Cseg9#He?Zp>%ekL$z!R_@0BdmkQ>At1$e<(DR_+4tFh8xm%h+L2Q4u0Ry| zD3ZqPf$*!$eNK<)+1~beD`^~=N0)1t3cD-hgJj6)uXjm)+{*uifg3DdN1KqUR&6EE ztX5`mw;Pf}bK^E(pbB?}B3;{u15`q9+=i;97Y0c1(px7h`{yteBNoxs%?ow2hNEDn zB?zM_MZ+{s%5m29lun?&dP!i(A@zri^vyK+43vbrS!$CPp@A_`Zq35jVr&;(z z{~@l{rZep+@*GFLg`%yZn2u`?ox0G$^BFN_<+}U8OW12xUD2wT5-k3U_P#t9h!xTN z1$KOuuV~vhFt?tE4&OcXg3d>G>tUR{@2S7yD{t_xw>vL} zCiXM}T+U%&Ln;!tYN;6EG>Vwm?vH>|a~}v%5W%<1b+V;ohiKCawyf~cXqu}4)|3n1 zh|*{W00%ok^aXtXHd>CT72#{5nGa6!PFTkTpaq2SN^(^ED&-VN^xkrS0M2DJAJ*La z-zABCplSQ1X7msKs8Wn4xryljZ~O8RwPH+vRL?p|BSaH_iQt9LNTNGjy7IVfNmJbG zl6v^x1(2z^8`pTAZH4su#f2XLs3tJW#+ zDk1=YSDkLuz-xlvynGI~V^r?MkuQmnPfl&SvdTrmCfWUj&H-x~o3&rr?fh?1PB=UfNj^bG z^Q?QEI~&jCzdJaJw0ka+Ou~9OZ(0SmQIrOh_uyxqte2)jfBvvxtygL5b5hL<)5W@c z^FFw6uD?mQ0cI+*ZsTOxI&+LHO%4e5&WvjpxRZGeI;`&mQV99C&qJ&BV$!zL*vg4^ z5L9k7J&Ym4QDq}w6lEJBap6#GJ5Zihj4++=5+5FibSyMG;zL_swkdosnLVC5*$pVN z3)T{ocAmo+g;89b%}jLw2Tmt*YP%eIJ8s1+t@LNRTER@t+=?PyMWMy=hj9F%YgKZ~ zAIOI1pWIPU+SZmiQy@&LBIngMC!A(Zo;;+smtdaw(C_PQ5Nt2LNi9yM3U*s1li%q$ ze);yjc!ze?Cx49=V`HJ@m=1b>)N*BH3Y|fqSBN5mJ`J3}rEjeh>$t*xJchuONX#J> zZ9JwUrxAebp*F+y-WGdQj>EaKdeQ*X`I40Gd%=#xvp_TV*FZ2abI437`1$F{Abcdp z&TXPnO1MvQmny`0V>74oCkk?gFIb$vpS-&HA-me zy<=1#Lt9SB=!0R6r_Odt6@X(5&7C>jIqy8ZIs^j8NWiKSkY~FKX 
zXGb?j#SiuoD%|5EtiM`j>l7xDy0qbaTWVD=CKgSJv)S9^#Pv}8k&HJ}@ z*16Uk5{gE4x>zn0c?Hj<()b!E9wblXm{hL%DSSn_GIjpIYe3yi16aKuWTCyyR=Atj z0A=_Aaa1|*B@+MgCfS#9H;I*Yx{pBp^dSD<%S5{|X>j%b`r}K5t1>z(E$kQXJ4{&r z4}qQ$>%Qra+8W?W`{FO>uur=^^IuGX=S;IH2YyWWs%4nXF{a3n>YqD8+#h92}I(lRvJ>5Y=s@@V-5 z^>U``{yRB^E|K3vkw!kn`W))MSEWGTVnDY>5GpoxO?}rYu|{<|G$0r`lMmrK9yUTr zlMA)w2Apg9fWEj{T^ksV?9Y)dUFc@YAqB*=(o_kKfU1yFr$1q`6Ma~pxM)g zJ%vsJ4QQ~yi{gUwVqd~|TDbQB@Qhp_^n!iu6GO;2GLb|{fUp$#{;mUmggcPPP!4*Y z;8;}T;J!bKBLf`$ps9_DKf!my5EWYQ%M+y><4SG2z5?1Kp&t=5_V4yP!|gZH;DtcX zLSRQ1hWyh;>|?bNTqVqcFUy2l-$*?d0PQOHxdLkLc08{35vUb}n=%mv1=i?id+I z6%9=msM&6#@KhkmiQYNob;rjq#3_H$0Es(&p`59afY%2|&>LZXy`@P2D$Bo&NL9Z0 z*K>`ipSytg+}k*w6m;ja#JW$=Rl*o@1P7E?hA(sC=w_vMv`DKnr=A=b0(|x1L(reR+C610+mB%^N#M=6Z_EFzz& z7hH-Oc%hG=*0#Ck~#aP3}~JjPL$J*Maij3$mm)vqoQi zGQ6E32i7!YzdYUSUOyr#5TtjWFAP;i5)mIn^YCX`vXm1qVM!YniPieJg)+8B9ifPH zg4)<^xYfCGk{Pu|=BOF6pGDodh8B9LrOQzp72 z2zEWJ*6(hi>Ts9yrM8Cj7vL$_pot2|!XG1a&pQnO6^!^A#Oo?TxK~Mgh0|;Lq%k0( z6RYz(0n5@4wBrivD5wfpe=B8KvZCF-q+ z)+~P%cFuRLMU;r$SR6V%1n@?XF&sGsmR#)HhNj;xfv?zLF)GH5mX=INhF`Rs;0VW)|Tgz6=b z$4mXN)-96K`F`2`xFR+I)fYN8&chQfM->lmPVo^Tcb74F!NY#xJH&ZJ6o886Zq(K@ zW)V9;t;zu0GlHSd%~fGXAlycFD?>|OM>y);6yF29O1bc0Imon8RyL25Tkng!h-r^% zoB?rZxSAOMSpXLH!9|>Dp5tcYZ_m-h{riG1@Q@F3{-Q6`HPyGzExF)a_V+k_4xnE~ zy8%l@<+vJxe$Gxoq=!3k`*zq@BbjFJb_I&c(c+7GO4RtoQF6JB&dqOO#xf)P1|8mV z$8)B|M@3KW)a`p7Un;XI%{BK~c?6F>H0msr%K417xAN-uy*5dkDGwC2U90NAcAY+^ zrpBRPe3b$-As|Pmxi|n8R7)BG!sc~DvguR~)0b~>t;Iz-0s#=UZOf6|0Yzni!$~>x z*UlF`;Jy-CfZEB3Yvv*RhpPq-V{f6p81dzg5z_y`G>4L+7f>bvKmb?BNq6$bXOR6z zm_IVp<^Yb-t2zA~gIi$I`3_*!HguTV!}m#sdjk7t*w-3JcCcIA1C;}f9l1C3V*0?% zoQ;@x?l_}BjQY#$W7kuBJEUfvZ+&wkb)l}y@BVB!9*-;v_wqK21F1PK!b6(WU@oyI z{#dNrb<%&G0lB-26FFjkf%|Oa;m?uJ_+ff4tC6^(cAG!(W%e$ty`4rSHn-qh;ogXI z3?e;@JrAW#vH#wjxfGP4Ql7`GXpi%73zYh}vK)M(?%0)`b;05G6 z%jWtr%cv0~NS*nMp5x6P^z;k=6;OAe%lBJZ?=a7a006d#zA8o`+N)64clzlTVQ|9x zpY#Ea0bQMMDK(=+>I%@Q`XRxVsN&2`Q?Y(BcVCI^kSF+_Ldh)K3l1H2aJ&j*=Q+e) 
zJWFQ4HO8iC(|#66kDKWa5wtolcwnoU%zA|86etfwiRLq)n<#;I3$I>a!|>-50FXJ` zMoz#rfsM*<0_AL>WM()@n80%SjYY(dxtMJ3f{_!lTUk6SS#fju9x{6L?HFyLP{D|^ zT>3q|m9+WVZMM5P{&P<=R?a{_XbO5M6@w=L=XW}-K=O$8s8S;pmQO}+4X|{M)Jy&; zij9o5{!Bd6>Py3T-<^u_;4)=mIRs(?wr2AdGKAtmHn;0gznIz@*fuvKn;$6-EUEgp zXM3Nr#!K8>PHye?G-}W|<+r!4?+u{Q(Ak@Ox6|Obn)6tIhha82Ps|rLN5gn!Ir)5M z9fD#vf3^YbqX7F-S0*x_)&R?eI&Q5epu0Ta%a*MNuDKomg4M%7tVobnK~j2n0_Tm!_kcI{0h@}U z(&SuD97)g09h4r^hr$}F#-D4v=RXV~LHZN5d1LRP4DE$B#QEDlLHhNDNxxz)CB|TT zJY++GPl@sK5lFo>JrCC4=Zmu))p@Qez1)N~vdXc0rI&k){}MwBf9I3D8#1&({x_d& zzroN=ygVo4?GNpaDtn2ZQQ-bGxrp=@Y7CS)+5?XTR7LLKG-0|m2_#Xi{jRHVOQz94mIdlwN4?)O{8R3@gzoZ22dlz91}+Saia*ON9un&^vZFZxo!i)%XeJfw`*Q4Cc(knw0H;IWG$eAViLp02 zzR?HtEeYH>M=P^}JdDd=;40@mm(->+8OXF!^*Xr9#U1!XsAkNCd+WGNb29Irgw-nH z**@l>Faw2~!u$IGci$hq7vaXaH}s3zCu#X`{d+bH&%-f`Bq-otC6*o$y6LBfE2f?K zc85KXYXgW5BU3WE-p%u88BD|2=Dyj~Y<7sBm1|#7|B_-`kc9gz9{tMp>EQt1?GNTl ze7vF0UAcWop3HBJKN@isJVoytsxom_Np&IN&fOsODq%s-r{^&%AI-!Qr~h!TL#mL9 zJs<%0nj=z7UIGzO`sZ*D#WwV#qEcy9N-`fU zd-T@p?jb8(aq%}zxf-9?RQUaQRDza~2Ruy7GV^}5*5erdZJpW(#(A5uW#BDW{-wWR zB?Z9H-!xB0!C0TEAODZL_l&3V|KrDxy*XyKqlk=%WOIznhMAEhJ1dltvPYCSMpm*S zGg;Z|ltMO@l$B9r%gX${E}!p>|K0zM-<{v*&ZmbnuIpUa^?toy<2hc$xel}a=q9c) zelbO`c-3DW~mp`?NF(4 ze|*@ohxaZS4htA2t0JXozVxPfkDC>S3W-{xN8L$R3i{e1Ls}vw{F9p9jQPV-@^(^3 z$auQU^oJhW*y^6fmrUa?+ornb_q|!}4JzO$Ln@j4`UKbHit08q8W+V-M>osX z6O&wuHe9diD!)vY2U{?HhM+k<^O7GVh23xDGM_;AxBkz+oU3lO!+8qhHnakN(Jb-( zL+LkjHXt==c$+&vb0=``IhE4q$t7Tdc_>f7{4kE_byAZ8EbjCVj=h>W%x}_HV<4SbDa0r2bG^hDfRpgxu$Wz2)lM z{h)U_(6WqEH101w-Bj0+|Aevq0hiw-(AnJ296k9EciE8HtPo74l*ucbFQ z>J|%L#!QW&+Oo|4WdWRoFI(AgwN1^f_?lhvDK-%i`a5^^kJuJtWzA;45Sa0zv<>mW zLx>?n5;vUjeC!C+$xbMWt|#!_G$ zbnXy)bB~n=0nPSI3Xa-|c8PRfD++x7(Y8l*U)u2pM_A*TiV}P6XMI!rx)irCBj@jo zk3w0@I#PtBs{VbQ4px>x4f`VBH8Z0o67W3n?(K6)b%bkhvCge=;`S4ZX?=x`{sD-r z;jy9_|2w-))A9_-jHSp7(q6Zx_Zwr2e^`KUUfkP%_`RL>m9MXV1SFR;6}#6%(q}P2 z(Zfd0uKk*w^mu)5CheKi98->ZHrNz6Zb+QV-~Lib}g;ZTc}<+x;3v zYx9bxasA&a;;!G0GauGYOJ_Q#@ow$!$&ddY>W(2yNsksrokD3V9`?M+AVk1yLro6p 
z9ubO%JJUdb%+gZbg7~t?bF}m51+?;)eO_NQzTxhVj5II*-3`rldjJ`KZDrb@tx>>` zCF^EOW;9FaZ3G)8FSLFKTh`=)2jlmD*M-RjW<70wO+t9;g=`&y;=dIBwW~-a{u8lz zI+>wjOw#q+{{9>O`+az??SLo>NlbXcSJ_y0DWQs$KL(sPYB7@Kme2K+zwPB0(+5O@sc-7PLC&&6F&wO$HHIA+)MOfZX5}5TvZTwB41f;x?K0VdRo>=|`Qr`f1vFHq2hg3iS%#Lw zF&1Dor|Ue{|MlHNUEfR88-Vii5Dpjb-KnJxbM|WkZ^cGNl)cx9zk%3 z)74O-CS{2SqW>mB$67*VF$0pJc)A`d{@3bcWO)1m>}?BInAGpQJX#hz+HN^oN=qq> zb~w3jrM-`oWj{apY79k@%;E|eQm$~rD0lVX5b2dk)aPs`Ua7js!(&ZxErp8vZb(Sy zd=yi0z=%8Fy<4ShGPWVH& z;cU4@C6CXTmx*m%#2f3oAOKkxj6e;>MyQQH?!atl5$u579fG7>i?Y-Sg1*LHg*io! zeZkK-h6ZCbi-rD!EKwO#ee}yo%}`0#M;VADt&f9?)Po!)^7_~Iwnmi0Wa4QR#CD%{w+VBo3M4iangr5S%zxjnG*+i`b)BZRp$ z1OSCR|9UcA{xWd23ZvF$Kiqt(AX7!J3P+0e9KO)hGI- zr8ob%Xa#N%XrKEukFyHrBw^K2`mz`r?-FYCXjGy%cTQO5c{V{W;Altt@Rf9O-{!Ik z?S$}G1`jbaeSXCGDgG}!Xe_IKS%$KZ@Tjo!-Nxpn#$vekHy(bk0U}4MTmAg2THR!O zdw|-MhL+Io1c3=cVYX!`d+pT>g%JQ&{&_6vxOFwTI67}!(&;eIQo`IGL)q~*_Z+Kl zce?CQ56Ve4*WR!Z5i)VH-{&NKfz0ie7tU=Gn>U@F@gte(Yy7nrVO6Y>6>F>Q za-#nRwe`JRO-}fB$;jrGhj1SudU^s4tob|j7Oxg zSNgXIBF~x-y+lk#C20`>iR8V|NV-ebFUD4l^FpGU@REf&L`n7+?ujP?X!~PNuVcMw zDE45nMt@clyk>QM*Dcg5RG8f|Ka-frrxs&ed3 zIp_B4^6>4}EQ0EBVe|TYl{$M8|7}lY)8PkKblI501mg@>)Q%L`v2&s3M5O*f?HobG zH(yfDc&Vy5@JZ>u#A}NytF;p}`D~hR8<#w^!U?v3z%MCH;ynp0L0Vv@(^mu8)8FZm zG*N35+;{I8TwXEey0cODI>8m;oLnc&lc@H~oNp@TAEH#KvJl>_&7O|3H@at_C$Lsk zgdftTcu~?{GDjhhj?+`d@Pr8#^mkIfzS6(82H>`l-`Qx*H&#P}O*nm36tXEZ{2X{Zxr0zRF|uT_ccP5k51(#v9Mg=1n; zBp+T$+tj-fkxvjEKjLNoMNBcu%Y7|R)#mauQekW2%)VjO;b7flvW3_MpBG+hm*eiS zw6CZ&28UjCA6D53KVj)kW=Ml!6URRZ5WW5D@=fzsj%>PYiEncA!plP;&7wiw6DIiW7^T1G zr;Eh8OwYbHCihdGPnlLWk$gFOrFq@v8pFSCD(=3jCLyQky%fR8fyKvXO)$9UQ|#q) zvc_-41fK2TxZ(MV?p2+me*8+)E2uy5b@RO>gM}Z{OGIb|c^kG)2OSY*P zkl@i)o#vGy`s`Jmh+&<^%0OA|dwo~z+v!Ltu~RvWb(My=mCD5o`EJ+g6hcBA)3kG> z^Zk#Av5(1b0NmGhRUQ5fFT+``+>r&wru&)+DS zR9pBVbvMV1<~7Zx^VKgyYT70E;Pjh=Q+xjEe}lsa1^Ev@{-A4{xZ=DaB|lTlPU8HU z>@F&3aW$J9g&jxoIUQ1(q<2Q^wV(EGUwbE^v-9thB8wC{_Engn=u-o=_b?W)&UE${ zUGGkM;W(0k^P`YSkyNki#o 
z-_7~V2KwSFYm17q*Mq4=V2nV?O`!F7AR4KbnkF=a8wI*&3r6#LnHCOg`0x-^VBoX&#tbhUqY_uP z>C+>OSF?ZjOB9vZ3y0tzRja5Z`6aaF@Y@%)F)#jx5kOFg8iU_s{nYE5KlONmtm13z zy3ZCw7o-vHs?=ZA0UxhtUxwm^z0R6U1i`4Z{#DQ)t{ffnqOxp$y)?Tueq||+?2lnZ z)_EAUS>?B*pCs5D3ayi_#u)cL%D8Dq@|I7hZ`MIJ(J+Vfv5wUVh_;oj2zcU?nNYZ# zLvOlP)I@i@b142%{OmK_8@3aoJciTiSNrn)uSK;WVwWTcEkqD30MC3B6E*0(@oZWV z_)M-aL@hPSCq1HMI-j*oiT z=j_hsluO$4Lc_(BW_}h@?CH{XGN@P{wxXY`W0`Cn8_w%y|K zB_gf2QTkc`Iq&bZBxFt(3Pz}<2tQ938s2mNngi4Mv%!`>O4~QOYGIaH;L^1|??+UkWSl$n7+eFz;-(2lEN-6TxNPyX66_IdvE!kM#t2|-N(c{(!%)M&F-1=k;j8Z zEm|YI5HSPSLkH$`M~Y`Ksp9wiVT#xcu=5M)k1;qdCz%8jA`S;wPb*Sl3jOfcz~LG} zWSTq`dZBS%Y?vP-Twd(O2`7dLK@n3R6v(b6y?>UX?Sm#& z>{3p(@eM^8cHPMA5(CaJ9z)cR*g0sq6~6JeUVVO{M@nS0+Vy$EG6EjI&m)lTP8ete zZ*5aYh18NTem7n=k>m_WI>p|FC{3WL2*{_Bw6H3x$eho#;21%PnLr1@Yrp^{#*klb zY(kEt=Sv@iVd8rMgIo+Z{F$VZ2mTsKMnJBb&Cn#&DGy@yhgrynz%54gF92+Rz#ZQS zg+U|65cwLILYDZhP3^0|fpBO3bKCeAfkK9-*yjJRwzxac4}n3P}TtX4@WrK>r8uBi{ za@M~Pi6jW_4LO8D$V-{$UcXA?XwZ2R7+=W~AX`QI06Oyhb9rz_su2zsaPvz{%dOqyWyVqS7pJLus@*1<$Ci?n z*Yt7ZaF&yO0lO+4&|Mfi5%f-FK ze_Mfo9LHDfHs%BfzABiB))NtTVS+Y(lc9jz4iUO~_Bz4i<5@KP`>8^{@-N5dGKZ*n z?MfbKNLK?qq<-5^g@l(Fb3aT{TPv}s-{pH%r$2#KAo~oNvhaH4?u(m#&oV2|qT)4G zOeSAaf?Nkl5Th}QQkW2@fcC$J01R4>G*R(niKEjAyT~KCPe-7|-7hz8>$r}L&iOdt zomr$}2*K<75`x0RU5CR39iyf(7vM!bxSU!~y^zlyslhjW{@@3cxZ5?zvc|dr(}X}O z@jtbOE#yE(5y^b_;3ca-_?j$>I!}P7f35l<27kiTkYm?_2+{xH4EIi;>YzbR832eF7y;-s_#=o6 z&rlQ}ogl>xm@~%PKR|RhW@zDe>YaKAt<&!S?HqEOfIkL}Vo;r@oHS~IKv6X+4S&Jp z!#QcKO?^P{vzNR6`?ALvU#z$eT;iNiL$hd4_R$T!g&%D_~2sS8L@UDNPlYG_{ro1-amyx)I6NIxc z0_;hBHgX*Z;9CoTCUWnXvQCIx?>D|)2pb;2D?=_T@Rp!-xbSMwp(;ThIt&uFIxVya&_2_c;hB8Jq6kB$ykj!Ix}S(Yad zz;V;g9HrEJrhj43h{8#-2NmbX`w2f2)PF}SGl|-kVUI-QR+x7qnJ?LDY$L`)qn+6Z zAjmDR%V+T$vc{^QrmLIyD~tidzGhHFut99n7MRar{E+glC~}cQMT~;7^n9ngP)PD?tTtSA4 zroe$3!qboq75!;mOWmdrtz9MzHt6?KSB)0!B!0Kmz~WMPLl z5NmnH^ihdfr3<3gF{u`;egGcI_`EWXLi4Zi{bBPRv z`<~`20ziE0LGM`5$pS!q>wXdf|H5a_w*OL^9)`VjO$E0Os& zpfI=h0*SQfh%x`NfcLZoK+Af4h>;u5&Aoh#x}>EN 
z1|snwm|b$OlTAYy7=eLP`l|;NsDGNe6^ZF3l=>Nj(3QGYyZ^M?(W)3Ti|{aNY5}oj z5fEi(k!)6H-ZG>4kIbAj_mde+-zTQo+!rpS{p^wq$$2?r>Y=~4^>~`nbz6kn)d^C$hs9ji`?AGuWR4xq``tWgQmN&qq2w&ywccJbq`57h#Sn(V_WmO%>Rl8zdDj;r*sek`-@Fq3Aw^WrJ!cPr zRv|(<&fwL7U|?<}k64*i(hCuk=oqkX2>aL$=4!{qm&wV$v8%4XvGMn|{JjydEczwn z1L8%WpS6GSJVzq-XXq!{QEHFN+_?>f`qvqn+ptxb*_QEK(%TLVAQ1#|-;)_`rymdJ zt#7<)AMjo{1T7#BNgB$?&pzssy5FV!&&UjTYYO1^#yNKlJWvNc$s%Ufz;Z4ZmVE=j zjjp2`p29N4&vZIBP!$@ec)ESIO1RT`2hDJo*Xq~;WWT^ZV1nsx-Z z;`gjTGlGtkN&Rz)*^u#|!3Y9{tk3luC<(Jm@~I zUMxXUb@_c0>>1NUR6X_x);rEVM2_ms;A=Tyo+brtB75UVq|Y$AS#)JC;k(ViUB}5c zm|S?kFMwQvv<&cFz)iC-yvS%fILvi;R`;j5nSiH$-(P>u3yn zzJw96!g{*7WuD|ddq3GJ&}RO#m88n-06E zg5cFDS;4fCYgXB_I!Xenk~C?e_W?e+^q-;)4EDxaZ%Fd6S6$cOVI0$_q#igwH!Kp) ze%gtnYG0?TmWe!y@_9jGnY{|wzBu+*Nc1ELek+S^?Gj5J-278iic$Y@^81^XgQYtc zJf_r7e4bl7y7|C?@Q&fXgCqX%L|*-q;b@)5Ov9XZI*j^!Pb1Ik)3Z%IKX;iO%`b7{ zS-DnWVvg#?h?ytxyoNW`7mpSy4;_{!?$+F0R&0OK+qC^Rps9C(`oW5W^4`erj0UAX zzpv)2S(8W1L6IsvuLB+PA=zTrB+@6%Cb&cCR4|!_JIxkCofSYvsrZiSU7leW4#R}G z9euEBeVSsPYpqXxke7pkH_X<~YEI21%e+?f+%A-9Z%uir{jXoJCHNtp^wYTY&bguj z0p`(Yo5=%gcSKl{(NEU4wAKVdJ+Zc+?GMrZtP7@T#J7U&ZX!r*KNk0r`>d0}=K$Eo z{FNxgpT2kv2nt&H~+xNkwhSF3NE4pUeoc@i1cgq5Y*(%kRrj?vd;w`XMI>vmA3sTAokhoqezN zvrq^4QUj_!K=n?UOA)A2{i(xFcqpQ&R|Ty&h#97^7}P4azlQu=vS_L&-uI{P^$Pn* zP82uG2ED5l&j!bmn!C!}EsoxUcT{3?hv@sO=~fS2a%gn%Wi3(48*7jq;-E?rkpC5i zTO=XTl0$?p3!h(4=?O)v8c;_Eh!m@)hf$zK4pjCD33K#K(dQV*ACrRWOr{?Tx-yQ* z@x-~4tEXG92l>+anw8B^rRJe^r>+@1@`$mq3OYqP1xgWveLpV;;XD@HvA$L#OjD=J zXh`KH^i^t!RTSxj;{}5gTUQI-FQ9v$yKzTq)vCB;Fh-u!q>D0)>`9wC=_OVwm#oy&<|ysS@9v!#ZSN)E zCFH$@-|P58a*HhPH+QJNt9Cj?8)WJK2p^$U!_HymsXXg!E|9DEhruN!6H$W%F=66s zcdDe67Dwey&(#an-hp3_U|yOA+ty?K7}po99YggaN#;E623Q*B8kS<}G_Wn@#=Cn@j@ zGSO5`JYnb)l(>8E0v^Woap|JO<<-e~XJ*`K%*hFT=`F;_lWcfdxiHrmr`U2{L_PXR z9m?T#7FUNRB)#nrM?e{CwdwiS6T?l|)r@YX!DXDlk0s>{Hxl?p{3iO19l(lghGcUK zf4#;xt;GIPxz683{v#{*!?rgl5oa|8^1=1A4$p8?Al)EZYiWn9x1$WH{0zxE+4X{Q z8Mj`Vfw<%SX^CQFW00sFAY+}^dhz64CCY=lU@y4a49zP3>vChOyJiU`;je+;R#F>U 
zYg~AL_wn2<-9o$@!P1<3Sn3xvfuJw>0LeuU_xr%jCQQPuOOvi8~r;?Gvp#=_>Thg0Vy^<)RjsLW8SEO=%jvhd$4=bkwfJebO&d6c$X|BEZRIcHF`Itm#ZP>b=pp%0qZ?g`4glT?Zry52H zH$b5wqaMz-+Yv#c5}apAS})23b@zW*06&c5n8G>KNg}hCdM|BZewRke+Ew2#EyD5s zi12#IEc;5QHqr( z@&X0P&dVFo9M;y8msRGv6rW!%ro=Bma!4`BLrgdjr^%&D%J-zinM>9)_q_AhOHlZ8 zku}N6uF`Rkk}lI9{#(oB<`@4xR|fr(T{fX7Vt~%N+WEv=kC*Fh7VJJ+TKzK2zLzl(tI>8h-IzT$ zI9l?PdQV)Wrk4z{@};(I!Xae7U3BHC)4R3FimU5tt~N;Od=*oZ{%b6qz{vSPz_JJfJt&2$!}~U zZBqBGMvZ9J?CTyYtFr2tUDUaE6Ar3VWcvY=HPkJ|f{~9W^7wWh!CDCV)*37D*z9NK z%|~wC?A5;h$HXVS z7C{B>{4$|YsfC%t!mTW|$<~Bmb}sRb0ZY)?sq*jg4Zol-%SkHNM&}fO<(u4J#aWFE z049GU9WA+L!pBC(4NQo`kH)yQe8;RyPzH3Z~HZbo7nFV$qUf-umvdZRq@8o9s!7DmW)a3m+ z8J6Mj!%4;Jk+;)|y}=%BG9$7gj^nLu)Vog8(GD(;Si^$eZD!M0h_w-ZJdwdHOR>_- z@jJvB&-z2cj)QLFi^Lsz)hVKV@5vg%me`b0@jfTwEyEyN)K=^&RL7r3WV%!ANCdIu zyoFEMa_wEdS3V!Ske)ALu|z}BGe|;XQX@*i_RS4jA4Mo~fcHElPpTo4r1Vpgbr&BJ z^7X6&lOxfRs)*@9v{m3KX1=OJ;?T0O@>V6ck408{Nrl4k18z2~rF1eMU$-uEhb}zQ zYc;i^#u$ln3vh^YQ&;)C`OtI~Z`2Wni{`&Z;3>3X@^Jw4DlQM~z_~f5X{72v{Etb z@7pF6*AkwpJ0RyxJznV1#y+*G}M|*q#ZAwZUb(_>zeU%gQU8K~kxZ6*6v`F&j2996lB}G0- z+Q2r4$U14Mh&fapAK+8A^8W%&dqWi^Aq- z!*u|!wZHzI1^pLKfaA?ASn(|R1BLW$Z|YARx&iok&^MUhOD zCea~Kd%yg)nppIQlCN&#@$mupLJ0Y2g9#>pn|d1-Gqf*d!PbCYYKGo~{wm1bs{)}j zUy+jXDfmoYMGS>o^3nfYK2-vSz}hto`Fbg|l1RNyWyxnuVM}7QI9yh4#09%p{&zOS z*{CQri|jQhesw@|o$qGzLufkNA5eb`(0HJB$psnK55l-Y{RhxVPjboK>&VKWKfbfR z5rlj(=iTZJ@BtFQ^x5SI#snTg@4m3Wu`sM(LbRpHw&3E+jpM-eg@grAn;UQR!<9JB`7X1($Ntm*=A4j3{H-F z4c~|txXO(X&6q??^%Uu)N8Q!xZ!+zY&@)7nMUh;8_bl$@49ISNQO*Ktu`W;d_{HPZ zpinlAJ5S+lLuLTQAP7{xfJx!_jOk@z(wGbLZHe21xwe|{n{g<3ISxoo zo~Fm&U_A*dRbO667Jej!UakABk+NH%0a{|mFKVq@t$TxHwE^4`+wD9QFJIV)2Nn=z zkd;SEb$rniA$W;-&h_(xWQtA&8C*+-ED$(EOsk6?|93AsVOqo4nW@~uB&dfO@jnAc zLJi-74!12h6mzm4Ki%ZDBdXA#yHN;KRwK2G$kmV@e+fVZ3E)-H(i|U30*sK25gX+D zAj0{Qy@)g8_Yz~=KBuYbVJv|5wc#vhKx82aXitUj9I$f8hdQDlInuDf0_C3mX-`!q zTmkZ`I}?^ixU9LJ4_e0&0=*u%`2DRe5;qXqBs3ip;BATSJFB-4vD>$Tw;&^(C;9J+ z#zA`H{8L2_C}MAuf#b`$)W7IJP0o9(m4}6J02qd%h(ggm%wC0*Fsh?L!pA>vG2(K^ 
z$yJ)PcT)d4&}u(!2iv>&gf4nSt*b8$sxJ&K(<7 z_b-4cu>-+QgSSq`fuyLuXy|6aD!cYUorJqA6r%*huZ1Ucs5Rbmwp7ToK{49;7o0H) zyXXULNvK#pv-6t9g!^ahvt?a@rwLaQA*#qDJBi6_2}R z84K7(4cHg>V5jMKpg0p~Qlb}m`~!u9)D=J;Adi|$&Iq*kF$a6w!^JvLuaPB_n37L4 z8S4%+pLGCv=m0g^7PL`!>!O$$PaUtJ)Hn!xH-yt)_G*SxGTXw43u*cAe+u)eC@+Im z(}p6BunWeDbb#hY9>|&^o2?c+%DKSt-X#+*NxPzz^eBNk4A&jepmS>Q4kCYzNc=y8 zhwL^*2yoI(PhU#GAVhp+`sW5IBu{WggcqrxtP+ONUm|eOZ-2M6?a|TFnnW|kKaHx1 zzQ4T4&l}cSxdsE4E*MzNfXU96sV+qPMFF&3>qix6zFmc%Mk#(L(k>7M1SZh)des3o zDkp(n(a%O?8yzHAt{F^7^6(BNFW+54qC;>oOUscDJlyyE$mXC0`a^k83dKv`8NYP* zG8r9rBFqsB={+Y~3H0kspsAWr?#tLV3V7`4`cjCH?iHjnV>Au2`X4a(TwpQ zS|=pcwIC+kJGlYUd0!_z6-t3v%y{+!Ovt0Q>lJ9IuS=coAQv{m-Rnl|G7&Lf%&bAv~0Sb$D_!0nf|;>W0a%( z(s)MIDH^}y&r#dMba@bzPHqT=zk9{-f#Hmge8`WV`YM#}RchqC9@NQJNHAmz7d{Ch zVtkN5epT4-FlVOyF`-Cn_O?tpMhsG?02Yg7NF6+gk9XJjffOrjwb0CO1V;5m_$0x( zTV17#ic^Yc%`O)r z&}s{cbLI=DSQ+6RsVLHHPV}Qd`bvM6CfdTivGzzUjVY(f>6`0-{Jnh zL14Lzi>Ye8ih!sNzns>?Mg`? z_`+-h{`Xb^lTr%v`GyQ1H42WhG3k830( zL3;I*|2VFOuF#5NRDZJpuX)6((|N3t=kS19XN zW@4xt`hOpR=D=>cLAh1?^#-7eOr1DI>9U`2GR^-A%V@sivf`h7Jv{~n9^@uyu@Fam zh2>%ggKp%GiP}DTDFsuLg`4_OaXTb_>sCi`P+jK|VDQ(N^_Ttf@{tqD& z%Y)Ks5V|O3ALz?lmWCkSFlEa79)jdk7t~f_;16{9n@xW{%u=rav_}9&phJ5PD<49OIZ#f zXr&X>bWb6nH>M`6jUj9z)ghzMUkQqnJU9hpQ|tsCfYT69be6Jp7{w1k(kGvqO+p>O z*j!C#^+5GiJ-^Z)^a%uEWBc3(`(^Uf*`NEYnxlaa-|+5WVefrMN%c?~Kw+hCG^1I0 z5rYm3FnVE=@P_igp9r3GXdc&#@1DMzA%ZlC?1TUtWiagy2}lJfQAmByd2wHwXay>~ zq$5DvL9)c+9{>c7Fa2TJ$w|5v7DSAae?Wg}bpi?~`HQYECsxP74#ND+Dac%Q{~_9U zhVmf1nE=Vmr!We}D#d7NZuCh|e2U{%^ed=AZ4;6{1guFO>}o>h(Z1VZ*hTz8*q+-A zC{zt7t)52)seV0RI;2y!7hA3SOo$)RoJj{qn_V+}2+8~Ebf~g!gI;;FM&=V{;iPNt zRmr26`~Gg-ho`sVJR~NM>=u*(VZ-!yw$!Zhv_1#q9G}W~5$WOm0I(X6F#Z9uruEg= z0Wy`~_7ZZ8fFp3K_`q0A$0oryV0z{Gh0DsyAmQ+MHzSF7&l}vzoF<;1zs;~#0{L|F z@(|<;Mk@$vsA>s7M*MXX0KJ((MBsMiGTM`IE^SIkwCCfnlI*{b&poezTx_%W> zxrNHN##z|ldpt6K(Ye+6t*$G;CA;eJpwz@3Qv5?%k@1-*feakzpnaLhhf1~HMR?qX zY1Bh*I$Cl=woobm^WQiofO$Q2Y5}C~wY^WQ5ZTC(NYqJ+K_5UF5eMb>?g@GK$yQsj zQR-fh(ddN5BV04;`2{jp20k?fX5zbP0w%#R!qb?mP&oyV>+v+B(;n0HV 
z=MX^>fF8G{f6<{u{*2=w5CR(@@ zX6pdi9o%OE@m$4k0tUO4PsnWb&uWTuF)I{F(Y(bR2I3$FY%!4xNG#=#Tbas%&GsWs zfaq#j0>nTfgr(wq`uSiF<*|kL294l)@}~)}h~Lp+mfd;nGMz&3zQwm~t>M-?p> z^LE;2>H93gUM9+hoO*G#FLR>Bjq?chhm3w7J&>(tBFl#I;D@pabhQ{=D-Umt{+;QZ zUfG6aBJiD7&U>-o$Jx3WhBpA2K?o}OE4qn-Kbt^qF5qYl`v9ctSHVAN5scc@xA$Pu z-vlzfeozN(^a1jLqvZflfcF6c@;V@<+^Ih+*ayf-b9_TT7hKsr zINc}WsZ=$O`pGOfmWL3#n(eo+l*R#pnlL9vvmwPA=4Mj^a&UJ*F(l0e|DCcO^dy@o znT6(j5d|CPZ)MprUy$hD4!xCjWkj&c5J zGPX>R!Om#egq?95ly8?ZAG`~rh>E#K(+%@2vu&t79T=8Ew|7Rv^z?#)01vA6^Z{!X z$i1)CfLX|QWgk^l)o;!DZ&BsWobui-Cqp-2kzA?b8p&Hlgt9W{c3|E!tZWZO=;1j> zx?$TNp1->10ngZn$l^a#nPQ(7-(UI*np}HJD-$g@Nn-8fp77}zss&;B-hD%as~__c zO?}!me{C)mcqyI{V+A+ZGWqO{M#wAoT(jP&U|c9qoGqEc&w|d(Hbm6Cc7Qec)X&1| zGCns$sS!yTEfywP3T~3OETJYkSVCXeQL=s>1t?p;e>NdxF%WB2k(_>nfDt1uhX=g) z!&BrM_Ig?=+0ss*1;-p<-`|C_bd#PWa_@WePFAb9NO+O+Ox^Er>DCwgv>_u-+9zr5 zz_$AtMW+cgl!Wsr!|Op|3lq>UZ6YM0+}XRgGOjqV*#a^sw=R-?YxZvlq;nUoUII4k zE_>z1H*cusuq4^%@=MSv(K|^n| zDQ|78?ry%AD#l<^yK>KF6a-Y8u@N}uft>IoABO|UV4}EtkaaiilTj>6}_`m--lFB zfi95u?GHN!!o`ySXC(hv)@PIVv49diIWCpXvv$%gSUDNCzt4f&kj=lkk6VSUE02Ve zo-DLB8nQ7b3p}Ve-jgTX2fK$?Z(fVBoH%)9vT}^E4Ug%?s+MzC_ZWr1@gfWGLD` zn#QSiePf|DDkXzuKIHIZrImHE;Rm!_4!ENdJZs&5zB8zD)c1eNO2?F;-Wa`ifZUdS zZ}FJg6}L}L8+5mm35u5UuNjc3N-!RWNEgcJDQF4Y5IY0`nppOt($+HUk8?I3)uphf z-2kSW+lRCYT6(B8A84R=VS6u4%|tIgLCt`>%Um_DTExy?I)HnldT&h$?$l8Uwdi&@ zUi@QadsJZ)N8kbv@!>dmwC2TaDzQfK%gw~zO$BuozH8Papzy5Z?n=&wnEUN}R5-tv zr2Y(oiG@F}L%N~v`p)XI5LL>JH(|uKGVU>;f2~oI#dB@1ZTCsgcGbJyTif&C(b_+G z*Mu0a^~l4`sv86e3#?^WLV^#@f{5LK4|)Ra#Tp9&F<(~R1HzW$!s=K>2XEwP^WUG} zpXqr$q{yT8-o7Y7Fg=M6y?xgs?H3vN>#-LHBRGdm&)pbnPLu2`K$(Ub2(s z^;qi8jRM+wP~@Wc)S8m$liBP85c1M&H{kR5nvYq2o82)E$s@y3Ftv_Y^G?oVBHPo9 znj0j2w^Q_aXj%WNtPoYa{3-d6N8Gz~nD=evd6j!Q9S!JZC0-j21sT>@=&cr=;FYRO z0*gsiDpeh=^QUwXX!Vzdh(j3B$}W1G*1_~X)!Nj@4d}ib6|kV=9br38qaspxw?Vag zS62f@Wmn!L2Qleh`K|fw!&nHu!36#nS5ee-2(`jDKMT-Ow;w8}T2`l6?iY%4|Cs{1L&> z1v$U#In1u$@>R1Ykv#l5j3|>?Ppp}lU-P(1VZS=HOaF#I z4ApqS^dku!Nzxs&{f^F%EnX!I>$#yLAX^o$Jxx1>mIs@z)epMntR1BezW~SiwDx8h 
From bc3f5da8e91b23cbbab903639c48e497bfaae292 Mon Sep 17 00:00:00 2001
From: Cheng Guoliang
Date: Mon, 8 Aug 2022 14:18:58 +0800
Subject: [PATCH 04/25] docs(global_tensor_distributed): test code

---
 cn/docs/cookies/global_tensor_distributed.md | 56 ++++++++++++++++----
 1 file changed, 45 insertions(+), 11 deletions(-)

diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md
index af3f7945..6450c756 100644
--- a/cn/docs/cookies/global_tensor_distributed.md
+++ b/cn/docs/cookies/global_tensor_distributed.md
@@ -57,11 +57,16 @@ out = flow.matmul(x, w)
 print(out.shape) # (4, 8)
 ```
 
+假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序:
+```
+python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py
+```
+
 数据并行示意图:
 
 ![Data Paralelism](../parallelism/imgs/matmul_data_paralelism.png)
 
-可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将:
+从以上程序可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将:
 
 1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
 2. 
模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) @@ -84,6 +89,11 @@ out = flow.matmul(x, w) print(out.shape) # (4, 8) ``` +假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: +``` +python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py +``` + 模型并行示意图: ![Data Parallelism](../parallelism/imgs/matmul_model_paralelism.png) @@ -111,20 +121,22 @@ w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST) # 模型第二阶段分布在第 1 卡 w1 = flow.randn(8, 3, placement=P1, sbp=BROADCAST) -# 随机生成数据模拟输入 -x = flow.randn(4, 5) - -# 利用 to_global 将第一阶段的数据分布在第 0 卡 -in_stage0 = x.to_global(placement=P0, sbp=BROADCAST) +# 随机生成数据模拟输入,注意第一阶段的数据分布在第 0 卡 +in_stage0 = flow.randn(4, 5, placement=P0, sbp=BROADCAST) out_stage0 = flow.matmul(in_stage0, w0) print(out_stage0.shape) # (4, 8) # 利用 to_global 将第二阶段的数据分布在第 1 卡 in_stage1 = out_stage0.to_global(placement=P1, sbp=BROADCAST) -out_stage1 = flow,matmul(in_stage1, w1) +out_stage1 = flow.matmul(in_stage1, w1) print(out_stage1.shape) # (4, 3) ``` +假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: +``` +python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py +``` + 以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。 Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。 @@ -146,11 +158,9 @@ w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) -# 随机生成数据模拟输入 -x = flow.randn(4, 5) - +# 随机生成数据模拟输入, # 第一阶段需要将输入数据切分,用于数据并行 -in_stage0 = x.to_global(placement=P01, sbp=flow.sbp.split(dim=0)) +in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0 = flow.matmul(in_stage0, w0) print(out_stage0.shape) # (4, 8) @@ -160,6 +170,30 @@ out_stage1 = flow.matmul(in_stage1, w1) print(out_stage1.shape) # (4, 3) ``` +oneflow 分布式工具支持多机多设备并行,此处以 `2 机 2 卡` 程序为例,假设脚本文件名为 `test.py`,启动方式如下: + +在 第 0 号机器上运行: +``` +python3 -m oneflow.distributed.launch 
\ + --nnodes=2 \ + --node_rank=0 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ + --master_port=7788 \ + test.py +``` + +在 第 1 号机器上运行: +``` +python3 -m oneflow.distributed.launch \ + --nnodes=2 \ + --node_rank=1 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ + --master_port=7788 \ + test.py +``` + 以上程序构建了一个两阶段网络,其并行方式如下图所示: From d0400a5bd405f8b8011d67359c6032dc9a73f32c Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 8 Aug 2022 14:36:05 +0800 Subject: [PATCH 05/25] docs(global_tensor_distributed): modify master_adddr --- cn/docs/cookies/global_tensor_distributed.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index 6450c756..b10dfc8c 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -178,7 +178,7 @@ python3 -m oneflow.distributed.launch \ --nnodes=2 \ --node_rank=0 \ --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP --master_port=7788 \ test.py ``` @@ -189,11 +189,13 @@ python3 -m oneflow.distributed.launch \ --nnodes=2 \ --node_rank=1 \ --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP --master_port=7788 \ test.py ``` +注意要将 `master_addr` 设置为第 0 号机器的 IP + 以上程序构建了一个两阶段网络,其并行方式如下图所示: From a66f63f4f41d7b98530ec8d4f54976f6a8761368 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 8 Aug 2022 17:32:56 +0800 Subject: [PATCH 06/25] docs(global_tensor_distributed): add run way --- cn/docs/cookies/global_tensor_distributed.md | 66 ++++++++++++-------- 1 file changed, 39 insertions(+), 27 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index b10dfc8c..2eac5be7 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -47,6 +47,8 @@ OneFlow 特有的 
Global Tensor 采用 `placement` 与 `sbp` 结合的方式完 以两卡并行为例,矩阵乘法案例的数据并行程序如下: +**注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** + ```python import oneflow as flow @@ -58,7 +60,7 @@ print(out.shape) # (4, 8) ``` 假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: -``` +```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` @@ -90,7 +92,7 @@ print(out.shape) # (4, 8) ``` 假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: -``` +```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` @@ -133,7 +135,7 @@ print(out_stage1.shape) # (4, 3) ``` 假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: -``` +```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` @@ -145,7 +147,7 @@ Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 混合并行是结合使用以上两种或三种策略的并行策略。 -以下程序为 `2 机 2 卡` 混合并行示例: +以下程序为 `4 卡` 混合并行示例: ```python import oneflow as flow @@ -170,33 +172,43 @@ out_stage1 = flow.matmul(in_stage1, w1) print(out_stage1.shape) # (4, 3) ``` -oneflow 分布式工具支持多机多设备并行,此处以 `2 机 2 卡` 程序为例,假设脚本文件名为 `test.py`,启动方式如下: +**运行方式:** -在 第 0 号机器上运行: -``` -python3 -m oneflow.distributed.launch \ - --nnodes=2 \ - --node_rank=0 \ - --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ # 第 0 号机器的 IP - --master_port=7788 \ - test.py -``` +假设脚本文件名为 `test.py` -在 第 1 号机器上运行: -``` -python3 -m oneflow.distributed.launch \ - --nnodes=2 \ - --node_rank=1 \ - --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ # 第 0 号机器的 IP - --master_port=7788 \ - test.py -``` +1. 单机四卡启动方式为: + + ```shell + python3 -m oneflow.distributed.launch --nproc_per_node 4 test.py + ``` + +2. 
oneflow 分布式工具支持多机多设备并行,以 `2 机 2 卡` 环境为例,启动方式如下: + + 在 第 0 号机器上运行: + ```shell + python3 -m oneflow.distributed.launch \ + --nnodes=2 \ + --node_rank=0 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP + --master_port=7788 \ + test.py + ``` + + 在 第 1 号机器上运行: + ```shell + python3 -m oneflow.distributed.launch \ + --nnodes=2 \ + --node_rank=1 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP + --master_port=7788 \ + test.py + ``` -注意要将 `master_addr` 设置为第 0 号机器的 IP + 注意要将 `master_addr` 设置为第 0 号机器的 IP -以上程序构建了一个两阶段网络,其并行方式如下图所示: +以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: From e06da9f5ea2e334711cca4dbe837737fe866b240 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 22 Aug 2022 15:28:44 +0800 Subject: [PATCH 07/25] feat(distributed): add Graph --- cn/docs/cookies/global_tensor_distributed.md | 122 +++++++++++++++---- 1 file changed, 101 insertions(+), 21 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index 2eac5be7..e9a5d4a1 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -147,34 +147,120 @@ Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 混合并行是结合使用以上两种或三种策略的并行策略。 -以下程序为 `4 卡` 混合并行示例: +OneFlow 同时支持 `Eager 模式`和 `Graph 模式`两种模型运行方式,二者均可用于并行计算策略。此处以 `4 卡`混合并行程序为例进行介绍。 + +**Eager 模式(动态图)** + +`Eager 模式`是 OneFlow 的默认模式,网络模型继承自 nn.Module 模块。 ```python import oneflow as flow +import oneflow.nn as nn P01 = flow.placement(type="cuda", ranks=[0, 1]) P23 = flow.placement(type="cuda", ranks=[2, 3]) -# 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 -w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) -# 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 -w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) -# 随机生成数据模拟输入, -# 第一阶段需要将输入数据切分,用于数据并行 -in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) -out_stage0 = flow.matmul(in_stage0, w0) -print(out_stage0.shape) # (4, 8) +class ModuleModel(nn.Module): + def 
__init__(self): + super().__init__() -# 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 -in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) -out_stage1 = flow.matmul(in_stage1, w1) -print(out_stage1.shape) # (4, 3) + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + + def forward(self, in_stage0): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = flow.matmul(in_stage0, self.w0) + + # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = flow.matmul(in_stage1, self.w1) + + return out_stage0, out_stage1 + + +if __name__ == "__main__": + model = ModuleModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = model(in_stage0) + print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] ``` +**Graph 模式(静态图)** + +将上述 `Eager 模式`的示例代码改写为 `Graph 模式`,只需要自定义继承自 `nn.Graph` 的类(GraphModel),并对 `Eager 模式` 的网络模型(ModuleModel)进行复用即可。GraphModel 的实现如下。(更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md)) + +```python +class GraphModel(nn.Graph): + def __init__(self): + super().__init__() + self.model = ModuleModel() + + def build(self, x): + return self.model(x) +``` + +`Graph 模式` 完整混合并行示例代码如下: + +```python +import oneflow as flow +import oneflow.nn as nn + +P01 = flow.placement(type="cuda", ranks=[0, 1]) +P23 = flow.placement(type="cuda", ranks=[2, 3]) + + +class ModuleModel(nn.Module): + def __init__(self): + super().__init__() + + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + + def forward(self, in_stage0): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = flow.matmul(in_stage0, self.w0) + + # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 
卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = flow.matmul(in_stage1, self.w1) + + return out_stage0, out_stage1 + + +# Graph +class GraphModel(nn.Graph): + def __init__(self): + super().__init__() + self.model = ModuleModel() + + def build(self, x): + return self.model(x) + + +if __name__ == "__main__": + graph = GraphModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = graph(in_stage0) + print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] + +``` + +以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: + + + +模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 + **运行方式:** -假设脚本文件名为 `test.py` +`Eager 模式`和 `Graph 模式`的运行方式一致,假设脚本文件名为 `test.py` 1. 单机四卡启动方式为: @@ -208,12 +294,6 @@ print(out_stage1.shape) # (4, 3) 注意要将 `master_addr` 设置为第 0 号机器的 IP -以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: - - - -模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 - ## 结语 From 0414a7d9d3fe962d15b37d234a1a99eabcd2db86 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 22 Aug 2022 16:57:07 +0800 Subject: [PATCH 08/25] fix(graph): add set_stage --- cn/docs/cookies/distributed_outline.md | 38 -------------------- cn/docs/cookies/global_tensor_distributed.md | 9 +++-- 2 files changed, 6 insertions(+), 41 deletions(-) delete mode 100644 cn/docs/cookies/distributed_outline.md diff --git a/cn/docs/cookies/distributed_outline.md b/cn/docs/cookies/distributed_outline.md deleted file mode 100644 index 2e453659..00000000 --- a/cn/docs/cookies/distributed_outline.md +++ /dev/null @@ -1,38 +0,0 @@ -# 使用 Global Tensor 进行多机多设备编程:分布式并行策略 - -简单介绍分布式训练的重要性 - -## 并行策略 - -对三种并行方式进行简要概括 - -## 示例 - -思路:用一个完整的网络模型进行不同并行策略演示 - -### 单卡基础示例 - -模型可以用韩老师提供的这个[示例](https://github.com/Oneflow-Inc/oneflow-documentation/issues/481#issuecomment-1109771017),但是我觉得一些训练相关的 loss, optimizer 可以去掉,只保留输入,模型和输出。 - -单卡示例不采用 `.cuda()` 或者 `to(device)` 的写法,而是直接写 
`placement=flow.placement(type="cuda", ranks=[0])` 和 `sbp=flow.sbp.broadcast`,便于与后续改动做对比 - -### 如何在两卡上进行数据并行 - -1. 描述代码需要改变的部分 -2. 给出完整可运行代码以及运行方式 - -### 如何在两卡上进行模型并行 - -1. 描述代码需要改变的部分 -2. 给出完整可运行代码以及运行方式 - -### 如何在两卡上进行流水并行 - -1. 描述代码需要改变的部分 -2. 给出完整可运行代码以及运行方式 - -### 混合并行 - -这里我想的还是只展示 GPT-3 示意图做简要介绍 - -## 结语 diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index e9a5d4a1..dcf607f8 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -218,9 +218,11 @@ class ModuleModel(nn.Module): super().__init__() # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 - self.w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) + w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) + self.w0 = nn.Parameter(w0) # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 - self.w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + self.w1 = nn.Parameter(w1) def forward(self, in_stage0): # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 @@ -238,6 +240,8 @@ class GraphModel(nn.Graph): def __init__(self): super().__init__() self.model = ModuleModel() + self.model.w0.config.set_stage(stage_id=0, placement=P01) + self.model.w1.config.set_stage(stage_id=1, placement=P23) def build(self, x): return self.model(x) @@ -249,7 +253,6 @@ if __name__ == "__main__": in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = graph(in_stage0) print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] - ``` 以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: From a30f7379789414578f90154b8eadc495168b54ac Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 22 Aug 2022 17:43:18 +0800 Subject: [PATCH 09/25] fix(graph): change set_stage --- cn/docs/cookies/global_tensor_distributed.md | 42 ++++++++++---------- 1 file changed, 22 insertions(+), 20 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md 
b/cn/docs/cookies/global_tensor_distributed.md index dcf607f8..6a06aad2 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -191,19 +191,10 @@ if __name__ == "__main__": **Graph 模式(静态图)** -将上述 `Eager 模式`的示例代码改写为 `Graph 模式`,只需要自定义继承自 `nn.Graph` 的类(GraphModel),并对 `Eager 模式` 的网络模型(ModuleModel)进行复用即可。GraphModel 的实现如下。(更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md)) +将上述 `Eager 模式`的示例代码改写为 `Graph 模式`,需要自定义继承自 `nn.Graph` 的类(GraphModel),并对 `Eager 模式` 的网络模型(ModuleModel)进行复用。(更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md)) -```python -class GraphModel(nn.Graph): - def __init__(self): - super().__init__() - self.model = ModuleModel() - - def build(self, x): - return self.model(x) -``` -`Graph 模式` 完整混合并行示例代码如下: +示例代码如下: ```python import oneflow as flow @@ -213,24 +204,35 @@ P01 = flow.placement(type="cuda", ranks=[0, 1]) P23 = flow.placement(type="cuda", ranks=[2, 3]) +class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) + + def forward(self, x): + out = flow.matmul(x, self.w) + return out + + class ModuleModel(nn.Module): def __init__(self): super().__init__() # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 - w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) - self.w0 = nn.Parameter(w0) + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 - w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - self.w1 = nn.Parameter(w1) + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - def forward(self, in_stage0): + def forward(self, x): # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 - out_stage0 = flow.matmul(in_stage0, self.w0) + out_stage0 = self.m_stage0(x) # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) - 
out_stage1 = flow.matmul(in_stage1, self.w1) + out_stage1 = self.m_stage1(in_stage1) return out_stage0, out_stage1 @@ -240,8 +242,8 @@ class GraphModel(nn.Graph): def __init__(self): super().__init__() self.model = ModuleModel() - self.model.w0.config.set_stage(stage_id=0, placement=P01) - self.model.w1.config.set_stage(stage_id=1, placement=P23) + self.model.m_stage0.config.set_stage(stage_id=0, placement=P01) + self.model.m_stage1.config.set_stage(stage_id=1, placement=P23) def build(self, x): return self.model(x) From d48a6a31522f9b3430724495c2ab35c948dcf4a9 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Mon, 22 Aug 2022 17:52:16 +0800 Subject: [PATCH 10/25] fix(eager): change eager --- cn/docs/cookies/global_tensor_distributed.md | 25 +++++++++++++++----- 1 file changed, 19 insertions(+), 6 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index 6a06aad2..b5c757ff 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -161,22 +161,35 @@ P01 = flow.placement(type="cuda", ranks=[0, 1]) P23 = flow.placement(type="cuda", ranks=[2, 3]) +class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) + + def forward(self, x): + out = flow.matmul(x, self.w) + return out + + class ModuleModel(nn.Module): def __init__(self): super().__init__() # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 - self.w0 = flow.randn(5, 8, placement=P01, sbp=flow.sbp.broadcast) + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 - self.w1 = flow.randn(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - def forward(self, in_stage0): + def forward(self, x): # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 - out_stage0 = 
flow.matmul(in_stage0, self.w0) + out_stage0 = self.m_stage0(x) # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) - out_stage1 = flow.matmul(in_stage1, self.w1) + out_stage1 = self.m_stage1(in_stage1) return out_stage0, out_stage1 @@ -186,7 +199,7 @@ if __name__ == "__main__": # 需要将输入数据切分,用于数据并行 in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = model(in_stage0) - print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] + print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] ``` **Graph 模式(静态图)** From e972610c1248d9f52cc7384252e501926d2da725 Mon Sep 17 00:00:00 2001 From: Cheng Guoliang Date: Tue, 23 Aug 2022 14:38:31 +0800 Subject: [PATCH 11/25] refactor(distributed): add authors and refactor eager and graph --- cn/docs/cookies/global_tensor_distributed.md | 192 ++++++++++--------- cn/mkdocs.yml | 3 +- 2 files changed, 101 insertions(+), 94 deletions(-) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index b5c757ff..9b26c4ad 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -1,5 +1,7 @@ # 使用 Global Tensor 进行多机多设备编程:分布式并行策略 +By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) + 深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 @@ -10,15 +12,15 @@ 值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 -常见的并行策略包括**数据并行**、**模型并行**和**流水并行**,特点如下: +常见的并行策略包括 **数据并行** 、**模型并行** 和 **流水并行**,特点如下: -- 数据并行:对**数据**进行切分,不同设备数据不同,但模型相同 -- 模型并行:对**模型**进行切分,不同设备数据相同,但模型不同 -- 流水并行:将**模型**分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练 +- 数据并行:对 **数据** 进行切分,不同设备数据不同,但模型相同 +- 模型并行:对 **模型** 进行切分,不同设备数据相同,但模型不同 +- 流水并行:将 **模型** 分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练 
-除上述三种策略外,**混合并行**也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 +除上述三种策略外, **混合并行** 也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 -本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 Global Tensor 实现不同的并行方式。 +本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 `Global Tensor` 实现不同的并行方式。 假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。 @@ -49,9 +51,9 @@ OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完 **注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** + ```python import oneflow as flow - placement = flow.placement(type="cuda", ranks=[0, 1]) x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.split(dim=0)) w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.broadcast) @@ -60,6 +62,7 @@ print(out.shape) # (4, 8) ``` 假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: + ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` @@ -147,128 +150,131 @@ Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 混合并行是结合使用以上两种或三种策略的并行策略。 -OneFlow 同时支持 `Eager 模式`和 `Graph 模式`两种模型运行方式,二者均可用于并行计算策略。此处以 `4 卡`混合并行程序为例进行介绍。 +OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式,二者均可用于并行计算策略。 -**Eager 模式(动态图)** +- `Eager 模式` 是 OneFlow 的默认模式,网络模型继承自 `nn.Module` 模块。 +- `Graph 模式` 需要自定义继承自 `nn.Graph` 的类,并对 `Eager 模式` 的网络模型进行复用。 -`Eager 模式`是 OneFlow 的默认模式,网络模型继承自 nn.Module 模块。 +更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md) -```python -import oneflow as flow -import oneflow.nn as nn +此处以 `4 卡`混合并行程序为例进行介绍。 -P01 = flow.placement(type="cuda", ranks=[0, 1]) -P23 = flow.placement(type="cuda", ranks=[2, 3]) +!!! 
Note + 分别 **点击** 以下 `Eager` 或 `Graph` 标签,查看 两种模式的示例代码 +=== "Eager" -class StageModule(nn.Module): - def __init__(self, in_dims, out_dims, placement=None, sbp=None): - super().__init__() - self.w = nn.Parameter( - flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) - ) + ```python + import oneflow as flow + import oneflow.nn as nn - def forward(self, x): - out = flow.matmul(x, self.w) - return out + P01 = flow.placement(type="cuda", ranks=[0, 1]) + P23 = flow.placement(type="cuda", ranks=[2, 3]) -class ModuleModel(nn.Module): - def __init__(self): - super().__init__() + class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) - # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 - self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + def forward(self, x): + out = flow.matmul(x, self.w) + return out - # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 - self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - def forward(self, x): - # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 - out_stage0 = self.m_stage0(x) + class ModuleModel(nn.Module): + def __init__(self): + super().__init__() - # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 - in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) - out_stage1 = self.m_stage1(in_stage1) + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) - return out_stage0, out_stage1 + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + def forward(self, x): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = self.m_stage0(x) -if __name__ == "__main__": - model = ModuleModel() - # 需要将输入数据切分,用于数据并行 - in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) - out_stage0, out_stage1 = model(in_stage0) - print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] -``` + # 
第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = self.m_stage1(in_stage1) -**Graph 模式(静态图)** + return out_stage0, out_stage1 -将上述 `Eager 模式`的示例代码改写为 `Graph 模式`,需要自定义继承自 `nn.Graph` 的类(GraphModel),并对 `Eager 模式` 的网络模型(ModuleModel)进行复用。(更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md)) + if __name__ == "__main__": + model = ModuleModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = model(in_stage0) + print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) + ``` -示例代码如下: +=== "Graph" -```python -import oneflow as flow -import oneflow.nn as nn + ```python + import oneflow as flow + import oneflow.nn as nn -P01 = flow.placement(type="cuda", ranks=[0, 1]) -P23 = flow.placement(type="cuda", ranks=[2, 3]) + P01 = flow.placement(type="cuda", ranks=[0, 1]) + P23 = flow.placement(type="cuda", ranks=[2, 3]) -class StageModule(nn.Module): - def __init__(self, in_dims, out_dims, placement=None, sbp=None): - super().__init__() - self.w = nn.Parameter( - flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) - ) + class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) - def forward(self, x): - out = flow.matmul(x, self.w) - return out + def forward(self, x): + out = flow.matmul(x, self.w) + return out -class ModuleModel(nn.Module): - def __init__(self): - super().__init__() + class ModuleModel(nn.Module): + def __init__(self): + super().__init__() - # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 - self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) - # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 - self.m_stage1 = StageModule(8, 3, placement=P23, 
sbp=flow.sbp.split(dim=1)) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - def forward(self, x): - # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 - out_stage0 = self.m_stage0(x) + def forward(self, x): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = self.m_stage0(x) - # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 - in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) - out_stage1 = self.m_stage1(in_stage1) + # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = self.m_stage1(in_stage1) - return out_stage0, out_stage1 + return out_stage0, out_stage1 -# Graph -class GraphModel(nn.Graph): - def __init__(self): - super().__init__() - self.model = ModuleModel() - self.model.m_stage0.config.set_stage(stage_id=0, placement=P01) - self.model.m_stage1.config.set_stage(stage_id=1, placement=P23) + # Graph + class GraphModel(nn.Graph): + def __init__(self): + super().__init__() + self.model = ModuleModel() + self.model.m_stage0.config.set_stage(stage_id=0, placement=P01) + self.model.m_stage1.config.set_stage(stage_id=1, placement=P23) - def build(self, x): - return self.model(x) + def build(self, x): + return self.model(x) -if __name__ == "__main__": - graph = GraphModel() - # 需要将输入数据切分,用于数据并行 - in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) - out_stage0, out_stage1 = graph(in_stage0) - print(out_stage0.shape, out_stage1.shape) # [4, 8] [4, 3] -``` + if __name__ == "__main__": + graph = GraphModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = graph(in_stage0) + print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) + ``` 以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: @@ -278,7 +284,7 @@ if __name__ == "__main__": **运行方式:** -`Eager 模式`和 `Graph 模式`的运行方式一致,假设脚本文件名为 `test.py` +`Eager 模式` 和 `Graph 模式` 的运行方式一致,假设脚本文件名为 `test.py` 1. 
单机四卡启动方式为: diff --git a/cn/mkdocs.yml b/cn/mkdocs.yml index 43a59097..b1f371b8 100644 --- a/cn/mkdocs.yml +++ b/cn/mkdocs.yml @@ -134,7 +134,8 @@ nav: - 流水并行训练: parallelism/06_pipeline.md - 实践指南: - - 使用 Global Tensor 进行多机多设备编程 基础操作: cookies/global_tensor.md + - 使用 Global Tensor 进行多机多设备编程:基础操作: cookies/global_tensor.md + - 使用 Global Tensor 进行多机多设备编程:分布式并行策略: cookies/global_tensor_distributed.md - OneFlow 与 ONNX 交互: cookies/oneflow2onnnx.md - 模型部署: cookies/serving.md - 自动混合精度训练: cookies/amp.md From 3c3691ac4132c50f52ad8551895c87f301096cd9 Mon Sep 17 00:00:00 2001 From: httpshirley <100749531+httpshirley@users.noreply.github.com> Date: Wed, 24 Aug 2022 16:03:21 +0800 Subject: [PATCH 12/25] Create global_tensor_distributed.md --- en/docs/cookies/global_tensor_distributed.md | 292 +++++++++++++++++++ 1 file changed, 292 insertions(+) create mode 100644 en/docs/cookies/global_tensor_distributed.md diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md new file mode 100644 index 00000000..cfcbfb69 --- /dev/null +++ b/en/docs/cookies/global_tensor_distributed.md @@ -0,0 +1,292 @@ +# 使用 Global Tensor 进行多机多设备编程:分布式并行策略 + +By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) + +深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 + +随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 + +为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。 + +## 并行策略 + +值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 + +常见的并行策略包括 **数据并行** 、**模型并行** 和 **流水并行**,特点如下: + +- 数据并行:对 **数据** 进行切分,不同设备数据不同,但模型相同 +- 模型并行:对 **模型** 进行切分,不同设备数据相同,但模型不同 +- 流水并行:将 **模型** 分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练 + +除上述三种策略外, **混合并行** 也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 + +本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 `Global Tensor` 实现不同的并行方式。 + +假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 
$4\times8$。 + +基础代码: + +```python +import oneflow as flow +x = flow.randn(4, 5) +w = flow.randn(5, 8) +out = flow.matmul(x, w) +print(out.shape) # (4, 8) +``` + +示意图如下: + +![matmul](../parallelism/imgs/matmul_logical.png) + +单设备的训练中,以上矩阵乘法计算得到 $out$ 后会传递到下一层,并最终计算得到 $loss$。然后,在反向传播过程中,得到 $\frac{\partial loss}{\partial w}$,用于更新 $w$。 + +### 数据并行 + +数据并行是将数据进行切分输入不同设备,而每个设备上的模型保持完整和一致。 + +OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。 + +以两卡并行为例,矩阵乘法案例的数据并行程序如下: + +**注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** + + +```python +import oneflow as flow +placement = flow.placement(type="cuda", ranks=[0, 1]) +x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.split(dim=0)) +w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.broadcast) +out = flow.matmul(x, w) +print(out.shape) # (4, 8) +``` + +假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: + +```shell +python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py +``` + +数据并行示意图: + +![Data Paralelism](../parallelism/imgs/matmul_data_paralelism.png) + +以上程序可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将: + +1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) +2. 
模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+
+### 模型并行
+
+当神经网络非常巨大时,数据并行同步梯度的代价很大,此时可以考虑采用模型并行策略。
+
+与数据并行相反,模型并行是将模型进行切分输入不同设备,而每个设备上的数据保持完整和一致。
+
+同样以两卡为例,矩阵乘法的模型并行程序如下:
+
+```python
+import oneflow as flow
+placement = flow.placement(type="cuda", ranks=[0, 1])
+x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.broadcast)
+w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.split(dim=1))
+out = flow.matmul(x, w)
+print(out.shape) # (4, 8)
+```
+
+假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序:
+```shell
+python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py
+```
+
+模型并行示意图:
+
+![Model Parallelism](../parallelism/imgs/matmul_model_paralelism.png)
+
+同样只需要修改以下两部分:
+
+1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+2. 模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
+
+### 流水并行
+
+当神经网络过于巨大,无法在一个设备上存放时,可以选择流水并行策略。流水并行将网络切分为多个阶段,并分发到不同的计算设备上,各个计算设备之间以“流水线”的方式完成训练。
+
+以两卡流水并行为例,构造两阶段示例程序:
+
+```python
+import oneflow as flow
+P0 = flow.placement(type="cuda", ranks=[0])
+P1 = flow.placement(type="cuda", ranks=[1])
+BROADCAST = flow.sbp.broadcast
+# 模型第一阶段分布在第 0 卡
+w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST)
+# 模型第二阶段分布在第 1 卡
+w1 = flow.randn(8, 3, placement=P1, sbp=BROADCAST)
+# 随机生成数据模拟输入,注意第一阶段的数据分布在第 0 卡
+in_stage0 = flow.randn(4, 5, placement=P0, sbp=BROADCAST)
+out_stage0 = flow.matmul(in_stage0, w0)
+print(out_stage0.shape) # (4, 8)
+# 利用 to_global 将第二阶段的数据分布在第 1 卡
+in_stage1 = out_stage0.to_global(placement=P1, sbp=BROADCAST)
+out_stage1 = flow.matmul(in_stage1, w1)
+print(out_stage1.shape) # (4, 3)
+```
+
+假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序:
+```shell
+python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py
+```
+
+以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。
+
+Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 
方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。 + +### 混合并行 + +混合并行是结合使用以上两种或三种策略的并行策略。 + +OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式,二者均可用于并行计算策略。 + +- `Eager 模式` 是 OneFlow 的默认模式,网络模型继承自 `nn.Module` 模块。 +- `Graph 模式` 需要自定义继承自 `nn.Graph` 的类,并对 `Eager 模式` 的网络模型进行复用。 + +更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md) + +此处以 `4 卡`混合并行程序为例进行介绍。 + +!!! Note + 分别 **点击** 以下 `Eager` 或 `Graph` 标签,查看 两种模式的示例代码 + +=== "Eager" + + ```python + import oneflow as flow + import oneflow.nn as nn + P01 = flow.placement(type="cuda", ranks=[0, 1]) + P23 = flow.placement(type="cuda", ranks=[2, 3]) + class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) + def forward(self, x): + out = flow.matmul(x, self.w) + return out + class ModuleModel(nn.Module): + def __init__(self): + super().__init__() + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + def forward(self, x): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = self.m_stage0(x) + # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = self.m_stage1(in_stage1) + return out_stage0, out_stage1 + if __name__ == "__main__": + model = ModuleModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = model(in_stage0) + print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) + ``` + +=== "Graph" + + ```python + import oneflow as flow + import oneflow.nn as nn + P01 = flow.placement(type="cuda", ranks=[0, 1]) + P23 = flow.placement(type="cuda", ranks=[2, 3]) + class StageModule(nn.Module): + def __init__(self, in_dims, out_dims, placement=None, sbp=None): + super().__init__() + 
self.w = nn.Parameter( + flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) + ) + def forward(self, x): + out = flow.matmul(x, self.w) + return out + class ModuleModel(nn.Module): + def __init__(self): + super().__init__() + # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) + # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) + def forward(self, x): + # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + out_stage0 = self.m_stage0(x) + # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) + out_stage1 = self.m_stage1(in_stage1) + return out_stage0, out_stage1 + # Graph + class GraphModel(nn.Graph): + def __init__(self): + super().__init__() + self.model = ModuleModel() + self.model.m_stage0.config.set_stage(stage_id=0, placement=P01) + self.model.m_stage1.config.set_stage(stage_id=1, placement=P23) + def build(self, x): + return self.model(x) + if __name__ == "__main__": + graph = GraphModel() + # 需要将输入数据切分,用于数据并行 + in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) + out_stage0, out_stage1 = graph(in_stage0) + print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) + ``` + +以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: + + + +模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 + +**运行方式:** + +`Eager 模式` 和 `Graph 模式` 的运行方式一致,假设脚本文件名为 `test.py` + +1. 单机四卡启动方式为: + + ```shell + python3 -m oneflow.distributed.launch --nproc_per_node 4 test.py + ``` + +2. 
oneflow 分布式工具支持多机多设备并行,以 `2 机 2 卡` 环境为例,启动方式如下: + + 在 第 0 号机器上运行: + ```shell + python3 -m oneflow.distributed.launch \ + --nnodes=2 \ + --node_rank=0 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP + --master_port=7788 \ + test.py + ``` + + 在 第 1 号机器上运行: + ```shell + python3 -m oneflow.distributed.launch \ + --nnodes=2 \ + --node_rank=1 \ + --nproc_per_node=2 \ + --master_addr="192.168.1.1" \ # 第 0 号机器的 IP + --master_port=7788 \ + test.py + ``` + + 注意要将 `master_addr` 设置为第 0 号机器的 IP + + +## 结语 + +并行策略的选择影响着训练效率,框架对并行训练的接口支持程度,决定了算法工程师的开发效率。 + +本文介绍了数据并行、模型并行、流水并行以及混合并行这些分布式并行策略,通过示例展示了 OneFlow 针对分布式训练所做的系统级设计和创新,以便于用户轻松上手分布式训练。 + From b8f4c9da26b677ab31388b7259c557c92546ba4b Mon Sep 17 00:00:00 2001 From: Hu Yanjun <100749531+httpshirley@users.noreply.github.com> Date: Mon, 29 Aug 2022 18:10:48 +0800 Subject: [PATCH 13/25] Update global_tensor_distributed.md --- en/docs/cookies/global_tensor_distributed.md | 54 ++++++++++++++++++++ 1 file changed, 54 insertions(+) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index cfcbfb69..e14141b3 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -1,31 +1,55 @@ # 使用 Global Tensor 进行多机多设备编程:分布式并行策略 +# Using Global Tensor for Multi-Device Multi-GPU Programming: Distributed Parallelism Strategies By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) 深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 +Deep learning is a complicated machine learning algorithm where the neural network learns the patterns and representations of the training data. The computation mainly involves two parts: data and model. + 随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 +The increasingly wide application of deep learning and the growing model size impose higher demands for hardware (computing power and memory). 
However, by some physical laws, it’s getting harder and harder to put more transistors on a chip. Thus, it is difficult for one single device to meet the computing and memory requirements for the ever-enlarging deep learning models. + 为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。 +Distributed training with multi-node clusters emerges as a solution. We are in urgent need of some efficient and easy-to-use distributed parallelism strategies. + ## 并行策略 +## Parallelism Strategies 值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 +It should be noted that simply multiplying the number of devices doesn’t necessarily bring increase in computing power, because neural network training is more complicated than just splitting the work of one device among multiple devices. In addition to computation on each device, it entails inter-device communication. That means we need to schedule the computation and communication well in order to achieve high efficiency in distributed training. + 常见的并行策略包括 **数据并行** 、**模型并行** 和 **流水并行**,特点如下: +Common parallelism strategies include **data parallelism**, **model parallelism**, and **pipeline parallelism**, which are detailed as follows: + - 数据并行:对 **数据** 进行切分,不同设备数据不同,但模型相同 - 模型并行:对 **模型** 进行切分,不同设备数据相同,但模型不同 - 流水并行:将 **模型** 分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练 +- Data Parallelism: partition the **data**, each device running the same model but processing different data shards. +- Model Parallelism: partition the **model**, each device running different parts of the model but processing the same data. +- Pipeline Parallelism: partition the **model** into stages and distribute them to various devices, the devices executing the stages in a pipeline fashion. + 除上述三种策略外, **混合并行** 也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 +Another frequently used strategy is **mixed parallelism**, which means mixing two or three of the above strategies in neural network training. 
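The three strategies above rest on simple arithmetic identities of matrix multiplication, and these identities can be checked without any framework at all. The following plain-Python sketch (not OneFlow code; the shapes mirror the running example, and all names are purely illustrative) verifies that splitting the input by rows — as data parallelism does — or splitting the weight by columns — as model parallelism does — reproduces the single-device result:

```python
# Dependency-free check of the identities behind data and model parallelism.
# "Devices" are simulated with plain Python lists; nothing here is OneFlow API.

def matmul(a, b):
    """Naive matrix multiply for lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

x = [[i + j for j in range(5)] for i in range(4)]        # input, shape (4, 5)
w = [[(i * j) % 3 for j in range(8)] for i in range(5)]  # weight, shape (5, 8)
full = matmul(x, w)                                      # single-device result, (4, 8)

# Data parallelism: each "device" holds half the rows of x and the full w;
# stacking the partial outputs row-wise gives the full result.
dev0, dev1 = matmul(x[:2], w), matmul(x[2:], w)
assert dev0 + dev1 == full

# Model parallelism: each "device" holds the full x and half the columns of w;
# concatenating the partial outputs column-wise gives the full result.
m0 = matmul(x, [row[:4] for row in w])
m1 = matmul(x, [row[4:] for row in w])
assert [r0 + r1 for r0, r1 in zip(m0, m1)] == full
print("row-split and column-split both match the single-device result")
```

Pipeline parallelism needs no such identity: the stages are simply composed, so only the activation at the stage boundary has to cross devices — which is exactly what `to_global(...)` expresses in the OneFlow examples later in this document.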
+ 本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 `Global Tensor` 实现不同的并行方式。 +In the remainder of this article, we will explain the difference between these parallelism strategies with matrix multiplication as an example and introduce how to implement these strategies using ` Global Tensor`. + 假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。 +Assuming that a certain layer in a neural network is dedicated to matrix multiplication. If the shape of the input $x$ is $4\times5$ and that of the model parameter $w$ is $5\times8$, then the shape of the output will be $4\times8$. + 基础代码: +Basic code: + ```python import oneflow as flow x = flow.randn(4, 5) @@ -36,20 +60,32 @@ print(out.shape) # (4, 8) 示意图如下: +Here is the illustration: + ![matmul](../parallelism/imgs/matmul_logical.png) 单设备的训练中,以上矩阵乘法计算得到 $out$ 后会传递到下一层,并最终计算得到 $loss$。然后,在反向传播过程中,得到 $\frac{\partial loss}{\partial w}$,用于更新 $w$。 +In single-device training, the above computation will produce an output $out$, which will be passed to the next layer. Eventually, we will get a $loss$. Then, in backward propagation, we will get $\frac{\partial loss}{\partial w}$, which will be used to update $w$. + ### 数据并行 +### Data Parallelism 数据并行是将数据进行切分输入不同设备,而每个设备上的模型保持完整和一致。 +In data parallelism, we input different data shards into different devices, and each device runs the same whole model to process its given data shard. + OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。 +In OneFlow’s Global Tensor, the data is distributed via `placement` and `sbp`. `placement` refers to the physical devices that the global tensor is distributed among and `sbp` refers to the way that the global tensor is distributed. 
(For more information, please refer to [Create a Global Tensor](https://docs.oneflow.org/en/master/cookies/global_tensor.html)) + 以两卡并行为例,矩阵乘法案例的数据并行程序如下: +Take two-GPU parallelism as an example, the data parallelism program for the aforementioned matrix multiplication is as follows: + **注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** +**Note: If you don’t have multiple GPUs, you can designate the `placement` as `type="cpu"` in the third line of the following snippet, so you can mimic multi-device parallelism with CPUs.** ```python import oneflow as flow @@ -62,27 +98,45 @@ print(out.shape) # (4, 8) 假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: +Supposing that the above program is in the `test.py` script. Unlike what we’ve mentioned in the previous article, here we utilize a OneFlow distribution tool and execute the following instruction to start the program in terminal: + ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` 数据并行示意图: +Illustration of data parallelism: + ![Data Paralelism](../parallelism/imgs/matmul_data_paralelism.png) 以上程序可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将: +As can be seen, the design of Global Tensor makes it easy to modify the code for the above matrix multiplication. All you need to do is to: + 1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) 2. 模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) +
+ +1. Partition the data $x$ on the 0 dimension (`sbp=flow.sbp.split(dim=0)`), and distribute the data across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). +2. Keep the model parameter $w$ intact (`sbp=flow.sbp.broadcast`), and broadcast it to two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). + ### 模型并行 +### Model Parallelism 当神经网络非常巨大时,数据并行同步梯度的代价很大,此时可以考虑采用模型并行策略。 +When the neural network is extremely large, data parallelism can result in huge cost of gradient synchronization. This is when model parallelism comes in handy. + 与数据并行相反,模型并行是将模型进行切分输入不同设备,而每个设备上的数据保持完整和一致。 +In contrast with data parallelism, with model parallelism, you partition the model and feed different parts of the model to various devices. Each device processes the same whole data. + 同样以两卡为例,矩阵乘法的模型并行程序如下: +Still, we take two-GPU parallelism as an example. The model parallelism program for the aforementioned matrix multiplication is as follows: + ```python import oneflow as flow placement = flow.placement(type="cuda", ranks=[0, 1]) From 2e774f315368a64a42d5ea9e779f8ea20a353a07 Mon Sep 17 00:00:00 2001 From: Hu Yanjun <100749531+httpshirley@users.noreply.github.com> Date: Tue, 30 Aug 2022 18:01:26 +0800 Subject: [PATCH 14/25] Update global_tensor_distributed.md --- en/docs/cookies/global_tensor_distributed.md | 120 +++++++++++++++---- 1 file changed, 100 insertions(+), 20 deletions(-) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index e14141b3..4d4ec6ae 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -5,11 +5,11 @@ By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/s 深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 -Deep learning is a complicated machine learning algorithm where the neural network learns the patterns and representations of the training data. 
The computation mainly involves two parts: data and model. +Deep learning is a complicated machine learning algorithm that uses a neural network to learn the patterns and representations of the training data. The computation mainly involves two parts: data and model. 随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 -The increasingly wide application of deep learning and the growing model size impose higher demands for hardware (computing power and memory). However, by some physical laws, it’s getting harder and harder to put more transistors on a chip. Thus, it is difficult for one single device to meet the computing and memory requirements for the ever-enlarging deep learning models. +The increasingly wide application of deep learning and the growing model size impose higher demands for hardware (computing power and memory). However, by some physical laws, it’s getting harder and harder to put more transistors on one chip. Thus, it is difficult for one single device to meet the computing and memory requirements for the ever-enlarging deep learning models. 为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。 @@ -20,7 +20,7 @@ Distributed training with multi-node clusters emerges as a solution. We are in u 值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 -It should be noted that simply multiplying the number of devices doesn’t necessarily bring increase in computing power, because neural network training is more complicated than just splitting the work of one device among multiple devices. In addition to computation on each device, it entails inter-device communication. That means we need to schedule the computation and communication well in order to achieve high efficiency in distributed training. 
+It should be noted that simply multiplying the number of computing devices doesn’t necessarily bring an increase in computing power, because neural network training is more complicated than just splitting the work of one device among multiple devices. In addition to computation on each device, distributed training entails inter-device communication. That means we need to schedule the computation and communication well in order to achieve high efficiency in distributed training.
 
 常见的并行策略包括 **数据并行** 、**模型并行** 和 **流水并行**,特点如下:
 
@@ -30,21 +30,21 @@ Common parallelism strategies include **data parallelism**, **model parallelism*
 
 - 数据并行:对 **数据** 进行切分,不同设备数据不同,但模型相同
 - 模型并行:对 **模型** 进行切分,不同设备数据相同,但模型不同
 - 流水并行:将 **模型** 分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练
 
-- Data Parallelism: partition the **data**, each device running the same model but processing different data shards.
-- Model Parallelism: partition the **model**, each device running different parts of the model but processing the same data.
-- Pipeline Parallelism: partition the **model** into stages and distribute them to various devices, the devices executing the stages in a pipeline fashion.
+- Data Parallelism: The **data** is partitioned. Each device runs the same model but processes different data shards.
+- Model Parallelism: The **model** is partitioned. Each device runs different parts of the model but processes the same data.
+- Pipeline Parallelism: The **model** is partitioned into stages, which are distributed to various devices. The devices execute the stages in a pipeline fashion.
 
 除上述三种策略外, **混合并行** 也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。
 
-Another frequently used strategy is **mixed parallelism**, which means mixing two or three of the above strategies in neural network training.
+Another frequently used strategy is **mixed parallelism**, which refers to the combined use of two or three of the above strategies for neural network training.
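The recombination arithmetic behind the data-parallel and model-parallel variants can be verified without any framework. The plain-Python sketch below is illustrative only — not OneFlow code — and its shapes follow the $4\times5$ by $5\times8$ matmul example used in this article:

```python
# Plain-Python sketch (no OneFlow required) showing that data-parallel and
# model-parallel shards of a matmul recombine into the single-device result.
# Shapes mirror the running example: x is 4x5, w is 5x8.

def matmul(a, b):
    # naive matrix multiplication on nested lists
    return [[sum(p * q for p, q in zip(row, col)) for col in zip(*b)]
            for row in a]

x = [[float(i + j) for j in range(5)] for i in range(4)]      # 4x5
w = [[float(i * j % 7) for j in range(8)] for i in range(5)]  # 5x8
full = matmul(x, w)  # single-device result, 4x8

# Data parallelism: x is split by rows (like sbp.split(dim=0)),
# w is replicated (like sbp.broadcast); outputs concatenate on dim 0.
out_dp = matmul(x[:2], w) + matmul(x[2:], w)

# Model parallelism: w is split by columns (like sbp.split(dim=1)),
# x is replicated; outputs concatenate on dim 1.
w_l = [row[:4] for row in w]
w_r = [row[4:] for row in w]
out_mp = [l + r for l, r in zip(matmul(x, w_l), matmul(x, w_r))]

assert out_dp == full and out_mp == full
```

Either way of sharding reproduces the full product exactly; what differs is which tensor is cut and, in real training, what has to be communicated between devices.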
本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 `Global Tensor` 实现不同的并行方式。 -In the remainder of this article, we will explain the difference between these parallelism strategies with matrix multiplication as an example and introduce how to implement these strategies using ` Global Tensor`. +In the remainder of this article, we will explain the differences between these parallelism strategies with matrix multiplication as an example and introduce how to implement these strategies using ` Global Tensor`. 假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。 -Assuming that a certain layer in a neural network is dedicated to matrix multiplication. If the shape of the input $x$ is $4\times5$ and that of the model parameter $w$ is $5\times8$, then the shape of the output will be $4\times8$. +Assuming that a certain layer in a neural network is dedicated to matrix multiplication, if the shape of the input $x$ is $4\times5$ and that of the model parameter $w$ is $5\times8$, then the shape of the output will be $4\times8$. 基础代码: @@ -77,7 +77,7 @@ In data parallelism, we input different data shards into different devices, and OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。 -In OneFlow’s Global Tensor, the data is distributed via `placement` and `sbp`. `placement` refers to the physical devices that the global tensor is distributed among and `sbp` refers to the way that the global tensor is distributed. (For more information, please refer to [Create a Global Tensor](https://docs.oneflow.org/en/master/cookies/global_tensor.html)) +In OneFlow’s unique Global Tensor, the distribution is implemented via `placement` and `sbp`. `placement` refers to the physical devices that the global tensor is distributed among, and `sbp` refers to the way that the global tensor is distributed. 
(For more information, please refer to [Create a Global Tensor](https://docs.oneflow.org/en/master/cookies/global_tensor.html)) 以两卡并行为例,矩阵乘法案例的数据并行程序如下: @@ -85,7 +85,7 @@ Take two-GPU parallelism as an example, the data parallelism program for the afo **注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** -**Note: If you don’t have multiple GPUs, you can designate the `placement` as `type="cpu"` in the third line of the following snippet, so you can mimic multi-device parallelism with CPUs.** +**Note: If you don’t have multiple GPUs, you can designate the `placement` as `type="cpu"` in the third line of the following code, so you can simulate multi-device parallelism with CPUs.** ```python import oneflow as flow @@ -98,7 +98,7 @@ print(out.shape) # (4, 8) 假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: -Supposing that the above program is in the `test.py` script. Unlike what we’ve mentioned in the previous article, here we utilize a OneFlow distribution tool and execute the following instruction to start the program in terminal: +Supposing that the above program is in a `test.py` script, unlike what we’ve mentioned in the previous article, here we utilize a OneFlow distribution tool and execute the following command to start the program in terminal: ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py @@ -112,14 +112,14 @@ Illustration of data parallelism: 以上程序可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将: -As can be seen, the design of Global Tensor makes it easy to modify the code for the above matrix multiplication. All you need to do is to: +As can be seen, the design of global tensor makes it easy to modify the code for the above matrix multiplication. All you need to do is to: 1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) 2. 
模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`)
-1. Partition the data $x$ on the 0 dimension (`sbp=flow.sbp.split(dim=0)`), and distribute the data across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). +1. Partition the data $x$ on dimension 0 (`sbp=flow.sbp.split(dim=0)`), and distribute the data shards across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). 2. Keep the model parameter $w$ intact (`sbp=flow.sbp.broadcast`), and broadcast it to two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). ### 模型并行 @@ -147,69 +147,114 @@ print(out.shape) # (4, 8) ``` 假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: + +Supposing that the above program is in a `test.py` script, we execute the following command to start the program in terminal: + ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` 模型并行示意图: +Illustration of model parallelism: + ![Data Parallelism](../parallelism/imgs/matmul_model_paralelism.png) 同样只需要修改以下两部分: +Similarly, the modification is simple: + 1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) 2. 模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) +
+1. Keep the data $x$ intact (`sbp=flow.sbp.broadcast`), and broadcast it to two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). +
+2. Partition the model parameter $w$ on dimension 1 (`sbp=flow.sbp.split(dim=1)`), and distribute the shards across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). + ### 流水并行 +### Pipeline Parallelism 当神经网络过于巨大,无法在一个设备上存放时,可以选择流水并行策略。 流水并行将网络切分为多个阶段,并分发到不同的计算设备上,各个计算设备之间以“流水线”的方式完成训练。 +If the neural network is too large to be placed on one device, pipeline parallelism can help. Pipeline parallelism means to partition the neural network into stages and distribute the stages to various devices. The devices will execute their given stage in a pipeline fashion. + 以两卡流水并行为例,构造两阶段示例程序: +For example, we build a two-stage program for two-GPU pipeline parallelism: + ```python import oneflow as flow P0 = flow.placement(type="cuda", ranks=[0]) P1 = flow.placement(type="cuda", ranks=[1]) BROADCAST = flow.sbp.broadcast -# 模型第一阶段分布在第 0 卡 +# 模型第一阶段分布在第 0 卡 +# Place the first stage of the model on GPU 0. w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST) -# 模型第二阶段分布在第 1 卡 +# 模型第二阶段分布在第 1 卡 +# Place the second stage of the model on GPU 1. w1 = flow.randn(8, 3, placement=P1, sbp=BROADCAST) -# 随机生成数据模拟输入,注意第一阶段的数据分布在第 0 卡 +# 随机生成数据模拟输入,注意第一阶段的数据分布在第 0 卡 +# Randomly generate data to be used as input. Note that the data for the first stage should be placed on GPU 0. in_stage0 = flow.randn(4, 5, placement=P0, sbp=BROADCAST) out_stage0 = flow.matmul(in_stage0, w0) print(out_stage0.shape) # (4, 8) -# 利用 to_global 将第二阶段的数据分布在第 1 卡 +# 利用 to_global 将第二阶段的数据分布在第 1 卡 +# Place the data for the second stage on GPU 1 via to_global. 
in_stage1 = out_stage0.to_global(placement=P1, sbp=BROADCAST)
out_stage1 = flow.matmul(in_stage1, w1)
print(out_stage1.shape) # (4, 3)
```
 
 假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序:
+
+Supposing that the above program is in a `test.py` script, we execute the following command to start the program in terminal:
+
 ```shell
 python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py
 ```
 
 以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。
 
+In the above program, we simulate a two-stage neural network with matrix multiplication. Different from data parallelism and model parallelism, pipeline parallelism does not shard the data or the model, but places the two stages of the model on two devices for computation.
+
 Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。
 
+Thanks to the neat design of global tensor, during the computation, all you need to do is adjust the distribution strategy of the output data from the previous stage via `to_global(...)` so the data can be used as the input for the next stage.
+
 ### 混合并行
+### Mixed Parallelism
 
 混合并行是结合使用以上两种或三种策略的并行策略。
 
+Mixed parallelism refers to the combined use of two or three of the above parallelism strategies.
+
 OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式,二者均可用于并行计算策略。
 
+OneFlow supports two model execution modes: `Eager Mode` and `Graph Mode`. Both modes support parallel computing strategies.
 
 - `Eager 模式` 是 OneFlow 的默认模式,网络模型继承自 `nn.Module` 模块。
 - `Graph 模式` 需要自定义继承自 `nn.Graph` 的类,并对 `Eager 模式` 的网络模型进行复用。
 
+- `Eager Mode` is the default mode in OneFlow. The neural network model is inherited from `nn.Module`.
+- In `Graph Mode`, you need to customize the classes inherited from `nn.Graph`, and reuse the neural network model in `Eager Mode`.
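Whichever execution mode is used, the recombination logic of such a mixed two-stage setup — a data-parallel stage feeding a model-parallel stage — can be checked in plain Python. The sketch below is illustrative only (no OneFlow; the shapes follow the $4\times5$, $5\times8$, and $8\times3$ stages of the example in this section):

```python
# Plain-Python sketch (no OneFlow) of the recombination logic behind a
# two-stage mixed-parallel run: stage 0 is data parallel (input split on
# dim 0), stage 1 is model parallel (weight split on dim 1).
# Shapes mirror the 4-GPU example: x 4x5, w0 5x8, w1 8x3.

def matmul(a, b):
    return [[sum(p * q for p, q in zip(row, col)) for col in zip(*b)]
            for row in a]

x = [[float(i + j) for j in range(5)] for i in range(4)]
w0 = [[float(i * j % 5) for j in range(8)] for i in range(5)]
w1 = [[float((i + j) % 3) for j in range(3)] for i in range(8)]

ref = matmul(matmul(x, w0), w1)            # single-device reference, 4x3

# Stage 0 ("GPU 0 and 1"): each rank multiplies its row shard of x by the
# full w0; concatenating on dim 0 rebuilds the whole activation (4x8).
act = matmul(x[:2], w0) + matmul(x[2:], w0)

# to_global(..., broadcast) in the real code makes the activation whole on
# the next placement; stage 1 ("GPU 2 and 3") then splits w1 by columns.
w1_a = [row[:2] for row in w1]
w1_b = [row[2:] for row in w1]
out = [l + r for l, r in zip(matmul(act, w1_a), matmul(act, w1_b))]

assert out == ref
```

The sketch only checks shapes and arithmetic; the scheduling, communication, and gradient synchronization are what the framework code in this section actually provides.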
+ + 更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md) +For more information of `Graph Mode`, please check: [Static Graph Interface: nn.Graph](../basics/08_nn_graph.md) + 此处以 `4 卡`混合并行程序为例进行介绍。 +The following example is a mixed parallelism program for 4 GPUs. + !!! Note 分别 **点击** 以下 `Eager` 或 `Graph` 标签,查看 两种模式的示例代码 +!!! Note + **Click** `Eager` and `Graph` for the corresponding sample code + + === "Eager" ```python @@ -230,19 +275,24 @@ OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式 def __init__(self): super().__init__() # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + # The first stage of the model: execute data parallelism on GPU 0 and 1. self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + # The second stage of the model: execute model parallelism on GPU 2 and 3. self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) def forward(self, x): # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + # First stage: the data is partitioned across GPU 0 and 1 for data parallelism. out_stage0 = self.m_stage0(x) # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + # Second stage: stitch the data together and pass them to GPU 2 and 3 for model parallelism. in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) out_stage1 = self.m_stage1(in_stage1) return out_stage0, out_stage1 if __name__ == "__main__": model = ModuleModel() # 需要将输入数据切分,用于数据并行 + # Partition the input data for data parallelism. in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = model(in_stage0) print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) @@ -268,13 +318,17 @@ OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式 def __init__(self): super().__init__() # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + # The first stage of the model: execute data parallelism on GPU 0 and 1. 
self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + # The second stage of the model: execute model parallelism on GPU 2 and 3. self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) def forward(self, x): # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + # First stage: the data is partitioned across GPU 0 and 1 for data parallelism. out_stage0 = self.m_stage0(x) # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + # Second stage: stitch the data together and pass them to GPU 2 and 3 for model parallelism. in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) out_stage1 = self.m_stage1(in_stage1) return out_stage0, out_stage1 @@ -290,6 +344,7 @@ OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式 if __name__ == "__main__": graph = GraphModel() # 需要将输入数据切分,用于数据并行 + # Partition the input data for data parallelism. in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = graph(in_stage0) print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) @@ -297,50 +352,75 @@ OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式 以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: +The above programs construct a two-stage network, whose `two-device two-GPU` parallelism is illustrated as follows: + 模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 +The two stages of the model are separately executed on two machines, which constitutes pipeline parallelism. For the first stage, Machine 0 executes two-GPU data parallelism; for the second stage, Machine 1 executes two-GPU model parallelism. + **运行方式:** +**Execution** + `Eager 模式` 和 `Graph 模式` 的运行方式一致,假设脚本文件名为 `test.py` +`Eager Mode` and `Graph Mode` shares the same way of execution. Assuming that the script is a `test.py` file, + 1. 单机四卡启动方式为: +1. For a single-device 4-GPU environment, here is how it is started: + ```shell python3 -m oneflow.distributed.launch --nproc_per_node 4 test.py ``` 2. oneflow 分布式工具支持多机多设备并行,以 `2 机 2 卡` 环境为例,启动方式如下: +2. 
The OneFlow distribution tool supports multi-machine multi-device parallelism. For example, for a `two-machine two-GPU` environment, here is how it is started:
 
   在 第 0 号机器上运行:
+
+   Execution on Machine 0:
+
   ```shell
   python3 -m oneflow.distributed.launch \
   --nnodes=2 \
   --node_rank=0 \
   --nproc_per_node=2 \
-  --master_addr="192.168.1.1" \ # 第 0 号机器的 IP
+  --master_addr="192.168.1.1" \ # 第 0 号机器的 IP # IP of Machine 0
   --master_port=7788 \
   test.py
   ```
 
   在 第 1 号机器上运行:
+
+   Execution on Machine 1:
+
   ```shell
   python3 -m oneflow.distributed.launch \
   --nnodes=2 \
   --node_rank=1 \
   --nproc_per_node=2 \
-  --master_addr="192.168.1.1" \ # 第 0 号机器的 IP
+  --master_addr="192.168.1.1" \ # 第 0 号机器的 IP # IP of Machine 0
   --master_port=7788 \
   test.py
   ```
 
   注意要将 `master_addr` 设置为第 0 号机器的 IP
+
+   Note that `master_addr` should be set to the IP of Machine 0.
 
 ## 结语
+## Conclusion
 
 并行策略的选择影响着训练效率,框架对并行训练的接口支持程度,决定了算法工程师的开发效率。
 
+Your training efficiency is dependent on your choice of parallelism strategy, while the development efficiency of algorithm engineers is largely affected by how well their framework supports parallel training.
+
 本文介绍了数据并行、模型并行、流水并行以及混合并行这些分布式并行策略,通过示例展示了 OneFlow 针对分布式训练所做的系统级设计和创新,以便于用户轻松上手分布式训练。
+
+To sum up, in this article, we explain four distributed parallelism strategies: data parallelism, model parallelism, pipeline parallelism, and mixed parallelism. Also, we introduce the system-level innovations of OneFlow that allow users to apply distributed training more easily.
From b2d43ff66e366299fd4e592296ec6f6c4bfb8173 Mon Sep 17 00:00:00 2001
From: Eco
Date: Tue, 30 Aug 2022 19:05:13 +0800
Subject: [PATCH 15/25] Create hybrid-parallel.png

---
 en/docs/cookies/imgs/hybrid-parallel.png | Bin 0 -> 60030 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 en/docs/cookies/imgs/hybrid-parallel.png

diff --git a/en/docs/cookies/imgs/hybrid-parallel.png b/en/docs/cookies/imgs/hybrid-parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..eeed27f79a9d723b176bc59b12415772028a8ac5
GIT binary patch
literal 60030
[base85-encoded PNG data omitted]
zbQlk^BbgVqo*4}Z@>zEXhp!5cW@3O74?N@!iJ3v`Q5r>J~#n ziiGu%yMxKGOPr6_XW1{MF3^+z;f{+xvep#~4m?WDFV3j-%&_!Q7^aa6&((`+GjhUA zjyc~^5m6_QAo_YPVL^*CJfFj!cPTBPs}VRrbp55+z?sJ09#c-oWceSx-s_mUL;S$5o!hM$|WP9^B?<6b&$Gb&}B^{<> z#zerg9pG-7+5b#8nP1Hyy#Yw9Sw-^UycCRH=Tv%=n6LCtBA0L%flV?0GJ7V|oRLDO ztmT%p0&+ajD53ASJu#Xkj1(_Gh3I18HJahg1h-&jYx9}Cu0WP`1!s8q zFJQ8tRwwVuW*Jf7{?l>DK0&DxFm9iZifY!I0d3Z?kj9{xP(5y9`LO@4jF)gY!fDwA z+W7_zxEwZHb`GVV6;Zu*g$EtfV!3+RVkx1kqV}{dtjT>}PuBH;+bKW12Di(cAAIKM ztT7Ps&zQxRh_hwnYI2^^l1=n;B@26v4Xq;YZsDTRJ9s=??0%{^mOPFMw)0<2a`j?$ z`M4&^93ojz{VN5}p7Y||(GA2qrDDuqh-Q7V=R*);AmOUQlNgGYVZA16k$*DD?33>v)0jA6TT-=(E@?sje!y zvLT)Eh0d7As43uO1-WzSL|k&te^tn)U|FwxH2G>Wu+1SST3js9?4>60E71LGd_x;jHgDej>& zi_?vb6KoYxuj3ypi23S10l&_I-~F{uBiCpGRmrAz<;7zCZgu}nVbdSKOpv0r&&jH1 zqLlXa8)%v-CxFcC3`eS+_Twe_X}Xv5o;@if!4qwLP}UR$+(EBMvR+%J&0;f-W9)uw+uuQ^wif)cjvD$TehUx< zbGLPNe>r9wm_ zRp81&%^-*WF8l*0-V@3cbA~LgGv^H=T&|W?}0ipCkg`u3R&Fd*Ay%3cA)0|aw=MQ1lB%h(kM>Sdip>? z_BBxMWV#H9>kuUfU-^J~ryf**T0Q=&D8x9!d{tbktwjv|SO3l}Y$9lcIM^~H*_v?a z?&E@92hN)R3y(cL%K(8a^0nd@>3_z{vDXi^h12z7fm=XS7r6olVSfPy(e{5_!ONVq z=}!w7&Q%M%07{`xIgoAe?|*G!gGT@N{ciuCXq@8%T4hT+Ha$;*BM7n|y%sLgi{cD{ zTUnSQg(OF>%C-hGi>OzgV3TF}v3ZI4D{Ps%dLuX?R6`eFIoE)L(l2~#8zj*}1E~eieM5*~BPyl7>qTV9`MABKe>*JrHQ{ICQAOAW%9_ zBb^bY&cLEsP=cs)LTtsF#6A3?IH`W^a2#WjKy;2&Yr`04M(G7)3#79--yS{xh=Y&v zaR7J(9HKxS#3MI)Q}jzu)FVPA3YjtxF?3c};3{gi(sFFvt-|4Lb6bYY zCu{imm7vP$Srrb?PzzPUd6Jv(s2}8kB0&HA(7J-$#!&}f^cy^C;U}B`F@qBzod-f- zL97DhbISnf$CmyGHgSbV&OH-w@Ud;W=WBGiA$C$!3U;Su$2nLdWY5n1@YzbBA{nt9 zEwX`!@aV5qG}MiNFtwyBw-XN!g4m)#I=!W>c_9MK%pGWC5xBquj!buw5CbAOraAVf zfD79GAGg%j{QB(Bo?7^T(bkaY`~-Q@510UuC-VU2>^0>^@g1*zMc1n?o5{wNz~KN9 ztMPKC|DsF8LK&5h)7tnj^7JGD5NXlVKe|^^7ox+ez!Mxjwkg9=js9Ii(pH1ZLQjTT$P6ZSx*@d-r{tibuaDtz{DeKG7t zm|7jl+=?eU#bh~%DGB2V%~80bAIM!QY5~|7P(0}N@dASJ#m*V{-CvkILf%(@rIYx+ zL@8N#55hKun|@p=SG!am*1TIUoJ+KZt>mEut$bwZ^S6C=;3*Pl@Sq^pe^coUH%9(e&N%RJZ^C2OUB(GD0|oY)UA~F`|r;>`ixM6A{^=Qb{62 z*@R>7OZKZGmi(kRdqkBaahgvJu{~3vMvCy`iAWaT)*W^99&$(Sl<>ye3-*om5lP 
zqyf)(?1S2Syj)OM7ig73hoFJFl_U#K8M@k~uaD8sN`kOB(E!#k#~>Q1vF9Y1%#zn< zwX)`PPiE}G`0tMvG5(#G8oL8(MEQ}Hd{x60c(4|@`?{XXn64548^+y8EV zeM-80GTX2Kh&c}2SHrID?*YNfa;**=n?mxBfiH@nC*e%(h}T?Ypp8LzZ*V#Afkj+4 zO_?kKQrzn?8YvYuZS=X)c!6p`P%}HiY1p0UK3EK^9yEH1Cik1bT;x-FDN)L*DhHIO zh*3PHHRLhf-hk9cZ)LsjYhZD4TsaCsYJdI{gmN8U5w$Lrht6f*GSgw8nh$FIknR*2Fn~50J+Io zHW&4kqQXmo1C|GvhY*c{V?R8VTYuxi&qD~_P&Y~KJsC)c^RNN{2#~aDqi3jzbY1k# zQ^arkA-ie965P%}idojd>DoUEr0e;O2nrjpl(Wd-0r@Gnm(~<{@LnOrNL#dWg25ZK z_K`0In0ye2Hr~BgwM^}EgNOD$Oe>Q|?`ZLB6;vaHnpU^^H5sOErUFkHKPp#2@zG(m zT|%3J$O$0xryxEv>gJLPu4hhz?u1$idAxzC&UXhOw>4XUVO;Z$;S92M_?{N4)$aQ9 z5z7Ej=eB1t7^;p^gb7XaE2TKfwd&+jTRD{-k09yhfV4uG1ViqDf+hDVrS z>a+6YK0+7+puzV&{MLCy@px_VS zi3v*rd`Xr5b|J-NGsWSK1%6vxn#C0n&KAG)yzjB(+UYc!MNFAlyJcS+E+NmKCE$e& zys*7#@xE=uzpM^BpB`i8nycqFOzx@;`&xW3*rQB|9wI!bgi+3gcWOj3I zV=$YzyTa5lReGe9e8#3j^49wfqQI!Iu1$ zT)CD*8-V3jF!MS-A@>(L++H`cMG$%u_)KT_cYe=6KJwWTxecn9JpCZjaFMdUj~p?- zjM6SSW=AJGSnDmb+E-FZUACj?7_WOfu>AAWuywvWX_JT{M{u44kGSWu1=0KuusN9}tKBd1iqzt!Q!Ow*Mgw(Lk zSW=LIz*IY!k~Idf&zW04$;{1F!>gUZtc?AD0)f`L!ZX0UNF56wD2`Yi`$cV_EjDt-UK# zWt7j#N#5+DBAtlNA;+n`%#^>(ho`o4>(0==EvLkn3kCpH$I=6jxp#Hmo~?@#`wKgG zxM^KdmXVj0msS8yux553r8mp02c9u9*)4w)M!SgX?0jLNOL*J}dF)2Dh|qsR>e zEb=RVv+V-0Z0H@M+Eg}iM~%;Plks(CK|Yn#chh9jVk0xgP<~I%vTQKLs%|-tJx)cH z0&N6V0BB0x=lePw^P_W_!%4`<`#_pAPBmhb-u5S@TE3U*|e1?B`6e4 zD&QCy78S_!BE9m@kBbX$Zil(_H@NL-JxWT@7m5s|;rg)Ul3=G$2~TXu^GInRDp$Zr zfo`)r3^=%^CYaQPuH1?Q8UP56;e888<*;lWy)2Gzmlh{qmVRBcEINu|Qb!BO@oqV7 zK7DHP^yeBo3F6TPY1L6$D;lxlyv}uOGRzaFDFWvI5NT-e`U;W`ruw{ZsDAz2)^7-@ zT7aMVqArkJXj<@XSe~h%L3VgB}HRO zJ2Ss1l4ZJIPa>VV%LH5<&%yASUG&o6R`12{RFO>c3Soqjnl>DmA_#u-e*x&i)+JkL1H6a@k)t*P{iRq+C1%Wpv!+w4J zjLGyY8X!%Hd}Mb@I5S=A#dAA?5}!rSB|{$ho8O~$5`R-Ku#4a@n)luIH0hG%KTw>A zyQuic0*mppsHDpAS^&=41l4asteEJ{kc6(Hp8sxPZ!_q8AZ{Ig!A)Pv<2j2@v)Z+| z+VIc_D=}?4T%uE<_tUKO@2{Z49p9z|9lYysx$XkX9Tl&je(Zbkqz`1Drg28j(2LAV zR~i>tH@ZHMp%;EPsf@|wZK<5vtNigvZvp<0uELue0=$pu-PyIr#(bgq&|LVA>WG`H zetXJ_n5@^@-MQZD!=p-rQ!jIke7jHmGS2GkDEicVZ2bXY@-&-H@G)jDK!#Qi1q_9k 
zVwCt!#4ABlEy~AXwmU2D@#jB5-`yiXfFUt`jj!+WQE49MI`IKHpySB*$DOkd%U&lM zP~0%}bKhS#e8)&PEt854@P=#Ee zWM+hpuh@F_5PPN^H~C&d_40Y0Rd;k?dP1x6+om=a^P0w-Jx{TiY9-FNg{tbb?{Wi0 zUa2Y8mFZ5DV+Ymg+IZ=EcZ$;S985kvq{+m&xc5#Q0Tp;#g2ROY9)drrreko_+urXIj?kwT`q|IPSt}@ogZ~K-| z_{dwqpy=LItDUuGy)$)BdTDW+mLixARfSmmYd_-oNiTx3*4_WGe9Y@yy8Z+7+*c|Z zbuOkIi_dpyc9&sRo=l~{YJ^{6af#2n6YS27J_ykMZ9pJKC#OLW!4dW^>N z55H-!p_V5|%ci`Hl8E;u95MMR?sF#9+nwxB&}LN69HO9-#Q4`Q4UbLzZXcz>ckc;1 zl!iFfn`K0wy3uH6^=SIOEvp?RRo5Sa)iXUh_-UnBB|eh%eBChj^-n#j;9ckAWk&6q zovo+(Cva;&TP91%-pDm-<0_vrjAgB#xN9@yt*L?5BxgiX^5ae8u!TOPSO zyE+5*wgfBNQJv+@+ma>DdUTrG>sxzkB8>P_KbOH|oO13pKe_JGYmGPOj&$+vVw9Mo z6Ta?Fm(GNN7lwd6iHZSP%IVvQG(u{8Lg`%BzT9~8dd{be9xk_2^0H~JL|6gvBAqnL z^Dcvv-eYQc2~mYyZnoki=I7U_CaaijPXtp{sVQf>(6E&!ukhB{zv9w#Qcy5nqSl@z zNb=}X2VwvCOvh!ezdmgcQv4y;-FDQ+{)JIAW2lXsmU~R6ODE0;%T0JKWM`7LVIDLf zi7`ujWmL)-oQV~huS<6qT3)$r*qxHT8EpCmGx0d*NARNazS5Yz-Ls2l`LiF}>8+mJ z(sXRp2vS!E-EM3DNRDEPT2bX zTyL>{s-eKQgScg)Uuzg@TK6p-{G{)aRmSjMaSw0W^oOdLGz@D>n;CsF)8aRZ(?%GxZ+K z?f(7q&FrzNc;pbzxk#||ZV_tk_Ku9b*jCwg?}zG@0c#NvJoA@D&anMsGPYy%I*IvQZ|<??iBW1c`sTvxd>sp^6e}HkYk@J|hLgIMyszf=A|yv|${PIAr(tui4aAsBU)#-$ zrQXZ?aHBP~NT?-cs!LTRigH83J565njb;8Z?^&P4gVS~lGcnGQAxG1qgWsQUOS+kq zbWV|Z9QWS#rmrpCV!hiPzTTMk`tEx`I38w=aGN1Z*((Yat-=H>PoHZPn|%6Xvfc15 zheuE`E=6Oq;lyRJ3LcdPh2@o!?yFH*m!g(SLu2VB4|n_OVq3a=-GTX0C!{cltQNA{ z(?2LRyt;GbwR#HXYOU()>J*F9Z;t)P+8XQIlRZWVw1^Uto2FfSyQH_s)jE}AFs5{C zt?p{#wKauk|D*nu_~Iai!02k*2X6x@us4maq)+(pzD|7bJ9cp?xxW5RSdlShGUjIb z({)}u%*<}Zq0Q^cmiph&9jd~DQ}<&@oUN4d(Zt2-_s ztVf=|cYH>@A>>DN+D-)TLBkmPMXZn)^iupBdXh$_I1dFK`mSTstB|P5CSjkH{f@?6 zkQkt+?$w=a%=bfh{NB~guBq#1oD~u;%c$pLuBiLC^_p?i%YN6%Bt-9cO&6AwsB?T+ zH}LYC6uuTG(X2b#89@LPxbo-}{%(hssIIS0<6XxLwRSmZ_Vj`&?{1^q{?8w?yGbAX zp)1CU@r|}=ezh@puxO1fp{05zp+daBwOVP^$wPId#6flUy$_At8(KfSeQe~0F(2*T zz1jQwXV*^bBc(~@C3fC5`^i9?SY71Yf*O^43`}X6N`<5nOxWeY=wSwX+45+!ET8(8 zRiXCwBd@7?Gc(?(vDh~g5oYR?fBMP56|B=tfe$;s1A`aQ?)NAC^5bBQ=0~)*)1$aV z^HFmBT~CbDc(#pBj=@>S{dt*?T{0;Q&O;)gXUs5>HBR+7-lxL*(t47n-y`yxXnGTV 
zR>R6eb&NGs2{v~L3Ixupqwe05Vu#*N5Wt@5*mh%AOthmru=Gvy8xiU@TqJ-!E%U>E zV?^IZ>ya>4MrLk_jC3+7Q!x_iZJJ}c(Pj-#?odg{6Yp&cveA9~{QtQC22}r*IDFH- zeAzKFk+S{1VAD!e8PZ9Dw(;e+4>kvG{*wtIq~x-X@iFeToR^8V#+R`6euj~G#5%!{ z)u$r_o2;U3yk}SFC%|D%lBNXuy6{<}F@lc;wlrsaMNGAq>23s!sSDaitF(5ef%;JN zaL3dP@Q~&~UwtoT&N(~jD(^!|d_ebF3MaF)S+)D~jn4~D})w7oSTV-&A;_UT? zSxinbvy67?=MM&z+%oL~Z#oQ*`fjRPWwo>gTmsa1aV#%x<qGb$h5GD+M~kU#@*?S- zQb#~q{5|~J+w*O{zin~3Q#ZR5X%c*A3>*{hXfn#r6l(=}`05GXZO?gD_Gv1H|E^{d zAbjN&gLazWWifj7PEbjG+zOcaKY^p+1E)$Nkb|a0iPrgI-uaViUU-_{QP=X@j7PIC z7a#RH`hCDAvB!50hMh}WazpKgvHu_(y#}G|UkrQdrLpj&?CQfedU#Ke*|;~vf2E2e zdDhd$9X*y4x}VR+o_Vi!L;qpozK7t<0Z3~qW4RCBP55D&8)9aA3w0Zn5~*XJes`h3 z)ir2p=6n{5U+HMjq!{*`7JlB_I{Pj9y@B>H@L3jmzYwnY+`5b*#X0N@bp#kad_rDV zCmumw7DwN19Z5dF|3`Jna4&DrE@^?{gWqYTVgY;%_ttj$?sorS*(ZL)v+_aCye0N` zjB5pTc61*;BA+%|O-n3YQ+b(jZq@HWVS;*3;+g?cNwJ5{;bk;tgAAold*OX;cID+2 zf;bzOo|Ez|q|%;V^Jte6pj#!oj}_h-w7JMMF}=;$zBR`rKfRPx|70^pfm9C&oE@6-_Sb4#fU-TP9uou9EU*n&T^owsb{E1GjA>=9ih zWuK5sP^Fd3s}-Lyj9SNrAFK*vR-QUF1gW0EtWdDUn|3H{K({&6d#cn-XMG?lEoRp3 zu?iO{a`sO4-aZz9IsDdnnZ^FYOtsB!dINKm(n8S7u_^t68&`$v3X`!3#z7u(>|<|7 zX6~~E+n-hlx<|8ko2Ja&_`lDUE)WyRz(&7?ZNswK3;~6{2Ery?E|txw5~vuYP!XK z8%%^IgO{GOm%n@+kj;h~_!9Dd*YPAr5QQTRqjgfxU7t;$#x%*weF*EB7cFi(RlnAh zqV{>tn#OWGy^UKozfJ$h(H38QrvuMe-`?#XBXx@Z-MyGh+T&i#L zqO>?Q=YLp{9j!5@;V0aon1;5JtrLmUjqm?RhYz?)4z$hV#@PkG+@tI4Kdvj4$I$Q^vFUjE#ez6uYKzSaC2LIkRrSXl`8E^7^~Szz45q|=tNqSo`s5Auxgf(pEWGM- zv?i!e9%k1h5F96>@5sHb$PhG`X%A@^ubQ-bk$a=&k@dsU4gURm`D)%!5~&a~o6r_R(47_pDK50!=z zpWG>0tofrss8;WS-%xD$cK61^nk^&N;Cv zyhv||b)9Uq?Aku-cmm=nq+BPO_U;2*w%vxWHwU+;vloxU)g;|jvnk;^BG~P4W~*AP znZ0Hh)P1P=v3RXQtTuZT1mv@RdIj{uhLXAhIBng0avC0`MG4fvc#*&?c$gUhZs;jC~gXHdB9# znHY4gcPBA`-8sU-el?m_g3D_m10(CPKi_hvfUU5CSd~gK~Jj8x%9wH+|vGf(>6{WumrA0fChhYY0k8 z9|ni(``WrCxbJ?i`ERCZ?t#k*hAvTaE)`mnJ>*AE9N#weJt=X4Hu@ILkGp>Xp?Z}l z8I%;fu9HR{TUGUNQg=^-#0 z#LTPTK0WDUpTpSZDbS6XfQjy+o>%jYJaK;&+_hv}4S^}}Y3Q}jI-CTm?QVcg>5YEh zCV_)`A21#>#kfM}MEQ=KmW!17U=FK{uG1itent{a(9dpChZAwcsgasdO|o$4=JZ0q z0ox31vcfy%c 
zw!pldaxShJo=~%A#FKr2wl7R9asB-ImD3>X|XHF(+JK73JJla8jS6mFa>F`80I1 z-RSS0a`TR|^n>mc)!(IzLtY{RLPjP0>Ur>;3`vm{;8OW*<7hJI_H#J=wtnXQDJ36= zhWFh)`7($8|B!RbTh=$Anym%pZW-hOHSNKWSt5kr1oko(c;a~JtT8{S%d!+csKp!NPbk>QfZLet z`3vUteznHGQhC-iZL~uAXj)o-lI)ZrXwW{y}Wh`1S7VRKtS9P(>kFSPlcfLwjrxt>rwG%hvMZ z)Ht2@QgP&~RQT5z(;82g%_uc6V)*GQpK|dbTp}hUSDsjNX!jvx8(gr8zO}uevwt;R zEowf5JB)wX5t#*ODGZUQm__l#gH%=Bh*s*%H73GZRLh<^{4L-w^Cp9E{5qP^j?kq~ z7LL;TxB|QfM`)<#nHAh&(0UptENWEJ1Z}Ct=V-u(Xe&zNAcv=XrYqyge!1?kk2>f4 z{(9>7WW%~s(~Caz0xx60JD=6&`?FaY1JIrsju5~eo&A}trYD_ck*-w>hDEixCY9oo z`{sgYbEuFZ0_GQs2j&-9C?dnu?QIyxNN{Es>89NI_U0BSLSJO_r?W=uB*!K}veoMt zFg3%32L!h+qh1}NPy{phG{O9GuJu}XEk4*`hTJnI9~IC}kZ5A+QOk(%6! za?~jn3VRpTUh}`o?yNMbVww-&a8P>_ilpmRjtj89FbQaVh7Nh7I@)zZ%>R3F*tg*3 z`)jP5vgPRye3uIy2c=@#a}|8WyUG>^N?Yr8W`$yc(F%$?TRBS>G5GbN-|ungP!|IU zKzh|%T)RsNYXO{rU+Ks|hbQyH)xXH~@x@^zrEh=B5CiUuKafj5sWC9eFg%IDuJYX> z%RnwDfjOuw+BzdHJn40aogVg7sTp%lLSMSpXMib+!4rnJffvasS)dM#x}-|0?O=F$$Tv1K|a6=G&uBg7eHr$x}nHtZ9+s2An5-kmdMfwgC?9 zyqNH@t*Rt8`o;d@Mk%ME>1bEOE0J>b5<>lMbG_f0g$*o_?oId{HwOJqhBE1&C+NgR z3FCYU! 
zbgF(OL85Z^A}oj2U?WfpY=?>uV#V{{Yvv@@v-`D4rsAjQrvwwg4zfD9#4GJf-}sg7u|=U2l#pq5{Aomid8V zyJ4yx_UECTAF!cnSq!54mG-usls#j$OOq}Ec1WO3^Z{AG`Oj3Fhcu@d@KJMZI#Tk|9NVFYZ>ESKL^W%c27!U$nGLR$A_fH?beE|{&}t<=eg6@jEah3XVt^r(%SAb|smU7dnyU)t3y_FlYeUI^4^QZ|A!nH^J}&=!!PTBZ}lP_70;@ zc*@-R~^& zYeNC%=3x98dI%TYTy&btQ;CBAR{MC(TBo{G+2u-3Etow30JWK5{(Jb2?hYQ_8t|9i zj!9UzLQC3o@*&!D^@=Q;g`K%L?gm#LQF~cy+jZI)HfbTsoQQh05Pb_b#Nq;`;$j{8 z1x@FFx+}7bF8qZO)1~2R!Q2C3{1$?zvK5(rUkgq!P7eTvgCnv%q+C(E&jn7)_GkKl zzlsb(xXWqpUq)jt{Ji2@l9gf>f~X{PX^G>PUr-jO`r&42SM&Y%TGz*##DG_r6%QMf zUZty547(O=Av^x?M$7}D8XmV;U^F2^q`bt(Yz!N~tOKXnYV(M4zg`P};MQ$`>vbO( z*se-$(XF5E_yb(Q5#3*%orSOs9}E8XDg50h!hz=6kD^RkmI)`EYrtg68fXoz#DjZh z+NXg~Aqxi5d2*_-gdSFe_XO0?I~{F;-C?qhkHN>_%GK4tu|N3N;d!D9Elh?ro-26W zO7zjeeBG>cllNYWnhr!&>IRlC72kT;;IpQc1MQ=}iRMVt88i3GZO#q-VfJ~IUy*q- z3mp^KL;gk=CSM`-ge}0}iqjWF&k?-!B^&iKm=&duW3La^yH2VhqsIriXm6)hM-BbEN-1n5I=Re54AO;AXJjK8{`+{HXqr&`q{(NVc5Z0O}Nr z=X#a}Mgsh9ci|W~bI0y~(@Zm%C17hz!SriXTXInq81588YNJKs;EYt2!{*{}p5;=B zZP(DG(S{pT?R-7;r7)8743pyowc=RdYnn>$lG4dt(BB@0J-r{y%^3Vb!D_HK4*J-P z2U_)Vv(zci%%G3|ee=pwc)dIuRr#?Ua4l!1#7lT59UX^7s~p9X@!S)!W)QhIQ1hVp zEr~QJ*cL$WsUA1>2`E=}IMXV(LZ|`;kdQd<)@iieG*xQelb5ha1T7k}M(BNion6SV zC~XhIi#&K8ku#(RyIg=i#uYZ>LAm`ETB84_2%Ec0Lhb^Z15iv;ry)zEj07*iS_O2! 
z-cg^YtcA+?KJ@#Imqu#2W8hPro$g;&2q9lpfaJKaLn5;z5jn-wP-_)|1z*)ArrXT_ zt|V)jlujfcwy;=ay~uXnX+EMuQakD44)|Y8c?E&}aObepg#c&pKY2@ISm*PgbDsss zg|@tEuM1$$Fs#16@V|%gjxwJhGdi;EFELqX5PvkKa3=^$$f-ES>agmMp zo->dO6J2ZLNTgxNe(DRobPvp32H=4VX$hluQc}ZshT5vx2bs~qX~tKyJTzSo98TZw zK7h{q?jbI)9LlH0`>bbKO5PolCth%Y7*Hqb$OpzS9|9~H2abf$O1$>0yT4$>F6UPx z>Co36H9&LP@do&hyQ>jHLjuEKn>2pODSPVWt9f20C<&RF- zkn>zHUI9M}bhflWgml;=jleT&9(hj#yZ8O@!KbD2Xp4oB+CexF)$;TSqEEJ(aDpGN zn048+1oovt$RV?XfUf%wIM_-uF6RZe-MpC}35V}~DA<}wC~qp%>ov{5k8ZlRLV@#B zjp9nm;;!)WwVZ@Qvnxk?my3l5%rs+^4!|J$@zoLK!#bQs%8bt#1wQ(Nl`4+Jdt<&3 z@j&47VtV3T;bGq&hxz@|BqMK%O5*||Af;HvnMQFr0ZY}@B?tAFy@55=w_$dRg z1I&p<@jJ@!lf^S!QZ-K5U(0wSFN-NI*#6*o5Z$0)07kGwn}WK8io}cH+Wh2%Ysho2 zI-sWM-|+zRA*ZYewj;d+U7^9o8>I?0q~YMu^JDK_3t2Z5`~AQxA7>N(?Y14cs7=rj z(ATKc&o@Ix@0&&;n}uuj`4AE_F%|gNTQFP@vh1D@F$ME5sK9Mxyh4SSi0*FQ_FVli zepLSNhr2Ui7#}?j)=Fidk5?iOi$3d#!VPL+8C~3gJcU76MB;5R47E8zQGs-m^&xlU zK}=pkI-2lK3*!nlPLn$>evrC2SmmBAq!iv|Ws9dsOS1iax#5Y~Cy*^)edr9;o4j~e zr+Y5J5Re!XV^%2rzbDF#?xl^LI_~3%4?*fuR8*Ei<=p|XLvb_Xw(VKGCRI0zj>wNE zFLX0T2IlGF=WWy4(YGi41PmIL#eH9QI_~mkzYT7X6ONYvle(Ke{^E zI_x1w79NF8-@W^DAGs?R zc@fPr8%hcs`*|m~QTN9eqsNl8ACR;hYYwenfR@eB>+1v>gVJu5uIq#i-8aL)FmYPi zJGfDN+bjNG4hPO?h$MvT+Ro<)Px2VM-k6?Mp_`n|5XJX9qEGueRE?&S86BxtSq4}D zdqro1&EiDX&V%8$K==t51xXEnp3z~GeIi{Idj9nAK*PL6su3% zK0z;1tMzLYP_H4YeHZN)5sQH0Jy+aKsY1sAk4fi4LRXLv*uO;@@TJIeNB zLQ263<^?9@{;)O`)m{gotk&!#-Mc+>1VrL5Y?bWxcZyeFlT!d4Xv=zQqC3?E3K}*` z;J4${3*lqvUM<_&as>nnL{VfS-In`HUrv=%G=H_C4*XHH@nUtp1@rYi3m;LHTU*ue z2Anr#TR1MmDSnty|B|8Y2@U)2sv8-CA{^RXRg}k08vt6TT_xmo zO2@CS%}?pz<=#>pwi2B^;M8IDZy(WsAHhbS0B&^2yk^zd`v6|`}_$@S+7{mS;IA{$WnSJ5-p0L_*fJly>RAy z&?1yh#&D1s7FHpU<6y@lu!kC-ob^rS2KBK2csZtmIWs^&ly@uHPveWFm56kwR@z6su=5E?fgjM)tpK=tUCia< z#az*6FZA4=UHNG6LszrpJBqKcZ$S?jch6-i;1F3*0ofxRnoGiTq5mv3;m-XZ4$%C3 zqq}k+vpz8V(5hva}zZz!k8u{_(!o^ zAp{~k?S~clsYyMk4p7EP=js6pn%QGpLGRv5Y$px3IMx8`gdvnu3%~&LFzxC&IcSW` z=H~mm{yBg{L*rCT#(!iVZIFpAKumnZE9l*-JSCXVo2#BkV4^wzAKMQ&y8~>-h{LYr 
zM1-U6E)ct|P}nwv6(X%3SEXoAP&hpCL4a~1E&f4W6O4A4l*y}c{f7k&ize{b$(by{ zoAWy!nzF>2d??N~5Knk8%?rBibPJzgp^3Dxhp!KqIA(XW2?#=yb@r#n00l=03`THB za|E;0SA&PhUO?V#lW#qk(3~Qsi;GBi;lCxyCC13Wd+9C%dX4PV0rhl~#zCZStUKr~ zqB4d{?e)J6+J6vj!EOVP3vHglP}W4~2#Qdap=wG651=d{L`6vZ@{C1n(eD7+c*CtP z0IWdC>6maA)Jp?RlQW%isqm;@@fys88=&)aFu>4Cs*|Pm#IV1M))FZczwiP`PZDaP zSLHTSvejrO$LNV}YoB>0NrguhFK=yW>+Gngb@zeZDeZ9vZEA(!zf zLL6R=y9O783MFbmXrGV!K`{b2Rg#zuWE|}3w4$JlYDXDi!%ja49FhM2KZ4?MsWKHO z)PfF3K4a?n^N|#I9Yf$j?LuGSrxD*xQz8fI^%epk&<`=x+?(2WSK+XCg_B#?n&xkr z5at0079uR3!xfp?SK!cAJr6GrdyohM(T?xYXnEy*50{*dcJm}a zb^HyO8R6>xml5z`U(q-)o%JY}Jjyc`rLP7o7X^M80%8h9TvVp+42tR^qIeu)DDK@@S^{QKL%QKN;oZK0G~a#rJqslSjE(b5 zq&JbcQ4fTWq@$eZ=EVlb1#=$3trZ`b`pUzl{f_3ts{ff)Z)k+`)NrV(U|ZmpqZpXj z59R}k%Cseg9#He?Zp>%ekL$z!R_@0BdmkQ>At1$e<(DR_+4tFh8xm%h+L2Q4u0Ry| zD3ZqPf$*!$eNK<)+1~beD`^~=N0)1t3cD-hgJj6)uXjm)+{*uifg3DdN1KqUR&6EE ztX5`mw;Pf}bK^E(pbB?}B3;{u15`q9+=i;97Y0c1(px7h`{yteBNoxs%?ow2hNEDn zB?zM_MZ+{s%5m29lun?&dP!i(A@zri^vyK+43vbrS!$CPp@A_`Zq35jVr&;(z z{~@l{rZep+@*GFLg`%yZn2u`?ox0G$^BFN_<+}U8OW12xUD2wT5-k3U_P#t9h!xTN z1$KOuuV~vhFt?tE4&OcXg3d>G>tUR{@2S7yD{t_xw>vL} zCiXM}T+U%&Ln;!tYN;6EG>Vwm?vH>|a~}v%5W%<1b+V;ohiKCawyf~cXqu}4)|3n1 zh|*{W00%ok^aXtXHd>CT72#{5nGa6!PFTkTpaq2SN^(^ED&-VN^xkrS0M2DJAJ*La z-zABCplSQ1X7msKs8Wn4xryljZ~O8RwPH+vRL?p|BSaH_iQt9LNTNGjy7IVfNmJbG zl6v^x1(2z^8`pTAZH4su#f2XLs3tJW#+ zDk1=YSDkLuz-xlvynGI~V^r?MkuQmnPfl&SvdTrmCfWUj&H-x~o3&rr?fh?1PB=UfNj^bG z^Q?QEI~&jCzdJaJw0ka+Ou~9OZ(0SmQIrOh_uyxqte2)jfBvvxtygL5b5hL<)5W@c z^FFw6uD?mQ0cI+*ZsTOxI&+LHO%4e5&WvjpxRZGeI;`&mQV99C&qJ&BV$!zL*vg4^ z5L9k7J&Ym4QDq}w6lEJBap6#GJ5Zihj4++=5+5FibSyMG;zL_swkdosnLVC5*$pVN z3)T{ocAmo+g;89b%}jLw2Tmt*YP%eIJ8s1+t@LNRTER@t+=?PyMWMy=hj9F%YgKZ~ zAIOI1pWIPU+SZmiQy@&LBIngMC!A(Zo;;+smtdaw(C_PQ5Nt2LNi9yM3U*s1li%q$ ze);yjc!ze?Cx49=V`HJ@m=1b>)N*BH3Y|fqSBN5mJ`J3}rEjeh>$t*xJchuONX#J> zZ9JwUrxAebp*F+y-WGdQj>EaKdeQ*X`I40Gd%=#xvp_TV*FZ2abI437`1$F{Abcdp z&TXPnO1MvQmny`0V>74oCkk?gFIb$vpS-&HA-me zy<=1#Lt9SB=!0R6r_Odt6@X(5&7C>jIqy8ZIs^j8NWiKSkY~FKX 
zXGb?j#SiuoD%|5EtiM`j>l7xDy0qbaTWVD=CKgSJv)S9^#Pv}8k&HJ}@ z*16Uk5{gE4x>zn0c?Hj<()b!E9wblXm{hL%DSSn_GIjpIYe3yi16aKuWTCyyR=Atj z0A=_Aaa1|*B@+MgCfS#9H;I*Yx{pBp^dSD<%S5{|X>j%b`r}K5t1>z(E$kQXJ4{&r z4}qQ$>%Qra+8W?W`{FO>uur=^^IuGX=S;IH2YyWWs%4nXF{a3n>YqD8+#h92}I(lRvJ>5Y=s@@V-5 z^>U``{yRB^E|K3vkw!kn`W))MSEWGTVnDY>5GpoxO?}rYu|{<|G$0r`lMmrK9yUTr zlMA)w2Apg9fWEj{T^ksV?9Y)dUFc@YAqB*=(o_kKfU1yFr$1q`6Ma~pxM)g zJ%vsJ4QQ~yi{gUwVqd~|TDbQB@Qhp_^n!iu6GO;2GLb|{fUp$#{;mUmggcPPP!4*Y z;8;}T;J!bKBLf`$ps9_DKf!my5EWYQ%M+y><4SG2z5?1Kp&t=5_V4yP!|gZH;DtcX zLSRQ1hWyh;>|?bNTqVqcFUy2l-$*?d0PQOHxdLkLc08{35vUb}n=%mv1=i?id+I z6%9=msM&6#@KhkmiQYNob;rjq#3_H$0Es(&p`59afY%2|&>LZXy`@P2D$Bo&NL9Z0 z*K>`ipSytg+}k*w6m;ja#JW$=Rl*o@1P7E?hA(sC=w_vMv`DKnr=A=b0(|x1L(reR+C610+mB%^N#M=6Z_EFzz& z7hH-Oc%hG=*0#Ck~#aP3}~JjPL$J*Maij3$mm)vqoQi zGQ6E32i7!YzdYUSUOyr#5TtjWFAP;i5)mIn^YCX`vXm1qVM!YniPieJg)+8B9ifPH zg4)<^xYfCGk{Pu|=BOF6pGDodh8B9LrOQzp72 z2zEWJ*6(hi>Ts9yrM8Cj7vL$_pot2|!XG1a&pQnO6^!^A#Oo?TxK~Mgh0|;Lq%k0( z6RYz(0n5@4wBrivD5wfpe=B8KvZCF-q+ z)+~P%cFuRLMU;r$SR6V%1n@?XF&sGsmR#)HhNj;xfv?zLF)GH5mX=INhF`Rs;0VW)|Tgz6=b z$4mXN)-96K`F`2`xFR+I)fYN8&chQfM->lmPVo^Tcb74F!NY#xJH&ZJ6o886Zq(K@ zW)V9;t;zu0GlHSd%~fGXAlycFD?>|OM>y);6yF29O1bc0Imon8RyL25Tkng!h-r^% zoB?rZxSAOMSpXLH!9|>Dp5tcYZ_m-h{riG1@Q@F3{-Q6`HPyGzExF)a_V+k_4xnE~ zy8%l@<+vJxe$Gxoq=!3k`*zq@BbjFJb_I&c(c+7GO4RtoQF6JB&dqOO#xf)P1|8mV z$8)B|M@3KW)a`p7Un;XI%{BK~c?6F>H0msr%K417xAN-uy*5dkDGwC2U90NAcAY+^ zrpBRPe3b$-As|Pmxi|n8R7)BG!sc~DvguR~)0b~>t;Iz-0s#=UZOf6|0Yzni!$~>x z*UlF`;Jy-CfZEB3Yvv*RhpPq-V{f6p81dzg5z_y`G>4L+7f>bvKmb?BNq6$bXOR6z zm_IVp<^Yb-t2zA~gIi$I`3_*!HguTV!}m#sdjk7t*w-3JcCcIA1C;}f9l1C3V*0?% zoQ;@x?l_}BjQY#$W7kuBJEUfvZ+&wkb)l}y@BVB!9*-;v_wqK21F1PK!b6(WU@oyI z{#dNrb<%&G0lB-26FFjkf%|Oa;m?uJ_+ff4tC6^(cAG!(W%e$ty`4rSHn-qh;ogXI z3?e;@JrAW#vH#wjxfGP4Ql7`GXpi%73zYh}vK)M(?%0)`b;05G6 z%jWtr%cv0~NS*nMp5x6P^z;k=6;OAe%lBJZ?=a7a006d#zA8o`+N)64clzlTVQ|9x zpY#Ea0bQMMDK(=+>I%@Q`XRxVsN&2`Q?Y(BcVCI^kSF+_Ldh)K3l1H2aJ&j*=Q+e) 
zJWFQ4HO8iC(|#66kDKWa5wtolcwnoU%zA|86etfwiRLq)n<#;I3$I>a!|>-50FXJ` zMoz#rfsM*<0_AL>WM()@n80%SjYY(dxtMJ3f{_!lTUk6SS#fju9x{6L?HFyLP{D|^ zT>3q|m9+WVZMM5P{&P<=R?a{_XbO5M6@w=L=XW}-K=O$8s8S;pmQO}+4X|{M)Jy&; zij9o5{!Bd6>Py3T-<^u_;4)=mIRs(?wr2AdGKAtmHn;0gznIz@*fuvKn;$6-EUEgp zXM3Nr#!K8>PHye?G-}W|<+r!4?+u{Q(Ak@Ox6|Obn)6tIhha82Ps|rLN5gn!Ir)5M z9fD#vf3^YbqX7F-S0*x_)&R?eI&Q5epu0Ta%a*MNuDKomg4M%7tVobnK~j2n0_Tm!_kcI{0h@}U z(&SuD97)g09h4r^hr$}F#-D4v=RXV~LHZN5d1LRP4DE$B#QEDlLHhNDNxxz)CB|TT zJY++GPl@sK5lFo>JrCC4=Zmu))p@Qez1)N~vdXc0rI&k){}MwBf9I3D8#1&({x_d& zzroN=ygVo4?GNpaDtn2ZQQ-bGxrp=@Y7CS)+5?XTR7LLKG-0|m2_#Xi{jRHVOQz94mIdlwN4?)O{8R3@gzoZ22dlz91}+Saia*ON9un&^vZFZxo!i)%XeJfw`*Q4Cc(knw0H;IWG$eAViLp02 zzR?HtEeYH>M=P^}JdDd=;40@mm(->+8OXF!^*Xr9#U1!XsAkNCd+WGNb29Irgw-nH z**@l>Faw2~!u$IGci$hq7vaXaH}s3zCu#X`{d+bH&%-f`Bq-otC6*o$y6LBfE2f?K zc85KXYXgW5BU3WE-p%u88BD|2=Dyj~Y<7sBm1|#7|B_-`kc9gz9{tMp>EQt1?GNTl ze7vF0UAcWop3HBJKN@isJVoytsxom_Np&IN&fOsODq%s-r{^&%AI-!Qr~h!TL#mL9 zJs<%0nj=z7UIGzO`sZ*D#WwV#qEcy9N-`fU zd-T@p?jb8(aq%}zxf-9?RQUaQRDza~2Ruy7GV^}5*5erdZJpW(#(A5uW#BDW{-wWR zB?Z9H-!xB0!C0TEAODZL_l&3V|KrDxy*XyKqlk=%WOIznhMAEhJ1dltvPYCSMpm*S zGg;Z|ltMO@l$B9r%gX${E}!p>|K0zM-<{v*&ZmbnuIpUa^?toy<2hc$xel}a=q9c) zelbO`c-3DW~mp`?NF(4 ze|*@ohxaZS4htA2t0JXozVxPfkDC>S3W-{xN8L$R3i{e1Ls}vw{F9p9jQPV-@^(^3 z$auQU^oJhW*y^6fmrUa?+ornb_q|!}4JzO$Ln@j4`UKbHit08q8W+V-M>osX z6O&wuHe9diD!)vY2U{?HhM+k<^O7GVh23xDGM_;AxBkz+oU3lO!+8qhHnakN(Jb-( zL+LkjHXt==c$+&vb0=``IhE4q$t7Tdc_>f7{4kE_byAZ8EbjCVj=h>W%x}_HV<4SbDa0r2bG^hDfRpgxu$Wz2)lM z{h)U_(6WqEH101w-Bj0+|Aevq0hiw-(AnJ296k9EciE8HtPo74l*ucbFQ z>J|%L#!QW&+Oo|4WdWRoFI(AgwN1^f_?lhvDK-%i`a5^^kJuJtWzA;45Sa0zv<>mW zLx>?n5;vUjeC!C+$xbMWt|#!_G$ zbnXy)bB~n=0nPSI3Xa-|c8PRfD++x7(Y8l*U)u2pM_A*TiV}P6XMI!rx)irCBj@jo zk3w0@I#PtBs{VbQ4px>x4f`VBH8Z0o67W3n?(K6)b%bkhvCge=;`S4ZX?=x`{sD-r z;jy9_|2w-))A9_-jHSp7(q6Zx_Zwr2e^`KUUfkP%_`RL>m9MXV1SFR;6}#6%(q}P2 z(Zfd0uKk*w^mu)5CheKi98->ZHrNz6Zb+QV-~Lib}g;ZTc}<+x;3v zYx9bxasA&a;;!G0GauGYOJ_Q#@ow$!$&ddY>W(2yNsksrokD3V9`?M+AVk1yLro6p 
z9ubO%JJUdb%+gZbg7~t?bF}m51+?;)eO_NQzTxhVj5II*-3`rldjJ`KZDrb@tx>>` zCF^EOW;9FaZ3G)8FSLFKTh`=)2jlmD*M-RjW<70wO+t9;g=`&y;=dIBwW~-a{u8lz zI+>wjOw#q+{{9>O`+az??SLo>NlbXcSJ_y0DWQs$KL(sPYB7@Kme2K+zwPB0(+5O@sc-7PLC&&6F&wO$HHIA+)MOfZX5}5TvZTwB41f;x?K0VdRo>=|`Qr`f1vFHq2hg3iS%#Lw zF&1Dor|Ue{|MlHNUEfR88-Vii5Dpjb-KnJxbM|WkZ^cGNl)cx9zk%3 z)74O-CS{2SqW>mB$67*VF$0pJc)A`d{@3bcWO)1m>}?BInAGpQJX#hz+HN^oN=qq> zb~w3jrM-`oWj{apY79k@%;E|eQm$~rD0lVX5b2dk)aPs`Ua7js!(&ZxErp8vZb(Sy zd=yi0z=%8Fy<4ShGPWVH& z;cU4@C6CXTmx*m%#2f3oAOKkxj6e;>MyQQH?!atl5$u579fG7>i?Y-Sg1*LHg*io! zeZkK-h6ZCbi-rD!EKwO#ee}yo%}`0#M;VADt&f9?)Po!)^7_~Iwnmi0Wa4QR#CD%{w+VBo3M4iangr5S%zxjnG*+i`b)BZRp$ z1OSCR|9UcA{xWd23ZvF$Kiqt(AX7!J3P+0e9KO)hGI- zr8ob%Xa#N%XrKEukFyHrBw^K2`mz`r?-FYCXjGy%cTQO5c{V{W;Altt@Rf9O-{!Ik z?S$}G1`jbaeSXCGDgG}!Xe_IKS%$KZ@Tjo!-Nxpn#$vekHy(bk0U}4MTmAg2THR!O zdw|-MhL+Io1c3=cVYX!`d+pT>g%JQ&{&_6vxOFwTI67}!(&;eIQo`IGL)q~*_Z+Kl zce?CQ56Ve4*WR!Z5i)VH-{&NKfz0ie7tU=Gn>U@F@gte(Yy7nrVO6Y>6>F>Q za-#nRwe`JRO-}fB$;jrGhj1SudU^s4tob|j7Oxg zSNgXIBF~x-y+lk#C20`>iR8V|NV-ebFUD4l^FpGU@REf&L`n7+?ujP?X!~PNuVcMw zDE45nMt@clyk>QM*Dcg5RG8f|Ka-frrxs&ed3 zIp_B4^6>4}EQ0EBVe|TYl{$M8|7}lY)8PkKblI501mg@>)Q%L`v2&s3M5O*f?HobG zH(yfDc&Vy5@JZ>u#A}NytF;p}`D~hR8<#w^!U?v3z%MCH;ynp0L0Vv@(^mu8)8FZm zG*N35+;{I8TwXEey0cODI>8m;oLnc&lc@H~oNp@TAEH#KvJl>_&7O|3H@at_C$Lsk zgdftTcu~?{GDjhhj?+`d@Pr8#^mkIfzS6(82H>`l-`Qx*H&#P}O*nm36tXEZ{2X{Zxr0zRF|uT_ccP5k51(#v9Mg=1n; zBp+T$+tj-fkxvjEKjLNoMNBcu%Y7|R)#mauQekW2%)VjO;b7flvW3_MpBG+hm*eiS zw6CZ&28UjCA6D53KVj)kW=Ml!6URRZ5WW5D@=fzsj%>PYiEncA!plP;&7wiw6DIiW7^T1G zr;Eh8OwYbHCihdGPnlLWk$gFOrFq@v8pFSCD(=3jCLyQky%fR8fyKvXO)$9UQ|#q) zvc_-41fK2TxZ(MV?p2+me*8+)E2uy5b@RO>gM}Z{OGIb|c^kG)2OSY*P zkl@i)o#vGy`s`Jmh+&<^%0OA|dwo~z+v!Ltu~RvWb(My=mCD5o`EJ+g6hcBA)3kG> z^Zk#Av5(1b0NmGhRUQ5fFT+``+>r&wru&)+DS zR9pBVbvMV1<~7Zx^VKgyYT70E;Pjh=Q+xjEe}lsa1^Ev@{-A4{xZ=DaB|lTlPU8HU z>@F&3aW$J9g&jxoIUQ1(q<2Q^wV(EGUwbE^v-9thB8wC{_Engn=u-o=_b?W)&UE${ zUGGkM;W(0k^P`YSkyNki#o 
z-_7~V2KwSFYm17q*Mq4=V2nV?O`!F7AR4KbnkF=a8wI*&3r6#LnHCOg`0x-^VBoX&#tbhUqY_uP z>C+>OSF?ZjOB9vZ3y0tzRja5Z`6aaF@Y@%)F)#jx5kOFg8iU_s{nYE5KlONmtm13z zy3ZCw7o-vHs?=ZA0UxhtUxwm^z0R6U1i`4Z{#DQ)t{ffnqOxp$y)?Tueq||+?2lnZ z)_EAUS>?B*pCs5D3ayi_#u)cL%D8Dq@|I7hZ`MIJ(J+Vfv5wUVh_;oj2zcU?nNYZ# zLvOlP)I@i@b142%{OmK_8@3aoJciTiSNrn)uSK;WVwWTcEkqD30MC3B6E*0(@oZWV z_)M-aL@hPSCq1HMI-j*oiT z=j_hsluO$4Lc_(BW_}h@?CH{XGN@P{wxXY`W0`Cn8_w%y|K zB_gf2QTkc`Iq&bZBxFt(3Pz}<2tQ938s2mNngi4Mv%!`>O4~QOYGIaH;L^1|??+UkWSl$n7+eFz;-(2lEN-6TxNPyX66_IdvE!kM#t2|-N(c{(!%)M&F-1=k;j8Z zEm|YI5HSPSLkH$`M~Y`Ksp9wiVT#xcu=5M)k1;qdCz%8jA`S;wPb*Sl3jOfcz~LG} zWSTq`dZBS%Y?vP-Twd(O2`7dLK@n3R6v(b6y?>UX?Sm#& z>{3p(@eM^8cHPMA5(CaJ9z)cR*g0sq6~6JeUVVO{M@nS0+Vy$EG6EjI&m)lTP8ete zZ*5aYh18NTem7n=k>m_WI>p|FC{3WL2*{_Bw6H3x$eho#;21%PnLr1@Yrp^{#*klb zY(kEt=Sv@iVd8rMgIo+Z{F$VZ2mTsKMnJBb&Cn#&DGy@yhgrynz%54gF92+Rz#ZQS zg+U|65cwLILYDZhP3^0|fpBO3bKCeAfkK9-*yjJRwzxac4}n3P}TtX4@WrK>r8uBi{ za@M~Pi6jW_4LO8D$V-{$UcXA?XwZ2R7+=W~AX`QI06Oyhb9rz_su2zsaPvz{%dOqyWyVqS7pJLus@*1<$Ci?n z*Yt7ZaF&yO0lO+4&|Mfi5%f-FK ze_Mfo9LHDfHs%BfzABiB))NtTVS+Y(lc9jz4iUO~_Bz4i<5@KP`>8^{@-N5dGKZ*n z?MfbKNLK?qq<-5^g@l(Fb3aT{TPv}s-{pH%r$2#KAo~oNvhaH4?u(m#&oV2|qT)4G zOeSAaf?Nkl5Th}QQkW2@fcC$J01R4>G*R(niKEjAyT~KCPe-7|-7hz8>$r}L&iOdt zomr$}2*K<75`x0RU5CR39iyf(7vM!bxSU!~y^zlyslhjW{@@3cxZ5?zvc|dr(}X}O z@jtbOE#yE(5y^b_;3ca-_?j$>I!}P7f35l<27kiTkYm?_2+{xH4EIi;>YzbR832eF7y;-s_#=o6 z&rlQ}ogl>xm@~%PKR|RhW@zDe>YaKAt<&!S?HqEOfIkL}Vo;r@oHS~IKv6X+4S&Jp z!#QcKO?^P{vzNR6`?ALvU#z$eT;iNiL$hd4_R$T!g&%D_~2sS8L@UDNPlYG_{ro1-amyx)I6NIxc z0_;hBHgX*Z;9CoTCUWnXvQCIx?>D|)2pb;2D?=_T@Rp!-xbSMwp(;ThIt&uFIxVya&_2_c;hB8Jq6kB$ykj!Ix}S(Yad zz;V;g9HrEJrhj43h{8#-2NmbX`w2f2)PF}SGl|-kVUI-QR+x7qnJ?LDY$L`)qn+6Z zAjmDR%V+T$vc{^QrmLIyD~tidzGhHFut99n7MRar{E+glC~}cQMT~;7^n9ngP)PD?tTtSA4 zroe$3!qboq75!;mOWmdrtz9MzHt6?KSB)0!B!0Kmz~WMPLl z5NmnH^ihdfr3<3gF{u`;egGcI_`EWXLi4Zi{bBPRv z`<~`20ziE0LGM`5$pS!q>wXdf|H5a_w*OL^9)`VjO$E0Os& zpfI=h0*SQfh%x`NfcLZoK+Af4h>;u5&Aoh#x}>EN 
z1|snwm|b$OlTAYy7=eLP`l|;NsDGNe6^ZF3l=>Nj(3QGYyZ^M?(W)3Ti|{aNY5}oj z5fEi(k!)6H-ZG>4kIbAj_mde+-zTQo+!rpS{p^wq$$2?r>Y=~4^>~`nbz6kn)d^C$hs9ji`?AGuWR4xq``tWgQmN&qq2w&ywccJbq`57h#Sn(V_WmO%>Rl8zdDj;r*sek`-@Fq3Aw^WrJ!cPr zRv|(<&fwL7U|?<}k64*i(hCuk=oqkX2>aL$=4!{qm&wV$v8%4XvGMn|{JjydEczwn z1L8%WpS6GSJVzq-XXq!{QEHFN+_?>f`qvqn+ptxb*_QEK(%TLVAQ1#|-;)_`rymdJ zt#7<)AMjo{1T7#BNgB$?&pzssy5FV!&&UjTYYO1^#yNKlJWvNc$s%Ufz;Z4ZmVE=j zjjp2`p29N4&vZIBP!$@ec)ESIO1RT`2hDJo*Xq~;WWT^ZV1nsx-Z z;`gjTGlGtkN&Rz)*^u#|!3Y9{tk3luC<(Jm@~I zUMxXUb@_c0>>1NUR6X_x);rEVM2_ms;A=Tyo+brtB75UVq|Y$AS#)JC;k(ViUB}5c zm|S?kFMwQvv<&cFz)iC-yvS%fILvi;R`;j5nSiH$-(P>u3yn zzJw96!g{*7WuD|ddq3GJ&}RO#m88n-06E zg5cFDS;4fCYgXB_I!Xenk~C?e_W?e+^q-;)4EDxaZ%Fd6S6$cOVI0$_q#igwH!Kp) ze%gtnYG0?TmWe!y@_9jGnY{|wzBu+*Nc1ELek+S^?Gj5J-278iic$Y@^81^XgQYtc zJf_r7e4bl7y7|C?@Q&fXgCqX%L|*-q;b@)5Ov9XZI*j^!Pb1Ik)3Z%IKX;iO%`b7{ zS-DnWVvg#?h?ytxyoNW`7mpSy4;_{!?$+F0R&0OK+qC^Rps9C(`oW5W^4`erj0UAX zzpv)2S(8W1L6IsvuLB+PA=zTrB+@6%Cb&cCR4|!_JIxkCofSYvsrZiSU7leW4#R}G z9euEBeVSsPYpqXxke7pkH_X<~YEI21%e+?f+%A-9Z%uir{jXoJCHNtp^wYTY&bguj z0p`(Yo5=%gcSKl{(NEU4wAKVdJ+Zc+?GMrZtP7@T#J7U&ZX!r*KNk0r`>d0}=K$Eo z{FNxgpT2kv2nt&H~+xNkwhSF3NE4pUeoc@i1cgq5Y*(%kRrj?vd;w`XMI>vmA3sTAokhoqezN zvrq^4QUj_!K=n?UOA)A2{i(xFcqpQ&R|Ty&h#97^7}P4azlQu=vS_L&-uI{P^$Pn* zP82uG2ED5l&j!bmn!C!}EsoxUcT{3?hv@sO=~fS2a%gn%Wi3(48*7jq;-E?rkpC5i zTO=XTl0$?p3!h(4=?O)v8c;_Eh!m@)hf$zK4pjCD33K#K(dQV*ACrRWOr{?Tx-yQ* z@x-~4tEXG92l>+anw8B^rRJe^r>+@1@`$mq3OYqP1xgWveLpV;;XD@HvA$L#OjD=J zXh`KH^i^t!RTSxj;{}5gTUQI-FQ9v$yKzTq)vCB;Fh-u!q>D0)>`9wC=_OVwm#oy&<|ysS@9v!#ZSN)E zCFH$@-|P58a*HhPH+QJNt9Cj?8)WJK2p^$U!_HymsXXg!E|9DEhruN!6H$W%F=66s zcdDe67Dwey&(#an-hp3_U|yOA+ty?K7}po99YggaN#;E623Q*B8kS<}G_Wn@#=Cn@j@ zGSO5`JYnb)l(>8E0v^Woap|JO<<-e~XJ*`K%*hFT=`F;_lWcfdxiHrmr`U2{L_PXR z9m?T#7FUNRB)#nrM?e{CwdwiS6T?l|)r@YX!DXDlk0s>{Hxl?p{3iO19l(lghGcUK zf4#;xt;GIPxz683{v#{*!?rgl5oa|8^1=1A4$p8?Al)EZYiWn9x1$WH{0zxE+4X{Q z8Mj`Vfw<%SX^CQFW00sFAY+}^dhz64CCY=lU@y4a49zP3>vChOyJiU`;je+;R#F>U 
zYg~AL_wn2<-9o$@!P1<3Sn3xvfuJw>0LeuU_xr%jCQQPuOOvi8~r;?Gvp#=_>Thg0Vy^<)RjsLW8SEO=%jvhd$4=bkwfJebO&d6c$X|BEZRIcHF`Itm#ZP>b=pp%0qZ?g`4glT?Zry52H zH$b5wqaMz-+Yv#c5}apAS})23b@zW*06&c5n8G>KNg}hCdM|BZewRke+Ew2#EyD5s zi12#IEc;5QHqr( z@&X0P&dVFo9M;y8msRGv6rW!%ro=Bma!4`BLrgdjr^%&D%J-zinM>9)_q_AhOHlZ8 zku}N6uF`Rkk}lI9{#(oB<`@4xR|fr(T{fX7Vt~%N+WEv=kC*Fh7VJJ+TKzK2zLzl(tI>8h-IzT$ zI9l?PdQV)Wrk4z{@};(I!Xae7U3BHC)4R3FimU5tt~N;Od=*oZ{%b6qz{vSPz_JJfJt&2$!}~U zZBqBGMvZ9J?CTyYtFr2tUDUaE6Ar3VWcvY=HPkJ|f{~9W^7wWh!CDCV)*37D*z9NK z%|~wC?A5;h$HXVS z7C{B>{4$|YsfC%t!mTW|$<~Bmb}sRb0ZY)?sq*jg4Zol-%SkHNM&}fO<(u4J#aWFE z049GU9WA+L!pBC(4NQo`kH)yQe8;RyPzH3Z~HZbo7nFV$qUf-umvdZRq@8o9s!7DmW)a3m+ z8J6Mj!%4;Jk+;)|y}=%BG9$7gj^nLu)Vog8(GD(;Si^$eZD!M0h_w-ZJdwdHOR>_- z@jJvB&-z2cj)QLFi^Lsz)hVKV@5vg%me`b0@jfTwEyEyN)K=^&RL7r3WV%!ANCdIu zyoFEMa_wEdS3V!Ske)ALu|z}BGe|;XQX@*i_RS4jA4Mo~fcHElPpTo4r1Vpgbr&BJ z^7X6&lOxfRs)*@9v{m3KX1=OJ;?T0O@>V6ck408{Nrl4k18z2~rF1eMU$-uEhb}zQ zYc;i^#u$ln3vh^YQ&;)C`OtI~Z`2Wni{`&Z;3>3X@^Jw4DlQM~z_~f5X{72v{Etb z@7pF6*AkwpJ0RyxJznV1#y+*G}M|*q#ZAwZUb(_>zeU%gQU8K~kxZ6*6v`F&j2996lB}G0- z+Q2r4$U14Mh&fapAK+8A^8W%&dqWi^Aq- z!*u|!wZHzI1^pLKfaA?ASn(|R1BLW$Z|YARx&iok&^MUhOD zCea~Kd%yg)nppIQlCN&#@$mupLJ0Y2g9#>pn|d1-Gqf*d!PbCYYKGo~{wm1bs{)}j zUy+jXDfmoYMGS>o^3nfYK2-vSz}hto`Fbg|l1RNyWyxnuVM}7QI9yh4#09%p{&zOS z*{CQri|jQhesw@|o$qGzLufkNA5eb`(0HJB$psnK55l-Y{RhxVPjboK>&VKWKfbfR z5rlj(=iTZJ@BtFQ^x5SI#snTg@4m3Wu`sM(LbRpHw&3E+jpM-eg@grAn;UQR!<9JB`7X1($Ntm*=A4j3{H-F z4c~|txXO(X&6q??^%Uu)N8Q!xZ!+zY&@)7nMUh;8_bl$@49ISNQO*Ktu`W;d_{HPZ zpinlAJ5S+lLuLTQAP7{xfJx!_jOk@z(wGbLZHe21xwe|{n{g<3ISxoo zo~Fm&U_A*dRbO667Jej!UakABk+NH%0a{|mFKVq@t$TxHwE^4`+wD9QFJIV)2Nn=z zkd;SEb$rniA$W;-&h_(xWQtA&8C*+-ED$(EOsk6?|93AsVOqo4nW@~uB&dfO@jnAc zLJi-74!12h6mzm4Ki%ZDBdXA#yHN;KRwK2G$kmV@e+fVZ3E)-H(i|U30*sK25gX+D zAj0{Qy@)g8_Yz~=KBuYbVJv|5wc#vhKx82aXitUj9I$f8hdQDlInuDf0_C3mX-`!q zTmkZ`I}?^ixU9LJ4_e0&0=*u%`2DRe5;qXqBs3ip;BATSJFB-4vD>$Tw;&^(C;9J+ z#zA`H{8L2_C}MAuf#b`$)W7IJP0o9(m4}6J02qd%h(ggm%wC0*Fsh?L!pA>vG2(K^ 
[GIT binary patch data omitted: base85-encoded image blob from intermediate patches, not recoverable as text]

From e0186f7d0f5af4e884409984a6863eae67950b12 Mon Sep 17 00:00:00 2001
From: Cheng Guoliang Date: Wed, 31 Aug 2022 11:10:12 +0800 Subject: [PATCH 16/25] change mkdocs.yml in en --- en/mkdocs.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/en/mkdocs.yml b/en/mkdocs.yml index e9816a8e..4a9d7f01 100644 --- a/en/mkdocs.yml +++ b/en/mkdocs.yml @@ -134,6 +134,7 @@ nav: - Cookbook: - Basic Operations for Using Global Tensor to Program on Cluster: cookies/global_tensor.md + - Distributed Parallelism Strategies for Using Global Tensor to Program on Cluster: cookies/global_tensor_distributed.md - OneFlow with ONNX: cookies/oneflow2onnnx.md - Model Deployment: cookies/serving.md - Automatic Mixed Precision Training: cookies/amp.md From a9b2c2f12c6a742122592e5ad45c8817d7bf9c9b Mon Sep 17 00:00:00 2001 From: Jia Chuan <103753411+Jiachuann@users.noreply.github.com> Date: Mon, 5 Sep 2022 10:15:52 +0800 Subject: [PATCH 17/25] Apply suggestions from code review Co-authored-by: Guoliang Cheng --- en/docs/cookies/global_tensor_distributed.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index 4d4ec6ae..5dd6a6bf 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -85,7 +85,7 @@ Take two-GPU parallelism as an example, the data parallelism program for the afo **注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** -**Note: If you don’t have multiple GPUs, you can designate the `placement` as `type="cpu"` in the third line of the following code, so you can simulate multi-device parallelism with CPUs.** +**Note: If you don’t have multiple GPUs, you can simulate multi-device parallelism with CPUs via designating the `placement` as `type="cpu"`.** ```python import oneflow as flow From 9e8f7c50b80fa1753aaa8e92acdce201380b28bc Mon Sep 17 00:00:00 2001 From: Jia Chuan <103753411+Jiachuann@users.noreply.github.com> Date: Mon, 5 Sep 2022 10:44:45 +0800 Subject: 
[PATCH 18/25] Update global_tensor_distributed.md --- en/docs/cookies/global_tensor_distributed.md | 165 ++++--------------- 1 file changed, 34 insertions(+), 131 deletions(-) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index 5dd6a6bf..3a0c449e 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -1,90 +1,54 @@ -# 使用 Global Tensor 进行多机多设备编程:分布式并行策略 # Using Global Tensor for Multi-Device Multi-GPU Programming: Distributed Parallelism Strategies By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) -深度学习是通过神经网络学习样本数据的内在规律和表现层次的一种复杂机器学习算法。计算过程主要涉及数据和模型两部分。 - Deep learning is a complicated machine learning algorithm that uses a neural network to learn the patterns and representations of the training data. The computation mainly involves two parts: data and model. -随着深度学习的广泛应用,模型规模不断扩大,对硬件(算力、内存)的需求也在不断提高。然而,受限于物理定律,持续提高芯片的集成越来越困难,单一设备的算力及容量难以跟上模型扩大的需求。 - The increasingly wide application of deep learning and the growing model size impose higher demands on hardware (computing power and memory). However, constrained by physical laws, it’s getting harder and harder to put more transistors on one chip. Thus, it is difficult for one single device to meet the computing and memory requirements of the ever-enlarging deep learning models. -为解决算力增速不足的问题,多节点集群的分布式训练方式逐渐受到重视,高效易用的分布式并行策略的提出势在必行。 - Distributed training with multi-node clusters emerges as a solution. We are in urgent need of efficient and easy-to-use distributed parallelism strategies.
-## 并行策略 ## Parallelism Strategies -值得注意的是,简单的设备堆叠并不一定会带来算力的增长。因为神经网络的训练并不是单纯的“把原来一个设备做的事情,现在分给多个设备各自做”,它不仅需要多个设备进行计算,还涉及到设备之间的数据传输,只有协调好集群中的计算与通信,才可以实现高效的分布式训练。 - It should be noted that simply multiplying the number of computing devices doesn’t necessarily bring an increase in computing power, because neural network training is more complicated than just splitting the work of one device among multiple devices. In addition to computation on each device, distributed training entails inter-device communication. That means we need to schedule the computation and communication well in order to achieve high efficiency in distributed training. -常见的并行策略包括 **数据并行** 、**模型并行** 和 **流水并行**,特点如下: - Common parallelism strategies include **data parallelism**, **model parallelism**, and **pipeline parallelism**, which are detailed as follows: -- 数据并行:对 **数据** 进行切分,不同设备数据不同,但模型相同 -- 模型并行:对 **模型** 进行切分,不同设备数据相同,但模型不同 -- 流水并行:将 **模型** 分为多个阶段,分发到不同设备,各个设备之间以“流水线”的方式完成训练 - - Data Parallelism: The **data** is partitioned. Each device runs the same model but processes different data shards. - Model Parallelism: The **model** is partitioned. Each device runs different parts of the model but processes the same data. - Pipeline Parallelism: The **model** is partitioned into stages, which are distributed to various devices. The devices execute the stages in a pipeline fashion. -除上述三种策略外, **混合并行** 也是一种常见的并行策略,通过上述两种或三种方式的混合使用完成训练目的。 - Another frequently used strategy is **mixed parallelism**, which refers to the combined use of two or three of the above strategies for neural network training. -本文以矩阵乘法为例,解释并行策略间的区别,以及如何利用 `Global Tensor` 实现不同的并行方式。 - In the remainder of this article, we will explain the differences between these parallelism strategies with matrix multiplication as an example and introduce how to implement these strategies using `Global Tensor`.
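These definitions can be checked numerically without any framework: for the matrix multiplication used throughout this article, splitting the input rows (data parallelism) or the weight columns (model parallelism) and recombining the per-device pieces reproduces the single-device result. A plain-Python sketch of that arithmetic (illustrative only; no OneFlow involved):

```python
# Plain-Python sanity check (no OneFlow): data- and model-parallel
# shardings of out = x @ w recombine to the single-device result.
def matmul(a, b):
    # naive (m, k) @ (k, n) product on nested lists
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

x = [[float(i + j) for j in range(5)] for i in range(4)]      # shape (4, 5)
w = [[float(i * j % 7) for j in range(8)] for i in range(5)]  # shape (5, 8)
full = matmul(x, w)  # single-device reference, shape (4, 8)

# Data parallelism: split x on dim 0, replicate w; concatenate on dim 0.
data_parallel = matmul(x[:2], w) + matmul(x[2:], w)

# Model parallelism: replicate x, split w on dim 1; concatenate on dim 1.
w_shards = ([row[:4] for row in w], [row[4:] for row in w])
model_parallel = [l + r for l, r in
                  zip(matmul(x, w_shards[0]), matmul(x, w_shards[1]))]

assert data_parallel == full and model_parallel == full
```

Pipeline parallelism, by contrast, keeps both operands of each layer whole and instead splits the network depth-wise across devices.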
-假设神经网络中的某一层是进行矩阵乘法计算,其中,输入 $x$ 的形状为 $4\times5$,模型参数 $w$ 的形状为 $5\times8$,那么,矩阵乘法输出形状为 $4\times8$。 - Assuming that a certain layer in a neural network is dedicated to matrix multiplication, if the shape of the input $x$ is $4\times5$ and that of the model parameter $w$ is $5\times8$, then the shape of the output will be $4\times8$. -基础代码: - Basic code: ```python import oneflow as flow + x = flow.randn(4, 5) w = flow.randn(5, 8) out = flow.matmul(x, w) print(out.shape) # (4, 8) ``` -示意图如下: - Here is the illustration: ![matmul](../parallelism/imgs/matmul_logical.png) -单设备的训练中,以上矩阵乘法计算得到 $out$ 后会传递到下一层,并最终计算得到 $loss$。然后,在反向传播过程中,得到 $\frac{\partial loss}{\partial w}$,用于更新 $w$。 - In single-device training, the above computation will produce an output $out$, which will be passed to the next layer. Eventually, we will get a $loss$. Then, in backward propagation, we will get $\frac{\partial loss}{\partial w}$, which will be used to update $w$. -### 数据并行 ### Data Parallelism -数据并行是将数据进行切分输入不同设备,而每个设备上的模型保持完整和一致。 - In data parallelism, we input different data shards into different devices, and each device runs the same whole model to process its given data shard. -OneFlow 特有的 Global Tensor 采用 `placement` 与 `sbp` 结合的方式完成分布。其中 `placement` 表示 Global Tensor 分布的物理设备,`sbp` 表示 Global Tensor 分布的方式(详情可见:[创建 Global Tensor](./global_tensor.md/#global-tensor_2))。 - In OneFlow’s unique Global Tensor, the distribution is implemented via `placement` and `sbp`. `placement` refers to the physical devices that the global tensor is distributed among, and `sbp` refers to the way that the global tensor is distributed. 
(For more information, please refer to [Create a Global Tensor](https://docs.oneflow.org/en/master/cookies/global_tensor.html)) -以两卡并行为例,矩阵乘法案例的数据并行程序如下: - Taking two-GPU parallelism as an example, the data parallelism program for the aforementioned matrix multiplication is as follows: -**注意:没有多个 GPU 的读者,可以通过将本文并行示例中的 `placement` 指定为 `type="cpu"`, 实现用 CPU 模拟多设备并行** - **Note: If you don’t have multiple GPUs, you can simulate multi-device parallelism with CPUs by designating the `placement` as `type="cpu"`.** ```python @@ -96,49 +60,34 @@ out = flow.matmul(x, w) print(out.shape) # (4, 8) ``` -假设以上程序所在脚本文件为 `test.py`,不同于上一篇文章,本文章借助 oneflow 分布式工具,在 Terminal 运行以下命令启动程序: - Supposing that the above program is in a `test.py` script, unlike what we’ve mentioned in the previous article, here we utilize the OneFlow distribution tool and execute the following command to start the program in the terminal: ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` -数据并行示意图: - Illustration of data parallelism: ![Data Parallelism](../parallelism/imgs/matmul_data_paralelism.png) -以上程序可以看出,Global Tensor 的设计方式使得上述矩阵乘法案例的修改非常简单,只需要将: - As can be seen, the design of global tensor makes it easy to modify the code for the above matrix multiplication. All you need to do is to: -1. 数据 $x$ 按第 0 维度切分(`sbp=flow.sbp.split(dim=0)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -2. 模型 $w$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -
1. Partition the data $x$ on dimension 0 (`sbp=flow.sbp.split(dim=0)`), and distribute the data shards across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). 2. Keep the model parameter $w$ intact (`sbp=flow.sbp.broadcast`), and broadcast it to two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). -### 模型并行 ### Model Parallelism -当神经网络非常巨大时,数据并行同步梯度的代价很大,此时可以考虑采用模型并行策略。 - When the neural network is extremely large, data parallelism can result in a huge cost of gradient synchronization. This is when model parallelism comes in handy. -与数据并行相反,模型并行是将模型进行切分输入不同设备,而每个设备上的数据保持完整和一致。 - In contrast with data parallelism, with model parallelism, you partition the model and feed different parts of the model to various devices. Each device processes the same whole data. -同样以两卡为例,矩阵乘法的模型并行程序如下: - Still, we take two-GPU parallelism as an example. The model parallelism program for the aforementioned matrix multiplication is as follows: ```python import oneflow as flow + placement = flow.placement(type="cuda", ranks=[0, 1]) x = flow.randn(4, 5, placement=placement, sbp=flow.sbp.broadcast) w = flow.randn(5, 8, placement=placement, sbp=flow.sbp.split(dim=1)) @@ -146,110 +95,74 @@ out = flow.matmul(x, w) print(out.shape) # (4, 8) ``` -假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: - Supposing that the above program is in a `test.py` script, we execute the following command to start the program in the terminal: ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` -模型并行示意图: - Illustration of model parallelism: ![Model Parallelism](../parallelism/imgs/matmul_model_paralelism.png) -同样只需要修改以下两部分: - Similarly, the modification is simple: -1. 数据 $x$ 保持完整(`sbp=flow.sbp.broadcast`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -2. 模型 $w$ 按第 1 维度切分(`sbp=flow.sbp.split(dim=1)`),分布在两卡设备上(`placement=flow.placement(type="cuda", ranks=[0, 1])`) -
1. Keep the data $x$ intact (`sbp=flow.sbp.broadcast`), and broadcast it to two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`).
2. Partition the model parameter $w$ on dimension 1 (`sbp=flow.sbp.split(dim=1)`), and distribute the shards across two GPUs (`placement=flow.placement(type="cuda", ranks=[0, 1])`). -### 流水并行 ### Pipeline Parallelism -当神经网络过于巨大,无法在一个设备上存放时,可以选择流水并行策略。 流水并行将网络切分为多个阶段,并分发到不同的计算设备上,各个计算设备之间以“流水线”的方式完成训练。 - If the neural network is too large to be placed on one device, pipeline parallelism can help. Pipeline parallelism means partitioning the neural network into stages and distributing the stages to various devices. The devices will execute their given stage in a pipeline fashion. -以两卡流水并行为例,构造两阶段示例程序: - For example, we build a two-stage program for two-GPU pipeline parallelism: ```python import oneflow as flow + P0 = flow.placement(type="cuda", ranks=[0]) P1 = flow.placement(type="cuda", ranks=[1]) BROADCAST = flow.sbp.broadcast -# 模型第一阶段分布在第 0 卡 + # Place the first stage of the model on GPU 0. -w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST) -# 模型第二阶段分布在第 1 卡 +w0 = flow.randn(5, 8, placement=P0, sbp=BROADCAST) +# Place the second stage of the model on GPU 1. w1 = flow.randn(8, 3, placement=P1, sbp=BROADCAST) -# 随机生成数据模拟输入,注意第一阶段的数据分布在第 0 卡 + # Randomly generate data to be used as input. Note that the data for the first stage should be placed on GPU 0. in_stage0 = flow.randn(4, 5, placement=P0, sbp=BROADCAST) out_stage0 = flow.matmul(in_stage0, w0) print(out_stage0.shape) # (4, 8) -# 利用 to_global 将第二阶段的数据分布在第 1 卡 + # Place the data for the second stage on GPU 1 via to_global.
in_stage1 = out_stage0.to_global(placement=P1, sbp=BROADCAST) out_stage1 = flow.matmul(in_stage1, w1) print(out_stage1.shape) # (4, 3) ``` -假设以上程序所在脚本文件为 `test.py`,在 Terminal 运行以下命令启动程序: - Supposing that the above program is in a `test.py` script, we execute the following command to start the program in the terminal: ```shell python3 -m oneflow.distributed.launch --nproc_per_node 2 test.py ``` -以上程序采用矩阵乘法,模拟了一个两阶段神经网络。与数据并行和模型并行不同,流水并行中的数据和模型均未被切分,而是分别将两个阶段分布在不同的设备上进行计算。 - In the above program, we simulate a two-stage neural network with matrix multiplication. Different from data parallelism and model parallelism, pipeline parallelism does not shard the data or the model, but places the two stages of the model on two devices for computation. -Global Tensor 的设计,使得计算过程中,只需通过 `to_global(...)` 方法调整上一阶段的输出数据的分布策略,作为下一阶段的输入数据即可。 - Thanks to the neat design of global tensor, during the computation, all you need to do is adjust the distribution strategy of the output data from the previous stage via `to_global(...)` so the data can be used as the input for the next stage. -### 混合并行 ### Mixed Parallelism -混合并行是结合使用以上两种或三种策略的并行策略。 - Mixed parallelism refers to the combined use of two or three of the above parallelism strategies. -OneFlow 同时支持 `Eager 模式` 和 `Graph 模式` 两种模型运行方式,二者均可用于并行计算策略。 - OneFlow supports two model execution modes: `Eager Mode` and `Graph Mode`. Both modes support parallel computing strategies. -- `Eager 模式` 是 OneFlow 的默认模式,网络模型继承自 `nn.Module` 模块。 -- `Graph 模式` 需要自定义继承自 `nn.Graph` 的类,并对 `Eager 模式` 的网络模型进行复用。 - - `Eager Mode` is the default mode in OneFlow. The neural network model is inherited from `nn.Module`. - In `Graph Mode`, you need to customize the classes inherited from `nn.Graph`, and reuse the neural network model in `Eager Mode`.
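The stage handoff at the heart of the pipeline example above can be pictured without OneFlow: each "device" owns one stage's weight, and the intermediate activation is handed from stage to stage, which is what `to_global(...)` does for a real Global Tensor. A minimal plain-Python sketch (illustrative names, not OneFlow APIs):

```python
# Plain-Python sketch of the two-stage pipeline (no OneFlow):
# "device 0" owns w0, "device 1" owns w1; the intermediate h is handed over.
def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

w0 = [[1.0] * 8 for _ in range(5)]  # stage-0 weight (5, 8), on "device 0"
w1 = [[1.0] * 3 for _ in range(8)]  # stage-1 weight (8, 3), on "device 1"

def stage0(x):  # runs on "device 0"
    return matmul(x, w0)

def stage1(h):  # runs on "device 1"
    return matmul(h, w1)

x = [[1.0] * 5 for _ in range(4)]  # input batch, shape (4, 5)
h = stage0(x)    # shape (4, 8); handed from "device 0" to "device 1"
out = stage1(h)  # shape (4, 3)
assert (len(out), len(out[0])) == (4, 3)
```

In a real pipeline, successive micro-batches flow through the stages concurrently; this sketch only shows the shape and handoff logic for a single batch.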
- -更多关于 `Graph 模式`的细节请参考:[静态图模块 nn.Graph](../basics/08_nn_graph.md) - For more information on `Graph Mode`, please check: [Static Graph Interface: nn.Graph](../basics/08_nn_graph.md) -此处以 `4 卡`混合并行程序为例进行介绍。 - -The following example is a mixed parallelism program for 4 GPUs. - -!!! Note - 分别 **点击** 以下 `Eager` 或 `Graph` 标签,查看 两种模式的示例代码 +The following example is a mixed parallelism program for `4 GPUs`. !!! Note **Click** `Eager` and `Graph` for the corresponding sample code @@ -260,38 +173,44 @@ The following example is a mixed parallelism program for 4 GPUs. ```python import oneflow as flow import oneflow.nn as nn + P01 = flow.placement(type="cuda", ranks=[0, 1]) P23 = flow.placement(type="cuda", ranks=[2, 3]) + class StageModule(nn.Module): def __init__(self, in_dims, out_dims, placement=None, sbp=None): super().__init__() self.w = nn.Parameter( flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) ) + def forward(self, x): out = flow.matmul(x, self.w) return out + class ModuleModel(nn.Module): def __init__(self): super().__init__() - # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + # The first stage of the model: execute data parallelism on GPU 0 and 1. self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) - # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + # The second stage of the model: execute model parallelism on GPU 2 and 3. self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) - def forward(self, x): - # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + + def forward(self, x): + # First stage: the data is partitioned across GPU 0 and 1 for data parallelism.
in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) out_stage1 = self.m_stage1(in_stage1) + return out_stage0, out_stage1 + + if __name__ == "__main__": model = ModuleModel() - # 需要将输入数据切分,用于数据并行 # Partition the input data for data parallelism. in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = model(in_stage0) @@ -303,35 +222,41 @@ The following example is a mixed parallelism program for 4 GPUs. ```python import oneflow as flow import oneflow.nn as nn + P01 = flow.placement(type="cuda", ranks=[0, 1]) P23 = flow.placement(type="cuda", ranks=[2, 3]) + class StageModule(nn.Module): def __init__(self, in_dims, out_dims, placement=None, sbp=None): super().__init__() self.w = nn.Parameter( flow.randn(in_dims, out_dims, placement=placement, sbp=sbp) ) + def forward(self, x): out = flow.matmul(x, self.w) return out + class ModuleModel(nn.Module): def __init__(self): super().__init__() - # 模型第一阶段在第 0 和第 1 卡上进行数据并行计算 + # The first stage of the model: execute data parallelism on GPU 0 and 1. self.m_stage0 = StageModule(5, 8, placement=P01, sbp=flow.sbp.broadcast) - # 模型第二阶段在第 2 和第 3 卡上进行模型并行计算 + # The second stage of the model: execute model parallelism on GPU 2 and 3. self.m_stage1 = StageModule(8, 3, placement=P23, sbp=flow.sbp.split(dim=1)) def forward(self, x): - # 第一阶段,数据切分在第 0 和第 1 卡,用于数据并行 + # First stage: the data is partitioned across GPU 0 and 1 for data parallelism. out_stage0 = self.m_stage0(x) - # 第二阶段需要将输入数据还原完整,并转移至第 2 和第 3 卡,用于模型并行 + # Second stage: stitch the data together and pass them to GPU 2 and 3 for model parallelism. in_stage1 = out_stage0.to_global(placement=P23, sbp=flow.sbp.broadcast) out_stage1 = self.m_stage1(in_stage1) + return out_stage0, out_stage1 + # Graph class GraphModel(nn.Graph): def __init__(self): @@ -339,48 +264,35 @@ The following example is a mixed parallelism program for 4 GPUs. 
self.model = ModuleModel() self.model.m_stage0.config.set_stage(stage_id=0, placement=P01) self.model.m_stage1.config.set_stage(stage_id=1, placement=P23) + def build(self, x): return self.model(x) + if __name__ == "__main__": graph = GraphModel() - # 需要将输入数据切分,用于数据并行 # Partition the input data for data parallelism. in_stage0 = flow.randn(4, 5, placement=P01, sbp=flow.sbp.split(dim=0)) out_stage0, out_stage1 = graph(in_stage0) print(out_stage0.shape, out_stage1.shape) # (4, 8) (4, 3) ``` -以上程序构建了一个两阶段网络,其 `2 机 2 卡` 并行方式如下图所示: - The above programs construct a two-stage network, whose `two-machine two-GPU` parallelism is illustrated as follows: -模型的两个阶段分别运行在两台机器进行流水并行,且第一阶段在第一台机器上进行两卡数据并行,第二阶段在第二台机器上进行两卡模型并行。 - The two stages of the model are separately executed on two machines, which constitutes pipeline parallelism. For the first stage, Machine 0 executes two-GPU data parallelism; for the second stage, Machine 1 executes two-GPU model parallelism. -**运行方式:** - **Execution** -`Eager 模式` 和 `Graph 模式` 的运行方式一致,假设脚本文件名为 `test.py` - `Eager Mode` and `Graph Mode` share the same way of execution. Assuming that the script is a `test.py` file, -1. 单机四卡启动方式为: - 1. For a single-machine 4-GPU environment, here is how it is started: ```shell python3 -m oneflow.distributed.launch --nproc_per_node 4 test.py ``` -2. oneflow 分布式工具支持多机多设备并行,以 `2 机 2 卡` 环境为例,启动方式如下: - 2. The OneFlow distribution tool supports multi-machine multi-GPU parallelism.
For example, for a `two-machine two-GPU` environment, here is how it is started: - - 在 第 0 号机器上运行: Execution on Machine 0: @@ -389,12 +301,10 @@ The two stages of the model are separately executed on two machines, which const --nnodes=2 \ --node_rank=0 \ --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ # 第 0 号机器的 IP # IP of Machine 0 + --master_addr="192.168.1.1" \ # IP of Machine 0 --master_port=7788 \ test.py ``` - - 在 第 1 号机器上运行: Execution on Machine 1: @@ -403,24 +313,17 @@ The two stages of the model are separately executed on two machines, which const --nnodes=2 \ --node_rank=1 \ --nproc_per_node=2 \ - --master_addr="192.168.1.1" \ # 第 0 号机器的 IP # IP of Machine 0 + --master_addr="192.168.1.1" \ # IP of Machine 0 --master_port=7788 \ test.py ``` - - 注意要将 `master_addr` 设置为第 0 号机器的 IP Note that `master_addr` should be set to the IP of Machine 0. -## 结语 ## Conclusion -并行策略的选择影响着训练效率,框架对并行训练的接口支持程度,决定了算法工程师的开发效率。 - Your training efficiency is dependent on your choice of parallelism strategy, while the development efficiency of algorithm engineers is largely affected by how well their framework supports parallel training. -本文介绍了数据并行、模型并行、流水并行以及混合并行这些分布式并行策略,通过示例展示了 OneFlow 针对分布式训练所做的系统级设计和创新,以便于用户轻松上手分布式训练。 - To sum up, in this article, we explain four distributed parallelism strategies: data parallelism, model parallelism, pipeline parallelism, and mixed parallelism. Also, we introduce the system-level innovations of OneFlow that allow users to apply distributed training more easily.
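The two launch commands above differ only in `--node_rank`. Each process ends up with a global rank that, by the usual launcher convention, equals `node_rank * nproc_per_node + local_rank`, so placements like `ranks=[0, 1]` and `ranks=[2, 3]` map onto Machine 0 and Machine 1 respectively. A sketch of that bookkeeping (illustrative only, not the launcher's actual implementation):

```python
# Sketch of the rank bookkeeping behind the two launch commands
# (illustrative only; not oneflow.distributed.launch's actual code).
NNODES = 2          # --nnodes=2
NPROC_PER_NODE = 2  # --nproc_per_node=2

def global_rank(node_rank, local_rank):
    # Assumed convention: ranks are numbered node by node.
    return node_rank * NPROC_PER_NODE + local_rank

world_size = NNODES * NPROC_PER_NODE  # 4 processes in total
ranks = [global_rank(n, l) for n in range(NNODES)
         for l in range(NPROC_PER_NODE)]
assert ranks == [0, 1, 2, 3]  # Machine 0 hosts ranks 0-1, Machine 1 ranks 2-3
```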
From cb83a4e85dd6dfd9e124aa9c759f7b5853217c72 Mon Sep 17 00:00:00 2001 From: Hu Yanjun <100749531+httpshirley@users.noreply.github.com> Date: Mon, 5 Sep 2022 15:40:58 +0800 Subject: [PATCH 19/25] Update global_tensor_distributed.md --- en/docs/cookies/global_tensor_distributed.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index 3a0c449e..d9967970 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -1,4 +1,4 @@ -# Using Global Tensor for Multi-Device Multi-GPU Programming: Distributed Parallelism Strategies +# Using Global Tensor for Distributed Programming: Distributed Parallelism Strategies By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) From 8042d0b3af58c2c32678011aa7d7cb27c77d4f54 Mon Sep 17 00:00:00 2001 From: Eco Date: Mon, 5 Sep 2022 18:09:14 +0800 Subject: [PATCH 20/25] Update global_tensor.md --- en/docs/cookies/global_tensor.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/en/docs/cookies/global_tensor.md b/en/docs/cookies/global_tensor.md index ffa9c7c4..c3cdc25a 100644 --- a/en/docs/cookies/global_tensor.md +++ b/en/docs/cookies/global_tensor.md @@ -1,4 +1,4 @@ -# Using Global Tensor to Program on Multi-Device Multi-GPU: Basic Operations +# Using Global Tensor for Distributed Programming: Basic Operations By [YaoChi](https://github.com/doombeaker), [Xu Xiaoyu](https://github.com/strint), [Zuo Yihao](https://github.com/Alive1024), [Guoliang Cheng](https://github.com/lmyybh), [Shen Jiali](https://github.com/Carly-Shen) @@ -200,7 +200,7 @@ In terms of type, the biggest difference between the Global Tensor and the gener The function of placement in global data distribution type is to specify the device group where data is distributed: -- The parameter `type` specifies the physical device type. 
`cuda represents the GPU device memory, and `cpu` refers to the CPU device memory. +- The parameter `type` specifies the physical device type. `cuda` represents the GPU device memory, and `cpu` refers to the CPU device memory. - The parameter `ranks` specifies the process ID set. Because each rank corresponds to one physical device, `ranks` can also be seen as the device ID set. Actually, `ranks` is an nd-array composed of rank ID, which supports high-dimensional device arrangement. For more details, please refer to [oneflow.placement](https://oneflow.readthedocs.io/en/v0.8.1/tensor_attributes.html#oneflow.placement). @@ -251,7 +251,7 @@ Here, the `to_global` conversion has merged the Local Tensors. Generally speakin Global Tensor’s type conversion can infer and execute the communication operations automatically. So, algorithm developers can concentrate on **thinking in data distribution** rather than **thinking in data communication operation**, and what they imagine is what they obtain, which helps them to develop distributed programs more efficiently. -Let’s add by introducing how to apply `numpy()` to the Global Tensor. For random Global Tensor, such as `x_global`, `x_global.numpy()` is equivalent to `x_global.to_global(spb=flow.sbp.broadcast).to_local().numpy()`, which means `x_global.numpy()` will firstly convert the original Global Tensor to one, which SBP is flow.sbp.broadcast(), then conduct a `to_local ` operation and finally invoke `numpy()` for the Local Tensor. Therefore, the `x_global.numpy()` method can obtain complete data. +Let’s add by introducing how to apply `numpy()` to the Global Tensor. For any Global Tensor, such as `x_global`, `x_global.numpy()` is equivalent to `x_global.to_global(sbp=flow.sbp.broadcast).to_local().numpy()`, which means `x_global.numpy()` will first convert the original Global Tensor to one whose SBP is `flow.sbp.broadcast()`, then conduct a `to_local` operation and finally invoke `numpy()` on the Local Tensor.
Therefore, the `x_global.numpy()` method can obtain complete data. ## Global Tensor Participating in Computation @@ -301,7 +301,7 @@ This article has discussed: - Global Tensor supports converting the global data distribution type to implement distributed communication; - OneFlow operators are polymorphic enough to enable the execution of the Global Tensor; -So, this article will come to a close, and it fisrtly introduces how to create a Global Tensor and finally explains the detailed steps for data parallelism computation that is based on a Global Tensor. +To conclude, this article first introduced how to create a Global Tensor and then explained the detailed steps for data parallelism computation based on a Global Tensor. More about parallelism strategies and SBP's inference logic will be discussed in our later articles. From a9dc8f0c7e82840317a1ccf50dcacce6ce4c6f73 Mon Sep 17 00:00:00 2001 From: Eco Date: Mon, 5 Sep 2022 18:30:13 +0800 Subject: [PATCH 21/25] update --- cn/docs/cookies/global_tensor.md | 2 +- cn/docs/cookies/global_tensor_distributed.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/cn/docs/cookies/global_tensor.md b/cn/docs/cookies/global_tensor.md index 4ec2a844..e9982a46 100644 --- a/cn/docs/cookies/global_tensor.md +++ b/cn/docs/cookies/global_tensor.md @@ -1,4 +1,4 @@ -# 使用 Global Tensor 进行多机多设备编程:基础操作 +# 使用 Global Tensor 进行分布式编程:基础操作 By [YaoChi](https://github.com/doombeaker), [Xu Xiaoyu](https://github.com/strint), [Zuo Yihao](https://github.com/Alive1024), [Guoliang Cheng](https://github.com/lmyybh) diff --git a/cn/docs/cookies/global_tensor_distributed.md b/cn/docs/cookies/global_tensor_distributed.md index 9b26c4ad..2372f182 100644 --- a/cn/docs/cookies/global_tensor_distributed.md +++ b/cn/docs/cookies/global_tensor_distributed.md @@ -1,4 +1,4 @@ -# 使用 Global Tensor 进行多机多设备编程:分布式并行策略 +# 使用 Global Tensor 进行分布式编程:分布式并行策略 By [Guoliang Cheng](https://github.com/lmyybh), [Xu
Xiaoyu](https://github.com/strint) From 9b4b20b61af06cf4ce5ab5aa24549ca762265dcb Mon Sep 17 00:00:00 2001 From: Eco Date: Mon, 5 Sep 2022 18:38:43 +0800 Subject: [PATCH 22/25] Update mkdocs.yml --- cn/mkdocs.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cn/mkdocs.yml b/cn/mkdocs.yml index b1f371b8..a33f2d42 100644 --- a/cn/mkdocs.yml +++ b/cn/mkdocs.yml @@ -134,8 +134,8 @@ nav: - 流水并行训练: parallelism/06_pipeline.md - 实践指南: - - 使用 Global Tensor 进行多机多设备编程:基础操作: cookies/global_tensor.md - - 使用 Global Tensor 进行多机多设备编程:分布式并行策略: cookies/global_tensor_distributed.md + - 使用 Global Tensor 进行分布式编程:基础操作: cookies/global_tensor.md + - 使用 Global Tensor 进行分布式编程:分布式并行策略: cookies/global_tensor_distributed.md - OneFlow 与 ONNX 交互: cookies/oneflow2onnnx.md - 模型部署: cookies/serving.md - 自动混合精度训练: cookies/amp.md From d63ff11bb79b046d9909687280739da7c96ac644 Mon Sep 17 00:00:00 2001 From: Eco Date: Mon, 5 Sep 2022 18:46:20 +0800 Subject: [PATCH 23/25] Update mkdocs.yml --- en/mkdocs.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/en/mkdocs.yml b/en/mkdocs.yml index 4a9d7f01..fd6d4f18 100644 --- a/en/mkdocs.yml +++ b/en/mkdocs.yml @@ -133,8 +133,8 @@ nav: - Pipelining Parallelism: parallelism/06_pipeline.md - Cookbook: - - Basic Operations for Using Global Tensor to Program on Cluster: cookies/global_tensor.md - - Distributed Parallelism Strategies for Using Global Tensor to Program on Cluster: cookies/global_tensor_distributed.md + - Using Global Tensor to Program on Multi-Device Multi-GPU: Basic Operations: cookies/global_tensor.md + - Using Global Tensor for Distributed Programming: Distributed Parallelism Strategies: cookies/global_tensor_distributed.md - OneFlow with ONNX: cookies/oneflow2onnnx.md - Model Deployment: cookies/serving.md - Automatic Mixed Precision Training: cookies/amp.md From 513c095b378d203905cb1cac740b6779b5785e6a Mon Sep 17 00:00:00 2001 From: Eco Date: Tue, 6 Sep 2022 09:21:54 +0800 Subject: [PATCH 24/25] update 
--- en/docs/cookies/global_tensor.md | 2 +- en/docs/cookies/global_tensor_distributed.md | 2 +- en/mkdocs.yml | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/en/docs/cookies/global_tensor.md b/en/docs/cookies/global_tensor.md index c3cdc25a..e7bdb6bd 100644 --- a/en/docs/cookies/global_tensor.md +++ b/en/docs/cookies/global_tensor.md @@ -1,4 +1,4 @@ -# Using Global Tensor for Distributed Programming: Basic Operations +# Distributed Programming with Global Tensor: Basic Operations By [YaoChi](https://github.com/doombeaker), [Xu Xiaoyu](https://github.com/strint), [Zuo Yihao](https://github.com/Alive1024), [Guoliang Cheng](https://github.com/lmyybh), [Shen Jiali](https://github.com/Carly-Shen) diff --git a/en/docs/cookies/global_tensor_distributed.md b/en/docs/cookies/global_tensor_distributed.md index d9967970..fe712174 100644 --- a/en/docs/cookies/global_tensor_distributed.md +++ b/en/docs/cookies/global_tensor_distributed.md @@ -1,4 +1,4 @@ -# Using Global Tensor for Distributed Programming: Distributed Parallelism Strategies +# Distributed Programming with Global Tensor: Distributed Parallelism Strategies By [Guoliang Cheng](https://github.com/lmyybh), [Xu Xiaoyu](https://github.com/strint) diff --git a/en/mkdocs.yml b/en/mkdocs.yml index fd6d4f18..3f5c90e5 100644 --- a/en/mkdocs.yml +++ b/en/mkdocs.yml @@ -133,8 +133,8 @@ nav: - Pipelining Parallelism: parallelism/06_pipeline.md - Cookbook: - - Using Global Tensor to Program on Multi-Device Multi-GPU: Basic Operations: cookies/global_tensor.md - - Using Global Tensor for Distributed Programming: Distributed Parallelism Strategies: cookies/global_tensor_distributed.md + - Distributed Programming with Global Tensor: Basic Operations: cookies/global_tensor.md + - Distributed Programming with Global Tensor: Distributed Parallelism Strategies: cookies/global_tensor_distributed.md - OneFlow with ONNX: cookies/oneflow2onnnx.md - Model Deployment: cookies/serving.md - Automatic Mixed Precision 
Training: cookies/amp.md From 0ff15ae2a4e898e02e3bdab4157bf267ef4dc7ed Mon Sep 17 00:00:00 2001 From: Yao Chi Date: Thu, 8 Sep 2022 16:11:03 +0800 Subject: [PATCH 25/25] Update mkdocs.yml --- en/mkdocs.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/en/mkdocs.yml b/en/mkdocs.yml index 3f5c90e5..459eae4f 100644 --- a/en/mkdocs.yml +++ b/en/mkdocs.yml @@ -133,8 +133,8 @@ nav: - Pipelining Parallelism: parallelism/06_pipeline.md - Cookbook: - - Distributed Programming with Global Tensor: Basic Operations: cookies/global_tensor.md - - Distributed Programming with Global Tensor: Distributed Parallelism Strategies: cookies/global_tensor_distributed.md + - Basic Operations of Distributed Programming with Global Tensor: cookies/global_tensor.md + - Distributed Parallelism Strategies of Distributed Programming with Global Tensor: cookies/global_tensor_distributed.md - OneFlow with ONNX: cookies/oneflow2onnnx.md - Model Deployment: cookies/serving.md - Automatic Mixed Precision Training: cookies/amp.md