diff --git a/specification/src/development/compilation.md b/specification/src/development/compilation.md index b8b302d..9264533 100644 --- a/specification/src/development/compilation.md +++ b/specification/src/development/compilation.md @@ -127,7 +127,7 @@ Then you can use the `make.py` script to install what you like: ```bash python3 make.py libbrane_cli ``` -- To build the servives for a control node, run: +- To build the services for a central node, run: ```bash python3 make.py instance ``` diff --git a/user-guide/src/SUMMARY.md b/user-guide/src/SUMMARY.md index e899e69..35bafa4 100644 --- a/user-guide/src/SUMMARY.md +++ b/user-guide/src/SUMMARY.md @@ -12,7 +12,7 @@ - [Installation](./system-admins/installation/introduction.md) - [Dependencies](./system-admins/installation/dependencies.md) - [`branectl`](./system-admins/installation/branectl.md) - - [Control node](./system-admins/installation/control-node.md) + - [Central node](./system-admins/installation/central-node.md) - [Worker node](./system-admins/installation/worker-node.md) - [Proxy node](./system-admins/installation/proxy-node.md) diff --git a/user-guide/src/config/admins/infra.md b/user-guide/src/config/admins/infra.md index 1d1a90f..fc34744 100644 --- a/user-guide/src/config/admins/infra.md +++ b/user-guide/src/config/admins/infra.md @@ -1,16 +1,16 @@ # The infrastructure file _source [`InfraFile`](/docs/brane_cfg/infra/struct.InfraFile.html) in [`brane_cfg/infra.rs`](/docs/src/brane_cfg/infra.rs.html)._ -The infrastructure file, or more commonly referenced as the `infra.yml` file, is a control node configuration file that is used to define the worker nodes part of a particular BRANE instance. Its location is defined by the [`node.yml`](./node.md) file. +The infrastructure file, more commonly referred to as the `infra.yml` file, is a central node configuration file that defines the worker nodes that are part of a particular BRANE instance. 
Its location is defined by the [`node.yml`](./node.md) file. -The [`branectl`](TODO) tool can generate this file for you, using the [`branectl generate infra`](TODO) subcommand. See the [chapter on installing a control node](../../system-admins/installation/control-node.md) for a realistic example. +The [`branectl`](TODO) tool can generate this file for you, using the [`branectl generate infra`](TODO) subcommand. See the [chapter on installing a central node](../../system-admins/installation/central-node.md) for a realistic example. ## Toplevel layout The `infra.yml` file is written in [YAML](https://yaml.org). It features only the following toplevel field: - `locations`: A map that details the nodes present in the instance. It maps from strings, representing the node identifiers, to another map with three fields: - - `name`: Defines a human-friendly name for the node. This is only used on the control node, and only to make some logging messages nicer; there are therefor no constraints on this name. + - `name`: Defines a human-friendly name for the node. This is only used on the central node, and only to make some logging messages nicer; there are therefore no constraints on this name. - `delegate`: The address of the delegate service (i.e., `brane-job`) on the target worker node. Must be given using a scheme (either `http` or `grpc`), an IP address or hostname and a port. - `registry`: The address of the local registry service (i.e., `brane-reg`) on the target worker node. Must be given using a scheme (`https`), an IP address or hostname and a port. diff --git a/user-guide/src/config/admins/introduction.md b/user-guide/src/config/admins/introduction.md index 8a1ad41..f934e8a 100644 --- a/user-guide/src/config/admins/introduction.md +++ b/user-guide/src/config/admins/introduction.md @@ -7,7 +7,7 @@ The configuration files for administrators are sorted by node type. 
The files ar ## Control node -- [`infra.yml`](./infra.md): A **YAML** file that defines the worker nodes in the instance represented by the control node. +- [`infra.yml`](./infra.md): A **YAML** file that defines the worker nodes in the instance represented by the central node. - [`proxy.yml`](./proxy.md): A **YAML** file that defines the proxy settings for outgoing node traffic. Can also be found on the [worker](#worker-node) and [proxy](#proxy-node) nodes. - [`node.yml`](./node.md): A **YAML** file that defines the environment settings for this node, such as paths of the directories and the other configuration files, ports, hostnames, etc. Can also be found on the [worker](#worker-node) and [proxy](#proxy-node) nodes. diff --git a/user-guide/src/config/admins/node.md b/user-guide/src/config/admins/node.md index bd7543d..8c019ee 100644 --- a/user-guide/src/config/admins/node.md +++ b/user-guide/src/config/admins/node.md @@ -3,13 +3,13 @@ _source Note that all paths defined in the `node.yml` file _must_ be absolute paths, since they are mounted as Docker volumes. diff --git a/user-guide/src/config/admins/proxy.md b/user-guide/src/config/admins/proxy.md index 37e04cc..59c639e 100644 --- a/user-guide/src/config/admins/proxy.md +++ b/user-guide/src/config/admins/proxy.md @@ -3,7 +3,7 @@ _source In the future, a third option might be to download the standard images from [DockerHub](https://hub.docker.com/). However, due to the experimental nature of the framework, the images are not yet published. Instead, rely on `branectl` to make the process easy for you. + + +### Downloading prebuilt images +The recommended way to download the Brane images is to use `branectl`. This will download the images to `.tar` files, which can be sent around at your leisure; and, if you will be deploying the framework on a device where internet access is limited or restricted, you can also use it to download Brane's auxiliary images ([ScyllaDB](https://www.scylladb.com/)). 
+ +Run the following command to download the Brane services themselves: +```bash +# Download the images +branectl download services central -f +``` + +And to download the auxiliary images (run in addition to the previous command): +```bash +branectl download services auxillary -f +``` +(the `-f` will automatically create missing directories for the target output path) + +Once these complete successfully, you should have the images for the central node in the directory `target/release`. While this path may be changed, it is recommended to stick to the default to make the commands in subsequent sections easier. + +> info By default, `branectl` will download the version for which it was compiled. However, you can change this with the `--version` option: +> ```bash +> # You should change this on all download commands +> branectl download services central --version 1.0.0 +> ``` +> +> Note, however, that not every Brane version may have the same services or the same method of downloading, and so this option may fail. Download the `branectl` for the desired version instead for a more reliable experience. + + +### Compiling the images +The other way to obtain the images is to compile them yourself. If you want to do so, refer to the [compilation instructions](/specification/development/compilation.html) over at the [Brane: A Specification](/specification) book. + + +## Generating configuration +Once you have downloaded the images, it is time to set up the configuration files for the node. These files determine the type of node, as well as any of the node's properties and network specifications. 
+ +For a central node, this means generating the following files: +- An infrastructure file (`infra.yml`), which will determine the worker nodes available in the instance; +- A proxy file (`proxy.yml`), which describes if any proxying should occur and how; and +- A node file (`node.yml`), which will contain the node-specific configuration like service names, ports, file locations, etc. + +All of these can be generated with `branectl` for convenience. + +First, we generate the `infra.yml` file. This can be done using the following command: +```bash +branectl generate infra <ID>:<ADDR> ... +``` +Here, multiple `<ID>:<ADDR>` pairs can be given, one per worker node that is available to the instance. In such a pair, the `<ID>` is the location ID of that domain (which must be the same as indicated in that node; see the chapter for [setting up worker nodes](./worker-node.md)), and the `<ADDR>` is the address (IP or hostname) where that domain is available. + +For example, suppose that we want to instantiate a central node for a Brane instance with two worker nodes: one called `amy`, at `amy-worker-node.com`, and one called `bob`, at `192.0.2.2`. We would generate an `infra.yml` as follows: +```bash +branectl generate infra -f -p ./config/infra.yml amy:amy-worker-node.com bob:192.0.2.2 +``` + +Running this command will generate the file `./config/infra.yml` for you, with default settings for each domain. If you want to change these, you can simply use more options and flags in the tool itself (see the [`branectl` documentation](../../config/admins/backend.md) or the builtin `branectl generate infra --help`), or change the file manually (see the [`infra.yml` documentation](../../config/admins/infra.md)). + +> info While the `-f` flag (fix missing directories) and the `-p` option (path of generated file) are not required, you will typically use these to make your life easier down the road. See the `branectl generate node` command below to find out why. + +Next, we will generate the `proxy.yml` file. 
Typically, this configuration can be left to the default settings, and so the following command will do the trick in most situations: +```bash +branectl generate proxy -f -p ./config/proxy.yml +``` + +A `proxy.yml` file should be available in `./config/proxy.yml` after running this command. + +The contents of this file will typically only differ if you have advanced networking requirements. If so, consult the [`branectl` documentation](TODO) or the builtin `branectl generate proxy --help`, or the [`proxy.yml` documentation](../../config/admins/proxy.md). + +> info This file may be skipped if you are setting up an external proxy node for this node. See the [chapter on proxy nodes](./proxy-node.md) for more information. + +Then we will generate the final file, the `node.yml` file. This file is done last, because it itself defines where the BRANE software may find any of the other configuration files. + +When generating this file, it is possible to manually specify where to find each of those files. However, in practice, it is more convenient to make sure that the files are at the default locations that the tool expects. The following tree structure displays the default locations for the configuration of a central node: +``` +<parent directory> +├ config +│ ├ certs +│ │ └ <domain certificates> +│ ├ infra.yml +│ └ proxy.yml +└ node.yml +``` + +The `config/certs` directory will be used to store the certificates for each of the domains; we will do that in the [following section](#adding-certificates). + +Assuming that you have the files stored as above, the following command can be used to create a `node.yml` for a central node: +```bash +branectl generate node -f central <HOSTNAME> +``` + +Here, `<HOSTNAME>` is the address where any worker node may reach the central node. Only the hostname will suffice (e.g., `some-domain.com`), but any scheme or path you supply will be automatically stripped away. + +The `-f` flag will make sure that any of the missing directories (e.g., `config/certs`) will be generated automatically. 
+ +Once again, you can change many of the properties in the `node.yml` file by specifying additional command-line options (see the [`branectl` documentation](TODO) or the builtin `branectl generate node --help`) or by changing the file manually (see the [`node.yml` documentation](../../config/admins/node.md)). + +> warning Due to a [bug](https://github.com/epi-project/brane/issues/27) in one of the framework's dependencies, it cannot handle certificates on IP addresses. To work around this issue, the `-H` option is provided; it can be used to specify a certain hostname/IP mapping for this node only. Example: +> ```bash +> # We can address '192.0.2.2' with 'bob-domain' now +> branectl generate node -f -H bob-domain:192.0.2.2 central central-domain.com +> ``` +> Note that this is local to this domain only; you have to specify this on other nodes as well. For more information, see the [`node.yml` documentation](../../config/admins/node.md). +> > info Since the above is highly localized, it can be abused to do node-specific routing, by assigning the same hostname to different IPs on different machines. Definitely entering "hacky" territory here, though... + + +## Adding certificates +Before the framework can be fully used, the central node will need the public certificates of the worker nodes to be able to verify their identity during connection. Since we assume Brane may be running in a decentralized and shielded environment, the easiest approach is to add each domain's certificate to the `config/certs` directory. + +To do so, [obtain the public certificate](./worker-node.md#generating-certificates) of each of the workers in your instance. Then, navigate to the `config/certs` directory (or wherever you pointed it to in `node.yml`), and do the following for each certificate: +1. Create a directory with that domain's name (for the example above, you would create a directory named `amy` for that domain) +2. Move the certificate to that folder and call it `ca.pem`. 
+ +At runtime, the Brane services will look for the peer domain's identity by looking up the folder with that domain's name. Thus, make sure that every worker in your system has a name that your filesystem can represent. + + +## Launching the instance +Finally, now that you have the images and the configuration files, it's time to start the instance. + +We assume that you have installed your images to `target/release`. If you have built your images in development mode, however, they will be in `target/debug`; see the box below for the command then. + +This can be done with one `branectl` command: +```bash +branectl start central +``` + +This will launch the services in the local Docker daemon, which completes the setup! + +> info The command above assumes default locations for the images (`./target/release`) and for the `node.yml` file (`./node.yml`). If you use non-default locations, however, you can use the following flags: +> - Use `-n` or `--node` to specify another location for the `node.yml` file: +> ```bash +> branectl -n <PATH> start central +> ``` +> It will define the rest of the configuration locations. +> - If you have installed all images to a folder other than `./target/release` (e.g., `./target/debug`), you can use the quick option `--image-dir` to change the folders. Specifically: +> ```bash +> branectl start --image-dir "./target/debug" central +> ``` +> - If you want to use pre-downloaded images for the auxiliary services (`aux-scylla`) that are in the same folder as the one indicated by `--image-dir`, you can specify `--local-aux` to use the folder version instead: +> ```bash +> branectl start central --local-aux +> ``` +> - You can also specify the location of each image individually. To see how, refer to the [`branectl` documentation](TODO) or the builtin `branectl start --help`. + +> warning Note that the Scylla database this command launches might need a minute to come online, even though its container already reports ready. 
Thus, before you can use your instance, wait until `docker ps` shows all Brane containers running (in particular the `brane-api` service will crash until the Scylla service is done). You can use `watch docker ps` if you don't want to re-call the command yourself. + + +## Next +Congratulations, you have configured and set up a Brane central node! + +Depending on which domains you are in charge of, you may also have to set up one or more [worker nodes](./worker-node.md) or [proxy nodes](./proxy-node.md). Note, though, that these chapters are written to be used on their own, so parts of them overlap with this chapter. + +Otherwise, you can move on to other work! If you want to test your instance like a normal user, you can go to the documentation for [Software Engineers](../../software-engineers/introduction.md) or [Scientists](../../scientists/introduction.md). diff --git a/user-guide/src/system-admins/installation/control-node.md b/user-guide/src/system-admins/installation/control-node.md index 7038cdb..9a6ebc9 100644 --- a/user-guide/src/system-admins/installation/control-node.md +++ b/user-guide/src/system-admins/installation/control-node.md @@ -1,160 +1 @@ -# Control node -Before you follow the steps in this chapter, we assume you have installed the required [dependencies](./dependencies.md) and installed [`branectl`](./branectl.md), as discussed in the previous two chapters. - -If you did, then you are ready to install the control node. This chapter will explain you how to do that. - - -## Obtaining images -Just as with `branectl` itself, there are two ways of obtaining the Docker images and related resources: downloading them from the repository or compiling them. Note, however, that multiple files should be downloaded; and to aid with this, the `branectl` executable can be used to automate the downloading process for you. - -> info In the future, a third option might be to download the standard images from [DockerHub](https://hub.docker.com/). 
However, due to the experimental nature of the framework, the images are not yet published. Instead, rely on `branectl` to make the process easy for you. - - -### Downloading prebuilt images -The recommended way to download the Brane images is to use `branectl`. These will download the images to `.tar` files, which can be send around at your leisure; and, if you will be deploying the framework on a device where internet is limited or restricted, you can also use it to download Brane's auxillary images ([ScyllaDB](https://www.scylladb.com/)). - -Run the following command to download the Brane services themselves: -```bash -# Download the images -branectl download services central -f -``` - -And to download the auxillary images (run in addition to the previous command): -```bash -branectl download services auxillary -f -``` -(the `-f` will automatically create missing directories for the target output path) - -Once these complete successfully, you should have the images for the control node in the directory `target/release`. While this path may be changed, it is recommended to stick to the default to make the commands in subsequent sections easier. - -> info By default, `branectl` will download the version for which it was compiled. However, you can change this with the `--version` option: -> ```bash -> # You should change this on all download commands -> branectl download services central --version 1.0.0 -> ``` -> -> Note, however, that not every Brane version may have the same services or the same method of downloading, and so this option may fail. Download the `branectl` for the desired version instead for a more reliable experience. - - -### Compiling the images -The other way to obtain the images is to compile them yourself. If you want to do so, refer to the [compilation instructions](/specification/development/compilation.html) over at the [Brane: A Specification](/specification)-book for instructions. 
- - -## Generating configuration -Once you have downloaded the images, it is time to setup the configuration files for the node. These files determine the type of node, as well as any of the node's properties and network specifications. - -For a control node, this means generating the following files: -- An infrastructure file (`infra.yml`), which will determine the worker nodes available in the instance; -- A proxy file (`proxy.yml`), which describes if any proxying should occur and how; and -- A node file (`node.yml`), which will contain the node-specific configuration like service names, ports, file locations, etc. - -All of these can be generated with `branectl` for convenience. - -First, we generate the `infra.yml` file. This can be done using the following command: -```bash -branectl generate infra : ... -``` -Here, multiple `:` pairs can be given, one per worker node that is available to the instance. In such a pair, the `` is the location ID of that domain (which must be the same as indicated in that node; see the chapter for [setting up worker nodes](./worker-node.md)), and the `` is the address (IP or hostname) where that domain is available. - -For example, suppose that we want to instantiate a central node for a Brane instance with two worker nodes: one called `amy`, at `amy-worker-node.com`, and one called `bob`, at `192.0.2.2`. We would generate an `infra.yml` as follows: -```bash -branectl generate infra -f -p ./config/infra.yml amy:amy-worker-node.com bob:192.0.2.2 -``` - -Running this command will generate the file `./config/infra.yml` for you, with default settings for each domain. If you want to change these, you can simply use more options and flags in the tool itself (see the [`branectl` documentation](../../config/admins/backend.md) or the builtin `branectl generate infra --help`), or change the file manually (see the [`infra.yml` documentation](../../config/admins/infra.md)). 
- -> info While the `-f` flag (fix missing directories) and the `-p` option (path of generated file) are not required, you will typically use these to make your life easier down the road. See the `branectl generate node` command below to find out why. - -Next, we will generate the `proxy.yml` file. Typically, this configuration can be left to the default settings, and so the following command will do the trick in most situations: -```bash -branectl generate proxy -f -p ./config/proxy.yml -``` - -A `proxy.yml` file should be available in `./config/proxy.yml` after running this command. - -The contents of this file will typically only differ if you have advanced networking requirements. If so, consult the [`branectl` documentation](TODO) or the builtin `branectl generate proxy --help`, or the [`proxy.yml` documentation](../../config/admins/proxy.md). - -> info This file may be skipped if you are setting up an external proxy node for this node. See the [chapter on proxy nodes](./proxy-node.md) for more information. - -Then we will generate the final file, the `node.yml` file. This file is done last, because it itself defines where the BRANE software may find any of the other configuration files. - -When generating this file, it is possible to manually specify where to find each of those files. However, in practise, it is more convenient to make sure that the files are at the default locations that the tools expects. The following tree structure displays the default locations for the configuration of a central node: -``` - -├ config -│ ├ certs -│ │ └ -│ ├ infra.yml -│ └ proxy.yml -└ node.yml -``` - -The `config/certs` directory will be used to store the certificates for each of the domains; we will do that in the [following section](#adding-certificates). 
- -Assuming that you have the files stored as above, the following command can be used to create a `node.yml` for a central node: -```bash -branectl generate node -f central -``` - -Here, `` is the address where any worker node may reach the central node. Only the hostname will suffice (e.g., `some-domain.com`), but any scheme or path you supply will be automatically stripped away. - -The `-f` flag will make sure that any of the missing directories (e.g., `config/certs`) will be generated automatically. - -Once again, you can change many of the properties in the `node.yml` file by specifying additional command-line options (see the [`branectl` documentation](TODO) or the builtin `branectl generate node --help`) or by changing the file manually (see the [`node.yml` documentation](../../config/admins/node.md)). - -> warning Due to a [bug](https://github.com/epi-project/brane/issues/27) in one of the framework's dependencies, it cannot handle certificates on IP addresses. To workaround this issue, the `-H` option is provided; it can be used to specify a certain hostname/IP mapping for this node only. Example: -> ```bash -> # We can address '192.0.2.2' with 'bob-domain' now -> branectl generate node -f -H bob-domain:192.0.2.2 central central-domain.com -> ``` -> Note that this is local to this domain only; you have to specify this on other nodes as well. For more information, see the [`node.yml` documentation](../../config/admins/node.md). -> > info Since the above is highly localized, it can be abused to do node-specific routing, by assigning the same hostname to different IPs on different machines. Definitely entering "hacky" territory here, though... - - -## Adding certificates -Before the framework can be fully used, the central node will need the public certificates of the worker nodes to be able to verify their identity during connection. 
Since we assume Brane may be running in a decentralized and shielded environment, the easiest is to add the domain's certificates to the `config/certs` directory. - -To do so, [obtain the public certificate](./worker-node.md#generating-certificates) of each of the workers in your instance. Then, navigate to the `config/certs` directory (or wherever you pointed it to in `node.yml`), and do the following for each certificate: -1. Create a directory with that domain's name (for the example above, you would create a directory named `amy` for that domain) -2. Move the certificate to that folder and call it `ca.pem`. - -At runtime, the Brane services will look for the peer domain's identity by looking up the folder with their name in it. Thus, make sure that every worker in your system has a name that you filesystem can represent. - - -## Launching the instance -Finally, now that you have the images and the configuration files, it's time to start the instance. - -We assume that you have installed your images to `target/release`. If you have built your images in development mode, however, they will be in `target/debug`; see the box below for the command then. - -This can be done with one `branectl` command: -```bash -branectl start central -``` - -This will launch the services in the local Docker daemon, which completes the setup! - -> info The command above assumes default locations for the images (`./target/release`) and for the `node.yml` file (`./node.yml`). If you use non-default locations, however, you can use the following flags: -> - Use `-n` or `--node` to specify another location for the `node.yml` file: -> ```bash -> branectl -n start central -> ``` -> It will define the rest of the configuration locations. -> - If you have installed all images to another folder than `./target/release` (e.g., `./target/debug`), you can use the quick option `--image-dir` to change the folders. 
Specifically: -> ```bash -> branectl start --image-dir "./target/debug" central -> ``` -> - If you want to use pre-downloaded image for the auxillary services (`aux-scylla`) that are in the same folder as the one indicated by `--image-dir`, you can specify `--local-aux` to use the folder version instead: -> ```bash -> branectl start central --local-aux -> ``` -> - You can also specify the location of each image individually. To see how, refer to the [`branectl` documentation](TODO) or the builtin `branectl start --help`. - -> warning Note that the Scylla database this command launches might need a minute to come online, even though its container already reports ready. Thus, before you can use your instance, wait until `docker ps` shows all Brane containers running (in particular the `brane-api` service will crash until the Scylla service is done). You can use `watch docker ps` if you don't want to re-call the command yourself. - - -## Next -Congratulations, you have configured and setup a Brane control node! - -Depending on which domains you are in charge of, you may also have to setup one or more [worker nodes](./worker-node.md) or [proxy nodes](./proxy-node.md). Note, though, that these are written to be used on their own, so parts of it overlap with this chapter. - -Otherwise, you can move on to other work! If you want to test your instance like a normal user, you can go to the documentation for [Software Engineers](../../software-engineers/introduction.md) or [Scientists](../../scientists/introduction.md). 
+# Central node diff --git a/user-guide/src/system-admins/installation/dependencies.md b/user-guide/src/system-admins/installation/dependencies.md index 24cf2e5..0633454 100644 --- a/user-guide/src/system-admins/installation/dependencies.md +++ b/user-guide/src/system-admins/installation/dependencies.md @@ -60,4 +60,4 @@ If you do not meet this requirement, you will have to compile `branectl` (and an ## Next -Congratulations, you have prepared your machine for running (or compiling) a Brane instance! In the [next chapter](./branectl.md), we will discuss installing the invaluable node management tool `branectl`. After that, depending on which node you want to setup, you can follow the guide for installing [control nodes](./control-node.md) or [worker nodes](./worker-node.md). +Congratulations, you have prepared your machine for running (or compiling) a Brane instance! In the [next chapter](./branectl.md), we will discuss installing the invaluable node management tool `branectl`. After that, depending on which node you want to set up, you can follow the guide for installing [central nodes](./central-node.md) or [worker nodes](./worker-node.md). diff --git a/user-guide/src/system-admins/installation/introduction.md b/user-guide/src/system-admins/installation/introduction.md index 4daa3cb..082e45e 100644 --- a/user-guide/src/system-admins/installation/introduction.md +++ b/user-guide/src/system-admins/installation/introduction.md @@ -5,4 +5,4 @@ There are three types of nodes: a _central node_ (or _control node_), a _worker First, for any kind of node, you should start by [downloading the dependencies](./dependencies.md) on the VM where your worker node will run. Then, install the [`branectl`](./branectl.md) executable, which will help you in setting up and managing your node. -You can then go into the specifics for each kind of node. You can either setup a [control node](./control-node.md), [worker node](./worker-node.md) or a [proxy node](./proxy-node.md). 
+You can then go into the specifics for each kind of node. You can either set up a [central node](./central-node.md), [worker node](./worker-node.md) or a [proxy node](./proxy-node.md). diff --git a/user-guide/src/system-admins/installation/proxy-node.md b/user-guide/src/system-admins/installation/proxy-node.md index 051f87f..29c3a34 100644 --- a/user-guide/src/system-admins/installation/proxy-node.md +++ b/user-guide/src/system-admins/installation/proxy-node.md @@ -93,7 +93,7 @@ Once again, you can change many of the properties in the `node.yml` file by spec ## Generating certificates -In contrast to setting up a control node, a proxy node will have to strongly identify itself to prove to other nodes who it is. This is relevant, because worker nodes may want to download data from one another through their proxy nodes; and if this dataset is private, then the other domains likely won't share it unless they know who they are talking to. +In contrast to setting up a central node, a proxy node will have to strongly identify itself to prove to other nodes who it is. This is relevant, because worker nodes may want to download data from one another through their proxy nodes; and if this dataset is private, then the other domains likely won't share it unless they know who they are talking to. In Brane, the identity of domains is proven by the use of [X.509 certificates](https://en.wikipedia.org/wiki/X.509). Thus, before you can start your proxy node, we will have to generate some certificates. @@ -117,7 +117,7 @@ This should generate multiple files in the `./config/certs` directory, chief of > info Certificate generation is done using [cfssl](https://github.com/cloudflare/cfssl), which is dynamically downloaded by `branectl`. The checksum of the downloaded file is asserted, and if you ever see a checksum-related error, then you might be dealing with a fake binary that is being downloaded under a real address. In that case, tread with care. 
-When the certificates are generated, be sure to share `ca.pem` with the central node. If you are also adminstrating that node, see [here](./control-node.md#adding-certificates) for instructions on what to do with it. +When the certificates are generated, be sure to share `ca.pem` with the central node. If you are also administrating that node, see [here](./central-node.md#adding-certificates) for instructions on what to do with it. ### Client-side certificates @@ -183,6 +183,6 @@ This will launch the services in the local Docker daemon, which completes the se ## Next Congratulations, you have configured and setup a Brane proxy node! -If you are in charge of more proxy nodes, you can repeat the steps in this chapter to add more. If you are also charged with setting up a control node or worker node, you can check the [control node chapter](./control-node.md) or the [worker node chapter](./worker-node.md), respectively, for node specific instructions. +If you are in charge of more proxy nodes, you can repeat the steps in this chapter to add more. If you are also charged with setting up a central node or worker node, you can check the [central node chapter](./central-node.md) or the [worker node chapter](./worker-node.md), respectively, for node-specific instructions. Otherwise, you can move on to other work! If you want to test your node like a normal user, you can go to the documentation for [Software Engineers](../../software-engineers/introduction.md) or [Scientists](../../scientists/introduction.md).
diff --git a/user-guide/src/system-admins/installation/worker-node.md b/user-guide/src/system-admins/installation/worker-node.md index d9c3e3f..0c533af 100644 --- a/user-guide/src/system-admins/installation/worker-node.md +++ b/user-guide/src/system-admins/installation/worker-node.md @@ -132,7 +132,7 @@ Once again, you can change many of the properties in the `node.yml` file by spec ## Generating certificates -In contrast to setting up a control node, a worker node will have to strongly identify itself to prove to other worker nodes who it is. This is relevant, because worker nodes may want to download data from one another; and if this dataset is private, then the other domains likely won't share it unless they know who they are talking to. +In contrast to setting up a central node, a worker node will have to strongly identify itself to prove to other worker nodes who it is. This is relevant, because worker nodes may want to download data from one another; and if this dataset is private, then the other domains likely won't share it unless they know who they are talking to. In Brane, the identity of domains is proven by the use of [X.509 certificates](https://en.wikipedia.org/wiki/X.509). Thus, before you can start your worker node, we will have to generate some certificates. @@ -156,7 +156,7 @@ This should generate multiple files in the `./config/certs` directory, chief of > info Certificate generation is done using [cfssl](https://github.com/cloudflare/cfssl), which is dynamically downloaded by `branectl`. The checksum of the downloaded file is asserted, and if you ever see a checksum-related error, then you might be dealing with a fake binary that is being downloaded under a real address. In that case, tread with care. -When the certificates are generated, be sure to share `ca.pem` with the central node. If you are also adminstrating that node, see [here](./control-node.md#adding-certificates) for instructions on what to do with it. 
+When the certificates are generated, be sure to share `ca.pem` with the central node. If you are also administrating that node, see [here](./central-node.md#adding-certificates) for instructions on what to do with it. ### Client-side certificates @@ -240,7 +240,7 @@ This will launch the services in the local Docker daemon, which completes the se ## Next Congratulations, you have configured and setup a Brane worker node! -If you are in charge of more worker nodes, you can repeat the steps in this chapter to add more. If you are also charged with setting up a control node, you can check the [previous chapter](./control-node.md) for control-node specific instructions. +If you are in charge of more worker nodes, you can repeat the steps in this chapter to add more. If you are also charged with setting up a central node, you can check the [previous chapter](./central-node.md) for central-node-specific instructions. Alternatively, you can also see if a proxy node is something for your use-case in the [next chapter](./proxy-node.md). diff --git a/user-guide/src/system-admins/introduction.md b/user-guide/src/system-admins/introduction.md index 0651e4a..44063dd 100644 --- a/user-guide/src/system-admins/introduction.md +++ b/user-guide/src/system-admins/introduction.md @@ -5,16 +5,16 @@ To know more about the inner workings of Brane, we recommend you checkout the [B ## Background & Terminology -The Brane instance defines a _control node_ (or _central node_), which is where the orchestrator itself and associated services run. This node is run by the _Brane administrators_. Then, as a counterpart to this control node, there is the _worker plane_, which is composed of all the different compute sites that Brane orchestrates over. Each such compute site is referred to as a _domain_, a _location_ or, since Brane treats them as a single entity, a _worker node_.
Multiple worker nodes may exist per physical domain (e.g., a single hospital can have multiple domains for different tasks), but Brane will treat these as conceptually different places. +The Brane instance defines a _central node_ (or _control node_), which is where the orchestrator itself and associated services run. This node is run by the _Brane administrators_. Then, as a counterpart to this central node, there is the _worker plane_, which is composed of all the different compute sites that Brane orchestrates over. Each such compute site is referred to as a _domain_, a _location_ or, since Brane treats them as a single entity, a _worker node_. Multiple worker nodes may exist per physical domain (e.g., a single hospital can have multiple domains for different tasks), but Brane will treat these as conceptually different places. Within the framework, a _system administrator_ is someone who acts as the 'technical owner' of a certain worker node. They are the ones who can make sure their system is prepared and meets the Brane requirements, and who defines the security requirements of any operation of the framework on their system. They are also the ones who make any data technically available that is published from their domain. And although policies are typically handled by [_policy writers_](../policy-experts/introduction.md), another role in the framework, in practise, this can be the same person as the system administrator. ## The Central node -For every Brane instance, there is typically only one _control node_. Even if multiple VMs are used, the framework expects it to behave like a single node; this is due to the centralized nature of it. +For every Brane instance, there is typically only one _central node_. Even if multiple VMs are used, the framework expects it to behave like a single node; this is due to its centralized nature.
-The control node consists of the following few services: -- The _driver service_ is, as the name suggests, the driving service behing a control node. It takes incoming workflows submitted by scientists, and starts executing them, emitting jobs that need to be executed on the worker nodes. +The central node consists of the following few services: +- The _driver service_ is, as the name suggests, the driving service behind a central node. It takes incoming workflows submitted by scientists, and starts executing them, emitting jobs that need to be executed on the worker nodes. - The _planner service_ takes incoming workflows submitted to the driver service and _plans_ them. This is simply the act of defining which worker node will execute which task, and takes into account available resources on each of the domains, as well as policies that determine if a domain can actually transfer data or execute the job. - The _registry service_ (sometimes called _central registry service_ or _API service_ for disambiguation) is the centralized version of the local registry services (see [below](#the-worker-node)). It acts as a centralized database for the framework, which provides information about which dataset is located where, which domains are participating and where to find them1, and in addition hosts a central package repository. - Finally, the _proxy service_ acts as a gateway between the other services and the outside world to _enable_ proxying (i.e., it does not accept proxied requests, but rather creates them). In addition, it is also the point that handles server certificates and parses client certificates for identifications. @@ -40,7 +40,7 @@ More information on each backend and how to set it up is discussed in the [backe ## Next -To start setting up your own worker node, we recommend checking out the [installation chapters](./installation/introduction.md). These will walk you through everything you need to setup a node, both control nodes and worker nodes. 
+To start setting up your own worker node, we recommend checking out the [installation chapters](./installation/introduction.md). These will walk you through everything you need to set up a node, both central nodes and worker nodes. For information on setting up different backends, check the [backend chapters](./backends/introduction.md).