3. lnd-gateway

lnd-gateway is a microservice that sets up a small websocket server and uses it to proxy a limited set of gRPC requests to lnd.

  1. Overview
    1. Dockerfile
    2. docker-compose
    3. Environment
    4. Startup Script
    5. server.js
    6. Command Line
  2. Running in Development

1. Overview

When we get our website up and running, we're going to want a way to generate Lightning invoices so that we can display them as a QR code to the user.

Generating the invoice and detecting when we receive a payment for it are things that happen inside of lnd. lnd lets you create an invoice and has a callback for payment detection through its gRPC interface.

But web browsers don't generally make gRPC requests. They make HTTP REST requests (GET, POST, that kind of thing). Browsers also support websockets for more real-time, two-way, streaming communication between client and server.

So rather than make the browser try to talk to lnd directly, we'd like to make a little middleman process. This middleman process will let web browsers connect to it through websockets, will receive their requests for payment invoice generation, and forward them to lnd through gRPC. It will also do the reverse: listen to lnd through its gRPC callback stream and broadcast payment detection back to the browser through its websocket connection.

In addition to being a translator between websockets and gRPC, having a middleman service is important because it allows us to keep any gRPC-related credentials for lnd secret, not exposed to the browser. We can also keep our lnd instance exposed only to our other processes, and not the whole internet, which is great for security and control.

1.1 Dockerfile

Take a look at services/lnd-gateway/Dockerfile.

All this Dockerfile does is set up a node environment, copy the app files, and install dependencies. It also exposes a port on the container.
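
For reference, a minimal sketch of that kind of Dockerfile might look roughly like this (the base image, working directory, and port here are assumptions, not the literal contents of the file in the repo):

# hypothetical sketch - see services/lnd-gateway/Dockerfile for the real thing
FROM node:16-alpine

WORKDIR /app

# copy the app files into the image
COPY . .

# install dependencies
RUN yarn install

# document the port the server listens on inside the container
EXPOSE 80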

1.2 docker-compose

Let's take a look at the root-level docker-compose.yaml, where we'll find an entry for lnd-gateway that looks like this:

lnd-gateway:
  image: lnd-gateway
  container_name: lnd-gateway
  build: ./services/lnd-gateway/
  env_file: ./services/lnd-gateway/.env.docker
  volumes:
    - shared_lightning_creds:/shared:ro
  depends_on:
    - lnd
  ports:
    # host:container
    - "4040:80"
  command: ["yarn", "start:prod"]

For brevity we won't over-elaborate on things already described in previous pages, but there are a few things to note:

depends_on:
  - lnd

Just like our lnd service "depends on" btcd, this lnd-gateway service depends on lnd. docker-compose is smart enough to start dependencies of dependencies when you bring up a service, so if you type docker-compose up lnd-gateway, it will start btcd first, then lnd, then lnd-gateway.
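
That chain exists because the lnd entry in the same docker-compose.yaml declares its own dependency (simplified here; the full lnd entry was covered on a previous page):

lnd:
  # ...rest of the lnd entry omitted
  depends_on:
    - btcd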

volumes:
  - shared_lightning_creds:/shared:ro

Note the existence of a volume here. This shared_lightning_creds volume was first seen in the lnd description in docker-compose.yaml. It's used to share credentials generated by lnd with lnd-gateway, for use when lnd-gateway makes gRPC calls to lnd - in production only. (We'll get to why prod only in the Environment section below.)

command: ["yarn", "start:prod"]

Our startup command works differently here. Whereas with btcd and lnd we pointed to custom shell scripts to call the node executables with flags, here we see ["yarn", "start:prod"]. We'll break down what that means in the Startup Script section below, but for now just note it.

1.3 Environment

Like in btcd and lnd, there is a sample env file at services/lnd-gateway/.env.sample we can take a look at to see what variables are necessary.

Note that for each environment in the sample, there are not only different values, but different keys, too. For example, in the development section we see this var:

LND_BASE64_CERT=

But in the prod section we see this one:

LND_CERT_PATH=

Since we have different env var values for running as a local process (development) vs running in a container (pseudo-production), we are going to use two different .env files.

We have .env.local to use when we're running as a local process, and .env.docker for running as a container.

1.3.1 .env.local

Because .env.local files are strictly for development, one will never already be present in the project. You'll need to create it yourself.

LND_HOST=0.0.0.0:10009
PORT=4040
LND_BASE64_CERT=<some_value_here>
LND_BASE64_MACAROON=<some_value_here>

LND_HOST is the URI that our lnd instance is accessible at.

PORT is the port on our local host that this service will be accessible at.

LND_BASE64_CERT is the entire contents of the tls.cert credential file generated by lnd, encoded as base64. Read about how to generate it in this guide: Generating base64 lnd creds.

LND_BASE64_MACAROON is the entire contents of the admin.macaroon credential file generated by lnd, encoded as base64. Read about how to generate it in this guide: Generating base64 lnd creds.
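
If you just want the general idea, values like these can usually be produced with something along these lines (the paths are placeholders and depend on where your lnd instance keeps its files; the linked guide is the authoritative reference for this project):

# hypothetical paths - adjust to wherever your lnd credentials live
base64 < /path/to/tls.cert | tr -d '\n'
base64 < /path/to/admin.macaroon | tr -d '\n'

Each command prints a single base64 line you can paste in as the variable's value.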

1.3.2 .env.docker

LND_HOST=lnd:10009
PORT=80
LND_CERT_PATH=/shared/tls.cert
LND_MACAROON_PATH=/shared/admin.macaroon

LND_HOST is the URI that our lnd instance will be accessible at from within the lnd-gateway docker container. Note the value: lnd:10009. When docker-compose starts the lnd service, it also makes it reachable from other containers on the shared network under the hostname lnd.

PORT is the port within the docker container that the service will be accessible at. Remember that we map this port to a different one on our host machine in docker-compose:

ports:
  # host:container
  - "4040:80"

LND_CERT_PATH is the path to the certificate created by lnd that we need to make gRPC calls to lnd. In both lnd and lnd-gateway, we map the /shared directory to that shared volume, so the credentials lnd writes there are readable by lnd-gateway (which, as seen above, mounts the volume read-only).

When using docker, it is easier to use a shared volume like this rather than generate base64 versions of the credentials because: (1) generating the creds is an extra step that takes time, and (2) if we ever need to rotate the creds, we won't have to base64 them again - pointing at the file location means we automatically pick up the new values.

LND_MACAROON_PATH is the path to the admin macaroon file, another credential file created by lnd, also used for gRPC calls, also placed in the shared volume.

1.4 Startup Script

In our docker-compose.yaml we saw that our startup command is this:

command: ["yarn", "start:prod"]

What does it mean?

If you supply multiple arguments in the command array, they'll just be pieced together on the command line as one command. So in the container we'll really be running yarn start:prod.

Yarn is a package manager for javascript, like the older npm; both are frequently used in conjunction with node, a runtime for running javascript outside of the browser.

When we execute yarn start:prod, we're calling the yarn program and passing start:prod as a flag or sub-command, specifying the name of a "script". Yarn looks in the "scripts" section of our package.json file to find a definition for start:prod.

In the package.json file you'll see this part:

"scripts": {
  "start:dev": "NODE_ENV=development babel-node server.js",
  "start:prod": "NODE_ENV=production babel-node server.js"
}

In this section start:prod maps to NODE_ENV=production babel-node server.js. So when we run yarn start:prod, we're really ultimately running our server script, server.js. In this case NODE_ENV=production is just a prefix that sets the NODE_ENV environment variable for that command, and babel-node is technically the program we're executing. babel-node takes server.js as an argument, transpiles it so we can use some more modern javascript features that aren't valid in node by default, then runs it.

1.5 server.js

Open up server.js and take a look, just to see what we're actually running when we start up.

It first does some sanity checking on the env vars that we've provided, based on whether we're in a development or production environment.

Then we create a new LndGrpc instance with our credentials to handle actual gRPC requests to lnd. Take a moment to be thankful that such a library exists and that we don't need to write it ourselves.
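
The shape of that setup is roughly like this (a simplified sketch, not the literal contents of server.js; the option names and accepted credential formats for lnd-grpc are assumptions, so check that package's docs and the real server.js for specifics):

// simplified sketch of constructing the gRPC client from our env vars
import LndGrpc from 'lnd-grpc'

const isProd = process.env.NODE_ENV === 'production'

const grpc = new LndGrpc({
  host: process.env.LND_HOST,
  // production reads credential files from the shared volume,
  // development passes the base64-encoded values from .env.local
  // (the real code may decode or validate these before handing them over)
  cert: isProd ? process.env.LND_CERT_PATH : process.env.LND_BASE64_CERT,
  macaroon: isProd ? process.env.LND_MACAROON_PATH : process.env.LND_BASE64_MACAROON,
})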

Then we create an express server and a websocket server.

In the setup method we attach a bunch of callback functions to different events that can happen on our websocket connections and on the "invoiceStream", the gRPC callback stream from lnd.

We also set up a /heartbeat path in case we'd like to ping the server with an old-school REST request just to see if it's up.

The websocket and invoiceStream callbacks are somewhat self-explanatory so we won't dive into detail.
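
Still, to give a feel for the overall shape, a stripped-down sketch of that wiring might look something like this (the lnd subscription call and the message handling are assumptions based on the description above, not copied from server.js):

// simplified sketch of the websocket server and invoice stream wiring
import http from 'http'
import express from 'express'
import { WebSocketServer } from 'ws' // assumes ws v8+

const app = express()

// old-school REST endpoint, handy for checking the server is alive
app.get('/heartbeat', (req, res) => res.send('ok'))

const server = http.createServer(app)
const wss = new WebSocketServer({ server })

wss.on('connection', (ws) => {
  // a browser connected; listen for its invoice-generation requests
  ws.on('message', (msg) => {
    // hypothetical: parse the request, call lnd over gRPC, send back the invoice
  })
})

// hypothetical: subscribe to lnd's invoice stream and broadcast settled
// invoices to every connected browser
// const invoiceStream = grpc.services.Lightning.subscribeInvoices()
// invoiceStream.on('data', (invoice) => {
//   wss.clients.forEach((client) => client.send(JSON.stringify(invoice)))
// })

server.listen(process.env.PORT, () => {
  console.log(`listening on port ${process.env.PORT}`)
})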

1.6 Command Line

There is no command line for lnd-gateway. Our program inside the container is just one file - server.js. It's not complicated enough to justify a CLI. If we want to query it, we'll just query it over HTTP in the browser. Note that server.js sets up a /heartbeat path just for checking if it's up and running, if you need to.
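
For example, with the container's default port mapping (or the local PORT of 4040), you could check it from your host machine with something like:

curl http://localhost:4040/heartbeat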

2. Running in Development

💡 There are multiple ways to run this process in development

Because we are not just wrapping an already-existing executable into a container like we did for btcd and lnd, we have more options here.

btcd and lnd should always be run as containers because they are essentially pieces that we consider to be maintained separately by others and which only need to be configured for our environment to run properly. We'll never be touching their code. Running them in containers ensures their isolation from our local development environment, and sets a clear expectation for how we can interact with them.

lnd-gateway however is the first process with code that is actually written as part of this project and can be edited. We may want to make a change and then see that change reflected quickly. For this reason we'd definitely like to run our server code locally sometimes, rather than always having to build it into a container and bring the container up.

For that reason the preferred way to run lnd-gateway is locally, not as a container - unless:

  1. We're specifically testing that building and running the container works.
  2. We're focused on working on a separate service, and we'd like to bring lnd-gateway up and down in orchestration with other services for simplicity.

2.1 Running as a local process

First remember to create your .env.local.

Next, your lnd-gateway must have the proper credentials for talking to lnd, and those credentials need to be put into the .env.local that you create. See how to do that in this guide: Generating base64 lnd creds.

Once that's done, run this:

yarn start:dev

The local process should start and you should see a couple of lines like this:

listening on port 4040
If this process is running in a container, it may be mapped to a different host port.

Note that we are using start:dev here, not start:prod, which, as noted above, is what gets called inside the container. These scripts set the NODE_ENV environment variable differently, which affects how credentials for lnd are expected to be provided in other environment variables.

Read more back in the Environment section.

2.2 Running with docker-compose

Run this to build the lnd-gateway container:

docker-compose build lnd-gateway

Run this command to bring the container up:

docker-compose up lnd-gateway

The container should start and you should see a couple of lines like this, printed from the container:

lnd-gateway | listening on port 80
lnd-gateway | If this process is running in a container, it may be mapped to a different host port.

Now let's take a look at step 4, web.