Update README and clean up a few small things
…To accomplish this for systems hosted on cloud.gov, the code in this repository …
## Deploying

Note: Instructions currently assume you will ship to _both_ New Relic and S3. Better configuration is TODO.

All of the following steps take place in the same cf space where the logshipper will reside. Commands in `.profile` look for specific service names, so use the names suggested (or edit `.profile`).

1. Create a user-provided service "newrelic-creds" with your New Relic license key:
   ```sh
   cf create-user-provided-service newrelic-creds -p '{"NEW_RELIC_LICENSE_KEY":"[your key]", "NEW_RELIC_LOGS_ENDPOINT": "[your endpoint]"}'
   ```
   NB: Use the correct NEW_RELIC_LOGS_ENDPOINT for your account. Refer to https://docs.newrelic.com/docs/logs/log-api/introduction-log-api/#endpoint

2. Create an S3 bucket "log-storage" to receive log files:
   ```sh
   cf create-service s3 basic log-storage
   ```

3. Create a user-provided service "cg-logshipper-creds" to provide HTTP basic auth credentials. These will be provided to the logshipper by the service; you will also need to supply them to the log drain service(s) as part of the URL:
   ```sh
   cf create-user-provided-service cg-logshipper-creds -p '{"HTTP_USER": "Some_username_you_provide", "HTTP_PASS": "Some_password"}'
   ```

4. Push the application:
   ```sh
   cf push
   ```

5. Bind the services to the app (now that it exists) and restage it:
   ```sh
   cf bind-service fluentbit-drain newrelic-creds
   cf bind-service fluentbit-drain cg-logshipper-creds
   cf bind-service fluentbit-drain log-storage
   cf restage fluentbit-drain
   ```

6. Check the logs to see if there were any problems:
   ```sh
   cf logs fluentbit-drain --recent
   ```

At this point you should have a running app, but nothing is sending logs to it.
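Step 3 above needs basic auth credentials of your choosing. Rather than inventing a password by hand, you can generate one; a minimal sketch, assuming `openssl` is available (the `logshipper` username is an arbitrary example):

```shell
# Build the JSON payload for the cg-logshipper-creds service with a
# randomly generated password (32 hex characters from openssl).
HTTP_USER="logshipper"                 # arbitrary example username
HTTP_PASS="$(openssl rand -hex 16)"
printf '{"HTTP_USER": "%s", "HTTP_PASS": "%s"}\n' "$HTTP_USER" "$HTTP_PASS"
```

Pass the printed JSON as the `-p` argument to `cf create-user-provided-service cg-logshipper-creds`, and hold on to the password for the drain URL.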

## Setting up a log drain service

Set up one or more log drain services to transmit logs to the logshipper app. You will need the basic auth credentials you generated while deploying the app, as well as the URL of the fluentbit-drain app.

The log drain service should be in the space with the app(s) from which you want to collect logs. The name of the log drain service doesn't matter; "log-drain-to-fluentbit" is just an example.

The `drain-type=all` query parameter tells Cloud Foundry to send both logs and metrics, which is probably what you want. See [Cloud Foundry's log management documentation](https://docs.cloudfoundry.org/devguide/services/log-management.html#:~:text=Where%20%60DRAIN%2DTYPE%2DVALUE%60%20is%20one%20of%20the%20following%3A).
1. Set up a log drain service:
   ```sh
   cf create-user-provided-service log-drain-to-fluentbit -l 'https://Some_username_you_provide:Some_password@[fluentbit-drain app URL]/?drain-type=all'
   ```
2. Bind the log drain service to the app(s):
   ```sh
   cf bind-service hello-world-app log-drain-to-fluentbit
   cf bind-service another-app log-drain-to-fluentbit
   ```
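One caveat not covered above: if the username or password contains characters that are special inside a URL (`@`, `:`, `/`, `?`, `#`), they must be percent-encoded before being embedded in the drain URL. A sketch, assuming `python3` is available; the bracketed host is a placeholder for your fluentbit-drain route:

```shell
# Percent-encode basic auth credentials for safe use in the drain URL.
enc() { python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"; }

HTTP_USER='Some_username_you_provide'
HTTP_PASS='p@ss:word'    # example password that needs encoding
echo "https://$(enc "$HTTP_USER"):$(enc "$HTTP_PASS")@[fluentbit-drain app URL]/?drain-type=all"
# → https://Some_username_you_provide:p%40ss%3Aword@[fluentbit-drain app URL]/?drain-type=all
```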

Logs should begin to flow after a short delay, and you will see traffic hitting the fluent-bit app's web server. The logshipper uses New Relic's Logs API to transfer individual log entries as it processes them. For S3, it batches log entries into files that are uploaded to the bucket when they reach a certain size (default 50M) or when the upload timeout period (default 10 minutes) has passed.
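The size and timeout knobs mentioned above correspond to settings on fluent-bit's S3 output plugin. A sketch of what such an `[OUTPUT]` stanza looks like; the bucket name and region are placeholders, and the repo's actual configuration may differ:

```ini
[OUTPUT]
    # Upload a batch when it reaches total_file_size, or when
    # upload_timeout elapses, whichever comes first.
    name             s3
    match            *
    bucket           log-storage
    region           us-gov-west-1
    total_file_size  50M
    upload_timeout   10m
```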

## Status

- Can run `cf push` and see fluentbit running with the supplied configuration.
- We have tested with a legit NR license key and seen logs appearing in NR.
- Input is configured to accept logs from a cf log-drain service.
- Web server accepts HTTP requests and proxies them to fluent-bit (using TCP).
- Web server requires HTTP basic auth.
- Looks for and uses `HTTPS_PROXY` for egress connections (New Relic's plugin provides this).

### TODO

- Maybe restrict incoming traffic to cloud.gov egress ranges (52.222.122.97/32, 52.222.123.172/32)?
- Document parsing of logs; maybe add examples for parsing common formats.
- Port over all the [`datagov-logstack`](https://github.com/GSA/datagov-logstack) utility scripts for registering drains on apps/spaces.
- Add tests?