## Status

Polaris Catalog is open source under an Apache 2.0 license.

- ⭐ Star this repo if you’d like to bookmark and come back to it!
- 📖 Read the <a href="https://snowflake.com/blog/introducing-polaris-catalog/" target="_blank">announcement blog post</a> for more details!

## API Docs

API docs are hosted via GitHub Pages at https://polaris-catalog.github.io/polaris. All updates to the main branch
update the hosted docs.

The Polaris management API docs are found [here](docs/polaris-management/index.html).

The open source Iceberg REST API docs are [here](docs/iceberg-rest/index.html).

Docs are generated using Redocly. They can be regenerated by running the following commands
from the project root directory:

```bash
docker run -p 8080:80 -v ${PWD}:/spec redocly/cli build-docs spec/polaris-management-service.yml --output=docs/polaris-management/index.html
docker run -p 8080:80 -v ${PWD}:/spec redocly/cli build-docs spec/rest-catalog-open-api.yaml --output=docs/iceberg-rest/index.html
```
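
To preview the regenerated docs locally before pushing, any static file server works; for example, assuming Python 3 is available:

```bash
# Serves the generated HTML at http://localhost:8000 (--directory requires Python 3.7+)
python3 -m http.server 8000 --directory docs
```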

# Setup

## Requirements / Setup

- Java JDK >= 21. On a Mac, you can use [jenv](https://www.jenv.be/) to set the appropriate SDK.
- Gradle 8.6 - included in the project; run it with `./gradlew` from the project root.
- Docker - needed if you want to run the project in a containerized environment. A quick way to sanity-check these prerequisites is sketched below.
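
A convenience sketch for checking the prerequisites above (the exact version output will vary by machine):

```bash
java -version        # should report 21 or newer
./gradlew --version  # the wrapper fetches Gradle 8.6 on first use
docker --version     # only needed for the containerized workflows below
```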

Command-Line getting started
-------------------
Polaris is a multi-module project with three modules:

- `polaris-core` - The main Polaris entity definitions and core business logic
- `polaris-server` - The Polaris REST API server
- `polaris-eclipselink` - The EclipseLink implementation of the MetaStoreManager interface

Build the binary (the first build may require installing a new JDK version). This build runs integration tests by default.

```
./gradlew build
```
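
To iterate faster, Gradle's standard `-x` flag can skip the test tasks (assuming the default task wiring):

```
./gradlew build -x test
```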

Run the Polaris server locally on localhost:8181:

```
./gradlew runApp
```
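
To confirm the server is up, hit the bound port; even an authentication or not-found error proves it is listening. The path below mirrors the token endpoint used later and is an assumption:

```
curl -i http://localhost:8181/api/catalog/v1/config
```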

While the Polaris server is running, run the regression tests (end-to-end tests) in another terminal:

```
./regtests/run.sh
```

Docker Instructions
-------------------

Build the image:

```
docker build -t localhost:5001/polaris:latest .
```

Run it in standalone mode. This runs a single container that binds the container's port `8181` to localhost's `8181`:

```
docker run -p 8181:8181 localhost:5001/polaris:latest
```
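
To run the container in the background and follow its logs instead (standard Docker flags; the container name is illustrative):

```
docker run -d --name polaris -p 8181:8181 localhost:5001/polaris:latest
docker logs -f polaris
```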

# Running the tests

## Unit and Integration tests

Unit and integration tests are run using Gradle. To run all tests, use the following command:

```bash
./gradlew test
```
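
To run a single test class instead of the whole suite, Gradle's standard `--tests` filter applies (the pattern below is purely illustrative):

```bash
./gradlew test --tests "*PolarisApplicationTest"
```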

## Regression tests

Regression tests, or functional tests, are stored in the `regtests` directory. They can be executed in a Docker
environment by using the `docker-compose.yml` file in the project root:

```bash
docker compose up --build --exit-code-from regtest
```
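
When the run completes, the compose stack can be torn down as usual:

```bash
docker compose down
```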

They can also be executed outside of Docker by following the setup instructions in
the [README](regtests/README.md).

# Kubernetes Instructions

You can run Polaris as a mini-deployment locally. This will create two pods that bind themselves to port `8181`:

```
./setup.sh
```

You can check the pod and deployment status like so:

```
kubectl get pods
kubectl get deployment
```

If things aren't working as expected, you can troubleshoot like so:

```
kubectl describe deployment polaris-deployment
```
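
The pod logs are usually the next place to look; `kubectl` can tail them by deployment name:

```
kubectl logs deployment/polaris-deployment --tail=100
```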

## Creating a Catalog manually

Before connecting with Spark, you'll need to create a catalog. To create a catalog, generate a token for the root
principal:

```bash
curl -i -X POST \
  http://localhost:8181/api/catalog/v1/oauth/tokens \
  -d 'grant_type=client_credentials&client_id=<principalClientId>&client_secret=<mainSecret>&scope=PRINCIPAL_ROLE:ALL'
```

The response output will contain an access token:

```json
{
  "access_token": "ver:1-hint:1036-ETMsDgAAAY/GPANareallyverylongstringthatissecret",
  "token_type": "bearer",
  "expires_in": 3600
}
```
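
If `jq` is available, the token request and the variable assignment can be combined into one step (a convenience sketch; the placeholders are the same as above):

```bash
export PRINCIPAL_TOKEN=$(curl -s -X POST \
  http://localhost:8181/api/catalog/v1/oauth/tokens \
  -d 'grant_type=client_credentials&client_id=<principalClientId>&client_secret=<mainSecret>&scope=PRINCIPAL_ROLE:ALL' \
  | jq -r .access_token)
```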

Set the contents of the `access_token` field as the `PRINCIPAL_TOKEN` variable, then use curl to invoke the
createCatalog API:

```bash
$ export PRINCIPAL_TOKEN=ver:1-hint:1036-ETMsDgAAAY/GPANareallyverylongstringthatissecret

$ curl -i -X PUT -H "Authorization: Bearer $PRINCIPAL_TOKEN" -H 'Accept: application/json' -H 'Content-Type: application/json' \
  http://${POLARIS_HOST:-localhost}:8181/api/v1/catalogs \
  -d '{"name": "snowflake", "id": 100, "type": "INTERNAL", "readOnly": false}'
```
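
To verify the catalog was created, the same endpoint can be read back with a GET (assuming the management API supports listing, mirroring the PUT above):

```bash
curl -s -H "Authorization: Bearer $PRINCIPAL_TOKEN" -H 'Accept: application/json' \
  http://${POLARIS_HOST:-localhost}:8181/api/v1/catalogs
```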

This creates a catalog called `snowflake`. From here, you can use Spark to create namespaces, tables, etc.
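
A sketch of launching `spark-sql` with Iceberg's REST catalog integration pointed at Polaris; the package coordinates, URI path, and the `polaris` catalog alias are assumptions to adapt to your Spark and Iceberg versions:

```bash
# Registers a Spark catalog named "polaris" backed by the Iceberg REST catalog;
# the warehouse matches the catalog name created above.
bin/spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0 \
  --conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.polaris.type=rest \
  --conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
  --conf spark.sql.catalog.polaris.credential='<principalClientId>:<mainSecret>' \
  --conf spark.sql.catalog.polaris.warehouse=snowflake
```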

You must run the following as the first query in your spark-sql shell to actually use Polaris:

```
use polaris;
```
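
From there, standard Spark SQL DDL applies; the namespace and table names below are purely illustrative:

```
create namespace if not exists db1;
create table db1.t1 (id int, data string) using iceberg;
show tables in db1;
```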