Polaris configuration with External Minio S3 (HTTPS) ERROR #2705
Unanswered
MissaouiAhmed asked this question in Q&A
Replies: 1 comment
Hi @MissaouiAhmed, the error happens because the Java runtime does not trust the Minio TLS certificate. The easiest fix is to import the Minio certificate into the Java truststore, which you can do by running:
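A typical import into the JDK's default `cacerts` truststore looks like the following (the `-cacerts` flag assumes Java 9 or later, and `changeit` is the stock store password unless it was changed):

```shell
# Import the Minio server certificate into the JDK default truststore.
# -cacerts targets $JAVA_HOME/lib/security/cacerts; -noprompt skips the
# interactive trust confirmation.
keytool -importcert -trustcacerts -cacerts \
  -alias minio -file /path/to/minio.crt \
  -noprompt -storepass changeit
```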
Replace /path/to/minio.crt with the path to your Minio server certificate, then restart Polaris and your Spark session. This usually fixes the "PKIX path building failed" error. If this works for you, please mark this as the accepted answer.
Hello, I am working through a sample use case:
Deploy Apache Polaris configured to use an external Minio server exposed over HTTPS.
Create a Spark application and create an Iceberg table.
I am getting an "unable to find valid certification path to requested target" error despite having disabled SSL verification.
Any idea how to fix this? Am I missing a configuration?
Thanks for the help.
##########################
POLARIS DOCKER COMPOSE
#########################
services:
  polaris:
    image: apache/polaris:latest
    ports:
      # API port
      - "8181:8181"
      # Optional, allows attaching a debugger to the Polaris JVM
      - "5005:5005"
    environment:
      JAVA_DEBUG: true
      JAVA_DEBUG_PORT: "*:5005"
      POLARIS_BOOTSTRAP_CREDENTIALS: POLARIS,root,s3cr3t
      polaris.realm-context.realms: POLARIS
      AWS_ACCESS_KEY_ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
      AWS_SECRET_ACCESS_KEY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
      AWS_REGION: us-east-1
      quarkus.otel.sdk.disabled: "true"
      POLARIS_SSL_VERIFY: "false"
      JAVA_TOOL_OPTIONS: "-Djdk.internal.httpclient.disableHostnameVerification=true -Dcom.sun.net.ssl.checkRevocation=false"
    healthcheck:
      test: ["CMD", "curl", "http://localhost:8182/q/health"]
      interval: 2s
      timeout: 10s
      retries: 10
      start_period: 10s
  polaris-setup:
    image: alpine/curl
    depends_on:
      polaris:
        condition: service_healthy
    environment:
      - CLIENT_ID=root
      - CLIENT_SECRET=s3cr3t
    volumes:
      - ../assets/polaris/:/polaris
    entrypoint: "/bin/sh"
    command:
      - "-c"
      - >-
        chmod +x /polaris/create-catalog.sh;
        chmod +x /polaris/obtain-token.sh;
        source /polaris/obtain-token.sh;
        echo Creating catalog...;
        export STORAGE_CONFIG_INFO='{"storageType":"S3","endpoint":"https://$MINIO-SERVER:9000","endpointInternal":"https://$MINIO-SERVER:9000","pathStyleAccess":true}';
        export STORAGE_LOCATION='s3a://polaris';
        /polaris/create-catalog.sh POLARIS $$TOKEN;
        echo Extra grants...;
        curl -H "Authorization: Bearer $$TOKEN" -H 'Content-Type: application/json'
        -X PUT
        http://polaris:8181/api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants
        -d '{"type":"catalog", "privilege":"CATALOG_MANAGE_CONTENT"}';
        echo Done.;
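Since the "Failed to get subscoped credentials" error comes from the AWS SDK running inside the Polaris container, the Minio certificate has to be trusted by that JVM. One approach, sketched below, is to mount a pre-built truststore into the container and point `JAVA_TOOL_OPTIONS` at it (the `./certs` path and `truststore.jks` name are illustrative, not part of the original setup; this value replaces the `JAVA_TOOL_OPTIONS` already set above, so carry over any flags still needed):

```yaml
services:
  polaris:
    # ... image, ports, healthcheck as in the compose file above ...
    volumes:
      # truststore.jks built beforehand by importing minio.crt with keytool
      - ./certs/truststore.jks:/certs/truststore.jks:ro
    environment:
      # Every JVM started in the container picks this up, including the
      # AWS SDK client Polaris uses to fetch subscoped credentials.
      JAVA_TOOL_OPTIONS: "-Djavax.net.ssl.trustStore=/certs/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
```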
##########################
POLARIS CATALOG
#########################
class Catalog {
    type: INTERNAL
    name: quickstart_catalog
    properties: class CatalogProperties {
        {default-base-location=s3a://polaris}
        defaultBaseLocation: s3a://polaris
    }
    createTimestamp: 1759134555943
    lastUpdateTimestamp: 0
    entityVersion: 1
    storageConfigInfo: class AwsStorageConfigInfo {
        class StorageConfigInfo {
            storageType: S3
            allowedLocations: [s3a://polaris]
        }
        roleArn: null
        externalId: null
        userArn: null
        region: null
        endpoint: https://$MINIO-SERVER:9000
        stsEndpoint: null
        endpointInternal: https://$MINIO-SERVER:9000
        pathStyleAccess: true
    }
}
##########################
POLARIS SPARK CLIENT
#########################
spark-shell --master local \
  --deploy-mode client \
  --jars /jars/iceberg-aws-bundle-1.9.0.jar,/jars/iceberg-spark-runtime-3.5_2.12-1.9.0.jar \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.legacy.pathOptionBehavior.enabled=true \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  --conf spark.hadoop.fs.s3a.endpoint=https://$MINIO-SERVER:9000/ \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  --conf spark.hadoop.fs.s3a.access.key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --conf spark.hadoop.fs.s3a.secret.key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider \
  --conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.quickstart_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog \
  --conf spark.sql.catalog.quickstart_catalog.uri=http://$POLAIRS-FQDN:8181/api/catalog \
  --conf spark.sql.catalog.quickstart_catalog.s3a.access-key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --conf spark.sql.catalog.quickstart_catalog.s3a.secret-key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  --conf spark.sql.catalog.quickstart_catalog.s3a.path-style-access=true \
  --conf spark.sql.catalog.quickstart_catalog.credential='root:s3cr3t' \
  --conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL' \
  --conf spark.sql.catalog.quickstart_catalog.warehouse=quickstart_catalog \
  --conf spark.sql.catalog.quickstart_catalog.ssl.trust-all=true \
  --conf spark.sql.catalog.quickstart_catalog.token-refresh-enabled=false
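Note that `ssl.trust-all` does not appear to be a property the Iceberg REST client or S3A honors for the TLS handshake; the driver JVM itself has to trust the Minio certificate. One way to do that, sketched below under the assumption that a truststore containing the Minio certificate exists at an illustrative `/certs/truststore.jks` path, is via the driver's JVM options:

```shell
# S3A and the AWS SDK use the JVM truststore for TLS, so point the
# Spark driver JVM at a truststore that contains the Minio certificate.
spark-shell --master local \
  --conf "spark.driver.extraJavaOptions=-Djavax.net.ssl.trustStore=/certs/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit" \
  # ... remaining --jars and --conf flags as in the command above ...
```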
########################
scala> spark.sql(s"CREATE NAMESPACE quickstart_catalog.my_ns")
scala> spark.sql(s"CREATE TABLE quickstart_catalog.my_ns.demo_table1 (id int) USING iceberg LOCATION 's3a://polaris/my_ns/demo_table1'")
#########
ERROR
#########
25/09/29 01:41:08 WARN OutputStatisticsOutputDatasetFacetBuilder: No jobId found in context
25/09/29 01:41:08 WARN InputFieldsCollector: Could not extract dataset identifier from org.apache.spark.sql.catalyst.analysis.ResolvedIdentifier
org.apache.iceberg.exceptions.RESTException: Unable to process: Failed to get subscoped credentials: Unable to execute HTTP request: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target (SDK Attempt Count: 4)
  at org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:248)
  at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:123)
  at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:107)
  at org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:215)
  at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:299)
  at org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:88)
  at org.apache.iceberg.rest.RESTSessionCatalog$Builder.create(RESTSessionCatalog.java:771)
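For the truststore import mentioned in the reply, the Minio server certificate can be fetched directly over TLS; a sketch using the question's `$MINIO-SERVER` placeholder:

```shell
# Fetch the server's certificate chain and save the leaf cert as PEM,
# ready to be imported with keytool.
openssl s_client -connect $MINIO-SERVER:9000 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > minio.crt
```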