Releases: AbsaOSS/hyperdrive-trigger
v0.5.2.2
Bugfixes
- Override log4j version to 2.15.0
v0.5.2.1
v0.5.3
Enhancements
- #540 Add throttling for recurring sensor
- #528 Admin role for admin endpoints
- #535 Show job template usage
- #526 #527 Job templates CRUD operations with History
- #521 UX Hyperdrive ingestion types (HyperConformance Raw to Publish Topic, Offload Raw Topic with HyperConformance, Offload Publish Topic)
Bugfixes
Internal tasks
Application properties changes
New
- recurringSensor.maxJobsPerDuration. Optional (Default value: 8)
- recurringSensor.duration. Optional (Default value: 1h)
- auth.admin.role. Optional (To disable, remove the auth.admin.role property)
- auth.inmemory.admin.user. Optional (Default value: hyperdriver-admin-user)
- auth.inmemory.admin.password. Optional (Default value: hyperdriver-admin-password)
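As an illustration, the new throttling and admin-role properties could be combined in an application.properties file like this (the role name and the shown values are hypothetical examples, not recommendations):

```properties
# Throttle the recurring sensor: at most 8 jobs per 1h window (the defaults)
recurringSensor.maxJobsPerDuration=8
recurringSensor.duration=1h

# Enable the admin role for admin endpoints; omit auth.admin.role to disable
# (ROLE_ADMIN is a hypothetical role name)
auth.admin.role=ROLE_ADMIN
auth.inmemory.admin.user=hyperdriver-admin-user
auth.inmemory.admin.password=hyperdriver-admin-password
```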
v0.5.2
v0.5.1
v0.5.0
Enhancements
Bugfixes
- #494 Use the sparkYarnSinkConfig.executablesFolder property
- #490 Use the shellExecutor.executablesFolder property
- #489 Use auth inmemory config properties
- #457 Job status should be updated immediately if the submission failed
Internal tasks
- #496 Remove generic form from details and sensors form
- #487 Upgrade spring version to v2.5.2
- #483 Use JsonB type for Sensor properties
- #477 Drop deprecated database columns
- #481 Add logging for failed futures
Application properties changes
New
- spark.submitApi. Must be either yarn or emr. Default value: yarn
- spark.emr.clusterId. The id of the EMR cluster (e.g. j-2AXXXXXXGAPLF). Mandatory if spark.submitApi=emr
- spark.emr.filesToDeploy. Optional
- spark.emr.additionalConfs. Optional. This is just a prefix and works like its counterpart sparkYarnSink.additionalConfs
- spark.emr.awsProfile. Optional. Intended for local development
- spark.emr.region. Optional. Intended for local development
Renamed
- kafkaSource.key.deserializer to kafkaSource.properties.key.deserializer
- kafkaSource.value.deserializer to kafkaSource.properties.value.deserializer
- kafkaSource.max.poll.records to kafkaSource.properties.max.poll.records
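A minimal sketch of an EMR-based submission configuration using the properties above (the cluster id is the example from these notes; all other values, including the deserializer classes and region, are illustrative assumptions):

```properties
# Submit Spark jobs through the EMR API instead of YARN
spark.submitApi=emr
# Mandatory when spark.submitApi=emr (example id from the release notes)
spark.emr.clusterId=j-2AXXXXXXGAPLF
# Prefix for additional Spark confs, analogous to sparkYarnSink.additionalConfs
spark.emr.additionalConfs.spark.executor.memory=2g
# Optional, intended for local development
spark.emr.awsProfile=default
spark.emr.region=eu-west-1

# Kafka consumer settings now use the renamed properties. prefix
kafkaSource.properties.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
kafkaSource.properties.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
kafkaSource.properties.max.poll.records=500
```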
v0.4.3
Enhancements
Internal tasks
- #470: Add timeout config for yarn connection health indicator
- #446: Use JsonB type for Job properties
- #455: Postgres test containers
- #455: Notifications backend implementation
- #454: Notifications frontend implementation
- Rename database delta scripts
- Upgrade snakeyaml from 1.25 to 1.26
Application properties changes
- Added property sparkYarnSink.userUsedToKillJob
- Added property health.yarnConnection.timeoutMillis. Only numbers are accepted. (Optional)
- Added property notification.enabled. Only booleans are accepted. (Default: false)
- Added property notification.sender.address. Only characters are accepted. (Default: empty string)
- Added property spring.mail.host
- Added property spring.mail.port
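Taken together, the notification-related properties might look like this in application.properties (hostnames, addresses, and the timeout value are hypothetical placeholders):

```properties
# Enable email notifications (default: false)
notification.enabled=true
notification.sender.address=hyperdrive-trigger@example.com
# Standard Spring Mail connection settings
spring.mail.host=smtp.example.com
spring.mail.port=25

# Timeout for the YARN connection health indicator, in milliseconds (optional)
health.yarnConnection.timeoutMillis=60000
# User used to kill running YARN jobs
sparkYarnSink.userUsedToKillJob=hyperdriver-user
```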
v0.4.2
Enhancements
- #428: Removed spark client deployment mode
- #425: Added health indicators (Endpoint: /admin/health)
- #434: Login user names are converted to lower case
- #435: Runs - increased default number of displayed rows
- #450: Workflows - Bulk run
Bugfixes
- #436: Refresh runs is closing details window
- #437: On an expired session first rest request is failing
Internal tasks
Application properties changes
- Added property management.endpoint.health.show-details=always
- Added property health.databaseConnection.timeoutMillis. Only numbers are accepted. (Default: 120000)
- Added property health.yarnConnection.testEndpoint. The endpoint used for the YARN health check (for YARN, /cluster/cluster)
- Added property application.maximumNumberOfWorkflowsInBulkRun. Only numbers are accepted. Number of workflows that can be executed in one bulk run. (Default: 10)
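The health and bulk-run properties from this release could be set together as follows (the values shown are the documented defaults, repeated here only for illustration):

```properties
# Expose full health details at /admin/health
management.endpoint.health.show-details=always
# Database health check timeout, in milliseconds (default: 120000)
health.databaseConnection.timeoutMillis=120000
# YARN endpoint polled by the health indicator
health.yarnConnection.testEndpoint=/cluster/cluster
# Upper bound on workflows per bulk run (default: 10)
application.maximumNumberOfWorkflowsInBulkRun=10
```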
v0.4.1
v0.4.0
Enhancements
- Disabled authentication for health rest endpoint (/admin/health) (#412)
- Kafka triggers do not skip messages produced during scheduler downtime (#410)
- Spark and Shell jobs can use different file locations (#413)
- New status for executed jobs: SubmissionTimeout (#414)
- New UI icons for executed jobs for Submitting and SubmissionTimeout statuses (#414)
- Executed spark job detail contains a link to the application running in the Resource Manager (#416)
Application properties changes
- Changed property kafkaSource.group.id to kafkaSource.group.id.prefix
- Removed property scheduler.executors.executablesFolder
- Added property shellExecutor.executablesFolder. Base path to shell executables
- Added property sparkYarnSink.executablesFolder. Base path to spark executables
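A sketch of migrating a configuration to these renamed and split properties (the prefix value and the folder paths are hypothetical examples):

```properties
# Consumer group ids are now derived from this prefix (was kafkaSource.group.id)
kafkaSource.group.id.prefix=hyperdrive-trigger
# Separate base paths for shell and Spark executables
# (these replace scheduler.executors.executablesFolder)
shellExecutor.executablesFolder=/opt/hyperdrive/shell
sparkYarnSink.executablesFolder=/opt/hyperdrive/spark
```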