Merge branch 'develop'
cjllanwarne committed Feb 4, 2022
2 parents 10892f4 + 660229b commit 33afec0
Showing 54 changed files with 1,024 additions and 169 deletions.
38 changes: 38 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,43 @@
# Cromwell Change Log

## 75 Release Notes

### New `AwaitingCloudQuota` backend status

For Cloud Life Sciences v2beta only.

When a user's GCP project reaches a quota limit, Cromwell continues to submit jobs, and Life Sciences acknowledges them as created even though the physical VM cannot yet start. Cromwell now detects this condition in the backend and reports it as `AwaitingCloudQuota`.

The status is informational and does not require any action. Users wishing to maximize throughput can treat `AwaitingCloudQuota` as an indication that they should check their quota in the Cloud Console and request a quota increase from GCP.

`AwaitingCloudQuota` will appear between the `Initializing` and `Running` backend statuses, and will be skipped if not applicable. A sketch of how a client might watch for the new status follows the tables below.

Now:

| Status in metadata | Quota normal | Quota delay          | Status meaning                                     |
|--------------------|--------------|----------------------|----------------------------------------------------|
| `executionStatus`  | `Running`    | `Running`            | Job state Cromwell is requesting from the backend  |
| `backendStatus`    | `Running`    | `AwaitingCloudQuota` | Job state reported by backend                      |

Previously:

| Status in metadata | Quota normal | Quota delay | Status meaning                                     |
|--------------------|--------------|-------------|----------------------------------------------------|
| `executionStatus`  | `Running`    | `Running`   | Job state Cromwell is requesting from the backend  |
| `backendStatus`    | `Running`    | `Running`   | Job state reported by backend                      |
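
For example, a client could surface the new status by polling Cromwell's standard metadata endpoint. A minimal sketch (assumptions: a Cromwell server at `localhost:8000`, a placeholder workflow ID, and Python's third-party `requests` library):

```python
import requests  # third-party HTTP client (pip install requests)

CROMWELL = "http://localhost:8000"  # assumption: locally running Cromwell server
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"  # placeholder workflow ID

# Fetch workflow metadata from the standard metadata endpoint.
response = requests.get(f"{CROMWELL}/api/workflows/v1/{WORKFLOW_ID}/metadata")
response.raise_for_status()
metadata = response.json()

# "calls" maps each fully-qualified call name to its list of attempts; each
# attempt carries the executionStatus/backendStatus pair shown in the tables above.
for call_name, attempts in metadata.get("calls", {}).items():
    for attempt in attempts:
        if attempt.get("backendStatus") == "AwaitingCloudQuota":
            print(f"{call_name}: VM creation is queued on GCP quota; "
                  f"executionStatus={attempt.get('executionStatus')}")
```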

### New `requestedWorkflowId` API Option

Allows users to choose their own workflow IDs at workflow submission time.

If supplied for a single workflow, this value must be a JSON string containing a valid UUID that is not already in use. For batch submissions, it must be a JSON array of valid UUIDs, with one entry per workflow in the same order as the workflow inputs.

If not supplied, the behavior is unchanged: Cromwell generates a random workflow ID for each submitted workflow.
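
As an illustration, a minimal sketch of both submission modes (assumptions: a Cromwell server at `localhost:8000`, a local `hello.wdl`, and Python's third-party `requests` library; the endpoints and form fields are Cromwell's standard submit API, with `requestedWorkflowId` being the new field):

```python
import json
import uuid

import requests  # third-party HTTP client (pip install requests)

CROMWELL = "http://localhost:8000"  # assumption: locally running Cromwell server

# Single workflow: the field is a JSON string, so the UUID itself is quoted.
requested_id = str(uuid.uuid4())  # must be valid and not already in use
with open("hello.wdl", "rb") as wdl:  # assumption: a local workflow file
    response = requests.post(
        f"{CROMWELL}/api/workflows/v1",
        files={"workflowSource": wdl},
        data={"requestedWorkflowId": json.dumps(requested_id)},
    )
response.raise_for_status()
print(response.json())  # expected: {"id": "<requested_id>", "status": "Submitted"}

# Batch: a JSON array with one UUID per workflow, in the same order as the inputs.
batch_ids = [str(uuid.uuid4()), str(uuid.uuid4())]
with open("hello.wdl", "rb") as wdl:
    response = requests.post(
        f"{CROMWELL}/api/workflows/v1/batch",
        files={"workflowSource": wdl},
        data={
            "workflowInputs": json.dumps([{}, {}]),  # two (empty) input sets
            "requestedWorkflowId": json.dumps(batch_ids),
        },
    )
response.raise_for_status()
```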

### Bug Fixes

* Fixed a bug on Google Pipelines API backends where missing optional output files (`File?`) were not correctly detected by Cromwell, causing invalid call cache entries to be written. (The new `wdl_optional_outputs_call_caching` test added in this commit exercises this case.)

## 73 Release Notes

### Workflow Restart Performance Improvements
10 changes: 10 additions & 0 deletions CromIAM/src/main/resources/swagger/cromiam.yaml
@@ -135,6 +135,11 @@ paths:
required: false
type: file
in: formData
- name: requestedWorkflowId
description: An ID to assign to this workflow. Must be a JSON string in UUID format. If not supplied, a random ID will be generated for the workflow.
required: false
type: string
in: formData
tags:
- Workflows
responses:
@@ -198,6 +203,11 @@ paths:
required: false
type: file
in: formData
- name: requestedWorkflowId
description: A set of IDs to assign to these workflows. Must be a JSON list of strings in UUID format. Must have the same number of entries, and be in the same order, as the workflow inputs list. If not supplied, random IDs will be generated for the workflows.
required: false
type: string
in: formData
tags:
- Workflows
responses:
@@ -23,7 +23,8 @@ class CallCachingBlacklistManagerSpec extends AnyFlatSpec with CromwellTimeoutSp
workflowOptions = WorkflowOptions(JsObject.empty),
labelsJson = "",
workflowOnHold = false,
warnings = List.empty
warnings = List.empty,
requestedWorkflowId = None
)

val workflowSourcesYesGrouping = workflowSourcesNoGrouping.copy(
@@ -2,6 +2,9 @@ name: backendWithNoDocker
backends: [LocalNoDocker]
testFormat: runtwiceexpectingcallcaching

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: backendWithNoDocker/backendWithNoDocker.wdl
}
@@ -1,6 +1,9 @@
name: cacheBetweenWF
testFormat: runtwiceexpectingcallcaching

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: cacheBetweenWF/cacheBetweenWF.wdl
options: common_options/cache_read_off_write_on.options
@@ -3,6 +3,9 @@ name: call_cache_hit_prefixes_empty_hint_papi
testFormat: runtwiceexpectingcallcaching
backends: [Papi]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: call_cache_hit_prefixes/call_cache_hit_prefixes.wdl
inputs: call_cache_hit_prefixes/call_cache_hit_prefixes_empty_hint.inputs
@@ -2,6 +2,9 @@
name: call_cache_hit_prefixes_no_hint
testFormat: runtwiceexpectingcallcaching

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: call_cache_hit_prefixes/call_cache_hit_prefixes.wdl
inputs: call_cache_hit_prefixes/call_cache_hit_prefixes_no_hint.inputs
@@ -5,6 +5,9 @@ name: call_cache_hit_prefixes_two_roots_empty_hint_cache_hit_papi
testFormat: runthriceexpectingcallcaching
backends: [Papi]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: call_cache_hit_prefixes/call_cache_hit_prefixes.wdl
inputs: call_cache_hit_prefixes/call_cache_hit_prefixes_two_roots_empty_hint_hit_papi.inputs
@@ -5,6 +5,9 @@ workflowType: CWL
workflowTypeVersion: v1.0
skipDescribeEndpointValidation: true

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: cwl_cache_between_workflows/cwl_cache_between_workflows.cwl
inputs: cwl_cache_between_workflows/cwl_cache_between_workflows.json
@@ -1,6 +1,9 @@
name: floating_tags
testFormat: runtwiceexpectingcallcaching

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: floating_tags/floating_tags.wdl
options: floating_tags/floating_tags.options
@@ -2,6 +2,9 @@ name: fofn_caching
testFormat: runtwiceexpectingcallcaching
backends: [Papi-Caching-No-Copy]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: fofn_caching/fofn_caching.wdl
}
@@ -1,6 +1,9 @@
name: google_artifact_registry
testFormat: runtwiceexpectingcallcaching

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: google_artifact_registry/google_artifact_registry.wdl
}
@@ -2,6 +2,9 @@ name: hello_private_repo
testFormat: runtwiceexpectingcallcaching
backends: [LocalDockerSecure]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: hello_private_repo/hello_private_repo.wdl
inputs: hello_private_repo/hello_private_repo.inputs.json
@@ -2,6 +2,9 @@ name: use_cache_copy_dir
testFormat: runtwiceexpectingcallcaching
backends: [Papiv2]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: use_cacheCopy_dir/use_cacheCopy_dir.wdl
}
@@ -0,0 +1,16 @@
name: wdl_optional_outputs_call_caching
testFormat: runtwiceexpectingcallcaching
backends: [Papiv2]

# CROM-6807 Don't retry failures, subsequent runs will fail because of unexpected cache hits from the initial run
retryTestFailures: false

files {
workflow: wdl_optional_outputs_call_caching/wdl_optional_outputs_call_caching.wdl
}

metadata {
workflowName: missing_optional_output
status: Succeeded
"calls.missing_optional_output.do_and_do_not_output.callCaching.result": "Cache Hit: <<CACHE_HIT_UUID>>:missing_optional_output.do_and_do_not_output:-1"
}
@@ -0,0 +1,22 @@
version 1.0

task do_and_do_not_output {
command <<<
touch do_output.txt
>>>
runtime {
docker: "ubuntu"
}
output {
File? do_not_output = "do_not_output.txt"
File? do_output = "do_output.txt"
}
}

workflow missing_optional_output {
call do_and_do_not_output
output {
File? should_be_present = do_and_do_not_output.do_output
File? should_be_null = do_and_do_not_output.do_not_output
}
}
@@ -16,6 +16,7 @@ sealed trait WorkflowSourceFilesCollection {
def workflowType: Option[WorkflowType]
def workflowTypeVersion: Option[WorkflowTypeVersion]
def workflowOnHold: Boolean
def requestedWorkflowId: Option[WorkflowId]

def warnings: Seq[String]

@@ -49,7 +50,8 @@ object WorkflowSourceFilesCollection {
labelsJson: WorkflowJson,
importsFile: Option[Array[Byte]],
workflowOnHold: Boolean,
warnings: Seq[String]): WorkflowSourceFilesCollection = importsFile match {
warnings: Seq[String],
requestedWorkflowId: Option[WorkflowId]): WorkflowSourceFilesCollection = importsFile match {
case Some(imports) =>
WorkflowSourceFilesWithDependenciesZip(
workflowSource = workflowSource,
@@ -62,7 +64,8 @@
labelsJson = labelsJson,
importsZip = imports,
workflowOnHold = workflowOnHold,
warnings = warnings)
warnings = warnings,
requestedWorkflowId = requestedWorkflowId)
case None =>
WorkflowSourceFilesWithoutImports(
workflowSource = workflowSource,
@@ -74,7 +77,8 @@
workflowOptions = workflowOptions,
labelsJson = labelsJson,
workflowOnHold = workflowOnHold,
warnings = warnings)
warnings = warnings,
requestedWorkflowId = requestedWorkflowId)
}
}

@@ -87,7 +91,8 @@ final case class WorkflowSourceFilesWithoutImports(workflowSource: Option[Workfl
workflowOptions: WorkflowOptions,
labelsJson: WorkflowJson,
workflowOnHold: Boolean = false,
warnings: Seq[String]) extends WorkflowSourceFilesCollection
warnings: Seq[String],
requestedWorkflowId: Option[WorkflowId]) extends WorkflowSourceFilesCollection

final case class WorkflowSourceFilesWithDependenciesZip(workflowSource: Option[WorkflowSource],
workflowUrl: Option[WorkflowUrl],
@@ -99,7 +104,8 @@ final case class WorkflowSourceFilesWithDependenciesZip(workflowSource: Option[W
labelsJson: WorkflowJson,
importsZip: Array[Byte],
workflowOnHold: Boolean = false,
warnings: Seq[String]) extends WorkflowSourceFilesCollection {
warnings: Seq[String],
requestedWorkflowId: Option[WorkflowId]) extends WorkflowSourceFilesCollection {
override def toString = {
s"WorkflowSourceFilesWithDependenciesZip($workflowSource, $workflowUrl, $workflowType, $workflowTypeVersion," +
s""" $inputsJson, ${workflowOptions.asPrettyJson}, $labelsJson, <<ZIP BINARY CONTENT>>, $warnings)"""
6 changes: 4 additions & 2 deletions core/src/test/scala/cromwell/util/SampleWdl.scala
@@ -44,7 +44,8 @@ trait SampleWdl extends TestFileUtil {
workflowTypeVersion = workflowTypeVersion,
warnings = Vector.empty,
workflowOnHold = workflowOnHold,
importsZip = zip)
importsZip = zip,
requestedWorkflowId = None)
case None =>
WorkflowSourceFilesWithoutImports(
workflowSource = Option(workflowSource(runtime)),
@@ -56,7 +57,8 @@
workflowType = workflowType,
workflowTypeVersion = workflowTypeVersion,
warnings = Vector.empty,
workflowOnHold = workflowOnHold)
workflowOnHold = workflowOnHold,
requestedWorkflowId = None)
}
}

@@ -1,28 +1,27 @@
package cromwell.database.slick

import java.sql.{Connection, PreparedStatement, Statement}
import java.util.concurrent.{ExecutorService, Executors}

import com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException
import com.typesafe.config.{Config, ConfigFactory}
import cromwell.database.slick.tables.DataAccessComponent
import cromwell.database.sql.SqlDatabase
import net.ceedubs.ficus.Ficus._
import org.postgresql.util.{PSQLException, ServerErrorMessage}
import org.slf4j.LoggerFactory
import org.postgresql.util.PSQLException
import org.slf4j.{Logger, LoggerFactory}
import slick.basic.DatabaseConfig
import slick.jdbc.{JdbcCapabilities, JdbcProfile, PostgresProfile, TransactionIsolation}

import java.sql.{Connection, PreparedStatement, Statement}
import java.util.concurrent.{ExecutorService, Executors}
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

object SlickDatabase {
/**
* Returns either the "url" or "properties.url"
*/
def urlKey(config: Config) = if (config.hasPath("db.url")) "db.url" else "db.properties.url"
def urlKey(config: Config): String = if (config.hasPath("db.url")) "db.url" else "db.properties.url"

lazy val log = LoggerFactory.getLogger("cromwell.database.slick")
lazy val log: Logger = LoggerFactory.getLogger("cromwell.database.slick")

def createSchema(slickDatabase: SlickDatabase): Unit = {
// NOTE: Slick 3.0.0 schema creation, Clobs, and MySQL don't mix: https://github.com/slick/slick/issues/637
@@ -57,7 +56,7 @@
*/
abstract class SlickDatabase(override val originalDatabaseConfig: Config) extends SqlDatabase {

override val urlKey = SlickDatabase.urlKey(originalDatabaseConfig)
override val urlKey: String = SlickDatabase.urlKey(originalDatabaseConfig)
protected val slickConfig = DatabaseConfig.forConfig[JdbcProfile]("", databaseConfig)

/*
@@ -73,7 +72,8 @@ abstract class SlickDatabase(override val originalDatabaseConfig: Config) extend
// NOTE: if you want to refactor database is inner-class type: this.dataAccess.driver.backend.DatabaseFactory
val database = slickConfig.db

override lazy val connectionDescription = databaseConfig.getString(urlKey)
override lazy val connectionDescription: String = databaseConfig.getString(urlKey)

SlickDatabase.log.info(s"Running with database $urlKey = $connectionDescription")

@@ -134,10 +133,12 @@ abstract class SlickDatabase(override val originalDatabaseConfig: Config) extend
actionThreadPool, database.executor.executionContext.reportFailure
)

protected[this] lazy val insertBatchSize = databaseConfig.getOrElse("insert-batch-size", 2000)
protected[this] lazy val insertBatchSize: Int = databaseConfig.getOrElse("insert-batch-size", 2000)

protected[this] lazy val useSlickUpserts = dataAccess.driver.capabilities.contains(JdbcCapabilities.insertOrUpdate)
protected[this] lazy val useSlickUpserts: Boolean =
dataAccess.driver.capabilities.contains(JdbcCapabilities.insertOrUpdate)

//noinspection SameParameterValue
protected[this] def assertUpdateCount(description: String, updates: Int, expected: Int): DBIO[Unit] = {
if (updates == expected) {
DBIO.successful(())
@@ -220,20 +221,11 @@ abstract class SlickDatabase(override val originalDatabaseConfig: Config) extend
/*
The exception may contain possibly sensitive row contents within the DETAIL section. Remove it.
Tried adjusting this using configuration:
- log_error_verbosity=TERSE
- log_min_messages=PANIC
- client_min_messages=ERROR
Instead resorting to reflection.
Discussion: https://github.com/pgjdbc/pgjdbc/issues/1577
*/
val message = pSQLException.getServerErrorMessage
val field = classOf[ServerErrorMessage].getDeclaredField("mesgParts")
field.setAccessible(true)
val parts = field.get(message).asInstanceOf[java.util.Map[Character, String]]
parts.remove('D')
// The original exception has already stored the DETAIL into a string. So we must create a new Exception.
throw new PSQLException(message)
throw new PSQLException(message, false)
}
}
}(actionExecutionContext)