Commit 258d815

Merge pull request #1861 from agrare/update_workflows_docs
Update workflows docs for changes in floe v0.17.0 and document builtin method parameters
2 parents 706a620 + 72545ac commit 258d815

1 file changed: +138 −14 lines changed

managing_providers/_topics/embedded_workflows.md

@@ -159,7 +159,7 @@ You can create and use embedded workflows as needed to not only change parts of
Workflows must be authored in the Amazon States Language (ASL) format. As part of authoring a workflow, you (or your users) can build container images that can perform any required tasks, in any language that you like. You can use these images during Task states in your workflows.

* Define the code for the workflow. If your workflow requires any credentials or parameters, ensure that they are passed in the code.

  Within the workflow code, you need to specify the states that your workflow requires, including any next steps. For `Task` type steps in the workflow, a docker container is called. The container defines what happens for that Task state. For example, a docker container can run to clone a template. If your states require parameters or credentials, you can specify them in your state definitions.
165165

@@ -174,12 +174,90 @@ Workflows must be authored in Amazon State Languages (ASL) format. As part of au
    - ItemReader
    - ResultWriter

* Build the docker containers that are required for the workflow.

  When you have the code for your task resource written, you need to bundle it into a docker image. You can bundle the code by creating a standard [Dockerfile](https://docs.docker.com/engine/reference/builder/) and [building the image](https://docs.docker.com/engine/reference/commandline/build/). Then, you can push the image to a [registry](https://docs.docker.com/engine/reference/commandline/push/), which makes the image available to be used by {{ site.data.product.title_short }}. When you have pushed your images to an image registry, you can add the registry to {{ site.data.product.title_short }}.

  Pull secrets for containers are used differently between appliances and the OpenShift Container Platform (OCP). These differences are outlined in the following sections.

* Use "builtin" runner methods from the ManageIQ Task Runner

  In addition to the `docker://` runner, which can run any container you want, there are also builtin runner methods for some common tasks, such as executing an HTTP call or sending an email.
  * `manageiq://http` - Execute any HTTP action

    Parameters:
    * `Method` (required) - HTTP method name. Permitted values: `GET`, `POST`, `PUT`, `DELETE`, `HEAD`, `PATCH`, `OPTIONS`, or `TRACE`
    * `Url` (required) - URL to execute the HTTP call against
    * `Headers` - Hash of unencoded HTTP request header key/value pairs
    * `QueryParameters` - Hash of unencoded URI query key/value pairs
    * `Body` - HTTP request body. Depending on `Encoding`, this can be a String or a Hash of key/value pairs
    * `Ssl` - SSL options
      * `Verify` - Boolean - Verify the SSL certificate. Defaults to `true`
      * `VerifyHostname` - Boolean - Verify the SSL certificate hostname. Defaults to `true`
      * `Hostname` - String - Server hostname for SNI
      * `CaFile` - String - Path to a CA file in PEM format
      * `CaPath` - String - Path to a CA directory
      * `VerifyMode` - Integer - OpenSSL constant: `VERIFY_NONE` => 0, `VERIFY_PEER` => 1, `VERIFY_FAIL_IF_NO_PEER_CERT` => 2, `VERIFY_CLIENT_ONCE` => 4
      * `VerifyDepth` - Integer - Maximum depth for certificate chain validation
      * `Version` - Integer - SSL version
      * `MinVersion` - Integer - Minimum SSL version
      * `MaxVersion` - Integer - Maximum SSL version
      * `Ciphers` - String - Supported ciphers
    * `Proxy`
      * `Uri` - String - URI of the proxy
      * `User` - String - User for the proxy
      * `Password` - String - Password for the proxy
    * `Options`
      * `Timeout`
      * `ReadTimeout`
      * `OpenTimeout`
      * `WriteTimeout`
    * `Encoding` - String
      * `JSON` - JSON encodes the request and decodes the response
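    As a sketch of how these parameters fit into a Task state (the state name, URL, and header values below are hypothetical, and this assumes the builtin reads its options from the state's `Parameters`), an HTTP call state might look like:

    ```json
    "NotifyWebhook": {
      "Type": "Task",
      "Resource": "manageiq://http",
      "Parameters": {
        "Method": "POST",
        "Url": "https://example.com/api/notify",
        "Headers": {"Content-Type": "application/json"},
        "Body": {"status": "ok"},
        "Encoding": "JSON"
      },
      "Next": "NextState"
    }
    ```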
  * `manageiq://email` - Send an email using the configured SMTP server

    Parameters:
    * `To` - Array of recipient email addresses; defaults to the service requester's email
    * `From` - Sender email address; defaults to the `smtp.from` Setting
    * `Subject` - Email subject string
    * `Cc` - Array of recipients to carbon-copy
    * `Bcc` - Array of recipients to blind-carbon-copy
    * `Body` - The body of the email
    * `Attachment` - A hash with the filename as the key and the content as the value
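    For instance (the state name, recipient, and message text here are made up, and this assumes the builtin reads its options from the state's `Parameters`), a notification state might look like:

    ```json
    "SendNotification": {
      "Type": "Task",
      "Resource": "manageiq://email",
      "Parameters": {
        "To": ["admin@example.com"],
        "Subject": "Workflow complete",
        "Body": "The provisioning workflow finished successfully."
      },
      "Next": "NextState"
    }
    ```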
  * `manageiq://embedded_ansible` - Execute an ansible playbook with EmbeddedAnsible

    You must identify the playbook by either `PlaybookId` (the database identifier of the `ConfigurationScript`) or by `RepositoryUrl`, `RepositoryBranch`, and `PlaybookName`.

    Parameters:
    * `RepositoryUrl` - URL of the configuration script source identifying the repository where the playbook resides
    * `RepositoryBranch` - Branch of the configuration script source where the playbook resides
    * `PlaybookName` - Name of the playbook
    * `PlaybookId` - Integer - Database ID of the `ConfigurationScript`
    * `Hosts` - Array - Hostnames to target with the playbook
    * `ExtraVars` - Hash - Key/value pairs that will be passed as `extra_vars`
    * `BecomeEnabled` - Boolean - Whether the playbook should activate privilege escalation; defaults to `false`
    * `Timeout` - Integer - How long, in minutes, to allow the playbook to run
    * `Verbosity` - Integer - Ansible verbosity level, 0-5
    * `CredentialId` - Integer - Database ID of an ansible credential
    * `CloudCredentialId` - Integer - Database ID of an ansible cloud credential
    * `NetworkCredentialId` - Integer - Database ID of an ansible network credential
    * `VaultCredentialId` - Integer - Database ID of an ansible vault credential
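    For illustration (the state name, repository URL, playbook name, and hosts are made up, and this assumes the builtin reads its options from the state's `Parameters`), a playbook identified by repository might be run like:

    ```json
    "RunPlaybook": {
      "Type": "Task",
      "Resource": "manageiq://embedded_ansible",
      "Parameters": {
        "RepositoryUrl": "https://example.com/ansible/playbooks.git",
        "RepositoryBranch": "main",
        "PlaybookName": "configure_vm.yml",
        "Hosts": ["vm1.example.com"],
        "ExtraVars": {"package": "httpd"}
      },
      "Next": "NextState"
    }
    ```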
  * `manageiq://provision_execute` - Execute an MiqProvision task

    This can be used for a VM Provision Service Catalog item in place of automate. No explicit parameters are required, as the state input is used as the provision options.
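    Since the state input supplies the provision options, a minimal state (the state name here is hypothetical) can be as simple as:

    ```json
    "ProvisionVm": {
      "Type": "Task",
      "Resource": "manageiq://provision_execute",
      "End": true
    }
    ```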
#### Running an Embedded Workflow on Appliances

* On appliances, `podman` is used to execute the container, so use [podman login](https://docs.podman.io/en/stable/markdown/podman-login.1.html) as the `manageiq` user.
@@ -322,7 +400,7 @@ If the user is running an embedded workflow on OCP, and is using a docker reposi
Long lived credentials like usernames and passwords should be defined as Mapped Credentials as described in `Adding Credentials`.

Short lived credentials, such as bearer tokens obtained while the workflow is running, can be set as state output and stored securely in the Credentials field for further states. This can be accomplished by using `ResultPath` with a path starting with `$$.Credentials`, which sets the output of the state in the `Credentials` payload.

For example, let's say we have a state which takes a username and password and outputs a bearer token to be used later on:

@@ -331,10 +409,10 @@ For an example lets say we have a State which takes a username and password and
```json
    "Type": "Task",
    "Resource": "docker://login:latest",
    "Credentials": {
      "username.$": "$$.Credentials.username",
      "password.$": "$$.Credentials.password"
    },
    "ResultPath": "$$.Credentials",
    "Next": "NextState"
}
```
@@ -346,7 +424,7 @@ If the output of the docker image is `{"bearer_token":"abcd"}` then we will be a
```json
    "Type": "Task",
    "Resource": "docker://do-something:latest",
    "Credentials": {
      "token.$": "$$.Credentials.bearer_token"
    }
}
```
@@ -358,13 +436,13 @@ All of the normal Input/Output processing still applies so if you need to manipu
```json
    "Type": "Task",
    "Resource": "docker://login:latest",
    "Credentials": {
      "username.$": "$$.Credentials.username",
      "password.$": "$$.Credentials.password"
    },
    "ResultSelector": {
      "bearer_token.$": "$.result"
    },
    "ResultPath": "$$.Credentials",
    "Next": "NextState"
}
```
@@ -376,10 +454,10 @@ We can also store the result in a parent node for organization:
```json
    "Type": "Task",
    "Resource": "docker://login:latest",
    "Credentials": {
      "username.$": "$$.Credentials.username",
      "password.$": "$$.Credentials.password"
    },
    "ResultPath": "$$.Credentials.VMware",
    "Next": "NextState"
}
```
@@ -391,7 +469,7 @@ And then access it like:
```json
    "Type": "Task",
    "Resource": "docker://do-something:latest",
    "Credentials": {
      "token.$": "$$.VMware.bearer_token"
    }
}
```
@@ -467,3 +545,49 @@ You can create a generic service catalog item that uses an embedded workflow. To
The list of services and requests is shown when the catalog item is submitted. Clicking the request shows the execution status, including any embedded workflows.

![Workflow Status](../images/embedworkflow_runstatus.png)

## Upgrading

If you wrote a workflow with floe prior to `v0.17.0`, you might have to update your workflow content. You can check your floe version with `bundle info floe`.

1. The `Credentials` Task property has changed to use `$$.Credentials` to access the credentials payload; `$.` now uses the state input, which is consistent with the rest of Input/Output processing. `ResultPath` also has to be updated to set credentials with `$$.Credentials`.

   Example:

   ```json
   {
     "Type": "Task",
     "Credentials": {"password.$": "$.Password"},
     "ResultPath": "$.Credentials"
   }
   ```

   Becomes:

   ```json
   {
     "Type": "Task",
     "Credentials": {"password.$": "$$.Credentials.password"},
     "ResultPath": "$$.Credentials"
   }
   ```

2. Nested hashes no longer require the key to have a `.$` suffix to perform interpolation.

   Example:

   ```json
   {
     "Type": "Pass",
     "Result": {
       "Body.$": {"foo.$": "$.bar"}
     }
   }
   ```

   Becomes:

   ```json
   {
     "Type": "Pass",
     "Result": {
       "Body": {"foo.$": "$.bar"}
     }
   }
   ```
