v11 #495

Open
wants to merge 35 commits into base: master
64 changes: 3 additions & 61 deletions docs/api/constructor.md
@@ -57,38 +57,9 @@ The following options can be set as properties in an object for additional configuration.
Database schema that contains all required storage objects. Only alphanumeric and underscore allowed, length: <= 50 characters


**Queue options**

Queue options contain the following constructor-only settings.

* **archiveCompletedAfterSeconds**

Specifies how long in seconds completed jobs get archived. Note: a warning will be emitted if set to lower than 60s and cron processing will be disabled.

Default: 12 hours

* **archiveFailedAfterSeconds**

Specifies how long in seconds failed jobs get archived. Note: a warning will be emitted if set to lower than 60s and cron processing will be disabled.

Default: `archiveCompletedAfterSeconds`

**Monitoring options**

* **monitorStateIntervalSeconds** - int, default undefined

Specifies how often in seconds an instance will fire the `monitor-states` event. Must be >= 1.

* **monitorStateIntervalMinutes** - int, default undefined

Specifies how often in minutes an instance will fire the `monitor-states` event. Must be >= 1.

> When a higher unit is is specified, lower unit configuration settings are ignored.


**Maintenance options**

Maintenance operations include checking active jobs for expiration, archiving completed jobs from the primary job table, and deleting archived jobs from the archive table.
Maintenance operations include checking active jobs for expiration, caching queue stats, and deleting completed jobs.

* **supervise**, bool, default true

@@ -102,42 +73,13 @@ Maintenance operations include checking active jobs for expiration, archiving completed jobs from the primary job table, and deleting archived jobs from the archive table.

If this is set to false, this instance will skip attempts to run schema migrations during `start()`. If schema migrations exist, `start()` will throw an error and block usage. This is an advanced use case when the configured user account does not have schema mutation privileges.
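
A minimal sketch of this advanced case (the connection string is a placeholder; assumes the schema was already installed by a privileged role):

```js
import PgBoss from 'pg-boss';

const boss = new PgBoss({
  connectionString: 'postgres://user:pass@host/db', // placeholder
  migrate: false // skip schema migration attempts during start()
});

await boss.start(); // throws if pending schema migrations exist
```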

**Archive options**

When jobs in the archive table become eligible for deletion.

* **deleteAfterSeconds**, int

delete interval in seconds, must be >=1

* **deleteAfterMinutes**, int

delete interval in minutes, must be >=1

* **deleteAfterHours**, int

delete interval in hours, must be >=1

* **deleteAfterDays**, int

delete interval in days, must be >=1

* Default: 7 days

> When a higher unit is is specified, lower unit configuration settings are ignored.

**Maintenance interval**

How often maintenance operations are run against the job and archive tables.
How often maintenance operations are run.

* **maintenanceIntervalSeconds**, int

maintenance interval in seconds, must be >=1

* **maintenanceIntervalMinutes**, int

interval in minutes, must be >=1

* Default: 1 minute

> When a higher unit is is specified, lower unit configuration settings are ignored.
* Default: 60 seconds
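
For illustration, a sketch of tuning maintenance at construction time (values are arbitrary; option names as documented above):

```js
import PgBoss from 'pg-boss';

const boss = new PgBoss({
  connectionString: 'postgres://user:pass@host/db', // placeholder
  supervise: true,                  // run maintenance from this instance (the default)
  maintenanceIntervalSeconds: 120   // run maintenance every 2 minutes instead of every 60 seconds
});

await boss.start();
```
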
40 changes: 4 additions & 36 deletions docs/api/events.md
@@ -14,47 +14,15 @@ Ideally, code similar to the following example would be used after creating your instance.
```js
boss.on('error', error => logger.error(error));
```
## `warning`

## `monitor-states`
During monitoring and maintenance, pg-boss may raise warning events.

The `monitor-states` event is conditionally raised based on the `monitorStateInterval` configuration setting and only emitted from `start()`. If passed during instance creation, it will provide a count of jobs in each state per interval. This could be useful for logging or even determining if the job system is handling its load.
Examples include slow queries, large queues, and scheduling clock skew.
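
A minimal sketch of handling the new `warning` event, in the same style as the `error` example above (the warning payload shape isn't specified here, so it's logged as-is):

```js
boss.on('warning', warning => logger.warn(warning));
```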

The payload of the event is an object with a key per queue and state, such as the following example.

```json
{
"queues": {
"send-welcome-email": {
"created": 530,
"retry": 40,
"active": 26,
"completed": 3400,
"cancelled": 0,
"failed": 49,
"all": 4049
},
"archive-cleanup": {
"created": 0,
"retry": 0,
"active": 0,
"completed": 645,
"cancelled": 0,
"failed": 0,
"all": 645
}
},
"created": 530,
"retry": 40,
"active": 26,
"completed": 4045,
"cancelled": 0,
"failed": 4,
"all": 4694
}
```
## `wip`

Emitted at most once every 2 seconds when workers are receiving jobs. The payload is an array that represents each worker in this instance of pg-boss. If you'd rather monitor activity across all instances, use `monitor-states`.
Emitted at most once every 2 seconds when workers are receiving jobs. The payload is an array that represents each worker in this instance of pg-boss.

```js
[
67 changes: 22 additions & 45 deletions docs/api/jobs.md
@@ -45,41 +45,19 @@ Available in constructor as a default, or overridden in send.

* **expireInSeconds**, number

How many seconds a job may be in active state before it is failed because of expiration. Must be >=1

* **expireInMinutes**, number

How many minutes a job may be in active state before it is failed because of expiration. Must be >=1

* **expireInHours**, number

How many hours a job may be in active state before it is failed because of expiration. Must be >=1
How many seconds a job may be in active state before being retried or failed. Must be >=1

* Default: 15 minutes

> When a higher unit is is specified, lower unit configuration settings are ignored.

**Retention options**

* **retentionSeconds**, number

How many seconds a job may be in created or retry state before it's archived. Must be >=1

* **retentionMinutes**, number

How many minutes a job may be in created or retry state before it's archived. Must be >=1
How many seconds a job may be in created or retry state before it's deleted. Must be >=1

* **retentionHours**, number
* Default: 14 days

How many hours a job may be in created or retry state before it's archived. Must be >=1

* **retentionDays**, number

How many days a job may be in created or retry state before it's archived. Must be >=1

* Default: 30 days

> When a higher unit is is specified, lower unit configuration settings are ignored.
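
For example (a sketch, given a started PgBoss instance `boss`; option names as listed above):

```js
await boss.send('send-welcome-email', { email: '[email protected]' }, {
  expireInSeconds: 30,       // retried or failed if active longer than 30 seconds
  retentionSeconds: 60 * 60  // deleted if still in created/retry state after an hour
});
```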

**Connection options**

@@ -105,28 +83,17 @@ Available in constructor as a default, or overridden in send.
**Throttle or debounce jobs**

* **singletonSeconds**, int
* **singletonMinutes**, int
* **singletonHours**, int
* **singletonNextSlot**, bool
* **singletonKey** string

Throttling jobs to 'one per time slot', where units could be seconds, minutes, or hours. This option is set on the send side of the API since jobs may or may not be created based on the existence of other jobs.

For example, if you set the `singletonMinutes` to 1, then submit 2 jobs within the same minute, only the first job will be accepted and resolve a job id. The second request will resolve a null instead of a job id.
Throttling jobs to 'one per time slot'. This option is set on the send side of the API since jobs may or may not be created based on the existence of other jobs.

> When a higher unit is is specified, lower unit configuration settings are ignored.
For example, if you set `singletonSeconds` to 60, then submit 2 jobs within the same minute, only the first job will be accepted and resolve a job id. The second request will resolve null instead of a job id.

Setting `singletonNextSlot` to true will cause the job to be scheduled to run after the current time slot if and when a job is throttled. This option is set to true, for example, when calling the convenience function `sendDebounced()`.

As with queue policies, using `singletonKey` will extend throttling to allow one job per key within the time slot.
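
A sketch combining these options (given a started instance `boss`; queue name and key are placeholders):

```js
// At most one job per user per 60-second slot
const jobId = await boss.send('send-welcome-email', { email: '[email protected]' }, {
  singletonSeconds: 60,
  singletonKey: 'user-123',
  singletonNextSlot: true // throttled jobs are scheduled for the next slot instead of being dropped
});

// Without singletonNextSlot, a throttled request resolves null instead of a job id
```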

**Dead Letter Queues**

* **deadLetter**, string

When a job fails after all retries, if a `deadLetter` property exists, the job's payload will be copied into that queue, copying the same retention and retry configuration as the original job.


```js
const payload = {
email: "[email protected]",
@@ -189,7 +156,7 @@ Like `sendThrottled()`, but instead of rejecting if a job is already sent in the current time slot, it will be scheduled to run in the next slot.

This is a convenience version of `send()` with the `singletonSeconds`, `singletonKey` and `singletonNextSlot` options assigned. The `key` argument is optional.

### `insert(Job[])`
### `insert(name, Job[])`

Create multiple jobs in one request with an array of objects.

Expand All @@ -209,8 +176,8 @@ interface JobInsert<T = object> {
startAfter?: Date | string;
singletonKey?: string;
expireInSeconds?: number;
deleteAfterSeconds?: number;
keepUntil?: Date | string;
deadLetter?: string;
}
```
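
For example, a rough sketch of a bulk insert with the updated signature (assumes `data` and `priority` fields carry over from the previous JobInsert shape):

```js
await boss.insert('send-welcome-email', [
  { data: { email: '[email protected]' } },
  { data: { email: '[email protected]' }, priority: 5, singletonKey: 'user-456' }
]);
```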

@@ -249,7 +216,8 @@ Returns an array of jobs from a queue
startedOn: Date;
singletonKey: string | null;
singletonOn: Date | null;
expireIn: PostgresInterval;
expireInSeconds: number;
deleteAfterSeconds: number;
createdOn: Date;
completedOn: Date | null;
keepUntil: Date;
@@ -290,6 +258,18 @@ Deletes a job by id.

Deletes a set of jobs by id.

### `deleteQueuedJobs(name)`

Deletes all queued jobs in a queue.

### `deleteStoredJobs(name)`

Deletes all jobs in completed, failed, and cancelled state in a queue.

### `deleteAllJobs(name)`

Deletes all jobs in a queue, including active jobs.
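
A quick sketch of the three helpers (given a started instance `boss`; queue name is a placeholder):

```js
await boss.deleteQueuedJobs('send-welcome-email'); // queued jobs only
await boss.deleteStoredJobs('send-welcome-email'); // completed, failed, and cancelled jobs
await boss.deleteAllJobs('send-welcome-email');    // all jobs, including active
```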

### `cancel(name, id, options)`

Cancels a pending or active job.
@@ -343,7 +323,4 @@ Retrieves a job with all metadata by name and id

**options**

* `includeArchive`: bool, default: false

If `true`, it will search for the job in the archive if not found in the primary job storage.

* **db**, object, see notes in `send()`
4 changes: 0 additions & 4 deletions docs/api/ops.md
@@ -44,10 +44,6 @@ By default, calling `stop()` without any arguments will gracefully wait for all workers to finish job processing.
Default: 30000. Maximum time (in milliseconds) to wait for workers to finish job processing before shutting down the PgBoss instance.


### `clearStorage()`

Utility function if and when needed to clear all job and archive storage tables. Internally, this issues a `TRUNCATE` command.

### `isInstalled()`

Utility function to see if pg-boss is installed in the configured database.
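
For example (sketch):

```js
const installed = await boss.isInstalled(); // resolves a boolean
```
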
35 changes: 14 additions & 21 deletions docs/api/queues.md
@@ -28,17 +28,24 @@ Allowed policy values:

> `stately` queues are special in how retries are handled. By definition, stately queues will not allow multiple jobs to occupy `retry` state. Once a job exists in `retry`, failing another `active` job will bypass the retry mechanism and force the job to `failed`. If this job requires retries, consider a custom retry implementation using a dead letter queue.

### `updateQueue(name, options)`
* **deadLetter**, string

Updates options on an existing queue. The policy can be changed, but understand this won't impact existing jobs in flight and will only apply the new policy on new incoming jobs.
When a job fails after all retries, if the queue has a `deadLetter` property, the job's payload will be copied into that queue, along with the same retention and retry configuration as the original job.

* **deleteAfterSeconds**, int

How long, in seconds, to keep jobs after they are processed.

### `purgeQueue(name)`
* Default: 7 days

Deletes all queued jobs in a queue.

### `updateQueue(name, options)`

Updates options on an existing queue. The policy can be changed, but understand this won't impact existing jobs in flight and will only apply the new policy on new incoming jobs.
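
As a sketch of the queue options above (given a started instance `boss`; assumes these options are passed to `createQueue()`; names and values are placeholders):

```js
await boss.createQueue('send-welcome-email', {
  policy: 'standard',
  deadLetter: 'welcome-email-dlq', // failed jobs are copied here after all retries
  deleteAfterSeconds: 60 * 60 * 24 // keep processed jobs for one day
});

// Later, change options; the new policy applies only to new incoming jobs
await boss.updateQueue('send-welcome-email', { policy: 'stately' });
```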

### `deleteQueue(name)`

Deletes a queue and all jobs from the active job table. Any jobs in the archive table are retained.
Deletes a queue and all jobs.

### `getQueues()`

Expand All @@ -48,20 +55,6 @@ Returns all queues

Returns a queue by name

### `getQueueSize(name, options)`

Returns the number of pending jobs in a queue by name.

`options`: Optional, object.
### `getQueueStats(name)`

| Prop | Type | Description | Default |
| - | - | - | - |
|`before`| string | count jobs in states before this state | states.active |

As an example, the following options object include active jobs along with created and retry.

```js
{
before: states.completed
}
```
Returns the number of jobs in various states in a queue. The result matches what `getQueue()` returns, but ignores the cached data and forces the stats to be retrieved immediately.
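
For example (sketch; the exact shape of the returned stats object isn't shown here):

```js
const stats = await boss.getQueueStats('send-welcome-email');
console.log(stats); // current counts per state, bypassing the cached values
```
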
2 changes: 2 additions & 0 deletions docs/index.html
@@ -12,6 +12,7 @@
<div id="app"></div>
<script>
window.$docsify = {
search: 'auto',
loadSidebar: true,
subMaxLevel: 3,
auto2top: true,
@@ -21,5 +22,6 @@
</script>
<!-- Docsify v4 -->
<script src="//cdn.jsdelivr.net/npm/docsify@4"></script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/plugins/search.min.js"></script>
</body>
</html>
1 change: 0 additions & 1 deletion docs/install.md
@@ -25,7 +25,6 @@ NOTE: If an existing schema was used during installation, created objects will need to be removed manually, such as with the following commands.
```sql
DROP TABLE pgboss.version;
DROP TABLE pgboss.job;
DROP TABLE pgboss.archive;
DROP TYPE pgboss.job_state;
DROP TABLE pgboss.subscription;
DROP TABLE pgboss.schedule;
2 changes: 1 addition & 1 deletion docs/introduction.md
@@ -17,4 +17,4 @@ In a worker, when your handler function completes, jobs will be marked `completed`.

Uncompleted jobs may also be assigned to `cancelled` state via [`cancel(name, id)`](#cancelname-id-options), where they can be moved back into `created` via [`resume(name, id)`](#resumename-id-options).
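
For example (sketch; `jobId` is a placeholder for an id returned by `send()`):

```js
await boss.cancel('send-welcome-email', jobId); // pending/active -> cancelled
await boss.resume('send-welcome-email', jobId); // cancelled -> created
```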

All jobs that are `completed`, `cancelled` or `failed` become eligible for archiving according to your configuration. Once archived, jobs will be automatically deleted after the configured retention period.
All jobs that are not actively deleted during processing will remain in `completed`, `cancelled` or `failed` state until they are automatically removed.
3 changes: 2 additions & 1 deletion docs/sql.md
@@ -21,7 +21,8 @@ CREATE TABLE pgboss.job (
started_on timestamp with time zone,
singleton_key text,
singleton_on timestamp without time zone,
expire_in interval not null default interval '15 minutes',
expire_seconds integer not null default (900),
deletion_seconds integer not null default (60 * 60 * 24 * 7),
created_on timestamp with time zone not null default now(),
completed_on timestamp with time zone,
keep_until timestamp with time zone NOT NULL default now() + interval '14 days',