
pozil edited this page Nov 14, 2024 · 13 revisions

LDVRecipes Class

A demonstration recipe showing how to process a large number of records in serial chunks using Queueables. The key idea is that, in production, Queueables have no maximum queue depth: as long as each job enqueues only one new queueable, the chain can keep cycling until the entire data set is processed. This is useful, for instance, when you need to process hundreds of thousands of records.

Note: You cannot re-enqueue a queueable within a test context, so the unit test for this code is limited to the same number of records as chunkSize below.

Note: This should be refactored into an abstract class named 'Ouroboros' that you can extend. (Ouroboros = the snake eating its own tail.)
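The self-chaining pattern these notes describe can be sketched as follows. This is a hedged illustration, not the recipe's actual implementation; the class name and WHERE clause are hypothetical, and ContentDocumentLink queries have their own filter requirements in a real org.

```apex
public with sharing class OuroborosSketch implements Queueable {
    private final Integer chunkSize = 200;
    private Id offsetId; // Id of the last record processed by the previous job

    public OuroborosSketch(Id offsetId) {
        this.offsetId = offsetId;
    }

    public void execute(System.QueueableContext ctx) {
        // Fetch the next chunk, keyed off the last-seen Id (illustrative query).
        List<ContentDocumentLink> chunk = [
            SELECT Id
            FROM ContentDocumentLink
            WHERE Id > :offsetId
            ORDER BY Id
            LIMIT :chunkSize
        ];
        // ... process the chunk here ...
        if (!chunk.isEmpty() && !Test.isRunningTest()) {
            // Enqueue exactly ONE follow-up job, continuing from the last Id.
            System.enqueueJob(new OuroborosSketch(chunk[chunk.size() - 1].Id));
        }
    }
}
```

Because each execution enqueues at most one successor, the chain never exceeds the one-child limit on queueables chained from a queueable context.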

Group LDV Recipes

Implements

Queueable

Fields

chunkSize

Signature

private final chunkSize

Type

Integer


offsetId

Signature

private offsetId

Type

Id


objectsToProcess

Signature

private objectsToProcess

Type

List<ContentDocumentLink>


chunksExecuted

TESTVISIBLE

Signature

private static chunksExecuted

Type

Integer

Constructors

LDVRecipes()

No param constructor. Use for starting the chain.

Signature

public LDVRecipes()

LDVRecipes(offsetId)

Constructor accepting an ID to use as an offset. Use this version to continue the chain.

Signature

public LDVRecipes(Id offsetId)

Parameters

Name Type Description
offsetId Id
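To start the chain, enqueue the no-argument version (for example, from Anonymous Apex); the running job then uses the offset-accepting constructor itself to continue the chain:

```apex
// Kick off processing of the full data set; subsequent chunks
// are enqueued by the job itself via new LDVRecipes(offsetId).
System.enqueueJob(new LDVRecipes());
```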

Methods

execute(queueableContext)

This method defines what happens to each chunk of records. Note that this example doesn't actually do any processing; in a real-life use case you'd iterate over the records stored in this.objectsToProcess.

Signature

public void execute(System.QueueableContext queueableContext)

Parameters

Name Type Description
queueableContext System.QueueableContext

Return Type

void


getRecordsToProcess(offsetId)

Returns a 'cursor': a set of up to chunkSize records starting after a given offset. Note: We originally intended to use the SOQL OFFSET keyword, but discovered the maximum OFFSET is 2,000, which won't work for data volumes larger than that, so we switched to using the ID of the last processed record. Since ID is an indexed field, this also lets us avoid full table scans even on the largest tables.

Signature

private List<ContentDocumentLink> getRecordsToProcess(Id offsetId)

Parameters

Name Type Description
offsetId Id The offset ID is used to demarcate already-processed records.

Return Type

List<ContentDocumentLink>
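The keyset-pagination approach described above might look like the following. This is a sketch under assumptions: the WHERE clause is illustrative, and the recipe's actual filter criteria may differ.

```apex
private List<ContentDocumentLink> getRecordsToProcess(Id offsetId) {
    // Id is indexed, so 'Id > :offsetId ORDER BY Id' walks the table in
    // stable chunks without the 2,000-row OFFSET cap and without a full
    // table scan. A null offset means this is the first chunk.
    if (offsetId == null) {
        return [SELECT Id FROM ContentDocumentLink ORDER BY Id LIMIT :chunkSize];
    }
    return [
        SELECT Id
        FROM ContentDocumentLink
        WHERE Id > :offsetId
        ORDER BY Id
        LIMIT :chunkSize
    ];
}
```

Ordering by Id is what makes the offset a reliable cursor: every chunk resumes exactly where the previous one left off.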


safeToReenqueue()

Signature

private Boolean safeToReenqueue()

Return Type

Boolean
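The page doesn't document this method's logic. One plausible sketch (an assumption, not the recipe's actual implementation) guards against chaining inside a test context, where re-enqueueing throws, and against the per-transaction queueable limit:

```apex
private Boolean safeToReenqueue() {
    // Chaining a queueable from a running test throws an exception, and
    // each transaction may only enqueue a limited number of queueable jobs.
    return !Test.isRunningTest()
        && Limits.getQueueableJobs() < Limits.getLimitQueueableJobs();
}
```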
