Here is a simplified example of the code that produces the faulty behaviour.
Call the function with a large slice of objects to persist (several thousand); a usage sketch follows the example below.
```go
// Simplified reproduction; c.delegate is a *firestore.Client
// (imports: context, errors, fmt, reflect, cloud.google.com/go/firestore).
func (c *Client) BulkCreate(ctx context.Context, collID string, src any) error {
	if reflect.TypeOf(src).Kind() != reflect.Slice {
		return errors.New("bulk create: src must be a slice")
	}

	w := c.delegate.BulkWriter(ctx)
	defer w.End() // this flushes all remaining documents at the end

	docs := reflect.ValueOf(src)
	collRef := c.delegate.Collection(collID)
	for i := 0; i < docs.Len(); i++ {
		bwj, err := w.Create(collRef.NewDoc(), docs.Index(i).Interface())
		if err != nil {
			return fmt.Errorf("bulk create: %w", err)
		}
		_ = bwj // the per-document job handle is never awaited; see the work-around below
	}
	return nil
}
```
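For reference, a call that reproduces the behaviour might look like the following minimal sketch. The project ID, the event type, and the sample data are placeholders for illustration (they are not part of the original report), and the sketch assumes it lives in the same package as Client:

```go
type event struct {
	Name  string `firestore:"name"`
	Index int    `firestore:"index"`
}

func run(ctx context.Context) error {
	fc, err := firestore.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		return err
	}
	defer fc.Close()

	// Several thousand documents, so writes are enqueued much faster than
	// the BulkWriter persists them.
	events := make([]event, 5000)
	for i := range events {
		events[i] = event{Name: "bulk-test", Index: i}
	}

	c := &Client{delegate: fc}
	return c.BulkCreate(ctx, "events", events)
}
```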
go.mod

```
go 1.23.4

require (
	cloud.google.com/go/firestore v1.17.0
)
```
Expected behavior
All documents passed to a BulkWriter are written to the database. If any document fails to be written, an error is returned.
Actual behavior
When documents are added to a BulkWriter quickly, some of them are silently skipped. The bulk operation appears to succeed, and there is no indication that any data is missing.
Possible explanation/work-around
After reading the source code, I put together the following work-around to make sure that no documents are skipped.
```go
// Inserted inside the loop, right after w.Create:
// The maxBatchSize of the BulkWriter is 20, after which the BulkWriter tries
// to write to the database. Wait for the write results before continuing;
// this prevents silent loss of data.
if (i+1)%maxBatchSize == 0 {
	if _, err := bwj.Results(); err != nil {
		return fmt.Errorf("bulk create document: %w", err)
	}
}
```
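Put together, the loop inside BulkCreate with the work-around applied could look like this minimal sketch; the maxBatchSize constant is an assumption that mirrors the BulkWriter's internal batch size of 20:

```go
const maxBatchSize = 20 // assumed to match the BulkWriter's internal batch size

for i := 0; i < docs.Len(); i++ {
	bwj, err := w.Create(collRef.NewDoc(), docs.Index(i).Interface())
	if err != nil {
		return fmt.Errorf("bulk create: %w", err)
	}
	// Blocking on the last job of every batch drains the queue before more
	// documents are enqueued.
	if (i+1)%maxBatchSize == 0 {
		if _, err := bwj.Results(); err != nil {
			return fmt.Errorf("bulk create document: %w", err)
		}
	}
}
```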
The faulty code seems to be the silencing of errors returned by the bundler.Add call inside the BulkWriter implementation.
It's possible that the bundler is returning an ErrOverflow error when documents are added to the queue faster than Firestore can persist them.
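One way to probe this hypothesis, not taken from the original report, is to artificially throttle the enqueue rate and check whether the loss disappears; if the bundler is overflowing, slowing down the Create calls should let every document arrive:

```go
// Diagnostic sketch: throttle the enqueue rate (requires the "time" import).
for i := 0; i < docs.Len(); i++ {
	if _, err := w.Create(collRef.NewDoc(), docs.Index(i).Interface()); err != nil {
		return fmt.Errorf("bulk create: %w", err)
	}
	// A short pause keeps the internal queue below the bundler's limits. If
	// the overflow hypothesis is correct, no documents go missing here.
	time.Sleep(5 * time.Millisecond)
}
```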
Client
firestore
Environment
go 1.23.4