Summary
At the level of the interface, it may be useful to have a way to establish an upper limit on the number of messages per second that can be written to the device, so as to prevent the device from dropping or accumulating commands.
Motivation
The current client interface does not assume an upper bound on the number of messages one can send to a device; we effectively assume this value to be infinite. In the past, this assumption has been violated and resulted in dropped write messages from the client to the host when writing a few messages (2-3) in quick succession (effectively synchronously from the client side). @filcarv implemented a double-buffering strategy that greatly decreased this failure rate but, as far as I remember, did not eliminate it. We have recently come across something similar in the RP2040 implementation, which once again raised the need to control the outgoing message bandwidth from the client side.
Detailed Design
We should, first and foremost, add the implementation ID to the yaml file (https://github.com/harp-tech/protocol/blob/main/schema/device.json). This can be used as a hook for downstream behavior (not only this proposal, but also #18 may benefit).
The following is a list of potential candidates to consider for addressing this behavior:
Add an operator similar to Sample, but turning the temperature of the sequence cold (essentially buffering and resampling). It would be equivalent to CreateObservable(Do()->Delay)->Concat (see the sketch after this list).
Move the protocol to be synchronous and block writes until the device returns the expected write.
A low-level hardware flow-control solution, such as using CTS. This would be ideal since the timing is largely handled by the hardware and it would guarantee the device is ready to receive another message every single time.
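For illustration, a minimal Rx.NET sketch of the first candidate could look something like the following. RateLimit is a hypothetical name, not an existing Bonsai or Bonsai.Harp operator; it just cools the sequence and releases elements no faster than one per interval, i.e. the CreateObservable(Do()->Delay)->Concat pattern described above.

```csharp
// Minimal sketch, assuming System.Reactive; RateLimit is a hypothetical helper.
using System;
using System.Reactive.Linq;

static class RateLimitExtensions
{
    public static IObservable<T> RateLimit<T>(this IObservable<T> source, TimeSpan minInterval)
    {
        // Wrap each element in a cold inner sequence that emits the element and
        // only completes after the minimum interval has elapsed, then concatenate
        // the inner sequences so bursts are buffered and released one per interval.
        return source
            .Select(value => Observable
                .Return(value)
                .Concat(Observable.Empty<T>().Delay(minInterval)))
            .Concat();
    }
}
```

Writes would then be piped through something like commands.RateLimit(TimeSpan.FromMilliseconds(5)) before reaching the device. Note that the interval is still enforced by the OS scheduler, so this approach remains open-loop with some jitter.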
Drawbacks
1- High jitter given the non-RTOS environment; it locks the scheduler and is still "open-loop".
2- Annoying to keep track of; it kills the asynchronous grammar of Harp/Bonsai.
3- Needs some study to check whether it can be implemented on both the client and the host sides.
Alternatives
Unresolved Questions
Design Meetings
One option is putting a queue directly in the device to rate-limit message sending by making sure you get a reply to the previous command before sending another command (see the sketch after these notes):
could be another property of the Device node
lower throughput since we are waiting for replies before sending more commands (probably not so important, just leaving it as a note)
one possible danger here is what to do if the device or the transport layer drops a reply message:
timeout (configurable value?)
how to deal with edge cases like OP_REG, where extra messages are sent back-to-back with the reply (possibly confusing matches; this could be solved by sending the paired reply at the end of the dump, but we need to check what each implementation currently does, e.g. the rp2040 currently dumps all registers but requires calling a read function to ensure initialization / a meaningful value)
should try it out in a prototype
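To make the idea concrete, here is a minimal client-side sketch of the reply-gated queue, assuming the queue lives on the client/driver side. The sendAsync delegate, the OnReply hook, and the configurable timeout are illustrative assumptions rather than existing Bonsai.Harp API.

```csharp
// Minimal sketch of a reply-gated command queue; all names here are hypothetical.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

sealed class ReplyGatedWriter
{
    readonly ConcurrentQueue<byte[]> pending = new ConcurrentQueue<byte[]>();
    readonly SemaphoreSlim replyReceived = new SemaphoreSlim(0);
    readonly Func<byte[], Task> sendAsync;   // writes a serialized command to the transport
    readonly TimeSpan replyTimeout;          // configurable timeout for dropped replies

    public ReplyGatedWriter(Func<byte[], Task> sendAsync, TimeSpan replyTimeout)
    {
        this.sendAsync = sendAsync;
        this.replyTimeout = replyTimeout;
    }

    // Called by the transport layer whenever the reply to the last command arrives.
    public void OnReply() => replyReceived.Release();

    // Commands are queued here and released one at a time by the worker loop below.
    public void Enqueue(byte[] command) => pending.Enqueue(command);

    public async Task RunAsync(CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            if (pending.TryDequeue(out var command))
            {
                await sendAsync(command);
                // Wait for the reply before releasing the next command; fall
                // through on timeout so a dropped reply cannot stall the queue.
                await replyReceived.WaitAsync(replyTimeout, cancellation);
            }
            else
            {
                await Task.Delay(1, cancellation); // idle poll; a real version would use a signal
            }
        }
    }
}
```

Reply matching (e.g. by register address, including the OP_REG edge case above) would live in whatever code calls OnReply.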
The other option is to provide a generic rate-limiting operator where the first input queues values and the second input dequeues queued values (if any), and then connect the message stream from the device to the first (see the sketch below):
this could be done as a generic operator without knowledge of matching message replies
or as a Harp-specific operator where queueing / dequeuing follows register matching
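A minimal sketch of the generic variant, with no knowledge of Harp register matching, could look like the following. The QueueDequeue name and the choice of which stream drives which input are illustrative assumptions.

```csharp
// Minimal Rx.NET sketch; QueueDequeue is a hypothetical operator name.
using System;
using System.Collections.Generic;
using System.Reactive.Disposables;
using System.Reactive.Linq;

static class QueueDequeueExtensions
{
    // Buffers every element of `source`; each notification on `dequeue` releases
    // at most one buffered element downstream. Completion handling is simplified.
    public static IObservable<T> QueueDequeue<T, TSignal>(
        this IObservable<T> source, IObservable<TSignal> dequeue)
    {
        return Observable.Create<T>(observer =>
        {
            var queue = new Queue<T>();
            var gate = new object();

            var sourceSub = source.Subscribe(
                value => { lock (gate) queue.Enqueue(value); },
                observer.OnError,
                observer.OnCompleted);

            var dequeueSub = dequeue.Subscribe(_ =>
            {
                var hasNext = false;
                T next = default;
                lock (gate)
                {
                    if (queue.Count > 0) { next = queue.Dequeue(); hasNext = true; }
                }
                if (hasNext) observer.OnNext(next);
            });

            return new CompositeDisposable(sourceSub, dequeueSub);
        });
    }
}
```

In a workflow, outgoing commands could feed the queue input and the device's reply stream (or a register-matched subset of it) could feed the dequeue input, which would give the Harp-specific variant described above.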
Do a proof-of-concept of either of these to try out