iSpindel Approx ABV calculation has potential to be less than approximate #329

Open
sickchilly opened this issue Jan 28, 2022 · 1 comment

Comments

@sickchilly

This feature is really cool, so thanks for that. But depending on the actual data captured, the calculation can be pretty far off. I was just looking through some of my archived sessions and noticed two that illustrate the extremes. Sometimes I'll start the iSpindel session before I drop it in the tank, so it's either propped up or lying flat and records abnormal, extremely high or extremely low values at the start of the session. Conversely, sometimes I'll forget to stop the session when removing the iSpindel from the tank and will get a few bogus data points at the end.
So I wonder if there's some way to detect the bogus data points and exclude them from the calculation, or perhaps normalize/average the first X hours and the final X hours of the session data to minimize the extremes?
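Something like this, roughly (a Python sketch of the averaging idea; the reading structure and field names here are just assumptions for illustration, not how the server actually stores sessions):

```python
from datetime import timedelta

def windowed_og_fg(readings, hours=2):
    """Average gravity over the first/last `hours` of a session instead of
    trusting single (possibly bogus) endpoint readings.
    `readings` is assumed to be a time-ordered list of dicts with
    'time' (datetime) and 'gravity' keys -- a made-up structure for
    illustration only."""
    if not readings:
        return None, None
    window = timedelta(hours=hours)
    start, end = readings[0]['time'], readings[-1]['time']
    first = [r['gravity'] for r in readings if r['time'] - start <= window]
    last = [r['gravity'] for r in readings if end - r['time'] <= window]
    return sum(first) / len(first), sum(last) / len(last)
```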
Here are two examples showing an incorrectly high approximation and an incorrectly negative approximation:
[Screenshots of the two sessions attached]

@tmack8001
Collaborator

tmack8001 commented Jan 29, 2022

This feature is really cool, so thanks for that.

Thanks, this had been on my list of things to add for quite some time actually. Someone outside of the Picobrew world recently asked whether this server (I've shared screenshots elsewhere) supported non-Pico brewing products, and I said yes, since after all it does support Tilt and iSpindel/iSpindle ... long story short, there might be non-Pico folks joining in on our little fun side project soon (including the creators of Tilt).

Regarding the incorrect measurements: yes, there isn't any data-accuracy smoothing or automatic outlier detection in the algorithm so far (and neither does the Tilt spreadsheet, BTW). Sure, we can add it, but wouldn't it be simpler for now to go into the brew session and delete those erroneous data points? I like the idea of averaging the first N data points (over a time range) and the last M data points (over a similarly sized time range), though if you left your Tilt logging for days after you racked, even an hours-long window would still be wrong for some people. Even with an average, outliers in the raw data will skew the OG/FG calculation far outside normal ranges, so maybe we could add standard-deviation detection of potential outliers.

However, I think the better strategy is simply to provide the raw data to the brewer/user to edit as they want (this is both what we have today via editing the session file and what tiltpi does with Google Docs). There could be some nice additions to alert when outliers are detected and attempt to ignore them. I frequently get high values reported during racking and cleaning since I often forget to disable logging before doing these actions.
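For illustration, a minimal sketch of the standard-deviation idea plus the usual homebrew ABV approximation (not the server's actual code; the input list and the 131.25 factor are assumptions on my part):

```python
import statistics

def drop_outliers(gravities, k=2.0):
    """Discard readings more than k standard deviations from the mean --
    a simple first pass at flagging bogus points (propped-up iSpindel,
    post-racking spikes, etc.)."""
    if len(gravities) < 3:
        return list(gravities)
    mean = statistics.mean(gravities)
    stdev = statistics.stdev(gravities)
    if stdev == 0:
        return list(gravities)
    return [g for g in gravities if abs(g - mean) <= k * stdev]

def approx_abv(og, fg):
    """Common homebrew approximation: ABV ~= (OG - FG) * 131.25."""
    return (og - fg) * 131.25
```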

Anyways, thanks @sickchilly for the feedback and the issue to track this. For now, manually cleaning these data points out of the session will work for most people. You can edit the JSON session file via SMB (Samba network sharing), an SSH terminal session, or a local terminal session (or the Raspbian desktop if using the display variant).
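As an example of that manual cleanup (purely illustrative -- the session file path, field names, and gravity bounds below are guesses, so check your own file's structure and back it up first):

```python
import json

# Hypothetical path -- locate your actual archived session file first.
path = 'sessions/iSpindel/archive/example_session.json'

with open(path) as f:
    readings = json.load(f)

# Keep only plausible specific-gravity readings; the bounds and the
# 'gravity' key are assumptions about the file's structure.
cleaned = [r for r in readings if 0.990 <= r.get('gravity', 0) <= 1.120]

with open(path, 'w') as f:
    json.dump(cleaned, f, indent=2)
```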
