In short, we should add a linter to ensure added Lua code is valid. I see this as a two-phase project:

1. Use an existing tool that lints Lua code, ensuring things like: imported files exist, only valid variables/functions are referenced, functions are called with the correct number of arguments, etc.
2. Create a custom linter that also does type checking. We can use the documentation comments to deduce the types of function arguments and return values, and something like the Hindley-Milner type system to infer the types of all variables. We can then ensure that all functions are called with the correct objects, that variables are only ever reassigned to the same type, and that only valid methods/properties are accessed on Finale objects. There is no existing solution for this, as we'd need to define every single object inside the linter (e.g., `finale.FCString()`).
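For Phase 1, an off-the-shelf tool like luacheck could cover most of this. A sketch of a possible `.luacheckrc` (the global names and Lua version below are assumptions about the Finale scripting environment and would need to be checked against the real API):

```lua
-- .luacheckrc (sketch, not a tested config)
std = "lua51"            -- adjust to whichever Lua version the plugin embeds
read_globals = {
    "finale",            -- assumed entry point to the Finale object model
    "finenv",            -- assumed script-environment globals
}
max_line_length = false  -- don't fail on long lines in generated docs
```

With a config like this in the repo root, running `luacheck .` in CI would flag undefined globals, unused variables, and similar basic mistakes.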
Phase 1 seems quite doable and would catch basic mistakes that would be detrimental to our code. While it wouldn't be too useful when developing scripts (because we are probably manually testing our scripts to begin with), it could catch errors when refactoring our shared library.
Phase 2 would yield massive benefits in stability for the ecosystem as a whole, especially since it's unlikely we'll be able to add tests as described in #255. However, this would take a lot of effort on our end. Perhaps not worth it right now (though I'm working on a similar project for my personal use so I'd be able to share a lot of the same learnings).
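To give a sense of what Phase 2 involves: the heart of Hindley-Milner inference is unification, which solves for unknown types by matching them against known ones. A minimal sketch (in Python rather than Lua, and with `FCString` as a hypothetical stand-in for a Finale class; a real inferencer would also need generalization and an environment):

```python
class TypeVar:
    """An unknown type to be solved for."""
    _count = 0
    def __init__(self):
        TypeVar._count += 1
        self.name = f"t{TypeVar._count}"

class TypeCon:
    """A concrete type such as 'number', 'string', or a Finale class."""
    def __init__(self, name, args=()):
        self.name = name
        self.args = tuple(args)

def resolve(t, subst):
    """Follow substitutions until we reach a concrete type or an unbound var."""
    while isinstance(t, TypeVar) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Unify two types, extending subst in place; raise TypeError on mismatch."""
    a, b = resolve(a, subst), resolve(b, subst)
    if isinstance(a, TypeVar):
        subst[a] = b
        return subst
    if isinstance(b, TypeVar):
        subst[b] = a
        return subst
    if a.name != b.name or len(a.args) != len(b.args):
        raise TypeError(f"cannot unify {a.name} with {b.name}")
    for x, y in zip(a.args, b.args):
        unify(x, y, subst)
    return subst

if __name__ == "__main__":
    # A variable passed to a function whose doc comment says it takes an
    # FCString must itself be an FCString.
    v = TypeVar()
    subst = unify(v, TypeCon("FCString"), {})
    assert resolve(v, subst).name == "FCString"
```

The linter would walk each script, create a `TypeVar` for every local, and unify it against the types deduced from the documentation comments; a `TypeError` here becomes a lint error there.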
This is inspired by #255.
cc @rpatters1