Proposal for view and data provider separation #139
Thank you for sharing! I can see you put a lot of effort into this and have considered the external API, the implementation choices, and also the modularity of the parts. And I appreciate that you put the SineWaveAudioContext into the examples project, not the main project. Top notch.

I am reading through the pull request. Right off the bat I can say that overall, yes, this is a welcome change: separating the concerns. For your other questions, making … So far my immediate questions are: …

Thanks again, and I'm hoping this opens up the discussion.
Thanks for the reply. I will look more into the codebase over the weekend to see what would be the best way to abstract the reading part. This was more of a proof of concept of doing it without affecting any of the APIs. I'll get back to you with my answers in the coming days.
Another note: maybe samplesPerPixel should live outside of that reader.
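A minimal sketch of what that separation could look like. This is only an illustration of the design idea, not the library's actual code: the names `AudioSampleReader` and `downsample` are hypothetical, and the real reader protocol in the pull request may be shaped differently.

```swift
import Foundation

// Hypothetical reader: it only knows how to hand back raw samples for a range.
// It has no notion of pixels, so samplesPerPixel stays with the caller.
protocol AudioSampleReader {
    var totalSamples: Int { get }
    func samples(in range: Range<Int>) -> [Float]
}

// The rendering side owns samplesPerPixel and does the downsampling itself.
func downsample(_ reader: AudioSampleReader,
                range: Range<Int>,
                samplesPerPixel: Int) -> [Float] {
    let raw = reader.samples(in: range)
    guard samplesPerPixel > 0, !raw.isEmpty else { return raw }
    // One peak value per pixel: max of the absolute sample values in each bucket.
    return stride(from: 0, to: raw.count, by: samplesPerPixel).map { start in
        let end = min(start + samplesPerPixel, raw.count)
        return raw[start..<end].reduce(0) { max($0, abs($1)) }
    }
}
```

Keeping the pixel-mapping concern out of the reader would let the same data source serve views with different zoom levels.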
Significant progress on this issue is at #144. It is nearly done; I have left this open in case anyone wants to handle that last part. And thanks again for the great idea here!
@fulldecent Thanks, I will take a look once I have time. I ended up going in another direction with Metal rendering of the waveform, mostly because I needed large zoom levels where even individual samples can be seen. I'm still in the process, but maybe the learnings can be useful in the future here as well. Looking forward to the updates here. Happy new year!
First of all, many thanks for this great library.
I'm in the process of adapting this library for my app and came across some of the limitations that were already mentioned in previous issues.
I'd like to put forward a proposal to separate the view logic from reading data:

- A data source protocol that provides `Data` chunks or `[Float]` arrays based on requested ranges
- `FDWaveformView` and `FDWaveformRenderOperation` removing the `AVFoundation` dependency: only use the data source protocol

Once you implement the data source protocol, you are free to integrate with any arbitrary data source, not just `AVAsset`s.

As a proof of concept I did all of the above (see the sketch at the end of this post for a rough illustration):

- `FDAudioContextProtocol` to define the data access layer
- `FDAudioContext` to encapsulate all code related to loading `AVAsset`s, implementing `FDAudioContextProtocol`
- `FDWaveformRenderOperation` changed to only use the `FDAudioContextProtocol`
- `SineWaveAudioContext` as an example to demonstrate a custom data source based on a generated sine wave

I could achieve all of this with a couple of API changes:

- Making `FDWaveformView.audioContext` public so I can assign my custom sine example
- Making the `WaveformType` enum public

I would be interested to hear whether this is something you would consider for the project. If so, let's start a discussion about the specifics. I can dedicate some time to this task in the coming weeks.
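For readers skimming the thread, here is a minimal sketch of the kind of separation being proposed. The protocol shape, method names, and the sine-wave type below (`WaveformDataSource`, `SineWaveDataSource`) are illustrative assumptions, not the actual `FDAudioContextProtocol` or `SineWaveAudioContext` code from the pull request:

```swift
import Foundation

// Illustrative data-access protocol in the spirit of the proposed FDAudioContextProtocol.
protocol WaveformDataSource {
    /// Total number of samples available from this source.
    var totalSamples: Int { get }
    /// Return the samples for the requested range as a [Float] array.
    func samples(in range: Range<Int>) -> [Float]
}

// A generated sine wave, analogous in intent to the SineWaveAudioContext example:
// no AVFoundation, no AVAsset, just synthesized data behind the same protocol.
struct SineWaveDataSource: WaveformDataSource {
    let sampleRate: Double
    let frequency: Double
    let duration: Double

    var totalSamples: Int { Int(sampleRate * duration) }

    func samples(in range: Range<Int>) -> [Float] {
        let clamped = range.clamped(to: 0..<totalSamples)
        return clamped.map { index in
            Float(sin(2.0 * .pi * frequency * Double(index) / sampleRate))
        }
    }
}

// Example: ten seconds of a 440 Hz tone, read in chunks.
let source = SineWaveDataSource(sampleRate: 44_100, frequency: 440, duration: 10)
let firstChunk = source.samples(in: 0..<1_024)
print("total:", source.totalSamples, "chunk:", firstChunk.count)
```

With something along these lines in place, wiring a custom source into the view would presumably reduce to a single assignment on the now-public property, e.g. `waveformView.audioContext = mySource`, which is why the proposal also asks to make `FDWaveformView.audioContext` and the `WaveformType` enum public.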