
Proposal for view and data provider separation #139

Open · intonarumori opened this issue Mar 18, 2020 · 5 comments

@intonarumori

First of all, many thanks for this great library.

I'm in the process of adapting this library for my app and came across some of the limitations that were already mentioned in previous issues.

I'd like to put forward a proposal to separate the view logic from reading data:

  • Define a protocol for the data source: it should support reading data in batches, returning Data chunks or [Float] arrays for requested ranges
  • Clean up FDWaveformView and FDWaveformRenderOperation, removing the AVFoundation dependency so they use only the data source protocol

Once you implement the data source protocol, you are free to integrate with any arbitrary data source, not just AVAssets.
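A minimal sketch of what such a protocol could look like (the name and signatures here are illustrative, not a proposed final API):

```swift
import Foundation

/// Illustrative only: one possible shape for the data source protocol.
public protocol WaveformDataSource {
    /// Total number of samples available from this source.
    var totalSamples: Int { get }

    /// Read the samples in `range` and deliver them as a [Float] array,
    /// or nil on failure. An implementation might read from an AVAsset,
    /// a file, a network stream, or generate samples on the fly.
    func readSamples(in range: CountableRange<Int>,
                     completion: @escaping ([Float]?) -> Void)
}
```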

As a proof of concept, I did all of the above:

  • Defined FDAudioContextProtocol to describe the data access layer
  • Updated FDAudioContext to encapsulate all code related to loading AVAssets, implementing FDAudioContextProtocol
  • Updated FDWaveformRenderOperation to use only FDAudioContextProtocol
  • Added SineWaveAudioContext as an example to demonstrate a custom data source based on a generated sine wave (see the sketch after this list)
  • Added an option to the iOS example to load the sine wave; see the screenshots below
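For illustration, a generated source in the spirit of SineWaveAudioContext could look roughly like this, written against the hypothetical protocol sketched above (the actual implementation in the example project may differ):

```swift
import Foundation

/// Illustrative sketch of a generated sine-wave data source.
final class SineWaveSource: WaveformDataSource {
    let totalSamples: Int
    private let frequency: Double
    private let sampleRate: Double

    init(duration: Double = 5.0, frequency: Double = 440.0,
         sampleRate: Double = 44_100.0) {
        self.totalSamples = Int(duration * sampleRate)
        self.frequency = frequency
        self.sampleRate = sampleRate
    }

    func readSamples(in range: CountableRange<Int>,
                     completion: @escaping ([Float]?) -> Void) {
        // Samples are computed on demand; no AVFoundation involved.
        let samples = range.map { i in
            Float(sin(2.0 * Double.pi * frequency * Double(i) / sampleRate))
        }
        completion(samples)
    }
}
```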

I could achieve all of this with a couple of API changes:

  • made FDWaveformView.audioContext public so I can assign my custom sine example
  • made the WaveformType enum public
  • the rest of the changes were internal and do not affect the public API

I would be interested to know whether this is something you would consider for the project.
If so, let's start a discussion about the specifics; I can dedicate some time to this task in the coming weeks.
[Screenshots: the iOS example app rendering the generated sine wave in the simulator (iPhone 8, 2020-03-18)]

@fulldecent (Owner)

Thank you for sharing! I can see you put a lot of effort into this and have considered the external API, the implementation choices, and the modularity of the parts. And I appreciate that you put SineWaveAudioContext into the examples project rather than the main project. Top notch.

I am reading through the pull request. Right off the bat I can say that overall, yes, separating these concerns is a welcome change. As for your other questions: making FDWaveformView.audioContext public is acceptable, and likewise for WaveformType, as has already been noted in the code.

So far my immediate questions are:

  • Should FDWaveformType's process(normalizedSamples: inout [Float]) be moved into the context and out of the FDWaveformView file? (See the sketch after this list.)
  • Are the class names general enough given their new responsibilities?
  • Are there TODO notes in the code reminding us of any remaining issues?
  • Is incremental rendering (#2) still possible going forward?
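On the first question, one possible shape (illustrative only; the protocol name and requirements here are hypothetical, not the actual FDAudioContextProtocol) would be to make the processing step part of the context protocol with a pass-through default:

```swift
/// Illustrative sketch: moving sample post-processing onto the context.
protocol AudioContextWithProcessing {
    var totalSamples: Int { get }

    /// Post-process raw samples before rendering, e.g. clamping to a
    /// noise floor for a logarithmic display.
    func process(normalizedSamples: inout [Float])
}

extension AudioContextWithProcessing {
    /// Default: leave samples unchanged, so simple sources (such as a
    /// generated sine wave) need not implement this.
    func process(normalizedSamples: inout [Float]) {}
}
```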

Thanks again, and I'm hoping this opens up the discussion.

@intonarumori (Author)

Thanks for the reply.

I will look more into the codebase over the weekend to see what the best way to abstract the reading part would be. This was more of a proof of concept, done without affecting any of the APIs.

I'll get back to you with my answers in the coming days.

@fulldecent (Owner)

Another note: maybe samplesPerPixel should live outside of that reader.
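One way to read that, as a sketch: the reader returns raw samples for a requested range, and the view or render operation does the per-pixel binning itself (this helper is hypothetical):

```swift
/// Illustrative sketch: downsampling kept out of the reader. The caller
/// decides samplesPerPixel (e.g. from the view's bounds) and bins the
/// raw samples itself.
func downsample(_ samples: [Float], samplesPerPixel: Int) -> [Float] {
    stride(from: 0, to: samples.count, by: samplesPerPixel).map { start in
        let end = min(start + samplesPerPixel, samples.count)
        // Peak per bin; a real implementation might use RMS instead.
        return samples[start..<end].map(abs).max() ?? 0
    }
}
```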

fulldecent added a commit that referenced this issue Dec 27, 2020
@fulldecent (Owner) commented Dec 27, 2020

Significant progress on this issue is at #144

It's nearly done; I left this open in case anyone wants to handle that last part. And thanks again for the great idea here!

@intonarumori (Author)

@fulldecent Thanks, I will take a look once I have time.

I ended up going in another direction with Metal rendering of the waveform, mostly because I needed large zoom levels where even individual samples can be seen. I'm still in the process, but maybe the learnings can be useful here in the future as well.

Looking forward to the updates here. Happy new year!
