Conversation


@tommasoclini tommasoclini commented Nov 13, 2025

This PR lets the user choose between a statically and a dynamically allocated cache.
This is useful whenever the page count and the number of keys are not known at compile time.

It does that with a List<T, N> structure, as described in the module.

This PR aims to solve issue #94.

@diondokter
Member

Hey, why did you make a Box<[T; N]>? Now you still need to know the amount of pages and keys at compile time.
The only difference is where it's allocated. But that doesn't really help much.
You can already put the cache on the heap that way by doing e.g. Box::new(PagePointerCache::<8>::new()).

I thought you were going to add new cache types that weren't going to have const generics anymore
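The distinction being discussed can be sketched as follows. This is a hypothetical illustration (the type names are made up, not sequential-storage's actual API): boxing a const-generic cache still fixes the capacity N at compile time, whereas a boxed slice Box<[T]> can be sized at runtime.

```rust
// Hypothetical types for illustration only -- not the crate's real cache types.

// Const-generic cache on the heap: storage lives on the heap,
// but N must still be known at compile time.
struct FixedCache<const N: usize> {
    pages: Box<[Option<u32>; N]>,
}

impl<const N: usize> FixedCache<N> {
    fn new() -> Self {
        Self { pages: Box::new([None; N]) }
    }
}

// Boxed-slice cache: the page count is chosen at runtime,
// so no const generic is needed.
struct DynCache {
    pages: Box<[Option<u32>]>,
}

impl DynCache {
    fn new(page_count: usize) -> Self {
        Self { pages: vec![None; page_count].into_boxed_slice() }
    }
}

fn main() {
    let fixed = FixedCache::<8>::new();
    let page_count = 8; // could come from runtime configuration
    let dynamic = DynCache::new(page_count);
    assert_eq!(fixed.pages.len(), dynamic.pages.len());
}
```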

@tommasoclini
Author

tommasoclini commented Nov 14, 2025

Have you actually read the documentation and/or the code in the list module?
@diondokter

@diondokter
Member

Ah I misread something when I scanned over it. It's indeed Box<[T]>.

Unsure if I like this approach. I'll reflect on it...
It's not the way I'd have done it.

@tommasoclini
Author

My goal was to touch as few things as possible; my change shouldn't be breaking.
How would you have done it?
I'm open to criticism obviously

@diondokter
Member

Hey, alright, this is how I would want to see it:

The cache traits are pretty simple. I think it's better to duplicate the logic for different types of implementations.
This will also have some performance benefits.

For example, the KeyPointerCache could use a hashmap instead of a vec, which would make cache lookups O(1) instead of O(N). It would also let you drop the const generic entirely.
It makes it so each implementation is the best version of itself.

So that's what I'd want to see: separate cache types for heap-allocated caches.
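The lookup-cost argument above can be sketched like this. The names here are illustrative, not the crate's real API: a Vec-backed key cache with a const-generic capacity does an O(N) linear scan per lookup, while a HashMap-backed one does an O(1) average-case lookup and needs no const generic at all.

```rust
use std::collections::HashMap;

// Hypothetical sketch -- names are illustrative, not sequential-storage's API.

// Vec-backed: capacity bounded by the const generic N,
// every lookup scans the entries linearly.
struct VecKeyCache<const N: usize> {
    entries: Vec<(u32, u32)>, // (key, address)
}

impl<const N: usize> VecKeyCache<N> {
    fn lookup(&self, key: u32) -> Option<u32> {
        // O(N): walk the list until the key matches
        self.entries.iter().find(|(k, _)| *k == key).map(|(_, addr)| *addr)
    }
}

// HashMap-backed: grows on the heap, no const generic needed.
struct MapKeyCache {
    entries: HashMap<u32, u32>, // key -> address
}

impl MapKeyCache {
    fn lookup(&self, key: u32) -> Option<u32> {
        // O(1) average case: hash the key directly
        self.entries.get(&key).copied()
    }
}

fn main() {
    let vec_cache = VecKeyCache::<4> { entries: vec![(7, 0x100)] };
    let map_cache = MapKeyCache { entries: HashMap::from([(7, 0x100)]) };
    assert_eq!(vec_cache.lookup(7), map_cache.lookup(7));
    assert_eq!(map_cache.lookup(8), None);
}
```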
