
[Feat]: Optimize LRU cache promotion #2946

Closed
zztaki opened this issue Dec 5, 2023 · 3 comments

@zztaki (Contributor) commented Dec 5, 2023

Is your feature request related to a problem?
In src/common/lru_cache.h, LRUCache's MoveToFront method copies the requested element, erases it from the list, pushes the duplicate to the list's head, and then updates the hash table entry to point at the new node. This wastes time.

Describe the solution you'd like
std::list::splice can replace all of the above steps: it relinks the existing node to the front of the list without copying it, and the iterator stored in the hash table stays valid, so no hash-table update is needed (see the sketch below).
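
To make the suggestion concrete, here is a minimal sketch of splice-based promotion, assuming the cache keeps a std::list of key/value pairs plus an unordered_map from key to list iterator; the class name, members, and eviction handling are illustrative, not the actual curve code:

    // Illustrative only: a tiny LRU that promotes entries with std::list::splice.
    #include <cstddef>
    #include <list>
    #include <unordered_map>
    #include <utility>

    template <typename K, typename V>
    class SpliceLRU {
     public:
        explicit SpliceLRU(size_t capacity) : capacity_(capacity) {}

        bool Get(const K& key, V* value) {
            auto it = index_.find(key);
            if (it == index_.end()) {
                return false;
            }
            // Relink the node to the front in place: no element copy, no
            // erase/push_front, and the iterator stored in index_ stays valid,
            // so the hash table needs no update.
            items_.splice(items_.begin(), items_, it->second);
            *value = it->second->second;
            return true;
        }

        void Put(const K& key, const V& value) {
            auto it = index_.find(key);
            if (it != index_.end()) {
                it->second->second = value;
                items_.splice(items_.begin(), items_, it->second);
                return;
            }
            if (items_.size() >= capacity_) {
                index_.erase(items_.back().first);
                items_.pop_back();
            }
            items_.emplace_front(key, value);
            index_[key] = items_.begin();
        }

     private:
        size_t capacity_;
        std::list<std::pair<K, V>> items_;
        std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> index_;
    };

Since splice only relinks list nodes, promotion is O(1) and does not invalidate iterators or references to the moved element.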

Describe alternatives you've considered

Additional context/screenshots
I've done some performance testing with EPOCH = 1000000:

    const int EPOCH = 1000000;  // iterations per timed loop

    auto cache = std::make_shared<LRUCache<int, int>>(5, nullptr);
    for (int i = 0; i < 5; i++) {
        cache->Put(i, i);
    }
    clock_t start, end;

    // Baseline: Get() promotes by copying the element, erasing it, pushing the
    // duplicate to the front, and updating the hash table.
    start = clock();
    for (int i = 0; i < EPOCH; i++) {
        int v;
        for (int k = 0; k < 5; k++) {
            cache->Get(k, &v);
            assert(k == v);
        }
    }
    end = clock();
    std::cout << "Get with duplicate: " << (double)(end - start) / CLOCKS_PER_SEC << std::endl;

    // Candidate: GetWithSplice() promotes with std::list::splice, no copy.
    start = clock();
    for (int i = 0; i < EPOCH; i++) {
        int v;
        for (int k = 0; k < 5; k++) {
            cache->GetWithSplice(k, &v);
            assert(k == v);
        }
    }
    end = clock();
    std::cout << "Get without duplicate: " << (double)(end - start) / CLOCKS_PER_SEC << std::endl;

The result is:

Get with duplicate: 2.17597
Get without duplicate: 1.42666
zztaki added the enhancement label on Dec 5, 2023

@zztaki (Contributor, Author) commented Dec 5, 2023

If you think this makes sense, I'd be glad to open PRs to optimize LRUCache, TimedLRUCache, and SglLRUCache without affecting their semantics. :)

@Ziy1-Tan (Contributor) commented Dec 6, 2023

> If you think this makes sense, I'd be glad to open PRs to optimize LRUCache, TimedLRUCache, and SglLRUCache without affecting their semantics. :)

Good catch, just do it! Maybe you could use google/benchmark to evaluate the improvement. :)

Ziy1-Tan assigned zztaki and unassigned ilixiaocui on Dec 6, 2023

@zztaki (Contributor, Author) commented Dec 6, 2023

> If you think this makes sense, I'd be glad to open PRs to optimize LRUCache, TimedLRUCache, and SglLRUCache without affecting their semantics. :)
>
> Good catch, just do it! Maybe you could use google/benchmark to evaluate the improvement. :)

Nice tool! I will try it and submit a PR soon~
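
For reference, a google/benchmark harness for the comparison above could look roughly like this; it is a sketch only, and the LRUCache constructor and the GetWithSplice name are taken from the test snippet in this issue rather than a confirmed interface:

    // Sketch of a google/benchmark harness for the two promotion paths.
    #include <benchmark/benchmark.h>
    #include <memory>
    #include "src/common/lru_cache.h"  // assumed header, as referenced in this issue

    template <bool UseSplice>
    static void BM_LRUGet(benchmark::State& state) {
        auto cache = std::make_shared<LRUCache<int, int>>(5, nullptr);
        for (int i = 0; i < 5; i++) {
            cache->Put(i, i);
        }
        int v = 0;
        for (auto _ : state) {
            for (int k = 0; k < 5; k++) {
                if (UseSplice) {
                    cache->GetWithSplice(k, &v);
                } else {
                    cache->Get(k, &v);
                }
                benchmark::DoNotOptimize(v);
            }
        }
    }
    BENCHMARK_TEMPLATE(BM_LRUGet, false);  // copy-based promotion
    BENCHMARK_TEMPLATE(BM_LRUGet, true);   // splice-based promotion

    BENCHMARK_MAIN();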
