Does the vocabulary we should use depend on our data? #33

@RashidLadj

Description

Hello @rmsalinas @shinsumicco ,

I had to test DBoW2, DBoW3, and FBoW, and there is something important I didn't understand.
With DBoW2, a demo.cpp is provided along with a dataset of 4 images: the first step is to extract the features, then create the vocabulary from those features, and finally compute the score between each pair of images.
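
For reference, here is a condensed sketch of how I understand the demo.cpp flow (the image paths and the k/L values are just the demo's defaults, nothing special):

```cpp
// Condensed sketch of the DBoW2 demo: extract ORB features from 4 images,
// build a vocabulary from them, then score every pair of images.
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include "DBoW2.h"  // defines OrbVocabulary (TemplatedVocabulary specialised for ORB)

int main()
{
  const int NIMAGES = 4;                        // the demo's 4-image dataset
  std::vector<std::vector<cv::Mat>> features;   // one set of row descriptors per image

  cv::Ptr<cv::ORB> orb = cv::ORB::create();
  for(int i = 0; i < NIMAGES; ++i)
  {
    // "images/imageN.png" follows the demo layout; replace with your own paths
    cv::Mat image = cv::imread("images/image" + std::to_string(i) + ".png",
                               cv::IMREAD_GRAYSCALE);
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    orb->detectAndCompute(image, cv::Mat(), keypoints, descriptors);

    // DBoW2 expects each descriptor as its own cv::Mat row
    std::vector<cv::Mat> rows(descriptors.rows);
    for(int r = 0; r < descriptors.rows; ++r) rows[r] = descriptors.row(r);
    features.push_back(rows);
  }

  // k = branching factor, L = depth: the tree has at most k^L leaf words,
  // so these two parameters bound the vocabulary size
  const int k = 9, L = 3;
  OrbVocabulary voc(k, L, DBoW2::TF_IDF, DBoW2::L1_NORM);
  voc.create(features);

  // Score every pair of images with the vocabulary just built
  DBoW2::BowVector v1, v2;
  for(int i = 0; i < NIMAGES; ++i)
  {
    voc.transform(features[i], v1);
    for(int j = 0; j < NIMAGES; ++j)
    {
      voc.transform(features[j], v2);
      std::cout << "Image " << i << " vs Image " << j << ": "
                << voc.score(v1, v2) << std::endl;
    }
  }
  return 0;
}
```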

With FBoW it's much the same, except that a vocabulary file built from ORB descriptors is provided, so I used it directly to compute the correspondence between each pair of images in my dataset, which gave me pretty good results. I also built my own vocabulary from my dataset and repeated the test on it, and I again get fairly good results (maybe slightly worse than the first ones). So my questions are (a sketch of my FBoW test appears after this list):

  • The existing vocabulary: which image dataset was it built with?
  • Does the vocabulary to use depend on our data? (I think not, since I tested my own dataset with the existing vocabulary and it gave me good results.)
  • For a vocabulary to be considered rich and robust, should it be created from a huge dataset containing images from many different places?
  • In addition, I would like to know the impact of the choice of L (depth/levels of the tree) and of K (number of children of each node).
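
And this is roughly what I do on the FBoW side with a prebuilt vocabulary (the include path, the .fbow filename, and the image paths below are placeholders for my setup):

```cpp
// Sketch of the FBoW test: load a prebuilt vocabulary file, compute one
// bag-of-words vector per image, then score every pair of images.
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <fbow/fbow.h>

int main()
{
  fbow::Vocabulary voc;
  voc.readFromFile("orb_vocabulary.fbow");    // placeholder name for the provided ORB vocabulary

  cv::Ptr<cv::ORB> orb = cv::ORB::create();
  const std::vector<std::string> paths = {    // placeholder paths to my dataset
      "img0.png", "img1.png", "img2.png", "img3.png"};

  // One bag-of-words vector per image
  std::vector<fbow::fBow> bows;
  for(const auto &p : paths)
  {
    cv::Mat image = cv::imread(p, cv::IMREAD_GRAYSCALE);
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    orb->detectAndCompute(image, cv::Mat(), keypoints, descriptors);
    bows.push_back(voc.transform(descriptors));
  }

  // Pairwise similarity: higher scores mean the images share more visual words
  for(size_t i = 0; i < bows.size(); ++i)
    for(size_t j = i + 1; j < bows.size(); ++j)
      std::cout << paths[i] << " vs " << paths[j] << ": "
                << fbow::fBow::score(bows[i], bows[j]) << std::endl;
  return 0;
}
```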

Thank you kindly for helping clarify my ideas a little, and thank you for your code, which is very clean and will be used by a lot of people.

Good luck for the future.
