
AI-Tutor2.0-patch

Principle

The two sentences to be compared are first concatenated into a single sequence, which the BERT model encodes into an overall representation; a fully connected layer then maps this representation to the probabilities that the pair is similar or dissimilar. Both English and Chinese sentence similarity calculation are supported.
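The concatenation step above follows BERT's standard sentence-pair input format: `[CLS] A [SEP] B [SEP]`, with segment ids marking which half each token belongs to. A minimal sketch (whitespace tokenization is used here purely for illustration; the real pipeline uses BERT's WordPiece tokenizer):

```python
# Build the BERT-style sentence-pair input: [CLS] A [SEP] B [SEP].
# Whitespace tokenization is a stand-in for WordPiece, for illustration only.

def build_pair_input(sent_a: str, sent_b: str):
    tokens_a = sent_a.split()
    tokens_b = sent_b.split()
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    # Segment id 0 covers [CLS], sentence A, and the first [SEP];
    # segment id 1 covers sentence B and the final [SEP].
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

tokens, segs = build_pair_input("how are you", "how do you do")
print(tokens)  # ['[CLS]', 'how', 'are', 'you', '[SEP]', 'how', 'do', 'you', 'do', '[SEP]']
print(segs)    # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

The classifier head then reads the final hidden state of the `[CLS]` position to produce the similar/dissimilar probabilities.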

Preparation

We use Google's pretrained BERT model, so you need to download it before running the code.

Click here: https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip

Decompress the zip file into 'data/pretrained_model/'. A parameter file is provided as well:

Link: https://pan.baidu.com/s/1N9hg0yRCbE3IBd79OBCR6w

Extraction code: gu7t

After downloading, just put it in 'data/'.
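A quick way to verify the layout before training is to check that the expected files are in place. This sketch assumes the unzipped checkpoint folder sits under 'data/pretrained_model/'; the checkpoint file names are those shipped inside the Google BERT zip:

```python
import os

# Files expected after decompressing the Google BERT zip into
# data/pretrained_model/ (names as shipped in the official archive).
EXPECTED = [
    "data/pretrained_model/chinese_L-12_H-768_A-12/bert_config.json",
    "data/pretrained_model/chinese_L-12_H-768_A-12/vocab.txt",
    "data/pretrained_model/chinese_L-12_H-768_A-12/bert_model.ckpt.index",
]

def missing_files(root="."):
    """Return the expected files that are not present under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

if __name__ == "__main__":
    for path in missing_files():
        print("missing:", path)
```

If the script prints nothing, the pretrained model is in place; any 'missing:' lines point at files still to be downloaded or moved.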
