msamprovalaki/Transformers-Text-Classification

Overview

This project fine-tunes a pre-trained BERT-family model (DistilBERT) for sentiment analysis of movie reviews, with a focus on hyperparameter tuning. The primary objectives are to optimize the model's task-specific layers and to determine how many BERT encoder blocks to keep frozen. Tuning is performed on the development subset of the dataset to improve performance.

Key Tasks

  • Fine-Tuning BERT Model: Implement fine-tuning procedures to adapt BERT for sentiment analysis.
  • Hyperparameter Tuning: Explore hyperparameters such as the sizes of the task-specific layers and the number of frozen BERT encoder blocks.
  • Experimental Comparisons: Provide experimental results, including the performance of a baseline majority classifier and comparisons with the best classifiers from prior exercises.
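The freezing step of the tuning process can be sketched as follows. The helper assumes Hugging Face-style parameter names (`distilbert.transformer.layer.<i>...`), which is an assumption about the setup, not a detail taken from this repository; a mock parameter class stands in for a real tensor so the sketch runs standalone.

```python
# Sketch: freeze the first `num_frozen` DistilBERT encoder blocks before fine-tuning.
# Parameter names follow the Hugging Face DistilBERT convention (an assumption);
# MockParam stands in for torch.nn.Parameter so the example is self-contained.
import re

class MockParam:
    """Minimal stand-in for a torch parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = True

def freeze_blocks(named_parameters, num_frozen):
    """Disable gradients for the embeddings and the first `num_frozen` encoder blocks."""
    pattern = re.compile(r"transformer\.layer\.(\d+)\.")
    for name, param in named_parameters:
        match = pattern.search(name)
        if "embeddings" in name:
            param.requires_grad = False      # embeddings are always frozen here
        elif match and int(match.group(1)) < num_frozen:
            param.requires_grad = False      # block index below the freeze cutoff

# Demonstration with mock parameters mimicking DistilBERT's 6 encoder blocks.
params = {f"distilbert.transformer.layer.{i}.ffn.lin1.weight": MockParam()
          for i in range(6)}
params["distilbert.embeddings.word_embeddings.weight"] = MockParam()
freeze_blocks(params.items(), num_frozen=4)
trainable = sorted(n for n, p in params.items() if p.requires_grad)
```

With a real model, `params.items()` would be replaced by `model.named_parameters()`, and `num_frozen` becomes one of the hyperparameters searched over on the development set.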

Experimental Setup

  • Dataset: A movie-review sentiment dataset, with a held-out development subset reserved for hyperparameter tuning.
  • Monitoring Performance: During training, monitor the model's performance on the development subset to select the optimal number of training epochs.
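Epoch selection on the development subset can be sketched as a simple early-stopping loop. The function name and the `patience` value below are illustrative choices, not names from this repository, and the accuracy history is made-up data.

```python
def select_best_epoch(dev_scores, patience=2):
    """Return (best_epoch, best_score), stopping once the dev score has not
    improved for `patience` consecutive epochs (simple early stopping)."""
    best_epoch, best_score, stale = 0, float("-inf"), 0
    for epoch, score in enumerate(dev_scores):
        if score > best_score:
            best_epoch, best_score, stale = epoch, score, 0
        else:
            stale += 1
            if stale >= patience:
                break  # stop training: no dev improvement for `patience` epochs
    return best_epoch, best_score

# Illustrative per-epoch dev-set accuracies (hypothetical numbers).
history = [0.78, 0.83, 0.86, 0.85, 0.84, 0.88]
best_epoch, best_acc = select_best_epoch(history, patience=2)
```

Note that with `patience=2` the loop stops at epoch 4 and never sees the late rebound at epoch 5; patience is itself a tunable trade-off between training cost and the risk of stopping too early.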

Results and Comparisons

  • Report results for a baseline majority classifier alongside the best classifiers from previous exercises, so the fine-tuned model can be compared against both.
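The majority-classifier baseline mentioned above is straightforward to implement: predict the most frequent training label for every test example. This is a minimal sketch with toy labels, not the repository's code.

```python
from collections import Counter

def majority_baseline(train_labels, test_labels):
    """Predict the most frequent training label for every test example
    and return (predicted_label, accuracy)."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority)
    return majority, correct / len(test_labels)

# Toy sentiment labels; real experiments would use the dataset's actual splits.
train = ["pos", "pos", "neg", "pos", "neg"]
test = ["pos", "neg", "pos", "neg"]
label, acc = majority_baseline(train, test)
```

On a roughly balanced test set this baseline sits near 50% accuracy, which is the floor any fine-tuned model should clear by a wide margin.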

License

This project is licensed under the MIT License - see the LICENSE file for details.

About

Utilizing DistilBERT for Analyzing Sentiments in Movie Reviews
