BERTVision
This project explores text classification with pretrained BERT models, comparing feature-extraction strategies ([CLS] token vs. mean-max pooling) and evaluating the impact of fine-tuning on classification performance.
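A minimal sketch of the two feature-extraction strategies being compared, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (both assumptions; the project's actual code and checkpoints may differ):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical setup: any pretrained BERT checkpoint could be substituted here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

texts = ["An example sentence.", "Another, slightly longer example sentence."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, 768)

# Strategy 1: final hidden state of the [CLS] token (position 0).
cls_features = hidden[:, 0, :]                       # (batch, 768)

# Strategy 2: mean-max pooling over non-padding tokens, concatenated.
mask = batch["attention_mask"].unsqueeze(-1).bool()  # (batch, seq_len, 1)
mean_pool = (hidden * mask).sum(1) / mask.sum(1)     # masked mean, (batch, 768)
max_pool = hidden.masked_fill(~mask, float("-inf")).max(dim=1).values
meanmax_features = torch.cat([mean_pool, max_pool], dim=-1)  # (batch, 1536)
```

Either feature matrix can then be fed to a downstream classifier with BERT frozen, or the encoder can be fine-tuned end-to-end for comparison.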