Super Quick Intro into PyTorch and Hugging Face Transformers

Ben Hart
2 min read · Feb 12, 2023


In recent years, transfer learning has become a crucial technique in natural language processing (NLP) and computer vision. Transfer learning involves taking a model pre-trained on a large dataset and fine-tuning it on a smaller dataset to solve a specific task.

Hugging Face is a platform that provides easy access to a huge collection of pre-trained models for NLP and computer vision. Its transformers library supports PyTorch, TensorFlow, and JAX, making it a versatile choice for NLP and computer vision projects.

In this tutorial, we will be focusing on how to use Hugging Face and PyTorch to perform transfer learning for NLP tasks.

Installing Required Libraries

To get started, we need to install PyTorch and the transformers library from Hugging Face. The transformers library provides easy access to the pre-trained models hosted on the Hugging Face Hub.

To install PyTorch, run the following command:

pip install torch

To install the transformers library, run the following command:

pip install transformers
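To quickly confirm both libraries installed correctly, you can print their versions (a minimal sanity check):

import torch
import transformers

print(torch.__version__)
print(transformers.__version__)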

Loading the Pre-Trained Model

Once we have the required libraries installed, we can now load the pre-trained model. To load the model, we will use the AutoModel class from the transformers library.

The from_pretrained method of the AutoModel class takes the name of a model on the Hugging Face Hub and returns an instance of that model with its pre-trained weights loaded. For example, to load the BERT model, we will run the following code:

from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

In the code above, we first loaded the tokenizer using the AutoTokenizer class and then loaded the model using the AutoModel class. The from_pretrained method takes the name of the model as input and returns an instance of the model or tokenizer.
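Before fine-tuning, it's worth running the model end to end once. Here's a minimal sketch, with a made-up example sentence, that tokenizes some text and passes it through the model:

import torch

# Tokenize a sample sentence into input IDs and an attention mask
inputs = tokenizer("Transfer learning is remarkably effective.", return_tensors="pt")

# Run a forward pass without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# One hidden vector per token: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)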

Fine-Tuning the Model

Once we have the model loaded, we can now fine-tune it on our dataset. To fine-tune the model, we need to define a custom classifier, which will be added on top of the pre-trained model.

Here is an example of how to define a custom classifier for sentiment analysis:

import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model
        # Map the pooled [CLS] representation to a single sentiment score
        self.fc = nn.Linear(model.config.hidden_size, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_ids, attention_mask):
        outputs = self.model(input_ids, attention_mask=attention_mask)
        logits = self.fc(outputs.pooler_output)
        probs = self.sigmoid(logits)
        return probs

In the code above, we defined a custom classifier, SentimentClassifier, by subclassing nn.Module. The classifier wraps the pre-trained model and adds a fully connected layer followed by a sigmoid activation, turning BERT's pooled output into a sentiment probability.
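To actually fine-tune this classifier, you need an optimizer and a loss function. Here's a minimal sketch of a single training step; the two-example batch and labels below are dummy placeholders, not real data:

classifier = SentimentClassifier(model)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=2e-5)
loss_fn = nn.BCELoss()  # matches the sigmoid output of the classifier

# Dummy batch: in practice these come from tokenizing your dataset
batch = tokenizer(["great movie!", "terrible plot."], padding=True, return_tensors="pt")
labels = torch.tensor([[1.0], [0.0]])

classifier.train()
probs = classifier(batch["input_ids"], attention_mask=batch["attention_mask"])
loss = loss_fn(probs, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()

In a real project you would wrap this step in a loop over a DataLoader and evaluate on a held-out set after each epoch.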

If you're interested in learning more about PyTorch and Hugging Face Transformers, I'd recommend trying the examples above. If you run into any issues, leave a comment below and I'll try to help, but working through those bugs yourself may be the best learning you ever get.
