Set up training logic
We now have all the ingredients we need for the machine learning model; the next step is to specify the training logic so the model can learn.
We have all the components we need (a consolidated sketch follows this list):
Raw Data: loaded the SMILE dataset
Split Data: used stratified sampling so that the class distribution is preserved in the training and validation splits
Data For Machine Learning: we tokenized and encoded the data into X_train, X_val
Model we are going to use: model = BertForSequenceClassification.from_pretrained
Tools to feed data: dataloader_train = DataLoader
Tools to train: optimizer = AdamW ; scheduler = get_linear_schedule_with_warmup
Tools to measure performance: f1_score_func ; accuracy_per_class
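For reference, a minimal sketch of how these pieces fit together is shown below. It assumes the encoded training and validation tensors from the previous step are already wrapped in datasets named dataset_train and dataset_val; the batch size, learning rate, number of epochs, and num_labels are illustrative values, and torch.optim.AdamW is used in place of the deprecated transformers.AdamW.

import numpy as np
import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from transformers import BertForSequenceClassification, get_linear_schedule_with_warmup
from sklearn.metrics import f1_score

# Illustrative hyperparameters (assumptions, not from the original text)
batch_size = 32
epochs = 10
num_labels = 6

# Model we are going to use: pre-trained BERT with a classification head
model = BertForSequenceClassification.from_pretrained(
    'bert-base-uncased',
    num_labels=num_labels,
    output_attentions=False,
    output_hidden_states=False)

# Tools to feed data: DataLoaders over the encoded datasets
# (dataset_train / dataset_val are assumed to come from the tokenization step)
dataloader_train = DataLoader(dataset_train,
                              sampler=RandomSampler(dataset_train),
                              batch_size=batch_size)
dataloader_validation = DataLoader(dataset_val,
                                   sampler=SequentialSampler(dataset_val),
                                   batch_size=batch_size)

# Tools to train: optimizer and a linear warmup/decay schedule
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=len(dataloader_train) * epochs)

# Tools to measure performance
def f1_score_func(preds, labels):
    # Weighted F1 across all classes; preds are raw logits
    preds_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return f1_score(labels_flat, preds_flat, average='weighted')

def accuracy_per_class(preds, labels):
    # Per-class accuracy: correct predictions / total examples for each class
    preds_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    for label in np.unique(labels_flat):
        y_preds = preds_flat[labels_flat == label]
        y_true = labels_flat[labels_flat == label]
        print(f'Class {label}: {(y_preds == label).sum()}/{len(y_true)}')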
Check the environment and let the model know whether to run on the GPU (CUDA) or fall back to the CPU:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(device)

Customize the evaluate function for logging:
Evaluation runs with the model switched into model.eval() mode (the actual training happens under model.train() in the loop below).
We customize the evaluate function to compute and log the validation performance after each epoch.
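A minimal sketch of such an evaluate function is shown below. It assumes each batch from the validation DataLoader contains input_ids, attention_mask, and labels in that order, and that model, device, and np (NumPy) are already defined; the name dataloader_val is illustrative.

def evaluate(dataloader_val):
    # Switch to evaluation mode (disables dropout, etc.)
    model.eval()
    loss_val_total = 0
    predictions, true_vals = [], []

    for batch in dataloader_val:
        batch = tuple(b.to(device) for b in batch)
        inputs = {'input_ids': batch[0],
                  'attention_mask': batch[1],
                  'labels': batch[2]}

        with torch.no_grad():  # no gradients needed for validation
            outputs = model(**inputs)

        loss, logits = outputs[0], outputs[1]
        loss_val_total += loss.item()
        predictions.append(logits.detach().cpu().numpy())
        true_vals.append(inputs['labels'].cpu().numpy())

    loss_val_avg = loss_val_total / len(dataloader_val)
    predictions = np.concatenate(predictions, axis=0)
    true_vals = np.concatenate(true_vals, axis=0)
    return loss_val_avg, predictions, true_vals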
Training Loop:
tqdm: a progress bar library. The name derives from the Arabic word taqaddum (تقدّم), which can mean "progress," and is an abbreviation of the Spanish "te quiero demasiado" ("I love you so much").
Logic of the loop (a sketch follows this list):
Iterate through all the training epochs
Within each epoch, train on every batch from the training DataLoader
At the end of each epoch, log the training loss and the validation performance from evaluate
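Here is a minimal sketch of such a loop under the setup above, assuming the dataloaders, optimizer, scheduler, evaluate, and f1_score_func defined earlier; the gradient clipping value and the progress-bar formatting are illustrative choices.

from tqdm import tqdm

for epoch in tqdm(range(1, epochs + 1)):
    # Put the model in training mode (enables dropout)
    model.train()
    loss_train_total = 0

    progress_bar = tqdm(dataloader_train, desc=f'Epoch {epoch}', leave=False)
    for batch in progress_bar:
        model.zero_grad()
        batch = tuple(b.to(device) for b in batch)
        inputs = {'input_ids': batch[0],
                  'attention_mask': batch[1],
                  'labels': batch[2]}

        outputs = model(**inputs)
        loss = outputs[0]
        loss_train_total += loss.item()
        loss.backward()

        # Clip gradients to stabilize training (illustrative value)
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
        progress_bar.set_postfix({'training_loss': f'{loss.item():.3f}'})

    # Log the training and validation performance for this epoch
    loss_train_avg = loss_train_total / len(dataloader_train)
    tqdm.write(f'Epoch {epoch} training loss: {loss_train_avg:.3f}')

    val_loss, predictions, true_vals = evaluate(dataloader_validation)
    val_f1 = f1_score_func(predictions, true_vals)
    tqdm.write(f'Validation loss: {val_loss:.3f}, weighted F1: {val_f1:.3f}')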