Training
This module contains the classes used to train and test neural networks. It follows the Strategy design pattern, providing the abstract interfaces TrainingStrategy and TestingStrategy.
At the moment, we provide a single training strategy and a single testing strategy, both based on PyTorch.
- class pynever.strategies.training.TrainingStrategy[source]
Bases:
ABC
An abstract class used to represent a Training Strategy.
- abstractmethod train(network, dataset)[source]
Train the neural network of interest using the training strategy defined in the concrete children.
- Parameters:
network (NeuralNetwork) – The neural network to train.
dataset (Dataset) – The dataset to use to train the neural network.
- Returns:
The Neural Network resulting from the training of the original network using the training strategy and the dataset.
- Return type:
NeuralNetwork
- class pynever.strategies.training.TestingStrategy[source]
Bases:
ABC
An abstract class used to represent a Testing Strategy.
- abstractmethod test(network, dataset)[source]
Test the neural network of interest using the testing strategy defined in the concrete children.
- Parameters:
network (NeuralNetwork) – The neural network to test.
dataset (Dataset) – The dataset to use to test the neural network.
- Returns:
A measure of the correctness of the network, dependent on the concrete children.
- Return type:
float
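As an illustration of how these abstract interfaces are meant to be extended, the following is a minimal sketch of a custom training strategy. The class name and its trivial body are assumptions made for illustration only and are not part of pynever.

```python
from pynever.strategies.training import TrainingStrategy


class NoOpTraining(TrainingStrategy):
    """Hypothetical strategy that returns the network unchanged (illustration only)."""

    def train(self, network, dataset):
        # A real concrete strategy would iterate over the dataset and update
        # the network parameters here; this sketch only satisfies the
        # abstract interface by returning the input network as-is.
        return network
```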
- class pynever.strategies.training.PytorchTraining(optimizer_con, opt_params, loss_function, n_epochs, validation_percentage, train_batch_size, validation_batch_size, r_split=True, scheduler_con=None, sch_params=None, precision_metric=None, network_transform=None, device='cpu', train_patience=None, checkpoints_root='', verbose_rate=None)[source]
Bases:
TrainingStrategy
Class used to represent the training strategy based on the PyTorch learning framework. It supports different optimization algorithms, schedulers, loss functions and more, depending on the attributes provided at instantiation time.
- optimizer_con[source]
Reference to the class constructor for the Optimizer of choice for the training strategy.
- Type:
type
- opt_params[source]
Dictionary of the parameters to pass to the constructor of the optimizer, excluding the first, which is always assumed to be the parameters to optimize.
- Type:
dict
- loss_function[source]
Loss function for the training strategy. We assume it takes as parameters two PyTorch Tensors corresponding to the output of the network and the target. Other parameters should be given as attributes of the callable object.
- Type:
Callable
- validation_batch_size[source]
Batch size for the validation phase of the training procedure.
- Type:
int
- scheduler_con[source]
Reference to the class constructor for the learning rate scheduler of choice for the training strategy (default: None).
- Type:
type, Optional
- sch_params[source]
Dictionary of the parameters to pass to the constructor of the scheduler, excluding the first, which is always assumed to be the optimizer whose learning rate must be updated (default: None).
- Type:
dict, Optional
- precision_metric[source]
Function for measuring the precision of the neural network. It is used to choose the best model and to control the plateau scheduler and the early stopping. We assume it takes as parameters two PyTorch Tensors corresponding to the output of the network and the target. It should produce a float value that decreases as the correctness of the network increases (as a traditional loss value does). Optional supplementary parameters should be given as attributes of the object (default: None).
- Type:
Callable, Optional
- network_transform[source]
Optional function applied to the network after the backward computation and before the optimizer step. In practice, we use it for the manipulation needed for pruning-oriented training. It should take a PyTorch module (i.e., the neural network) as input; optional supplementary parameters should be given as attributes of the object (default: None).
- Type:
Callable, Optional
- train_patience[source]
The number of epochs in which the loss may not decrease before the training procedure is interrupted with early stopping (default: None).
- Type:
int, Optional
- checkpoints_root[source]
Where to store the checkpoints of the training strategy (default: '').
- Type:
str, Optional
- verbose_rate[source]
After how many batches the strategy prints information about the training progress.
- Type:
int, Optional
- train(network, dataset)[source]
Train the neural network of interest using the training strategy defined by this class.
- Parameters:
network (NeuralNetwork) – The neural network to train.
dataset (Dataset) – The dataset to use to train the neural network.
- Returns:
The Neural Network resulting from the training of the original network using the training strategy and the dataset.
- Return type:
NeuralNetwork
- pytorch_training(net, dataset)[source]
Training procedure using PyTorch.
- Parameters:
net (PyTorchNetwork) – The PyTorchNetwork to train.
dataset (Dataset) – The dataset to use for the training of the PyTorchNetwork.
- Returns:
The trained PyTorchNetwork.
- Return type:
PyTorchNetwork
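A minimal usage sketch for PytorchTraining follows, assuming torch is installed and that my_network and my_dataset are placeholders for a pynever NeuralNetwork and Dataset built elsewhere; the hyperparameter values are arbitrary examples, not recommendations.

```python
import torch
import torch.nn as nn
import torch.optim as optim

from pynever.strategies.training import PytorchTraining

# my_network (NeuralNetwork) and my_dataset (Dataset) are assumed to be
# constructed elsewhere with the appropriate pynever classes.
training_strategy = PytorchTraining(
    optimizer_con=optim.Adam,             # optimizer class, not an instance
    opt_params={'lr': 1e-3},              # passed to Adam after the parameters to optimize
    loss_function=nn.CrossEntropyLoss(),  # callable taking (output, target)
    n_epochs=10,
    validation_percentage=0.2,
    train_batch_size=64,
    validation_batch_size=64,
    device='cuda' if torch.cuda.is_available() else 'cpu',
)

trained_network = training_strategy.train(my_network, my_dataset)
```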
- class pynever.strategies.training.PytorchTesting(metric, metric_params, test_batch_size, device='cpu', save_results=False)[source]
Bases:
TestingStrategy
Class used to represent the testing strategy based on the PyTorch learning framework. It supports different metrics for measuring the correctness of the neural network.
- metric[source]
Function for measuring the precision/correctness of the neural network.
- Type:
Callable
- metric_params[source]
Supplementary parameters for the metric other than the output and the target (which should always be the first two parameters of the metric). The metric is assumed to produce a float value that decreases as the correctness of the network increases (as a traditional loss value does).
- Type:
dict
- save_results[source]
Whether to save outputs, targets and losses as attributes.
- Type:
bool, Optional
- test(network, dataset)[source]
Test the neural network of interest using the testing strategy defined by this class.
- Parameters:
network (NeuralNetwork) – The neural network to test.
dataset (Dataset) – The dataset to use to test the neural network.
- Returns:
A measure of the correctness of the network, dependent on the concrete children.
- Return type:
float
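A minimal usage sketch for PytorchTesting is shown below, assuming trained_network and my_test_dataset are placeholders for a pynever NeuralNetwork and Dataset; the cross-entropy loss is used as the metric, so lower values indicate a more correct network.

```python
import torch.nn as nn

from pynever.strategies.training import PytorchTesting

# trained_network (NeuralNetwork) and my_test_dataset (Dataset) are assumed
# to be available from a previous training step.
testing_strategy = PytorchTesting(
    metric=nn.CrossEntropyLoss(),  # callable taking (output, target) as its first two arguments
    metric_params={},              # no supplementary parameters in this sketch
    test_batch_size=64,
    save_results=False,
)

score = testing_strategy.test(trained_network, my_test_dataset)
print(f'Test metric: {score:.4f}')
```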