pyabsa.tasks.CodeDefectDetection.models.__plm__.models
Module Contents
Classes
- RobertaClassificationHead: Head for sentence-level classification tasks.
- FocalLoss: Focal loss (Li et al., ICCV 2017).
- LDAMLoss: Label-distribution-aware margin loss (Cao et al., NeurIPS 2019).
- ClassBalanceCE: Class-balanced cross-entropy (Cui et al., CVPR 2019).
- DefectModel: Base class for all neural network modules.
- Seq2Seq: Build Sequence-to-Sequence.
- Beam
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.RobertaClassificationHead(config)[source]
Bases:
torch.nn.Module
Head for sentence-level classification tasks.
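For orientation, here is a minimal sketch of the usual RoBERTa-style classification head (pool the <s> token, then dense + tanh + dropout + projection); the layer names, dropout placement, and number of labels are assumptions, not necessarily pyabsa's exact implementation:

```python
import torch
import torch.nn as nn


class RobertaClassificationHead(nn.Module):
    """Head for sentence-level classification tasks (sketch)."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features):
        x = features[:, 0, :]              # take the <s> token (equivalent of [CLS])
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)            # sentence-level logits
```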
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.FocalLoss(gamma=0.75)[source]
Bases:
torch.nn.Module
Reference: Li et al., Focal Loss for Dense Object Detection. ICCV 2017.
Equation: $\mathrm{Loss}(x, \mathrm{class}) = -(1 - \mathrm{sigmoid}(p_t))^{\gamma} \log(p_t)$
Focal loss encourages the network to pay more attention to difficult samples.
- Parameters:
gamma (float) – gamma > 0; reduces the relative loss for well-classified examples (p > 0.5), putting more focus on hard, misclassified examples.
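A minimal sketch of the sigmoid (binary) formulation written out in the equation above; the mean reduction and the clamp for numerical stability are assumptions:

```python
import torch
import torch.nn as nn


class FocalLoss(nn.Module):
    """Binary focal loss sketch: down-weight well-classified examples by (1 - p_t)^gamma."""

    def __init__(self, gamma=0.75):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        p = torch.sigmoid(logits)
        # p_t is the probability the model assigns to the true class
        p_t = torch.where(targets == 1, p, 1 - p)
        loss = -((1 - p_t) ** self.gamma) * torch.log(p_t.clamp(min=1e-8))
        return loss.mean()
```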
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.LDAMLoss(cls_num_list, max_m=0.5, weight=None, s=30)[source]
Bases:
torch.nn.Module
Reference: Cao et al., Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. NeurIPS 2019.
- Parameters:
s (float, double) – the scale of logits, according to the official codes.
max_m (float, double) – margin on loss functions. See original paper’s Equation (12) and (13)
- Notes: The official code exposes two hyper-parameters for LDAMLoss, but the authors only provide settings for long-tailed CIFAR; settings for other datasets are not available (https://github.com/kaidic/LDAM-DRW/issues/5).
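For reference, a minimal sketch following the official LDAM-DRW implementation (per-class margins m_c proportional to n_c^(-1/4), rescaled so the largest margin equals max_m); device handling is simplified and not necessarily identical to pyabsa's code:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class LDAMLoss(nn.Module):
    """LDAM loss sketch: subtract a per-class margin from the true-class logit."""

    def __init__(self, cls_num_list, max_m=0.5, weight=None, s=30):
        super().__init__()
        # margins m_c ~ n_c^{-1/4}, rescaled so max(m_c) == max_m
        m_list = 1.0 / np.sqrt(np.sqrt(np.asarray(cls_num_list, dtype=np.float64)))
        m_list = m_list * (max_m / m_list.max())
        self.register_buffer("m_list", torch.tensor(m_list, dtype=torch.float32))
        self.s = s
        self.weight = weight

    def forward(self, logits, target):
        # boolean mask selecting the true-class logit of each sample
        index = torch.zeros_like(logits, dtype=torch.bool)
        index.scatter_(1, target.view(-1, 1), True)
        batch_m = self.m_list[target].view(-1, 1)       # per-sample margin
        logits_m = torch.where(index, logits - batch_m, logits)
        # s rescales the margin-adjusted logits, as in the official code
        return F.cross_entropy(self.s * logits_m, target, weight=self.weight)
```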
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.ClassBalanceCE(para_dict=None)[source]
Bases:
torch.nn.Module
Reference: Cui et al., Class-Balanced Loss Based on Effective Number of Samples. CVPR 2019.
Equation: $\mathrm{Loss}(x, c) = \frac{1-\beta}{1-\beta^{n_c}} \cdot \mathrm{CrossEntropy}(x, c)$
Class-balanced loss weights each class by its effective number of samples, rather than the nominal number of examples provided by the original dataset.
- Parameters:
beta (float, double) – hyper-parameter of the class-balanced loss that controls the cost-sensitive weights.
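The documented constructor takes a para_dict; purely for illustration, this sketch uses a hypothetical signature taking the per-class sample counts and beta directly, and applies the weighting from the equation above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassBalanceCE(nn.Module):
    """Class-balanced CE sketch: weight class c by (1 - beta) / (1 - beta^{n_c})."""

    def __init__(self, cls_num_list, beta=0.9999):
        super().__init__()
        counts = torch.tensor(cls_num_list, dtype=torch.float32)
        effective_num = 1.0 - torch.pow(torch.tensor(beta), counts)
        weights = (1.0 - beta) / effective_num
        # normalize so the weights sum to the number of classes
        weights = weights / weights.sum() * len(cls_num_list)
        self.register_buffer("weights", weights)

    def forward(self, logits, target):
        return F.cross_entropy(logits, target, weight=self.weights)
```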
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.DefectModel(config)[source]
Bases:
torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```
Submodules assigned in this way will be registered, and their parameters will be converted too when you call to(), etc.
Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
training (bool) – Boolean representing whether this module is in training or evaluation mode.
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.Seq2Seq(encoder, decoder, config, beam_size=None, max_length=None, sos_id=None, eos_id=None)[source]
Bases:
torch.nn.Module
Build Sequence-to-Sequence.
- Parameters:
encoder – encoder of the seq2seq model, e.g. RoBERTa.
decoder – decoder of the seq2seq model, e.g. a transformer decoder.
config – configuration of the encoder model.
beam_size – beam size for beam search.
max_length – maximum length of the target for beam search.
sos_id – start-of-sequence symbol id in the target for beam search.
eos_id – end-of-sequence symbol id in the target for beam search.
- _tie_or_clone_weights(first_module, second_module)[source]
Tie or clone module weights depending on whether we are using TorchScript or not.
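A hypothetical construction sketch showing how the documented constructor arguments fit together; the checkpoint name, decoder depth, and special-token ids below are assumptions:

```python
import torch.nn as nn
from transformers import RobertaConfig, RobertaModel

from pyabsa.tasks.CodeDefectDetection.models.__plm__.models import Seq2Seq

config = RobertaConfig.from_pretrained("microsoft/codebert-base")   # assumed checkpoint
encoder = RobertaModel.from_pretrained("microsoft/codebert-base")

decoder_layer = nn.TransformerDecoderLayer(
    d_model=config.hidden_size, nhead=config.num_attention_heads
)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)         # depth is an assumption

model = Seq2Seq(
    encoder=encoder,
    decoder=decoder,
    config=config,
    beam_size=10,
    max_length=64,
    sos_id=0,   # tokenizer-dependent; assumed here
    eos_id=2,   # tokenizer-dependent; assumed here
)
```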
- class pyabsa.tasks.CodeDefectDetection.models.__plm__.models.Beam(size, sos, eos)[source]
Bases:
object