peptdeep.model.generic_property_prediction#

See the tutorial: building new models.

Classes:

ModelInterface_for_Generic_AASeq_BinaryClassification(...)
    ModelInterface for all Generic_AASeq_BinaryClassification models

ModelInterface_for_Generic_AASeq_MultiLabelClassification(...)
    ModelInterface for Generic_AASeq_MultiLabelClassification models

ModelInterface_for_Generic_AASeq_MultiTargetClassification
    alias of ModelInterface_for_Generic_AASeq_MultiLabelClassification

ModelInterface_for_Generic_AASeq_Regression(...)
    ModelInterface for Generic_AASeq_Regression models

ModelInterface_for_Generic_ModAASeq_BinaryClassification(...)
    ModelInterface for Generic_ModAASeq_BinaryClassification

ModelInterface_for_Generic_ModAASeq_MultiLabelClassification(...)
    ModelInterface for Generic_ModAASeq_MultiLabelClassification models

ModelInterface_for_Generic_ModAASeq_MultiTargetClassification
    alias of ModelInterface_for_Generic_ModAASeq_MultiLabelClassification

ModelInterface_for_Generic_ModAASeq_Regression(...)
    ModelInterface for all Generic_ModAASeq_Regression models

Model_for_Generic_AASeq_BinaryClassification_LSTM(*)
    Generic LSTM classification model for AA sequence

Model_for_Generic_AASeq_BinaryClassification_Transformer(*)
    Generic transformer classification model for AA sequence

Model_for_Generic_AASeq_Regression_LSTM(*[, ...])
    Generic LSTM regression model for AA sequence

Model_for_Generic_AASeq_Regression_Transformer(*)
    Generic transformer regression model for AA sequence

Model_for_Generic_ModAASeq_BinaryClassification_LSTM(*)
    Generic LSTM classification model for modified sequence

Model_for_Generic_ModAASeq_BinaryClassification_Transformer(*)
    Generic transformer classification model for modified sequence

Model_for_Generic_ModAASeq_Regression_LSTM(*)
    Generic LSTM regression model for modified sequence

Model_for_Generic_ModAASeq_Regression_Transformer(*)
    Generic transformer regression model for modified sequence
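The model classes above share one shape: integer-encoded amino acids pass through an embedding, an LSTM (or transformer) encoder, and a small prediction head. A minimal stand-alone analogue in plain PyTorch, using the defaults listed below (hidden_dim=256, nlayers=4, dropout=0.1) but not peptdeep's actual implementation:

```python
# Minimal analogue of a "Generic_AASeq_Regression_LSTM"-style model.
# This is a sketch for illustration, NOT peptdeep's own code; the
# layer choices are assumptions based only on the signatures above.
import torch
import torch.nn as nn

class TinyAASeqRegressor(nn.Module):
    def __init__(self, vocab_size=128, hidden_dim=256, output_dim=1,
                 nlayers=4, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # Bidirectional LSTM; each direction gets hidden_dim // 2 so the
        # concatenated output is hidden_dim again.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim // 2, num_layers=nlayers,
                            batch_first=True, bidirectional=True,
                            dropout=dropout)
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, aa_x):
        # aa_x: (batch, seq_len) integer-encoded amino acids
        x, _ = self.lstm(self.embed(aa_x))
        # Pool over the sequence, then predict one value per sequence.
        return self.head(x.mean(dim=1)).squeeze(-1)

model = TinyAASeqRegressor()
aa_x = torch.randint(0, 26, (2, 12))  # two sequences of length 12
pred = model(aa_x)
print(pred.shape)  # torch.Size([2])
```

The same skeleton becomes a binary classifier by applying a sigmoid to the head's output, which is essentially the relationship between the `_Regression_` and `_BinaryClassification_` classes below.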

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_AASeq_BinaryClassification(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#

Bases: ModelInterface

ModelInterface for all Generic_AASeq_BinaryClassification models

Methods:

__init__([model_class, dropout, device, ...])

Initialize an interface for binary classification models on AA sequences.

__init__(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#

Initialize an interface for binary classification models on AA sequences.

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_AASeq_MultiLabelClassification(num_target_values: int = 6, model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_Transformer'>, nlayers=4, hidden_dim=256, device='gpu', dropout=0.1, **kwargs)[source][source]#

Bases: ModelInterface_for_Generic_AASeq_BinaryClassification

Methods:

__init__([num_target_values, model_class, ...])

Initialize an interface for multi-label classification models on AA sequences.

__init__(num_target_values: int = 6, model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_Transformer'>, nlayers=4, hidden_dim=256, device='gpu', dropout=0.1, **kwargs)[source][source]#

Initialize an interface for multi-label classification models on AA sequences.

peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_AASeq_MultiTargetClassification[source]#

alias of ModelInterface_for_Generic_AASeq_MultiLabelClassification

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_AASeq_Regression(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_Regression_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#

Bases: ModelInterface

ModelInterface for Generic_AASeq_Regression models

Methods:

__init__([model_class, dropout, device, ...])

device: device type, one of 'get_available', 'cpu', 'mps', 'gpu' (or 'cuda'); defaults to 'gpu'.

__init__(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_Regression_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#
Parameters:
  • device (str, optional) – device type in ‘get_available’, ‘cpu’, ‘mps’, ‘gpu’ (or ‘cuda’), by default ‘gpu’

  • fixed_sequence_len (int, optional) – See fixed_sequence_len, defaults to 0.

  • min_pred_value (float, optional) – See min_pred_value, defaults to 0.0.
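The fixed_sequence_len and min_pred_value options are referenced above but their definitions are not shown on this page. The following is a hedged sketch of what such options typically do, assumed from the names and defaults alone (a non-zero fixed length pads or truncates inputs; predictions are floored at a minimum value), not peptdeep's verbatim behaviour:

```python
# Illustrative semantics only; the exact padding token and clipping rule
# used by peptdeep are assumptions here.

def pad_or_truncate(seq: str, fixed_sequence_len: int) -> str:
    """fixed_sequence_len == 0 (the default): keep variable-length
    sequences as-is; otherwise truncate or right-pad (here with a
    placeholder 'X') to exactly that length."""
    if fixed_sequence_len == 0:
        return seq
    return seq[:fixed_sequence_len].ljust(fixed_sequence_len, "X")

def clip_prediction(value: float, min_pred_value: float = 0.0) -> float:
    """Floor model outputs at min_pred_value, useful for targets that
    cannot be negative (e.g. retention times)."""
    return max(value, min_pred_value)

print(pad_or_truncate("PEPTIDE", 0))  # 'PEPTIDE' (unchanged)
print(pad_or_truncate("PEPTIDE", 4))  # 'PEPT'
print(pad_or_truncate("PEP", 5))      # 'PEPXX'
print(clip_prediction(-0.3))          # 0.0
```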

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_ModAASeq_BinaryClassification(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#

Bases: ModelInterface

ModelInterface for Generic_ModAASeq_BinaryClassification

Methods:

__init__([model_class, dropout, device, ...])

device: device type, one of 'get_available', 'cpu', 'mps', 'gpu' (or 'cuda'); defaults to 'gpu'.

__init__(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#
Parameters:
  • device (str, optional) – device type in ‘get_available’, ‘cpu’, ‘mps’, ‘gpu’ (or ‘cuda’), by default ‘gpu’

  • fixed_sequence_len (int, optional) – See fixed_sequence_len, defaults to 0.

  • min_pred_value (float, optional) – See min_pred_value, defaults to 0.0.

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_ModAASeq_MultiLabelClassification(num_target_values: int = 6, model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_Transformer'>, nlayers=4, hidden_dim=256, device='gpu', dropout=0.1, **kwargs)[source][source]#

Bases: ModelInterface_for_Generic_ModAASeq_BinaryClassification

Methods:

__init__([num_target_values, model_class, ...])

device: device type, one of 'get_available', 'cpu', 'mps', 'gpu' (or 'cuda'); defaults to 'gpu'.

__init__(num_target_values: int = 6, model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_Transformer'>, nlayers=4, hidden_dim=256, device='gpu', dropout=0.1, **kwargs)[source][source]#
Parameters:
  • device (str, optional) – device type in ‘get_available’, ‘cpu’, ‘mps’, ‘gpu’ (or ‘cuda’), by default ‘gpu’

  • fixed_sequence_len (int, optional) – See fixed_sequence_len, defaults to 0.

  • min_pred_value (float, optional) – See min_pred_value, defaults to 0.0.

peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_ModAASeq_MultiTargetClassification[source]#

alias of ModelInterface_for_Generic_ModAASeq_MultiLabelClassification

class peptdeep.model.generic_property_prediction.ModelInterface_for_Generic_ModAASeq_Regression(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_Regression_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#

Bases: ModelInterface

ModelInterface for all Generic_ModAASeq_Regression models

Methods:

__init__([model_class, dropout, device, ...])

device: device type, one of 'get_available', 'cpu', 'mps', 'gpu' (or 'cuda'); defaults to 'gpu'.

__init__(model_class: ~torch.nn.modules.module.Module = <class 'peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_Regression_LSTM'>, dropout=0.1, device: str = 'gpu', hidden_dim=256, output_dim=1, nlayers=4, **kwargs)[source][source]#
Parameters:
  • device (str, optional) – device type in ‘get_available’, ‘cpu’, ‘mps’, ‘gpu’ (or ‘cuda’), by default ‘gpu’

  • fixed_sequence_len (int, optional) – See fixed_sequence_len, defaults to 0.

  • min_pred_value (float, optional) – See min_pred_value, defaults to 0.0.

class peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_LSTM(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Bases: Model_for_Generic_AASeq_Regression_LSTM

Generic LSTM classification model for AA sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)

Define the computation performed at every call.

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
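The note above can be demonstrated directly in plain PyTorch: a forward hook registered on any module fires when the instance is called, but is silently skipped when forward() is invoked directly.

```python
# Calling the module instance runs registered hooks; calling .forward()
# directly bypasses the hook machinery.
import torch
import torch.nn as nn

calls = []

layer = nn.Linear(4, 2)
layer.register_forward_hook(lambda module, inputs, output: calls.append("hook"))

x = torch.zeros(1, 4)
layer(x)          # __call__ path: the hook fires
layer.forward(x)  # direct call: the hook is skipped

print(calls)  # ['hook']  -- the hook fired only once
```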

class peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_BinaryClassification_Transformer(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Bases: Model_for_Generic_AASeq_Regression_Transformer

Generic transformer classification model for AA sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Model based on a transformer architecture, built on Hugging Face's BertEncoder class.

forward(aa_x)

Define the computation performed at every call.

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Model based on a transformer architecture, built on Hugging Face's BertEncoder class.

forward(aa_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_Regression_LSTM(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Bases: Module

Generic LSTM regression model for AA sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)

Define the computation performed at every call.

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class peptdeep.model.generic_property_prediction.Model_for_Generic_AASeq_Regression_Transformer(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Bases: Module

Generic transformer regression model for AA sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)

Define the computation performed at every call.

Attributes:

output_attentions

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_attentions: bool#

class peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_LSTM(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Bases: Model_for_Generic_ModAASeq_Regression_LSTM

Generic LSTM classification model for modified sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x, mod_x)

Define the computation performed at every call.

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x, mod_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_BinaryClassification_Transformer(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Bases: Model_for_Generic_ModAASeq_Regression_Transformer

Generic transformer classification model for modified sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_indices, mod_x)

Define the computation performed at every call.

Attributes:

output_attentions

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_indices, mod_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_attentions: bool#

class peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_Regression_LSTM(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Bases: Module

Generic LSTM regression model for modified sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x, mod_x)

Define the computation performed at every call.

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_x, mod_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class peptdeep.model.generic_property_prediction.Model_for_Generic_ModAASeq_Regression_Transformer(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Bases: Module

Generic transformer regression model for modified sequence

Methods:

__init__(*[, hidden_dim, output_dim, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_indices, mod_x)

Define the computation performed at every call.

Attributes:

output_attentions

__init__(*, hidden_dim=256, output_dim=1, nlayers=4, output_attentions=False, dropout=0.1, **kwargs)[source][source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(aa_indices, mod_x)[source][source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property output_attentions: bool#
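The output_attentions property on the transformer models above suggests they can expose attention weights alongside their predictions. As a stand-in for the BertEncoder-based implementation (not shown on this page), plain nn.MultiheadAttention illustrates what "outputting attentions" means: the layer returns both its output and a weight matrix over sequence positions.

```python
# Sketch of what an `output_attentions` toggle exposes; this uses
# nn.MultiheadAttention as an analogue, not peptdeep's actual encoder.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
x = torch.randn(1, 5, 8)  # (batch, seq_len, hidden)

# need_weights=True returns the attention matrix (averaged over heads
# by default) in addition to the layer output.
out, weights = attn(x, x, x, need_weights=True)
print(out.shape)      # torch.Size([1, 5, 8])
print(weights.shape)  # torch.Size([1, 5, 5])
```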