GPT-J

Overview
The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like causal language model trained on the Pile dataset.
This model was contributed by Stella Biderman.
Tips:
To load GPT-J in float32 you need at least 2x the model size in CPU RAM: 1x for the initial weights and another 1x to load the checkpoint. For GPT-J that means at least 48GB of CPU RAM just to load the model. There are a few options to reduce this. The torch_dtype argument can be used to initialize the model directly in half-precision, and the low_cpu_mem_usage argument keeps the RAM usage to 1x. There is also a float16 branch of the checkpoint that stores the weights in fp16, which can be used to reduce the RAM usage further. Combining all of this, it should take roughly 12.1GB of CPU RAM to load the model:
>>> from transformers import GPTJForCausalLM
>>> import torch
>>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True)
The model should fit on a 16GB GPU for inference. Training/fine-tuning takes much more GPU RAM: the Adam optimizer, for example, keeps four copies of the model in memory (the model itself, the gradients, and the running average and squared average of the gradients), so it needs at least 4x the model size in GPU memory, even with mixed precision, since gradient updates are performed in fp32. This does not include the activations and data batches, which require additional GPU RAM. You should therefore explore solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert it to Transformers format for inference; instructions for that can be found here.
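As a rough back-of-the-envelope check of the 4x rule above (a sketch that ignores activations, data batches and framework overhead, so it is a lower bound rather than a measurement):

>>> n_params = 6e9  # GPT-J has roughly 6 billion parameters
>>> bytes_per_param = 4  # gradient updates are kept in fp32
>>> adam_copies = 4  # weights + gradients + first and second moment estimates
>>> print(f"~{n_params * bytes_per_param * adam_copies / 2**30:.0f} GiB")
~89 GiB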
Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. The extra entries exist for the sake of efficiency on TPUs. To avoid a mismatch between the embedding matrix size and the vocab size, the tokenizer for GPT-J contains 143 extra tokens <|extratoken_1|>... <|extratoken_143|>, so the vocab_size of the tokenizer also becomes 50400.
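A quick sanity check of these numbers (the id printed for the first extra token is simply what the figures above imply, i.e. the extra tokens are appended right after the regular GPT-2 vocabulary):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> len(tokenizer)  # 50257 regular GPT-2 tokens + 143 extra tokens
50400
>>> tokenizer.convert_tokens_to_ids("<|extratoken_1|>")  # extra tokens follow the GPT-2 vocabulary
50257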
Generation
The generate() method can be used to generate text with the GPT-J model:
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
... "researchers was the fact that the unicorns spoke perfect English."
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
…or in float16 precision:
>>> from transformers import GPTJForCausalLM, AutoTokenizer
>>> import torch
>>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
... "researchers was the fact that the unicorns spoke perfect English."
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
>>> gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
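On a GPU the half-precision model and the inputs are typically moved to the device before calling generate(); a minimal sketch, assuming a CUDA device is available:

>>> device = "cuda"
>>> model = model.to(device)
>>> input_ids = input_ids.to(device)
>>> gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]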
GPTJConfig
class transformers.GPTJConfig(vocab_size=50400, n_positions=2048, n_ctx=2048, n_embd=4096, n_layer=28, n_head=16, rotary_dim=64, n_inner=None, activation_function='gelu_new', resid_pdrop=0.0, embd_pdrop=0.0, attn_pdrop=0.0, layer_norm_epsilon=1e-05, initializer_range=0.02, scale_attn_weights=True, use_cache=True, bos_token_id=50256, eos_token_id=50256, **kwargs)

This is the configuration class to store the configuration of a GPTJModel. It is used to instantiate a GPT-J model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the GPT-J gpt-j-6B architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation of PretrainedConfig for more information.

Parameters

- vocab_size (int, optional, defaults to 50400) – Vocabulary size of the GPT-J model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTJModel.
- n_positions (int, optional, defaults to 2048) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- n_ctx (int, optional, defaults to 2048) – Dimensionality of the causal mask (usually the same as n_positions).
- n_embd (int, optional, defaults to 4096) – Dimensionality of the embeddings and hidden states.
- n_layer (int, optional, defaults to 28) – Number of hidden layers in the Transformer.
- n_head (int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer.
- rotary_dim (int, optional, defaults to 64) – Number of dimensions in the embedding that Rotary Position Embedding is applied to.
- n_inner (int, optional, defaults to None) – Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd.
- activation_function (str, optional, defaults to "gelu_new") – Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"].
- resid_pdrop (float, optional, defaults to 0.0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- embd_pdrop (float, optional, defaults to 0.0) – The dropout ratio for the embeddings.
- attn_pdrop (float, optional, defaults to 0.0) – The dropout ratio for the attention.
- layer_norm_epsilon (float, optional, defaults to 1e-5) – The epsilon to use in the layer normalization layers.
- initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- scale_attn_weights (bool, optional, defaults to True) – Scale attention weights by dividing by sqrt(hidden_size).
- use_cache (bool, optional, defaults to True) – Whether or not the model should return the last key/values attentions (not used by all models).
Example:
>>> from transformers import GPTJModel, GPTJConfig
>>> # Initializing a GPT-J 6B configuration
>>> configuration = GPTJConfig()
>>> # Initializing a model from the configuration
>>> model = GPTJModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
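The same configuration class can also be used to define a much smaller architecture, which is handy for quick tests; the sizes below are arbitrary illustrative choices, not an official checkpoint:

>>> from transformers import GPTJModel, GPTJConfig
>>> # A deliberately tiny, randomly initialized model (illustrative sizes only)
>>> tiny_config = GPTJConfig(n_layer=2, n_head=4, n_embd=128, rotary_dim=32, n_positions=256)
>>> tiny_model = GPTJModel(tiny_config)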
GPTJModel
class transformers.GPTJModel(config)

The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

- config (GPTJConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The GPTJModel forward method overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters

- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using transformers.GPTJTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, n_ctx), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.

Returns

A BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs.

- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used, only the last hidden-state of the sequences, of shape (batch_size, 1, hidden_size), is output.
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and, optionally if config.is_encoder_decoder=True, 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and, optionally if config.is_encoder_decoder=True, in the cross-attention blocks) that can be used (see the past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

BaseModelOutputWithPast or tuple(torch.FloatTensor)
Example:
>>> from transformers import GPT2Tokenizer, GPTJModel
>>> import torch
>>> tokenizer = GPT2Tokenizer.from_pretrained('EleutherAI/gpt-j-6B')
>>> model = GPTJModel.from_pretrained('EleutherAI/gpt-j-6B')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
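Because the forward pass returns past_key_values when use_cache=True, later tokens can be fed without recomputing the whole prefix. A minimal sketch of that pattern, reusing the tokenizer and model from the example above:

>>> # Run the prompt once and keep the cached key/value states
>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> outputs = model(**inputs, use_cache=True)
>>> past = outputs.past_key_values
>>> # Feed only the new tokens together with the cache
>>> new_ids = tokenizer(" cute", return_tensors="pt").input_ids
>>> outputs = model(input_ids=new_ids, past_key_values=past, use_cache=True)
>>> # outputs.last_hidden_state now only covers the newly passed tokens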
GPTJForCausalLM
class transformers.GPTJForCausalLM(config)

The GPT-J Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

- config (GPTJConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The GPTJForCausalLM forward method overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters

- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using transformers.GPTJTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, n_ctx), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

Returns

A CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss (for next-token prediction).
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see the past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

CausalLMOutputWithPast or tuple(torch.FloatTensor)
Example:
>>> import torch
>>> from transformers import GPT2Tokenizer, GPTJForCausalLM
>>> tokenizer = GPT2Tokenizer.from_pretrained('EleutherAI/gpt-j-6B')
>>> model = GPTJForCausalLM.from_pretrained('EleutherAI/gpt-j-6B')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
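Since logits has shape (batch_size, sequence_length, config.vocab_size), the most likely next token can be read off the last position directly. A small sketch continuing the example above (a single greedy step, for illustration only):

>>> next_token_logits = logits[:, -1, :]  # scores for the token following the prompt
>>> next_token_id = torch.argmax(next_token_logits, dim=-1)
>>> next_token = tokenizer.decode(next_token_id)  # decode the predicted token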
GPTJForSequenceClassification
class transformers.GPTJForSequenceClassification(config)

The GPT-J Model transformer with a sequence classification head on top (linear layer). GPTJForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT, GPT-2, GPT-Neo) do.

Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in each row of the batch).

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

- config (GPTJConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The GPTJForSequenceClassification forward method overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters

- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using transformers.GPTJTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, n_ctx), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size,), optional) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

A SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) – Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see the past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Example:
>>> from transformers import GPT2Tokenizer, GPTJForSequenceClassification
>>> import torch
>>> tokenizer = GPT2Tokenizer.from_pretrained('EleutherAI/gpt-j-6B')
>>> model = GPTJForSequenceClassification.from_pretrained('EleutherAI/gpt-j-6B')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
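As noted above, the classification head looks at the last non-padding token, so batched inputs need a pad token. GPT-J's tokenizer does not define one by default; a common workaround (an assumption here, not an official recipe) is to reuse the eos token for padding, continuing the example above:

>>> tokenizer.pad_token = tokenizer.eos_token
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> batch = tokenizer(["Hello, my dog is cute", "Short"], padding=True, return_tensors="pt")
>>> outputs = model(**batch)
>>> predictions = outputs.logits.argmax(dim=-1)  # one predicted class id per row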