
BERT Inner Workings

I created this notebook to better understand the inner workings of Bert. I followed a lot of tutorials to try to understand the architecture, but I was never able to really understand what was happening under the hood. It always helps me to see the actual code instead of abstract diagrams that often don't match the actual implementation. If you're like me, then this tutorial will help!

I went as deep as you can go with Deep Learning — all the way to the tensor level. For me it helps to see the code and how the tensors move between layers. I feel like this level of abstraction is close enough to the core of the model to perfectly understand the inner workings.


I will use the Bert implementation from one of the best NLP libraries out there: Hugging Face Transformers. More specifically, I will show the inner workings of BertForSequenceClassification.

The term forward pass comes from neural networks: it refers to all the calculations from the input sequence all the way to the output of the last layer. It's basically the flow of data from input to output.

I will follow the code from an example input sequence all the way to the final output prediction.
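To make the idea concrete, here is a tiny standalone example (not part of the Bert code) of a forward pass through a single torch.nn.Linear layer:

import torch

# A toy forward pass: one linear layer mapping a batch of 4 examples
# with 8 features each to 2 outputs.
layer = torch.nn.Linear(in_features=8, out_features=2)
inputs = torch.randn(4, 8)
outputs = layer(inputs)   # calling the module runs its forward() method
print(outputs.shape)      # torch.Size([4, 2])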

What should I know for this notebook?

Some prior knowledge of Bert is needed. I won't go into the details of how Bert works; there is plenty of information about that out there.

Since I am using the PyTorch implementation of Bert, any knowledge of PyTorch is very useful.

Knowing a little bit about the transformers library helps too.

How deep are we going?

I think the best way to understand a model as complex as Bert is to see the actual layer components that are used. I will dig into the code until I reach the actual PyTorch layers from torch.nn. In my opinion there is no need to go deeper than the torch.nn layers.

Tutorial Structure

Each section contains multiple subsections.

The order of each section matches the order of the model’s layers from input to output.

At the beginning of each section of code I created a diagram to illustrate the flow of tensors through that particular piece of code.

I created the diagrams following the model’s implementation.

The major section Bert For Sequence Classification starts with the Class Call, which shows how we normally create the Bert model for sequence classification and perform a forward pass. Class Components covers the components of the BertForSequenceClassification implementation.

At the end of each major section, I assemble all components from that section and show the output and diagram.

At the end of the notebook, I have all the code parts and diagrams assembled.

Terminology

I will use regular deep learning terminology found in most Bert tutorials. I’m using some terms in a slightly different way:

  • Layer and layers: In this tutorial, when I mention a layer it can mean either a group of layers or a single layer. When I reach torch.nn, you know I am referring to a single layer.
  • torch.nn: I’m referring to any PyTorch layer module. This is the deepest I will go in this tutorial.

How to use this notebook?

The purpose of this notebook is purely educational. It is meant to align what you already know about how Bert works with the actual code implementation. I used the Bert implementation from Transformers. My contribution is arranging the code and creating the associated diagrams.

Dataset

For simplicity I will only use two sentences as our input data: "I love cats!" and "He hates pineapple pizza." I'll pretend to do binary sentiment classification on these two sentences.

Coding

Now let's do some coding! We will go through each code cell in the notebook, describe what it does, show the code, and, when relevant, show the output.

I made this format to be easy to follow if you decide to run each code cell in your own Python notebook.

When I learn from a tutorial, I always try to replicate the results. I believe it’s easy to follow along if you have the code next to the explanations.

Installs

  • The transformers library needs to be installed to use all the awesome code from Hugging Face. To get the latest version I will install it straight from GitHub.
# install the transformers library
!pip install -q git+https://github.com/huggingface/transformers.git
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
 |████████████████████████████████| 2.9MB 6.7MB/s 
 |████████████████████████████████| 890kB 48.9MB/s 
 |████████████████████████████████| 1.1MB 49.0MB/s 
Building wheel for transformers (PEP 517) ... done
Building wheel for sacremoses (setup.py) ... done
 |████████████████████████████████| 71kB 5.2MB/s

Imports

Import all needed libraries for this notebook.

Declare parameters used for this notebook:

  • set_seed(123) – Always good to set a fixed seed for reproducibility.
  • n_labels – How many labels we are using in this dataset. This is used to decide the size of the classification head.
  • ACT2FN – Dictionary for special activation functions used in Bert. We’ll only need the gelu activation function.
  • BertLayerNorm – Shortcut for calling the PyTorch normalization layer torch.nn.LayerNorm.
import math
import torch
from transformers.activations import gelu
from transformers import (BertTokenizer, BertConfig, 
                          BertForSequenceClassification, BertPreTrainedModel, 
                          apply_chunking_to_forward, set_seed,
                          )
from transformers.modeling_outputs import (BaseModelOutputWithPastAndCrossAttentions, 
                                           BaseModelOutputWithPoolingAndCrossAttentions, 
                                           SequenceClassifierOutput,
                                           )


# Set seed for reproducibility.
set_seed(123)

# How many labels are we using in training.
# This is used to decide size of classification head.
n_labels = 2

# GELU Activation function.
ACT2FN = {"gelu": gelu}

# Define BertLayerNorm.
BertLayerNorm = torch.nn.LayerNorm

Define Input

Let's define some text data that we will classify with Bert as positive or negative.

We encoded our positive and negative sentiments into:

  • 0 — for negative sentiments.
  • 1 — for positive sentiments.
# Array of text we want to classify
input_texts = ['I love cats!',
              "He hates pineapple pizza."]

# Sentiment labels
labels = [1, 0]

Bert Tokenizer

Creating the tokenizer is pretty standard when using the Transformers library.

We'll use the newly created tokenizer on our two-sentence dataset to create the input_sequences that will be used as input for our Bert model.

Bert Tokenizer Diagram

# Create BertTokenizer.
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

# Create input sequence using tokenizer.
input_sequences = tokenizer(text=input_texts, add_special_tokens=True, padding=True, truncation=True, return_tensors='pt')

# Since input_sequences is a dictionary we can also add the labels to it.
# We want to make sure all values are tensors.
input_sequences.update({'labels':torch.tensor(labels)})

# The tokenizer returns a dictionary with three keys: input_ids, attention_mask and token_type_ids.
# Let's do a pretty print.
print('PRETTY PRINT OF `input_sequences` UPDATED WITH `labels`:')
[print('%s : %s\n'%(k,v)) for k,v in input_sequences.items()];

# Let's see how the text looks after the Bert tokenizer.
# We see the special tokens added.
print('ORIGINAL TEXT:')
[print(example) for example in input_texts];
print('\nTEXT AFTER USING `BertTokenizer`:')
[print(tokenizer.decode(example)) for example in input_sequences['input_ids'].numpy()];
Downloading: 100% |████████████████████████████████| 213k/213k [00:00

Bert Configuration

These are predefined values specific to the Bert architecture, already defined for us by Hugging Face.

# Create the bert configuration.
bert_configuraiton = BertConfig.from_pretrained('bert-base-cased')

# Let's see number of layers.
print('NUMBER OF LAYERS:', bert_configuraiton.num_hidden_layers)

# We can also see the size of embeddings inside Bert.
print('EMBEDDING SIZE:', bert_configuraiton.hidden_size)

# See which activation function is used in the hidden layers.
print('ACTIVATIONS:', bert_configuraiton.hidden_act)
Downloading: 100% |████████████████████████████████| 433/433 [00:00

Bert For Sequence Classification

I will go over the Bert for Sequence Classification model. This is a Bert language model with a classification layer on top.

If you plan on looking at other transformer models, this tutorial will be very similar.

Class Call

Let's start by doing a forward pass using the whole model from Hugging Face Transformers.

# Let's start with the final model as we would normally use it.
model = BertForSequenceClassification.from_pretrained('bert-base-cased')

# Perform a forward pass. We only care about the output, not the gradients.
with torch.no_grad():
  output = model.forward(**input_sequences)

print()

# Let's check how a forward pass output looks like.
print('FORWARD PASS OUTPUT:', output)
Downloading: 100% |████████████████████████████████| 436M/436M [00:07

Class Components

Now let’s look at the code implementation and break down each part of the model and check the outputs.

Start with the BertForSequenceClassification found in transformers/src/transformers/models/bert/modeling_bert.py#L1449.

The forward pass uses the following layers (a sketch of how they chain together follows the list):

  • BertModel layer:

self.bert = BertModel(config)

  • torch.nn.Dropout layer for dropout:

self.dropout = nn.Dropout(config.hidden_dropout_prob)

  • torch.nn.Linear layer used for classification:

self.classifier = nn.Linear(config.hidden_size, config.num_labels)
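Here is a rough, hand-written sketch of how those three components chain together (not the library code verbatim): the pooled [CLS] output of BertModel goes through dropout, then the linear classifier, and a cross-entropy loss is computed when labels are provided.

# Simplified sketch of how BertForSequenceClassification chains its components.
# Argument names are illustrative; see modeling_bert.py#L1449 for the real code.
def sequence_classification_forward(bert, dropout, classifier, input_ids, attention_mask, token_type_ids, labels=None):
    outputs = bert(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
    pooled_output = outputs[1]                # pooled [CLS] representation from BertPooler
    pooled_output = dropout(pooled_output)    # regularization
    logits = classifier(pooled_output)        # shape: [batch_size, num_labels]
    loss = None
    if labels is not None:
        loss = torch.nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)), labels.view(-1))
    return loss, logits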

BertModel

This is the core Bert model that can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L815.

Hugging Face was nice enough to provide a short summary: "The bare Bert Model transformer outputting raw hidden-states without any specific head on top."
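We can see what the bare model returns by running it directly on the input_sequences created in the Bert Tokenizer section (a quick illustrative check; the pretrained weights are downloaded the first time):

# Run the bare BertModel on our tokenized inputs (no classification head).
from transformers import BertModel

bert_model = BertModel.from_pretrained('bert-base-cased')

with torch.no_grad():
  bare_outputs = bert_model(input_ids=input_sequences['input_ids'],
                            attention_mask=input_sequences['attention_mask'],
                            token_type_ids=input_sequences['token_type_ids'])

print(bare_outputs[0].shape)  # sequence output from BertEncoder: [2, 9, 768]
print(bare_outputs[1].shape)  # pooled [CLS] output from BertPooler: [2, 768]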

The forward pass uses the following layers:

  • BertEmbeddings layer:

self.embeddings = BertEmbeddings(config)

  • BertEncoder layer:

self.encoder = BertEncoder(config)

  • BertPooler layer:

self.pooler = BertPooler(config)

Bert Embeddings

This is where we feed the input_sequences created under Bert Tokenizer and get our first embeddings.

Implementation can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L165.

This layer contains actual PyTorch layers. I won't go into further detail since this is as far as we need to go.

The forward pass uses the following layers:

  • torch.nn.Embedding layer for word embeddings:

self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)

  • torch.nn.Embedding layer for position embeddings:

self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)

  • torch.nn.Embedding for token type embeddings:

self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)

  • torch.nn.LayerNorm layer for normalization:

self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)

  • torch.nn.Dropout layer for dropout:

self.dropout = nn.Dropout(config.hidden_dropout_prob)

Bert Embeddings Diagram
class BertEmbeddings(torch.nn.Module):
    """Construct the embeddings from word, position and token_type embeddings."""

    def __init__(self, config):
        super().__init__()
        self.word_embeddings = torch.nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
        self.position_embeddings = torch.nn.Embedding(config.max_position_embeddings, config.hidden_size)
        self.token_type_embeddings = torch.nn.Embedding(config.type_vocab_size, config.hidden_size)

        # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
        # any TensorFlow checkpoint file
        self.LayerNorm = torch.nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)

        # position_ids (1, len position emb) is contiguous in memory and exported when serialized
        self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
        self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")

    def forward(
        self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
    ):
        if input_ids is not None:
            input_shape = input_ids.size()
        else:
            input_shape = inputs_embeds.size()[:-1]

        seq_length = input_shape[1]

        if position_ids is None:
            position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]

        # ADDED
        print('Created Tokens Positions IDs:\n', position_ids)
        

        if token_type_ids is None:
            token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)

        if inputs_embeds is None:
            inputs_embeds = self.word_embeddings(input_ids)
        token_type_embeddings = self.token_type_embeddings(token_type_ids)

        # ADDED
        print('\nTokens IDs:\n', input_ids.shape)
        print('\nTokens Type IDs:\n', token_type_ids.shape)
        print('\nWord Embeddings:\n', inputs_embeds.shape)

        embeddings = inputs_embeds + token_type_embeddings
        if self.position_embedding_type == "absolute":
            position_embeddings = self.position_embeddings(position_ids)

            # ADDED
            print('\nPosition Embeddings:\n', position_embeddings.shape)

            embeddings += position_embeddings

        # ADDED
        print('\nToken Types Embeddings:\n', token_type_embeddings.shape)
        print('\nSum Up All Embeddings:\n', embeddings.shape)

        embeddings = self.LayerNorm(embeddings)

        # ADDED
        print('\nEmbeddings Layer Normalization:\n', embeddings.shape)

        embeddings = self.dropout(embeddings)

        # ADDED
        print('\nEmbeddings Dropout Layer:\n', embeddings.shape)
        
        return embeddings


# Create Bert embedding layer.
bert_embeddings_block = BertEmbeddings(bert_configuraiton)

# Perform a forward pass.
embedding_output = bert_embeddings_block.forward(input_ids=input_sequences['input_ids'], token_type_ids=input_sequences['token_type_ids'])
Created Tokens Positions IDs:
 tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])

Tokens IDs:
 torch.Size([2, 9])

Tokens Type IDs:
 torch.Size([2, 9])

Word Embeddings:
 torch.Size([2, 9, 768])

Position Embeddings:
 torch.Size([1, 9, 768])

Token Types Embeddings:
 torch.Size([2, 9, 768])

Sum Up All Embeddings:
 torch.Size([2, 9, 768])

Embeddings Layer Normalization:
 torch.Size([2, 9, 768])

Embeddings Dropout Layer:
 torch.Size([2, 9, 768])
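Note that the position embeddings come out as [1, 9, 768] while the word and token type embeddings are [2, 9, 768]: the addition still works because PyTorch broadcasts the size-1 batch dimension. A quick standalone check of that broadcasting behavior:

# Broadcasting check: a [1, 9, 768] tensor added to a [2, 9, 768] tensor
# expands along the batch dimension.
a = torch.randn(2, 9, 768)   # word + token type embeddings
b = torch.randn(1, 9, 768)   # position embeddings
print((a + b).shape)         # torch.Size([2, 9, 768])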

Bert Encoder

This layer contains the core of the Bert model, where the self-attention happens.

The implementation can be found at: transformers/src/transformers/models/bert/modeling_bert.py#L512.

The forward pass uses:

  • 12 BertLayer layers (in this setup config.num_hidden_layers=12); a sketch of the encoder loop follows the code line below:

self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)])
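As referenced above, here is a rough sketch of the loop BertEncoder runs over its 12 BertLayer modules (simplified; the real forward also collects hidden states and attentions and handles caching):

# Rough sketch of BertEncoder.forward: each BertLayer's output feeds the next one.
def encoder_forward_sketch(layers, hidden_states, attention_mask=None):
    for layer_module in layers:                        # 12 BertLayer blocks, applied in order
        layer_outputs = layer_module(hidden_states, attention_mask)
        hidden_states = layer_outputs[0]               # keep only the hidden states
    return hidden_states                               # [batch_size, seq_len, hidden_size]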

Bert Layer

This layer contains basic components of the self-attention implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L429.

The forward pass uses:

  • BertAttention layer:

self.attention = BertAttention(config)

  • BertIntermediate layer:

self.intermediate = BertIntermediate(config)

  • BertOutput layer:

self.output = BertOutput(config)

Bert Attention

This layer contains basic components of the self-attention implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L351.

The forward pass uses:

  • BertSelfAttention layer:

self.self = BertSelfAttention(config)

  • BertSelfOutput layer:

self.output = BertSelfOutput(config)

BertSelfAttention

This layer contains the torch.nn basic components of the self-attention implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L212.

The forward pass uses:

  • torch.nn.Linear used for the Query layer:

self.query = nn.Linear(config.hidden_size, self.all_head_size)

  • torch.nn.Linear used for the Key layer:

self.key = nn.Linear(config.hidden_size, self.all_head_size)

  • torch.nn.Linear used for the Value layer:

self.value = nn.Linear(config.hidden_size, self.all_head_size)

  • torch.nn.Dropout layer for dropout:

self.dropout = nn.Dropout(config.attention_probs_dropout_prob)

BertSelfAttention Diagram
class BertSelfAttention(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (config.hidden_size, config.num_attention_heads)
            )

        self.num_attention_heads = config.num_attention_heads
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        # ADDED
        print('Attention Head Size:\n', self.attention_head_size)
        print('\nCombined Attentions Head Size:\n', self.all_head_size)

        self.query = torch.nn.Linear(config.hidden_size, self.all_head_size)
        self.key = torch.nn.Linear(config.hidden_size, self.all_head_size)
        self.value = torch.nn.Linear(config.hidden_size, self.all_head_size)

        self.dropout = torch.nn.Dropout(config.attention_probs_dropout_prob)
        self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
        if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
            self.max_position_embeddings = config.max_position_embeddings
            self.distance_embedding = torch.nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)

        self.is_decoder = config.is_decoder

    def transpose_for_scores(self, x):
        new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
        x = x.view(*new_x_shape)
        return x.permute(0, 2, 1, 3)

    def forward(
        self,
        hidden_states,
        attention_mask=None,
        head_mask=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        past_key_value=None,
        output_attentions=False,
    ):
        # ADDED
        print('\nHidden States:\n', hidden_states.shape)

        mixed_query_layer = self.query(hidden_states)

        # If this is instantiated as a cross-attention module, the keys
        # and values come from an encoder; the attention mask needs to be
        # such that the encoder's padding tokens are not attended to.
        is_cross_attention = encoder_hidden_states is not None

        if is_cross_attention and past_key_value is not None:

            # ADDED
            print('\nQuery Linear Layer:\n', mixed_query_layer.shape)
            print('\nKey Linear Layer:\n', past_key_value[0].shape)
            print('\nValue Linear Layer:\n', past_key_value[1].shape)

            # reuse k,v, cross_attentions
            key_layer = past_key_value[0]
            value_layer = past_key_value[1]
            attention_mask = encoder_attention_mask
        elif is_cross_attention:

            # ADDED
            print('\nQuery Linear Layer:\n', mixed_query_layer.shape)
            print('\nKey Linear Layer:\n', self.key(encoder_hidden_states).shape)
            print('\nValue Linear Layer:\n', self.value(encoder_hidden_states).shape)

            key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
            value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
            attention_mask = encoder_attention_mask
        elif past_key_value is not None:

            # ADDED
            print('\nQuery Linear Layer:\n', mixed_query_layer.shape)
            print('\nKey Linear Layer:\n', self.key(hidden_states).shape)
            print('\nValue Linear Layer:\n', self.value(hidden_states).shape)

            key_layer = self.transpose_for_scores(self.key(hidden_states))
            value_layer = self.transpose_for_scores(self.value(hidden_states))
            key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
            value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
        else:

            # ADDED
            print('\nQuery Linear Layer:\n', mixed_query_layer.shape)
            print('\nKey Linear Layer:\n', self.key(hidden_states).shape)
            print('\nValue Linear Layer:\n', self.value(hidden_states).shape)

            key_layer = self.transpose_for_scores(self.key(hidden_states))
            value_layer = self.transpose_for_scores(self.value(hidden_states))

        
        

        query_layer = self.transpose_for_scores(mixed_query_layer)

        # ADDED
        print('\nQuery:\n', query_layer.shape)
        print('\nKey:\n', key_layer.shape)
        print('\nValue:\n', value_layer.shape)

        if self.is_decoder:
            # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
            # Further calls to cross_attention layer can then reuse all cross-attention
            # key/value_states (first "if" case)
            # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
            # all previous decoder key/value_states. Further calls to uni-directional self-attention
            # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
            # if encoder bi-directional self-attention `past_key_value` is always `None`
            past_key_value = (key_layer, value_layer)

        # ADDED
        print('\nKey Transposed:\n', key_layer.transpose(-1, -2).shape)

        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))

        # ADDED
        print('\nAttention Scores:\n', attention_scores.shape)

        if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
            seq_length = hidden_states.size()[1]
            position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
            position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
            distance = position_ids_l - position_ids_r
            positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
            positional_embedding = positional_embedding.to(dtype=query_layer.dtype)  # fp16 compatibility

            if self.position_embedding_type == "relative_key":
                relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
                attention_scores = attention_scores + relative_position_scores
            elif self.position_embedding_type == "relative_key_query":
                relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
                relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
                attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key

        attention_scores = attention_scores / math.sqrt(self.attention_head_size)

        # ADDED
        print('\nAttention Scores Divided by Scalar:\n', attention_scores.shape)

        if attention_mask is not None:
            # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
            attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = torch.nn.Softmax(dim=-1)(attention_scores)

        # ADDED
        print('\nAttention Probabilities Softmax Layer:\n', attention_probs.shape)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.dropout(attention_probs)

        # ADDED
        print('\nAttention Probabilities Dropout Layer:\n', attention_probs.shape)

        # Mask heads if we want to
        if head_mask is not None:
            attention_probs = attention_probs * head_mask

        context_layer = torch.matmul(attention_probs, value_layer)

        # ADDED
        print('\nContext:\n', context_layer.shape)

        context_layer = context_layer.permute(0, 2, 1, 3).contiguous()

        # ADDED
        print('\nContext Permute:\n', context_layer.shape)

        new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
        context_layer = context_layer.view(*new_context_layer_shape)

        # ADDED
        print('\nContext Reshaped:\n', context_layer.shape)
        
        outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)

        if self.is_decoder:
            outputs = outputs + (past_key_value,)
        return outputs

# Create bert self attention layer.
bert_selfattention_block = BertSelfAttention(bert_configuraiton)

# Perform a forward pass.
context_embedding = bert_selfattention_block.forward(hidden_states=embedding_output)
Attention Head Size:
 64

Combined Attentions Head Size:
 768

Hidden States:
 torch.Size([2, 9, 768])

Query Linear Layer:
 torch.Size([2, 9, 768])

Key Linear Layer:
 torch.Size([2, 9, 768])

Value Linear Layer:
 torch.Size([2, 9, 768])

Query:
 torch.Size([2, 12, 9, 64])

Key:
 torch.Size([2, 12, 9, 64])

Value:
 torch.Size([2, 12, 9, 64])

Key Transposed:
 torch.Size([2, 12, 64, 9])

Attention Scores:
 torch.Size([2, 12, 9, 9])

Attention Scores Divided by Scalar:
 torch.Size([2, 12, 9, 9])

Attention Probabilities Softmax Layer:
 torch.Size([2, 12, 9, 9])

Attention Probabilities Dropout Layer:
 torch.Size([2, 12, 9, 9])

Context:
 torch.Size([2, 12, 9, 64])

Context Permute:
 torch.Size([2, 9, 12, 64])

Context Reshaped:
 torch.Size([2, 9, 768])
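The shape trace above is the standard scaled dot-product attention, computed independently for each of the 12 heads: softmax(Q K^T / sqrt(d_head)) V. Here is a standalone sketch with the same shapes, using random tensors just to follow the dimensions:

# Scaled dot-product attention with the shapes from above:
# batch=2, heads=12, seq_len=9, head_size=64 (random tensors, for shape-tracking only).
q = torch.randn(2, 12, 9, 64)
k = torch.randn(2, 12, 9, 64)
v = torch.randn(2, 12, 9, 64)

scores = torch.matmul(q, k.transpose(-1, -2)) / math.sqrt(64)  # [2, 12, 9, 9]
probs = torch.nn.Softmax(dim=-1)(scores)                       # [2, 12, 9, 9]
context = torch.matmul(probs, v)                               # [2, 12, 9, 64]
print(context.shape)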

BertSelfOutput

This layer contains the torch.nn basic components of the self-attention implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L337.

The forward pass uses:

  • torch.nn.Linear layer:

self.dense = nn.Linear(config.hidden_size, config.hidden_size)

  • torch.nn.LayerNorm layer for normalization:

self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)

  • torch.nn.Dropout layer for dropout:

self.dropout = nn.Dropout(config.hidden_dropout_prob)

BertSelfOutput Diagram
class BertSelfOutput(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = torch.nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        print('Hidden States:\n', hidden_states.shape)

        hidden_states = self.dense(hidden_states)
        print('\nHidden States Linear Layer:\n', hidden_states.shape)

        hidden_states = self.dropout(hidden_states)
        print('\nHidden States Dropout Layer:\n', hidden_states.shape)

        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        print('\nHidden States Normalization Layer:\n', hidden_states.shape)

        return hidden_states


# Create Bert self output layer.
bert_selfoutput_block = BertSelfOutput(bert_configuraiton)

# Perform a forward pass - context_embedding[0] because the output is a tuple.
attention_output = bert_selfoutput_block.forward(hidden_states=context_embedding[0], input_tensor=embedding_output)
Hidden States:
 torch.Size([2, 9, 768])

Hidden States Linear Layer:
 torch.Size([2, 9, 768])

Hidden States Dropout Layer:
 torch.Size([2, 9, 768])

Hidden States Normalization Layer:
 torch.Size([2, 9, 768])
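One detail worth calling out: BertSelfOutput adds the original input_tensor back to the transformed hidden states before the LayerNorm. This is the residual (skip) connection around the attention block. A minimal illustration of that pattern:

# Residual connection + LayerNorm, mirroring BertSelfOutput.forward:
hidden = torch.randn(2, 9, 768)     # attention output after the Linear and Dropout layers
residual = torch.randn(2, 9, 768)   # the input_tensor argument (our embeddings)
normalized = torch.nn.LayerNorm(768)(hidden + residual)
print(normalized.shape)             # torch.Size([2, 9, 768])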

Assemble BertAttention

Put together BertSelfAttention layer and BertSelfOutput layer to create the BertAttention layer.

Now perform a forward pass using the previous layer's output as input.

BertAttention Diagram
class BertAttention(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        self.self = BertSelfAttention(config)
        self.output = BertSelfOutput(config)
        self.pruned_heads = set()

    def prune_heads(self, heads):
        if len(heads) == 0:
            return
        heads, index = find_pruneable_heads_and_indices(
            heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
        )

        # Prune linear layers
        self.self.query = prune_linear_layer(self.self.query, index)
        self.self.key = prune_linear_layer(self.self.key, index)
        self.self.value = prune_linear_layer(self.self.value, index)
        self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)

        # Update hyper params and store pruned heads
        self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
        self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
        self.pruned_heads = self.pruned_heads.union(heads)

    def forward(
        self,
        hidden_states,
        attention_mask=None,
        head_mask=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        past_key_value=None,
        output_attentions=False,
    ):
        self_outputs = self.self(
            hidden_states,
            attention_mask,
            head_mask,
            encoder_hidden_states,
            encoder_attention_mask,
            past_key_value,
            output_attentions,
        )
        attention_output = self.output(self_outputs[0], hidden_states)
        outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
        return outputs

# Create the assembled attention layer.
bert_attention_block = BertAttention(bert_configuraiton)

# Perform a forward pass through the whole Bert Attention layer.
attention_output = bert_attention_block(hidden_states=embedding_output)
Attention Head Size:
 64

Combined Attentions Head Size:
 768

Hidden States:
 torch.Size([2, 9, 768])

Query Linear Layer:
 torch.Size([2, 9, 768])

Key Linear Layer:
 torch.Size([2, 9, 768])

Value Linear Layer:
 torch.Size([2, 9, 768])

Query:
 torch.Size([2, 12, 9, 64])

Key:
 torch.Size([2, 12, 9, 64])

Value:
 torch.Size([2, 12, 9, 64])

Key Transposed:
 torch.Size([2, 12, 64, 9])

Attention Scores:
 torch.Size([2, 12, 9, 9])

Attention Scores Divided by Scalar:
 torch.Size([2, 12, 9, 9])

Attention Probabilities Softmax Layer:
 torch.Size([2, 12, 9, 9])

Attention Probabilities Dropout Layer:
 torch.Size([2, 12, 9, 9])

Context:
 torch.Size([2, 12, 9, 64])

Context Permute:
 torch.Size([2, 9, 12, 64])

Context Reshaped:
 torch.Size([2, 9, 768])
Hidden States:
 torch.Size([2, 9, 768])

Hidden States Linear Layer:
 torch.Size([2, 9, 768])

Hidden States Dropout Layer:
 torch.Size([2, 9, 768])

Hidden States Normalization Layer:
 torch.Size([2, 9, 768])

BertIntermediate

This layer contains the torch.nn basic components of the Bert model implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L400.

The forward pass uses:

  • torch.nn.Linear layer:

self.dense = nn.Linear(config.hidden_size, config.intermediate_size)

BertIntermediate Diagram
class BertIntermediate(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = torch.nn.Linear(config.hidden_size, config.intermediate_size)
        if isinstance(config.hidden_act, str):
            self.intermediate_act_fn = ACT2FN[config.hidden_act]
        else:
            self.intermediate_act_fn = config.hidden_act

    def forward(self, hidden_states):
        print('\nHidden States:\n', hidden_states.shape)

        hidden_states = self.dense(hidden_states)
        print('\nHidden States Linear Layer:\n', hidden_states.shape)

        hidden_states = self.intermediate_act_fn(hidden_states)
        print('\nHidden States Gelu Activation Function:\n', hidden_states.shape)

        return hidden_states


# Create bert intermediate layer.
bert_intermediate_block = BertIntermediate(bert_configuraiton)

# Perform a forward pass - attention_output[0] because the output is a tuple.
intermediate_output = bert_intermediate_block.forward(hidden_states=attention_output[0])
Hidden States:
 torch.Size([2, 9, 768])

Hidden States Linear Layer:
 torch.Size([2, 9, 3072])

Hidden States Gelu Activation Function:
 torch.Size([2, 9, 3072])
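The intermediate layer expands the hidden size from 768 to 3072 (config.intermediate_size) and applies GELU. The gelu imported from transformers.activations should match the exact formulation gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))); a quick check:

# Compare a hand-written exact GELU with the one stored in ACT2FN.
x = torch.linspace(-3, 3, steps=7)
manual_gelu = 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))
print(torch.allclose(manual_gelu, ACT2FN['gelu'](x)))  # expected: True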

BertOutput

This layer contains the torch.nn basic components of the Bert model implementation.

Implementation can be found at transformers/src/transformers/models/bert/modeling_bert.py#L415.

The forward pass uses:

  • torch.nn.Linear layer:

self.dense = nn.Linear(config.intermediate_size, config.hidden_size)

  • torch.nn.LayerNorm layer for normalization:

self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)

  • torch.nn.Dropout layer for dropout:

self.dropout = nn.Dropout(config.hidden_dropout_prob)

BertOutput Diagram
class BertOutput(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = torch.nn.Linear(config.intermediate_size, config.hidden_size)
        self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        print('\nHidden States:\n', hidden_states.shape)

        hidden_states = self.dense(hidden_states)
        print('\nHidden States Linear Layer:\n', hidden_states.shape)

        hidden_states = self.dropout(hidden_states)
        print('\nHidden States Dropout Layer:\n', hidden_states.shape)

        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        print('\nHidden States Layer Normalization:\n', hidden_states.shape)

        return hidden_states


# Create bert output layer.
bert_output_block = BertOutput(bert_configuraiton)

# Perform a forward pass - attention_output[0] because the output is a tuple.
layer_output = bert_output_block.forward(hidden_states=intermediate_output, input_tensor=attention_output[0])
Hidden States:
 torch.Size([2, 9, 3072])

Hidden States Linear Layer:
 torch.Size([2, 9, 768])

Hidden States Dropout Layer:
 torch.Size([2, 9, 768])

Hidden States Layer Normalization:
 torch.Size([2, 9, 768])

Assemble BertLayer

Put together BertAttention layer, BertIntermediate layer and BertOutput layer to create the BertLayer layer.

Now perform a forward pass using the previous layer's output as input.

BertLayer Diagram
class BertLayer(torch.nn.Module):
    def __init__(self, config):
        super().__init__()
        self.chunk_size_feed_forward = config.chunk_size_feed_forward
        self.seq_len_dim = 1
        self.attention = BertAttention(config)
        self.is_decoder = config.is_decoder
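        # Feed-forward sub-layers (BertIntermediate + BertOutput), as listed in the Bert Layer section above.
        self.intermediate = BertIntermediate(config)
        self.output = BertOutput(config)

    # NOTE: the methods below are a minimal sketch that follows the transformers
    # implementation referenced above (modeling_bert.py#L429); the decoder and
    # cross-attention branches are omitted for simplicity.
    def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False):
        # Self-attention block (the assembled BertAttention from the previous section).
        self_attention_outputs = self.attention(
            hidden_states,
            attention_mask,
            head_mask,
            output_attentions=output_attentions,
        )
        attention_output = self_attention_outputs[0]
        outputs = self_attention_outputs[1:]  # attention probabilities, if requested

        # Feed-forward block (BertIntermediate + BertOutput), chunked over the sequence dimension.
        layer_output = apply_chunking_to_forward(
            self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
        )
        return (layer_output,) + outputs

    def feed_forward_chunk(self, attention_output):
        intermediate_output = self.intermediate(attention_output)
        return self.output(intermediate_output, attention_output)


# Create the assembled Bert layer and perform a forward pass, as in the previous sections.
bert_layer_block = BertLayer(bert_configuraiton)
layer_output = bert_layer_block.forward(hidden_states=embedding_output)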
  

