Source code referenced in this post: the Keras version of the BERT source code and the TensorFlow version of the BERT source code.
Environment requirements: Python >= 3.5, TensorFlow >= 1.10 (I used 1.12).
pip install bert-serving-server
pip install bert-serving-client
Download the pre-trained Chinese BERT model: https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip
bert-serving-start -model_dir ./chinese_L-12_H-768_A-12 -num_worker=2
from bert_serving.client import BertClient
bc = BertClient()
g=bc.encode(['你好'])
In [4]: g.shape
Out[4]: (1, 768)
Remember this number: 768.
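For instance, if we encode several sentences at once (the sentences below are just examples I made up), every one of them comes back as a single 768-dimensional vector:
# bert-serving pools each sentence into one fixed-length vector by default
vecs = bc.encode(['今天天气不错', '我只问一个问题', '今天我们聊聊BERT'])
print(vecs.shape)  # (3, 768)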
I will ask just one question: what exactly does BERT output when it turns text into vectors?
Before answering that, think for a moment: what does word2vec output?
You can take a look at the article 大白话讲解word2vec到底在做些什么 (a plain-language explanation of what word2vec is actually doing): the dense vector we want is simply the output of the hidden layer; some write-ups instead define it as the weights between the input layer and the hidden layer.
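To make this concrete, here is a toy numpy sketch (random weights, not a trained model): with a one-hot input, the hidden layer's output is exactly one row of the input-to-hidden weight matrix, and that row is the dense vector we keep.
import numpy as np

vocab_size, embed_dim = 10, 5
W_in = np.random.randn(vocab_size, embed_dim)    # input layer -> hidden layer weights
W_out = np.random.randn(embed_dim, vocab_size)   # hidden layer -> output layer weights

word_id = 3
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

hidden = one_hot @ W_in                          # hidden-layer output for this word
print(np.allclose(hidden, W_in[word_id]))        # True: the word vector is a row of W_in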
I went through the TensorFlow BERT source code; at lines 36 and 877 of the source, hidden_size=768 appears in both places.
def __init__(self,
             vocab_size,
             # note this 768
             hidden_size=768,
             num_hidden_layers=12,
             num_attention_heads=12,
             intermediate_size=3072,
             hidden_act="gelu",
             hidden_dropout_prob=0.1,
             attention_probs_dropout_prob=0.1,
             max_position_embeddings=512,
             type_vocab_size=16,
             initializer_range=0.02):
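As a quick check against the Chinese model downloaded above, the snippet below reads its bert_config.json with BertConfig.from_json_file (this assumes the excerpt above is BertConfig.__init__ in the repo's modeling.py and that modeling.py is importable; treat the import path as an assumption):
import modeling  # modeling.py from the TensorFlow BERT repo

config = modeling.BertConfig.from_json_file('./chinese_L-12_H-768_A-12/bert_config.json')
print(config.hidden_size)        # 768: the width of every token vector
print(config.num_hidden_layers)  # 12
The second TensorFlow excerpt, from the end of each transformer block further down in the same file, shows every layer being projected back to this same hidden_size: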
# Down-project back to `hidden_size` then add the residual.
with tf.variable_scope("output"):
  layer_output = tf.layers.dense(
      intermediate_output,
      hidden_size,
      kernel_initializer=create_initializer(initializer_range))
  layer_output = dropout(layer_output, hidden_dropout_prob)
  layer_output = layer_norm(layer_output + attention_output)
  prev_output = layer_output
  all_layer_outputs.append(layer_output)
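In other words, the feed-forward sub-layer first expands each token to intermediate_size=3072, and this block immediately projects it back down to hidden_size=768 before the residual add, so the output of every transformer layer stays 768-wide. A toy numpy sketch of the shape flow (random tensors, not the real implementation):
import numpy as np

seq_len, hidden_size, intermediate_size = 128, 768, 3072

attention_output = np.random.randn(seq_len, hidden_size)           # (128, 768)
intermediate_output = np.random.randn(seq_len, intermediate_size)  # (128, 3072) after the feed-forward expansion
W_down = np.random.randn(intermediate_size, hidden_size)           # the dense down-projection shown above

layer_output = intermediate_output @ W_down + attention_output     # residual add
print(layer_output.shape)                                          # (128, 768): still 768-wide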
I also looked at the Keras version of BERT; embed_dim=768 appears at lines 47 and 121 of its source.
def get_model(token_num,
              pos_num=512,
              seq_len=512,
              embed_dim=768,
              transformer_num=12,
              head_num=12,
              feed_forward_dim=3072,
              dropout_rate=0.1,
              attention_activation=None,
              feed_forward_activation='gelu',
              training=True,
              trainable=None,
              output_layer_num=1):
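To see those defaults in action, here is roughly the toy setup from the keras-bert README (get_base_dict, get_model and compile_model are exported by keras_bert; the tiny vocabulary is only for illustration, and the defaults are the ones in the signature above):
from keras_bert import get_base_dict, get_model, compile_model

token_dict = get_base_dict()                  # toy vocabulary: just the special tokens
model = get_model(token_num=len(token_dict))  # every other argument keeps the defaults above (embed_dim=768, ...)
compile_model(model)
model.summary()                               # MLM-Dense and NSP-Dense both report 768 units
The second Keras excerpt, from around line 121 of the same file, is where those MLM and NSP layers are built: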
mlm_dense_layer = keras.layers.Dense(
    units=embed_dim,
    activation=feed_forward_activation,
    name='MLM-Dense',
)(transformed)
mlm_norm_layer = LayerNormalization(name='MLM-Norm')(mlm_dense_layer)
mlm_pred_layer = EmbeddingSimilarity(name='MLM-Sim')([mlm_norm_layer, embed_weights])
masked_layer = Masked(name='MLM')([mlm_pred_layer, inputs[-1]])
extract_layer = Extract(index=0, name='Extract')(transformed)
nsp_dense_layer = keras.layers.Dense(
    units=embed_dim,
    activation='tanh',
    name='NSP-Dense',
)(extract_layer)
nsp_pred_layer = keras.layers.Dense(
    units=2,
    activation='softmax',
    name='NSP',
)(nsp_dense_layer)
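Finally, a hedged usage sketch of the Keras version with the Chinese checkpoint downloaded earlier: extract_embeddings is keras-bert's feature-extraction helper, the path and sentences are illustrative, and this assumes the unzipped directory contains bert_config.json, bert_model.ckpt and vocab.txt as in the zip above.
from keras_bert import extract_embeddings

model_path = './chinese_L-12_H-768_A-12'
texts = ['你好', '我只问一个问题']
embeddings = extract_embeddings(model_path, texts)  # one numpy array per sentence
print(embeddings[0].shape)                           # (number_of_tokens, 768)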
Conclusion:
word2vec is essentially a three-layer neural network, and the word vector it outputs is one row of a weight matrix: the hidden-layer output for a one-hot input, i.e. a row of the input-to-hidden weights (some write-ups take the hidden-to-output weights instead).
In the Keras code above, BERT stacks the following output layers on top of its 12 transformer blocks:
1. Dense =>
2. LayerNormalization =>
3. EmbeddingSimilarity =>
4. Masked =>
5. Extract =>
6. Dense => (this layer has 768 units)
7. Dense =>
BERT is far deeper, so it takes much longer to train; and the length of a BERT word vector is fixed, so changing it means changing the network hyperparameters (hidden_size / embed_dim) and retraining.