Proper Way To Add New Vectors For OOV Words
Solution 1:
I think there is some confusion about the different components - I'll try to clarify:
1. The tokenizer does not produce vectors. It's just a component that segments texts into tokens. In spaCy, it's rule-based and not trainable, and doesn't have anything to do with vectors. It looks at whitespace and punctuation to determine token boundaries in a sentence.
2. An `nlp` model in spaCy can have predefined (static) word vectors that are accessible on the `Token` level. Every token with the same Lexeme gets the same vector. Some tokens/lexemes may indeed be OOV, like misspellings (see the sketch below). If you want to redefine/extend all vectors used in a model, you can use something like `init-model` (`init vectors` in spaCy v3).
3. The `tok2vec` layer is a machine learning component that learns how to produce suitable (dynamic) vectors for tokens. It does this by looking at lexical attributes of the token, but may also include the static vectors of the token (cf item 2). This component is generally not used by itself, but is part of another component, such as an NER. It will be the first layer of the NER model, and it can be trained as part of training the NER, to produce vectors that are suitable for your NER task.

In spaCy v2, you can first train a tok2vec component with `pretrain`, and then use this component for a subsequent `train` command. Note that all settings need to be the same across both commands, for the layers to be compatible.
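To make items 1 and 2 concrete, here is a minimal sketch, assuming a model with static vectors such as `en_core_web_md` is installed (the example sentence and its misspelling are made up):

```python
import spacy

# Assumes en_core_web_md, a model that ships static word vectors
nlp = spacy.load("en_core_web_md")

# The tokenizer only segments the text; it assigns no vectors itself
doc = nlp("This sentence contains a mispeling.")

for token in doc:
    # Static vectors are looked up per lexeme in the vocab, so the
    # misspelled token is OOV: has_vector is False and its norm is 0.0
    print(token.text, token.is_oov, token.has_vector, token.vector_norm)
```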
To answer your questions:
> Isn't the tok2vec the part that generates the vectors?
If you mean the static vectors, then no. The tok2vec component produces new vectors (possibly with a different dimension) on top of the static vectors, but it won't change the static ones.
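As a rough illustration of this difference, assuming the small English model is installed: the `sm` models ship no static vectors at all, yet their `tok2vec` layer still produces a dynamic, context-sensitive vector per token, which spaCy stores on the doc:

```python
import spacy

# Assumes en_core_web_sm: it has a tok2vec layer but no static vectors
nlp = spacy.load("en_core_web_sm")
doc = nlp("She sat by the bank of the river.")

# The static vectors table is empty in this model
print(nlp.vocab.vectors.shape)

# The tok2vec output is written to doc.tensor: one row per token,
# with a width that depends on the model's configuration
print(doc.tensor.shape)
```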
> What does it mean loading pretrained vectors and then train a component to predict these vectors? What's the purpose of doing this?
The purpose is to get a `tok2vec` component that is already pretrained from external vectors data. The external vectors data already embeds some "meaning" or "similarity" of the tokens, and this is, so to speak, transferred into the `tok2vec` component, which learns to produce the same similarities. The point is that this new `tok2vec` component can then be used & further fine-tuned in the subsequent `train` command (cf item 3).
> Is there a way to still make use of this for OOV words?
It really depends on what your "use" is. As https://stackoverflow.com/a/57665799/7961860 mentions, you can set the vectors yourself, or you can implement a user hook which decides how to define `token.vector`.
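Both options could look something like this sketch (spaCy v3 API; the word `fooizer`, the random vector, and the lowercase fallback are purely illustrative, not a recommendation):

```python
import numpy
import spacy
from spacy.language import Language

# Assumes en_core_web_md, whose static vectors are 300-dimensional
nlp = spacy.load("en_core_web_md")

# Option 1: register a static vector for a specific OOV string.
# In practice you'd use a vector computed from your own embedding data.
vec = numpy.random.uniform(-1, 1, (300,)).astype("f")
nlp.vocab.set_vector("fooizer", vec)

# Option 2: a user hook that decides what token.vector returns
@Language.component("oov_vector_fallback")
def oov_vector_fallback(doc):
    def vector(token):
        if token.is_oov:
            # Hypothetical fallback: reuse the lowercase form's vector
            # (returns zeros if that is missing too)
            return token.vocab.get_vector(token.lower_)
        return token.vocab.get_vector(token.orth)
    doc.user_token_hooks["vector"] = vector
    return doc

nlp.add_pipe("oov_vector_fallback", first=True)

doc = nlp("The fooizer worked")
print(doc[1].has_vector, doc[1].vector[:3])
```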
I hope this helps. I can't really recommend the best approach for you to follow without understanding why you want the OOV vectors / what your use case is. Happy to discuss further in the comments!