whatlies.language.GensimLanguage
This object lazily fetches Embeddings or EmbeddingSets from a keyed vector file generated by gensim. It is meant for retrieval, not plotting.
Important
The vectors are not shipped with this library; they must be downloaded or created upfront. A benefit of this is that you can train your own embeddings with gensim and visualise them using this library.
Here's a snippet that you can use to train your own (very limited) word2vec embeddings.
from gensim.test.utils import common_texts
from gensim.models import Word2Vec

# train a tiny word2vec model on gensim's bundled example sentences
# (note: in gensim>=4.0 the `size` argument is named `vector_size`)
model = Word2Vec(common_texts, size=10, window=5, min_count=1, workers=4)
# save only the keyed vectors; this file is what GensimLanguage reads
model.wv.save("wordvectors.kv")
You can also download pre-trained embeddings that are hosted by the gensim project.
import gensim.downloader as api
# To check what models are available
api.info()['models'].keys()
# To download the vectors
wv = api.load('glove-twitter-25')
# The download is cached in `~/gensim-data` by default, but you can also edit
# these vectors and save them someplace else if you'd like.
wv.save("glove-twitter-25.kv")
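Once saved, the keyed vectors file can be handed straight to GensimLanguage. A minimal sketch, assuming the file was saved as in the snippet above:

from whatlies.language import GensimLanguage
# point the language object at the keyed vectors saved above
lang = GensimLanguage("glove-twitter-25.kv")
# indexing with a list of tokens returns an EmbeddingSet
lang[['cat', 'dog', 'fish']]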
Note that if a word is not available in the keyed vectors file, we assume a zero vector. If you pass a sentence, we add together the embedding vectors of the separate words.
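As a small illustration of both behaviours, here is a sketch that assumes the wordvectors.kv file trained above; the token "xyzzy" is a made-up out-of-vocabulary example:

import numpy as np
from whatlies.language import GensimLanguage

lang = GensimLanguage("wordvectors.kv")
# an out-of-vocabulary token comes back as an all-zero vector
assert np.allclose(lang["xyzzy"].vector, 0)
# a multi-word query is the sum of the individual word vectors
assert np.allclose(lang["human computer"].vector,
                   lang["human"].vector + lang["computer"].vector)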
Parameters
Name | Type | Description | Default |
---|---|---|---|
keyedfile | | name of the model to load; be sure that it's downloaded or trained beforehand | required |
Usage:
> from whatlies.language import GensimLanguage
> lang = GensimLanguage("wordvectors.kv")
> lang['computer']
> lang = GensimLanguage("wordvectors.kv")
> lang[['computer', 'human', 'dog']]
__getitem__(self, query)
Retrieve a single embedding or a set of embeddings.
Parameters
Name | Type | Description | Default |
---|---|---|---|
query | Union[str, List[str]] | single string or list of strings | required |
Usage
> from whatlies.language import GensimLanguage
> lang = GensimLanguage("wordvectors.kv")
> lang['computer']
> lang = GensimLanguage("wordvectors.kv")
> lang[['computer', 'human', 'dog']]
embset_similar(self, emb, n=10, lower=False, metric='cosine')
Retrieve an EmbeddingSet of the embeddings that are most similar to the passed query.
Parameters
Name | Type | Description | Default |
---|---|---|---|
emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
n | int | the number of items you'd like to see returned | 10 |
metric | | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
lower | | only fetch lower case tokens | False |
Returns
Type | Description |
---|---|
EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
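Usage (a short sketch, assuming the wordvectors.kv file trained in the snippet at the top of this page):
> from whatlies.language import GensimLanguage
> lang = GensimLanguage("wordvectors.kv")
> # fetch the 5 embeddings most similar to "computer"
> lang.embset_similar("computer", n=5)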
score_similar(self, emb, n=10, metric='cosine', lower=False)
Retrieve a list of (Embedding, score) tuples that are the most similar to the passed query.
Parameters
Name | Type | Description | Default |
---|---|---|---|
emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
n | int | the number of items you'd like to see returned | 10 |
metric | | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
lower | | only fetch lower case tokens | False |
Returns
Type | Description |
---|---|
List | A list of (Embedding, score) tuples. |
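Usage (a short sketch, assuming the wordvectors.kv file trained in the snippet at the top of this page and that each returned Embedding exposes a name attribute):
> from whatlies.language import GensimLanguage
> lang = GensimLanguage("wordvectors.kv")
> # print the 5 closest tokens together with their distance score
> for emb, score in lang.score_similar("computer", n=5):
>     print(emb.name, score)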