whatlies.language.FasttextLanguage
This object is used to lazily fetch Embeddings or EmbeddingSets from a fasttext language backend. This object is meant for retrieval, not plotting.
Important
The vectors are not provided by this library; they must be downloaded upfront. You can find the download links here. Note: you'll want the bin file, not the text file.
To train your own fasttext model, see the guide here.
This language backend might require you to manually install extra dependencies, unless you installed via one of the following:
pip install whatlies[fasttext]
pip install whatlies[all]
Warning
You could theoretically use fasttext to train your own models with this code:
> from whatlies.language import FasttextLanguage
> import fasttext
> model = fasttext.train_unsupervised('data.txt', model='cbow', dim=10)
> model = fasttext.train_unsupervised('data.txt', model='skipgram', dim=20, epoch=20, lr=0.1, min_count=1)
> lang = FasttextLanguage(model)
> lang['python']
> model.save_model("result/data-skipgram-20.bin")
> lang = FasttextLanguage("result/data-skipgram-20.bin")
But you need to be aware that the fasttext library from Facebook has gone stale; the last update on PyPI was June 2019. Our preferred use case for it is to load the pretrained vectors. Note that you can also import these via spaCy, but this requires a packaging step.
Parameters
Name | Type | Description | Default |
---|---|---|---|
model | | name of the model to load, be sure that it's downloaded or trained beforehand | required |
Usage:
> from whatlies.language import FasttextLanguage
> lang = FasttextLanguage("cc.en.300.bin")
> lang['python']
> lang = FasttextLanguage("cc.en.300.bin", size=10)
> lang[['python', 'snake', 'dog']]
__getitem__(self, query)
Retrieve a single embedding or a set of embeddings.
Parameters
Name | Type | Description | Default |
---|---|---|---|
query | Union[str, List[str]] | single string or list of strings | required |
Usage
> lang = FasttextLanguage("cc.en.300.bin")
> lang['python']
> lang[['python', 'snake']]
> lang[['nobody expects', 'the spanish inquisition']]
embset_proximity(self, emb, max_proximity=0.1, top_n=20000, lower=True, metric='cosine')
Retrieve an EmbeddingSet of embeddings that are within a given proximity of the query.
Parameters
Name | Type | Description | Default |
---|---|---|---|
emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
max_proximity | float | the maximum distance (under the given metric) between the query and the returned embeddings | 0.1 |
top_n | int | likelihood limit that sets the subset of words to search | 20000 |
metric | str | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
lower | bool | only fetch lower case tokens | True |
Returns
Type | Description |
---|---|
EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
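A minimal usage sketch, assuming the pretrained cc.en.300.bin vectors from the examples above are available locally; the query word and max_proximity value are illustrative:
> lang = FasttextLanguage("cc.en.300.bin")
> lang.embset_proximity('python', max_proximity=0.2)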
embset_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')
Retrieve an EmbeddingSet containing the embeddings that are most similar to the passed query.
Parameters
Name | Type | Description | Default |
---|---|---|---|
emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
n | int | the number of items you'd like to see returned | 10 |
top_n | int | likelihood limit that sets the subset of words to search | 20000 |
metric | str | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
lower | bool | only fetch lower case tokens; note that the official English model only has lower case tokens | False |
Important
This method is currently very slow without a reasonable top_n setting, due to this bug.
Returns
Type | Description |
---|---|
EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
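A minimal usage sketch, assuming the same pretrained vectors; the query word and the lowered top_n (used here to keep the call fast, see the note above) are illustrative:
> lang = FasttextLanguage("cc.en.300.bin")
> lang.embset_similar('python', n=10, top_n=2000)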
score_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')
Retrieve a list of (Embedding, score) tuples that are the most similar to the passed query.
Parameters
Name | Type | Description | Default |
---|---|---|---|
emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
n | int | the number of items you'd like to see returned | 10 |
top_n | int | likelihood limit that sets the subset of words to search; set to None to ignore | 20000 |
metric | str | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
lower | bool | only fetch lower case tokens; note that the official English model only has lower case tokens | False |
Important
This method is currently very slow without a reasonable top_n setting, due to this bug.
Returns
Type | Description |
---|---|
List[Tuple[Embedding, float]] | A list of (Embedding, score) tuples. |
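A minimal usage sketch, assuming the same pretrained vectors; the query word and settings are illustrative, and the comprehension assumes each returned Embedding exposes its token via its name attribute, as elsewhere in whatlies:
> lang = FasttextLanguage("cc.en.300.bin")
> scores = lang.score_similar('python', n=5, top_n=2000)
> [(emb.name, score) for emb, score in scores]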