whatlies.language.FloretLanguage
This object is used to lazily fetch Embeddings or EmbeddingSets from a floret language backend.
Important
The vectors are not shipped with this library; they must already be on disk. To train your own floret vectors see the guide here. In short, you can train your model via:
import floret
model = floret.train_unsupervised("data.txt")
model.save_model("vectors.bin")
This language backend might require you to manually install extra dependencies unless you installed via either:
pip install whatlies[floret]
pip install whatlies[all]
Parameters

Name | Type | Description | Default |
---|---|---|---|
`path` | | path to the vectors on disk; the file must already exist before loading | required |
Usage:
> from whatlies.language import FloretLanguage
> lang = FloretLanguage("vectors.bin")
> lang['python']
> lang[['python', 'snake', 'dog']]
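For a quick sanity check, here is a minimal sketch of what the returned objects look like. It assumes a model trained and saved as vectors.bin (as in the snippet above) and relies on the generic whatlies attributes Embedding.name, Embedding.vector and EmbeddingSet.to_X(); adapt it to your own files.

from whatlies.language import FloretLanguage

lang = FloretLanguage("vectors.bin")  # assumed: this file was trained and saved beforehand

# A single string query returns an Embedding with a name and a dense vector.
emb = lang["python"]
print(emb.name, emb.vector.shape)

# A list of strings returns an EmbeddingSet; to_X() stacks the vectors into a matrix.
embset = lang[["python", "snake", "dog"]]
print(embset.to_X().shape)  # (3, vector_size)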
__getitem__(self, query)
Retrieve a single embedding or a set of embeddings.
Parameters

Name | Type | Description | Default |
---|---|---|---|
`query` | Union[str, List[str]] | single string or list of strings | required |
Usage

> lang = FloretLanguage("vectors.bin")
> lang['python']
> lang[['python', 'snake']]
> lang[['nobody expects', 'the spanish inquisition']]
embset_proximity(self, emb, max_proximity=0.1, top_n=20000, lower=True, metric='cosine')
Retrieve an EmbeddingSet of embeddings that fall within a given proximity of the query.
Parameters

Name | Type | Description | Default |
---|---|---|---|
`emb` | Union[str, whatlies.embedding.Embedding] | query to use | required |
`max_proximity` | float | the maximum distance (under the chosen metric) between the query and the returned embeddings | 0.1 |
`top_n` | | likelihood limit that sets the subset of words to search | 20000 |
`metric` | | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
`lower` | | only fetch lower case tokens | True |
Returns

Type | Description |
---|---|
EmbeddingSet | An EmbeddingSet containing the embeddings within the given proximity. |
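A hedged usage sketch (assuming a user-trained vectors.bin and the standard EmbeddingSet.embeddings dictionary for listing the results):

from whatlies.language import FloretLanguage

lang = FloretLanguage("vectors.bin")

# Among the 20000 most likely words, keep the lower-cased tokens whose
# cosine distance to "python" is at most 0.3.
nearby = lang.embset_proximity("python", max_proximity=0.3, top_n=20000, lower=True)
print(list(nearby.embeddings.keys()))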
embset_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')
Retrieve an EmbeddingSet of the embeddings that are most similar to the passed query.
Parameters

Name | Type | Description | Default |
---|---|---|---|
`emb` | Union[str, whatlies.embedding.Embedding] | query to use | required |
`n` | int | the number of items you'd like to see returned | 10 |
`top_n` | | likelihood limit that sets the subset of words to search | 20000 |
`metric` | | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
`lower` | | only fetch lower case tokens; note that the official English model only has lower case tokens | False |
Important

This method is incredibly slow at the moment without a good `top_n` setting, due to this bug.
Returns

Type | Description |
---|---|
EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
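A short sketch of a typical call (vectors.bin is an assumed, user-trained model; a modest top_n keeps the search fast, per the note above):

from whatlies.language import FloretLanguage

lang = FloretLanguage("vectors.bin")

# The 10 tokens closest to "python" among the 5000 most likely words.
similar = lang.embset_similar("python", n=10, top_n=5000)
print(list(similar.embeddings.keys()))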
score_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')
Retrieve a list of (Embedding, score) tuples that are most similar to the passed query.
Parameters

Name | Type | Description | Default |
---|---|---|---|
`emb` | Union[str, whatlies.embedding.Embedding] | query to use | required |
`n` | int | the number of items you'd like to see returned | 10 |
`top_n` | | likelihood limit that sets the subset of words to search; set to None to ignore | 20000 |
`metric` | | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
`lower` | | only fetch lower case tokens; note that the official English model only has lower case tokens | False |
Important

This method is incredibly slow at the moment without a good `top_n` setting, due to this bug.
Returns

Type | Description |
---|---|
list | A list of (Embedding, score) tuples. |
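A hedged sketch of iterating over the scored results (vectors.bin is an assumed, user-trained model; each score is the distance under the chosen metric, so lower means closer):

from whatlies.language import FloretLanguage

lang = FloretLanguage("vectors.bin")

# Each tuple pairs an Embedding with its distance to the query.
for emb, score in lang.score_similar("python", n=5, top_n=5000):
    print(emb.name, round(score, 3))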