whatlies.language.FasttextLanguage

This object is used to lazily fetch Embeddings or EmbeddingSets from a fasttext language backend. It is meant for retrieval, not plotting.

Important

The vectors are not shipped with this library; they must be downloaded upfront. You can find the download links here. Note: you'll want the .bin file, not the text file. To train your own fasttext model, see the guide here.

This language backend might require you to manually install extra dependencies, unless you installed the package via one of:

pip install whatlies[fasttext]
pip install whatlies[all]

Warning

You could theoretically use fasttext to train your own models with code like this:

> import fasttext
> from whatlies.language import FasttextLanguage
> model = fasttext.train_unsupervised('data.txt',
                                      model='cbow',
                                      dim=10)
> model = fasttext.train_unsupervised('data.txt',
                                      model='skipgram',
                                      dim=20,
                                      epoch=20,
                                      lr=0.1,
                                      min_count=1)
> lang = FasttextLanguage(model)
> lang['python']
> model.save_model("result/data-skipgram-20.bin")
> lang = FasttextLanguage("result/data-skipgram-20.bin")

But you need to be aware that the fasttext library from Facebook has gone stale; the last update on PyPI was in June 2019. Our preferred use case for it is the pretrained vectors. Note that you can also import these via spaCy, but this requires a packaging step.

Parameters

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` |  | name of the model to load; be sure that it's downloaded or trained beforehand | *required* |

Usage:

> from whatlies.language import FasttextLanguage
> lang = FasttextLanguage("cc.en.300.bin")
> lang['python']
> lang = FasttextLanguage("cc.en.300.bin", size=10)
> lang[['python', 'snake', 'dog']]

__getitem__(self, query)

Source code in language/_fasttext_lang.py:
    def __getitem__(self, query: Union[str, List[str]]):
        """
        Retrieve a single embedding or a set of embeddings.

        Arguments:
            query: single string or list of strings

        **Usage**
        ```python
        > lang = FasttextLanguage("cc.en.300.bin")
        > lang['python']
        > lang[['python', 'snake']]
        > lang[['nobody expects', 'the spanish inquisition']]
        ```
        """
        if isinstance(query, str):
            self._input_str_legal(query)
            vec = self.model.get_word_vector(query)
            return Embedding(query, vec)
        return EmbeddingSet(*[self[tok] for tok in query])

Retrieve a single embedding or a set of embeddings.

Parameters

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `query` | `Union[str, List[str]]` | single string or list of strings | *required* |

Usage

> lang = FasttextLanguage("cc.en.300.bin")
> lang['python']
> lang[['python', 'snake']]
> lang[['nobody expects', 'the spanish inquisition']]
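To make the return values concrete, here is a small sketch of retrieving and inspecting embeddings. It assumes the pretrained `cc.en.300.bin` file has been downloaded beforehand and that `Embedding` exposes `.name` and `.vector` attributes, as elsewhere in the whatlies API:

```python
from whatlies.language import FasttextLanguage

lang = FasttextLanguage("cc.en.300.bin")

# A single string returns an Embedding, a list of strings an EmbeddingSet.
emb = lang["python"]
emb_set = lang[["python", "snake", "dog"]]

print(emb.name)          # "python" (assuming the `.name` attribute)
print(emb.vector.shape)  # (300,) for the cc.en.300 vectors (assuming `.vector`)
```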

embset_proximity(self, emb, max_proximity=0.1, top_n=20000, lower=True, metric='cosine')

Source code in language/_fasttext_lang.py:
    def embset_proximity(
        self,
        emb: Union[str, Embedding],
        max_proximity: float = 0.1,
        top_n=20_000,
        lower=True,
        metric="cosine",
    ):
        """
        Retrieve an [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] of embeddings that are within a certain proximity of the query.

        Arguments:
            emb: query to use
            max_proximity: the maximum distance between the query and the embeddings that are returned
            top_n: limit the search to the `top_n` most likely words in the vocabulary
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens

        Returns:
            An [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] containing the similar embeddings.
        """
        if isinstance(emb, str):
            emb = self[emb]

        queries = self._prepare_queries(top_n, lower)
        distances = self._calculate_distances(emb, queries, metric)
        return EmbeddingSet(
            {w: self[w] for w, d in zip(queries, distances) if d <= max_proximity}
        )

Retrieve an EmbeddingSet of embeddings that are within a certain proximity of the query.

Parameters

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `emb` | `Union[str, whatlies.embedding.Embedding]` | query to use | *required* |
| `max_proximity` | `float` | the maximum distance between the query and the embeddings that are returned | `0.1` |
| `top_n` |  | limit the search to the `top_n` most likely words in the vocabulary | `20000` |
| `metric` |  | metric to use to calculate distance, must be scipy or sklearn compatible | `'cosine'` |
| `lower` |  | only fetch lower case tokens | `True` |

Returns

| Type | Description |
| --- | --- |
| `EmbeddingSet` | An EmbeddingSet containing the similar embeddings. |
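As a usage sketch (the query word, distance threshold and `top_n` value below are illustrative placeholders, not defaults from the library):

```python
from whatlies.language import FasttextLanguage

lang = FasttextLanguage("cc.en.300.bin")

# Collect every embedding within a cosine distance of 0.3 of "python",
# searching only the 10,000 most likely words to keep the call fast.
close_by = lang.embset_proximity("python", max_proximity=0.3, top_n=10_000)
```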

embset_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')

Source code in language/_fasttext_lang.py:
    def embset_similar(
        self,
        emb: Union[str, Embedding],
        n: int = 10,
        top_n=20_000,
        lower=False,
        metric="cosine",
    ):
        """
        Retrieve an [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] containing the embeddings that are most similar to the passed query.

        Arguments:
            emb: query to use
            n: the number of items you'd like to see returned
            top_n: limit the search to the `top_n` most likely words in the vocabulary
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens; note that the official English model only has lower case tokens

        Important:
            This method is incredibly slow at the moment without a good `top_n` setting due to
            [this bug](https://github.com/facebookresearch/fastText/issues/1040).

        Returns:
            An [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] containing the similar embeddings.
        """
        embs = [w[0] for w in self.score_similar(emb, n, top_n, lower, metric)]
        return EmbeddingSet({w.name: w for w in embs})

Retrieve an EmbeddingSet containing the embeddings that are most similar to the passed query.

Parameters

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `emb` | `Union[str, whatlies.embedding.Embedding]` | query to use | *required* |
| `n` | `int` | the number of items you'd like to see returned | `10` |
| `top_n` |  | limit the search to the `top_n` most likely words in the vocabulary | `20000` |
| `metric` |  | metric to use to calculate distance, must be scipy or sklearn compatible | `'cosine'` |
| `lower` |  | only fetch lower case tokens; note that the official English model only has lower case tokens | `False` |

Important

This method is incredibly slow at the moment without a good top_n setting due to this bug.

Returns

| Type | Description |
| --- | --- |
| `EmbeddingSet` | An EmbeddingSet containing the similar embeddings. |
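For example, a sketch of fetching nearest neighbours as an EmbeddingSet (the query and the `top_n` value are placeholders):

```python
from whatlies.language import FasttextLanguage

lang = FasttextLanguage("cc.en.300.bin")

# The 10 embeddings closest to "python" by cosine distance; a smaller
# `top_n` keeps the search fast, see the note about the fasttext bug above.
neighbours = lang.embset_similar("python", n=10, top_n=10_000)
```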

score_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')

Source code in language/_fasttext_lang.py:
    def score_similar(
        self,
        emb: Union[str, Embedding],
        n: int = 10,
        top_n=20_000,
        lower=False,
        metric="cosine",
    ):
        """
        Retrieve a list of (Embedding, score) tuples for the embeddings that are most similar to the passed query.

        Arguments:
            emb: query to use
            n: the number of items you'd like to see returned
            top_n: limit the search to the `top_n` most likely words in the vocabulary, set to `None` to ignore
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens; note that the official English model only has lower case tokens

        Important:
            This method is incredibly slow at the moment without a good `top_n` setting due
            to [this bug](https://github.com/facebookresearch/fastText/issues/1040).

        Returns:
            A list of ([Embedding][whatlies.embedding.Embedding], score) tuples.
        """
        if isinstance(emb, str):
            emb = self[emb]

        queries = self._prepare_queries(top_n, lower)
        distances = self._calculate_distances(emb, queries, metric)
        by_similarity = sorted(zip(queries, distances), key=lambda z: z[1])

        if len(queries) < n:
            warnings.warn(
                f"We could only find {len(queries)} feasible words. Consider changing `top_n` or `lower`",
                UserWarning,
            )

        return [(self[q], float(d)) for q, d in by_similarity[:n]]

Retrieve a list of (Embedding, score) tuples for the embeddings that are most similar to the passed query.

Parameters

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `emb` | `Union[str, whatlies.embedding.Embedding]` | query to use | *required* |
| `n` | `int` | the number of items you'd like to see returned | `10` |
| `top_n` |  | limit the search to the `top_n` most likely words in the vocabulary; set to `None` to ignore | `20000` |
| `metric` |  | metric to use to calculate distance, must be scipy or sklearn compatible | `'cosine'` |
| `lower` |  | only fetch lower case tokens; note that the official English model only has lower case tokens | `False` |

Important

This method is incredibly slow at the moment without a good top_n setting due to this bug.

Returns

| Type | Description |
| --- | --- |
| `list` | A list of (Embedding, score) tuples. |
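A sketch of how the scores might be inspected (the query and `top_n` value are placeholders, and the `.name` attribute on `Embedding` is assumed from the rest of the whatlies API):

```python
from whatlies.language import FasttextLanguage

lang = FasttextLanguage("cc.en.300.bin")

# Tuples come back sorted by distance, smallest first.
for emb, dist in lang.score_similar("python", n=5, top_n=10_000):
    print(emb.name, round(dist, 3))
```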