whatlies.language.FloretLanguage

This object is used to lazily fetch Embeddings or EmbeddingSets from a floret language backend.

Important

The vectors are not provided by this library; they must be available on disk beforehand.

To train your own floret vectors, see the guide here. In short, you can train your model via:

import floret

model = floret.train_unsupervised("data.txt")
model.save_model("vectors.bin")
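
Once saved, the vectors can be loaded directly into this language backend. A minimal sketch, assuming the vectors.bin file produced above:

from whatlies.language import FloretLanguage

# Point the backend at the floret vectors trained above;
# the file must already exist on disk.
lang = FloretLanguage("vectors.bin")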

This language backend might require you to manually install extra dependencies, unless you installed via either:

pip install whatlies[floret]
pip install whatlies[all]

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| path |  | path to the vectors on disk; be sure the file exists beforehand | required |

Usage:

> from whatlies.language import FloretLanguage
> lang = FloretLanguage("vectors.bin")
> lang['python']
> lang[['python', 'snake', 'dog']]

__getitem__(self, query)

Source code in language/_floret_lang.py:
    def __getitem__(self, query: Union[str, List[str]]):
        """
        Retrieve a single embedding or a set of embeddings.

        Arguments:
            query: single string or list of strings

        **Usage**
        ```python
        > lang = FloretLanguage("vectors.bin")
        > lang['python']
        > lang[['python', 'snake']]
        > lang[['nobody expects', 'the spanish inquisition']]
        ```
        """
        if isinstance(query, str):
            vec = self.model.get_word_vector(query)
            return Embedding(query, vec)
        return EmbeddingSet(*[self[tok] for tok in query])

Retrieve a single embedding or a set of embeddings.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| query | Union[str, List[str]] | single string or list of strings | required |

Usage

> lang = FloretLanguage("vectors.bin")
> lang['python']
> lang[['python', 'snake']]
> lang[['nobody expects', 'the spanish inquisition']]

embset_proximity(self, emb, max_proximity=0.1, top_n=20000, lower=True, metric='cosine')

Source code in language/_floret_lang.py:
    def embset_proximity(
        self,
        emb: Union[str, Embedding],
        max_proximity: float = 0.1,
        top_n=20_000,
        lower=True,
        metric="cosine",
    ):
        """
        Retrieve an [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] of embeddings that are within a proximity of the query.

        Arguments:
            emb: query to use
            max_proximity: the maximum distance between the query and the embeddings returned
            top_n: restrict the search to the `top_n` most likely words in the vocabulary
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens

        Returns:
            An [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] containing the similar embeddings.
        """
        if isinstance(emb, str):
            emb = self[emb]

        queries = self._prepare_queries(top_n, lower)
        distances = self._calculate_distances(emb, queries, metric)
        return EmbeddingSet(
            {w: self[w] for w, d in zip(queries, distances) if d <= max_proximity}
        )

Retrieve an EmbeddingSet of embeddings that are within a proximity of the query.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
| max_proximity | float | the maximum distance between the query and the embeddings returned | 0.1 |
| top_n |  | restrict the search to the top_n most likely words in the vocabulary | 20000 |
| metric |  | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
| lower |  | only fetch lower case tokens | True |

Returns

| Type | Description |
|------|-------------|
| EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
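
A usage sketch for this method, assuming the vectors.bin file trained earlier:

> lang = FloretLanguage("vectors.bin")
> lang.embset_proximity('python', max_proximity=0.2)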

embset_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')

Source code in language/_floret_lang.py:
    def embset_similar(
        self,
        emb: Union[str, Embedding],
        n: int = 10,
        top_n=20_000,
        lower=False,
        metric="cosine",
    ):
        """
        Retrieve an [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] of the embeddings most similar to the passed query.

        Arguments:
            emb: query to use
            n: the number of items you'd like to see returned
            top_n: restrict the search to the `top_n` most likely words in the vocabulary
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens, note that the official english model only has lower case tokens

        Important:
            This method is incredibly slow at the moment without a good `top_n` setting due to
            [this bug](https://github.com/facebookresearch/fastText/issues/1040).

        Returns:
            An [EmbeddingSet][whatlies.embeddingset.EmbeddingSet] containing the similar embeddings.
        """
        embs = [w[0] for w in self.score_similar(emb, n, top_n, lower, metric)]
        return EmbeddingSet({w.name: w for w in embs})

Retrieve an EmbeddingSet of the embeddings most similar to the passed query.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
| n | int | the number of items you'd like to see returned | 10 |
| top_n |  | restrict the search to the top_n most likely words in the vocabulary | 20000 |
| metric |  | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
| lower |  | only fetch lower case tokens; note that the official English model only has lower case tokens | False |

Important

This method is incredibly slow at the moment without a good top_n setting due to [this bug](https://github.com/facebookresearch/fastText/issues/1040).

Returns

| Type | Description |
|------|-------------|
| EmbeddingSet | An EmbeddingSet containing the similar embeddings. |
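
A usage sketch, again assuming the vectors.bin file trained earlier:

> lang = FloretLanguage("vectors.bin")
> lang.embset_similar('python', n=5)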

score_similar(self, emb, n=10, top_n=20000, lower=False, metric='cosine')

Source code in language/_floret_lang.py:
    def score_similar(
        self,
        emb: Union[str, Embedding],
        n: int = 10,
        top_n=20_000,
        lower=False,
        metric="cosine",
    ):
        """
        Retrieve a list of (Embedding, score) tuples that are the most similar to the passed query.

        Arguments:
            emb: query to use
            n: the number of items you'd like to see returned
            top_n: restrict the search to the `top_n` most likely words in the vocabulary; set to `None` to search all words
            metric: metric to use to calculate distance, must be scipy or sklearn compatible
            lower: only fetch lower case tokens, note that the official english model only has lower case tokens

        Important:
            This method is incredibly slow at the moment without a good `top_n` setting due
            to [this bug](https://github.com/facebookresearch/fastText/issues/1040).

        Returns:
            A list of ([Embedding][whatlies.embedding.Embedding], score) tuples.
        """
        if isinstance(emb, str):
            emb = self[emb]

        queries = self._prepare_queries(top_n, lower)
        distances = self._calculate_distances(emb, queries, metric)
        by_similarity = sorted(zip(queries, distances), key=lambda z: z[1])

        if len(queries) < n:
            warnings.warn(
                f"We could only find {len(queries)} feasible words. Consider changing `top_n` or `lower`",
                UserWarning,
            )

        return [(self[q], float(d)) for q, d in by_similarity[:n]]

Retrieve a list of (Embedding, score) tuples that are the most similar to the passed query.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| emb | Union[str, whatlies.embedding.Embedding] | query to use | required |
| n | int | the number of items you'd like to see returned | 10 |
| top_n |  | restrict the search to the top_n most likely words in the vocabulary; set to None to search all words | 20000 |
| metric |  | metric to use to calculate distance, must be scipy or sklearn compatible | 'cosine' |
| lower |  | only fetch lower case tokens; note that the official English model only has lower case tokens | False |

Important

This method is incredibly slow at the moment without a good top_n setting due to [this bug](https://github.com/facebookresearch/fastText/issues/1040).

Returns

| Type | Description |
|------|-------------|
| list | A list of (Embedding, score) tuples. |
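
A usage sketch, assuming the vectors.bin file trained earlier:

> lang = FloretLanguage("vectors.bin")
> lang.score_similar('python', n=5)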