Sentence Models

A different take on textcat.

Why sentence models?

I was working on a project that tries to detect topics in academic articles found on arxiv. One of the topics I was interested in was "new datasets". If an article presents a new dataset, there's usually something interesting happening, so I wanted to build a classifier for it.

I'm dealing with arxiv abstracts: a single dense paragraph of text that summarises the entire paper.

You could build a classifier on the entire text, and that could work, but annotation takes a lot of effort because you'd need to read the entire abstract. It's probably hard for an algorithm too: it has to figure out which part of the abstract is relevant to the topic of interest, and there's a lot of text that might matter.

But what if we choose to approach the problem slightly differently?

It seems a lot simpler to just detect sentences within the abstract.

Maybe it makes sense to split the text into sentences and run a classifier on each of those. This might not be perfect for every scenario out there, but it seems like a valid starting point that helps you annotate and get going.

If you have sentence-level predictions, you can re-use them to make abstract-level predictions.

This is how you might use sentence-level predictions to classify abstracts.

And the library is set up in such a way that you can add as many labels as you like. We're even able to do some clever fine-tuning tricks internally.

The internal pipeline.

Note: the fine-tuning bit is still a work in progress in the library.

This approach isn't state of the art, and there's probably a whole bunch of things we can improve. But it does seem like a pragmatic, and understandable, starting point for a lot of text categorisation projects.

Quickstart

This project is all about making text classification models by predicting properties at the sentence level first.

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

# Learn a new sentence-model using a stateless encoder.
encoder = HashingVectorizer()
smod = SentenceModel(encoder=encoder).learn_from_disk("annotations.jsonl")

# Make a prediction
example = "In this paper we introduce a new dataset for citrus fruit detection. We also contribute a state of the art algorithm."
smod(example)

The model makes a prediction for each sentence, so that you can build downstream rules on top of the output. Here's what the predictions might look like, depending on the labels in the annotations.jsonl file.

{
    'text': 'In this paper we introduce a new dataset for citrus fruit detection. We also contribute a state of the art algorithm.',
    'sentences': [
        {
            'sentence': 'In this paper we introduce a new dataset for citrus fruit detection.',
            'cats': {
                "new-dataset": 0.8654,
                "llm": 0.212,
                "benchmark": 0.321
            }
        },{
            'sentence': 'We also contribute a state of the art algorithm.',
            'cats': {
                "new-dataset": 0.398,
                "llm": 0.431,
                "benchmark": 0.967
            }
        },
    ]
}
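
Downstream rules on top of this output can be as simple as thresholding the scores. Here's a minimal sketch; the helper and the 0.8 threshold are illustrative choices, not part of the library.

# Flag an abstract for a label when any of its sentences clears a threshold.
def abstract_has_label(prediction: dict, label: str, threshold: float = 0.8) -> bool:
    return any(sent["cats"].get(label, 0.0) >= threshold
               for sent in prediction["sentences"])

abstract_has_label(smod(example), "new-dataset")  # True, given the scores above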

Learning from data

The SentenceModel can learn from a .jsonl file directly, but it assumes a specific structure when learning. Internally it validates each example with the following Pydantic model to ensure the data is in the right format.

from typing import Dict

from pydantic import BaseModel

class Example(BaseModel):
    text: str
    target: Dict[str, bool]

That means that an example like the one below would work:

{
    "text": "In this paper we introduce a new dataset for citrus fruit detection",
    "target": {
        "new-dataset": True,
        "llm": False,
        "benchmark": False
    }
}
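
Any JSONL writer can produce such a file. Here's a minimal sketch using srsly, which the library already uses internally; the annotations themselves are made up.

import srsly

# Each dictionary becomes one line in the .jsonl file.
annotations = [
    {"text": "In this paper we introduce a new dataset for citrus fruit detection.",
     "target": {"new-dataset": True}},
    {"text": "We also contribute a state of the art algorithm.",
     "target": {"new-dataset": False}},
]
srsly.write_jsonl("annotations.jsonl", annotations)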

It is preferable to have text keys that represent a single sentence. It's not required, but the library will assume sentences when it makes a prediction.

Note that you don't need to have all labels available in each example. That's a feature! When you're annotating, it's a lot simpler to focus on one label at a time, and it's perfectly fine if some examples don't contain all the labels you're interested in.
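
For example, these two hypothetical lines can live in the same annotations.jsonl file, each annotated for a different label:

{"text": "In this paper we introduce a new dataset for citrus fruit detection", "target": {"new-dataset": true}}
{"text": "We evaluate several large language models on this task", "target": {"llm": true}}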

Embedding models

You might prefer to use a pretrained embedding model as the encoder in this setup. For this, the Embetter library is a good choice. It supports many embedding techniques and makes sure that they all adhere to the scikit-learn API. If you want to use the popular sentence-transformers library, you can use the following snippet.

from sentence_models import SentenceModel
from embetter.text import SentenceEncoder

# Construct a sentence-model with a pretrained encoder from sentence-transformers.
smod = SentenceModel(encoder=SentenceEncoder())

API

SentenceModel

This is the main object that you'll interact with.

This object represents a model that can apply predictions per sentence.

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer())

You can customise some of the settings if you like, but it comes with sensible defaults.

from sentence_models import SentenceModel
from embetter.text import SentenceEncoder
from sklearn.linear_model import LogisticRegression

smod = SentenceModel(
    encoder=SentenceEncoder(), 
    clf_head=LogisticRegression(class_weight="balanced"),
    spacy_model="en_core_web_sm", 
    verbose=False
)
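
Because the clf_head is cloned once per label internally, any scikit-learn classifier that implements predict_proba can act as the head. A hypothetical variant that uses a calibrated linear SVM instead of logistic regression:

from sentence_models import SentenceModel
from embetter.text import SentenceEncoder
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

# CalibratedClassifierCV wraps LinearSVC so that predict_proba is available.
smod = SentenceModel(
    encoder=SentenceEncoder(),
    clf_head=CalibratedClassifierCV(LinearSVC()),
)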
Source code in sentence_models/__init__.py
class SentenceModel:
    """
    **SentenceModel**

    This object represents a model that can apply predictions per sentence.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer())
    ```

    You can customise some of the settings if you like, but it comes with sensible defaults.

    ```python
    from sentence_models import SentenceModel
    from embetter.text import SentenceEncoder
    from sklearn.linear_model import LogisticRegression

    smod = SentenceModel(
        encoder=SentenceEncoder(), 
        clf_head=LogisticRegression(class_weight="balanced"),
        spacy_model="en_core_web_sm", 
        verbose=False
    )
    ```
    """
    def __init__(self,
                 encoder: TransformerMixin,
                 clf_head: ClassifierMixin = LogisticRegression(class_weight="balanced"),
                 spacy_model: str = "en_core_web_sm",
                 verbose: bool = False,
                 finetuner = None
                 ):
        self.encoder = encoder
        self.clf_head = clf_head
        self.spacy_model = spacy_model if isinstance(spacy_model, Language) else spacy.load(spacy_model, disable=["ner", "lemmatizer", "tagger"])
        self.classifiers = {}
        self.verbose = verbose
        self.finetuner = finetuner
        self.log("SentenceModel initialized.")

    def log(self, msg: str) -> None:
        if self.verbose:
            console.log(msg)

    # TODO: add support for finetuners
    # def _generate_finetune_dataset(self, examples):
    #     if self.verbose:
    #         console.log("Generating pairs for finetuning.")
    #     all_labels = {cat for ex in examples for cat in ex['target'].keys()}

    #     # Calculating embeddings is usually expensive so only run this once
    #     arrays = {}
    #     for label in all_labels:
    #         subset = [ex for ex in examples if label in ex['target'].keys()]
    #         texts = [ex['text'] for ex in subset]
    #         arrays[label] = self.encoder.transform(texts)

    #     def concat_if_exists(main, new):
    #         """This function is only used here, so internal"""
    #         if main is None:
    #             return new
    #         return np.concatenate([main, new])

    #     X1 = None
    #     X2 = None
    #     lab = None
    #     for label in all_labels:
    #         subset = [ex for ex in examples if label in ex['target'].keys()]
    #         labels = [ex['target'][label] for ex in subset]
    #         pairs = generate_pairs_batch(labels)
    #         X = arrays[label]
    #         X1 = concat_if_exists(X1, np.array([X[p.e1] for p in pairs]))
    #         X2 = concat_if_exists(X2, np.array([X[p.e2] for p in pairs]))
    #         lab = concat_if_exists(lab, np.array([p.val for p in pairs], dtype=float))
    #     if self.verbose:
    #         console.log(f"Generated {len(lab)} pairs for contrastive finetuning.")
    #     return X1, X2, lab

    # def _learn_finetuner(self, examples):
    #     X1, X2, lab = self._generate_finetune_dataset(examples)
    #     self.finetuner.construct_models(X1, X2)
    #     self.finetuner.learn(X1, X2, lab)

    def _prepare_stream(self, stream):
        lines = LazyLines(stream).map(lambda d: Example(**d))
        lines_orig, lines_new = lines.tee()
        labels = {lab for ex in lines_orig for lab in ex.target.keys()}

        mapper = {}
        for ex in lines_new:
            if ex.text not in mapper:
                mapper[ex.text] = {}
            for lab in ex.target.keys():
                mapper[ex.text][lab] = ex.target[lab]
        self.log(f"Found {len(mapper)} examples for {len(labels)} labels.")
        return labels, mapper

    def learn(self, examples: List[Dict]) -> "SentenceModel":
        """
        Learn from a generator of examples. Can update a previously loaded model.

        Each example should be a dictionary with a "text" key and a "target" key.
        Internally this method checks via this Pydantic model:

        ```python
        class Example(BaseModel):
            text: str
            target: Dict[str, bool]
        ```

        As long as your generator emits dictionaries in this format, all will go well.

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from sklearn.feature_extraction.text import HashingVectorizer

        smod = SentenceModel(encoder=HashingVectorizer()).learn(some_generator)
        ```
        """
        labels, mapper = self._prepare_stream(examples)
        # if self.finetuner is not None:
        #     self._learn_finetuner([{"text": k, "target": v} for k, v in mapper.items()])
        self.classifiers = {lab: clone(self.clf_head) for lab in labels}
        for lab, clf in self.classifiers.items():
            texts = [text for text, targets in mapper.items() if lab in targets]
            labels = [mapper[text][lab] for text in texts]
            X = self.encode(texts)
            clf.fit(X, labels)
            self.log(f"Trained classifier head for {lab=}")
        return self

    def learn_from_disk(self, path: Path) -> "SentenceModel":
        """
        Load a JSONL file from disk and learn from it.

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from sklearn.feature_extraction.text import HashingVectorizer

        smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
        ```
        """
        return self.learn(list(read_jsonl(Path(path))))

    def _to_sentences(self, text: str):
        for sent in self.spacy_model(text).sents:
            yield sent.text

    def encode(self, texts: List[str]):
        """
        Encode a list of texts into a matrix of shape (n_texts, n_features)

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from sklearn.feature_extraction.text import HashingVectorizer

        smod = SentenceModel(encoder=HashingVectorizer())
        smod.encode(["example text"])
        ```
        """
        if self.finetuner:
            console.log(self.finetuner)
        X = self.encoder.transform(texts)
        if self.finetuner is not None:
            return self.finetuner.encode(X) 
        return X

    def __call__(self, text:str):
        """
        Make a prediction for a single text.

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from sklearn.feature_extraction.text import HashingVectorizer

        smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
        smod("Predict this. Per sentence!")
        ```
        """
        result = {"text": text}
        sents = list(self._to_sentences(text))
        result["sentences"] = [{"sentence": sent, "cats": {}} for sent in sents]
        X = self.encode(sents)
        for lab, clf in self.classifiers.items(): 
            probas = clf.predict_proba(X)[:, 1]
            for i, proba in enumerate(probas):
                result["sentences"][i]['cats'][lab] = float(proba)
        return result

    def pipe(self, texts):
        # Currently undocumented because I want to make it faster
        for ex in texts:
            yield self(ex)

    def to_disk(self, folder: Union[str, Path]) -> None:
        """
        Writes a `SentenceModel` to disk.

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from sklearn.feature_extraction.text import HashingVectorizer

        smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
        smod.to_disk("path/to/model")
        ```
        """
        self.log(f"Storing {self}.")
        folder = Path(folder)
        folder.mkdir(exist_ok=True, parents=True)
        for name, clf in self.classifiers.items():
            self.log(f"Writing to disk {folder}/{name}.skops")
            dump(clf, folder / f"{name}.skops")
        if self.finetuner is not None:
            self.finetuner.to_disk(folder)
        settings = {
            "encoder_str": str(self.encoder)
        }
        srsly.write_json(folder / "settings.json", settings)
        self.log(f"Model stored in {folder}.")

    def __repr__(self):
        return f"SentenceModel(encoder={self.encoder}, heads={list(self.classifiers.keys())})"

    @classmethod
    def from_disk(cls, folder:Union[str, Path], encoder, spacy_model:str="en_core_web_sm", verbose:bool=False) -> "SentenceModel":
        """
        Loads a `SentenceModel` from disk.

        **Usage:**

        ```python
        from sentence_models import SentenceModel
        from embetter.text import SentenceEncoder

        # It's good to be explicit with the encoder. Internally this method will check if 
        # the encoder matches what was available during training. The spaCy model is less 
        # critical because it merely splits the sentences during inference.
        smod = SentenceModel.from_disk("path/to/model", encoder=SentenceEncoder(), spacy_model="en_core_web_sm")
        smod("Predict this. Per sentence!")
        ```
        """
        folder = Path(folder)
        models = {p.parts[-1].replace(".skops", ""): load(p, trusted=True) for p in folder.glob("*.skops")}
        if len(models) == 0:
            raise ValueError(f"Did not find any `.skops` files in {folder}. Are you sure folder is correct?")
        settings = srsly.read_json(folder / "settings.json")
        assert str(encoder) == settings["encoder_str"], f"The encoder at time of saving ({settings['encoder_str']}) differs from this one ({encoder})."
        smod = SentenceModel(
            encoder=encoder,
            clf_head=list(models.values())[0], 
            spacy_model=spacy_model, 
            verbose=verbose, 
            finetuner=None
        )
        smod.classifiers = models
        return smod

__call__(text)

Make a prediction for a single text.

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
smod("Predict this. Per sentence!")
Source code in sentence_models/__init__.py
def __call__(self, text:str):
    """
    Make a prediction for a single text.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
    smod("Predict this. Per sentence!")
    ```
    """
    result = {"text": text}
    sents = list(self._to_sentences(text))
    result["sentences"] = [{"sentence": sent, "cats": {}} for sent in sents]
    X = self.encode(sents)
    for lab, clf in self.classifiers.items(): 
        probas = clf.predict_proba(X)[:, 1]
        for i, proba in enumerate(probas):
            result["sentences"][i]['cats'][lab] = float(proba)
    return result

encode(texts)

Encode a list of texts into a matrix of shape (n_texts, n_features)

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer())
smod.encode(["example text"])
Source code in sentence_models/__init__.py
def encode(self, texts: List[str]):
    """
    Encode a list of texts into a matrix of shape (n_texts, n_features)

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer())
    smod.encode(["example text"])
    ```
    """
    if self.finetuner:
        console.log(self.finetuner)
    X = self.encoder.transform(texts)
    if self.finetuner is not None:
        return self.finetuner.encode(X) 
    return X

from_disk(folder, encoder, spacy_model='en_core_web_sm', verbose=False) classmethod

Loads a SentenceModel from disk.

Usage:

from sentence_models import SentenceModel
from embetter.text import SentenceEncoder

# It's good to be explicit with the encoder. Internally this method will check if 
# the encoder matches what was available during training. The spaCy model is less 
# critical because it merely splits the sentences during inference.
smod = SentenceModel.from_disk("path/to/model", encoder=SentenceEncoder(), spacy_model="en_core_web_sm")
smod("Predict this. Per sentence!")
Source code in sentence_models/__init__.py
@classmethod
def from_disk(cls, folder:Union[str, Path], encoder, spacy_model:str="en_core_web_sm", verbose:bool=False) -> "SentenceModel":
    """
    Loads a `SentenceModel` from disk.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from embetter.text import SentenceEncoder

    # It's good to be explicit with the encoder. Internally this method will check if 
    # the encoder matches what was available during training. The spaCy model is less 
    # critical because it merely splits the sentences during inference.
    smod = SentenceModel.from_disk("path/to/model", encoder=SentenceEncoder(), spacy_model="en_core_web_sm")
    smod("Predict this. Per sentence!")
    ```
    """
    folder = Path(folder)
    models = {p.parts[-1].replace(".skops", ""): load(p, trusted=True) for p in folder.glob("*.skops")}
    if len(models) == 0:
        raise ValueError(f"Did not find any `.skops` files in {folder}. Are you sure folder is correct?")
    settings = srsly.read_json(folder / "settings.json")
    assert str(encoder) == settings["encoder_str"], f"The encoder at time of saving ({settings['encoder_str']}) differs from this one ({encoder})."
    smod = SentenceModel(
        encoder=encoder,
        clf_head=list(models.values())[0], 
        spacy_model=spacy_model, 
        verbose=verbose, 
        finetuner=None
    )
    smod.classifiers = models
    return smod

learn(examples)

Learn from a generator of examples. Can update a previously loaded model.

Each example should be a dictionary with a "text" key and a "target" key. Internally this method checks via this Pydantic model:

class Example(BaseModel):
    text: str
    target: Dict[str, bool]

As long as your generator emits dictionaries in this format, all will go well.

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer()).learn(some_generator)
Source code in sentence_models/__init__.py
def learn(self, examples: List[Dict]) -> "SentenceModel":
    """
    Learn from a generator of examples. Can update a previously loaded model.

    Each example should be a dictionary with a "text" key and a "target" key.
    Internally this method checks via this Pydantic model:

    ```python
    class Example(BaseModel):
        text: str
        target: Dict[str, bool]
    ```

    As long as your generator emits dictionaries in this format, all will go well.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer()).learn(some_generator)
    ```
    """
    labels, mapper = self._prepare_stream(examples)
    # if self.finetuner is not None:
    #     self._learn_finetuner([{"text": k, "target": v} for k, v in mapper.items()])
    self.classifiers = {lab: clone(self.clf_head) for lab in labels}
    for lab, clf in self.classifiers.items():
        texts = [text for text, targets in mapper.items() if lab in targets]
        labels = [mapper[text][lab] for text in texts]
        X = self.encode(texts)
        clf.fit(X, labels)
        self.log(f"Trained classifier head for {lab=}")
    return self

learn_from_disk(path)

Load a JSONL file from disk and learn from it.

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
Source code in sentence_models/__init__.py
def learn_from_disk(self, path: Path) -> "SentenceModel":
    """
    Load a JSONL file from disk and learn from it.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
    ```
    """
    return self.learn(list(read_jsonl(Path(path))))

to_disk(folder)

Writes a SentenceModel to disk.

Usage:

from sentence_models import SentenceModel
from sklearn.feature_extraction.text import HashingVectorizer

smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
smod.to_disk("path/to/model")
Source code in sentence_models/__init__.py
def to_disk(self, folder: Union[str, Path]) -> None:
    """
    Writes a `SentenceModel` to disk.

    **Usage:**

    ```python
    from sentence_models import SentenceModel
    from sklearn.feature_extraction.text import HashingVectorizer

    smod = SentenceModel(encoder=HashingVectorizer()).learn_from_disk("path/to/file.jsonl")
    smod.to_disk("path/to/model")
    ```
    """
    self.log(f"Storing {self}.")
    folder = Path(folder)
    folder.mkdir(exist_ok=True, parents=True)
    for name, clf in self.classifiers.items():
        self.log(f"Writing to disk {folder}/{name}.skops")
        dump(clf, folder / f"{name}.skops")
    if self.finetuner is not None:
        self.finetuner.to_disk(folder)
    settings = {
        "encoder_str": str(self.encoder)
    }
    srsly.write_json(folder / "settings.json", settings)
    self.log(f"Model stored in {folder}.")