Classification
Train, use and download classification models
Request access to the preview by contacting us at contact@phospho.app
phospho can handle all the data processing, data engineering and model training for you. For now, only binary classification models are supported (binary classification means each input is assigned one of two classes, e.g. True or False).
Why train your custom classification model?
Most LLM chains involve classification steps where the LLM is prompted with a classification task. Training your own classification model can help you to:
- improve the accuracy of the classification
- reduce the latency of the classification (as you have the model running in the application code)
- reduce the cost of the classification (as you don’t have to call an external LLM API)
- reduce risks of downtime (as you don’t depend on an external LLM API)
Available models
phospho-small is a small text classification model that can be trained with few examples (minimum 20).
It runs on CPU and once trained using phospho, you can download your trained model from Hugging Face.
Train a model on your data
To train a model, you need to provide at least 20 examples, each containing a text, a label and a label description. Each example should have the following fields:
- text (str): the text to classify (for example, a user message)
- label (bool): True or False according to the classification
- label_text (str): a few-word description of the label when true (for example, “user asking for pricing”)
For example, your examples could look like this:
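A hypothetical dataset following the schema above (the texts and label description are illustrative):

```python
# Hypothetical training examples: each item has a text, a boolean label,
# and a label_text describing what a True label means.
examples = [
    {"text": "How much does the pro plan cost?",
     "label": True, "label_text": "user asking for pricing"},
    {"text": "Do you offer a discount for startups?",
     "label": True, "label_text": "user asking for pricing"},
    {"text": "The app crashes when I upload a file.",
     "label": False, "label_text": "user asking for pricing"},
    {"text": "Thanks, that solved my issue!",
     "label": False, "label_text": "user asking for pricing"},
    # ... at least 20 examples in total
]

print(len(examples))
```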
Start the training using the following API call or python code snippet:
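A minimal sketch of the training request in Python. The base URL, endpoint path and payload shape here are assumptions; check the phospho API reference for the exact schema.

```python
# Sketch of a training request -- endpoint path and payload shape are assumed.
import os
import requests

API_URL = "https://api.phospho.ai/v2"  # assumed base URL
API_KEY = os.getenv("PHOSPHO_API_KEY", "your-api-key")

def build_training_payload(examples: list[dict]) -> dict:
    # Assumed payload shape: the model family plus the labeled examples.
    return {"model": "phospho-small", "examples": examples}

def start_training(examples: list[dict]) -> dict:
    """Launch a training job and return the model object (with its model_id)."""
    response = requests.post(
        f"{API_URL}/train",  # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=build_training_payload(examples),
    )
    response.raise_for_status()
    return response.json()  # e.g. {"model_id": "phospho-small-8963ba3", ...}

if __name__ == "__main__":
    examples = [
        {"text": "How much does the pro plan cost?",
         "label": True, "label_text": "user asking for pricing"},
        # ... at least 20 examples in total
    ]
    model = start_training(examples)
    print(model["model_id"])
```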
You will get a model object in the response. You will need the model_id to use the model. It should look like this: phospho-small-8963ba3.
The training will take a few minutes. You can check the status of the model using the following API call:
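A sketch of the status check; the endpoint path is an assumption, so verify it against the phospho API reference.

```python
# Poll the model object to check its training status -- endpoint path is assumed.
import os
import requests

API_URL = "https://api.phospho.ai/v2"  # assumed base URL

def model_status_url(model_id: str) -> str:
    # Assumed endpoint path for fetching a single model object.
    return f"{API_URL}/models/{model_id}"

def get_model_status(model_id: str) -> str:
    """Return the model status: "training" while running, then "trained"."""
    response = requests.get(
        model_status_url(model_id),
        headers={"Authorization": f"Bearer {os.getenv('PHOSPHO_API_KEY', '')}"},
    )
    response.raise_for_status()
    return response.json()["status"]

if __name__ == "__main__":
    print(get_model_status("phospho-small-8963ba3"))
```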
Your model is ready when its status changes from training to trained.
Use the model
You can use the model in two ways:
- download it directly from Hugging Face (phospho-small runs on CPU)
- through the phospho API
Download and use locally your model (recommended for production)
You can download the model from the phospho Hugging Face repo. The model id is the same as the one you got when training the model. For example, if the model id is phospho-small-8963ba3, you can download it from Hugging Face under the id phospho-app/phospho-small-8963ba3.
Then you can use the model like any other Hugging Face model:
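A minimal sketch using the Hugging Face transformers library. The model id is the example one from above; replace it with your own. Loading the model downloads it from the Hub.

```python
# Load the trained classifier from the Hugging Face Hub and run a prediction.
# Replace the model id with the one returned by your training job.
model_id = "phospho-app/phospho-small-8963ba3"

if __name__ == "__main__":
    from transformers import pipeline  # pip install transformers torch

    classifier = pipeline("text-classification", model=model_id)  # runs on CPU
    print(classifier(["How much does the pro plan cost?"]))
```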
Make sure to have enough RAM to load the model and the tokenizer in memory. The model is 420MB.
Use the model through the API
AI Models predict endpoints are in preview and not yet ready for production traffic.
To use the model through the API, send a POST request to the /predict endpoint with the model id and the batch of texts to classify.
If it’s the first request you send, you might experience a delay as the model is loaded in memory.
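A sketch of the predict call; the base URL and payload field names are assumptions, so check the phospho API reference.

```python
# Classify a batch of texts through the API -- payload shape is assumed.
import os
import requests

API_URL = "https://api.phospho.ai/v2"  # assumed base URL

def build_predict_payload(model_id: str, batch: list[str]) -> dict:
    # Assumed payload shape: the model id and the batch of texts to classify.
    return {"model_id": model_id, "batch": batch}

def predict(model_id: str, batch: list[str]) -> dict:
    """Send a batch of texts to the /predict endpoint and return the response."""
    response = requests.post(
        f"{API_URL}/predict",
        headers={"Authorization": f"Bearer {os.getenv('PHOSPHO_API_KEY', '')}"},
        json=build_predict_payload(model_id, batch),
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(predict("phospho-small-8963ba3",
                  ["How much does the pro plan cost?", "The app keeps crashing."]))
```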
List your models
You can also list all the models you have access to and that can accept requests:
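A sketch of the listing call, assuming a GET endpoint that returns your model objects; the path and response shape are assumptions to check against the phospho API reference.

```python
# List the models available to your account -- endpoint path is assumed.
import os
import requests

def list_models() -> list[dict]:
    response = requests.get(
        "https://api.phospho.ai/v2/models",  # assumed endpoint
        headers={"Authorization": f"Bearer {os.getenv('PHOSPHO_API_KEY', '')}"},
    )
    response.raise_for_status()
    return response.json()["models"]  # assumed response shape

if __name__ == "__main__":
    for model in list_models():
        print(model["model_id"], model["status"])
```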