Log to phospho with Python
Collect interactions and tasks
Log tasks to phospho
phospho is a text analytics tool. To send text, you need to log tasks.
What’s a task in phospho?
Tasks are the basic bricks that make up your LLM apps. If you’re a programmer, you can think of tasks like functions.
A task is made of at least two things:
input (str)
: What goes into a task. E.g.: what the user asks to the assistant.
output (Optional[str])
: What goes out of the task. E.g.: what the assistant replied to the user.
The Task abstraction helps you structure your app and quickly explain what it does to an outsider: “Here’s what goes in, here’s what goes out.”
It’s the basic unit of text analytics. You can analyze the input and output of a task to understand the user’s intent, the system’s performance, or the quality of the response.
Examples of tasks
- Call to an LLM (input = query, output = llm response)
- Answering a question (input = question, output = answer)
- Searching in documents (input = search query, output = document)
- Summarizing a text (input = text, output = summary)
- Performing inference of a model (input = X, output = y)
How to log a task?
Install phospho module
The phospho Python module is the easiest way to log to phospho. It is compatible with Python 3.9+.
The phospho module is open source. Feel free to contribute!
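You can typically install it from PyPI under the same name as the module:

```bash
pip install phospho
```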
Initialize phospho
In your app, initialize the phospho module. By default, phospho looks for the PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID environment variables.
Learn how to get your API key and project id here.
You can also pass the api_key and project_id parameters to phospho.init.
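A minimal initialization sketch (replace the placeholder values with your own):

```python
import phospho

# Reads PHOSPHO_API_KEY and PHOSPHO_PROJECT_ID from the environment
phospho.init()

# Or pass the credentials explicitly
# phospho.init(api_key="your-api-key", project_id="your-project-id")
```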
Log with phospho.log
To log messages to phospho, use phospho.log. This function logs a task to phospho. A task is a pair of input and output strings. The output is optional.
phospho is a text analytics tool. You can log any string input and output this way:
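For example (the strings below are made up):

```python
phospho.log(
    input="What's the capital of France?",       # what the user asked
    output="The capital of France is Paris.",    # what the assistant replied
)
```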
The output is optional.
The input and output logged to phospho are displayed in the dashboard and used to perform text analytics.
Common use cases
Log OpenAI queries and responses
phospho aims to be batteries included. So if you pass something other than a str to phospho.log, phospho extracts what's usually considered "the input" or "the output".
For example, you can pass to phospho.log the same input as the arguments for openai.chat.completions.create, and the same output as OpenAI's ChatCompletion objects.
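A sketch of what this looks like, assuming the openai Python client is installed and configured with an API key:

```python
import openai
import phospho

phospho.init()
openai_client = openai.OpenAI()

# The same dict of arguments you pass to chat.completions.create...
query = {
    "messages": [{"role": "user", "content": "Say hi!"}],
    "model": "gpt-3.5-turbo",
}
response = openai_client.chat.completions.create(**query)

# ...and the ChatCompletion object it returns can be logged directly
phospho.log(input=query, output=response)
```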
Note that the input is a dict.
Log a list of OpenAI messages
In conversational apps, your conversation history is often a list of messages with a role and a content. This is because it's the format expected by OpenAI's chat API.
You can directly log this messages list as an input or an output to phospho.log. The input, output, and system prompt are automatically extracted based on the messages' role.
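For instance, with a made-up conversation:

```python
import phospho

phospho.init()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# phospho extracts the input, output, and system prompt from the roles
phospho.log(input=messages)
```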
Note that consecutive messages with the same role are concatenated with a newline.
If you need more control, consider using custom extractors.
Custom extractors
Pass custom extractors to phospho.log to extract the input and output from any object. The custom extractor is a function that is applied to the input or output before logging. The function should return a string.
The original object is converted to a dict (if jsonable) or a string, and stored in raw_input and raw_output.
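A sketch of what this can look like; the parameter names input_to_str_function and output_to_str_function are assumptions, so check the module reference for the exact ones:

```python
import phospho

phospho.init()

phospho.log(
    input={"question": "What's the capital of France?", "locale": "en"},
    output={"answer": "Paris", "confidence": 0.97},
    # Custom extractors: each receives the raw object and returns a string
    input_to_str_function=lambda raw: raw["question"],
    output_to_str_function=lambda raw: raw["answer"],
)
```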
Log metadata
You can log additional data with each interaction (user id, version id, …) by passing arguments to phospho.log.
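Every extra keyword argument is logged as metadata. The field names below are just illustrative:

```python
phospho.log(
    input="What's the capital of France?",
    output="Paris",
    # Extra keyword arguments are stored as metadata alongside the task
    user_id="user_1234",
    version_id="v0.1.2",
)
```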
Log streaming outputs
phospho supports streamed outputs. This is useful when you want to log the output of a streaming API.
Example: OpenAI streaming
Out of the box, phospho supports streaming OpenAI completions. Pass stream=True to phospho.log to handle streaming responses.
When iterating over the response, phospho will automatically concatenate each chunk until the streaming is finished.
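A sketch, again assuming a configured openai client:

```python
import openai
import phospho

phospho.init()
openai_client = openai.OpenAI()

query = {
    "messages": [{"role": "user", "content": "Say hi!"}],
    "model": "gpt-3.5-turbo",
    "stream": True,
}
response = openai_client.chat.completions.create(**query)

# stream=True tells phospho to wait until the stream is finished before logging
phospho.log(input=query, output=response, stream=True)

# phospho concatenates each chunk as you iterate over the response
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```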
Example: Local Ollama streaming
Let’s assume you’re in a setup where you stream text from an API. The stream is a generator that yields chunks of the response. A plain generator can’t be modified in place, so phospho provides mutable wrappers for it.
To use this as an output in phospho.log, you need to:
- Wrap the generator with phospho.MutableGenerator or phospho.MutableAsyncGenerator (for async generators)
- Specify a stop function that returns True when the streaming is finished. This is used to trigger the logging of the task.
Here is an example with an Ollama endpoint that streams responses.
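The sketch below assumes Ollama's /api/generate endpoint running locally; the keyword arguments passed to phospho.MutableGenerator follow the description above but are an assumption, so check the module reference for the exact signature:

```python
import json

import requests
import phospho

phospho.init()

prompt = "Say hi!"

# Ollama streams newline-delimited JSON objects; the last one has "done": true
streaming_response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt},
    stream=True,
).iter_lines()

# Wrap the generator so phospho can log the task once the stream is finished
response = phospho.MutableGenerator(
    generator=streaming_response,
    stop=lambda line: bool(line) and json.loads(line).get("done", False),
)

phospho.log(input=prompt, output=response, stream=True)

for line in response:
    if line:
        print(json.loads(line).get("response", ""), end="")
```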
Wrap functions with phospho.wrap
If you wrap a function with phospho.wrap, phospho automatically logs a task when it is called:
- The passed arguments are logged as the input
- The returned value is logged as the output
You can still use custom extractors and log metadata.
Use the @phospho.wrap decorator
If you want to log every call to a Python function, you can use the @phospho.wrap decorator. This is a nice, Pythonic way to structure your LLM app’s code.
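For example, with a placeholder function body:

```python
import phospho

phospho.init()

@phospho.wrap
def answer(question: str) -> str:
    # Your LLM call goes here; the return value is logged as the output
    return "The capital of France is Paris."

# This call is logged as a task: input = the arguments, output = the return value
answer("What's the capital of France?")
```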
How to log metadata with phospho.wrap?
Like phospho.log, every extra keyword argument is logged as metadata.
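A sketch, assuming phospho.wrap also accepts these extra keyword arguments when used as a decorator with parentheses:

```python
# The metadata field names below are illustrative
@phospho.wrap(version_id="v0.1.2", user_id="user_1234")
def answer(question: str) -> str:
    return "The capital of France is Paris."
```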
Wrap an imported function with phospho.wrap
If you can’t change the function definition, you can wrap it this way:
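The sketch below uses a hypothetical module my_llm_app with a function answer_question:

```python
from my_llm_app import answer_question  # hypothetical imported function

import phospho

phospho.init()

# Wrap at the call site: this single call is logged as a task
phospho.wrap(answer_question)("What's the capital of France?")
```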
If you want to wrap all calls to a function, override the function definition with the wrapped version:
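Continuing with the same hypothetical module:

```python
import my_llm_app  # hypothetical module
import phospho

phospho.init()

# Every subsequent call to answer_question is now logged
my_llm_app.answer_question = phospho.wrap(my_llm_app.answer_question)
```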
Wrap a streaming function with phospho.wrap
phospho.wrap can handle streaming functions. To do that, you need two things:
- Pass stream=True. This tells phospho to concatenate the string outputs.
- Pass a stop function, such that stop(output) is True when the streaming is finished. This triggers the logging of the task.
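A sketch, assuming a configured openai client; the stop function relies on OpenAI streaming chunks having None content in the final delta:

```python
import openai
import phospho

phospho.init()
openai_client = openai.OpenAI()

@phospho.wrap(
    stream=True,
    # Chunks are concatenated; logging triggers when stop(output) is True
    stop=lambda token: token is None,
)
def answer(question: str):
    response = openai_client.chat.completions.create(
        messages=[{"role": "user", "content": question}],
        model="gpt-3.5-turbo",
        stream=True,
    )
    for chunk in response:
        yield chunk.choices[0].delta.content  # None on the final chunk

for token in answer("What's the capital of France?"):
    print(token or "", end="")
```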