ALToolbox.llms
chat_completion
A light wrapper on the OpenAI chat endpoint.
Includes support for token limits, error handling, and a moderation queue.
It is also possible to specify an alternative model, and JSON mode is supported. As of May 2024, JSON mode is available with both GPT-4-turbo and GPT-3.5-turbo (and no longer requires the 1106-preview versions).
Arguments:
system_message
str - The role the chat engine should playuser_message
str - The message (data) from the useropenai_client
Optional[OpenAI] - An OpenAI client object, optional. If omitted, will fall back to creating a new OpenAI client with the API key provided as an environment variableopenai_api
Optional[str] - the API key for an OpenAI client, optional. If provided, a new OpenAI client will be created.temperature
float - The temperature to use for the GPT APIjson_mode
bool - Whether to use JSON mode for the GPT API. Requires the wordjson
in the system message, but will add if you omit it.model
str - The model to use for the GPT APImessages
Optional[List[Dict[str, str]]] - A list of messages to send to the chat engine. If provided, system_message and user_message will be ignored.skip_moderation
bool - Whether to skip the OpenAI moderation step, which may save seconds but risks banning your account. Only enable when you have full control over the inputs.
Returns:
A string with the response from the API endpoint or JSON data if json_mode is True
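For example, a call might look like the sketch below. The import path follows the usual docassemble package layout, and the messages and settings are illustrative assumptions rather than part of this reference.

```python
from docassemble.ALToolbox.llms import chat_completion

# Plain-text completion. No client or key is passed here, so this relies on
# an API key already configured in the environment or global configuration.
answer = chat_completion(
    system_message="You are a court staffer who explains legal terms plainly.",
    user_message="What does 'pro se' mean?",
    temperature=0,
)

# JSON mode: the word "json" should appear in the system message (it is
# added for you if missing), and the return value is parsed JSON data.
data = chat_completion(
    system_message="Return a json object with a 'summary' key.",
    user_message="The tenant reported a leak on March 3; the landlord never replied.",
    json_mode=True,
)
```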
extract_fields_from_text
Extracts fields from text.
Arguments:
- `text` (str) - The text to extract fields from
- `field_list` (Dict[str,str]) - A list of fields to extract, with the key being the field name and the value being a description of the field
Returns:
A dictionary of fields extracted from the text
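A sketch of a typical call, with hypothetical text and field names:

```python
from docassemble.ALToolbox.llms import extract_fields_from_text

fields = extract_fields_from_text(
    text="My name is Jane Doe and my landlord is at 123 Main St.",
    field_list={
        "user_name": "The full name of the person writing",
        "landlord_address": "The street address of the landlord",
    },
)
# A plausible result:
# {"user_name": "Jane Doe", "landlord_address": "123 Main St."}
```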
match_goals_from_text
Reads a user's message and determines whether it meets a set of goals, with the help of an LLM.
Arguments:
- `text` (str) - The text to extract goals from
- `field_list` (Dict[str,str]) - A list of goals to extract, with the key being the goal name and the value being a description of the goal
Returns:
A dictionary of goals extracted from the text
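A sketch with hypothetical goals; the exact values in the returned dictionary depend on the LLM's output, so the closing comment is only indicative:

```python
from docassemble.ALToolbox.llms import match_goals_from_text

goals_met = match_goals_from_text(
    text="The accident happened on June 1st at Oak and Elm.",
    field_list={
        "date_of_incident": "The user said when the incident happened",
        "injuries": "The user described any injuries they suffered",
    },
)
# Indicatively, the date goal would be matched and the injuries goal would not.
```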
classify_text
Given a text, classify it into one of the provided choices with the assistance of a large language model.
Arguments:
- `text` (str) - The text to classify
- `choices` (Dict[str,str]) - A list of choices to classify the text into, with the key being the choice name and the value being a description of the choice
- `openai_client` (Optional[OpenAI]) - An OpenAI client object, optional. If omitted, a new OpenAI client is created with the API key provided as an environment variable.
- `openai_api` (Optional[str]) - The API key for an OpenAI client, optional. If provided, a new OpenAI client will be created.
- `temperature` (float) - The temperature to use for GPT. Defaults to 0.
- `model` (str) - The model to use for the GPT API
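A sketch with hypothetical choices; the result is expected to be one of the choice keys:

```python
from docassemble.ALToolbox.llms import classify_text

category = classify_text(
    text="My landlord is trying to evict me next week.",
    choices={
        "housing": "Problems with landlords, tenants, or evictions",
        "family": "Divorce, custody, and other family matters",
        "other": "Anything else",
    },
    temperature=0,
)
# Expected to come back as a choice key, e.g. "housing".
```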
synthesize_user_responses
Given a first draft and a series of follow-up questions and answers, use an LLM to synthesize the user's responses into a single, coherent reply.
Arguments:
- `custom_instructions` (str) - Custom instructions for the LLM to follow in constructing the synthesized response
- `initial_draft` (str) - The initial draft of the response from the user
- `messages` (List[Dict[str, str]]) - A list of questions from the LLM and responses from the user
- `openai_client` (Optional[OpenAI]) - An OpenAI client object, optional. If omitted, a new OpenAI client is created with the API key provided as an environment variable.
- `openai_api` (Optional[str]) - The API key for an OpenAI client, optional. If provided, a new OpenAI client will be created.
- `temperature` (float) - The temperature to use for GPT. Defaults to 0.
- `model` (str) - The model to use for the GPT API
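A sketch of a call; the {"role": ..., "content": ...} message shape mirrors the OpenAI chat format and is an assumption here, as is the sample dialogue:

```python
from docassemble.ALToolbox.llms import synthesize_user_responses

final_statement = synthesize_user_responses(
    custom_instructions="Write in the first person, in plain language.",
    initial_draft="My landlord won't fix the heat.",
    messages=[
        {"role": "assistant", "content": "When did you first report the problem?"},
        {"role": "user", "content": "Back in November, by email."},
    ],
)
```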
define_fields_from_dict
Assigns the values in a dictionary of fields to the corresponding fields in a Docassemble interview.
Docassemble and built-in keywords are never defined by this function. If fields_to_ignore is provided, those fields will also be ignored.
Arguments:
- `field_dict` (Dict[str, Any]) - A dictionary of fields to define, with the key being the field name and the value presumably taken from the output of extract_fields_from_text
- `fields_to_ignore` (Optional[List]) - A list of fields to ignore. Defaults to None. Should be used to ensure safety when defining fields from untrusted sources. E.g., ["user_is_logged_in"]
Returns:
None
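In an interview code block this is typically chained with extract_fields_from_text; the variable names below are hypothetical:

```python
from docassemble.ALToolbox.llms import (
    define_fields_from_dict,
    extract_fields_from_text,
)

extracted = extract_fields_from_text(
    text=user_story,  # hypothetical interview variable
    field_list={"user_name": "The user's full name"},
)
define_fields_from_dict(
    field_dict=extracted,
    # Defense in depth: never let LLM output define security-relevant fields.
    fields_to_ignore=["user_is_logged_in"],
)
```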
Goal Objects
class Goal(DAObject)
A class to represent a goal.
Attributes:
- `name` (str) - The name of the goal
- `description` (str) - A description of the goal
- `satisfied` (bool) - Whether the goal is satisfied
response_satisfies_me_or_follow_up
Returns the text of the next question to ask the user, or the string "satisfied" if the user's response satisfies the goal.
Arguments:
- `response` (str) - The response to check
Returns:
The next question to ask the user, or the string "satisfied" if the response satisfies the goal
get_next_question
Returns the text of the next question to ask the user.
GoalDict Objects
class GoalDict(DADict)
A class to represent a DADict of Goals.
satisfied
Returns True if all goals are satisfied, False otherwise.
GoalQuestion Objects
class GoalQuestion(DAObject)
A class to represent a question about a goal.
Attributes:
- `goal` (Goal) - The goal the question is about
- `question` (str) - The question to ask the user
- `response` (str) - The user's response to the question
GoalSatisfactionList Objects
class GoalSatisfactionList(DAList)
A class to help ask the user questions until all goals are satisfied.
Uses an LLM to prompt the user with follow-up questions if the initial response isn't complete. By default, the number of follow-up questions is limited to 10.
This can consume a lot of tokens, as each follow-up has a chance to send the whole conversation thread to the LLM.
By default, this will use the OpenAI API key defined in the global configuration under this path:

    open ai:
      key: sk-...

You can specify the path to an alternative configuration by setting the `openai_configuration_path` attribute.
This object does NOT accept the key as a direct parameter, as that would be leaked in the user's answers.
Attributes:
- `goals` (List[Goal]) - The goals in the list, provided as a dictionary
- `goal_list` (GoalList) - The list of Goals
- `question_limit` (int) - The maximum number of follow-up questions to ask the user
- `question_per_goal_limit` (int) - The maximum number of follow-up questions to ask the user per goal
- `initial_draft` (str) - The initial draft of the user's response
- `initial_question` (str) - The original question posed in the interview
mark_satisfied_goals
Marks goals as satisfied if the user's response satisfies the goal. This should be used as soon as the user gives their initial reply.
Returns:
None
keep_going
Returns True if there is at least one unsatisfied goal and if the number of follow-up questions asked is less than the question limit, False otherwise.
need_more_questions
Returns True if there is at least one unsatisfied goal, False otherwise.
Also has the side effect of checking the user's most recent response to see if it satisfies the goal and updating the next question to be asked.
satisfied
Returns True if all goals are satisfied, False otherwise.
get_next_goal_and_question
Returns the next unsatisfied goal, along with a follow-up question to ask the user, if relevant.
Returns:
A tuple of (Goal, str) where the first item is the next unsatisfied goal and the second item is the next question to ask the user, if relevant. If the user's response to the last question satisfied the goal, returns (None, None).
synthesize_draft_response
Returns a draft response that synthesizes the user's responses to the questions.
provide_feedback
Returns feedback to the user based on the goals they satisfied.
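Putting the pieces together, the flow an interview drives looks roughly like the sketch below. In a real interview the object would be declared in an objects block and docassemble's dependency resolution would trigger each call; the goal definitions and variable names are illustrative, and the dictionary shape assigned to goals is an assumption based on the attribute list above.

```python
from docassemble.ALToolbox.llms import GoalSatisfactionList

interview_goals = GoalSatisfactionList("interview_goals")
interview_goals.goals = {
    "incident_date": "When the incident happened",
    "incident_location": "Where the incident happened",
}
interview_goals.initial_question = "Tell us what happened."
interview_goals.initial_draft = users_first_answer  # hypothetical variable
interview_goals.mark_satisfied_goals()

# Ask follow-ups until every goal is satisfied or the question limit is hit.
while interview_goals.keep_going():
    goal, question = interview_goals.get_next_goal_and_question()
    # ... present `question` to the user and record the reply on the list ...

draft = interview_goals.synthesize_draft_response()
```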
IntakeQuestion Objects
class IntakeQuestion(DAObject)
A class to represent a question in an LLM-assisted intake questionnaire.
Attributes:
- `question` (str) - The question to ask the user
- `response` (str) - The user's response to the question
IntakeQuestionList Objects
class IntakeQuestionList(DAList)
Class to help create an LLM-assisted intake questionnaire.
The LLM is provided a free-form set of in/out criteria (like those given to a phone intake worker) and an initial draft question from the user, and then guides the user through a series of follow-up questions to gather only enough information to determine whether the user meets the criteria.
In/out criteria are often pretty short, so we do not make or support embeddings at the moment.
Attributes:
- `criteria` (Dict[str, str]) - A dictionary of criteria to match, indexed by problem type
- `problem_type_descriptions` (Dict[str, str]) - A dictionary of descriptions of the problem types
- `problem_type` (str) - The type of problem to match. E.g., a unit/department inside the law firm
- `initial_problem_description` (str) - The initial description of the problem from the user
- `initial_question` (str) - The original question posed in the interview
- `question_limit` (int) - The maximum number of follow-up questions to ask the user. Defaults to 10.
- `model` (str) - The model to use for the GPT API. Defaults to gpt-4-turbo; gpt-3.5 is not smart enough.
- `llm_role` (str) - The role the LLM should play. Allows you to customize the script the LLM uses to guide the user. We have provided a default script that should work for most intake questionnaires.
- `llm_user_qualifies_prompt` (str) - The prompt to use to determine if the user qualifies. We have provided a default prompt.
- `out_of_questions` (bool) - Whether the user has run out of questions to answer
- `qualifies` (bool) - Whether the user qualifies based on the criteria
need_more_questions
Returns True if the user needs to answer more questions, False otherwise.
Also has the side effect of checking the user's most recent response to see if it satisfies the criteria and updating both the next question to be asked and the current qualification status.
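A sketch of the qualification flow; the criteria text and variable names are illustrative, and in a real interview the follow-up loop is driven by question blocks rather than inline code:

```python
from docassemble.ALToolbox.llms import IntakeQuestionList

intake = IntakeQuestionList("intake")
intake.criteria = {
    "housing": "Accept tenants facing eviction; decline landlords and mortgage issues.",
}
intake.problem_type = "housing"
intake.initial_problem_description = "I got an eviction notice yesterday."

# Each call checks the latest response against the criteria, updates
# intake.qualifies, and prepares the next follow-up question if one is needed.
if not intake.need_more_questions():
    if intake.qualifies:
        ...  # route the user into the full housing interview
    elif intake.out_of_questions:
        ...  # question limit reached without a clear determination
```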