


Pull ID values out of the LIST/NSMI results from Spot.


Call the Spot API to classify the text of a PDF using the NSMIv2/LIST taxonomy, but return only the IDs of issues found in the text.


Capture PascalCase, snake_case, and kebab-case terms and add spaces to separate the joined words.
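A minimal sketch of this kind of splitting using the standard re module (the pattern and function name are illustrative, not the library's actual implementation):

```python
import re

def split_joined_words(text: str) -> str:
    """Illustrative only: insert spaces into PascalCase, snake_case, and kebab-case terms."""
    # Replace underscores and hyphens with spaces (snake_case, kebab-case)
    text = re.sub(r"[_-]+", " ", text)
    # Insert a space between a lowercase letter or digit and a following uppercase letter (PascalCase)
    text = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", text)
    return text

print(split_joined_words("userFirstName"))    # user First Name
print(split_joined_words("user_first_name"))  # user first name
print(split_joined_words("user-first-name"))  # user first name
```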


Apply some heuristics to a field name to see if we can get it to match AssemblyLine conventions.


Transform a string of text into a snake_case variable name, close in length to max_length, by summarizing the string and stitching the summary together in snake_case.
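A rough sketch of the idea, assuming a trivial "summary" that simply keeps the content-bearing words (the actual function summarizes the text before stitching):

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "or", "in", "for", "is"}

def text_to_snake_case_var(text: str, max_length: int = 40) -> str:
    """Illustrative only: build a snake_case name no longer than max_length."""
    words = [w.lower() for w in re.findall(r"[A-Za-z0-9]+", text) if w.lower() not in STOP_WORDS]
    name = ""
    for word in words:
        candidate = f"{name}_{word}" if name else word
        if len(candidate) > max_length:
            break
        name = candidate
    return name or "field"

print(text_to_snake_case_var("Name of the person filing the form"))  # name_person_filing_form
```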


Normalize a word vector.


Vectorize a string of text.


  • text - a string of multiple words to vectorize
  • tools_token - the token for the vectorization micro-service, which reduces the amount of memory you need on your machine. If not passed, you need to have en_core_web_lg installed (see the local-vectorization sketch after this list).
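A minimal sketch of local vectorization with spaCy, assuming en_core_web_lg is installed (the micro-service path offloads this step so the large model does not need to be in memory locally):

```python
import numpy as np
import spacy

# Requires: python -m spacy download en_core_web_lg
nlp = spacy.load("en_core_web_lg")

def vectorize_locally(text: str) -> np.ndarray:
    """Illustrative only: return a normalized document vector for the text."""
    vec = nlp(text).vector          # spaCy averages the token vectors for a Doc
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

print(vectorize_locally("mailing address of the plaintiff").shape)  # (300,)
```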


Normalize a field name to the Assembly Line conventions, if possible, and if not, to a snake_case variable name of appropriate length.

HACK: temporarily, all we do is re-case the name and normalize it using regex rules. This will be replaced with a call to an LLM soon.


Group the given fields into screens based on how closely they are related (a clustering sketch follows the parameter list).


  • fields - a list of field names

  • damping - a value >= 0.5 and < 1. Tunes how related screens should be

  • tools_token - the token for the vectorization micro-service, needed if vectorization is done via the micro-service

  • Returns - a suggested screen grouping, each screen name mapped to the list of fields on it
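Affinity propagation over field-name vectors is one way to implement this kind of grouping, with damping playing exactly the role described above. A minimal sketch under that assumption (not necessarily the exact algorithm used here), using local spaCy vectors:

```python
import numpy as np
import spacy
from sklearn.cluster import AffinityPropagation

nlp = spacy.load("en_core_web_lg")  # or vectorize via the tools micro-service instead

def cluster_fields(fields: list[str], damping: float = 0.7) -> dict[str, list[str]]:
    """Illustrative only: group related field names into suggested screens."""
    vectors = np.array([nlp(f.replace("_", " ")).vector for f in fields])
    labels = AffinityPropagation(damping=damping, random_state=4).fit_predict(vectors)
    screens: dict[str, list[str]] = {}
    for field, label in zip(fields, labels):
        screens.setdefault(f"screen_{label}", []).append(field)
    return screens

# Toy example; affinity propagation behaves better with more fields than this.
print(cluster_fields(["user_name", "user_address", "user_phone", "court_name", "court_address", "docket_number"]))
```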

InputType Objects

class InputType(Enum)

Input type maps onto the type of input the PDF author chose for the field. We only handle text, checkbox, and signature fields.


Transform the fields provided by get_existing_pdf_fields into a summary format. The result will look like:

[{"var_name": var_name, "type": "text | checkbox | signature", "max_length": n}]
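For instance, a two-field form might summarize to something like this (values are illustrative, not output from a real file):

```python
[
    {"var_name": "users1_name", "type": "text", "max_length": 115},
    {"var_name": "is_tenant", "type": "checkbox", "max_length": 1},
]
```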

AnswerType Objects

class AnswerType(Enum)

Answer type describes the effort the user answering the form will require. "Slot-in" answers are a matter of almost instantaneous recall, e.g., name, address, etc. "Gathered" answers require looking around one's desk, e.g., for a health insurance number. "Third party" answers require picking up the phone to call someone else who is the keeper of the information. "Created" answers don't exist before the user is presented with the question; they may include a choice, creating a narrative, or even applying legal reasoning. "Affidavits" are a special form of created answers. See Jarrett and Gaffney, Forms That Work (2008).


Apply heuristics to the field's original and "normalized" name to classify it as either a "slot-in", "gathered", "third party" or "created" field type.
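A hedged sketch of what such keyword heuristics can look like (the keyword lists and function name are illustrative, not the library's actual rules):

```python
# Illustrative keyword lists; the real heuristics are more involved.
SLOT_IN = ["name", "address", "phone", "email", "birth"]
GATHERED = ["case_number", "ssn", "docket", "account", "policy"]
THIRD_PARTY = ["opposing", "landlord", "employer_income", "other_party"]

def classify_field(field_name: str) -> str:
    """Illustrative only: guess the answer type for a normalized field name."""
    name = field_name.lower()
    if any(k in name for k in GATHERED):
        return "gathered"
    if any(k in name for k in THIRD_PARTY):
        return "third party"
    if any(k in name for k in SLOT_IN):
        return "slot in"
    return "created"  # default: the user has to create the answer

print(classify_field("users1_birthdate"))  # slot in
print(classify_field("docket_number"))     # gathered
print(classify_field("explain_hardship"))  # created
```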


Determines the bracketed length of an input field based on its max_length attribute, returning a float representing the approximate length of the field content.

The function chunks the answers into 5 different lengths (checkboxes, 2 words, short, medium, and long) instead of directly using the character count, as forms can allocate different spaces for the same data without considering the space the user actually needs. A sketch of this bucketing follows the examples below.


  • field FieldInfo - An object containing information about the input field, including the "max_length" attribute.


  • float - The approximate length of the field content, categorized into checkboxes, 2 words, short, medium, or long based on the max_length attribute.


>>> get_adjusted_character_count({"type"}: InputType.CHECKBOX) 4.7 >>> get_adjusted_character_count({"max_length": 100}) 9.4 >>> get_adjusted_character_count({"max_length": 300}) 230 >>> get_adjusted_character_count({"max_length": 600}) 115 >>> get_adjusted_character_count({"max_length": 1200}) 1150


Apply a heuristic for the time it takes to answer the given field, in minutes. It is hand-written for now. It factors in the input type, the answer type (slot-in, gathered, third party, or created), and the amount of input text allowed in the field. The return value is a function that can return N samples of how long it will take to answer the field (in minutes).
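A minimal sketch of the "return a sampler" idea, assuming a normal distribution whose mean grows with the adjusted character count (the real weights are hand-tuned and differ):

```python
from typing import Callable
import numpy as np

def time_to_answer_sketch(adjusted_chars: float, answer_type: str) -> Callable[[int], np.ndarray]:
    """Illustrative only: return a function that samples minutes-to-answer N times."""
    base = {"slot in": 0.25, "gathered": 3.0, "third party": 5.0, "created": 5.0}.get(answer_type, 1.0)
    mean = base + adjusted_chars / 115       # more room to write -> more time
    std_dev = mean / 2
    return lambda samples: np.clip(np.random.normal(mean, std_dev, size=samples), 0.1, None)

sampler = time_to_answer_sketch(adjusted_chars=230, answer_type="created")
print(sampler(3))  # three sampled durations in minutes
```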


Provide an estimate of how long it would take an average user to respond to the questions on the provided form. We use signals such as the field type, name, and space provided for the response to come up with a rough estimate, based on whether the field is one of the following (a sketch of aggregating the per-field estimates follows this list):

  1. slot-in: fill in the blank
  2. gathered: e.g., an ID number, case number, etc.
  3. third party: the filler needs to ask someone else for the information, e.g., the income of someone other than the user
  4. created: either short created (roughly 3 lines or fewer) or long created (anything over 3 lines)
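A hedged sketch of aggregating per-field samplers into a form-level estimate, reusing the hypothetical time_to_answer_sketch helper sketched above (not the library's actual function):

```python
import numpy as np

def time_to_answer_form_sketch(field_samplers, samples: int = 1000) -> tuple[float, float]:
    """Illustrative only: combine per-field samplers into a mean and standard deviation, in minutes."""
    totals = np.zeros(samples)
    for sampler in field_samplers:
        totals += sampler(samples)           # add each field's sampled minutes
    return float(totals.mean()), float(totals.std())

samplers = [time_to_answer_sketch(9.4, "slot in"), time_to_answer_sketch(230, "created")]
mean_minutes, std_minutes = time_to_answer_form_sketch(samplers)
print(f"about {mean_minutes:.0f} minutes (+/- {std_minutes:.0f})")
```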


Apply cleanup routines to text to provide more accurate readability statistics.


Run a prompt via OpenAI's API and return the result (a usage sketch follows the parameter list).


  • prompt str - The prompt to send to the API.
  • max_tokens int, optional - The number of tokens to generate. Defaults to 500.
  • creds Optional[OpenAiCreds], optional - The credentials to use. Defaults to None.
  • temperature float, optional - The temperature to use. Defaults to 0.
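A hedged sketch of such a call with the official openai Python package; the model name, message framing, and helper name here are illustrative assumptions:

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # or pass explicit credentials

def run_prompt(prompt: str, max_tokens: int = 500, temperature: float = 0.0) -> str:
    """Illustrative only: send a prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                 # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return response.choices[0].message.content or ""

print(run_prompt("Suggest a plain-language title for an eviction answer form."))
```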


Combine some text with a command to send to OpenAI.


A conservative guess at whether a given form needs the filler to make math calculations, something that should be avoided.


Return a list of tuples, where each tuple represents a sentence in which passive voice was detected along with a list of the starting and ending position of each fragment that is phrased in the passive voice. The combination of the two can be used in the PDFStats frontend to highlight the passive text in an individual sentence.

Text can either be a string or a list of strings. If provided a single string, it will be tokenized with NLTK, and sentences containing fewer than 2 words will be ignored.
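A minimal sketch of passive-voice detection using spaCy dependency labels; the library may use a different detector, so treat this as an assumption rather than the actual implementation:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def passive_fragments(sentence: str) -> list[tuple[int, int]]:
    """Illustrative only: return character spans of passive-voice tokens in one sentence."""
    doc = nlp(sentence)
    spans = []
    for token in doc:
        # spaCy's English models mark passive subjects and auxiliaries explicitly
        if token.dep_ in ("nsubjpass", "auxpass"):
            spans.append((token.idx, token.idx + len(token.text)))
    return spans

print(passive_fragments("The form was signed by the tenant."))
```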


Get citations and some extra surrounding context (the full sentence) if the citation is fewer than 5 characters (often eyecite only captures a section symbol for state-level short citation formats).
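A short sketch of pulling citations and their character spans with eyecite, which the check above builds on (the surrounding-sentence expansion is omitted here):

```python
from eyecite import get_citations

text = "See 42 U.S.C. § 1983 for the federal claim."
for citation in get_citations(text):
    start, end = citation.span()            # character offsets of the matched citation
    print(citation.matched_text(), (start, end))
```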


Given a list of fields, identify those related to sensitive information and return a dictionary with the sensitive fields grouped by type. A list of the fields' old names can also be provided; it should be in the same order as the first list. Passing the old field names allows the sensitive-field matching to be more accurate. The return value will not contain the old field names, only the corresponding field names from the first parameter.

The sensitive data types are: Bank Account Number, Credit Card Number, Driver's License Number, and Social Security Number.
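A hedged sketch of the kind of keyword matching this could involve; the patterns and function name are illustrative, not the library's actual detector:

```python
import re

# Illustrative patterns keyed by the sensitive data types listed above.
SENSITIVE_PATTERNS = {
    "Bank Account Number": re.compile(r"bank.?account", re.IGNORECASE),
    "Credit Card Number": re.compile(r"credit.?card", re.IGNORECASE),
    "Driver's License Number": re.compile(r"driver.?s?.?license", re.IGNORECASE),
    "Social Security Number": re.compile(r"social.?security|\bssn\b", re.IGNORECASE),
}

def find_sensitive_fields(fields: list[str]) -> dict[str, list[str]]:
    """Illustrative only: group field names by the sensitive data type they appear to hold."""
    grouped: dict[str, list[str]] = {}
    for field in fields:
        for data_type, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(field.replace("_", " ")):
                grouped.setdefault(data_type, []).append(field)
    return grouped

print(find_sensitive_fields(["users1_ssn", "bank_account_number", "users1_name"]))
# {'Social Security Number': ['users1_ssn'], 'Bank Account Number': ['bank_account_number']}
```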


Substitute phrases in the input string and return the new string and positions of substituted phrases.


  • input_string str - The input string containing phrases to be replaced.
  • substitution_phrases Dict[str, str] - A dictionary mapping original phrases to their replacement phrases.


Tuple[str, List[Tuple[int, int]]]: A tuple containing the new string with substituted phrases and a list of tuples, each containing the start and end positions of the substituted phrases in the new string.


>>> input_string = "The quick brown fox jumped over the lazy dog." >>> substitution_phrases = {"quick brown": "swift reddish", "lazy dog": "sleepy canine"} >>> new_string, positions = substitute_phrases(input_string, substitution_phrases) >>> print(new_string) "The swift reddish fox jumped over the sleepy canine." >>> print(positions) [(4, 17), (35, 48)]


Substitute gendered phrases with neutral phrases in the input string.


Substitute complex phrases with simpler alternatives.


Apply a function to a list of sentences and return only the sentences with changed terms. The result is a tuple of the original sentence, new sentence, and the starting and ending position of each changed fragment in the sentence.


Read in a PDF, pull out basic stats, attempt to normalize its form fields, and re-write the in_file with the new fields (if rewrite=1). If you pass a Spot token, we will guess the NSMI code. If you pass OpenAI creds, we will give suggestions for the title and description.


Get a single number representing how hard the form is to complete. Higher is harder.