ALKiln automated testing
Any docassemble package can use ALKiln, though it does have special features created especially for projects using AssemblyLine. Give us feedback and propose ideas by making issues at https://github.com/SuffolkLITLab/ALKiln/issues.
🚧 Reference material for testing your interview with ALKiln. This is under very active development.
Intro
(Moving)
The ALKiln (Assembly Line Kiln) framework runs tests on your docassemble interview either through the Playground or through GitHub, making sure your interviews are running correctly.
Docacon 2023, demo and examples of ALKiln automated testing:
You can skip straight to writing tests. A little earlier in the presentation, you can see a bit about why we test, what we test, and why to automate testing.
You can also read the presentation slides themselves with all the speaker's notes.
Start
(Moving)
You can run ALKiln in a variety of ways, each with its own pros, cons, and other details. The sections below cover setting up the different ways of running tests.
Set up for ALKilnInThePlayground
(Moving)
- Leave the Playground and go to your server's "Package Management" page.
- Install the ALKilnInThePlayground package from its `main` branch.
- Follow docassemble's instructions to add it to the dispatch list. This will add it to the server's list of interviews. It may look something like this:

```yaml
dispatch:
  alkiln: docassemble.ALKilnInThePlayground:data/questions/run_alkiln_tests.yml
  # Your other server interviews
```

- Go to your server's list of interviews and run the first page of the newly installed interview.
- On that first page, install the latest version of ALKiln by picking the top item in the list. Tap to install it.
- Make sure you have a Project that you want to test in your list of Projects.
- Make sure that Project has some tests. If you don't have any, you can start with your first test.
- Refresh the ALKilnInThePlayground interview page or run the interview again.
- Pick a Project to test.
- Run the tests and see the output.
Set up for GitHub-triggered tests
(Moving)
You will need to have a GitHub account that has permissions to edit the GitHub repository that you want to test.
- Prepare your repository or organization for testing using ALKiln's test setup interview. Follow the instructions there to add new code to your repository, including a new "workflow" file. This can take over 30 minutes if you're unfamiliar with GitHub and docassemble API keys.
- In Docassemble, make a new Project and pull in the package's updated code.
- In the Project's Sources folder, add files with a `.feature` extension to write tests. You might already have an example test in that folder if you chose to create one during the ALKiln setup interview.
- Commit the new files to GitHub to trigger the tests to run.
- See the results.
(Moving)
If the repositories you want to test belong to one GitHub organization, get one of the organization's admins to run the ALKiln test setup interview and create organization secrets, or to create them manually. Every repo in the organization can use those secrets, so you don't have to bother adding them to each repository. Otherwise, you have to create the same secrets for each repo, and if one of the values changes, you'll have to update it in every repo.
Set up for isolated GitHub server tests
(Moving)
This is an advanced method, and we are happy to help you with it. It helps a lot to be familiar with GitHub and GitHub Actions. You'll need a GitHub repository or organization admin for one of these steps, and you'll need to set this up for each repo running ALKiln tests.
- Go through the same steps as for GitHub-triggered tests above.
- Make a new branch on the same GitHub repo.
- A repo or org admin should create a GitHub secret called `CONFIG_CONTENTS`. For its value, the admin should copy and paste your testing server's config. Using your production server's config is a security risk. No one will be able to see these values after you leave the secret's page, not even the person who made it. Note: whenever you change the server's config, you'll want to update `CONFIG_CONTENTS`.
- Leave your other workflow secrets alone. Note: your `DOCASSEMBLE_DEVELOPER_API_KEY` and `SERVER_URL` do not need to be on the same server as your `CONFIG_CONTENTS`.
- Make a new workflow file that is similar to ALKiln's own workflow file. If you want to stop old tests from running, delete your old workflow file.
- Read the notes in that file to see what you need to change to adapt the file for your project. Each of the notes is marked with `#### Developer note`.
- Commit the changes.
- See that the tests pass.
- Make a pull request to your default branch (probably `main`) to review the code and merge it in.
(Moving)
To use this method, avoid using hard-coded urls to go to a test interview. That is, avoid using https://my-server.com to navigate to interviews or other server pages; that would send the tests to your own server instead of the isolated GitHub server.
(Moving)
If you need to troubleshoot the docker setup because the step to start the GitHub server keeps failing, you can make the docker startup logs visible and allow ALKiln to create an artifact of the docker installation logs. Do this by passing the input `SHOW_DOCKER_OUTPUT` to ALKiln's GitHub server action, like our script (linked above) does for itself. The input arguments might look like the below:

```yaml
with:
  CONFIG_CONTENTS: "${{ secrets.CONFIG_CONTENTS }}"
  SHOW_DOCKER_OUTPUT: true
```
(Moving)
It's possible to do more docker troubleshooting using the tmate action. You can see code for that in our workflow file as well.
Quick reminders
(Moving)
- You write and edit `.feature` test files in your Sources folder.
- There are some fields ALKiln cannot yet handle, including `object`-type fields, like `object_multiselect`.
- By default, each Step or field may only take 30 seconds. You can change that with the "the maximum seconds" Step listed in the Steps.
- If you're using GitHub, tests are run when anyone commits to GitHub.
- Tests can download docx files, but humans have to review them to see if they've come out right.
- You will be able to see pictures and the HTML of pages that errored. In GitHub, you can download them from the zip file in the Action's artifact section.
- ALKiln also creates test reports. In GitHub, you can download them in the same place.
Example
(Moving)
The tests use the Gherkin language and syntax. Here's a complex example for a quick refresher on some of our features:
```gherkin
@children
Feature: I have children

  @fast @child_benefits
  Scenario: child has benefits
    Given I start the interview at "some_file_name.yml"
    And I get to the question id "benefits" with this data:
      | var | value | trigger |
      | x[i].name.first | Cashmere | children[0].name.first |
      | x[i].name.last | Davis | children[0].name.first |
      | x[i].name.first | Casey | children[1].name.first |
      | x[i].name.last | Davis | children[1].name.first |
      | x.there_are_any | True | children.there_are_any |
      | x.target_number | 2 | children.there_is_another |
    When I set the variable "benefits['SSI']" to "True"
    And I tap to continue
    Then the question id should be "download"
    And I download "some_motion.pdf"
```
First test
(Moving)
You can write a short test right away that just makes sure your YAML file runs.
- In the Playground of your Project, go to the "Sources" folder.
- Add a file that ends in `.feature`. For example, `interviews_load.feature`.
- In that file, write `Feature:` at the top, with a description of the general category of tests that will be in that file. Each `.feature` file must start with `Feature:` (with some special exceptions).
- Add a `Scenario:` for each interview you want to test. The file should look similar to this:
```gherkin
Feature: Interviews load

Scenario: The 209A loads
  Given I start the interview at "ma_209a_package.yml"

Scenario: The Plaintiff's Motion to Modify loads
  Given I start the interview at "plaintiffs_motion_to_modify_209a.yml"
```
You can wait to write more complex tests until your code is more stable - for example, until your variable names are mostly staying the same.
Story Tables
(Moving)
In your Scenario, you will:
- Add a Step to go to the interview
- Add any other Steps you need
- Choose what page your Story Table should get you to
- Add the Story Table Step command that includes that page's id
- Add the Story Table header row
- Add a row for each variable
- Add whatever other Steps you need
Story Table Steps, in our opinion, are the most effective and flexible way to fill in fields in most cases. The items in the table are a snapshot of the user who is filling out the form for that test.
Example:

```gherkin
And I get to the question id "has sink" with this data:
  | var | value | trigger |
  | last_haircut_date | today - 730 | last_haircut_date |
  | wants_virtual_haircut | True | last_haircut_date |
  | user_name | Beth | user_name |
  | intro_screen_accepts_terms | True | intro_screen_accepts_terms |
```
The row with `| var | value | trigger |` is required.
How it works: you tell the Story Table Step the id of the `question` you want to get to. You also make a table of the variables and values the test will need to fill in on its way to that page. Whenever the test gets to a page, it checks your Story Table for any variable names in the table that match a variable name on the page. When the test finds a match, it sets the field to the corresponding value. When the test is done filling in fields on a page, it continues to the next page and repeats until it reaches the `question` id you gave.
As you can see in the example, the order of the items doesn't matter. The test will fill in the fields no matter what order they come in. You will be able to make simple edits to the interview without needing to update your tests. For example, you can move pages around or even move fields to different pages.
It also doesn't matter if you include extra items accidentally, though you might want to check the test reports to make sure no necessary fields went unused.
You can write a Story Table that goes all the way through your interview, or a Story Table that only goes part way. You can have multiple Story Tables in one Scenario and you can put other Steps between tables.
Right now, Story Tables are unable to use GitHub secrets to set variables.
A Story Table Step must not be the first Step in your Scenario. The `interview` Step must come before it.
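For example, here is a minimal sketch of that ordering. The file name, question id, and variable are placeholders:

```gherkin
Scenario: The interview Step comes before the Story Table
  Given I start the interview at "some_file_name.yml"
  And I get to the question id "some id" with this data:
    | var | value | trigger |
    | user_name | Beth | user_name |
```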
Generate a Story Table
(Moving)
You can use the story table generator to generate a Scenario draft. Depending on your interview's code you might need to edit the table for it to work properly, but it can give you a good start.
Follow these instructions to use the generator:
- If you don't have one already, add a new test file. You can leave out the Scenario.
- Ensure your server config is set up to show debug info.
- Run your interview manually until you reach the page you want the story table to get to.
- Open the "source" display of the interview. Currently, that looks like angle brackets, `</>`, in the header of the page.
- Note the `id` of the page.
- Tap the "Show variables and values" button. It will open a new tab showing a big JSON object.
- Copy all the text on that page.
- Go to the story table generator.
- Paste the JSON into the text area there, as instructed.
- Use the other input fields to help finalize your Scenario, including the page `id`.
- Copy the Scenario that has been generated for you.
- Paste that into the already prepared test file.
Step command
(Moving)
The Step that triggers a story table is:

```gherkin
And I get to the question id "some id!" with this data:
```

question id: The story table needs to know the `id` of the page this story table should get to. You can find the `id` in the `question` block in the YAML in the Playground.
Rows
(Moving)
Indented under the command, put the header row of the table:

```gherkin
| var | value | trigger |
```

- `var` lists the variable the field sets, exactly as it appears in the code of the question.
- `value` is the value you want the test to fill in.
- `trigger` lists the variable that triggers that variable's page.

Under that, add a row (like the blank template below) for each field that you want the test to interact with during the interview:

```gherkin
| | | |
```
var
(Moving to Story Table and to setting values)
In the `var` column, write the name of the variable that a field sets, exactly as it appears in the `question` block. Most times you can see that name in the YAML `question` block. If `code:` is used to create the field's variable name, you may have to talk to the developers who wrote that code to find out the names of the variables it generates.
Examples:

```
court_date
users[0].name.first
users[i].children[j].benefits['SSI']
x.favorite_color
```
value
(Moving to Story Table and to setting values)
In the `value` column, write what you want the field to be set to. For checkboxes, `True` means 'checked' and `False` means 'unchecked'.
One special value you can include is `today`. That will insert the date on which the test is being run. You can also subtract days from, or add days to, `today`. Examples:

```gherkin
| signature_date | today | |
| court_date | today + 20 | |
| minors_birth_date | today - 3650 | |
```
The last example makes sure that the date is about 10 years in the past, ensuring that a minor always stays a minor for that test.
trigger
(We recommend you switch to a 2-column Story Table. We are moving this old 3-column Story Table documentation.)
`trigger` is an optional value in most cases. It is only mandatory for rows that use index variables, like `i`, `j`, or `k`, or generic objects (`x`).
Your interview must always include some special HTML shown here to let the trigger variable work properly. You will get an annoying warning in the report if you leave that out.
In the `trigger` column, write the name of the variable that triggers the page on which the field appears. Note that especially - it's the variable that triggers the page, not the field. If you have 10 different variables on one page, they will all have the same text in their `trigger` column.

In the example below, the `trigger` is `users[0].hair.how_much`.
```yaml
---
id: interview order
mandatory: True
code: |
  users[0].hair.how_much
---
id: hair
question: |
  Tell us about your hair
fields:
  - How much hair do you have?: users[i].hair.how_much
  - What color is your hair?: users[i].hair.color
```
Your story table rows to set those values would look like this:

```gherkin
| var | value | trigger |
| users[i].hair.how_much | Enough | users[0].hair.how_much |
| users[i].hair.color | Sea green | users[0].hair.how_much |
```
Even though the `var` columns are different, both `trigger` columns have `users[0].hair.how_much`. That's because the trigger is for the page, not for the fields. Both fields are on that same page - a page triggered by `users[0].hair.how_much`.
Never use docassemble's `x`, `[i]`, `[j]`, `[k]`, etc. in the trigger column.
There are some rare cases where no `trigger` exists. For example, `question` blocks with the `mandatory` specifier:

```yaml
mandatory: True
question: |
  Do you like mandatory questions?
yesno: likes_mandatory_questions
```
In those cases, leave the `trigger` column empty.
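For example, a row for the mandatory block above might look like this sketch, with the third column left blank:

```gherkin
| likes_mandatory_questions | True | |
```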
Story Table examples
(Moving)
Simple field types with their values.
(Moving)
The 'yes' choice of `yesno` buttons or `yesno` fields, like `yesno` checkboxes and `yesnoradio`s:

```gherkin
| has_hair | True | has_hair |
```
(Moving)
The 'maybe' choice in `yesnomaybe` buttons and `datatype: yesnomaybe` fields:

```gherkin
| has_hair | None | has_hair |
```
(Moving)
Checkboxes with multiple choices. The value 'True' means to check the checkbox and 'False' means to uncheck it:

```gherkin
| benefits['SSI'] | True | benefits |
```
(Moving)
Radio or dropdown choices:

```gherkin
| favorite_color | green | favorite_color |
```
(Moving)
Text field or textarea. Even if the answer has multiple lines, you can only use one line. When a new line is supposed to appear, instead use `\n`. See below:

```gherkin
| favorite_color | Blue.\nNo, green!\nAah... | favorite_color |
```
(Removed - no longer used)
A generic object with an index variable:

```gherkin
| x[i].name.first | Umi | users[1].name.first |
```
`.there_is_another` loop
(Moving)
The `.there_is_another` loop in a story table is more complicated than you might expect. The story table handles setting the `.there_is_another` attribute automatically. You, as the developer, must pretend to use the `.target_number` attribute instead, whether you actually use it or not.
In your `var` column, replace any `.there_is_another` rows for a particular variable with one `.target_number` row. In the `value` column, put the number of items of the appropriate type. The `trigger` column should have the name of the page's trigger variable, as usual. Example:

```gherkin
| x[i].name.first | Jose | users[0].name.first |
| x[i].name.first | Sam | users[1].name.first |
| x[i].name.first | Umi | users[2].name.first |
| x.target_number | 3 | users.there_is_another |
```
Story Table signature
(Moving)
The `value` for a row setting a signature doesn't matter. All signatures will be a single dot.

```gherkin
| user.signature | | user.signature |
```
Other story table notes
(Moving)
Don't worry about accidentally including variables that won't show up during the test. Extra rows will be ignored.
Steps
(Moving)
Steps must be written one after another in the order they should happen - it's a bit more like you're the user clicking through the form. They let you do things like download a file or make sure a user-input validation message appears. If you change the order of the questions, even if you don't change any variable names, you may have to reorder these kinds of Steps to match the new order of the screens.
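For example, here is a minimal sketch of Steps written in the order a user would act. The file name, variable, and question id are placeholders:

```gherkin
Scenario: Steps run in the order the user would act
  Given I start the interview at "some_file_name.yml"
  When I set the variable "user_name" to "Beth"
  And I tap to continue
  Then the question id should be "next page"
```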
(Moving)
Note: `When`, `Then`, `And`, and `Given` at the beginning of sentences can all be used interchangeably. It doesn't matter which you use.
Starting Steps
(Moving)
These are establishing Steps that you might use as the first few lines of a "Scenario" - a test. They can also be used at any other time.
(Moving)
You must include the `interview` Step in each Scenario before setting any fields. Use an interview's filename in the `interview` Step to open the interview you want to test.

```gherkin
Given I start the interview at "yaml_file_name.yml"
```

This Step must always be included in each `Scenario` before setting the values of any fields. There is no other way for the tests to know what website to go to.
(Moving)
The `wait` Step can be a way to pause before the test tries to go to the interview's first page.

```gherkin
Given I wait 120 seconds
When I start the interview at "yaml_file_name.yml"
```

This Step can also be used anywhere in your Scenario to wait between Steps.

```gherkin
And I wait 1 second
```
(Moving)
You can also start by making sure the test will give the interview's first page time to load once the test goes there. The default maximum time is 30 seconds. This Step can be useful if you know that your interview's first page takes longer to load.

```gherkin
Given the maximum seconds for each Step is 200
When I start the interview at "yaml_file_name.yml"
```

This Step can also be used anywhere else in your Scenario to give Steps more time to finish.
(Moving)
You can use the `log in` Step to sign into your docassemble server before going to the interview:

```gherkin
Given I log in with the email "USER_EMAIL" and the password "USER_PASSWORD"
When I start the interview at "yaml_file_name.yml"
```

It will start a new session of the interview.

This is a complex Step to use and right now it only works in GitHub (though we are working on developing the feature for the Playground version). You must use a GitHub "secret" to store each value, as the information is sensitive. To learn how to create and add a secret for a test, see the GitHub secrets section.

"USER_EMAIL" and "USER_PASSWORD" are just examples of names. You can use any names you want.
Observe things about the page
(Moving)
The `question id` Step will make sure the page's question id is right. This Step can help humans keep track of what page the tests are on. It will also show up in the logs of the tests and can help you see where things went wrong.

Copy the `id` value from the YAML `question` block of the screen you want to test.

```gherkin
Then the question id should be "some yaml block id!"
```
(Moving)
The `invalid answers` Step can check that the user was prevented from continuing.

```gherkin
Then I will be told an answer is invalid
```
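For example, a sketch that tries to continue past a page with a required, unanswered field and then checks for the validation message. The question id here is a placeholder:

```gherkin
Then the question id should be "terms of use"
When I tap to continue
Then I will be told an answer is invalid
```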
(Moving)
The `screenshot` Step will take a picture and download the HTML of the screen. In GitHub tests, it will be put in the GitHub action's artifacts.

```gherkin
Then I take a screenshot
```
(Removed in favor of "I see the phrase")
The link
Step can make sure a link appears on the page. For example, a link to quickly leave the page for forms that deal with domestic abuse.
Then I should see the link to "a-url.com"
(Moving)
The `phrase` Steps can check for text on the page. Checking phrases will be language specific.

Getting the characters right can be tricky with docassemble. If you get told a phrase is missing, read about a possible reason in the errors section.

```gherkin
Then I SHOULD see the phrase "some phrase"
Then I should NOT see the phrase "some phrase"
```

The phrase should be inside double quotation marks and should NOT itself contain regular double quotation marks. That usually isn't a problem with docassemble pages because docassemble transforms our code in ways we don't always expect. See the missing phrase section that talks about special characters.
(Moving)
The `accessibility` Step can check a page for its accessibility by running aXe-core on the page.

```gherkin
Then I check the page for accessibility issues
```

This will include a separate JSON file if there are any accessibility issues with the page.

You can also check all pages past a certain point automatically:

```gherkin
Then I check all pages for accessibility issues
```

This is equivalent to running `I check the page for accessibility issues` on every new page that the test runner sees.
(Moving)
The `text in JSON` Step can check that a variable on the page has a specific text value. This is a multi-line step. It will also save a copy of all of the page's JSON variables to a file whose name starts with 'json_for' followed by the question's id. The JSON variables are the same variables that you would see in the docassemble sources tab.

This step is unable to check values of nested objects. For example, it can test the value of a variable like `user_affidavit`, but not an attribute like `user.affidavit`.

```gherkin
Then the text in the JSON variable "user_affidavit" should be
"""
Three quotes then some affidavit text.
The text can be multi-line.
Then close with three quotes.
"""
```
(Moving)
The `JSON variables` Step will add the page's JSON variables to the final test report. It's a bit messy, but you do get to see all the variables.

```gherkin
And I get the page's JSON variables and values
```
Set values
(Moving)
Fill in fields with answers, set values, and interact with the page in other ways.
(Moving)
The `continue` Step will tap the button to continue to the next page. The text on the button itself doesn't matter.

```gherkin
When I tap to continue
```
(Moving)
Use the `set the variable` Step to fill in an answer on the page. This sets the value of that field's variable.

The first set of quotes contains the name of the variable you need to set. The second set of quotes contains the value you want to set. These are the same as the story table's `var` column and `value` column.

Example of a text field:

```gherkin
When I set the variable "users[i].hair_color" to "Turquoise"
```

Example of a checkbox field:

```gherkin
When I set the variable "users[i].fruits['apple']" to "True"
And I set the variable "users[i].fruits['pear']" to "True"
```

Example of a radio button field:

```gherkin
When I set the variable "users[i].favorite_bear" to "cuddly_bear"
```
(Moving)
Be sure to use the actual `value` of the field, not just the text that the user sees. For example, for this YAML code the `value` would be "cuddly_bear", not "Teddy bear". "Teddy bear" is just the label.

```yaml
question: Fav bear
fields:
  - Pick one: users[i].favorite_bear
    datatype: radio
    choices:
      - Black bear: wild_bear
      - Brown bear: also_wild_bear
      - Teddy bear: cuddly_bear
      - Jaime: maybe_not_a_bear
```
One special value you can use is `today`. That will insert the date on which the test is being run. You can subtract or add days using `today`. Examples:

```gherkin
When I set the variable "signature_date" to "today"
When I set the variable "birthdate" to "today - 500"
When I set the variable "court_date" to "today + 12"
```
Be careful with `today` math. If you're testing a "boundary" date, pick a date that is well within the boundary to make the test more robust. For example, if you want to test a date that was 10 or fewer days ago, use `today - 9` instead of `today - 10`. At some point your tests might run over midnight. In that situation, if you used `10`, the clock would tip into the next day - 11 days from the filled-in date - and your test would fail incorrectly. In fact, your users might have that experience, so design your interview with that in mind. For example, you can tell them the date of the deadline so they too will know about the boundary.
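Here is a minimal sketch of that advice, assuming a hypothetical `incident_date` variable with a 10-day boundary:

```gherkin
# Stays well within the 10-day boundary even if the test runs past midnight
When I set the variable "incident_date" to "today - 9"
```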
You can also use environment variables to set values with the `secret variables` Step, even if the value doesn't need to be secret.
(Moving)
The `secret variables` Step can set variables that have sensitive information, for example a password. The value of this variable will not appear anywhere in the report or in the console. If there is an error on this page, ALKiln will still avoid taking a picture of the screen.

This is a complex Step to use and currently only works with tests running in GitHub (though it's in development for the Playground version). You can use a GitHub "secret" to store the value. To learn how to create and add a secret for the test, see the GitHub secrets section.

```gherkin
I set the variable "user_account_password" to the GitHub secret "USER1_PASSWORD"
```

You MUST use the `log in` Step if you want to sign into your docassemble server. The `secret variables` Step shown here is unable to do that.
(Moving)
Sign on a signature page. All signatures are the same - one dot.

AVOID taking screenshots of signature pages. There's a bug that will erase the signature if you do that.

```gherkin
When I sign
```
Use one of the `tap element` Steps to tap an item on the page. You need to know a bit about HTML to work with this Step.
(Moving)
- You can tap on specific HTML elements on the page, like buttons, to navigate to the next page or a new page. You can use any valid CSS selector to get an element on the page. You can add any additional wait time after tapping the element.

```gherkin
When I tap the "#element-id" element and go to a new page
# Or
And I tap the "#other-element" element and wait 5 seconds
```
(Moving)
- You can also tap on HTML elements without navigating. For example, you can tap on collapsible content to show the text inside.

```gherkin
When I tap the "#an_id .al_toggle" element and stay on the same page
```

You might want to add a `wait` Step after that to let the contents become visible. For example, `And I wait .7 seconds`.
(Moving)
Use the `tap tab` Step to interact with ALToolbox tabs. ALKiln will tap on the tab and then wait until the tab contents are fully visible.

```gherkin
When I tap the "TabGroup-specific_tab_name-tab" tab
```

Use the HTML id of the tab, but leave out the `#` symbol (like the example shows).
(Moving)
Use the `story table` Step to make sure the test reaches a particular screen given a set of fields with their values. See more details in the sections above.

```gherkin
I get to the question id "some question block id" with this data:
```
(Moving)
The `name` Step is specifically for the Document Assembly Line 4-part name questions.

Avoid punctuation. We recommend you use just 2 names - the first name and last name - but you can use any of these formats:

- Firstname Lastname
- Firstname Middlename Lastname
- Firstname Middlename Lastname Suffix (where the suffix is one of the dropdown suffix choices, like `II`)

```gherkin
When I set the name of "x[i]" to "Sam User"
```
(Moving)
The `address` Step is specifically for the Document Assembly Line 4-part address questions.

It allows a US address format, but can otherwise be any address you want that matches the format of the example below. Remember the commas.

```gherkin
When I set the address of "users[0]" to "120 Tremont Street, Unit 1, Boston, MA 02108"
```
Other actions
(Moving)
Use the `download` Step to download files so that humans can check that they are correct. When the tests run in GitHub, the files will be in the GitHub action's artifacts. If you think this step could take more than 30 seconds, use the "maximum seconds for each Step" Step to give the file more time to download.

```gherkin
Then I download "file-name.pdf"
```

Leave out the other parts of the file's url.
(Moving)
You can compare an example PDF (sometimes called a baseline) to a downloaded PDF to make sure they're the same. The baseline PDF must be stored in your "Sources" folder along with your tests, and the downloaded PDF should have been downloaded by the above Step (`Then I download "download.pdf"`) earlier in the same Scenario.

```gherkin
Then I expect the baseline PDF "baseline.pdf" and the new PDF "download.pdf" to be the same
```

This will compare all of the text in the baseline PDF with all of the text in the newly downloaded PDF, and then it'll compare the fillable fields of each. If anything is different, the report will print out what differed. You can use that info to search for the differing text in the baseline PDF and in the downloaded PDF in the artifacts to see how they differ.
(Moving concept. Step deprecated in favor of regular variable setting)
Use the `upload` step to upload one or more files. You must store files that you plan to upload in your "Sources" folder along with your tests.

As you can see in the examples, if you want to upload more than one file you must separate their names with a comma.

```gherkin
And I upload "irrefutable_evidence.jpg, refutable_evidence.pdf" to "evidence_files"
```

In a story table, use the name of the variable as usual and put the name of the file or files in the `value` column.

```gherkin
| evidence_files | irrefutable_evidence.jpg, refutable_evidence.pdf | |
```
(Moving)
Use the `custom timeout` Step to give your pages or Steps more time to finish. The default maximum time is 30 seconds. This Step can be useful if you know that a page or an interaction with a field will take longer. You can also use it to shorten the time to let tests fail faster. If you need to, you can use it in multiple places in each Scenario.

```gherkin
Then the maximum seconds for each Step is 200
```
(Moving)
Use the `wait` Step to pause once a page has loaded. It lets you wait a number of seconds when you are on a page. The time must be shorter than the maximum amount of time for each Step. By default, that's 30 seconds, but you can increase it with the "maximum seconds for each Step" Step.

```gherkin
When I wait 10 seconds
```

This Step can be used multiple times.

Waiting can help in some situations where you run into problems with timing. The situations that need this are pretty rare, but here's an example: you navigate to a new page and set a field. Sometimes the test passes, but sometimes the test says an element on this page does not exist. The problem is probably that the page sometimes needs an extra few seconds to load. Add this Step in to give it that time.
Example:

```gherkin
And I tap to continue
When I wait 10 seconds
And I set the variable "favorite_color" to "puce"
```
Tips
(Moving)
Some of these are just good practices to follow when coding your interviews.
(Moving)
In questions with choices, give each label a value. See docassemble's documentation on buttons to read about key-value pairs.
Not great, with just labels:

```yaml
question: Tell me about yourself
fields:
  - Favorite color
```

Better, with values as well:

```yaml
question: Tell me about yourself
fields:
  - Favorite color: user_favorite_color
```
It's always possible to use the labels alone, but giving a value as well ensures your tests will work for translated versions of your interview. It also helps your code be more translatable in general.
(Moving)
Add a unique id to each `question` block of your interview. This also helps your team communicate with each other more easily.
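For example, a minimal sketch of a `question` block with an id; the id text and variable are placeholders:

```yaml
id: user name
question: |
  What is your name?
fields:
  - First name: users[0].name.first
```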
(Moving)
Avoid `noyes`-type fields. For one thing, the story table generator code will need less editing. For another, we've found that humans tend to find those fields confusing too.
(Moving)
If your package is not specifically importing `al_package.yml` from the styled Assembly Line package, make sure to add the trigger variable code to your interview.
(Moving)
You can write tests that just go part-way through an interview. That way, you can work on adding more content and yet also be sure that the work you've already done isn't affected by the new changes.
(Moving)
Use old Scenarios or story tables to help you make new ones. You don't have to make everything from scratch.
Test output
(Moving)
ALKiln creates files and folders showing the output of the tests. In GitHub, you can download these GitHub "artifacts" at the bottom of the summary page for that run of tests.
The output ALKiln creates includes:
- A report of the result from all the tests.
- Information about the screens where ALKiln ran into errors or unexpected behavior, including:
- Pictures when the error happened
- The HTML, slightly modified so CSS styles will load locally
- A folder for each test (or Scenario) named using your Scenario description.
- A report for that specific Scenario, as well as pictures you took of screens and the associated HTML of that page, files you downloaded, and pictures of any errors it caused with its HTML.
Error pictures and HTML files
(Moving)
ALKiln will try to take pictures of pages that run into errors. The names of those files use the id of the page where the error happened. There you might see that the test was unable to continue to the next page because required fields weren't filled, or that a variable wasn't defined. ALKiln avoids taking pictures of erroring pages when the page used GitHub secrets, in case they contain sensitive information.

Each time ALKiln takes a picture, it also saves the HTML of the page. This HTML file will have the same name as the picture, but will end with `.html`.

You can open this HTML file in your browser to interact with the page and inspect the page's HTML further.

The page in your browser might not look like the picture, and you shouldn't expect it to.

However, in the HTML, you can look at what particular options might have been available in a dropdown, or examine any accessibility errors.
Reports
(Moving)
We're always trying to understand what people would find helpful in these reports. Tell us about your experiences at https://github.com/SuffolkLITLab/ALKiln/issues.
A report might look something like this:
```
Assembly Line Kiln Automated Testing Report - Wed, 29 Dec 2021 17:49:00 GMT
===============================
===============================
Failed scenarios:
---------------
Scenario: I get to the download page
---------------
ERROR: The question id was supposed to be "download", but it's actually "agree-to-terms".
**-- Scenario Failed --**
===============================
===============================
Passed scenarios:
---------------
Scenario: I fill in my name
---------------
screen id: user-name
| user.name.first | Maru | |
| user.name.last | Plaintiff | |
```
A report has a title with the date and time. It also has two main sections - the failed Scenarios and the Scenarios that passed.

Within each of those, every Scenario has its own section. In the Scenario's section, ALKiln lists the id of each screen where fields were set, in the order in which the screens appeared. Under each screen `id` are the names of the variables whose fields were set and the values they were set to. We're still working out some issues here.
(Moving)
If you used a story table Step, a Scenario might look more like this:
```
---------------
Scenario: I fill in my name
---------------
screen id: user-name
| user.name.first | Maru | |
| user.name.last | Plaintiff | |

Rows that got set:
And I get to the question id "child information" with this data:
| var | value | trigger |
| user.name.first | Maru | |
| user.name.last | Plaintiff | |

Unused rows:
| defendant.name.first | Sam | |
| defendant.name.last | Defendant | |
```
Since story table Steps don't care about having extra unused rows, the report lets you know which rows did or did not get used. If rows are listed under "Unused rows", ALKiln couldn't find the fields for those variables during the test. Despite that, it was still able to get to the desired question id. You should check that section to make sure all your variables got used.
Rows are listed in alphabetical order. If you have thoughts on pros and cons, we'd love to hear from you.
If everything looks right to you there, you can copy and paste the text under "Rows that got set" into your test to get rid of the extra rows you've got hanging around.
(Removed as unclear)
If a screen loaded with an error message, ALKiln will try to reload a few times, and will try to log the error message that it saw:

```
---------------
Scenario: I opened the interview
---------------
ERROR: On final attempt to load interview, got "Reference to invalid playground path"
ERROR: On final attempt to load interview, got "Reference to invalid playground path"
ERROR: On final attempt to load interview, got "Reference to invalid playground path"
ERROR: Failed to load "a-great-interview" after 3 tries. Each try gave the page 30 seconds to load.
**-- Scenario Failed --**
```
(Moving)
ALKiln will also try to take a picture of the page where the error happened. There will be two copies of that picture - one in the main folder of the output and one in the folder of the specific test (the Scenario) that caused the error.
Also watch the errors and warnings section for updates on similar information.
See GitHub test results
(Moving)
In GitHub, to see the list of previous tests or running tests, go to your repository's GitHub Actions page.
One of the rows should have the text of the commit you just made. The test may have a yellow dot next to it. That means it's still running. When the dot has turned into a red 'x' or a green checkmark, tap on the name to go to the test's summary page.
To see the output text of a test run online - its logs - follow these GitHub instructions.
ALKiln also creates files and folders showing the output of the tests. In GitHub, you can download these GitHub "artifacts" at the bottom of the summary page for that run of tests.
Errors and warnings
(Moving)
This section is a constant work in progress.
A missing trigger variable
(Moving)
This warning only matters for story tables that use index variables or generic objects. The warning isn't a bug, and if the above doesn't apply to you, you can ignore it. A future goal of ours is to remove the warning from Steps that don't need it.

If you are using a story table with index variables or generic objects, you need to add some code to the interview file where you set your `default screen parts` block.
(Moving)
Add exactly this code to your `default screen parts` block to insert an invisible element in all your screens:

```yaml
default screen parts:
  # This HTML is for ALKiln automated tests
  post: |
    <div data-variable="${ encode_name(str( user_info().variable )) }" id="trigger" aria-hidden="true" style="display: none;"></div>
```
Use that HTML exactly. No customizations.

If you already have something in your `post:`, just copy the `<div>` and paste it in after the other code. Putting it at the end can avoid messing up other HTML.

If you want to see some very technical details about why we need it in the first place, you can go to https://github.com/SuffolkLITLab/ALKiln/issues/256, where we've tried to summarize the problem this is solving. Unfortunately, we haven't found another way to solve this particular problem.
Timeout or "took too long" error
(Moving)
Different problems can cause the report to say that something "took too long" or caused a "timeout" error.
A "timeout" error can happen when a page took too long to load at some point in setup, when running tests, or during test cleanup. This can be because:
- The page was trying to load a big file.
- ALKiln could not continue to the next page for some reason.
- A Story Table was unable to reach the page with the specified `id`.
- There's a typo in the name of the interview YAML file that the test should go to.

If a page was taking too long to load a big file, use the `custom timeout` Step to give the page more time to load.
You might be able to look at the error page picture for more details. In GitHub, you can download the test artifacts to look for it.
In GitHub, this error can also happen when:
- The server was busy for too long.
- The server was down.
- The url stored in the `SERVER_URL` GitHub secret is wrong or out of date.
If the server might have been busy or down, try re-running the tests.
You won't be able to tell if the `SERVER_URL` is wrong - GitHub considers the value of the secret to be sensitive information, so it's impossible to see that value. You can still give it a new value, though, and that's worth trying. Find the address of the docassemble server where the docassemble testing account is located. Edit the secret to give it that url.
Invalid playground path error
(Moving)
If you see the text "invalid playground path" in the report, that means the `Given I start the interview at...` Step for that Scenario is naming an interview that doesn't exist. Check for a typo.
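For example, a hypothetical typo and its fix:

```gherkin
# Typo - this file doesn't exist in the package:
Given I start the interview at "restraining_ordr.yml"
# Fixed:
Given I start the interview at "restraining_order.yml"
```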
UnhandledPromiseRejection error
(Moving)
This is a misleading error. You need to read the text of the whole paragraph to see what the actual error is.
Phrase is missing
(Moving)
If you get an error message that an expected phrase is missing, make sure you copy and paste the text you're expecting directly from the running interview page.
Sometimes the characters in your code and the characters on screen are not the same. For example, in our code we often use apostrophes as quotes (') and docassemble changes them to actual opening and closing quote characters (‘ and ’). Same for double quotes. In our code editor, we use the unicode character " (U+0022) both for opening and closing quotes. On the running interview page, docassemble changes those into “ - "left double quotation mark" (U+201C) - and ” - "right double quotation mark" (U+201D).

They look very similar, but are not the same. It's best to copy the text straight from the screen the user sees.
Wrong:

```gherkin
I should see the phrase "a document called a 'Certified docket sheet'"
```

Example error:

```
The text "a document called a 'Certified docket sheet'" SHOULD be on this page, but it's NOT
```

Right:

```gherkin
I should see the phrase "a document called a ‘Certified docket sheet’"
```
Inconsistent cell count
(Moving)
This error prevents all of your tests from running. The message is telling you that something about the syntax of a table is wrong. One of your story tables could be missing a pipe (`|`) or could have an extra pipe, etc. (see the sketch below).

To fix this, you can find the syntax typos by using an editor like the editor at AssertThat. It will let you paste in your test code and will show a red 'x' next to the lines that have syntax errors. The editor will not show errors next to lines that are commented out - those are the ones that start with `#`.
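For example, a hypothetical table with a missing cell, and its fix:

```gherkin
# Broken - the second row has one cell fewer than the header:
| var | value | trigger |
| user_name | Beth |

# Fixed:
| var | value | trigger |
| user_name | Beth | |
```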
The error message will include text similar to this:

```
Error: Parse error in 'docassemble/ChildSupport/data/sources/new_case.feature': (10:5): inconsistent cell count within the table
```
Security
(Moving)
Using a third-party library or package is always a risk. That said, we take measures to help secure our code, such as protecting our release branches and requiring reviews before merging any new code.
In addition, here are some actions you can take to manage the security of the tests, as well as general guidelines for server security.
Disable the tests
(Moving)
If you become worried about the tests, there are different ways you can stop the tests from running.
In order to run, the test setup interview added a "workflow" file to your repository. GitHub sometimes calls that an "action". That's what triggers the tests. You can manage that workflow, and your actions in general, in GitHub.
Disabling tests in one repository
(Moving)
GitHub lets you disable workflow files like these. See their instructions at https://docs.github.com/en/actions/managing-workflow-runs/disabling-and-enabling-a-workflow.
You can also delete the file from your repository completely. If you go to the front page of your repository, the file is in the `workflows` folder of the `.github` folder. It's called `run_form_tests.yml`. GitHub's instructions about how to delete a file are at https://docs.github.com/en/repositories/working-with-files/managing-files/deleting-files-in-a-repository.
Another option is to disable or limit all tests, all actions, in your repository. GitHub's documentation for managing repository actions is at https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#managing-github-actions-permissions-for-your-repository.
Disabling tests for the whole organization
(Moving)
You can disable these tests, or any actions, for a whole organization. GitHub's documentation for managing organization actions is at https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#managing-github-actions-permissions-for-your-organization.
Use a separate server just for testing
(Moving)
Keep the test server completely separate from your production server so that the tests never expose sensitive information to potential malicious actors. To avoid running tests on your production server, use the testing server's address in the `SERVER_URL` GitHub secret. The test files themselves are secure as long as you don't put sensitive info in them. They don't do anything by themselves.
In addition, some general good practices are:
- Never share API keys or passwords between servers.
- Periodically clear out the test server and start a new docker container from scratch.
- Occasionally check the test server to make sure it's not running resource-stealing code (blockchain miners, etc.).
See GitHub's security docs
(Moving)
GitHub has documentation on some best practices as well: https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions
Use ALKiln's commit sha
(Moving)
This one requires prior technical knowledge. To summarize, you can freeze the version of the ALKiln code your repository's tests use by referencing a specific ALKiln commit sha in your workflow file.

- Go to ALKiln's repository. For example, for the v4 branch, you can go to https://github.com/SuffolkLITLab/ALKiln/commits/releases/v4.
- Find the sha of a commit you like.
- In your repository's directory, go to `.github/workflows` and find the file running the tests. There's a line in there that looks something like this:

```yaml
uses: suffolkLITLab/ALKiln@releases/v4
```

- Change `releases/v4` to the commit sha.
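For example, the changed line might look something like this; the sha below is a made-up placeholder:

```yaml
uses: suffolkLITLab/ALKiln@1a2b3c4d5e6f7890abcdef1234567890abcdef12
```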
When you want to update to a new version of ALKiln, update that sha manually.
Set ALKiln's npm version
(Moving)
This section requires prior technical knowledge about npm and GitHub workflow files. Feel free to ask us any questions you might have.
You can use an exact npm version of ALKiln by using your workflow file's `ALKILN_VERSION` input. The default uses a caret, for example `^4.0.0`. That means it will use the latest minor or patch release in version 4 of ALKiln. You can instead use an exact version, for example `4.3.0`. See our section on setting optional inputs.
GitHub secrets
You can use GitHub secrets to set environment variable values with sensitive information. For example, a password. ALKiln will avoid taking error pictures or downloading the HTML of pages with sensitive information. The value of a secret variable will not appear anywhere in the report or in the console.
Avoid taking pictures of pages with sensitive information. It is possible to trigger those pictures in Steps you write yourself, but we highly recommend against that for security reasons.
GitHub secrets can be useful if an organization wants to create a variable that all of its repositories will be able to use, though right now Story Tables are unable to use GitHub secrets to set variables.
- Follow the GitHub instructions to set one or more GitHub secrets. You can add these to one repository's secrets or you can add these to your organization's secrets, whichever is right for you.
- Go to the home page of your repository. Tap on the `.github` folder, then on the `workflows` folder, then on the YAML file in there that runs the ALKiln tests.
- It should include a section that looks like this:

```yaml
jobs:
  alkiln-tests:
    runs-on: ubuntu-latest
    name: Run ALKiln tests
```

  If this is the first environment variable you're adding, add a new line below that, like this:

```yaml
    env:
```

- Whenever you want to add a secret, add a new line under `env:`, indented once, as shown below:

```yaml
    env:
      USER_PASSWORD: "${{ secrets.USER_PASSWORD }}"
```

  `USER_PASSWORD` is just a placeholder in our example. You can name your secrets whatever you want to. Make sure you use the same words as the GitHub secret you made.

- Write your Step and use the names of these secrets as the values.
If you're not worried about keeping the information secure, you don't have to use a GitHub secret - you can just put the value straight into your workflow file like this:

```yaml
MAX_SECONDS_FOR_SERVER_RELOAD: "300"
```

ALKiln will still avoid printing this value and avoid taking screenshots when it is used. There is no way ALKiln can tell which environment variables have sensitive information and which ones are safe.
All together, this section can look similar to this:
```yaml
jobs:
  alkiln-tests:
    runs-on: ubuntu-latest
    name: Run ALKiln tests
    env:
      USER_PASSWORD: "${{ secrets.USER_PASSWORD }}"
      MAX_SECONDS_FOR_SERVER_RELOAD: "300"
```
Things to worry about less
(Moving)
There are some parts of testing that might look less secure than they are.
When you run the tests in GitHub, you can see the logs of the test action's job and you can download the results of the test, or "artifacts". That may seem very exposed, but only people who have permissions on the repository - like administrators, collaborators, and moderators - can see or download that information.
If you are still worried about the logs and artifacts in GitHub, you can delete the logs and you can delete the artifacts.
Your workflow file
(Moving)
Where is it?

Your ALKiln workflow file is in your repository. To find it, go to your `.github` folder, then open the `workflows` folder there. It was probably created when you ran the setup interview and it might be called "run_form_tests.yml" or "alkiln_tests.yml" or something similar.
What does it do?
Among other things, the workflow file:
- Triggers the ALKiln tests when desired, like when you push new code to your package.
- Gives ALKiln inputs that are needed to run your tests.
- Optionally gives ALKiln other inputs and environment variables it can use.
You can also use the whole suite of GitHub's workflow and action functionality to do other things, like creating issues when tests fail.
The following sections probably require prior technical knowledge about GitHub workflow files. Feel free to ask us any questions you might have.
Required inputs
(Moving)
The setup interview should have helped you create these required inputs and their values. They are in the `jobs:` section. They look something like this:

```yaml
with:
  SERVER_URL: "${{ secrets.SERVER_URL }}"
  DOCASSEMBLE_DEVELOPER_API_KEY: "${{ secrets.DOCASSEMBLE_DEVELOPER_API_KEY }}"
```

`SERVER_URL` is the url of the docassemble server the tests should run on.

`DOCASSEMBLE_DEVELOPER_API_KEY` is the API key that you created for the account on your server that will store the Project in the Playground while the tests are being run. You probably created this in the setup interview. Alternatively, your organization admin may have created it.

We recommend keeping the API key a GitHub secret for security reasons, but the server url can be typed in plainly. For example, `SERVER_URL: "https://apps-test.example.com"`.
Optional inputs
(Moving - some optional inputs) (Moving - other optional inputs)
There are also optional inputs that can go under `with:`.

`MAX_SECONDS_FOR_SETUP` lets you set how long to allow ALKiln to try to pull your interview package's code into the docassemble Playground. The default is currently 120 seconds (2 minutes).

`ALKILN_VERSION` can be useful for security. It lets you control what npm version of ALKiln you're using. Read about that in the "ALKiln's npm version" security section.
If you're using a GitHub repository or organization secret, it will look very similar to the required inputs described above. Here the values are in context:

```yaml
with:
  SERVER_URL: "${{ secrets.SERVER_URL }}"
  DOCASSEMBLE_DEVELOPER_API_KEY: "${{ secrets.DOCASSEMBLE_DEVELOPER_API_KEY }}"
  MAX_SECONDS_FOR_SETUP: "${{ secrets.MAX_SECONDS_FOR_SETUP }}"
  ALKILN_VERSION: "${{ secrets.ALKILN_VERSION }}"
```

Other than `DOCASSEMBLE_DEVELOPER_API_KEY`, this information can usually be public. If your organization wants to share the values with multiple repositories, you can still use an organization GitHub secret. If not, you can set them right there in the workflow file.
```yaml
with:
  SERVER_URL: "${{ secrets.SERVER_URL }}"
  DOCASSEMBLE_DEVELOPER_API_KEY: "${{ secrets.DOCASSEMBLE_DEVELOPER_API_KEY }}"
  MAX_SECONDS_FOR_SETUP: "300"
  ALKILN_VERSION: "4.3.0"
```
ALKiln environment variables
Deleted in favor of the SERVER_RELOAD_TIMEOUT_SECONDS input.
There are some environment variables that ALKiln uses internally that don't need to be "inputs". They go under the `env:` section instead of under `with:`.

`MAX_SECONDS_FOR_SERVER_RELOAD` customizes how long your tests wait while the server is reloading. This helps avoid tests failing when they should otherwise be passing.

If you want to use this value in multiple repositories, you can use a GitHub organization secret. It can look something like the below:

```yaml
env:
  MAX_SECONDS_FOR_SERVER_RELOAD: "${{ secrets.MAX_SECONDS_FOR_SERVER_RELOAD }}"
```
You can also set its value without a secret, as this value is probably not sensitive information.

```yaml
env:
  MAX_SECONDS_FOR_SERVER_RELOAD: "300"
```
You can read the GitHub secrets section on setting arbitrary environment variables for more background information.
Your server can reload whenever someone saves a python file, or whenever a developer or another test pulls a package that contains a python file into the Playground. If a test is running while that happens, the page the test is trying to open might take too long to load and the test will fail. This is unhelpful because the test stops part way through and you cannot tell whether it would have failed or passed on its own.
The next test might then fail during the same reload. That becomes a problem when a chain of tests fail because of one reload.
ALKiln cannot stop the first test from failing, but `MAX_SECONDS_FOR_SERVER_RELOAD` can help prevent the next tests from failing during the same reload.
Arbitrary environment variables
(Moving)
See the GitHub secrets section on setting arbitrary environment variables. The section describes using GitHub secrets and using plain values.
Make a GitHub issue when tests fail
(Moving)
- Go to your GitHub repository.
- Tap on the `.github` folder, then on `workflows`, then on the YAML file in there that runs the ALKiln tests.
- Tap to edit the file.
- Add the below code under the last line of text in the file.
- Avoid adding any new GitHub secrets to your repository for this.
```yaml
- name: If tests failed create an issue
  if: ${{ failure() }}
  uses: actions-ecosystem/action-create-issue@v1
  with:
    github_token: ${{ secrets.github_token }}
    title: ALKiln tests failed
    body: |
      An ALKiln test failed. See the action at ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}.
    labels: |
      bug
```
Avoid changing the value of `github_token` and avoid creating a new secret for it. The variable `secrets.github_token` is a value that your repository has by default.
If you use the code above, the GitHub issue will contain a link to the workflow's action page itself.
You can edit the values of `title`, `body`, and the `bug` label to customize the issue.
If you've run the Setup interview more recently, you will already have this code in your file, though it will be inactive. You just have to remove the comment symbols (`#`) from those lines of code.
Triggering GitHub tests
(Moving)
By default, the ALKiln setup interview makes sure that the tests are triggered when someone pushes code to the repository - for example, when they press the 'Commit' button in docassemble. It also makes sure the tests can be triggered manually.
There is a file in your repository that GitHub uses to trigger this workflow. To see it:
- Go to your GitHub repository.
- Tap on the `.github` folder, then on `workflows`, then on the YAML file in there that runs the ALKiln tests.
- You can tap to edit the file if you want.
The object in your workflow file that defines the triggers is named `on`. You can think of it in a sentence - "on `push`, run this workflow". It looks something like this:

```yaml
on:
  push:
  workflow_dispatch:
    inputs:
      tags:
        required: False
        description: 'Optional. Use a "tag expression" to specify which tagged tests to run (https://cucumber.io/docs/cucumber/api/#tag-expressions)'
```
The keys that trigger the workflow (e.g. `push` and `workflow_dispatch`) can be in any order.
Scheduled tests
(Moving)
You can also run these tests on a schedule - daily, weekly, monthly, or on any other interval. To run the tests on a schedule, you must add `schedule` to the `on` object in your workflow file and give it an "interval" value. For example:
```yaml
on:
  push:
  schedule:
    - cron: '0 1 * * TUE'
  # other stuff
```
The GitHub docs can tell you more about triggering workflows on a schedule. If you want to change the interval, these examples of cron syntax can help a lot.
If you've run the Setup interview more recently, you will already have this code in your file, though it will be inactive. You just have to remove the comment symbols (`#`) from those lines of code.
Pull requests
(Moving)
You can also trigger tests to run when someone makes a pull request in GitHub.
In the `on` object in your workflow file, add the `pull_request` trigger key. For example:

```yaml
on:
  push:
  pull_request:
  # other stuff
```
All together
After adding all those, your whole `on` section might look something like this:

```yaml
on:
  push:
  workflow_dispatch:
    inputs:
      tags:
        required: False
        description: 'Optional. Use a "tag expression" to specify which tagged tests to run (https://cucumber.io/docs/cucumber/api/#tag-expressions)'
  pull_request:
  schedule:
    - cron: '0 1 * * TUE'
```
FAQ
(Moving)
I have a private GitHub repository. Can I use this testing framework?
(Moving)
Yes, you can use ALKiln with a private repository, though you have to do a bit of extra work.
- Pick a GitHub account that has permission to change the private repository.
- Make sure the account on your docassemble server that you linked to the tests is integrated with the GitHub account. See docassemble's documentation on integrating a GitHub account.
As that documentation explains, no two accounts on a docassemble server can be connected to the same GitHub account.
Also, there are some limits on the amount of time private repositories can run workflows: https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions
How do I add a new test file?
(Moving)
Go to your Playground > the dropdown Folders menu > Sources.

Add a new file that ends in the extension `.feature`. For example: `has_children.feature`.
Add this to the blank file:

```gherkin
Feature: A description of the category of the tests you'll write in this file

Scenario: The specific situation that this test is written for
  Given I start the interview at "name_of_the_interview_file_to_test.yml"
```

- Make sure that `Feature:` and its description are on the first line.
- Each test starts with a `Scenario:` and its description.
- `Given I start the interview...` is the first line under `Scenario`.
After that, you can add a Story Table or other Steps that will test your code. Save the new files in your Playground's Packages page and commit them to GitHub. From then on, GitHub will run the tests whenever you commit, or push, to GitHub.
How do I add a new test to an existing test file?
To add a new test to the existing file you need:

- The keyword `Scenario:` with a description.
- The Step that loads the interview's page: `Given I start the interview at`. You must use it before you fill out any fields.
Example:

```gherkin
Scenario: I allow unsupervised visitation
  Given I start the interview at "restraining_order.yml"
```
After that, you can add the story table or other Steps that will test your interview.
(Moving)
Make sure to leave the `Feature:` line at the very top of the file. Avoid repeating the `Feature:` key anywhere else in the file.
Why should I write a Scenario description?
(Moving)
`Scenario` descriptions affect the names of error screenshot files and report headings, so try to write something unique that you will recognize later. It will help you identify what happened with each test.
When do tests run?
(Moving)
GitHub tests run when you commit your files to GitHub. That might be when you hit the 'Commit' button on the Packages page. It can also happen when you edit, add, or delete files in GitHub itself.
If you know how to use GitHub actions, you can also run the tests manually from GitHub actions with some more options.
Feedback
(Moved)
Give us feedback and propose ideas by making issues at https://github.com/SuffolkLITLab/ALKiln/issues.
Built with
(Moved)
ALKiln uses cucumberjs, puppeteerjs, and cheerio, and runs its assertions using the mocha and chai libraries.

Even though it is built using cucumberjs, this framework has a different, less lofty purpose. cucumber focuses on BDD (behavior driven development); this framework mostly deals with regression testing and other conveniences.
Repositories
(Moved)
- ALKilnInThePlayground is the package that will let you run tests directly on your server
- ALKiln's own repository
- The developer test setup interview's repo