Writing ALKiln tests
WIP (Work in progress)
This page describes your test files, how to write tests, and other useful information. To see the table of contents on a small screen, tap the dropdown menu at the top of the page. On a wider screen, see the column on the right-hand side of the screen.
Do I need to read this?​
If you are writing tests or need to learn about test files, this page is for you. It is easier to write the tests if you have access to the interview YAML code and understand a bit about docassemble.
Quick refresher​
If you are coming back and just need some resource reminders, here is a short list:
- You can see what your first test might look like.
- You can generate a first draft for a more complex test with the help of the ALKiln Story Table Step generator.
- If your interview is using generic objects or index variables, you must include special HTML in your interview.
- Every time you change or add a required field variable, you must update the tests that need that variable.
- You write and edit `.feature` test files in your package's Sources folder.
- You can check the syntax of your test file by using an editor like the editor at AssertThat.
- If you are using GitHub+You™ or GitHub Sandbox™ tests, those tests will run when anyone commits to GitHub.
- By default, each Step or page must be completed in at most 30 seconds. You can change that with the "max seconds" Step.
- ALKiln creates test reports and pictures. For example, ALKiln saves pictures and the HTML of pages that caused errors. In GitHub, those files are in the zip file in your workflow's artifact section.
- There are some field types that ALKiln cannot yet handle, including some `object`-type fields.
- The tests use the Gherkin language and syntax.
Here is an example of a more complex test for a quick refresher on some of ALKiln's features:
@children
Feature: I have children
@fast @child_benefits
Scenario: child has benefits
Given I start the interview at "some_file_name.yml"
And I get to the question id "benefits" with this data:
| var | value |
| children.there_are_any | True |
| children.target_number | 2 |
| children[0].name.first | Cashmere |
| children[0].name.last | Davis |
| children[1].name.first | Casey |
| children[1].name.last | Davis |
When I set the var "benefits['SSI']" to "True"
And I tap to continue
Then the question id should be "download"
And I download "some_motion.pdf"
Writing test Steps overview​
Most of the code in test files is code for ALKiln Steps.
ALKiln Steps are how you give your instructions to the ALKiln robot. Steps are made of commands that ALKiln has created for you. Those commands often need information from you. They are like fill-in-the-blank sentences - a sort of Mad Libs for your ALKiln robot. For example:
And I set the variable "likes_bears" to "True"
This Step is made of three parts:
- `And` is one of the keywords that can start a Step.
- `I set the variable "___" to "___"` is an ALKiln command with blanks that you need to fill in.
- `likes_bears` and `True` are the information you put in the blanks so that the ALKiln robot knows what answer to give to a question.

That command description can also be written as `I set the variable "<variable name>" to "<value>"`. This tells you what information you should put in each of these blanks.
In this case there are two blank spaces you should fill. In the first blank, you should put the name of the variable that the field will set. In the second blank, you should put the value that you want the variable to have.
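For example, here is one way to fill both blanks; the variable name and value are hypothetical and would need to match a field in your own interview:
And I set the variable "user_name" to "Jordan"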
Each of your tests - a `Scenario` or a `Scenario Outline` - usually contains many Steps. Any Step can be used as many times as you want anywhere in a test. If you follow good code-writing practices, most Steps work in any human language, so you can test translations of your interviews.
Steps to start tests​
Authors often use these Steps to start their tests, though these Steps can actually be used anytime in a test.
I start the interview at "<file name>"
​
Use the "start interview" Step to start a new session of an interview.
The Step format:
Given I start the interview at "<YAML file name>"
The blank to fill in here is `<YAML file name>` - the name of the YAML file that starts the interview.
Example with just the file name:
Given I start the interview at "favorite_bears"
Example with the file extension:
Given I start the interview at "favorite_bears.yml"
Example with url arguments (url_args):
Given I start the interview at "favorite_bears.yml&user_id=123&from=theinterwebs"
You can also instead use the full url of the interview, though we recommend you avoid that. For example, one advantage to using the file name instead of the full url is that you can then still use the GitHub Sandbox™ test method to run those tests on GitHub.
You must include the "start interview" Step in each Scenario before setting any fields.
I sign in with the email "<username env var>" and the password "<password env var>"
​
Use the "sign in" Step to sign into your docassemble server before going to an interview with the "start interview" Step. If you run your tests on GitHub, remember to use GitHub secrets to store this sensitive information.
The Step format:
Given I sign in with the email "<username env var>" and the password "<password env var>"
The two blanks to fill in are `<username env var>` and `<password env var>`. Each of them must be the name of an environment variable that you have already added to your workflow file.
The `<username env var>` variable should contain the username of a dedicated account on your docassemble server. The `<password env var>` variable should contain the password of that account.
To use this Step:
- When you create a user for ALKiln on your docassemble server, note the email address and the password for that account.
- Create GitHub secrets or config values to store the email address and password.
Example Step:
Given I sign in with the email "ALKILN_USER_EMAIL" and the password "ALKILN_USER_PASSWORD"
"ALKILN_USER_EMAIL"
and "ALKILN_USER_PASSWORD"
are just examples of environment variable names. You can use any names you want.
The relevant section of your workflow file would look like this:
env:
  ALKILN_USER_EMAIL: "${{ secrets.ALKILN_USER_EMAIL }}"
  ALKILN_USER_PASSWORD: "${{ secrets.ALKILN_USER_PASSWORD }}"
You can have as many of these accounts as you want.
If you run this in a GitHub+You™ test, you must use GitHub secrets to set the environment variables in order to keep the information secure. You should also use a separate account for signing in like this. Never use a real person's account or information.
ALKiln does avoid taking pictures or downloading the HTML for this Step, even when the test has an error on that page, so that information is still protected. ALKiln also avoids printing the value of these variables anywhere in the report or in the console. Even so, it is possible for you to expose this information.
I sign in with the email "<username env var>", the password "<password env var>", and the API key <api key env var>
​
Use the "sign in and clean up" Step to sign into your docassemble server before going to an interview with the "start interview" Step, then delete the interview from the account's "My interviews" list. If you run your tests on GitHub, remember to use GitHub secrets to store this sensitive information.
This is very similar to the "sign in" Step above. See that for basic information. The difference here is that you can also make sure that ALKiln deletes the interviews this user creates.
The new blank to fill in is `<api key env var>`. It is the name of the environment variable containing an API key you create for the account that signs in.
Example Step:
Given I sign in with the email "ALKILN_USER_EMAIL" and the password "ALKILN_USER_PASSWORD", and the API key "ALKILN_USER_API_KEY"
"ALKILN_USER_EMAIL"
, "ALKILN_USER_PASSWORD"
, and "ALKILN_USER_API_KEY"
are just examples of environment variable names. You can use any names you want. See the "sign in" Step for more details.
To set up the API key, do the following:
- Ask an admin on your docassemble server to create an API key for themselves in their user profile.
- Use that admin API key to create an API key for your user by using the docassemble API endpoint. If you lose this API key, you can make a new one. You can do that in your terminal or command prompt with this code:
  $(curl -X POST \
  <your server url>/api/user/<user's email>/api \
  -H 'Content-Type: application/json' \
  -d '{
  "key": "<your admin API key>",
  "name": "alkiln_key",
  "method": "none"
  }')
  - `<your server url>` is the url of your server in this format: https://your-server-url.org.
  - `<user's email>` is the email of ALKiln's user account you name in your sign in Step.
  - `<your admin API key>` is the API key your admin made.
- Press enter.
- Copy the result you get from that curl request.
- In GitHub, put that value in a GitHub secret. Avoid including the outer quotation marks.
- Add that secret to your environment variables. Here's a GitHub workflow file example:
  env:
    ALKILN_USER_EMAIL: "${{ secrets.ALKILN_USER_EMAIL }}"
    ALKILN_USER_PASSWORD: "${{ secrets.ALKILN_USER_PASSWORD }}"
    ALKILN_USER_API_KEY: "${{ secrets.ALKILN_USER_API_KEY }}"
- Write this "sign in and clean up" Step in your `.feature` test file.
If you run this in a GitHub+You™ test, you must use GitHub secrets to set the environment variables in order to keep the information secure. You should also use a separate account for signing in like this. Never use a real person's account or information.
ALKiln does avoid taking pictures or downloading the HTML for this Step, even when the test has an error on that page, so that information is still protected. ALKiln also avoids printing the value of these variables anywhere in the report or in the console. Even so, it is possible for you to expose this information.
I go to "<url>"
​
Use the "go anywhere" Step to go to any arbitrary url.
The Step format:
Given I go to "<url>"
The blank to fill in here is `<url>` - any valid url.
Example:
Given I go to "https://retractionwatch.com"
Test configuration Steps​
These Steps can change your settings for your tests. For example, the maximum amount of time one Step or page can take to finish.
the maximum seconds for each Step is <number>
​
Use the "max seconds" Step to set the maximum amount of time one Step can take to finish before it will fail. This is also the maximum amount of time ALKiln will spend trying to fill out the fields on a page before the test fails.
The default maximum time is 30 seconds.
The Step format:
Given the maximum seconds for each Step is <number>
The blank to fill in is `<number>` - the number of seconds to allow a Step to run, or a page to be filled, before the test fails.
Example:
Given the maximum seconds for each Step is 200
You can set the max seconds as your first Step to give a slow-loading interview more time to load. If you are worried about increasing the time too much, you can immediately decrease the time after the interview loads. For example:
Given the maximum seconds for each Step is 200
And I start the interview at "favorite_bears.yml"
And the maximum seconds for each Step is 30
You can also increase this setting before you try to download a large file. ALKiln spends twice the value of this setting to download a file. You can read more about the timing in the "download" Step.
You can use this Step to change the maximum seconds setting at any time in your test.
Increasing the maximum time can have down-sides. Some kinds of failures take the maximum amount of time possible. Since each failed test is re-run, those failing tests will take twice the amount of time that you set for your maximum Step time.
Sometimes authors set the maximum seconds to a shorter time for most of their Steps and only lengthen the time for Steps that need more time. This can speed up failing tests.
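For example, here is a minimal sketch of that pattern; the file name and timings are hypothetical, and the page given extra time is assumed to be a slow one, such as a page that assembles a large document:
Given the maximum seconds for each Step is 15
And I start the interview at "favorite_bears.yml"
# This next page is slow, so give it more time
And the maximum seconds for each Step is 120
And I tap to continue
# Back to the shorter limit for the rest of the test
And the maximum seconds for each Step is 15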
I check all pages for accessibility issues
​
Use the "all accessible" Step to turn accessibility checking on.
Example:
Then I check all pages for accessibility issues
After this Step, ALKiln will check every following page for accessibility issues until the end of the test.
If there are problems with the accessibility of the page, ALKiln will continue to the end of the test, but it will save a separate JSON accessibility report file for a page that fails this check. For example, if text is too small or if contrast between the text and its background is too low. If ALKiln found problems, the test will fail at the end.
ALKiln uses aXe-core to check for problems. You can check the documentation there to understand the failure data better. A human can do a much more thorough job of improving accessibility, but this can be a start.
This is different than the "accessibility" Step, which only checks the current page.
Interactive Steps​
You can use some Steps to change what is on the page by filling in fields, pressing buttons or tabs, or performing other actions.
The Story Table Step​
Use the "Story Table" Step to set values of multiple fields on multiple pages. It is the Step many of our authors work with the most. In our opinion, it is the most flexible way to fill in fields in your forms, unlike the linear "I set" Step. This makes a test easier to maintain.
The Story Table Step is made of a sentence followed by a table. The data in the table is a snapshot of the user who is filling out the form. Each table row describes one fact about the user - one variable in the interview. It has this format:
And I get to the question id "<target question id>" with this data:
| var | value |
| <variable name> | <value to set> |
| <variable name> | <value to set> |
As you can see, there are three types of blanks you need to fill in for this Step:
- `<target question id>` is the id of the `question` block where you want ALKiln to stop using this table.
- `<variable name>` is the name of an interview variable for a form field. Each table row needs one of these in the first column.
- `<value to set>` is the value to set for this row's variable name.

The table must include all the variables and values necessary to get a user from the current page to the page with the given question id. The table must have two columns with the headings `var` and `value`.
An example table:
And I get to the question id "brought scissors" with this data:
| var | value |
| last_haircut_date | today - 730 |
| wants_virtual_haircut | True |
| user_name | Jaime |
| intro_screen_accepts_terms | True |
This Step is flexible, which makes the test easier to maintain. The order of the rows is unimportant. That means you can edit the order of your pages or move fields to different pages without worrying about breaking the test.
ALKiln sets a table variable every time it finds that variable on a page, no matter how many times that variable appears in your interview.
Avoid using `.there_is_another` in your Story Table. Read about using `.target_number` instead.
Things to note
You can use the Story Table generator to generate a first draft of a full test file.
If your interview uses generic objects (`x`) or index variables (`i`, `j`, `k`, etc.), you must have special HTML in your interview to set the values of variables.
Right now, Story Tables are unable to use environment variables to fill in answers, so you cannot use them to set sensitive information like passwords.
The Story Table does not care about extra rows that never get used. This can make Story Tables easier to write, but it also means that the tests will technically "pass" even if variables that should get set are never filled in.
That can be a sign that a question that the user should have seen got skipped or that there is a typo or mistake in the test.
Because of this, the ALKiln report shows a list of unused rows for each table. You can check the report of passing tests to make sure none of the important rows got skipped.
How this works, for the curious folks out there
On each interview page, ALKiln:
- Checks if the page has the `<target question id>` you gave it at the start
- If the page does have the `<target question id>`, it stops the Step
- Otherwise, it looks at each `<variable name>` in the `var` column to see if it can find that name in any fields
- For each `<variable name>` it finds, it tries to answer the field using the matching `<value to set>`
- On pages with `show if` fields, it waits to see if a new field appears
- If one does appear, it looks through each `<variable name>` again
- It sets any value it can and repeats this process
- When no new fields get revealed, ALKiln tries to continue to the next page
- If it is unable to continue, it logs an error and stops
- Otherwise, it repeats the above
ALKiln sets a table variable every time it finds that variable on any page, no matter how many times that variable appears in your interview.
I set the variable "<variable name>" to "<value to set>"
​
Use the "I set" Step to fill in one value on one page.
If your interview uses generic objects (`x`) or index variables (`i`, `j`, `k`, etc.), you must have special HTML in your interview to set the values of variables.
The format of the Step:
And I set the variable "<variable name>" to "<value>"
There are two types of blanks you need to fill for this Step. The first set of quotes in this Step contains the name of the variable you need to set. The second set of quotes contains the value you want to set. This is the same as the Story Table's `var` column and `value` column.
This Step is "linear". That is, unlike Story Table rows, these "linear" variable-setting Steps have to be in the correct order and you must make sure your test has reached the exact page that has the field. Otherwise your test will fail.
Example fields:
code: |
  users[0].favorite_bear.name.first
  users[0].favorite_bear.type
---
question: Name of your favorite bear
fields:
  - Bear name: users[i].favorite_bear.name.first
  - Why is this one your favorite?: users[i].favorite_bear.why
    input type: area
---
question: Favorite bear type
fields:
  - Pick one: users[i].favorite_bear.type
    datatype: radio
    choices:
      - Black bear: wild_bear
      - Teddy bear: cuddly_bear
      - Jaime: maybe_not_a_bear
Example linear "I set" Step:
And I set the variable "users[0].favorite_bear.name.first" to "Ultraman"
And I set the variable "users[0].favorite_bear.why" to "Why not?"
And I tap to continue
And I set the variable "users[0].favorite_bear.type" to "cuddly_bear"
You can use this Step to upload files.
I set the variable "<variable name>" to the GitHub secret "<environment variable name>"
​
Use the "secret value" Step to use an environment variable to fill in one value on one page.
Even though the name has "GitHub" and "secret" in it, it can also use environment variables. It is still the only way to use sensitive information in your tests. To do that, you can use environment variables with GitHub secrets.
If your interview uses generic objects (x
) or index variables (i
, j
, k
, etc), you must have special HTML in your interview to set the values of variables.
The format of the Step:
And I set the variable "<variable name>" to the GitHub secret "<environment variable name>"
Here, `<variable name>` is the name of the variable you want to set and `<environment variable name>` is the name of the environment variable that has the value.
Example:
I set the variable "user1_account_password" to the GitHub secret "ALKILN_USER1_PASSWORD"
A lot of authors use this Step to set answers with sensitive information, like an account number. This is because ALKiln is extra careful with the security of these values.
Since workflow environment variables might hold sensitive information, ALKiln avoids taking pictures or downloading the HTML of pages that use environment variables. It even avoids taking pictures when the test has an error on that page and during the sign-in Step. ALKiln avoids printing the value of an environment variable anywhere in the report or in the console log.
This Step is unable to sign into your docassemble server. You must use the "sign in" Step to sign in to your docassemble server.
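For the example above, the matching section of your workflow file might look like this sketch, assuming you stored the value in a GitHub secret with the same name:
env:
  ALKILN_USER1_PASSWORD: "${{ secrets.ALKILN_USER1_PASSWORD }}"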
I tap to continue
​
Use the "continue" Step to tap the button to continue to the next page. If the continue button sets a variable, this Step will set that variable to True
. Story Table Steps handle continuing by themselves.
Example:
When I tap to continue
The test will still continue to the next Step if tapping the continue button fails to go to the next page. Because of this, you can then use the "invalid" Step to write tests that check that your answer validation works correctly. However, if the continue button was supposed to work and the next Step tries to interact with the next page, the test will probably fail.
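For example, here is a minimal sketch of a validation check; the variable name and the page's validation rules are hypothetical:
# This page rejects negative numbers, so the answer should be flagged
And I set the variable "user_age" to "-5"
And I tap to continue
Then I will be told an answer is invalid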
I set the name of "<variable name>" to "<name>"
​
The "name" Step is specifically for the Document Assembly Line 4-part name questions.
Avoid punctuation. We recommend you just use 2 names - the first name and last name - but you can have all these formats:
- Firstname Lastname
- Firstname Middlename Lastname
- Firstname Middlename Lastname Suffix (where the suffix is one of the dropdown suffix choices, like II)
And I set the name of "users[0]" to "Devon Upton"
I set the address of "<variable name>" to "<US address>"
​
The "address" Step is specifically for the Document Assembly Line 4-part address questions. Right now, the "address" Step can only handle the format of a US address.
When I set the address of "users[0]" to "120 Tremont Street, Unit 1, Boston, MA 02108"
Remember the commas.
I download "<file name>"
​
Use the "download" Step to download files so that humans can check that they are correct. After all the tests are done the files will be the artifacts folder of the Scenario (test).
Then I download "file-name.pdf"
Use just the name of the file.
If the file is a PDF file, you can use the "compare files" Step to check that the PDF is correct.
Currently, ALKiln is unable to tell when a file has actually finished downloading, so it runs the download Step for about twice the maximum seconds allowed for a Step, no matter what. If a download takes longer than the current maximum seconds value, the test will not fully download the file.
If you think downloading the file will take more than 60 seconds, use the "max seconds" Step to give the file more time to download. The "download" Step will use the full amount of time that you give it.
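For example, a minimal sketch that gives a large, hypothetical PDF extra time to download:
Given the maximum seconds for each Step is 120
Then I download "long_court_packet.pdf"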
I tap the "<selector>" element and go to a new page
​
Use one of the "tap and navigate" Steps to tap an element on the page, like a button, to go to a new page. If you just want to tap the continue button use the "continue" Step.
The format for variation 1:
And I tap the "<selector>" element and go to a new page
The format for variation 2:
And I tap the "<selector>" element and wait <number> seconds
One blank to fill in for these commands is `<selector>` - a CSS selector for the element you need to tap. It can be any valid CSS selector.
For variation 2, the second blank to fill in is `<number>` - the number of seconds you want ALKiln to wait after tapping the element.
Example of variation 1:
When I tap the "#awesome_page_link" element and go to a new page
Example of variation 2:
And I tap the ".link-container #even_better_page a" element and wait 45 seconds
This is the only way to explicitly wait after navigating to the new page. Some people try to use the "wait" Step, but that will fail. Alternatively, you can use the "max seconds" Step to give your test more time to load new pages in general.
I tap the "<selector>" element and stay on the same page
​
Use the "tap and stay" Step to tap on HTML elements without navigating to a new page. For example, you can tap on collapsible content to show the text inside.
The Step format:
And I tap the "<selector>" element and stay on the same page
The blank to fill in for this command is `<selector>` - a CSS selector for the element you need to tap. It can be any valid CSS selector.
Example:
When I tap the "#an_id .al_toggle" element and stay on the same page
It is a good idea to add a "wait" Step after this Step to let the contents become visible before taking other actions, like pressing on other elements or checking for a phrase. For example:
When I tap the "#an_id .al_toggle" element and stay on the same page
And I wait .7 seconds
I tap the "<tab id>" tab
​
Use the "tap tab" Step to interact with AssemblyLine's ALToolbox tabs. ALKiln will tap on the tab and then wait until the tab contents are fully visible.
The Step format:
When I tap the "<tab html id>" tab
The blank to fill in here is `<tab html id>` - the text of the HTML id attribute of the tab.
`<tab html id>` is not a selector. Unlike the other "tap" Steps, leave out the `#` symbol (like the example shows).
Example:
When I tap the "TabGroup-specific_tab_name-tab" tab
I wait for <number> seconds
​
You can use the "wait" Step after the previous Step has completely finished to tell ALKiln to wait before going to the next Step.
In most cases, this Step is unable to help the test wait longer for a page to finish loading. To give a page time to load, use the "max seconds" Step.
The Step format:
When I wait <number> seconds
The blank to fill in here is `<number>` - the number of seconds to wait. If you want, the number can be less than a second.
Examples:
When I wait .6 seconds
When I wait 150 seconds
When to use the "wait" Step:
If a test sometimes fails because ALKiln says it was unable find a field or element on the page, and other times the test passes, this Step can help. Use "wait" to pause for a moment before trying to interact. This is usually because some elements can take different amounts of time to fully appear.
Use "wait" to let page elements finish moving around. For example, after you use the "tap and stay" Step to expand content. When elements are moving on the page ALKiln has trouble interacting with them.
Use "wait" to wait for a background process to finish - ALKiln is currently unable to detect a background process finishing.
Use "wait" to give external data more time to load.
Setting values​
There are different ways to fill in fields. One is the Story Table Step; another is the "I set" Step. Each has different syntax, but they set values the same way.
If your interview uses generic objects (`x`) or index variables (`i`, `j`, `k`, etc.), you must have special HTML in your interview to set the values of variables.
Be sure to use the actual value of the field, not just the text that the user sees. For example, for the first choice in this field the value would be "wild_bear", not "Black bear". "Black bear" is just the text the user sees - the label.
question: Favorite bear type
fields:
  - Pick one: users[i].favorite_bear.type
    datatype: radio
    choices:
      - Black bear: wild_bear
      - Teddy bear: cuddly_bear
      - Jaime: maybe_not_a_bear
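So, to pick "Black bear" on a page with this field, a test would use the value, not the label, as in this sketch:
And I set the variable "users[0].favorite_bear.type" to "wild_bear"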
Special HTML​
This HTML lets ALKiln set the correct variables and avoid other problems.
Most authors add this extra HTML to the `post` section of their `default screen parts` block. They do that by including the AssemblyLine's al_package.yml file or by adding this code themselves. For example:
default screen parts:
  # HTML for interviews using ALKiln tests
  # If your interview includes AssemblyLine's al_package.yml you can skip this
  post: |
    <div id="alkiln_proxy_var_values"
    data-generic_object="${ encode_name(str( x.instanceName if defined('x') else '' )) }"
    % for letter in [ 'i', 'j', 'k', 'l', 'm', 'n' ]:
    data-index_var_${ letter }="${ encode_name(str( value( letter ) if defined( letter ) else '' )) }"
    % endfor
    aria-hidden="true" style="display: none;"></div>
    <div id="trigger" data-variable="${ encode_name(str( user_info().variable )) }" aria-hidden="true" style="display: none;"></div>
This code will be invisible on the page. No one will see it. If you already have something in your `post:` default screen part, just copy the code and paste it in after whatever you have there. Putting it at the end can avoid messing up other HTML.
Do I need this?
You probably need the "alkiln_proxy_var_values" HTML element if you use generic objects (x
) or index variables (i
, j
, k
, etc) in your interview.
You probably need the "trigger" HTML element if you set restrict input variables
to True
in your docassemble server configuration file.
Why `x` and `i`?
Without the help of the HTML, ALKiln has no way of knowing what the actual names are of variables that use generic objects (`x`) or index variables (`i`, `j`, `k`, etc.).
Why "trigger"?
If a docassemble server's config value of `restrict input variables` is set to True, ALKiln needs the help of the HTML to know what variable a signature Step is setting.
That said, even the "trigger" HTML may not help if the question is triggered by `mandatory: True`.
Text​
Text-type fields are fields where the user needs to type text. The default field type is a text field. Other text fields include those with:
- datatype: date
- datatype: time
- datatype: currency
- datatype: password
- datatype: email
- datatype: phone
- datatype: text
- datatype: integer
- datatype: number

`input type: area` fields are textarea fields and also accept text for their value.
You can see more about `datatype`s and `input type`s in the docassemble docs.
Example text fields:
question: Name of your favorite bear
fields:
  - Bear name: users[i].favorite_bear.name.first
  - Why is this one your favorite?: users[i].favorite_bear.why
    input type: area
Example linear Steps:
And I set the variable "users[0].favorite_bear.name.first" to "Ultraman"
And I set the variable "users[0].favorite_bear.why" to "Why not?\nIt seems ok to like Ultraman."
Example Story Table Step:
| var | value |
| users[0].favorite_bear.name.first | Ultraman |
| users[0].favorite_bear.why | Why not?\nIt seems ok to like Ultraman. |
If an answer needs to have multiple lines, you can still only use one line in your test code. Instead, use `\n` where a new line would be. See the examples above.
Dates​
Dates are a specific kind of text value. ALKiln expects the format ##/##/####. For example, 06/26/2015. For Assembly Line custom three-part date fields, ALKiln will assume the date format is dd/mm/yyyy.
There is also a special ALKiln value named `today`.
Today​
One special ALKiln value you can include is `today`. That will insert the date on which the test is being run. You can also subtract days from, or add days to, `today`.
Example linear Steps:
And I set the value "signature_date" to "today"
And I set the value "court_date" to "today + 20"
And I set the value "minors_birth_date" to "today - 3650"
Example Story Table Step:
| var | value |
| signature_date | today |
| court_date | today + 20 |
| minors_birth_date | today - 3650 |
The last example makes sure that the date is 10 years in the past, ensuring that a minor always stays a minor for that test.
Be careful with `today` math. If you are testing a "boundary" date, pick a date that is well within the boundary to make the test more robust. For example, if you want to test a date that was 10 or fewer days ago, use `today - 9` instead of `today - 10`.
At some point your tests might run overnight and hit midnight right in the middle of the test - after ALKiln fills in the answer, but before the interview finishes. In that situation, if ALKiln answered the question with `today - 10`, the clock would tip into the next day by the end of the interview - 11 days from the filled-in date. Your test would fail incorrectly.
The fact is, your users also might have that exact experience, so keep that in mind as you design your interview. For example, instead of just telling a user that they are or are not eligible, you can tell them the date of the deadline so they too will know about the boundary.
yesno
​
There are a few different `yesno` fields - `yesno` buttons or `yesno` fields like `yesno` checkboxes and `yesnoradio`s.
Example yesno checkbox field:
question: Bears
fields:
  - Do you have any bears?: has_bear
    datatype: yesno
Example of setting the yesno checkbox to "yes" in a linear Step:
When I set the variable "has_bear" to "True"
Example of setting the yesno checkbox to "yes" in a Story Table Step:
| var | value |
| has_bear | True |
noyes
​
Avoid `noyes` type fields. For one thing, using `noyes` will make you edit the code you get from the ALKiln Story Table Step generator more. For another, we've found that humans tend to find those confusing too.
If you do need to use a `noyes` field, you will have to think differently about the value here. Set the variable to `True` if you want the box to be checked. Set it to `False` if you want the box to be unchecked.
Example of making sure a `noyes` checkbox field gets checked in a linear Step:
And I set the variable "likes_bears" to "checked"
or, in a Story Table:
| var | value |
| likes_bears | checked |
yesnomaybe
​
There are a couple of different `yesnomaybe` fields - `yesnomaybe` buttons and `datatype: yesnomaybe` fields.
Example yesnomaybe field:
question: Bears
fields:
  - Do you have any bears?: has_bear
    datatype: yesnomaybe
Example of setting the yesnomaybe radio buttons to "maybe" in a linear Step:
When I set the variable "has_bear" to "None"
Example of setting the yesnomaybe radio buttons to "maybe" in a Story Table Step:
| var | value |
| has_bear | None |
radio
​
Example radio button field:
question: Favorite bear type
fields:
  - Pick one: users[i].favorite_bear.type
    datatype: radio
    choices:
      - Black bear: wild_bear
      - Teddy bear: cuddly_bear
      - Jaime: maybe_not_a_bear
Always use options that have both a label and a value.
Example linear Step:
When I set the variable "users[0].favorite_bear.type" to "cuddly_bear"
Example Story Table Step:
| var | value |
| users[0].favorite_bear.type | cuddly_bear |
checkboxes
​
Checkboxes are one of the most challenging variables to set. Mistakes here might lead to a "variable not found" error, or an ALKiln Story Table Step might just skip the field, causing problems later on.
- The only value a checkbox can have is `True` or `False`.
- In ALKiln, the variable name of the checkbox is usually the same as its docassemble variable name. For example, `good_fruits['Apples']`.
- The names of `object_checkboxes` variables in ALKiln are similar to how docassemble uses them, but they have extra quotes inside the first square brackets. For example: `best_friends['all_friends[0]']`.
An example of a checkbox field:
fields:
  - Out of all your friends, choose your best friends: best_friends
    datatype: checkboxes
    choices:
      - Emmet: emmet
      - Maria: maria
      - Sam: sam
Always use options that have both a label and a value.
The variable name of the checkbox for ALKiln is usually the same as its docassemble variable name. For example, `good_fruits['Apples']`. The only value a checkbox can have is `True` or `False`.
Incorrect:
And I set the variable "best_friends" to "Emmet"
Correct:
And I set the variable "best_friends['Emmet']" to "True"
or, in a Story Table:
| var | value |
| best_friends['Emmet'] | True |
One exception is fields created with an `object` datatype, like `object_checkboxes`. Their ALKiln names are similar to their docassemble names, but they need extra quotes. For example, look at this field:
fields:
  - Out of all your friends, choose your best friends: best_friends
    datatype: object_checkboxes
    choices: all_friends
Incorrect:
And I set the variable "best_friends" to "all_friends"
And I set the variable "best_friends" to "all_friends[0]"
And I set the variable "best_friends[all_friends[0]]" to "True"
Correct:
And I set the variable "best_friends['all_friends[0]']" to "True"
or, in a Story Table:
| var | value |
| best_friends['all_friends[0]'] | True |
Especially notice the quotes just inside the first square brackets.
file
​
You can upload one or more files by setting a variable to a comma-separated list of file names to upload. You must store files that you plan to upload in your "Sources" folder along with your tests.
The format for a linear Step:
And I set the variable "<variable name>" to "<comma separated list of a file or files>"
The format for a Story Table Step:
| var | value |
| <variable name> | <comma separated list of a file or files> |
Example of a linear Step to upload 1 file:
And I upload "refutable_evidence.jpg" to "evidence_files"
Example of a linear Step to upload multiple files:
And I upload "irrefutable_evidence.jpg, refutable_evidence.pdf" to "evidence_files"
Example of a Story Table Step to upload 1 file:
| var | value |
| evidence_files | refutable_evidence.jpg |
Example of a Story Table Step to upload multiple files:
| var | value |
| evidence_files | irrefutable_evidence.jpg, refutable_evidence.pdf |
To upload more than one file you must separate their names with a comma.
signature
​
Signatures look a bit funny in ALKiln at the moment, but they work fine.
Example of a docassemble signature page:
question: Sign your name
signature: user.signature
under: ${ user }
Example of linear Steps:
And I sign
# or
And I sign with the name "<signatory's name>"
The first version will show a single dot as the signature. The second version will show the name and also have a dot.
In a Story Table, a signature variable has nothing in its `value` column. Example of a Story Table Step:
| var | value |
| user.signature | |
All Story Table signatures are a single dot.
.there_is_another
​
In a Story Table, setting the value of `.there_is_another` for a docassemble loop is more complicated than you might expect. At first glance, to fill in the answers for two children some authors try to write the following:
| var | value |
| kids.there_are_any | True |
| kids.there_is_another | True |
Instead of using `.there_are_any` and `.there_is_another`, always use `.target_number` to set the number of items that you want to include in the test.
For 3 children:
| var | value |
| kids.target_number | 3 |
For no children:
| var | value |
| kids.target_number | 0 |
Observational Steps​
You can use some Steps to look at what is on the page or compare data. You can use these to make sure the interview has done what you want it to.
I check the page for accessibility issues
​
Use the "accessibility" Step to check the current page for problems with accessibility.
Example:
Then I check the page for accessibility issues
If there are problems with the accessibility of the page, ALKiln will continue to the end of the test, but it will save a separate JSON accessibility report file for a page that fails this check. For example, if text is too small or if contrast between the text and its background is too low. If ALKiln found problems, the test will fail at the end.
ALKiln uses aXe-core to check for problems. You can check the documentation there to understand the failure data better. A human can do a much more thorough job of improving accessibility, but this can be a start.
This is different than the "all accessible" Step, which checks all the subsequent pages for accessibility issues.
I expect the baseline PDF "<baseline file>" and the new PDF "<new file>" to be the same
​
You can compare an example PDF (sometimes called a baseline) to the PDF your interview created to make sure they are the same. You must save the baseline PDF in your "Sources" folder along with your tests. Then use the "download" Step to download the file your interview created. You can then compare the two PDFs.
Then I expect the baseline PDF "baseline.pdf" and the new PDF "download.pdf" to be the same
First ALKiln will compare all of the text in the PDFs. Then ALKiln will compare the fillable fields in the PDFs. If anything is different, ALKiln will print the difference in the report. You can then use the text in the report to search through the PDFs and find the difference there.
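For example, a minimal sketch that combines the "download" and "compare files" Steps; the file names are hypothetical, and the baseline PDF is assumed to already be in your Sources folder:
Then I download "motion_to_dismiss.pdf"
And I expect the baseline PDF "baseline_motion_to_dismiss.pdf" and the new PDF "motion_to_dismiss.pdf" to be the same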
I SHOULD see the phrase "<page text>"
​
Use the "phrase" Step to check that specific text is visible on the page. You can also make sure that text that should NOT be visible is indeed absent.
Getting some text characters right can be tricky on interview pages. If ALKiln tells you it cannot find a phrase, read about possible reasons for missing phrases in the troubleshooting documentation.
The Step format for variation 1 - the phrase does exist on the page:
Then I SHOULD see the phrase "<page text>"
The Step format for variation 2 - the phrase is absent from the page:
Then I should NOT see the phrase "<page text>"
The blank to fill in here is <page text>
- the phrase to test.
Examples of both variations:
Then I SHOULD see the phrase "Yay! This page's text is great!"
Then I should NOT see the phrase "Boo! This page's text is bad!"
The `<page text>` must be inside double quotation marks and must never itself contain double quotation marks. Since docassemble interview text usually has special quotation marks, that is usually easy. See the troubleshooting section on missing phrases that talks about special characters.
Unlike many other Steps, checking phrases is language specific. That is, if you want to test translations as well, you will have to use an "Examples" table in your test.
the text in the JSON variable "<variable name>" should be
​
Use the "compare JSON" Step to check that a variable on the page has a specific text value. This is a multi-line step. It is a little limited at the moment.
The Step format:
Then the text in the JSON variable "<variable name>" should be
"""
<value>
"""
The first blank to fill in for this Step is `<variable name>`, which is the name of the variable that is in the JSON variables and values data of the page. The second blank to fill in is `<value>`, which is the value you want the variable to have. The value must be a string (text).
This step is unable to check values of nested objects. For example, it can test the value of a variable like `user_affidavit`, but not an attribute like `user.affidavit`.
Example for one line of text:
Then the text in the JSON variable "nickname" should be
"""
Ursus
"""
Example for text with multiple lines:
Then the text in the JSON variable "user_affidavit" should be
"""
Three quotes then some affidavit text.
The text can be multi-line.
Then close with three quotes.
"""
This Step will save a copy of all of the page's JSON variables to a file that starts with 'json_for' followed by the current question's id. The JSON variables are the same variables that you would see in the docassemble sources tab.
Avoid JSON Steps when you have added sensitive information to your interview.
Variable values only show up in the JSON on the next page - the one after the page where you fill in the answers.
I take a picture
​
The "picture" Step will not only take a picture of the page, it will also download an HTML file of the page. The associated HTML file will have the same name as the picture, but will end with .html
. ALKiln will save those files in the Scenario folder of the test results.
You can open the HTML file in your browser to interact with the page and inspect the page's HTML further. You should expect this page to look and act differently than the original page - ALKiln is unable to get the full code that controls the behavior and styles of the original page. However, in the HTML, you can look at what options were in a dropdown, examine some accessibility errors, or edit the HTML to expand some collapsed content. In fact, some pages are too long for the picture to show all the content. In the HTML, you can see that content.
Example:
Then I take a picture
Since workflow environment variables might hold sensitive information, ALKiln avoids taking pictures or downloading the HTML of pages that use environment variables. It even avoids taking pictures when the test has an error on that page and during the sign-in Step. ALKiln avoids printing the value of an environment variable anywhere in the report or in the console log.
I get the page's JSON variables and values
​
Use the "get JSON" Step to add a page's JSON variables to the final test report.
Example:
And I get the page's JSON variables and values
The JSON variables are the same variables that you would see in the docassemble sources tab. It is a bit messy, but you do get to see all the variables.
Avoid JSON Steps when you have added sensitive information to your interview.
Variable values only show up in the JSON on the next page - the one after the page where you fill in the answers.
I will be told an answer is invalid
​
Use the "invalid answer" Step to check that the user got feedback that one or more of their answers were invalid.
Example:
Then I will be told an answer is invalid
the question id should be "<id>"
​
Use the "question id" Step to make sure the test has gotten to the right page.
The Step format:
Then the question id should be "<question id>"
The blank to fill in here is `<question id>` - the id of the `question` block for this page. To get the right `<question id>`, copy the `id` value from that block and paste it into your test.
Example:
Then the question id should be "some YAML block id!"
This Step can help humans keep track of what page the tests are on. It will also show up in the test reports and can help you see where things went wrong.
Generate a Story Table​
You can use the story table generator to generate a Scenario draft. Depending on your interview's code you might need to edit the table for it to work properly, but it can give you a good start.
Follow these instructions to use the generator:
- Ensure your server config is set up to show debug info.
- Run your interview manually until you reach the page you want the story table to get to.
- Open the "source" display of the interview. Currently, that looks like angle brackets, `</>`, in the header of the page.
- Note the `id` of the page.
- Tap the "Show variables and values" button. It will open a new tab showing a big JSON object.
- Copy all the text on that page.
- Go to the story table generator.
- Paste the JSON into the text area there, as instructed.
- Use the other input fields to help finalize your Scenario, including the question `id`.
- Download the file the generator created. It should end in `.feature`.
- Upload that `.feature` file into your "Sources" folder in your Project.
Test file anatomy​
Your test files have the code that tells ALKiln how to answer questions in your interview and do other things. The code may look different than other code you have seen, but it is still code. It is still rigid and needs you to use the right syntax and spelling.
You should save your test files in your package's "Sources" folder. If you used the ALKiln setup interview, there will already be a test there that just loads the interview.
In test files, you use Gherkin syntax and keywords5 with specific "Steps" that the ALKiln framework has created for docassemble interviews.
Add a new test file​
Go to the Playground of your Project. Tap the "Folders" dropdown menu. Tap on Sources. Add a new file that ends in the extension `.feature`. For example, has_children.feature.
Add this to the blank file:
Feature: A description of the category of the tests you will write in this file
Scenario: The specific situation that this test is written for
Given I start the interview at "name_of_the_interview_file_to_test.yml"
Change the parts you need to change. Example:
Feature: I have children
Scenario: I allow unsupervised visitation
Given I start the interview at "protective_order.yml"
You can then add the code for your tests.
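For example, building on the file above, you might add a Story Table Step; this is only a sketch - the question id and variable names are hypothetical and must match your own interview:
Feature: I have children
Scenario: I allow unsupervised visitation
Given I start the interview at "protective_order.yml"
And I get to the question id "visitation preferences" with this data:
| var | value |
| user_name | Jordan |
| user_has_children | True |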
Some rules​
Here are some rules about test files:
- You can have as many test files as you want.
- Each file can have one or more tests, called Scenarios or Scenario Outlines.
- Each file's name must end in `.feature` because that is the extension of that type of file. For example, user_has_children.feature.
- Each file must start with the keyword `Feature:`, or a tag followed by a new line and the `Feature:` keyword.
- Each file must contain only one `Feature:` keyword.
- You must use Gherkin syntax for your tests. Gherkin's own documentation is the best place to read about Gherkin syntax, with some exceptions that you can read about below. Ignore notes about "step definitions". Those are part of what ALKiln takes care of.

To troubleshoot Gherkin syntax problems in your tests, use an editor like the one at AssertThat. It will mark lines that have incorrect syntax. For example, if you have multiple `Feature` keywords in your file, the editor should mark the second one as incorrect.
Gherkin keywords​
ALKiln only allows you to use some Gherkin keywords. Those are:
- `Feature:` - each file can only use one `Feature:` keyword.
- `Scenario:`/`Example:` - We recommend you use `Scenario`. Make the `Scenario` descriptions unique because ALKiln will use them in the names of the reports it creates for you. This keyword is different from the plural `Examples`/`Scenarios`.
- `Scenario Outline:` - Like a `Scenario`, but it can contain `Examples`.
- `Examples:`/`Scenarios:` - We recommend you use `Examples`. This can help you test translations, as shown below. When a `Scenario Outline` has an `Examples` table, the test will run at least once for each row in the table. This keyword is different from the singular `Scenario`/`Example`.
- Any of these keywords that can start a Step - `Given`, `When`, `Then`, and `And`. They look different, but that is for us humans. They can be used interchangeably.

It is useful to give a `Scenario` or a `Scenario Outline` a unique description that you can remember.
ALKiln also lets you use:
- `@` - Tags.
- `#` - Comments. They are a tiny bit different than Python comments. They always have to be on their own line. The line can begin with zero or more spaces. You can use comments anywhere in your test file.
- `|` - Data tables.
- `"""` - Doc strings.
Sample test file​
We usually write test text as if it is from the perspective of the user. For example, "I wait" and "I start".
Some Steps, like the "phrase" Step, are language-specific. You can still test translations with those Steps, though, if you use Examples
with Scenario Outline
. See the example below.
Here is an example of a file that uses most of these features. The name of this file is children.feature
.
@has_children
Feature: I have children
# This Scenario uses the Story Table Step. It lets you list the names of
# the variables in any order you want.
@disallow_visitation
Scenario: I disallow visitation
Given I go to the interview at "protective_order.yml"
And I get to the question id "what kind of visitation?" with this data:
| var | value |
| needs_allow_visitation | False |
| user_name | Jordan |
| user_has_children | True |
Then I should see the phrase "Jordan"
# This test will run once for each row in the `Examples` table. The text
# between the "<" and ">" is the name of a column in the `Examples` table.
@allow_visitation
Scenario Outline: I allow visitation
Given I go to the interview at "<url>"
Then I should see the phrase "<name>"
When I set "user_name" to "Rea"
And I tap to continue
Then I should see the phrase "<greeting> Rea!"
And I set "user_has_children" to "True"
And I tap to continue
When I set "needs_allow_visitation" to "True"
Then I should see the phrase "This form can help you with that"
Examples:
| url | name | greeting |
| protective_order.yml | Your name | Hello |
| protective_order.yml&lang=es | Tu nombre | Hola |
Tags​
You can tell ALKiln to run only some tests this way:
- Add Gherkin tags at the top of your `.feature` file or above your `Scenario`.
- Use a tag expression when you run your tests.
For GitHub tests, you can use a tag expression when you trigger your tests manually. When you do that, GitHub will offer you a drop down where you can give some inputs for your test run (like that documentation describes). One of the optional fields there will let you write a tag expression if you want. If you want to always only run some GitHub tests, you can add a tag expression to your workflow file as an input.
The ALKilnInThePlayground™ test interview has a field where you can add a tag expression if you want.
This documentation will only show some basics about tags and tag expressions themselves.
This is an example of adding tags to your test files. You can add them above the Feature
keyword, or above Scenario
keywords, or both.
@likes_bears
Feature: I like bears
@wild_bear @climbs_fast
Scenario: I like wild bears
# Test 1 Steps
@cuddly_bear @climbs_fast
Scenario: I like cuddly bears
# Test 2 Steps...
@maybe_not_a_bear
Scenario: I like Jaime
# Test 3 Steps...
Based on the example test file above, these are some examples of tag expressions and which tests they will run:
| Tag expression | What tests run? |
|---|---|
| @likes_bears | All tests in the file |
| @cuddly_bear | Test 2 |
| @climbs_fast | Tests 1 and 2 |
| @cuddly_bear or @maybe_not_a_bear | Tests 2 and 3 |
| not @wild_bear | All tests in all test files in your package except test 1 |
Tests (Scenarios) inherit tags from their Feature.
GitHub workflow files​
This is for: all tests that run on GitHub. That is, GitHub+You™ or GitHub Sandbox™ tests.
Your GitHub workflow file is what starts and controls a package's tests on GitHub. Only tests that run on GitHub need a workflow file. The file gives ALKiln some configuration information. You can use it to customize the ALKiln test runs and control when to trigger the GitHub tests.
This section has a lot of challenging technical information about GitHub workflow files. We will do our best to help. We would love to hear how we can improve.
You can use workflow inputs to change global configuration settings for your ALKiln test runs.
You can use custom environment variables to set values in your tests. Those are especially useful for answers that need to use sensitive information.
For workflow security, see various sections in the security documentation.
GitHub's documentation can tell you many more technical details about workflows.
Where in GitHub is the file?​
Your ALKiln workflow file is in your package's GitHub repository. To find it, go to the repository, open the `.github` folder, then open the `workflows` folder there. The workflow file might be called "run_form_tests.yml" or "alkiln_tests.yml" or something similar.
Example files​
ALKiln's own workflow files are the most up-to-date, so using those as an example can be useful. Read carefully, though - those files have features that are unique to ALKiln, but they also have notes on how you can change them to what you need.
You can see ALKiln's own version of the GitHub+You™ test workflow file here.
You can see ALKiln's own version of the GitHub Sandbox™ test workflow file here.
GitHub workflow inputs​
This is for: all tests that run on GitHub. That is, GitHub+You™ or GitHub Sandbox™ tests.
Workflow input values let you customize global ALKiln configuration settings.
This section is advanced and can be challenging for even experienced developers. Let us know how we can improve it.
For any input, you put the input's value under the `with:` key in your workflow file:
with:
  ALKILN_TAG_EXPRESSION: "@has_children"
For the items that have values with sensitive information, you first create a GitHub secret that stores the value. You then use that secret under the `with:` key in your workflow file:
with:
  DOCASSEMBLE_DEVELOPER_API_KEY: "${{ secrets.DOCASSEMBLE_DEVELOPER_API_KEY }}"
  ALKILN_TAG_EXPRESSION: "@has_children"
Required inputs for GitHub+You™ tests​
This is for: GitHub+You™ tests
The GitHub+You™ test method workflow file has a few absolutely required inputs. The ALKiln setup interview should create the secrets and workflow inputs for this type of test environment.
You can see examples of these inputs in the correct ALKiln file.
- `SERVER_URL` is the url of your docassemble testing server - the server the tests should run on. If you are using ALKiln for a GitHub organization, you should store the value for this input in a GitHub secret so that all your organization repositories can share it.
- `DOCASSEMBLE_DEVELOPER_API_KEY` is a developer API key for the ALKiln testing account on your server. You should store the value for this input in a GitHub secret so that no one can see this secure information.
If you do not already have these secrets, add them to your package's repository or GitHub organization6.
Use them the same way that the example workflow file uses them. They appear in 2 places.
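For example, one of those places might look like this sketch, assuming both values are stored in GitHub secrets with these names:
with:
  SERVER_URL: "${{ secrets.SERVER_URL }}"
  DOCASSEMBLE_DEVELOPER_API_KEY: "${{ secrets.DOCASSEMBLE_DEVELOPER_API_KEY }}"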
Required inputs for GitHub Sandbox™ tests​
This is for: GitHub Sandbox™ tests
These are the only required inputs for GitHub Sandbox™ tests. We are listing them here for the sake of completeness, but none of them need to be secrets and you can copy the correct values straight from ALKiln's own GitHub Sandbox™ tests workflow file.
You can see examples of these inputs in the correct ALKiln file.
- `SERVER_URL` is the url of the new docassemble server GitHub just created. Use the value that an earlier step created. You can see that in the example file.
- `DOCASSEMBLE_DEVELOPER_API_KEY` is a developer API key for a developer on the new server. Use the value that an earlier step created. You can see that in the example file.
- `INSTALL_METHOD` is an internal value ALKiln needs for GitHub Sandbox™ tests. The value in ALKiln's example file is the correct value.
Optional inputs for all GitHub tests​
This is for: all tests that run on GitHub. That is, GitHub+You™ or GitHub Sandbox™ tests.
You can add some optional inputs to the workflow file of any tests that run on GitHub to control global ALKiln configuration values.
The options are:
- ALKILN_VERSION lets you use a specific version of the ALKiln framework for your tests. It can be useful for security or for trying out experimental versions of the ALKiln framework. The default value is the latest version.
- ALKILN_TAG_EXPRESSION is the "tag expression" to use to limit which tests to run. These docs describe tags in the Gherkin keywords section. The default is no tag expression so that all the tests run.
- (Extra Advanced) DOCASSEMBLECLI_VERSION is the version of docassemblecli to install. If this means nothing to you then you can probably ignore it. You can also ask us about this.
Use these inputs in the same way and in the same places as the required inputs. You can use non-secret values by just writing the value directly in the workflow file. For example:
with:
  ALKILN_TAG_EXPRESSION: "@has_children"
If you have an organization and want to use these values in multiple repositories, you can use organization GitHub secrets. In that case, your workflow file code would look more like this:
with:
  ALKILN_TAG_EXPRESSION: "${{ secrets.ALKILN_TAG_EXPRESSION }}"
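The same pattern works for the other optional inputs. For instance, to pin the framework to a specific release (the version number below is only a placeholder):
with:
  # placeholder version number - replace with the ALKiln release you actually want
  ALKILN_VERSION: "4.11.2"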
Optional inputs just for GitHub+You™ tests​
This is for: GitHub+You™ tests
You can add some optional inputs to the workflow file of any tests that run with the GitHub+You™ test method to control global ALKiln configuration values:
- MAX_SECONDS_FOR_SETUP is the maximum amount of time to give the Project to upload your package's GitHub code. If your server is slow, increase this value. The default is 120 seconds.
- SERVER_RELOAD_TIMEOUT_SECONDS is the maximum amount of time to give the server to reload while these tests are running. A server can reload for many reasons. A reload can make a test fail part-way through. If ALKiln waits long enough for the server to finish reloading, the next tests will get a chance to finish properly. The default maximum time is 150 seconds. If your server reloads more slowly, increase this value7.
Use these inputs in the same way and in the same places as the required inputs. You can use non-secret values by just writing the value directly in the workflow file. For example:
with:
  SERVER_RELOAD_TIMEOUT_SECONDS: 500
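Likewise, if your server is slow to upload your package, you might give the setup stage more time. The number below is only illustrative:
with:
  MAX_SECONDS_FOR_SETUP: 300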
If you have an organization and want to use these values in multiple repositories, you can use organization GitHub secrets. In that case, your workflow file code would look more like this:
with:
  SERVER_RELOAD_TIMEOUT_SECONDS: "${{ secrets.SERVER_RELOAD_TIMEOUT_SECONDS }}"
Optional inputs just for GitHub Sandbox™ tests​
This is for: GitHub Sandbox™ tests
You can add some optional inputs to the workflow file of any GitHub Sandbox™ tests to control global ALKiln configuration values. We note when an option should also be a GitHub secret for security reasons:
- MAX_SECONDS_FOR_DOCKER is the maximum amount of time to give the docassemble docker container to get properly started. If your tests keep failing because your docker container is taking a long time to start, maybe because of configuration options, then you can increase this time. The default time is 600 seconds (10 min).
- SHOW_DOCKER_OUTPUT can be set to "true" or "false". When you set it to "true", you will see more output when ALKiln creates the docker container and installs docassemble there. The output will be visible to anyone who can see the action running, like GitHub admins or collaborators. By default, it is "false" because our team is unsure what information those logs might include - they may show sensitive configuration information, like tokens and API keys.
- CONFIG_CONTENTS is an input most authors can ignore. Its value is the contents of a docassemble config file for the new docassemble server GitHub will create. If you have special configuration settings on your own server, you can use this input to include them and make the new, fresh server more like your production server. The default is the default config that docassemble itself uses. For security, we strongly recommend you put this information into a GitHub secret like DOCASSEMBLE_DEVELOPER_API_KEY above.
Use these inputs in the same way and in the same places as the required inputs. You can use non-secret values by just writing the value directly in the workflow file. For example:
with:
  SHOW_DOCKER_OUTPUT: true
If you have an organization and want to use these values in multiple repositories, you can use organization GitHub secrets. In that case, your workflow file code would look more like this:
with:
  SHOW_DOCKER_OUTPUT: "${{ secrets.SHOW_DOCKER_OUTPUT }}"
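For example, if you keep a copy of your config in a GitHub secret (a hypothetical secret named CONFIG_CONTENTS here), you could pass it in the same way:
with:
  # the config can contain sensitive values, so it comes from a GitHub secret
  CONFIG_CONTENTS: "${{ secrets.CONFIG_CONTENTS }}"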
GitHub workflow extras​
This is for: all tests that run on GitHub
You can customize your GitHub workflows to run on a schedule, make GitHub issues when tests fail, and much more. We will give some examples other developers have found useful. To edit your workflow file:
- Go to your GitHub repository
- Find your workflow file
- Tap to start editing the file
- Follow instructions below to change your file
GitHub's documentation can tell you much more about workflows.
Schedule tests​
You can run GitHub tests on a schedule - daily, weekly, monthly, or on any other interval. To run the tests on a schedule, add schedule to the on object in your workflow file and give it an interval value. For example, these tests will get triggered both by a push (commit) and on a schedule every Tuesday:
on:
  schedule:
    - cron: "0 1 * * TUE"
  push:
The GitHub docs can tell you more about triggering workflows on a schedule. The special syntax defining the interval uses cron syntax. If you want to change the interval, these examples of cron syntax can help a lot.
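For example, to run the tests every day instead, the cron value is the only thing that changes (an illustrative variation, not something your workflow needs):
on:
  schedule:
    # every day at 01:00 UTC
    - cron: "0 1 * * *"
  push: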
You can read GitHub's documentation about many other ways to trigger workflows.
Pull request tests​
You can trigger tests when someone makes a pull request in GitHub.
To do this, add the pull_request trigger key to the on object in your workflow file. Give it no values at all. For example, these tests will get triggered both by a pull request and on a schedule:
on:
  pull_request:
  schedule:
    - cron: "0 1 * * TUE"
The pull request trigger can have various values, but leaving it empty uses the default values. We have only ever needed the defaults and pull request values can be confusing to customize.
If you use GitHub+You™ tests and decide to trigger tests with both push and pull_request, make sure you avoid triggering both at the same time. When someone makes a pull request, changes some code, and then pushes to that pull request, both triggers will be... triggered. This is unnecessary and can cause GitHub+You™ tests to fail.
Use code like this to avoid running double tests:
jobs:
  alkiln-tests:  # use your own job's id here; job ids cannot contain spaces
    if: (github.event_name != 'pull_request' && ! github.event.pull_request.head.repo.fork) || (github.event_name == 'pull_request' && github.event.pull_request.head.repo.fork)
See an example.
You can read GitHub's documentation about many other ways to trigger workflows.
Make a GitHub issue when tests fail​
To make a GitHub issue when tests fail, you can add the code below after the last line of your workflow file. It might look like you need to create a GitHub secret for this, but you should avoid that. GitHub already has all these values.
- name: "If tests failed create an issue"
  if: ${{ failure() }}
  uses: actions-ecosystem/action-create-issue@v1
  with:
    github_token: ${{ secrets.github_token }}
    title: "ALKiln tests failed"
    body: |
      An ALKiln test failed. See the action at ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}.
    labels: |
      bug
Avoid changing the github_token value and avoid creating a new secret for it. The variable secrets.github_token is a value that your repository already has by default, and the action needs that value.
If you use the code above, the GitHub issue will automatically contain a link to the workflow with these failing tests.
You can edit the values of title, body, and labels to customize the issue.
Environment variables​
This is for: Everyone
Environment variables are one of the few ways for you to get information from GitHub or your server to ALKiln. (You do that in your workflow file.)
What is a variable?
A "variable" is a name with a value8. That name, a "variable name", lets the rest of the program ask for that value when needed.9 You might already know what they are, but we are going to use a metaphor to help with the rest of the explanation. A variable name is like a kid's name and when the program asks for the value, that's like calling a kid in for dinner.
What are environment variables?
Environment variables are variables that are available to the whole system - the whole environment the programs are working in. That's like the town where that kid lives and where their house is. That means every test in your test suite can use that variable. You can call that kid in for dinner no matter where in the town they are.
Because environment variables are so global, we recommend you start all your environment variable names with "ALKILN_" so they have less chance to interfere with your system's other environment variables. If you are going to broadcast your message across the town, every kid should have a unique name so everyone knows who you are yelling at.
Docassemble config file env vars​
In ALKilnInThePlayground™ tests on your server, you put these values in your config file under a key you can add called alkiln
. For example:
alkiln:
  ALKILN_BEAR_SIZE: "small_bear"
ALKILN_BEAR_SIZE is the name of the environment variable and "small_bear" is its value. All the interviews on your server can see these config values, including the interview that runs your ALKilnInThePlayground™ tests, so you are effectively setting these values for every ALKilnInThePlayground™ test you run on your server.
GitHub env vars
In GitHub, you can make GitHub workflow environment variables by putting them in the env
key in your workflow file:
# other code
env:
  ALKILN_BEAR_SIZE: "small_bear"
jobs:
  # other code
ALKILN_BEAR_SIZE is the name of the environment variable and "small_bear" is its value. In GitHub, there are different places to put env, so don't worry if you see it somewhere else in other people's files.
You can combine environment variables with GitHub secrets to set sensitive information in your tests. Also, a GitHub organization can combine environment variables with GitHub secrets to get a value to all the test suites in all the organization's repositories.
Since workflow environment variables might hold sensitive information, ALKiln avoids taking pictures or downloading the HTML of pages that use environment variables. It even avoids taking pictures when the test has an error on that page and during the sign-in Step. ALKiln avoids printing the value of an environment variable anywhere in the report or in the console log.
GitHub secrets​
This is for: owners of GitHub repositories or organizations
Shhh, don't tell anyone else - a secret is a value GitHub encrypts for you. Not even you can see a secret in GitHub after you create it. In fact, GitHub redacts those values from your workflow logs.
You can make secrets for individual GitHub repositories or you can make secrets for your whole GitHub organization. Once you have made the secrets, you might use them in combination with GitHub environment variables and for multiple reasons:
- Secrets are useful for creating values with sensitive information since no one can see those values. For example, with the "sign in" Step. Here is one example of using a secret in a workflow file:
# other code
env:
  ALKILN_USER_EMAIL: "${{ secrets.ALKILN_USER_EMAIL }}"
  ALKILN_USER_PASSWORD: "${{ secrets.ALKILN_USER_PASSWORD }}"
jobs:
  # other code
Remember to create a separate account for that "user".
- GitHub secrets are useful for sharing values between GitHub organization repositories. All the repositories of a GitHub organization can see its secrets, so you can affect all of those repositories at once from one location.
Example situation
For example, suppose Mo'nique has a GitHub organization named PolarisLaw. That organization has several repositories. Every test in every repository uses the value "small_bear" as an answer to a question. Next month, though, the firm might need to change that value to "large_bear". Mo'nique would have to edit that value in every test in every repository. Instead, she makes a GitHub secret in the PolarisLaw organization called ALKILN_BEAR_SIZE and sets it to small_bear. Then she uses it in an environment variable she puts in all the workflow files in all the repositories.
env:
  ALKILN_BEAR_SIZE: "${{ secrets.ALKILN_BEAR_SIZE }}"
She can then use that environment variable in her tests:
I set the variable "bear_size" to the GitHub secret "ALKILN_BEAR_SIZE"
Next month, if PolarisLaw needs to change that value, Mo'nique can just edit the value of that secret and change it to large_bear
. All the tests will use the new value without her changing anything else.
Why not create secrets on your server for ALKilnInThePlayground™ tests? On your server all config values are already secret because only authorized people can see them, so they are just like any other environment variable. And all of your server's interviews can see and share those config values already. I know, not quite as exciting.
Running tests​
This is for: Everyone
This section will talk about ways to start (trigger) different types of tests and will describe the journey of an ALKiln test run, from starting up to cleaning up.
Triggering tests​
This is for: Everyone
You control when your tests run and which tests run. How you do that depends on the type of test you are using.
GitHub+You™ and GitHub Sandbox™ tests:
By default, you can trigger tests on GitHub by committing your edits to your repository or by running the tests manually.
The ALKiln setup interview gives you a default workflow file that triggers GitHub tests when you commit your files to GitHub. That might be when you hit the 'Commit' button on the docassemble Packages page. It can also happen when you edit, add, or delete files in GitHub itself. Most authors go through this process:
- Edit the package's files in a Project on the docassemble server
- On your Project's "Packages" page, commit the changes to GitHub
- Go to the repository in GitHub and tap on the "Actions" tab to see the tests running
You can use your workflow file to customize some ALKiln test run settings.
You can also trigger your tests manually. When you run your tests manually, you can use tags to limit which tests to run. You can also use tags in your workflow file.
You can also edit your workflow file to trigger your GitHub tests other ways.
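If your workflow file does not already include a manual trigger, GitHub's standard workflow_dispatch trigger is what puts the "Run workflow" button in the Actions tab. This is only a sketch of that general GitHub Actions pattern - the default ALKiln workflow file may already contain something like it:
on:
  push:
  # lets you start the tests by hand from the repository's Actions tab
  workflow_dispatch: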
ALKilnInThePlayground™ tests:
You can run ALKilnInThePlayground™ tests manually whenever you want to.
- Make sure you are logged into your docassemble testing server.
- Go to your server's list of available interviews.
- Run the ALKiln interview that you or someone else installed on your server.
- On the first page, make sure you have the right version of ALKiln installed (usually the latest version) or install a different version.
- On the next page, pick a Project that already has tests in its Sources folder. If it has no tests, add a new test. It can be like the example first test.
- If you want to limit which tests to run, use a tag expression in the tag expression field.
- Run the tests and wait for the output.
GitHub+You™ test stages​
This is for: GitHub+You™ tests
The GitHub+You™ tests are made of three stages: the setup, the robot, and the take down.
Setup:
- ALKiln goes to its account and creates a new Project.
- It uploads the relevant GitHub branch of your package into the Project.
Robot:
- For each test, ALKiln follows the instructions in your test.
- Your first Steps could be any of a number of Steps, like waiting, signing into the server, or going to the interview.
- Usually authors at least name an interview that ALKiln should go to. To do that, ALKiln uses the url of ALKiln's Project in its account's Playground.
- ALKiln pretends to be a human and tries to fill out the fields and take the actions the tests describe.
Take down:
When all the tests finish, ALKiln cleans up the tests.
- ALKiln deletes the Project it created from ALKiln's account.
- It makes the test output, like the report and Scenario folders.
ALKilnInThePlayground™ test stages​
This is for: ALKilnInThePlayground™ tests
When ALKilnInThePlayground™ tests run, they have two stages: the robot and the output.
Robot:
- For each test, ALKiln follows the instructions in your test.
- Your first Steps could be any of a number of Steps, like waiting, signing into the server, or going to the interview.
- Usually authors at least name an interview that ALKiln should go to. To do that, ALKiln uses the url of your Project in your Playground.
- ALKiln pretends to be a human and tries to fill out the fields and take the actions the tests describe.
Output:
When all the tests finish, ALKiln cleans up the tests. It creates the test output to show on the final page.
GitHub Sandbox™ test stages​
This is for: GitHub Sandbox™ tests
The GitHub Sandbox™ tests are made of three stages: the setup, the robot, and the take down.
Setup:
- ALKiln creates a new docker container on GitHub.
- It installs docassemble on that new docker container. If the input CONFIG_CONTENTS is defined, ALKiln uses that value for the config. Otherwise, it installs docassemble with the default config.
- ALKiln waits for docassemble to finish starting up enough that it can load the new server's website properly.
- ALKiln installs your package as a server package on that new server.
Robot:
- For each test, ALKiln follows the instructions in your test.
- Your first Steps could be any of a number of Steps, like waiting, signing into the server, or going to the interview.
- Usually authors at least name an interview that ALKiln should go to. To do that, ALKiln uses the new isolated server's url on GitHub.
- ALKiln pretends to be a human and tries to fill out the fields and take the actions the tests describe.
Take down:
When all the tests finish, ALKiln cleans up the tests.
- The digital space where GitHub was running the server gets deleted, taking the new server and container with it.
- ALKiln makes the test output, like the report and Scenario folders.
What can I see while tests are running?​
In GitHub, on the actions page and related pages of your repository, you can see your workflow status ("check") and logs about the current test processes. On the actions page itself, under "All workflows", your current workflow should have the message/description you wrote for that last commit next to the name of the branch you worked on. On those various pages you can see:
- How long a workflow has been running.
- A yellow dot next to the current workflow.
- Information about the running processes in the job's log (or console) as GitHub writes it.
The ALKilnInThePlayground™ tests show a waiting screen with how long the tests have been running, though we can add a yellow dot if enough users request it.
What can I see when the tests end?​
In GitHub, when the test run is done the yellow dot will change into a green circle with a check mark if the tests passed or a red circle with an "x" if something went wrong.
In GitHub and with ALKilnInThePlayground™ tests, you will see the results.
Failures get re-run once​
This is for: all tests that run on GitHub.
All tests that fail will get run a second time. These retries can make tests take up to double the time, but they also help with flaky failures caused by server reloads, which means you are more likely to be able to count on your results to give you real feedback.
Currently, there is no way to choose whether to re-run tests or not.
Test results overview​
This is for: all tests.
After your tests are done, there are different ways to look at the results of those tests. You can see:
- Job/console logs: All the information about the processes that ran before, during, and after the tests
- Artifacts: The reports and other files ALKiln creates to give you more information about successes and failures
You can read more about errors and troubleshooting in other documentation.
Test logs​
This is for: all tests
The console (in GitHub called the job log) shows the full raw output of the logs of the processes that ran before, during, and after the tests.
In GitHub, you can see the test run's console at the workflow's job logs. You can see these messages appear as each process happens as well as after the tests finish running.
In ALKilnInThePlayground™ tests, you can only see the console output after tests are done. They should be at the bottom of the final page.
The console output should include at least some of these items, from top to bottom:
- Data from the start up processes, where errors can happen too.
- A list of each Scenario (test) description next to symbols that show what happened in that test:
- "." - a Step ran. Also, the test started or finished.
- "*" - the test set a page variable.
- "v" - the test pressed a continue button to set a variable.
- ">" - a Story table row automatically continued.
- "U" - an "undefined" Step tried to run. This usually shows up when you have a typo in a Step you wrote.
- "F" - a test failed.
- The ALKiln report.
- Any errors from the third-party libraries ALKiln uses. We keep showing these errors until we can handle all cases of them ourselves.
- How many tests passed or failed.
- In GitHub, data from the take-down processes as ALKiln and GitHub clean up.
Test output/artifacts​
This is for: all tests
Once the tests are finished, ALKiln creates the test run "artifacts" - files and folders you can examine, like an archaeologist, to understand what happened.
If the test start up process failed, ALKiln will be unable to create artifacts.
[Image - text by @Thinkwert10, image by PerspectiveFriendly11]
In GitHub, you can download a zip of these artifacts.
For ALKilnInThePlayground™ tests, you can see and download the artifacts on the final page. The page is very plain at the moment and we hope to add more useful styles and indicators in the future. The page includes:
- The name of the Project you tested.
- The version of ALKiln the tests used. This is the version that is installed on your docassemble server.
- How long it took the tests to run.
- A link to download the zip file with all the artifacts.
- The report file.
- The debug log file.
- Any pictures of error screens, along with that screen's HTML file.
- A list of Scenarios with the contents of their Scenario folders.
- The test console logs.
On that page, when you click on the link to a file, you will see that file in a new browser window and can download it there.
For all tests, the main artifact folder contains all the other artifacts. Its name includes the time the tests started in UTC format. In GitHub, GitHub wraps the folder up in a zip file. The zip file's name includes the UTC time that the GitHub run started.
The ALKiln folder contains these files and folders:
- report.txt
- debug_log.txt
- Copies of all screenshots and HTML files of the pages that caused errors
- Scenario folders which have:
- An individual report.txt
- Another copy of error screenshots and HTML files
- Pictures you took of screens along with their associated HTML files
- Files you downloaded
- JSON variables you compared
- Accessibility failures
report.txt​
The report is the first place to look when a test fails. There are two places to find reports.
- Inside the main folder is the summary report containing all Scenario logs and results in one file
- In each Scenario folder is a report that only shows the logs and results for that test
The summary report is made up of multiple parts:
- The heading that includes the date of the test run
- The "failed" Scenarios section that has the logs and results for each failed Scenario
- The "passed" Scenarios section that has the logs and results for each passed Scenario
A report might look something like this:
Assembly Line Kiln Automated Testing Report - Wed, 29 Dec 2021 17:49:00 GMT
===============================
===============================
Failed scenarios:
---------------
Scenario: I dislike bears
---------------
Trying to load the interview at "https://server.org/interview?new_session=1&i=docassemble.playgroundplayground25BearNecessitiesmain1707019219591%3Abears_and_you.yml"
screen id: bear_opinion
| likes_bears | False |
ERROR: The question id was supposed to be "all done", but it's actually "favorite_bear_types".
**-- Scenario Failed --**
===============================
===============================
Passed scenarios:
---------------
Scenario: I like wild bears
---------------
Trying to load the interview at "https://server.org/interview?new_session=1&i=docassemble.playgroundplayground25BearNecessitiesmain1707019219591%3Abears_and_you.yml"
screen id: bear_opinion
Accessibility on direct-standard-fields passed!
| likes_bears | True |
screen id: favorite_bear_types
Accessibility on direct-standard-fields passed!
| favorite_bears | wild_bear |
Rows that got set:
And I get to the question id "screen features" with this data:
| var | value |
| likes_bears | True |
| favorite_bears | wild_bear |
Unused rows:
| user_climbing_speed | very fast |
Even Scenarios that pass might have a section called "Unused rows" at the bottom of their part of the report, as you see above. A Story Table will skip rows if it is unable to find any matching fields. That might mean that something actually did go wrong with the test.
ALKiln currently only shows logs for some Steps, including:
- Accessibility checks
- The Story Table and "I set" Steps
- The "get JSON" Step
When you set variables, ALKiln will try to log the question ids for each page it comes to. Under each screen id
will be the names of the variables whose fields ALKiln set and the values ALKiln set them to.
We're always trying to understand what people would find helpful in these reports. Tell us about your experiences at https://github.com/SuffolkLITLab/ALKiln/issues.
debug_log.txt​
The debug_log.txt has the same content as report.txt, but a bit messier. It is messy because it writes the information right away, while the tests are actually running, instead of waiting to make it pretty at the end like the report does. This is so it can store the data immediately. If something goes wrong in the middle of the tests, those logs will still be there. For example, you might stop the tests early. This way you should still get some data from whatever tests did run.
Scenario folder​
Each Scenario (test) has its own folder. A Scenario's folder contains its report, its errors, and the files you downloaded or created in that Scenario.
The folder's name uses its Scenario description. That is one reason you should make each Scenario description different. The folder contains:
- An individual report.txt
- Another copy of error screenshots and HTML files
- Pictures you took of screens along with their associated HTML files
- Files you downloaded
- JSON variables you compared
- Accessibility failures
Error pics and HTML​
ALKiln will try to take pictures of pages that run into errors. They are like the pictures you take with the "picture" Step. As you can read there, each time ALKiln takes a picture, it also saves the HTML of the page.
ALKiln avoids making these files for pages that use environment variables, even when they error, because environment variables can contain sensitive information.
These error files are one of the first places to look for clues about what might have gone wrong.
The files appear in 2 places:
- Directly in the main folder. This is so you can have a quick look without digging through all the Scenario folders. Here the file names use the Scenario description and the id of the page where the error happened.
- In the Scenario folder. Here the file names use just the id of the page where the error happened.
In those pictures you might see that a required file was missing, see why ALKiln was unable to continue to the next page, notice a missing phrase, or many other details. You can read more about errors and troubleshooting in other documentation.
You can read more about these files in the section about the "picture" Step.
Tips​
Some of these are just good practices to follow when coding your interviews.
Question id​
Add a unique id to each question block of your interview. Those ids help make some kinds of test output clearer.
They have other uses too. Block ids are helpful for docassemble features and caching issues. They can also help your team communicate with each other about your code more easily.
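For example, a question block with an id might look like this illustrative snippet (not from any particular interview):
---
id: bear_opinion
question: |
  Do you like bears?
yesno: likes_bears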
Write unique Scenario descriptions​
You write each Scenario's description in your test files. ALKiln includes those descriptions in the names of test output folders, error screenshot files, and report headings, so try to write something different for each one - something that will help you later. It will help you identify what happened with each test.
Special HTML​
If your package does not include the al_package.yml file from the Assembly Line package, make sure to add some special ALKiln-specific HTML to your interview so ALKiln can work correctly.
Testing unfinished code​
If you are still developing your interview, you can still write a simple test to make sure your interview at least loads without an error. You might want to wait till your code is more stable to write more tests.
On the other hand, if there is a part of your interview that you think will stay pretty much the same, you can write tests that just go part-way through the interview to test just the stable part of the code. That way, you can work on adding more content and yet also be sure that the work you've already done stays working.
Recycle, reuse​
Copy old Scenarios or Story Tables to help you make new ones. You don't have to make everything from scratch.
Values of choices​
In questions with choices, use both labels and values. See docassemble's documentation on buttons to read about those key-value pairs. The key is the label and the value is... the value! You can use this for other fields with choices, like radio buttons.
Just labels:
fields:
- Favorite color: favorite_color
datatype: radio
choices:
- Grape
- Lilac
The test:
And I set the variable "favorite_color" to "Lilac"
Every time you edit those labels to improve the text for your users, you will have to update the values in your tests:
fields:
- Favorite color: favorite_color
datatype: radio
choices:
- Grape
- Dark lilac
The edited test:
And I set the variable "favorite_color" to "Dark lilac"
Better with labels and values:
fields:
- Favorite color: favorite_color
datatype: radio
choices:
- Grape: purple1
- Lilac: purple2
If your user chooses "Lilac", docassemble will set favorite_color
to purple2
.
The test:
And I set the variable "favorite_color" to "purple2"
When you have both labels and values you can edit what the user sees without worrying about your tests, so the tests are easier to maintain.
fields:
- Favorite color: favorite_color
datatype: radio
choices:
- 🍇 Grape: purple1
- đź”® Dark lilac: purple2
The test stays the same:
And I set the variable "favorite_color" to "purple2"
The value can even be the same as the label if you want:
fields:
- Favorite color: favorite_color
datatype: radio
choices:
- Grape: Grape
- Lilac: Lilac
As a bonus, using labels and values also makes your interview easier to translate. If you use only labels, translations will break your code.
Quotes​
Many of the blanks you fill in for Steps must be inside double quotation marks. Avoid ever using double quotation marks inside those values.
This is usually fine for test authors. Read about special characters to learn why.
Special page characters​
The text that docassemble shows the user - the text our tests need most often - is different than the text we write in our interview YAML file. Docassemble converts those characters to more semantically correct characters for the user. For example, in our code we write:
subquestion: |
This is called "relief"
That subquestion text uses the character ". The official name for this character is "Quotation mark" and its unicode value is U+0022. When docassemble shows that page to the user, it changes the first " to a "left double quotation mark" with a unicode value of U+201C: “. It changes the second " to a "right double quotation mark" with a unicode value of U+201D: ”.
The final text comes out looking more like this on the page:
This is called “relief”
They look similar to most humans, but computers think they look quite different.
Docassemble does this for many other characters including single quotes and ellipses. It is best to copy/paste text your tests need straight from the screen the user sees.
Add a tag to each test​
Add a tag to each of your tests so later you can pick specific tests to run. That way, if just one test is failing, you can avoid having to run all the tests just to find out what is going on in that one test.
Frequently asked questions (FAQ)​
Always ask more questions!
I have a private GitHub repository. Can I use ALKiln?​
Yes, you can use ALKiln with a private repository, though you have to do a bit of extra work.
- Pick a GitHub account that has permission to change the private repository.
- Make sure the account on your docassemble server that you linked to the tests is integrated with the GitHub account. See docassemble's documentation on integrating a GitHub account.
As that documentation explains, no two accounts on a docassemble server can be connected to the same GitHub account.
Also, there are some limits on the amount of time private repositories can use to run workflows.
Do I have to fill in every field?​
No. If you leave a required field blank, though, the test will get stuck on that page.
Will GitHub charge me?​
It is unlikely you will run enough tests for GitHub to charge you. GitHub's quotas are very generous, especially for public repositories.
Will ALKiln tests affect my analytics?​
No, none of the test types can affect your analytics. GitHub+You™ and ALKilnInThePlayground™ tests both use interviews that are in the developer's Playground, not ones that are installed on your server. GitHub Sandbox™ tests run on their own isolated server, not your server, so your analytics software knows nothing about them.
Does ALKiln collect my data?​
No, ALKiln does not collect your data or data from your tests. There are other services outside our control that do track your use of ALKiln. For example, GitHub keeps track of which repositories use ALKiln's code and npm keeps track of how many (unnamed) packages download ALKiln.
On that note, who is still using ALKiln version 4.11.2?! Update!
ALKiln has a lot of open issues​
That is very true.
Some projects use issues just for bugs. We use ours for everything - ideas for enhancements, research, internal testing, documentation, discussions, and the list goes on. We have a lot of those with more coming in all the time, and not much time to work on them, so they do build up.
Also bugs, of course. We take care of security bugs as soon as we find them. We take care of bugs that are having a big impact on our users as quickly as we can. We take care of smaller bugs when we have time, but sometimes spend that time on improving other parts of the project instead.
We do wish it looked prettier, though.
Can I compare docx files?​
Not yet. You can use ALKiln to compare PDF files to make sure they have come out right, but not .docx files yet. It is on the list of features we want to build.
Footnotes​
1. We are still working on finding a more descriptive name. Suggestions are welcome. ↩
2. If you lose this API key you can make a new one. You can make as many as you need. ↩
3. You can find the question id in your interview YAML in the question block of that page. It is a good idea for every question block in your interview to have a unique id. It can be useful for other blocks too. ↩
4. Technically, the file will keep downloading until the test ends. ↩
5. Keywords are specific words that have a special meaning and purpose when you use them in specific places in your test code. They are similar to the words else and not in python. ↩
6. If your server is part of a GitHub organization, we recommend creating some of these values as organization secrets. That way, if you need to change the value of an input or environment variable, you can do it in one place. ↩
7. This also sometimes affects tests that fail due to genuine errors. Sadly, these types of failures look the same to ALKiln as a server reloading. That means that increasing the maximum server reload time will make some kinds of tests take an extra long time to fail. ↩
8. A variable name is only sort of a label for a value. It is actually more like a label for a box, and in that box is the value. You can change the value that's in the box, and as long as the box and name stay the same, the program is happy. It will use whatever is in that box. Even that is a little bit untrue because in some languages it is impossible to change the value in the box. Those variables are called "immutable". ↩
9. You might notice people also using the word "variable" to mean "variable name". ↩
10. Name unknown [@Thinkwert]. (2021, Aug 19). When the archaeologists opened the ancient vase, little did they know what primeval dark power they had unleashed. [Tweet]. Twitter. https://twitter.com/Thinkwert/status/1428488345357856777 ↩
11. PerspectiveFriendly. "I'm waiting to bloom." Reddit, 08/29/2020, https://old.reddit.com/r/aww/comments/iit3ze/im_waiting_to_bloom/. ↩