ALKiln troubleshooting and errors

warning

WIP (Work in progress)

This page will try to help you figure out what is going wrong when you have errors, warnings, or other problems with your ALKiln tests, whether they are from ALKiln itself or from the libraries ALKiln uses.

Maybe you made a mistake, maybe there is a bug in our code, in docassemble, or in a package our tools depend on. Coding is 99% errors and debugging and the pain you feel is real and shared by many. Whatever the source of the problem is, you have made it this far, you belong in this space, and we are excited to have you here.

This document is just starting out and would love contributions!

This page helps with some:

  • ALKiln errors and warnings
  • Docassemble errors and warnings
  • Errors and warnings from the third-party libraries that ALKiln uses

Failing tests​

There are some general troubleshooting steps that you can take when tests fail. If any of these steps help you find more specific problems, look for those on this page too because they may help you get to your solution faster.

For ALKilnInThePlayground™ tests, start by updating your version of ALKilnInThePlayground and your version of ALKiln.

If you are unsure if the test is acting correctly, go to your docassemble server, create a new Project, pull the code of your package into the Project, and go through the interview manually. Use the failing test as a guide and follow its Steps exactly.

Test file bugs​

There are some errors that come up because of various issues in your test files.

A syntax error in any of your test files will stop your tests from running at all. To troubleshoot syntax typos, you can use an editor like the editor at AssertThat. It will let you paste in your test code and will show a red 'x' next to the lines that have syntax errors.

You also might have a typo in a Step itself.
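For reference, a minimal well-formed ALKiln test file looks roughly like this. The interview file name and phrase here are made-up examples; check the ALKiln Step documentation for the exact wording of each Step:

```gherkin
Feature: Basic interview runs

  Scenario: User can reach the first page
    # The interview file name must match a YAML file in your package
    Given I start the interview at "a_hypothetical_interview.yml"
    # The phrase must match the page text exactly, character for character
    Then I should see the phrase "Welcome"
```

If pasting your own test into an editor like the one at AssertThat flags a line, compare that line against examples like this one.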

Inconsistent cell count error​

Symptom:

No tests will run. The error message will include text similar to this:

Error: Parse error in 'docassemble/ChildSupport/data/sources/new_case.feature': (10:5): inconsistent cell count within the table

Problem:

This error prevents all of your tests from running. The message is telling you that something about the syntax of a table is wrong. One of your story tables could be missing a pipe (a | character), could have an extra pipe, etc. This is an error from a library ALKiln uses.

Debugging:

To fix this you can find the syntax typos by using an editor like the editor at AssertThat. It will let you paste in your test code and will show a red 'x' next to the lines that have syntax errors.
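For example, a table like this triggers the error because one row has a different number of cells than the header row. The variable names here are made up for illustration:

```gherkin
# Broken: the second data row is missing its closing pipe,
# so its cell count differs from the header row's
    | var                 | value |
    | users[0].name.first | Uli

# Fixed: every row has the same number of pipes and cells
    | var                 | value |
    | users[0].name.first | Uli   |
```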

Undefined step​

Symptom:

At the very bottom of your console output you see the message "Undefined" and "You can implement missing steps with the snippets below".

This will make just the current test fail. This is an error message from a library ALKiln uses called cucumber.

Problem:

You have a typo in the Step that failed, so cucumber thinks that you are trying to create a new Step. The error message should contain the text of the Step that you wrote.

Solution:

  1. Check the documentation for the Step again.
  2. Copy/paste the Step from the documentation into a new line in your test.
  3. If you need to fill in blanks in the Step, copy/paste those values from your existing test code.
  4. If you want to be helpful to us, you can check if there really was a typo. If the text of the two lines look the same visually, use a diff checker like https://www.diffchecker.com/text-compare/ to compare your original Step text with the new Step text. If the diff checker shows that the text is the same, contact us. Maybe we have a typo in our documentation.
  5. Delete your original Step text and try the test again.
  6. If the test has the same problem, contact us. Maybe we have a typo in our documentation.

UnhandledPromiseRejection error​

This is a misleading error from a library ALKiln uses. It is hiding the real error message inside it. You need to read the text of the whole paragraph to see what the actual error is. You might see more useful information in the report above those messages or in your artifacts.

Everything works fine when I run the test manually​

When your ALKiln tests fail, but your interviews work fine when you run them by hand, it is possible that something is wrong with your package, with the test, or possibly with your understanding of the test code or interview code.

For example, maybe when you run the interview manually, field by field, you think you are following your test to the letter, but a misunderstanding of how to set a variable has led you in the wrong direction.

Some common reasons for this specific problem are under this heading. That is especially true for tests that fail on GitHub, but pass manually and with ALKilnInThePlayground™ tests. Some problems are more complex and might be happening for other reasons, so they are elsewhere in these docs. For example, when your tests pass manually, you may have a missing variable caused by a test typo.

Uncommitted files​

Sometimes we forget to include/select a file on our Project's Packages page before committing to GitHub. This might be an interview file, template file, static file, source file, or a module.

Leaving this out can lead to problems like a file not being found or an invalid playground path.

File not found​

This might mean that you have a file in your Playground Project, but that you still have to save it on the Project's Packages page and commit that to GitHub. Your tests will pass for that Project in your Playground, but your interview will break when you or others install it from GitHub.

Invalid playground path​

If you see the text "invalid playground path" in the report, that means the Given I start the interview at... Step for that scenario is naming an interview that doesn't exist. Check for a typo in your interview name and check that the file exists.
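In other words, the file named in this Step must exist in your Project, spelled exactly the same way. The file name here is a made-up example:

```gherkin
# "answer_sheet.yml" must be a real file in the Project.
# A typo like "answer_shet.yml" causes "invalid playground path"
Given I start the interview at "answer_sheet.yml"
```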

See uncommitted files.

Docassemble: Variable could not be looked up​

Symptom:

On the picture of the page with the error, you see this error message:

There was a reference to a variable ‘some_var’ that could not be looked up in the question file (for language ‘en’) or in any of the files incorporated by reference into the question file.

This is a "reference error".

Problem:

Docassemble tried to get the value for a variable, but could not find where you defined the variable anywhere in your code or in the code you included or imported. Possible causes:

  • It may be that you have not defined a variable in your interview or that you have not defined it in a way that docassemble can find it. That can be for many reasons. For example, maybe you are using an index variable or a generic object incorrectly or maybe you accidentally deleted the block where you were defining the variable.
  • It may be that you have a typo either where you use the variable or where you define the variable. The typo may be in the value instead.
  • It may be that you have a typo in a variable name or value in your test.
  • It may be that you are setting the value incorrectly in your test. Checkboxes are especially complex fields to set.

The mistake may have happened on a page much farther back than you expect, at a field ALKiln skipped.

Debugging:

Sometimes the easiest way to see what is wrong is to run the interview manually. Fill in the same values as the test fills in. You can then see what goes wrong when you reach the page with the error.

If the mistake is in your interview code, you can fix that bug. If your interview code is fine, you might have to troubleshoot a typo in your test code.

ALKiln could not find a field or value​

Some possible causes:

  1. You have changed your code to remove a variable or value and have not yet updated your tests.

     Look at the error message closely. Look at the picture and HTML of the error page. Try to find the variable or value in your interview code or in the code of included or imported files. See if that variable or value is also in your test file.

  2. The field is hidden behind a show if of some kind.

     Follow the above steps first. Then check if the field is hidden. If it is hidden, try to see what can make that field visible. Whatever variable and value reveals that answer, check for those in your interview file and in your test code.

  3. There might be a typo in your test. Reading the section about test typos might help.

Skipped rows or variables​

If you look in a test suite report and see that ALKiln skipped setting a variable, your test code might be missing a field or value that it needs in order to fill out that field.

If the test still passed, the variable probably belongs to a field that was optional.

Wrong variable or value​

Unfortunately, test code can be tricky to get right. You might have a simple typo or you might have a perfectly normal misunderstanding of how to write variable names and values.

You can check your test code for incorrect variable or value names in a few different ways.

  1. If you give every question in your interview an id, the report can help show you where the test failed or where ALKiln skipped setting variables. For each test, the report often has a list of question ids. For each question id, the report has a list of variables that ALKiln set on the page that had the error. If the test failed, you can see what question id the test got to. You can also check previous questions to see what variables are missing from each of those pages - what fields ALKiln skipped.
  2. It is more annoying, but in most cases¹ you can copy the variable name from your test and search for it in your interview code. You can do the same for values of multiple choice questions.
  3. Alternatively, in most cases¹ you can go through the interview manually, but follow this procedure:
    1. For every single page (even if you think this page has nothing to do with the problem), click on the </> in the nav bar to see the source of the page.
    2. Copy the name of the variable from your test and search for it on the page of the running interview.
    3. For multiple choice questions, do the same for the value.
    4. Use those values to answer the questions exactly as the test would answer them.
    5. Continue doing this till you find the problem.
  4. If you are unable to find the name of the variable or value in any other way, our last suggestion is the hardest to do, but most reliable - run the interview manually and inspect each page's HTML to find exact names. This is complex and we are happy to show you how to do this. Here is a refresher:
    1. Go to a new page.
    2. Click on the </> in the nav bar to see the source of the page.
    3. For every field in the page:
      1. If you cannot see the variable name or value, like for an object_checkbox field or a field created with code:
        1. Open the browser development tools.
        2. Examine the DOM of a field to find the encoded name of the variable.
        3. Use atob('the encoded name or id') with the encoded name or id to fully decode every part of the variable name. You may have to decode multiple times.
        4. From those decoded values, put together the variable name that you need to use.
      2. Copy the variable name and use your editor to search for that exact variable name in your test file. The variable name is the fully decoded and reconstructed variable name.
      3. In the final variable name, replace x, i, j, etc. with their actual values on that page.
      4. If the field is a multiple choice field like radio buttons, do the same for the value and use your editor to search for that exact value in your test file. Make sure it appears along with the correct variable name. For a checkbox field, the value is either True or False.
      5. Set the value exactly as the test would have set it.
    4. Go on to the next page till you reach the end of your interview or the error.
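The decoding loop described above can be sketched like this. The browser console provides atob() and btoa(); this sketch uses Node's Buffer as the equivalent so you can try it outside a browser. The variable name is a made-up example, and real pages may need several rounds of decoding:

```javascript
// In the browser console, use atob()/btoa() directly;
// Buffer is the Node.js equivalent, used here for illustration.
const b64encode = (s) => Buffer.from(s, 'utf8').toString('base64');
const b64decode = (s) => Buffer.from(s, 'base64').toString('utf8');

// A made-up variable name like the ones docassemble encodes in field names
const varName = 'x[i].hair.color';
const encoded = b64encode(varName);

// Decode, and repeat the decoding if the result still looks encoded
console.log(b64decode(encoded)); // → x[i].hair.color
```

Once the name is fully decoded, remember to replace x, i, j, etc. with their actual values before searching your test file.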

Phrase is missing​

Symptom

Example error:

The text "a document called a 'Certified docket sheet'" SHOULD be on this page, but it's NOT

Debugging

First, look over the HTML of the page that had the error and see what your test is seeing.

If your phrase is missing there, try running the interview manually and see if it works. If it does, check the section on interviews that work when you run them manually.

If it looks like the phrase is actually there, a diff checker can be your best friend. Use a diff checker like https://www.diffchecker.com/text-compare/ to copy/paste both the page's text and your test's text to check for differences. If the tool shows you differences, you can copy/paste the page's text into your test.

Even though the text might look correct to you visually and might be the same ones you put in your code, you might not have used the exact same characters that the user is seeing. Docassemble sometimes gives the user slightly different characters than the ones you typed in your code. Ignore the text that you wrote in your code.

See a section in tips about quotes for more detailed information. To sum up, it is best to copy/paste the text straight from the screen the user sees.

Wrong
  I should see the phrase "a document called a 'Certified docket sheet'"
Right
  I should see the phrase "a document called a ‘Certified docket sheet’"

Timeout or "it took too long to load the next page"​

Different problems can cause the report to say that something "took too long" or caused a "timeout" error.

A "timeout" error can happen when a page took too long to load at some point in setup, when running tests, or during test cleanup. This can be because:

  1. The page was trying to load a big file
  2. ALKiln could not continue to the next page for some reason
  3. A Story Table was unable to reach the page with the specified id
  4. There is a typo in the name of the interview YAML file that the test should go to

If a page was taking too long to load a big file, use the "max seconds" Step to give the page more time to load.

You might be able to look at the error page picture and HTML for more details. In GitHub, you can download the test artifacts to look for it.

In GitHub, this error can also happen when:

  1. The server was busy for too long.
  2. The server was down.
  3. The URL stored in the SERVER_URL GitHub secret is wrong or out of date.

If the server might have been busy or down, try re-running the tests.

The value of SERVER_URL will be invisible - GitHub considers the value of the secret to be sensitive information, so it is impossible to see that value. You can still give it a new value, though, and that is worth trying. Find the address of the docassemble server where the docassemble testing account is located. Edit the secret to give it that url.

No artifacts​

If ALKiln is missing all artifacts by the end of the test run, it means none of the tests ran. There are several possible reasons for this. For this type of error, it can be especially useful to look at the console logs and error messages.

Possible reasons for missing artifacts for any test:

  • You might have a gherkin syntax error somewhere in your test files. If this was the case, the setup should have completed and the test run should have started, but immediately failed. Find and fix that syntax error.

Possible reasons for missing artifacts for GitHub+You™ tests:

  • Your test server was unavailable. If this was the case, your GitHub job logs should show that the test setup failed. Check if the server is running now and run the tests again.
  • Your docassemble developer account credentials were invalid. That is, the API key you created for the docassemble testing account and put in the DOCASSEMBLE_DEVELOPER_API_KEY GitHub input may no longer exist. Maybe the API key got deleted in a docassemble server update. Maybe someone else changed it. You should create a new API key for that testing account and change the value of your DOCASSEMBLE_DEVELOPER_API_KEY secret to the new value. You could also create a new testing account and create a new API key for the new account.

Possible reasons for missing artifacts for GitHub Sandbox™ tests:

  • The Docker container or the docassemble server installation is having trouble starting up. If this is the case, your GitHub job logs should show that the docker build step failed. This is a complex problem and takes experience with creating docassemble servers and maybe Docker. You can set the ALKiln SHOW_DOCKER_OUTPUT configuration input to "true" to show you more information about what is going on during that setup process. If you have experience with actions and the command line, you might also be able to use the tmate action to explore more about what is going on in your Docker container.

Possible reasons for missing artifacts for GitHub+You™ and GitHub Sandbox™ tests:

  • The package manager that installs ALKiln had a problem. This does happen every now and then. If this was the case, your GitHub job logs should show test setup failure. You can check its status, but the problem would have to be happening for a lot of people at once for it to show up.
  • GitHub itself had a problem. This is pretty rare. It can cause a failure at any point in the test. You can check GitHub's status. Again, the problem would have to be happening to a lot of people at once for it to show up on that site.

GitHub Sandbox™ Docker trouble (advanced)​

This is for: GitHub Sandbox™ tests

You should be familiar with:

  • The terminal, command line, or command prompt
  • SSH
  • GitHub actions and workflows
  • Setting up and troubleshooting a docassemble docker container

This is a very advanced topic. If you want a hand, we are happy to help.

If your docker container is failing to start, failing to install docassemble, or failing to install your package, it is possible to do more docker troubleshooting using the tmate GitHub action and using the regular docassemble docker troubleshooting steps.

Slow tests​

This is for: Everyone

There are various reasons your tests could be slow.

First, there are reasons even passing tests can be slow, including:

  1. Your pages may just be taking a long time to load. This might happen if you are building a large document, uploading a large document, or for other reasons. That can even cause the tests to fail if it takes too long. You can use the "max seconds" Step to give them more time. There are some things you can do to speed up some kinds of processes on your server, but that is out of scope for this document.
  2. Downloading a lot of documents can take a while. Right now, ALKiln is unable to tell when a document is done downloading, so it gives every document the maximum time possible. By default, that is a full 60 seconds. You can decrease that time with the "max seconds" Step, but that time might be too short for the documents to download. You can experiment a bit. You can also be tricky with the "max seconds" Step, though, and increase and decrease the value at appropriate times to improve both speed and downloading.
  3. If your server reloads a lot during GitHub+You™ tests, it will take longer for tests to finish because the tests try to wait for the server to be available to avoid "flaky" tests. Your server can reload for many reasons. For example, saving your config file, saving a python module, or pulling a package that uses a python module can all cause your server to reload. This might even cause your tests to fail.

There are also some different reasons that tests get slower when they fail, including:

  1. Each failing test gets re-run once, so that can double the amount of time that tests take. Currently, there is no way to choose whether to re-run tests or not.
  2. When a Story Table is trying to reach a target page and gets stuck somewhere, it will wait the maximum seconds allowed for a page load (which you can change) to let the interview continue to the next page. By default, that is 30 seconds. Since those tests re-run, that can make a test take more than 1 minute.

Flaky tests​

This is for: Everyone

"Flake" happens when your tests fail for reasons that have nothing to do with your interview or test code. There can be many causes for flaky tests. For example:

  • Your server reloading during GitHub+You™ tests could make it impossible for your tests to reach your interview webpage. You just have to re-run those tests.
  • A slow-loading interview page might make ALKiln think your server is unavailable. That can happen for different reasons. For example, a large document might be taking a long time to load. You might be able to avoid this failure by increasing the maximum wait time with the "max seconds" Step. That can make tests slower. You can be tricky about it, though. If you just increase that value for one Step and then reduce it again, that can avoid slowing down your tests too much.
  • Tapping on an element might show content and move other elements around. ALKiln might fail to click on the elements that are moving. You can use the "wait" Step to give elements more time to finish moving.
  • Third-party services, like GitHub or package managers, that the tests rely on do have problems every now and then. You just have to re-run those tests.
  • An interview page rendering slowly can hide a field that ALKiln is trying to fill in. If we find that there is a particular type of field that is particularly slow to load, we can slow down ALKiln for that type of field, but that is the most ALKiln can do.
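For the moving-elements case, the "wait" Step sketched below can give the page time to settle. Check the ALKiln Step documentation for its exact wording; the 2-second pause is an arbitrary example:

```gherkin
# Give animations and repositioning elements time to finish
# before ALKiln tries to tap the next element.
And I wait 2 seconds
```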

The GitHub Sandbox™ tests have the best chance of avoiding flakiness because they get their own dedicated server, but even they can flake out sometimes.

Missing trigger variable​

This warning only matters for old 3-column Story Tables that use index variables or generic objects.

If you are using a story table with index variables (i, j, k, etc) or generic objects (x), you need to add some special HTML to the interview file where you set your default screen parts block. Without that HTML, ALKiln can get confused about which variable you are trying to set.

The 3-column Story Table is old and you should stop using it. You should switch to the current 2-column Story Table when you get a chance.

This warning sometimes shows up when nothing is wrong. If you are not using a 3-column Story Table or your interview has no index variables or generic objects, you can ignore this warning.

Switch to 2-column Story Tables​

This is for: anyone who uses a 3-column Story Table and also has index variables or generic objects.

How to switch from the 3-column Story Table to the 2-column Story Table:

  1. Remove the trigger column (the 3rd column) from the header row and every other row
  2. Replace all index variables (i, j, k, etc) or generic objects (x) in the var column (the 1st column) with their actual values.
Before

    | var                | value     | trigger                |
    | x[i].hair.how_much | Enough    | users[0].hair.how_much |
    | x[i].hair.color    | Sea green | users[0].hair.how_much |

After

    | var                    | value     |
    | users[0].hair.how_much | Enough    |
    | users[0].hair.color    | Sea green |

Add more special HTML to your default screen parts block. To summarize:

Keep the trigger HTML in your default screen parts block. It can be useful for other reasons. Add the proxy variable HTML to that default screen parts block.

Footnotes​

  1. In some cases, you will be unable to find a variable or value name in your interview code. For example, object-type fields like object_multiselect, or fields that use code, might use values that come from outside your interview code. They might be in a module of your interview, in a module of an interview you include, in a database like S3, or somewhere else. ↩ ↩2