I’m going to share a little technique I’ve used to store some of my guided interview logic externally. I designed it just for myself to save time, so the implementation I’m sharing is rough around the edges. I’ve done this in both HotDocs and Docassemble (I created a small Google Apps Script add-on to implement it for HotDocs), but below I’ll go into detail about how to do it in Docassemble. As I keep finding in the A2J realm, this turns out to be a reinvention, and there are a few other ways to approach the same problem. I learned from Meng Wong that my model is closest to something called a DMN (Decision Model and Notation), a standard from the business process modeling world. The basic approach is to store a list of conditional rules, to be applied in the interview, in an external data source. For ease of updates, I use Google Sheets as that data source.

Why store logic externally?

There are a few reasons I’ve taken this approach rather than hardcoding the rules in Python:

  • Make it easy to add new rules and update old rules as I get feedback from subject matter experts
  • Make it easy to visually or programmatically confirm that I’ve covered all of the rules
  • Save typing and reduce errors

[Flowchart: interview file drawing on an external data source that contains the logical rules]

Externalizing the logic adds a second layer of abstraction in the flowchart above. The interview file still contains interview flow logic. The externalized rules should be limited to those that need to be more flexible or represent logic that isn’t limited to the interview’s flow. For my tenant eviction interview, there is an Answer file which contains claims and defenses. Those are controlled directly in the interview YAML file for now. The external logic pulls in rules that connect each claim and defense to related discovery items. It might make sense at some point to also externalize the logic for the more complicated claims and defenses that involve multiple facts.

How does it work?

There are a few different components to this: the Google Sheet itself, which is just a spreadsheet with three columns; a Python module that connects to Google Sheets and loads the sheet into a Python dictionary; and some Python code that “executes” the rules. I’ll walk through each component below.

Data Table

Column A contains an existing Docassemble variable name which represents a claim, defense or fact. Column B represents a possible value of the claim or fact. Together, they represent a logical test. Column C represents the action to take if the test is True. In my code, the only action taken is to set the variable in Column C to True. In Python code, the test could be represented as follows:

if A == B: C = True
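To make the three columns concrete, here are a few hypothetical rows (the variable names and values are invented for illustration; they are not taken from the real interview):

```
Claim,Value,Discovery Item
nonpayment_of_rent,TRUE,request_rent_ledger
notice_type,30-day,request_copy_of_notice
defense_repairs,TRUE,request_inspection_reports
```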

Notice that we have to special-case some variable types, such as true/false values, because the CSV file stores only the strings “TRUE” and “FALSE”, not Boolean values. I’ll show how I handle this at the end; there may be a better approach.
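A minimal sketch of that special-casing might look like the helper below (plain Python, not the code from my interview):

```python
def coerce_cell(value):
    """Convert the CSV's "TRUE"/"FALSE" strings to real booleans.

    Any other cell value is returned unchanged as a string.
    """
    if value == "TRUE":
        return True
    if value == "FALSE":
        return False
    return value
```

With this in place, a rule’s column B value can be compared directly against a true/false interview variable.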

Loading the external data

Docassemble’s path_and_mimetype function allows you to fetch a file from a URL, and Google Sheets allows you to publish a spreadsheet as a CSV file that is available at a public URL. To publish the sheet as a CSV file in Google Sheets, select File | Publish to the web.
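Outside of Docassemble, the loading step can be sketched in plain Python; inside an interview you would pass the published URL to path_and_mimetype and open the local path it returns. The column headers here are my own invented examples:

```python
import csv
import io

def load_rules(csv_text):
    """Parse the published CSV into a list of one-dict-per-row rules."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Inside Docassemble, something along these lines fetches the sheet
# (SHEET_URL is the "publish to the web" CSV link):
#   path, mimetype = path_and_mimetype(SHEET_URL)
#   with open(path, encoding="utf-8") as f:
#       rules = list(csv.DictReader(f))
```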

The Docassemble website has instructions for a different approach that makes use of the Google API instead of the Google Sheets publish-as-CSV option. For a more robust approach, you could also use a SQL database or something like Airtable.

Execute the stored logic

The code below shows how to put it all together: it uses the CSV loader function to pull the rules from Google Sheets. In lines 23–42, I also store the titles and categories of the discovery objects in Google Sheets, but that isn’t necessary to make use of this approach.

In lines 47 through 64, we iterate through the list of rules (each of which we store in a variable named lemma). If we find that the lemma’s test is true, we set the variable in column C to true. In my example, this is an item in a dictionary.

Line 60 is the key line that runs the test:

if defined(lemma['Claim']) and value(lemma['Claim']) == comparison:
We first check that the variable is defined, so we don’t generate an error by referencing a variable that doesn’t exist, and then compare it against the temporary comparison value defined in column B. Then, in my code, we set the checked attribute of a DiscoveryRequest object to True if the comparison matches.
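The same loop can be sketched outside of Docassemble with plain dictionaries standing in for defined() and value(). The column names are the invented ones from my earlier example, not necessarily those in my real sheet:

```python
def apply_rules(rules, answers, flags):
    """Set a flag for every rule whose test matches the answers.

    rules:   list of dicts with "Claim", "Value", and "Discovery Item" keys
    answers: interview variables collected so far (stands in for value())
    flags:   dict of discovery items to mark True when a rule fires
    """
    for lemma in rules:
        # Skip rules whose variable isn't defined yet, mirroring defined()
        if lemma["Claim"] not in answers:
            continue
        comparison = lemma["Value"]
        # The CSV stores booleans as the strings "TRUE"/"FALSE"
        if comparison == "TRUE":
            comparison = True
        elif comparison == "FALSE":
            comparison = False
        if answers[lemma["Claim"]] == comparison:
            flags[lemma["Discovery Item"]] = True
    return flags
```

In my actual interview the “flag” is the checked attribute of a DiscoveryRequest object rather than a dictionary entry, but the control flow is the same.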

This code is begging to be refactored and stored inside a module, but I hope the basic approach I’ve taken is helpful to see. Keep in mind that there are some messy aspects that keep this from being a perfect example. For one, I renamed the variables (originally I had ints.claim_name instead of ints['claim_name']) without renaming them in the CSV. It’s also likely more complicated than necessary to store both the rules and the variable names in the spreadsheet. And finally, my example uses a custom object, but you may want to use this to set a simple dictionary value to True instead. I may go back and rewrite and modularize this before I use it in another project, but I’m going to leave the working code in place for now.

Conclusion

As you can see, storing logic externally takes some effort up front. It’s probably not worthwhile for a short interview, but the time savings are there for a longer one. In addition, once you’ve begun separating out the legal rules, you can easily reuse the logic in multiple interviews. At Docacon, Jonathan Pyle talked about wrapping legal rules in objects. Those objects could easily be represented in a DMN-style module like the example code above. What’s more, your subject matter experts might find it easier to edit a spreadsheet or Airtable than to write or read Python to make sure your logic captures the law correctly.

