After working on the Massachusetts Eviction Defense Tool with Rina Padua for the last 10 months, I learned a lot. Some of it was from helpful mentors like Marc Lauritsen and Caroline Robinson, who have been building interviews like this for years; some from presentations at other conferences I’ve attended over time, including at the LSC TIG conference; some from user testing; and some from our own experiences. This is all to say that these are probably not my unique insights, but I think they are worth sharing anyway.
I also learned a lot from the other sessions at Docacon 2018. To name just one example, I’m very impressed with the work that Upsolve is doing to engage, affirm, and keep users working on their forms using lessons from social psychology, and I want to dive into that further in the future. I won’t try to distill that here: watch Upsolve’s session to learn more. (It should be available on the Docacon site soon.)
1. “Dumb” forms are not worth the work
If you are building a guided interview that follows a paper form exactly, you’re probably doing it wrong. It takes a lot of time just to put the form in the computer, and it won’t save the user time to type instead of write. Clicking a checkbox on a screen is not faster than on a piece of paper.
“Dumb” computer forms have a place, but they are probably best for helping an attorney who has some way to provide the information automatically, such as from a case management system. They may also make sense as part of a larger form library, where the user’s information must be entered in multiple places, or for longer legal pleadings where fitting the form to the user with search and replace is too error-prone. But they seldom make sense for a form that is already designed to be completed on paper.
2. Ask users to state facts, not conclusions
Wherever possible, avoid asking the user to decide whether a claim or defense can be pled. Legal pleadings usually state conclusions, but a user-friendly interview needs to stick to asking about facts. It’s not realistic to state the law, expect the user to understand it, and then have them apply it to their own facts. If the question takes a page of help text to answer, you’re probably asking the wrong question.
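For example, instead of asking the user to conclude that they have a habitability defense, a Docassemble interview can ask about the underlying facts and draw the conclusion in code. A rough sketch (the variable names are hypothetical, not from our actual interview):

```yaml
question: |
  Were there any problems with the condition of your apartment?
subquestion: |
  For example: no heat, leaks, mold, broken locks, or pests.
yesno: had_condition_problems
---
question: |
  Did you tell your landlord about the problems?
yesno: told_landlord_about_problems
---
code: |
  # The interview, not the user, draws the legal conclusion
  habitability_defense_applies = (
      had_condition_problems and told_landlord_about_problems
  )
```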
3. Use visuals
Everyone knows that a picture is worth a thousand words…except lawyers, in many cases. We liked using screenshots of PDF versions of the forms we needed our users to locate and copy information from. Greenshot is a helpful tool for capturing just the portion of the screen you want and annotating it right in its built-in image editor.
4. Do the work for the user
If you need an address, let Google look it up (and avoid mistakes). If you want to know whether a form was filed within 10 days, do the math for the user. Use context to save the user time wherever possible. This shows the user that your interview is responsive to their input, makes them feel that you value their time, and may even let them forget that there is a computer at the other end rather than an attorney.
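For example, instead of asking “Was your answer filed within 10 days?”, a Docassemble interview can collect the two dates and compute the deadline itself. A minimal sketch, with hypothetical variable names:

```yaml
question: |
  When did you receive the summons?
fields:
  - Date received: summons_date
    datatype: date
---
question: |
  When did you file your answer?
fields:
  - Date filed: answer_filed_date
    datatype: date
---
code: |
  # Do the deadline math for the user instead of asking them
  from docassemble.base.util import date_difference
  answer_filed_on_time = (
      date_difference(starting=summons_date,
                      ending=answer_filed_date).days <= 10
  )
```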
The nice thing is that these improvements can be layered in over time. A future version of our interview might automatically copy information from the standard forms from an uploaded picture, instead of showing the user how to copy it from a screenshot. It was interesting to learn how Upsolve decided to pull information directly from consumer credit bureaus.
5. But … make sure you guess it right
Making the wrong inference for the user may slow the user down instead of speeding things up. If you are relying on an external API, make sure that the user can enter the information manually if the API fails.
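In Docassemble, for example, an address question can try Google’s geocoder but still accept what the user typed if the lookup fails. A sketch of the idea (not our production code):

```yaml
objects:
  - user_address: Address
---
question: |
  What is your address?
fields:
  - Address: user_address.address
    address autocomplete: True
  - City: user_address.city
  - State: user_address.state
  - Zip: user_address.zip
    required: False
---
code: |
  # geolocate() reports failure instead of raising an error,
  # so the manually entered fields still work if Google is down
  address_was_geocoded = user_address.geolocate()
  address_complete = True
```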
Make sure your interview degrades gracefully when the user doesn’t know certain information. For example, when asking the user to choose between multiple options, give them an “I don’t know” option whenever possible. Your program logic should account for this uncertainty. If the information isn’t critical to completing the form, it may be best to fall back to letting the user provide the conclusion (such as whether the form was filed late).
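A sketch of what that can look like in Docassemble (the choices and variable names are made up for illustration):

```yaml
question: |
  What kind of tenancy do you have?
field: tenancy_type
choices:
  - A written lease: lease
  - A tenancy at will: at_will
  - I don’t know: unknown
---
code: |
  # Branch on the uncertainty rather than forcing a guess; here
  # we fall back to a simpler, fact-oriented follow-up question
  if tenancy_type == 'unknown':
      tenancy_is_at_will = pays_monthly_rent_without_a_lease
  else:
      tenancy_is_at_will = (tenancy_type == 'at_will')
```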
6. Give just the right amount of help in context
Some users want to be educated by your interview, and some just want to get to the end. It is best to ask simple, fact-oriented questions that don’t need to be explained, but you can offer more information in context. Docassemble offers several tools that can satisfy the need to educate: the question and subquestion text, the “green text” glossary terms, “toasts” that pop up and then fade away, and the help text that is accessed by clicking the Help button.
We always placed information that the user needed in order to answer the question in the subquestion field. The other options were for our eager users who wanted to know the “why” and not just the “what”. In testing, we found that the help button worked best when we used the option to place it in line with the Continue button. We did not use “toasts” (Docassemble calls them “flashes”), but they could be used to display the status of background processes or to give the user affirmations.
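Here is roughly how those layers can fit together in a single Docassemble question (the help-button placement is a one-time interview setting; the wording here is invented for illustration):

```yaml
features:
  question help button: True
---
question: |
  Did you receive a notice to quit?
subquestion: |
  A notice to quit is a letter from your landlord telling you
  to move out by a certain date.
yesno: received_notice_to_quit
help:
  label: Why are we asking?
  content: |
    The date on the notice to quit affects the deadlines for
    your Answer and Discovery forms.
---
terms:
  notice to quit: |
    A written notice from your landlord ending your tenancy.
```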
7. Test, test, test and iterate, iterate, iterate
To catch bugs, we used 8 Answer and Discovery forms, covering different categories of claims, defenses, and reasons for eviction, that had been completed on paper in our group clinics. Using realistic data helped us find problems that we weren’t able to catch by speeding through the interview with fake information.
It was tempting at many points to keep adding new features instead of making sure the interview worked, but we resisted the temptation. Our first version was relatively low fidelity, without many of the time-saving features we’ve since added, but it let us get it in front of users quickly to start testing. After adding a number of new features all at once, we found ourselves having to delay user testing, which led us to put a formal feature freeze in place for a few weeks. Once the freeze ended and all of the bugs were squashed, adding new features was much easier, and we had the added benefit of feedback from testing with real users. We continued to test and make improvements based on feedback as new features were rolled out.
We made use of git branches as well (branches let you work on a temporary version of the project to add a new feature without disrupting work on the original version), but I’m still not sure how much benefit we got from them on our project. The benefit depends in large part on how hard it is to “merge” (integrate) the changes back into the original branch of the code. Keeping the code in separate logical files helped with this.
Any lessons you want to share from your own work?