U.S. Air Force Medical Discharge Assistant

By Jeff Price

I wrote a chatbot, which you can find here, that helps active duty U.S. Air Force members navigate the medical discharge process. To write my bot, I used a tool called QnA Markup. Basically, this tool creates an interactive decision tree by nesting questions and answers. The source code for my final bot can be found here.
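To give a feel for the format, here is a hedged sketch of what nested QnA Markup looks like (based on my reading of the QnA Markup documentation; the question and answer text below is illustrative, not copied from the actual bot):

```
Q: Do you know what the Disability Evaluation System (DES) is?
    A: Yes GOTO: stages
    A: No
        Q: The DES is how the military evaluates whether an injured member can continue to serve. Ready to continue?
            A: Yes GOTO: stages

Q(stages): Which stage of the process do you have questions about?
    A: The Medical Evaluation Board (MEB)
    A: The Physical Evaluation Board (PEB)
```

Each `Q:` presents a question, each indented `A:` becomes a clickable answer button, and `GOTO:` jumps to a question labeled with `Q(label):`, which is how a bot can loop users back to a main menu.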

The Client & The Problem

My client is Attorney Tara Gaston. As documented in the notes from our first telephone conversation, Attorney Gaston's partner was medically discharged from the Navy, but was confused about their rights and next steps at each stage of the discharge process. This is an issue because confusion and missing documentation can force a service member to loop back through previous steps to complete unfulfilled requirements or to secure a different result. This draws the process out, delaying receipt of any benefits, and can even lead to unintentionally waiving certain rights (e.g., the right to appeal certain findings of fact). One of the biggest issues Attorney Gaston saw firsthand as her partner navigated this process was the fear and anxiety created by high uncertainty about what came next.

Attorney Gaston wanted to create an automated tool that would clarify the medical discharge process, allow service members to advocate for themselves, and delineate when outside legal assistance is permissible. Such a tool would allow service members to understand future steps in the process, and help allay the fears Attorney Gaston's partner experienced.

While Attorney Gaston's partner is prior Navy, I currently work for the Air Force. A tool covering all the process eccentricities among the various military branches was beyond the scope of this project, so Attorney Gaston and I agreed that this tool would be designed to serve Air Force service members. Focusing on the Air Force still addressed the overall goal of making the military Disability Evaluation System (DES) easier to navigate (for at least one military branch), and my familiarity with Air Force processes and documentation repositories, along with ready access to potential test users, made this a convenient design choice.

Introductory Pitch

After initial consultation with Attorney Gaston and some preliminary research, the outline of a solution began to take shape. The creation of a chatbot to solve the above problem was pitched in class using this slide deck, which included expected challenges, planned next steps, and initial research source material.

The User

The target users for this product are active duty U.S. Air Force personnel who have been injured during service and want more information about the DES. This includes both Officer and Enlisted personnel. Due to time constraints, I had to de-scope support of the Air National Guard (ANG) and Air Force Reserve (AFR) components. The process for these groups is very similar to that of the active duty component, but there are enough differences that accommodating ANG and AFR personnel during the timeframe of this project would have been too difficult.

The target user need not be at any particular point in the medical discharge process; the tool is designed to assist at any point throughout the DES. Further, the chatbot does not presuppose any prior knowledge of the DES. Limitations of the QnA Markup language and my own lack of computer science skills meant that sight-impaired users could not be accommodated during this project. Any potential user does need access to a desktop computer and the internet.

The Process

The design process began with extensive research, some prototyping, and then much more research. After a few iterations, user testing of a beta chatbot informed the final edits.

Research

This project required far more research than I had originally anticipated, and definitely an order of magnitude more research than any of the other three projects for this class. In my current job I am familiar with the overlapping regulations of Department of Defense Instructions (DoDI) and Manuals (DoDM); Air Force Instructions (AFI), Memoranda, Policy Directives, and Guides; U.S. Codes; and dependencies on the rules/instructions of other Federal agencies. However, cancelled/rescinded source material, as well as the integration of two previously separate processes (the DoD DES and the Department of Veterans Affairs Disability System), meant that I was tripped up more than once by prototyping against outdated regulations. In the end, I used the latest versions of DoDI 1332.18 and 1332.39; DoDM 1332.18 Volumes 1-3; AFI 36-3208; and a Secretary of Defense Memorandum. All told, these documents alone comprise 471 pages, and that does not include the outdated volumes I was mistakenly using for a time. While I didn't read every section of these materials in fine detail, I did scan the entirety of all these documents and relied heavily on specific detailed readings from each (especially DoDI 1332.18 and DoDM 1332.18, Volume 2). Further, my research uncovered many online resources, some of which I link to from the chatbot.

During my search for currently existing solutions, I came across several websites with good overviews of the DES. Some had nice flowcharts, others included more detailed written outlines. As might be expected, each online resource was heavy on certain details and light on others. I believe this chatbot fills a gap by providing both a level of interaction absent from other resources I found, as well as a more comprehensive source of information. This chatbot references other resources to a degree I didn't see elsewhere, including links to some of the better website content as well as actually linking to the relevant DoD regulations.

Prototyping

As discussed above, early analysis concluded that ANG and AFR support would be too resource-intensive to incorporate, so this was pulled out of one of the early beta bots. Further, I initially wanted to give users a direct answer as to what their disability rating percentage would be. I was also going to ask about years of service, as you can see from this beta, which combined with disability rating information can predict whether a service member will be offered medical retirement instead of a lump sum separation bonus. However, additional research soon revealed that these details would require very extensive decision trees within the chatbot. Further, even if the chatbot were to provide a user with an estimated outcome, the disability ratings are so fact-dependent (based on the findings of Veterans Affairs physicians) that such guesses would be of little true value and perhaps even misleading. Thus, these features were scrapped, and future prototypes focused more on providing information about the process itself rather than trying to predict individual outcomes.

That beta was also mistakenly informed by outdated legacy DES policy. The general flow of the current DES is still similar to its predecessor, but you can see the differences in the verbiage between that version and the final chatbot. For example, in that beta there is no mention of VA integration into the DES process, and the Medical Evaluation Board (MEB) was originally focused on findings of fitness for a service member, whereas now the MEB is more focused on confirming the facts surrounding a service member's medical condition(s). Subsequent research brought these details to light and resulted in some of the changes seen in the final version of the chatbot.

User Testing

After further revision, this later beta was eventually distributed to some test users. I contacted eight military personnel I know at Hanscom Air Force Base, trying to get input from a healthy mix of junior and career personnel to represent the range of possible users. I sent out this generic feedback form to document their input. Four of these users responded in time for me to incorporate their feedback into the final chatbot. See feedback 1, feedback 2, feedback 3, and feedback 4. Some of these users tested the chatbot more extensively than others. Their findings are discussed in the refinement section below.

I also received verbal feedback from my client, Attorney Gaston, regarding her testing of the chatbot. She completed all decision paths to the best of her knowledge, and had minimal critiques. I documented her input in the notes from our final phone conversation, which I link to in the Real-World Viability section below. Her testing was more focused on the accuracy of the content since she knows the medical discharge process well, and her feedback was that the substantive content of the chatbot did not need any revisions.

Refinement

As you can see from the feedback worksheets, several testers keyed in on the same deficiencies. I updated the chatbot to incorporate the following input:

1) Some of the answer options didn't have a GOTO looping that selection back to the main chat path. This was an easy fix.

2) Attorney Gaston's one substantive comment was that the Formal Physical Evaluation Board (FPEB) is discussed in the Informal PEB appeals section, before the FPEB is introduced anywhere else. A user seeing the FPEB mentioned there for the first time would not know what it is, so I added another option to that section briefly explaining the FPEB and its purpose.

3) A common complaint was that links to external content were not opening in new windows. Again, this was an easy fix.
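To make fixes (1) and (3) concrete, here is a hedged sketch of what they look like in the markup (again, syntax per my reading of the QnA Markup documentation, and assuming QnA Markup passes inline HTML through to the rendered page; the label, link text, and URL are placeholders, not taken from the actual bot):

```
Q(main): What would you like to know about the DES?
    A: What does the Physical Evaluation Board (PEB) do?
        Q: The PEB determines whether you are fit for continued service. More detail is in <a href="https://example.com/peb" target="_blank">this overview</a>. Anything else?
            A: Yes GOTO: main
            A: No, that's all I needed.
```

The `A: Yes GOTO: main` line is the looping fix from (1), and `target="_blank"` is the kind of attribute that makes an external link open in a new window per (3).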

There were some tester inputs that were not incorporated, mostly due to the technical limitations of the QnA Markup language and/or my computer skills. These included:

1) There was a request for a mouse-over tooltip displaying the content of a button selection before it is clicked. I don't know if that can even be done in QnA Markup, and I didn't have time to explore the feasibility of this feature.

2) There were a few requests to delete previous text bubbles when reverting back to earlier parts of the chat conversation. Again, I believe this is a limitation of the markup language, but I also personally see the current behavior as beneficial, since it allows a user to scroll up and review previous answers and/or previously defined acronyms. Even if this change were technically feasible, I would want to conduct some specific A/B testing to gauge the strength of this user preference.

3) There was also a request for a static display of reference material alongside the chatbot, such as in a side pane. I think this would be a useful feature to incorporate if QnA Markup would allow it, but again this seemed to stretch my technical skills, and I didn't have time to pursue it further.

In addition to the above refinements, I found and fixed a few typos and made some verbiage edits from the tested beta to the final version.

Real-World Viability

Per my final phone call with the client, documented here, I am very excited that Attorney Gaston believes this chatbot is ready for real-world deployment. Long term, I do not know where this chatbot will reside, but as discussed in the sustainability section below, I plan to keep in touch with Attorney Gaston and support the bot for the foreseeable future.

Impact and Effectiveness

As previously discussed, I believe this chatbot fills a current gap in the manner and comprehensiveness of the information it provides. The client was very pleased, and she stated the delivered solution exceeded her expectations. Further, all of the user feedback forms indicated the testers would themselves use this chatbot if they had to navigate the DES, and they would recommend it to other service members with similar needs.

It is difficult to quantify the impact of this chatbot, since its original goal was to ease fears and facilitate self-advocacy, rather than to increase productivity or bottom-line profit. I do think, as an interactive tool, this bot can more quickly provide an overview of the DES by allowing the uninitiated to drill down directly into the phases they have the most questions about. How much faster this education will be with this tool is hard to say.

Attorney Gaston did state several times during our final call that she believed this would alleviate a lot of anxiety for service members, especially those who are just beginning or are anticipating entering the DES. In that regard, this represents a vast improvement over the status quo.

Fit/Completeness

The stated goal from the Initial Conversation with Attorney Gaston was to "allow service members to find info to advocate for themselves or realize when to seek counsel." As outlined above, this chatbot directly addresses that problem, and judging by the feedback almost all users would see this as a great improvement over the status quo.

Documentation

All documentation for the content of the tool itself is contained within the chatbot, except where external references are linked.

All documentation for this project, including the chatbot .html file and source code, has been uploaded to GitHub.

The Future

With more time and research there are several improvements I could make to the current chatbot. First, I would expand the content to support the other Air Force components (ANG and AFR). Next, the bot could be expanded to support the other military branches. This would be the most arduous improvement, as I'm sure it would require extensive additional research to properly capture each service's nuanced implementation of DoD regulations.

Further improvement could see the content transferred out of QnA Markup into a language allowing for more user features. Such a refactoring might also facilitate mobile use.

Sustainability

Attorney Gaston is already planning to forward the link to the chatbot to other people. She and I agreed that I will continue maintaining this chatbot for the time being. I think it turned out really well, and I included my email in the chatbot signoff in order to support additional user feedback. I plan to explore further where the bot will reside, but one option I have in mind is an Air Force server, which would provide a permanent long-term home.
