ICAIL 2021: Part 1
ICAIL is the International Conference on Artificial Intelligence and Law. It is held biennially, and it is currently being “hosted” virtually in São Paulo, Brazil.
You can “attend” it virtually by going to the Associação Lawgorithm YouTube channel and following along with the live presentations today, tomorrow, and Thursday.
On the Monday and Friday of the conference there are typically a number of parallel workshops. Yesterday I followed along with the Explainable and Responsible AI in Law workshop, which was very interesting. Trevor Bench-Capon gave a presentation on how issue-based, case-based reasoning approaches allow you to generate coherent explanations for predictions about open-textured legal questions. That is the technique I followed in my Docassemble-OpenLCBR demonstration at ICAIL 2019.
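To make the idea concrete, here is a minimal, hypothetical sketch of factor-based case-based reasoning in Python. The factors, cases, and prediction rule are all invented for illustration, and this is a simplification rather than the actual OpenLCBR algorithm: a conclusion is predicted only when every on-point precedent favours the same side, and the explanation cites those precedents rather than pointing at weights in a model.

```python
# Hypothetical sketch of factor-based case-based reasoning.
# Factors, cases, and the prediction rule are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    name: str
    factors: frozenset   # factors present in the precedent
    outcome: str         # "plaintiff" or "defendant"

PRECEDENTS = [
    Case("Smith v Jones", frozenset({"info_was_secret", "security_measures"}), "plaintiff"),
    Case("Doe v Roe", frozenset({"info_was_disclosed"}), "defendant"),
]

def predict(current_factors):
    """Predict the outcome of an open-textured issue by analogy to precedents
    whose factors all appear in the current fact situation, and explain why."""
    on_point = [c for c in PRECEDENTS if c.factors <= current_factors]
    if not on_point:
        return "abstain", "No precedent is on point."
    outcomes = {c.outcome for c in on_point}
    if len(outcomes) == 1:
        side = outcomes.pop()
        cites = ", ".join(c.name for c in on_point)
        return side, f"Every on-point precedent ({cites}) favoured the {side}."
    return "abstain", "The on-point precedents conflict, so no prediction is made."

outcome, explanation = predict(frozenset({"info_was_secret", "security_measures", "reverse_engineerable"}))
print(f"{outcome}: {explanation}")
# plaintiff: Every on-point precedent (Smith v Jones) favoured the plaintiff.
```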
Katie Atkinson also gave a very interesting presentation on work her team has been doing to automate predictions about whether or not an application to a human rights tribunal will be accepted by the tribunal. Again, it is a case-based reasoning approach that involves legal experts encoding domain knowledge. She uses the ANGELIC methodology, a particular approach that uses a type of graph notation to describe the issues and factors involved. I will admit to not understanding it as well as I would like, but it seems fundamentally to be a knowledge representation approach for case-based reasoning.
The big takeaway I had from these presentations is that symbolic artificial intelligence, even for prediction on open-textured legal questions, still has significant advantages over machine learning approaches when it comes to explainability. There is some interesting work on whether we can test machine learning algorithms for consistency with certain symbolic representations, but that work has not progressed very far, and it depends on there being some symbolic analogue for the information included in the machine learning model, which in practical use cases there typically is not.
The explanations generated by the case-based reasoning systems described on Monday are less sophisticated than those generated by L4-Docassemble, which has been demonstrated only on statutory reasoning tasks. That gives me some confidence that the stuff I have been working on for the last few years really is pushing the boundaries of what’s possible.
Combining the technology for statutory interpretation with case-based reasoning is very possible, and desirable, but as far as I am aware it has not actually happened in any deployed tool. It is still on my to-do list: I have built all the parts; I have just never put them together in the same tool.
There is also an interesting bit of terminology I learned: what the word “responsible” means in the context of automated legal reasoning. Explicability is different from explainability, where an explanation refers to things in human terms and considers counter-arguments. But “responsible” automated reasoning can not only explain itself; its explanations serve as subjectively good justifications for its decisions, according to some criteria.
It’s not clear to me what constitutes a sufficiently good justification for an AI to be considered “responsible” in this way. The highest standard we have for explanations of decisions when dealing with human beings is whether the loser is satisfied that their argument was fairly considered, which is an entirely subjective standard and unlikely to track things like “fairness” very well. Measuring whether an explanation is “fair” would be more useful, to my mind, than measuring whether it makes disappointed parties go away. But fairness is difficult to define separately from whether the parties feel they have been treated fairly by a decision.
Evidently there is a persuasive paper on what constitutes a good explanation that many of the participants referred to. At first blush, it seems to me that there is a direct correlation between what is considered a good explanation and the kinds of capabilities you get from using answer set programming to represent legal knowledge. That encourages me that working with tools like s(CASP) is a step in the right direction, and that it makes sense to build systems that let the user modify the inputs and see whether the changes make any difference to the conclusion reached.
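As a concrete illustration of that last point, here is a hypothetical sketch in Python of the “does changing this input change the conclusion?” check. In a real system the rules would be encoded in s(CASP) and re-queried for each variation; here a plain Python function stands in for the rule base, and the benefit rule and fact names are invented for illustration.

```python
# Hypothetical sketch: which inputs would change the conclusion if varied?
# A plain Python function stands in for a rule base encoded in s(CASP).

def entitled_to_benefit(facts: dict) -> bool:
    # Stand-in rule: entitled if resident and either over 65 or disabled.
    return facts["resident"] and (facts["age"] >= 65 or facts["disabled"])

def relevant_inputs(facts: dict, alternatives: dict) -> list:
    """Return the inputs whose alternative values would flip the conclusion."""
    baseline = entitled_to_benefit(facts)
    flips = []
    for key, other_value in alternatives.items():
        varied = {**facts, key: other_value}
        if entitled_to_benefit(varied) != baseline:
            flips.append(key)
    return flips

facts = {"resident": True, "age": 70, "disabled": False}
print(relevant_inputs(facts, {"resident": False, "age": 40, "disabled": True}))
# ['resident', 'age'] -> residency and age matter here; disability status does not.
```

Being able to show a user which of their answers actually drive the outcome is, I think, a big part of what makes an explanation feel like a justification rather than a printout.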
I will be presenting my extended abstract on using constraint answer set programming to improve legislative drafting tomorrow morning at 11:00 AM GMT, if you would like to join. The talk should also be available on the YouTube channel thereafter.