Incomplete efforts to automatically discover, check, and/or explain properties of federated execution #2242

petervdonovan started this conversation in Show and tell
This post includes the results of the project that I worked on last semester. I am posting about it now because the project is about to be indefinitely shelved, and because I hope the statements that I formulated and empirically checked might be relevant to people who currently rely on axioms about federated execution or who are documenting how it works in detail.
I feel that some discussions about federated execution are hindered by a few recurring challenges.
These challenges make it difficult for many of us (including me) to make true statements about federated execution without mistakes, and I am under the impression that this has led to an attitude of skepticism about even basic facts about federated execution.
Documenting federated execution manually seems like a great idea to me, but because of these challenges, I am interested in and excited about automated ways of discovering or checking properties of the implementation. Such techniques might supplement manual documentation or help us validate it.
Currently, I think that my (still incomplete) work here can at least kind of help us do the following:
- exercise the implementation of the federation protocol under many different message orderings (the `protocol-test` directory of the repo, together with the `ordering-server` directory)
- check logical properties against execution traces (the `trace-ord` directory); a toy sketch of this kind of check appears below
- explain those properties in natural language (the `explain` directory, which is currently in the `explain` feature branch)

My dream is basically to be able to produce correct, meaningful statements about federated execution automatically. Arguably the biggest missing piece here is the program synthesis aspect: automatically generating the logical properties instead of writing them manually. I still think this is possible, but while I was trying to implement it, I got bogged down in complicated refactorings. I realize that specification mining and program synthesis are already pretty well-trodden paths (i.e., there is lots of prior work), and in the foreseeable future I'm not even trying to write a paper about this.
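To make the trace-checking idea concrete, here is a minimal sketch of checking an ordering property against a recorded trace. To be clear, this is not the actual `trace-ord` implementation: the event record format, the event kinds, and the example property are all made up for illustration.

```python
# A toy trace checker: every event matching `antecedent` must be
# preceded somewhere in the trace by an event matching `consequent`.
# The Event format and the example property below are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    federate: str  # which federate the event concerns
    kind: str      # e.g. "send_TAG", "recv_LTC" (made-up names)
    tag: int       # logical tag, simplified to a single integer

def check(trace: list[Event], antecedent, consequent) -> bool:
    seen: list[Event] = []
    for e in trace:
        if antecedent(e) and not any(consequent(e, f) for f in seen):
            return False  # found a counterexample in this trace
        seen.append(e)
    return True

# Toy property: a tag advance grant (TAG) for a federate must be
# preceded by a logical tag complete (LTC) from that federate at an
# earlier tag. (Illustrative only; not a verified axiom.)
trace = [Event("fed_a", "recv_LTC", 0), Event("fed_a", "send_TAG", 1)]
print(check(
    trace,
    antecedent=lambda e: e.kind == "send_TAG",
    consequent=lambda e, f: f.kind == "recv_LTC"
        and f.federate == e.federate and f.tag < e.tag,
))  # True on this toy trace
```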
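As for the missing synthesis piece, the well-trodden path I have in mind is roughly Daikon-style specification mining: enumerate candidate properties from a small grammar and keep only those that hold on every observed trace. Here is a toy version of that loop, reusing the hypothetical `Event` and `check` definitions from the sketch above:

```python
# Toy specification mining: enumerate candidate "X is preceded by Y"
# rules and keep those that hold on all observed traces. Survivors are
# conjectures to be reviewed or tested further, not proven facts.
from itertools import product

KINDS = ["send_TAG", "recv_LTC", "send_PTAG", "recv_NET"]  # made-up event kinds

def mine(traces: list[list[Event]]) -> list[tuple[str, str]]:
    surviving = []
    for later, earlier in product(KINDS, repeat=2):
        if later == earlier:
            continue
        antecedent = lambda e, k=later: e.kind == k
        consequent = lambda e, f, k=earlier: f.kind == k and f.tag <= e.tag
        # Skip rules whose antecedent never fires; they hold vacuously.
        fires = any(antecedent(e) for t in traces for e in t)
        if fires and all(check(t, antecedent, consequent) for t in traces):
            surviving.append((later, earlier))
    return surviving

for later, earlier in mine([trace]):
    print(f"every {later} is preceded by a {earlier} at an earlier or equal tag")
```

The hard part, of course, is choosing a grammar of candidate properties that is rich enough to express the properties that matter without drowning in spurious survivors.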
This workflow is discussed in more detail in this PDF, which, I should warn, is a school report that was written in a bit of a rush and has not been well edited. I wouldn't really recommend reading it from start to finish, but I'm sharing it anyway in case anyone wants more explanation.
The natural-language explanations are here. The fact that they are LLM-generated is partly just for fun, but it is also because I wanted to make sure that everything was explained well enough that the LLM could "understand" the basic ideas and re-explain them back to me. A lot of the machine-generated explanations are wrong, but overall I feel happy about them. The explanations partially relieve the reader of having to interpret the ugly syntax that I made up ad hoc to express the logical rules. Additionally, because the explanations are generated by an LLM with a low temperature parameter, they are not as idiosyncratic or as pointlessly obscure as they would have been if I had written them myself. I think these explanations are actually helpful, subject to the caveats above (in particular, that some of them are wrong).
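For what it's worth, generating explanations at low temperature is straightforward with any chat-completion API. The post does not say which model or client was used; the sketch below assumes the OpenAI Python client and a made-up rule string, purely for illustration.

```python
# Sketch of asking an LLM to re-explain a mined rule at low temperature.
# The model choice, prompt, and rule syntax are assumptions here, not
# necessarily what the project actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rule = "(send_TAG f t) requires (recv_LTC f t') where t' < t"  # ad-hoc syntax

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    temperature=0.0,      # low temperature: less idiosyncratic phrasing
    messages=[
        {"role": "system",
         "content": "You explain ordering rules about a federated runtime in plain English."},
        {"role": "user",
         "content": f"Explain this rule in one or two sentences: {rule}"},
    ],
)
print(response.choices[0].message.content)
```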
I can merge this work into `master` if anyone thinks that would actually be helpful.
Replies: 1 comment, 1 reply

- This looks really interesting. The links to the PDF and the natural-language descriptions don't work for me. Perhaps you forgot to push the files?