Trying to find the key idea here, which I'm not picking up in the abstract.
> 3.4 Evaluation Strategy
> The confluence of our semantics enables us to soundly step statements in unusual orders. However, for an implementation, we need an actual evaluation strategy. Sequential, call-by-value evaluation would step statements in order. Lazy evaluation would work backward from the return variable, stepping statements needed to turn it into a value. By contrast, the idea of opportunistic evaluation is to step as many statements as we can, anywhere we can. Some statements may not be able to step yet, e.g. a function call where the function does not yet have a known definition, but we will step all the rest.
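For rough intuition only (this is a sketch, not the paper's actual mechanism), here is a minimal Python illustration of the contrast: sequential evaluation awaits each external call in program order, while an opportunistic scheduler steps every statement that is ready, so independent calls overlap. The `slow_api` coroutine is a hypothetical stand-in for an LLM/API request.

```python
import asyncio
import time

# Hypothetical stand-in for a slow external call (e.g., an LLM request).
async def slow_api(name: str, delay: float = 1.0) -> str:
    await asyncio.sleep(delay)  # simulate network latency
    return f"result({name})"

# Sequential, call-by-value order: statements step strictly in order,
# so the second call waits on the first even though they are independent.
async def sequential() -> str:
    a = await slow_api("a")
    b = await slow_api("b")
    return f"{a} + {b}"

# Opportunistic order: step every statement that can step; the two
# independent calls run concurrently, finishing in ~1s instead of ~2s.
async def opportunistic() -> str:
    ta = asyncio.create_task(slow_api("a"))
    tb = asyncio.create_task(slow_api("b"))
    a, b = await asyncio.gather(ta, tb)
    return f"{a} + {b}"

for variant in (sequential, opportunistic):
    start = time.perf_counter()
    result = asyncio.run(variant())
    print(f"{variant.__name__}: {result} in {time.perf_counter() - start:.1f}s")
```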
> We demonstrate the versatility and performance of [our implementation in Python], focusing on programs that invoke heavy external computation through the use of large language models (LLMs) and other APIs. Across five scripts, we compare to several state-of-the-art baselines and show that opportunistic evaluation improves total running time (up to 6.2×) and latency (up to 12.7×) compared to standard sequential Python, while performing very close (between 1.3% and 18.5% running time overhead) to hand-tuned manually optimized asynchronous Rust. For Tree-of-Thoughts, a prominent LLM reasoning approach, we achieve a 6.2× performance improvement over the authors’ own implementation.
Reminds me a little of Haxl: https://engineering.fb.com/2014/06/10/web/open-sourcing-haxl...
Is there a public repository with the code?
I believe it's under https://github.com/stephenmell/opal-oopsla2025-artifact/tree... / https://doi.org/10.5281/zenodo.16929279