I'm interested in different notions or aspects of execution and how
they relate to each other: in particular reversible, incremental, lazy
and parallel computation.
These concepts are more closely related than they might seem on first
inspection. For example, lazy evaluation exploits the *parallel
structure* of a computation, i.e. the fact that some parts can be
executed independently of others. Biswas first noticed this in
his PhD
thesis on program slicing.
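
To see the connection concretely, here is a minimal Haskell sketch (an illustration of the general point only, not Biswas's construction): two subcomputations are mutually independent, and lazy evaluation forces only the one that is actually demanded. That independence is exactly the structure a slicer exploits.

```haskell
import Debug.Trace (trace)

-- Two subcomputations that are independent of one another:
-- neither needs the other's result.
left, right :: Int
left  = trace "forcing left" (2 + 3)
right = trace "forcing right" (error "never needed")

-- Lazy evaluation runs only the demanded component of the pair;
-- the independent 'right' component is never forced.
main :: IO ()
main = print (fst (left, right))
-- prints "forcing left" followed by 5; "forcing right" never appears
```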

A reversible computation is one where you can scrub backwards and
forwards in time, as in a time-travel debugger. An incremental
computation is one that responds to input changes by *undoing*
and *redoing* affected parts of the computation. One can think of
this as stepping sideways into an alternate universe where a different
course of events took place. I call this *retroactive update*. Here is a blog article I wrote about what retroactive update might mean for open systems.
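
As a toy illustration of retroactive update (my own hypothetical names, and a drastic simplification: real incremental systems discover the dependency structure automatically), consider a computation over two inputs where changing one input redoes only the part that depends on it, reusing the cached work for the other:

```haskell
-- A computation over two independent inputs.
data Inputs = Inputs { inX :: Int, inY :: Int }

square :: Int -> Int
square n = n * n

-- Run from scratch: both parts are computed.
run :: Inputs -> (Int, Int)
run (Inputs a b) = (square a, square b)

-- Retroactive update of 'x': undo and redo only the component that
-- depends on it, stepping sideways into the alternate run where the
-- new input was supplied all along.
updateX :: Int -> (Int, Int) -> (Int, Int)
updateX newX (_, cachedY) = (square newX, cachedY)

main :: IO ()
main = do
  let old = run (Inputs 3 4)   -- (9, 16)
  print (updateX 5 old)        -- (25, 16): y's work reused unchanged
```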

Since 2001 I've been interested in “transparent” programming models where every value comes equipped with an account of how it was computed. James Cheney coined the nice phrase *self-explaining computation* to describe such a system. For distributed computations, being able to discover what happened after the fact is essential, since we will usually lack the authority, or even the means, to restart the computation in a debugger.
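
The simplest version of the idea is a value paired with a record of its own derivation. Here is a minimal sketch (my own toy datatype, not Cheney's formulation):

```haskell
-- A value together with an account of how it was computed.
data Traced a = Traced { value :: a, because :: String }

-- A literal explains itself.
lit :: Show a => a -> Traced a
lit v = Traced v (show v)

-- Lift a binary operation so its result carries an explanation
-- assembled from the explanations of its arguments.
op2 :: String -> (a -> a -> a) -> Traced a -> Traced a -> Traced a
op2 name f (Traced v1 e1) (Traced v2 e2) =
  Traced (f v1 v2) ("(" ++ e1 ++ " " ++ name ++ " " ++ e2 ++ ")")

main :: IO ()
main = do
  let r = op2 "+" (+) (lit 1) (op2 "*" (*) (lit 2) (lit 3))
  putStrLn (show (value r) ++ " because " ++ because r)
  -- prints: 7 because (1 + (2 * 3))
```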

In our 2012 ICFP paper, we made a connection between self-explaining computation and reversibility: a trace of a computation *explains* a result *v* precisely when it contains enough information to execute backwards from *v* to a slice of the original program which preserves its capacity to execute forwards to *v*. In later work we extended this idea to a labelled transition semantics for reversible pi calculus.
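
The flavour of backward slicing is easy to convey on a toy language (a drastic simplification of mine: the paper slices with respect to a trace of the evaluation, and for a full lambda calculus rather than the first-order fragment below):

```haskell
-- Expressions, with Hole standing for a discarded part of the program.
data Expr = Lit Int | Add Expr Expr | Pair Expr Expr | Hole
  deriving Show

-- A demand on the result: all of it, one side of a pair, or nothing.
data Demand = All | DFst Demand | DSnd Demand | None

-- Backward slicing: keep just enough of the program to re-execute
-- forwards to the demanded part of the result.
slice :: Demand -> Expr -> Expr
slice None     _          = Hole
slice (DFst d) (Pair l _) = Pair (slice d l) Hole
slice (DSnd d) (Pair _ r) = Pair Hole (slice d r)
slice _        e          = e

main :: IO ()
main = print (slice (DFst All) (Pair (Add (Lit 1) (Lit 2)) (Lit 99)))
-- Pair (Add (Lit 1) (Lit 2)) Hole: a slice that still computes the
-- first component of the original result.
```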