PPDP 2022 Accepted Papers with
Abstracts
James Smith, Xiangyu Guo and Ajay Bansal.
A Predicate Construct for Declarative
Programming in Imperative Languages
Abstract:
Imperative and object-oriented programming languages are
among the most common languages for general-purpose
programming.
These languages work well for handling many common tasks
necessary for most applications.
However, many hard problems remain
difficult to implement directly in imperative languages.
Declarative languages have worked well for solving many of
these problems by providing a syntax that allows the user to
focus on modeling the problem rather than on designing an
algorithm.
Logic programming languages, like Prolog, have seen success
in constraint satisfaction problems, logical databases, and
various NP-hard problems.
Unfortunately, these languages have not seen the same
success in general-purpose programming, and most of the problems
they solve do not exist in isolation.
Furthermore, many imperative programmers are still
unfamiliar with and unaware of logic programming.
In this work, we aim to integrate a logical predicate,
borrowed from logic programming, into an imperative language to
facilitate a model-based approach to programming.
Rather than attempting to embed Prolog directly into the
language, we provide a logic for reasoning over imperative
expressions using the common Boolean operators familiar to
imperative programmers.
While logic programming relies on unification, our
predicate relies on variable assignment, which is standard in
imperative languages.
We have implemented the predicate in the syntax of the
Python programming language, along with a solver built into the
runtime for answering queries on the predicate.
We demonstrate the use of the construct by solving
well-known problems, such as graph coloring and the N-Queens
problem.
To evaluate the viability of our predicate, we compare
its performance against current implementations of Prolog.
By bringing this predicate construct to imperative
languages, it is our hope that we can help to bridge the gap
between imperative and logic programming as well as provide an
approach to logic programming that is more intuitive for
imperative programmers.
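The abstract names graph coloring and N-Queens as demonstration problems. The paper's predicate construct has its own syntax, not shown here; as a rough illustration of the model-based style it targets, here is a minimal N-Queens solver in plain Python, with `itertools.permutations` standing in for a real solver:

```python
from itertools import permutations

def n_queens(n):
    """Model-based N-Queens: a solution is any permutation of columns
    (one queen per row and per column) in which no two queens share a
    diagonal; the search itself is plain brute-force enumeration."""
    for cols in permutations(range(n)):
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            yield cols

# Each solution maps row -> column; the 4x4 board has two solutions.
print(list(n_queens(4)))  # [(1, 3, 0, 2), (2, 0, 3, 1)]
```

A dedicated predicate construct would let a built-in solver, rather than explicit enumeration, find the satisfying assignments.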
Abstract:
DLV2 is an AI reasoner for Knowledge Representation and
Reasoning which supports Answer Set Programming (ASP) – a
logic-based declarative formalism, successfully used in both
academic and industrial applications. Given a logic program
modelling a computational problem, an execution of DLV2 produces
the so-called answer sets that correspond one-to-one to the
solutions. The computational process relies on the typical
Ground&Solve approach where the grounding step transforms
the input program into a new, equivalent ground program, and the
subsequent solving step applies propositional algorithms to
search for the answer sets. Recently, emerging applications in
contexts such as stream reasoning and event processing demand
multi-shot reasoning: here, the system is expected to be
reactive while repeatedly executed over rapidly changing
data. In this work, we present a new incremental reasoner
obtained from the evolution of DLV2 towards multi-shot
reasoning. Rather than restarting the computation from scratch,
the system remains alive and incrementally handles the internal
grounding process: in a completely transparent fashion for the
user, at each shot, it reuses previous computations for building
and maintaining a large, more general ground program, from which
a smaller yet equivalent portion is determined and used for
computing answer sets. We describe the system, its usage, its
applicability and performance in some practically relevant
domains.
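The Ground&Solve pipeline described above can be illustrated with a deliberately naive grounder. The representation below (atoms as name/argument tuples, variables as capitalised strings) is an assumption for illustration only; a production grounder like DLV2's is vastly more sophisticated:

```python
from itertools import product

def ground_rule(head, body, facts):
    """Naive grounding: substitute every constant occurring in the
    facts for every variable (capitalised string) in the rule, and
    keep the instances whose body atoms all hold in the facts."""
    consts = sorted({c for _, args in facts for c in args})
    variables = sorted({a for _, args in [head] + body
                        for a in args if a[0].isupper()})
    instances = []
    for combo in product(consts, repeat=len(variables)):
        sub = dict(zip(variables, combo))

        def inst(atom):
            name, args = atom
            return (name, tuple(sub.get(a, a) for a in args))

        if all(inst(b) in facts for b in body):
            instances.append(inst(head))
    return instances

facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}
# reach(X, Y) :- edge(X, Y).
print(ground_rule(("reach", ("X", "Y")), [("edge", ("X", "Y"))], facts))
# [('reach', ('a', 'b')), ('reach', ('b', 'c'))]
```

This toy only keeps body atoms that are literally among the facts; real grounders also account for atoms derivable by other rules, and incremental grounding as in this paper reuses such instantiations across shots instead of recomputing them.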
Abstract:
We present an ongoing formalization of algorithms of
ECMAScript, the specification describing the semantics of
JavaScript, in a tiny functional meta-language.
We show that this formalization is concise, readable,
maintainable, and textually close to the specification.
We extract an OCaml interpreter from our description and
run small JavaScript programs whose semantics is based on these
algorithms.
Pedro Ângelo and Mário Florido.
A Typed Lambda Calculus with Gradual
Intersection Types
Abstract:
Intersection types can assign many different types to a single
expression. Gradual types combine type checking at compile-time
and at run-time. Here we combine these two approaches in a single
typed calculus that harnesses both of their strengths, and we
define an operational semantics with type cast annotations. We
also prove several crucial properties of the type system, namely
that types are preserved during compilation and evaluation, and
that the refined criteria for gradual typing hold.
Kai-Oliver Prott, Finn Teegen and Michael Hanus.
A Monadic Implementation of Functional Logic
Programs
Abstract:
Functional logic languages are a high-level approach
to programming by combining the most important declarative
features.
They abstract from small-step operational details
so that programmers can concentrate on the logical aspects
of an
application.
This is supported by appropriate evaluation strategies.
Demand-driven evaluation from functional programming is
amalgamated
with non-determinism from logic programming
so that solutions or values are computed whenever they
exist.
This frees the programmer from considering the influence
of the operational strategy on the success of a computation,
but it is a challenge for the language implementer.
A non-deterministic demand-driven strategy might duplicate
unevaluated choices of an expression, which could duplicate
computational effort.
In recent implementations, this problem has been tackled
by adding a kind of memoization of non-deterministic
choices
to the expression under evaluation.
Since this has been implemented in imperative target
languages,
it was unclear whether this could also be supported in
a functional programming environment, like Haskell.
This paper presents a solution to this challenge
by transforming functional logic programs into a monadic
representation.
Although this transformation is not new,
we present an implementation of the monadic interface
which supports memoization in non-deterministic branches.
We demonstrate that our approach yields promising
performance, outperforming current compilers for Curry.
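The duplication problem and its memoization-based fix can be sketched in a few lines. The encoding below (labelled choice nodes plus a dict of decisions) is a toy assumption, not the paper's monadic interface, but it shows why memoizing choices matters: a choice shared between two positions must be decided once (call-time choice), not independently at each occurrence.

```python
import itertools

_ids = itertools.count()

def choose(a, b):
    """A labelled non-deterministic choice between two subterms."""
    return ("?", next(_ids), a, b)

def eval_nd(expr, taken):
    """Yield (value, decisions) pairs for an expression built from
    ints, labelled choices, and binary addition. The `taken` dict
    memoizes each choice label, so a choice shared between
    subexpressions is decided exactly once (call-time choice)."""
    if isinstance(expr, int):
        yield expr, taken
    elif expr[0] == "?":
        _, cid, left, right = expr
        if cid in taken:                      # already decided: reuse it
            yield from eval_nd((left, right)[taken[cid]], taken)
        else:
            for pick, sub in ((0, left), (1, right)):
                yield from eval_nd(sub, {**taken, cid: pick})
    elif expr[0] == "+":
        for v1, t1 in eval_nd(expr[1], taken):
            for v2, t2 in eval_nd(expr[2], t1):   # thread decisions
                yield v1 + v2, t2

x = choose(0, 1)
print(sorted({v for v, _ in eval_nd(("+", x, x), {})}))  # [0, 2]
y = ("+", choose(0, 1), choose(0, 1))                    # independent
print(sorted({v for v, _ in eval_nd(y, {})}))            # [0, 1, 2]
```

Without the `taken` memo, the shared choice in `x + x` would be re-decided at each occurrence, wrongly producing the value 1 as well; threading memoized decisions through a purely functional monadic interface is exactly the implementation challenge the paper addresses in Haskell.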
Abstract:
Neural networks are increasingly relied upon as components
of complex safety-critical systems such as autonomous vehicles. There
is high demand for tools and methods that embed neural network
verification in a larger verification cycle. However, neural network
verification is difficult due to a wide range of verification
properties of interest, each typically only amenable to verification
in specialised solvers.
In this paper, we show how Imandra, a functional
programming language and theorem prover originally designed for
the verification, validation and simulation of financial
infrastructure, can offer a holistic infrastructure for neural
network verification.
We develop a novel library, CheckINN, that formalises
neural networks in Imandra, and covers different important facets of
neural network verification.
Abstract:
To support the understanding of declarative probabilistic
programming languages, we introduce a lambda-calculus with a
fair binary probabilistic choice that chooses between its
arguments with equal probability. The reduction strategy of the
calculus is a call-by-need strategy that performs lazy
evaluation and implements sharing by recursive
let-expressions. Expected convergence of expressions is the limit
of the sum of all successful reduction outputs weighted by their
probability.
We use contextual equivalence as program semantics: two
expressions are contextually equivalent if and only if the
expected convergence of the expressions plugged into any program
context is always the same. We develop and illustrate techniques
for proving equivalences, including a context lemma, two derived
criteria, and a syntactic diagram-based method. This finally
enables us to show correctness of a large
set of program transformations with respect to the contextual
equivalence.
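Expected convergence as defined above can be approximated for a toy language of values, fair binary choices, and explicit divergence. The term encoding and the `depth` cut-off are illustrative assumptions; the limit is approached as the cut-off grows:

```python
def expected_convergence(expr, depth=30):
    """Approximate expected convergence: sum the probabilities of all
    fair-choice branches that reach a value within `depth` steps.
    Terms are ints (values), ("?", l, r) (fair choice), the string
    "loop" (explicit divergence), or thunks for recursive terms."""
    if callable(expr):              # thunk: unfold a recursive term
        expr = expr()
    if expr == "loop":
        return 0.0                  # a diverging branch contributes 0
    if not (isinstance(expr, tuple) and expr[0] == "?"):
        return 1.0                  # a value: this branch converges
    if depth == 0:
        return 0.0                  # unresolved branch: count as 0
    _, left, right = expr
    return 0.5 * expected_convergence(left, depth - 1) + \
           0.5 * expected_convergence(right, depth - 1)

print(expected_convergence(("?", "loop", 42)))   # 0.5
rec = lambda: ("?", 1, rec)   # (1 ? <itself>): converges with prob. 1
print(expected_convergence(rec))                 # 1 - 2**-30
```

The recursive term illustrates why expected convergence is defined as a limit: its probability of success, 1/2 + 1/4 + ..., only reaches 1 in the limit of ever deeper reductions.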
Avishkar Mahajan, Martin Strecker and Meng Wong.
User Guided Abductive Proof Generation for
Answer Set Programming Queries
Abstract:
We present a method for generating possible proofs of a
query with respect to a given Answer Set Programming (ASP) rule
set using an abductive process where the space of abducibles is
automatically constructed just from the input rules alone. Given
a (possibly empty) set of user-provided facts, our method infers
any additional facts that may be needed for the entailment of a
query and then outputs these extra facts, without the user
needing to explicitly specify the space of all abducibles. We
also present a method to generate a set of directed edges
corresponding to the justification graph for the
query. Furthermore, through different forms of implicit term
substitution, our method can take user-provided facts into
account and suitably modify the abductive solutions. Past work
on abduction has been primarily based on goal directed
methods. However, these methods can result in solvers that are
not truly declarative. Much less work has been done on realising
abduction in a bottom-up solver like the Clingo ASP solver. We
describe novel ASP programs which can be run directly in Clingo
to yield the abductive solutions and directed edge sets without
needing to modify the underlying solving engine.
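The idea of reading the abducible space off the input rules can be sketched for propositional Horn rules: take as abducibles the body atoms that no rule derives, and return the subset-minimal sets of extra facts that entail the query. This brute-force Python toy is an illustration only, not the paper's ASP encoding:

```python
from itertools import chain, combinations

def derive(facts, rules):
    """Forward-chain propositional Horn rules (head, [body]) to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

def abduce(query, facts, rules):
    """Toy abduction: the abducible space is read off the rules alone
    (body atoms that are not the head of any rule and not already
    given), and the subset-minimal extra-fact sets entailing the
    query are returned."""
    heads = {h for h, _ in rules}
    abducibles = sorted({b for _, body in rules for b in body}
                        - heads - set(facts))
    subsets = chain.from_iterable(combinations(abducibles, k)
                                  for k in range(len(abducibles) + 1))
    sols = [set(s) for s in subsets
            if query in derive(set(facts) | set(s), rules)]
    return [s for s in sols if not any(t < s for t in sols)]

rules = [("wet", ["rain"]), ("wet", ["sprinkler"]), ("slippery", ["wet"])]
print(abduce("slippery", [], rules))         # [{'rain'}, {'sprinkler'}]
print(abduce("slippery", ["rain"], rules))   # [set()]: nothing to abduce
```

As in the paper, supplying a user fact (here `rain`) shrinks the abductive solutions; the paper achieves this declaratively inside Clingo rather than by exhaustive subset search.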
Gopalan Nadathur and Mary Southern.
A Logic for Formalizing Properties of LF
Specifications
Abstract:
A logic is presented for formalizing properties of
specifications in the Edinburgh Logical Framework or LF. In this
logic, typing judgments in LF serve as atomic formulas and
quantification is permitted over LF terms and LF
contexts. Quantifiers of the first variety are qualified by
simple types that describe the functional structure associated
with the variables they bind. Quantifiers over contexts are
typed by context schemas that constrain their instantiations to
adhere to a regular structure. The semantics of the logic is
based ultimately on an understanding of derivability in LF. As
such, valid formulas in the logic represent meta-theoretic
properties of object systems that are encoded via LF
signatures. The logic is complemented by a proof system that is
briefly discussed. There are two categories to the rules in this
proof system. One collection of rules captures the meanings of
logical connectives and quantifiers. Another collection provides
a means for analyzing atomic formulas based on an understanding
of derivability in LF; these rules build in the capability for a
case-analysis style reasoning about LF judgments and for
induction over the heights of LF derivations.
Abstract:
We introduce Nominal Matching Logic (NML) as an extension of
Matching Logic with names and binding following the Gabbay-Pitts
nominal approach. Matching logic is the foundation of the K
framework, used to specify programming languages and
automatically derive associated tools (compilers, debuggers,
model checkers, program verifiers). Matching logic does not
include a primitive notion of name binding, though binding
operators can be represented via an encoding that internalises
the graph of a function from bound names to expressions
containing bound names. This approach is sufficient to represent
computations involving binding operators, but has not been
reconciled with support for inductive reasoning over syntax with
binding (e.g., reasoning over $\lambda$-terms). Nominal logic
is a formal system for reasoning about names and binding, which
provides well-behaved and powerful principles for inductive
reasoning over syntax with binding, and NML inherits these
principles. We discuss design alternatives for the syntax and
the semantics of NML, prove meta-theoretical properties and give
examples to illustrate its expressive power. In particular, we
show how induction principles for lambda-terms (alpha-structural
induction) can be defined and used to prove standard properties
of the lambda-calculus.
Guillaume Ambal, Sergueï Lenglet, Alan Schmitt and Camille Noûs.
Certified Derivation of Small-Step From
Big-Step Skeletal Semantics
Abstract:
We present an automatic translation of a skeletal semantics
written in big-step style into an equivalent structural
operational semantics. This translation is implemented on top of
the Necro tool, which lets us automatically generate an OCaml
interpreter for the small-step semantics and a Coq mechanization
of both semantics. We prove the framework correct in two ways:
we provide a paper proof of the core of the transformation, and
we generate Coq certification scripts alongside the
transformation. We illustrate the approach using a simple
imperative language and show how it scales to larger languages.