
Syntax and Semantics of μG

The syntax and denotational semantics of μG were published in our FORTE 2023 article1. The operational semantics will be added here once it has been published in a future article.

Syntax of μG

We start by recalling the abstract syntax of μG.

Syntax of μG

Given a set 𝒳 = {X, Y, Z, …} of variable symbols and a set S of function symbols, μG expressions can be composed with the following abstract syntax:

$$
\begin{aligned}
N ::= {} & \psi \mid \overleftarrow{\sigma}\varphi \mid \overrightarrow{\sigma}\varphi \mid N_1;N_2 \mid N_1\,||\,N_2 \mid X \mid f(N_1, N_2, \ldots, N_k) \\
& \mid \mathsf{let}\ X = N_1\ \mathsf{in}\ N_2 \mid \mathsf{def}\ f(\tilde{X})\{N_1\}\ \mathsf{in}\ N_2 \mid \mathsf{if}\ N_1\ \mathsf{then}\ N_2\ \mathsf{else}\ N_3 \mid \mathsf{fix}\ X = N_1\ \mathsf{in}\ N_2
\end{aligned}
$$

with φ, σ, ψ, f ∈ S and X ∈ 𝒳.

Given a graph G and an (optional) edge-labeling ξ, the meaning of a μG expression is a graph neural network, a function between node-labeling functions.

One of the basic μG terms is the function application ψ. This represents the application of a given function (referenced by ψ) to the input node-labeling function.

Moreover, the pre-image operator ←σφ and the post-image operator →σφ are used to compute the labeling of a node in terms of the labels of its predecessors and successors, respectively.

Basic terms are combined using sequential composition N1;N2 and parallel composition N1||N2.

Local variables X and functions f(X̃) can be defined using let and def.

The choice operator if N1 then N2 else N3 allows running different GNNs according to the result of a guard GNN.

Finally, the fixed point operator fix X=N1 in N2 is used to program recursive behavior.
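To make the grammar concrete, the sketch below encodes a subset of these constructors as a small Python abstract syntax tree. The class names (Apply, PreImage, Seq, and so on) are hypothetical choices made for this page and do not correspond to the data structures of any μG implementation.

```python
from dataclasses import dataclass

# Illustrative AST for a subset of the muG constructors (names are hypothetical).
@dataclass
class Apply:        # psi: apply a basic function symbol
    psi: str

@dataclass
class PreImage:     # pre-image: aggregate messages from incoming edges
    sigma: str
    phi: str

@dataclass
class PostImage:    # post-image: aggregate messages from outgoing edges
    sigma: str
    phi: str

@dataclass
class Seq:          # N1 ; N2
    n1: "Term"
    n2: "Term"

@dataclass
class Par:          # N1 || N2
    n1: "Term"
    n2: "Term"

@dataclass
class Var:          # X
    name: str

@dataclass
class Ite:          # if N1 then N2 else N3
    guard: "Term"
    then_branch: "Term"
    else_branch: "Term"

@dataclass
class Fix:          # fix X = N1 in N2
    var: str
    init: "Term"
    body: "Term"

Term = Apply | PreImage | PostImage | Seq | Par | Var | Ite | Fix

# Example term: fix X = psi in (X ; pre-image of phi aggregated by sigma)
example = Fix("X", Apply("psi"), Seq(Var("X"), PreImage("sigma", "phi")))
```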

Typing of μG

We say that a μG expression is well-formed whenever it can be typed with the rules of the following table. These rules guarantee that any well-formed μG expression can be interpreted as a GNN.

[Image: typing rules of μG]

Denotational semantics of μG

We are now ready to discuss the denotational semantics of μG. In the following definition, η denotes a node-labeling function, OG(v) and IG(v) are the outgoing and incoming edges of node v, and by (η,ξ)(E) we denote the multiset of tuples (η(u), ξ((u,v)), η(v)) for each (u,v) ∈ E.

Denotational semantics of μG

Given a graph G, an edge-labeling function ξ, and an environment ρ that comprises a variable evaluation function ρv : 𝒳 → ΦG,ξ and a function evaluation function ρf : S → ((ΦG,ξ × ⋯ × ΦG,ξ) → ΦG,ξ), we define the semantic interpretation function ⟦·⟧ρG,ξ : N → ΦG,ξ on μG formulas N by induction in the following way:

[Image: semantic interpretation of μG expressions]

for any ψ, φ, σ, f ∈ S and any X ∈ 𝒳. The functions cond and g are defined as follows:

$$
\mathit{cond}(t, f_1, f_2)(\eta) =
\begin{cases}
f_1(\eta) & \text{if } t(\eta) = \lambda v.\mathsf{True} \\
f_2(\eta) & \text{otherwise}
\end{cases}
$$

$$
g(\Phi)(\eta) =
\begin{cases}
\Phi(\eta) & \text{if } \Phi(\eta) = [\![ N_2 ]\!]_{\rho[X \mapsto \Phi]}^{G,\xi}(\eta) \\
g\big([\![ N_2 ]\!]_{\rho[X \mapsto \Phi]}^{G,\xi}\big)(\eta) & \text{otherwise}
\end{cases}
$$

Function application

The function symbols ψ1, ψ2, … ∈ S are evaluated as the graph neural networks ⟦ψ1⟧ρG,ξ, ⟦ψ2⟧ρG,ξ, … that map a node-labeling function η to a new node-labeling function by applying a function to both local and global node information. The local information consists of applying η to the input node, while the global information is the multiset of the labels of all the nodes in the graph. The graph neural network we obtain applies a possibly trainable function fψ to these two pieces of information. Two particular cases arise if fψ ignores either of the two inputs. If fψ ignores the global information, the GNN returns a node-labeling function ηl, a purely local transformation of the node labels. On the other hand, if fψ ignores the local information, the GNN returns a node-labeling function ηg that assigns to each node a label that summarizes the entire graph, emulating what in the GNN literature is known as a global pooling operator.
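To illustrate this clause, here is a minimal Python sketch of the function-application semantics; it is not the libmg implementation, and the names function_application and f_psi are made up for this page. A node-labeling function is represented as a dict from nodes to labels, and f_psi receives the local label together with the multiset of all labels:

```python
from collections import Counter

def function_application(f_psi, nodes):
    """Interpret a basic function symbol psi as a GNN: a map between node-labeling functions."""
    def gnn(eta):
        global_info = Counter(eta[v] for v in nodes)   # multiset of all node labels
        return {v: f_psi(eta[v], global_info) for v in nodes}
    return gnn

# A purely local transformation (f_psi ignores the global multiset) ...
double = function_application(lambda local, _glob: 2 * local, nodes=[0, 1, 2])
print(double({0: 1, 1: 2, 2: 3}))   # {0: 2, 1: 4, 2: 6}

# ... and a global pooling operator (f_psi ignores the local label).
pool = function_application(lambda _local, glob: sum(k * c for k, c in glob.items()),
                            nodes=[0, 1, 2])
print(pool({0: 1, 1: 2, 2: 3}))     # {0: 6, 1: 6, 2: 6}
```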

Pre-image and Post-Image

The pre-image symbol ← and the post-image symbol →, together with function symbols φ ∈ S and σ ∈ S, are evaluated as the graph neural networks ⟦←σφ⟧ρG,ξ and ⟦→σφ⟧ρG,ξ for any σ, φ ∈ S. In the case of the pre-image, for any symbol φ ∈ S the corresponding function fφ generates a message from a tuple (η(u), ξ((u,v)), η(v)), and this is repeated for each (u,v) ∈ IG(v). Then for any symbol σ ∈ S the corresponding function fσ generates a new label for a node v from the multiset of messages obtained from fφ and the current label η(v). The functions fφ and fσ may be trainable. The case of the post-image is analogous, with the difference that fφ is applied to tuples (η(v), ξ((v,u)), η(u)) for each (v,u) ∈ OG(v) instead.
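A minimal sketch of the pre-image clause in the same illustrative style (pre_image, f_phi and f_sigma are hypothetical stand-ins for the possibly trainable functions):

```python
def pre_image(f_sigma, f_phi, nodes, edges, xi):
    """Pre-image: relabel each node v from the messages generated on its incoming edges (u, v)."""
    def gnn(eta):
        new_eta = {}
        for v in nodes:
            incoming = [(u, w) for (u, w) in edges if w == v]
            messages = [f_phi(eta[u], xi[(u, w)], eta[w]) for (u, w) in incoming]
            new_eta[v] = f_sigma(messages, eta[v])
        return new_eta
    return gnn

# The post-image is analogous: it uses the outgoing edges (v, u) and the tuples
# (eta[v], xi[(v, u)], eta[u]) instead.

# Example: label every node with the sum of its predecessors' labels.
nodes = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]
xi = {e: None for e in edges}                        # no edge labels in this example
pred_sum = pre_image(lambda msgs, _old: sum(msgs),   # f_sigma: aggregate the messages
                     lambda src, _e, _dst: src,      # f_phi: the message is the source label
                     nodes, edges, xi)
print(pred_sum({0: 1, 1: 2, 2: 4}))                  # {0: 0, 1: 1, 2: 3}
```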

Sequential composition

An expression of the form N1;N2 for μG formulas N1, N2 is evaluated as the graph neural network ⟦N1;N2⟧ρG,ξ that maps a node-labeling function η to a new node-labeling function obtained from the function composition of ⟦N2⟧ρG,ξ and ⟦N1⟧ρG,ξ.
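In the dict-based sketches used on this page, sequential composition is just function composition of the two GNNs (seq is a hypothetical helper name):

```python
def seq(gnn1, gnn2):
    """N1 ; N2: run N1 and feed its output labeling to N2."""
    return lambda eta: gnn2(gnn1(eta))
```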

Parallel composition

An expression of the form N1||N2 for μG formulas N1, N2 is evaluated as the graph neural network ⟦N1||N2⟧ρG,ξ that maps a node-labeling function η to a new node-labeling function that maps a node v to the tuple (⟦N1⟧ρG,ξ(η)(v), ⟦N2⟧ρG,ξ(η)(v)). Moreover, combining this operator with the basic function symbols ψ1, ψ2, … ∈ S allows the execution of k-ary operations on node-labeling functions: a k-ary function application fopk(N1, …, Nk) is expressible as the μG expression (((N1||N2)||…)||Nk); opk.
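A sketch of the parallel composition clause, again on dict-based labelings (par is a hypothetical helper name):

```python
def par(gnn1, gnn2):
    """N1 || N2: label every node with the pair of labels produced by the two operands."""
    def gnn(eta):
        out1, out2 = gnn1(eta), gnn2(eta)
        return {v: (out1[v], out2[v]) for v in eta}
    return gnn

# A binary operation such as (N1 || N2); add can then be obtained by interpreting the
# basic symbol add with a local function like: lambda local, _glob: local[0] + local[1]
```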

Choice

The choice operator if N1 then N2 else N3 for μG formulas N1, N2, N3 is evaluated as the graph neural network ⟦N2⟧ρG,ξ whenever the guard output ⟦N1⟧ρG,ξ(η) is a node-labeling function that maps every node v ∈ G to True. Otherwise it is evaluated as the graph neural network ⟦N3⟧ρG,ξ.
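The corresponding sketch of the choice clause (choice is a hypothetical helper; it mirrors the cond function defined above):

```python
def choice(guard_gnn, then_gnn, else_gnn):
    """if N1 then N2 else N3: run N2 iff the guard labels every node True, otherwise N3."""
    def gnn(eta):
        verdict = guard_gnn(eta)
        return then_gnn(eta) if all(verdict[v] is True for v in verdict) else else_gnn(eta)
    return gnn
```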

Fixed points

The fixed point operator is evaluated as the graph neural network obtained by applying the least fixed point of the functional F associated with g to ⟦N1⟧ρG,ξ. Formally, it is (⊔{Fⁿ ∣ n ≥ 0})(⟦N1⟧ρG,ξ), where

$$
F^0 = \lambda \Phi.\, \lambda \eta.\, \bot \qquad\qquad F^{n+1} = F(F^n)
$$
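The sketch below mirrors the function g defined earlier: starting from the labeling produced by N1, it repeatedly re-interprets N2 with X bound to the current approximation until the output labeling stops changing. It assumes that the iteration terminates, and n2_of_x is a hypothetical callback returning the interpretation of N2 once X is bound to its argument:

```python
def fixpoint(n1_gnn, n2_of_x):
    """fix X = N1 in N2: iterate N2, with X bound to the current approximation, to a fixed point."""
    def gnn(eta):
        current = n1_gnn                       # Phi := [[N1]]
        while True:
            next_gnn = n2_of_x(current)        # [[N2]] in the environment rho[X -> Phi]
            if next_gnn(eta) == current(eta):  # fixed point reached on this input
                return current(eta)
            current = next_gnn
    return gnn
```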

  1. Matteo Belenchia, Flavio Corradini, Michela Quadrini, and Michele Loreti. 2023. Implementing a CTL Model Checker with μG, a Language for Programming Graph Neural Networks. In Formal Techniques for Distributed Objects, Components, and Systems: 43rd IFIP WG 6.1 International Conference, FORTE 2023, Held as Part of the 18th International Federated Conference on Distributed Computing Techniques, DisCoTec 2023, Lisbon, Portugal, June 19–23, 2023, Proceedings. Springer-Verlag, Berlin, Heidelberg, 37–54. https://doi.org/10.1007/978-3-031-35355-0_4