Communicating Sequential Processes

A formal language for describing concurrent systems


A process represents the behaviour pattern of an object. It consists of an alphabet, which is the set of possible atomic events the object can engage in. No distinction is made between events originated by the user of the object and events originated by the object itself.

The alphabet of process $P$ is denoted $\alpha P$ and the set of traces of $P$ is denoted $traces(P)$. Two processes are considered independent if their alphabets are disjoint.

A process can be defined in terms of itself, like $CLOCK = (tick \rightarrow CLOCK)$.

A process can be defined as a function $f : \Sigma \rightarrow (\Sigma, T)$ that takes an event and returns a process, and whose domain must be a subset of the alphabet. The function may return $STOP_{\Sigma}$ to represent termination.

Two processes are equal if their event choices are the same and each resulting process is the same. The processes $(x : A \rightarrow f(x))$ and $(y : B \rightarrow g(y))$ are the same if $A = B \land \forall z \in A \bullet f(z) = g(z)$.

If two objects define the same process, then they are considered equal; therefore there exists only one object that defines a process. A more formal definition for a process such as $CLOCK = (tick \rightarrow CLOCK)$ is $CLOCK = \mu X : \{ tick \} \bullet (tick \rightarrow X)$. Given a function $f$ that maps an event to a process, this fixed-point definition can be generalized as $\mu X : \Sigma \bullet (y : \Sigma \rightarrow f(y, X))$, where the second argument to $f$ is the process itself, in order to support recursion.
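
As a quick illustration (a minimal sketch: the set-of-tuples trace representation and the helper name `traces_upto` are my own, not notation from the text), the finite trace approximations of $\mu X \bullet (tick \rightarrow X)$ can be enumerated directly:

```python
# Finite approximation of traces(CLOCK), where CLOCK = (tick -> CLOCK).
# Traces are modelled as tuples of event names.

def traces_upto(n, event="tick"):
    """Traces of mu X . (event -> X) up to length n, as a set of tuples."""
    return {tuple([event] * k) for k in range(n + 1)}

print(sorted(traces_upto(2), key=len))
# [(), ('tick',), ('tick', 'tick')]
```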


The prefix operator ($\rightarrow$) takes an event and a process and describes the process that engages in the given event and then behaves like the subsequent process, for example $(x \rightarrow P)$. Notice that $(x \rightarrow y \rightarrow z \rightarrow P)$ is equivalent to $(x \rightarrow (y \rightarrow (z \rightarrow P)))$.

The choice operator ($\mid$) takes two prefix applications (it's not an operation on processes) and describes a process that has a choice over which event to engage in, for example $(x \rightarrow P \mid y \rightarrow Q)$. Notice that $(x \rightarrow P \mid y \rightarrow Q) = (y \rightarrow Q \mid x \rightarrow P)$. The events on the involved prefixes must be different, as $(x \rightarrow P \mid x \rightarrow Q)$ is considered invalid.

The alphabet of the resulting process consists of the prefix events and the alphabets of the involved processes: $\alpha(x \rightarrow P \mid y \rightarrow Q) = \{ x, y \} \cup \alpha P \cup \alpha Q$.

There exists a shorthand to define choices over a set. Given $A = \{ foo, bar \}$, the shorthand $(e : A \rightarrow P)$ evaluates to $(foo \rightarrow P \mid bar \rightarrow P)$. If the set of choices is the empty set, the resulting process is the broken process. Given $(x : B \rightarrow P(x))$, if $B \subseteq A$ where $A = \alpha P(x)$ for all $x$, then $\alpha (x : B \rightarrow P(x)) = A$.

A process $P$ is a subset of process $Q$ if they share the same alphabet and every trace of $P$ is also a trace of $Q$. This is denoted: $P \sqsubseteq Q = (\alpha P = \alpha Q) \land (traces(P) \subseteq traces(Q))$. Of course, $P \sqsubseteq P$. If two processes are subsets of each other, then they must be equal: $P \sqsubseteq Q \land Q \sqsubseteq P \implies P = Q$. The subset operation is transitive: $P \sqsubseteq Q \land Q \sqsubseteq R \implies P \sqsubseteq R$. Finally, a process is always a superset of its broken process: $STOP_{\alpha P} \sqsubseteq P$.
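
This subset relation can be checked directly, under the assumption (mine, not the text's) that a process is modelled as a pair of an alphabet set and a set of trace tuples:

```python
def refines(P, Q):
    """P ⊑ Q: equal alphabets and traces(P) ⊆ traces(Q)."""
    (alpha_p, traces_p), (alpha_q, traces_q) = P, Q
    return alpha_p == alpha_q and traces_p <= traces_q

# STOP over {a, b} versus a process that can engage in 'a' and then stop.
STOP = (frozenset("ab"), {()})
P = (frozenset("ab"), {(), ("a",)})

print(refines(STOP, P))  # True: STOP_{αP} ⊑ P
print(refines(P, STOP))  # False
print(refines(P, P))     # True: P ⊑ P
```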

Given a variable $b$ that is either $true$ or $false$, the process $(P {<|}\; b \;{|>} Q)$ behaves like $P$ if $b = true$, and like $Q$ otherwise. In particular, $(P {<|}\; true \;{|>} Q) = P$ and $(P {<|}\; false \;{|>} Q) = Q$.

Given a boolean expression $b$ and a process $P$, the process $(b * P)$ is the machine that restarts $P$ while $b$ is $true$: $(b * P) = \mu X \bullet ((P ; X) {<|}\; b \;{|>} SKIP_{\alpha P})$.

Process Mapping (or Change of Symbol)

Given a function $f$ over the alphabet (it takes the alphabet of a process and returns a new set of the same size), a process $P$, and an event $x \in \alpha P$, $f(P)$ represents the process that engages in $f(x)$ whenever $P$ engages in $x$. The function must be injective, so that $f^{-1}$ makes functional sense.

It follows that $\alpha f(P) = f(\alpha P)$. The traces of the mapped process are $traces(f(P)) = \{ f^{*}(t) \mid t \in traces(P) \}$. Notice the mapping function always preserves the broken process, as in that case it maps over an empty list of choices: $\forall P \bullet f(STOP_{\alpha P}) = STOP_{\alpha f(P)}$.

Notice that process mapping distributes over the composition operator: $f(P \parallel Q) = f(P) \parallel f(Q)$. When it comes to consuming a trace $t$, $f(P) / f^{*}(t) = f(P / t)$.

Given the choices shorthand definition of a non-recursive process: $f(x : A \rightarrow P(x)) = (y : f(A) \rightarrow f(P(f^{-1}(y))))$.

For example, consider $CLOCK = (tick \rightarrow CLOCK)$ and $f(tick) = tock$. In that case, $f(CLOCK) = (tock \rightarrow f(CLOCK))$.
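
The trace side of this example can be sketched by applying $f^{*}$ (the event-wise extension of $f$) to every trace; the helper name `map_traces` is illustrative:

```python
def map_traces(f, traces):
    """traces(f(P)) = { f*(t) | t ∈ traces(P) }: apply f to every event."""
    return {tuple(f(e) for e in t) for t in traces}

clock_traces = {(), ("tick",), ("tick", "tick")}
f = {"tick": "tock"}.__getitem__   # an injective event mapping

print(map_traces(f, clock_traces))
# {(), ('tock',), ('tock', 'tock')}
```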

Given a sequential process $P$, $f(\checkmark) = \checkmark$ must hold.

Given a non-deterministic choice $(P \sqcap Q)$, $f(P \sqcap Q) = (f(P) \sqcap f(Q))$.

A process defined by mapping diverges when its original process diverges. Given $f$ and $P$, $divergences(f(P)) = \{ f^{*}(s) \mid s \in divergences(P) \}$.

The failures of a mapped process are defined as $failures(f(P)) = \{ (f^{*}(s), f(X)) \mid (s, X) \in failures(P) \}$. The refusals of a mapped process are defined as $refusals(f(P)) = \{ f(X) \mid X \in refusals(P) \}$.

Concealing a mapped process is defined as $f(P \setminus C) = (f(P) \setminus f(C))$.

Process Labelling

Labelling is a mapping technique that adds a prefix to all the events of a process. It's useful when constructing groups of processes where we need to make sure that they are independent (that their alphabets are disjoint). Process $P$ labelled with $foo$ is denoted $foo : P$. The prefix is added before every event, separated by a period.

For any prefix $p$, there exists a function $label_{p}(e) = p.e$. The labelled process $p : Q$ is equivalent to $label_{p}(Q)$. Notice labelling changes the alphabet: $\alpha Q \neq \alpha (p : Q)$.

For example, consider $CLOCK = (tick \rightarrow CLOCK)$. In that case, $foo : CLOCK = (foo.tick \rightarrow foo : CLOCK)$.


Concealment creates a new process that behaves like the original process but with certain events hidden. This operation reduces the alphabet of a process and may introduce non-determinism. Concealing an infinite sequence of consecutive events leads to divergence.

This operation is defined as $((x \rightarrow P) \setminus C) = (x \rightarrow (P \setminus C))$ if $x \notin C$, and as $((x \rightarrow P) \setminus C) = (P \setminus C)$ otherwise. Given the choice shorthand definition $(x : B \rightarrow P(x))$, concealing such a process with $C$ expands to $(x : B \rightarrow (P(x) \setminus C))$ if $B \cap C = \emptyset$; otherwise it expands to $(Q \sqcap (Q \;\square\; (x : (B - C) \rightarrow P(x))))$ where $Q = \sqcap_{x \in B \cap C} (P(x) \setminus C)$, which introduces non-determinism.

The alphabet of a concealed process is $\alpha (P \setminus C) = \alpha P - C$. The traces of a concealed process are defined as $traces(P \setminus C) = \{ t \restriction (\alpha P - C) \mid t \in traces(P) \}$. The following laws apply: $(P \setminus \emptyset) = P$ and $((P \setminus B) \setminus C) = (P \setminus (B \cup C))$.
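
The trace definition above can be sketched by restricting every trace to the visible events (the helper names `restrict` and `conceal_traces` are illustrative):

```python
def restrict(t, A):
    """t ↾ A: keep only the events of t that belong to A."""
    return tuple(e for e in t if e in A)

def conceal_traces(traces, alpha, C):
    """traces(P \\ C) = { t ↾ (αP − C) | t ∈ traces(P) }."""
    visible = alpha - C
    return {restrict(t, visible) for t in traces}

P_traces = {(), ("a",), ("a", "c"), ("a", "c", "b")}
print(conceal_traces(P_traces, {"a", "b", "c"}, {"c"}))
# {(), ('a',), ('a', 'b')}
```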

Given a concealed process, its divergences consist of its original divergences plus the new ones potentially created by the concealment operation:

\begin{align} divergences(P \setminus C) = \{ &(s \restriction (\alpha P - C)) \frown t \mid \\ &t \in (\alpha P - C)^{*} \land (s \in divergences(P) \\ &\lor (\forall n \bullet \exists u \in C^{*} \bullet \#u > n \land (s \frown u) \in traces(P))) \} \end{align}

The failures of a concealed process are defined as:

\begin{align} failures(P \setminus C) = \{ &(s \restriction (\alpha P - C), X) \mid (s, X \cup C) \in failures(P) \} \\ &\cup \{ (s, X) \mid s \in divergences(P \setminus C) \} \end{align}


Given $(P \parallel Q)$, $P$ is a slave or subordinate process of $Q$ if $\alpha P \subseteq \alpha Q$, as each action of $P$ can occur only if $Q$ permits it, while $Q$ can freely engage in events of its own (those in $\alpha Q - \alpha P$). If we want to conceal communication between master and subordinate, we can use the notation $(P // Q) = (P \parallel Q) \setminus \alpha P$. Its alphabet is $\alpha (P // Q) = (\alpha Q - \alpha P)$.

The subordinate process can be given a name, in which case the master will communicate with it using a compound channel name, while the subordinate process will communicate with its master through a plain channel name. The benefit is that all communication between the master and the subordinate can't be detected from outside, as the new subordinate name acts as a local variable for the master.

For example, $(m : P // Q(v)) = (m : (c!v \rightarrow P) // (m.c?x \rightarrow Q(x)))$. Also consider $(n : (m : (P // Q) // R))$. In this case, there is no way $R$ can communicate with $P$, as it doesn't know about its existence or about its name, $m$.

Communication from the master to the slave looks like this: $(m : (c?x \rightarrow P(x))) // (m.c!v \rightarrow Q) = (m : P(v)) // Q$. Conversely, communication from the slave to the master: $(m : (d!v \rightarrow P)) // (m.d?x \rightarrow Q(x)) = (m : P) // Q(v)$.


Communication between processes is done using events that set values on channels. Such an event is described by the pair $c.v$, where $channel(c.v) = c$ and $message(c.v) = v$, or by the triplet $m.c.v$, which includes the process name as a prefix (a compound channel name). Notice that $m.\checkmark = \checkmark$ for any $m$. A channel used by $P$ only to send events is an output channel of $P$. A channel used by $P$ only to receive events is an input channel of $P$.

Communication is unidirectional and only between two processes. Communication between two processes can only happen if both processes engage in the communication: one listens on a channel while the other one sends a message through that channel.

The set of all values that process $P$ can communicate on a channel $c$ is the alphabet of the channel: $\alpha c(P) = \{ message(c.v) \mid c.v \in \alpha P \}$. Given a compound channel name, $\alpha m.c(m : P) = \alpha c(P)$. Communication event pairs must be part of the alphabet of a process. For two processes to communicate over a channel $c$, they need to share the same alphabet on that channel: $\alpha c(P) = \alpha c(Q)$. Given that $P$ and $Q$ communicate over channel $c$, all the possible communication events are inside $\alpha P \cap \alpha Q$.

Given process $P$, the event $c!v$ sends value $v$ over channel $c$. For this to be valid, $v \in \alpha c(P)$. This event is defined as $(c!v \rightarrow P) = (c.v \rightarrow P)$. The communication message can be a more complex expression: $c!(x + y)$. The event to wait for a value on a channel is $c?x$, where $x$ takes the value of the received message. This is defined as a choice over all the valid communication events of that channel: $(c?x \rightarrow P(x)) = (e : \{ e \mid channel(e) = c \} \rightarrow P(message(e)))$. A process can wait on more than one channel using the choice operator: $(c?x \rightarrow P(x) \mid d?y \rightarrow Q(y))$.
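
A sketch of these communication events over the trace model, where an event $c.v$ is modelled as a `(channel, message)` tuple; the helper names `output` and `input_choice` are my own, not CSP notation:

```python
def channel(event):
    return event[0]

def message(event):
    return event[1]

def output(c, v, P):
    """(c!v -> P) = (c.v -> P): prefix the single event (c, v)."""
    return {()} | {((c, v),) + t for t in P}

def input_choice(c, values, P_of):
    """(c?x -> P(x)): a choice over every event e with channel(e) = c."""
    traces = {()}
    for v in values:
        traces |= {((c, v),) + t for t in P_of(v)}
    return traces

STOP = {()}
sender = output("c", 42, STOP)                       # c!42 -> STOP
receiver = input_choice("c", {41, 42}, lambda x: STOP)  # c?x -> STOP

# Both sides can engage in the shared event c.42:
print((("c", 42),) in sender and (("c", 42),) in receiver)  # True
```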


$(x := e)$ is the process that sets $x$ to $e$. It satisfies $(x := e) = (x := e ; SKIP)$. It follows that $(x := x) = SKIP$ and that $(x := e ; x := f(x)) = (x := f(e))$. The assignment operator can be used with multiple variables, $(x, y, z := e, f, g)$, but keep in mind that $(x := f ; y := g) \neq (x, y := f, g)$, as in the first case $g$ may depend on the new value of $x$. With regards to the conditional operator: $(x := e ; (P {<|}\; b(x) \;{|>} Q)) = ((x := e ; P) {<|}\; b(e) \;{|>} (x := e ; Q))$.
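
The difference between sequential and simultaneous assignment can be seen directly in Python, whose multiple assignment has the same simultaneous semantics (variable names are illustrative):

```python
# (x := f ; y := g): sequential, so g sees the new value of x.
x = 1
x = x + 1
y = x * 10        # y = 20, computed from the updated x

# (x, y := f, g): simultaneous, so g sees the old value of x.
a = 1
a, b = a + 1, a * 10   # b = 10, computed from the original a

print(y, b)  # 20 10
```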

Given a process $P$, $var(P)$ is the set of all variables that can be assigned within $P$, and $acc(P)$ is the set of all variables that $P$ can access. Of course, all variables that can be set can be accessed: $var(P) \subseteq acc(P)$. Both sets are subsets of $\alpha P$.

No variable assigned in a process can be accessed by another concurrent process, so given $P$ and $Q$: $var(P) \cap acc(Q) = var(Q) \cap acc(P) = \emptyset$.

Deterministic Processes

A deterministic process is a pair $(\Sigma, T)$ consisting of two sets: an alphabet and a set of traces. For a deterministic process to be valid, the empty trace must be a trace of the process, $\langle \rangle \in T$, and the set of traces must be prefix-closed: $(s \frown t) \in T \implies s \in T$.

A process $P$ is deterministic if it can't refuse any event in which it can engage: $\forall s \in traces(P) \bullet X \in refusals(P / s) \iff (X \cap \{ x \mid \langle x \rangle \in traces(P / s) \} = \emptyset)$.

Deterministic processes can unambiguously pick between events whenever there is a choice. These processes obey an extra law: $P \parallel P = P$.

The process that results from making a process consume a trace is a process with the same alphabet, containing the remainders of all traces that start with the consumed trace. Given process $(\Sigma, T)$ and $s \in T$, then $(\Sigma, T) / s = (\Sigma, \{ t \mid (s \frown t) \in T \})$. Of course, $(x \rightarrow P) / \langle x \rangle = P$, and given a function definition, $(x : A \rightarrow f(x)) / \langle y \rangle = f(y)$ assuming $y \in A$.
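
Over the trace-set model this consumption operator is a one-liner (the name `after` is an illustrative choice):

```python
def after(T, s):
    """(Σ, T) / s = (Σ, { t | s ⌢ t ∈ T }), assuming s ∈ T."""
    n = len(s)
    return {t[n:] for t in T if t[:n] == s}

T = {(), ("x",), ("x", "y"), ("x", "z")}
print(after(T, ("x",)))
# {(), ('y',), ('z',)}
```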

The choice prefix shorthand notation works like this: given an alphabet $\Sigma$, a set of choices $A \subseteq \Sigma$, and a function $f$ that maps each choice to a set of traces, then $(x : A \rightarrow (\Sigma, f(x))) = (\Sigma, \{ \langle \rangle \} \cup \{ \langle x \rangle \frown s \mid x \in A \land s \in f(x) \})$.

The process mapping operation using a function $f$ is defined as $f((\Sigma, T)) = (ran(f), \{ f^{*}(t) \mid t \in T \})$.

Special Processes

Every alphabet has a broken process that never engages in any event. Given alphabet $\Sigma$, the broken process is $STOP_{\Sigma} = (\Sigma, \{ \langle \rangle \})$. Processes with different alphabets are always considered different, so given alphabets $A$ and $B$, if $A \neq B$, then $STOP_{A} \neq STOP_{B}$. The traces of the broken process are defined as $traces(STOP_{\Sigma}) = \{ \langle \rangle \}$.

Every alphabet has a run process that can always engage, at any given point, in any event of its alphabet. Given alphabet $\Sigma$, the run process is $RUN_{\Sigma} = (\Sigma, \Sigma^{*})$, or alternatively $(x : \Sigma \rightarrow RUN_{\Sigma})$, and it follows that $\alpha RUN_{\Sigma} = \Sigma$. Notice that the run process behaves like a restart process: $\forall s \in traces(RUN_{\Sigma}) \bullet \overset{\frown}{RUN_{\Sigma}} / (s \frown \langle \downarrow \rangle) = \overset{\frown}{RUN_{\Sigma}}$.
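
Both special processes can be sketched in the trace model; since $\Sigma^{*}$ is infinite, $RUN_{\Sigma}$ is approximated up to a bounded length (the helper names are illustrative):

```python
from itertools import product

def STOP(alphabet):
    """The broken process: (Σ, {⟨⟩})."""
    return (frozenset(alphabet), {()})

def RUN_traces(alphabet, n):
    """A finite approximation of Σ*: every trace up to length n."""
    return {t for k in range(n + 1)
            for t in product(sorted(alphabet), repeat=k)}

print(STOP({"a", "b"})[1])              # {()}
print(len(RUN_traces({"a", "b"}, 2)))   # 1 + 2 + 4 = 7
```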

Given alphabet $\Sigma$, the $SKIP_{\Sigma}$ process does nothing and terminates successfully: $\alpha SKIP_{\Sigma} = \Sigma \cup \{ \checkmark \}$. A process can terminate successfully by behaving like $SKIP$. Notice $SKIP_{A} \parallel SKIP_{B} = SKIP_{A \cup B}$. Its traces are $traces(SKIP_{\Sigma}) = \{ \langle \rangle, \langle \checkmark \rangle \}$.

Concealing the broken process results in the broken process: $\forall X, Y \bullet STOP_{X} \setminus Y = STOP_{X - Y}$.


A process is cyclic if in all circumstances it can go back to its original state. Given process $P$: $\forall s \in traces(P) \bullet \exists t \bullet P / (s \frown t) = P$. The broken process is a simple example.

A process $P$ is free from deadlocks if none of its traces lead to the broken process: $\forall t \in traces(P) \bullet (P / t) \neq STOP_{\alpha P}$.

Non-deterministic Processes

A non-deterministic process is a triplet $(\Sigma, F, D)$ consisting of the process alphabet, its failures as a relation $F \subseteq (\Sigma^{*} \times \wp(\Sigma))$ where $(\langle \rangle, \emptyset) \in F$, and a set of divergences $D \subseteq \Sigma^{*}$ with $D \subseteq dom(F)$. The failures and divergences must additionally satisfy the laws described in the sections below.

Given $P = (\Sigma, F_1, D_1)$ and $Q = (\Sigma, F_2, D_2)$, the $\sqsubseteq$ operator is defined as $(P \sqsubseteq Q) \iff (F_2 \subseteq F_1) \land (D_2 \subseteq D_1)$.


The failures relation over $(\Sigma^{*} \times \wp(\Sigma))$ is defined as $failures(P) = \{ (s, X) \mid s \in traces(P) \land X \in refusals(P / s) \}$. If $(s, X)$ is a failure of $P$, it means that $P$ will refuse $X$ after engaging in $s$. Alternatively:

\begin{align} failures(x : B \rightarrow P(x)) = \; &\{ (\langle \rangle, X) \mid X \subseteq (\alpha P - B) \} \\ &\cup \{ (\langle x \rangle \frown s, X) \mid x \in B \land (s, X) \in failures(P(x)) \} \end{align}

It's interesting that the traces of a process $P$ can be defined in terms of its failures: $traces(P) = dom(failures(P))$.


A trace of a process after which the process behaves chaotically is called a divergence of the process: $divergences(P) = \{ s \mid s \in traces(P) \land ((P / s) = CHAOS_{\alpha P}) \}$. Of course, $divergences(P) \subseteq traces(P)$. Anything concatenated to a divergence is also a divergence: $s \in divergences(P) \land t \in (\alpha P)^{*} \implies (s \frown t) \in divergences(P)$. After engaging in a divergence, a process refuses everything: $s \in divergences(P) \implies refusals(P / s) = \wp(\alpha P)$.

A process defined by choice can't diverge on its first event, so its divergences are defined by what happens after it: $divergences(x : B \rightarrow P(x)) = \{ \langle x \rangle \frown s \mid x \in B \land s \in divergences(P(x)) \}$.


The refusals of a process $P$ are defined as $refusals(P) = \{ X \mid (\langle \rangle, X) \in failures(P) \}$. Given a process defined by prefix, such a process refuses all sets of events it can't engage in: $refusals(x : B \rightarrow P(x)) = \{ X \mid X \subseteq (\alpha P - B) \}$.

For any process $P$, $X \in refusals(P) \implies X \subseteq \alpha P$ and $\emptyset \in refusals(P)$. If a process refuses a set, then it can refuse any of its subsets: $(X \cup Y) \in refusals(P) \implies X \in refusals(P)$. Also, any event that can't initially occur in a process can be added to an existing refusal, and the result will remain a refusal: $X \in refusals(P) \implies \forall x \in \alpha P \bullet (X \cup \{ x \}) \in refusals(P) \lor \langle x \rangle \in traces(P)$.


Given processes $P$ and $Q$ where $\alpha P = \alpha Q$, $(P \sqcap Q)$ is the process that behaves either like $P$ or like $Q$, where the selection between them is non-deterministic.

If $P = (\Sigma, F_1, D_1)$ and $Q = (\Sigma, F_2, D_2)$, then $(P \sqcap Q) = (\Sigma, F_1 \cup F_2, D_1 \cup D_2)$. Then $\alpha (P \sqcap Q) = \alpha P = \alpha Q$ and $(x \rightarrow (P \sqcap Q)) = ((x \rightarrow P) \sqcap (x \rightarrow Q))$, or more generally: $(x : B \rightarrow (P(x) \sqcap Q(x))) = ((x : B \rightarrow P(x)) \sqcap (x : B \rightarrow Q(x)))$. The $\sqcap_{x : S} P(x)$ notation stands for $(P(x_0) \sqcap P(x_1) \sqcap P(x_2) \sqcap \ldots)$.

Its traces are defined as $traces(P \sqcap Q) = traces(P) \cup traces(Q)$, and given $s \in traces(P \sqcap Q)$, trace consumption proceeds by cases: $(P \sqcap Q) / s = P / s$ if $s$ is only a trace of $P$, $(P \sqcap Q) / s = Q / s$ if $s$ is only a trace of $Q$, and $(P \sqcap Q) / s = ((P / s) \sqcap (Q / s))$ if $s$ is a trace of both.

Notice that $(P \sqcap P) = P$, $(P \sqcap Q) = (Q \sqcap P)$, and $((P \sqcap Q) \sqcap R) = (P \sqcap (Q \sqcap R))$. $\parallel$ distributes over $\sqcap$: $(P \parallel (Q \sqcap R)) = ((P \parallel Q) \sqcap (P \parallel R))$ and $((P \sqcap Q) \parallel R) = ((P \parallel R) \sqcap (Q \parallel R))$.

Keep in mind that recursion doesn't distribute over $\sqcap$: $\mu X \bullet ((a \rightarrow X) \sqcap (b \rightarrow X)) \neq ((\mu X \bullet (a \rightarrow X)) \sqcap (\mu X \bullet (b \rightarrow X)))$ unless $a = b$. The traces of the second process are a subset of the traces of the first process. The second process can choose to always engage in $a$ or always engage in $b$, while the first one can arbitrarily interleave them.

The concealment of a non-deterministic choice is defined as $((P \sqcap Q) \setminus C) = ((P \setminus C) \sqcap (Q \setminus C))$.

Given processes $P$ and $Q$ where $\alpha P = \alpha Q$, $(P \square Q)$ is the process that behaves either like $P$ or like $Q$, where the selection between them may be determined by its first action. If the first action is possible in $P$ and not in $Q$, then $P$ is selected. Conversely, if the first action is possible in $Q$ but not in $P$, then $Q$ is selected. If the first action is possible in both processes, the selection remains non-deterministic.

More formally, $\alpha (P \square Q) = \alpha P = \alpha Q$ and, given events $a$ and $b$, if $a \neq b$ then $(a \rightarrow P \;\square\; b \rightarrow Q) = (a \rightarrow P \mid b \rightarrow Q)$, otherwise $(a \rightarrow P \;\square\; b \rightarrow Q) = (a \rightarrow P \sqcap b \rightarrow Q)$. Notice that $(P \square P) = P$, $(P \square Q) = (Q \square P)$, and $((P \square Q) \square R) = (P \square (Q \square R))$.

The traces of $(P \square Q)$ are defined as $traces(P \square Q) = traces(P) \cup traces(Q)$. Given a non-empty trace $s \in traces(P \square Q)$, consumption follows the same cases as $\sqcap$: $(P \square Q) / s = P / s$ if $s$ is only a trace of $P$, $(P \square Q) / s = Q / s$ if $s$ is only a trace of $Q$, and $(P \square Q) / s = ((P / s) \sqcap (Q / s))$ if $s$ is a trace of both.

Notice that $\square$ and $\sqcap$ distribute over each other. Also, $traces(P \square Q) = traces(P \sqcap Q)$, but it is possible for $(P \sqcap Q)$ to deadlock on its first step, as $((P \square Q) \parallel P) = P$ but $((P \sqcap Q) \parallel P) = (P \sqcap STOP_{\alpha (P \sqcap Q)})$.

With regards to divergences, any divergence of $P$ or $Q$ is a divergence of $(P \sqcap Q)$ and of $(P \square Q)$: $divergences(P \sqcap Q) = divergences(P \square Q) = divergences(P) \cup divergences(Q)$.

If $P$ can refuse $X$, then $(P \sqcap Q)$ will refuse $X$ if $P$ is selected, and if $Q$ can refuse $X$, $(P \sqcap Q)$ will refuse it if $Q$ is selected: $refusals(P \sqcap Q) = refusals(P) \cup refusals(Q)$. In the case of $\square$, $refusals(P \square Q) = refusals(P) \cap refusals(Q)$.

The failures of $(P \square Q)$ are defined as:

\begin{align} failures(P \square Q) = \; &\{ (s, X) \mid (s, X) \in (failures(P) \cap failures(Q)) \\ &\lor (s \neq \langle \rangle \land (s, X) \in failures(P) \cup failures(Q)) \} \\ &\cup \{ (s, X) \mid s \in divergences(P \square Q) \} \end{align}


Given two processes $P$ and $Q$ with the same alphabet, $(P ||| Q)$ joins both processes without any interaction or synchronization between them. The only case of non-determinism is when both processes could have engaged in the same event.

Given $P = (x : B \rightarrow P(x))$ and $Q = (y : B \rightarrow Q(y))$, $(P ||| Q) = ((x : B \rightarrow (P(x) ||| Q)) \;\square\; (y : B \rightarrow (P ||| Q(y))))$. The traces of $(P ||| Q)$ are arbitrary interleavings of a trace of $P$ and a trace of $Q$, defined as $traces(P ||| Q) = \{ s \mid \exists t \in traces(P) \bullet \exists u \in traces(Q) \bullet s \;interleaves\; (t, u) \}$.
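
The $interleaves$ relation itself can be sketched recursively; `interleaves(t, u)` below enumerates every trace $s$ with $s \;interleaves\; (t, u)$:

```python
def interleaves(t, u):
    """All traces s such that s interleaves (t, u): events of t and u
    merged in every order that preserves each trace's own ordering."""
    if not t:
        return {u}
    if not u:
        return {t}
    return ({(t[0],) + s for s in interleaves(t[1:], u)}
            | {(u[0],) + s for s in interleaves(t, u[1:])})

print(interleaves(("a",), ("b", "c")))
# {('a', 'b', 'c'), ('b', 'a', 'c'), ('b', 'c', 'a')}
```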

Refusals are defined as $refusals(P ||| Q) = \{ X \cup Y \mid X \in refusals(P) \land Y \in refusals(Q) \}$.

Failures are defined as:

\begin{align} failures(P ||| Q) = \; &\{ (s, X) \mid \exists t, u \bullet s \;interleaves\; (t, u) \\ &\land (t, X) \in failures(P) \land (u, X) \in failures(Q) \} \\ &\cup \{ (s, X) \mid s \in divergences(P ||| Q) \} \end{align}

Divergences are defined as:

\begin{align} divergences(P ||| Q) = \{ u \mid \; &\exists s, t \bullet u \;interleaves\; (s, t) \\ &\land ((s \in divergences(P) \land t \in traces(Q)) \\ &\lor (s \in traces(P) \land t \in divergences(Q))) \} \end{align}

Of course, $\alpha (P ||| Q) = \alpha P = \alpha Q$. $(P ||| Q) = (Q ||| P)$ and $((P ||| Q) ||| R) = (P ||| (Q ||| R))$, but $(P ||| P) \neq P$. Also, $|||$ distributes over $\sqcap$, but not over $\square$.

Trace consumption has a more complex definition. Given $s \in traces(P ||| Q)$, let $X = \{ (t, u) \mid t \in traces(P) \land u \in traces(Q) \land s \;interleaves\; (t, u) \}$. Then $((P ||| Q) / s) = \sqcap_{(t, u) \in X} ((P / t) ||| (Q / u))$.

Special Processes

Given an alphabet $\Sigma$, $CHAOS_{\Sigma}$ is the most non-deterministic process and it's represented as $CHAOS_{\Sigma} = (\Sigma, (\Sigma^{*} \times \wp(\Sigma)), \Sigma^{*})$. There is nothing that it can't do, and it can't be extended with more non-deterministic choices. Notice that $(P \sqcap CHAOS_{\alpha P}) = CHAOS_{\alpha P}$, $traces(CHAOS_{\alpha P}) = divergences(CHAOS_{\alpha P}) = (\alpha P)^{*}$, and $refusals(CHAOS_{\alpha P}) = \wp(\alpha P)$. For any process $P$, $CHAOS_{\alpha P} \sqsubseteq P$.

A function is said to be strict if its result is the $CHAOS$ process whenever $CHAOS$ is any of its arguments. For example, $//$, $\parallel$, $\setminus$, $\square$, and $\sqcap$ are strict. The prefix operator is a counter-example, as $(a \rightarrow CHAOS) \neq CHAOS$.

The $STOP_{\Sigma}$ process can be modelled as $(\Sigma, \{ \langle \rangle \} \times \wp(\Sigma), \emptyset)$. It does nothing and refuses everything, $refusals(STOP_{\Sigma}) = \wp(\Sigma)$, but it doesn't diverge: $divergences(STOP_{\Sigma}) = \emptyset$. Notice that for any $P$, $(P \square STOP_{\alpha P}) = P$, and that $(P ||| STOP_{\alpha P}) = P$ given that $P$ doesn't diverge.

The $RUN_{\Sigma}$ process can also be modelled as a non-deterministic process. Notice that $(P ||| RUN_{\alpha P}) = RUN_{\alpha P}$ given that $P$ doesn't diverge.


Two processes can be composed as $P \parallel Q$ to model concurrency. The type of the resulting composition depends on the relationship between the alphabets of the composed processes.

Composition is commutative ($P \parallel Q = Q \parallel P$) and associative ($P \parallel (Q \parallel R) = (P \parallel Q) \parallel R$). Also, $P \parallel P = P$.

Given an interaction (a composition where both alphabets are equal), the resulting process is the intersection of the composed processes: $(\Sigma, T_1) \parallel (\Sigma, T_2) = (\Sigma, T_1 \cap T_2)$, which means that $traces(P \parallel Q) = traces(P) \cap traces(Q)$. It follows that $P \parallel STOP_{\alpha P} = STOP_{\alpha P}$ and that $P \parallel RUN_{\alpha P} = P$.

Given two processes defined with the choice shorthand, $(x : A \rightarrow P(x)) \parallel (y : B \rightarrow Q(y)) = (z : A \cap B \rightarrow (P(z) \parallel Q(z)))$. If no choices are shared, the result is the broken process: $(x \rightarrow P) \parallel (y \rightarrow P) = STOP_{\alpha P}$ when $x \neq y$. Trace consumption is defined as $(P \parallel Q) / t = (P / t) \parallel (Q / t)$.

Given an interleave or a pure interleave (compositions where the alphabets differ), the resulting process is the concurrent combination of both processes with potential synchronization points, and it must consider every possible way in which independent events may happen. Processes $P$ and $Q$ only need to synchronize on events from $\alpha P \cap \alpha Q$.

Consider two processes $P = (x : A \rightarrow f(x))$ and $Q = (y : B \rightarrow g(y))$. The events of $P \parallel Q$ will be drawn from $C = (A \cap B) \cup (A - \alpha Q) \cup (B - \alpha P)$. Assuming $z \in C$, there are three possibilities: if $z \in A \cap B$, both processes engage in the event together and the result behaves like $(f(z) \parallel g(z))$; if $z \in A - \alpha Q$, only $P$ engages and the result behaves like $(f(z) \parallel Q)$; and if $z \in B - \alpha P$, only $Q$ engages and the result behaves like $(P \parallel g(z))$.

Notice $z \notin A \land z \notin B$ is not a possibility given the definition of $C$.

The resulting alphabet is $\alpha (P \parallel Q) = \alpha P \cup \alpha Q$. The set of valid traces is defined as $traces(P \parallel Q) = \{ t \mid (t \restriction \alpha P) \in traces(P) \land (t \restriction \alpha Q) \in traces(Q) \land t \in (\alpha P \cup \alpha Q)^{*} \}$, which means that the parts of a trace related to $P$ must be valid in $P$, the parts related to $Q$ must be valid in $Q$, and the events in a trace must be drawn from the alphabet of either $P$ or $Q$.

Consuming a trace $t$ is defined as $(P \parallel Q) / t = (P / (t \restriction \alpha P)) \parallel (Q / (t \restriction \alpha Q))$. Notice that if $\alpha P = \alpha Q$ (which means the composition is an interaction), then the previous expression is equal to the interaction definition: $(P \parallel Q) / t = (P / t) \parallel (Q / t)$.
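
The trace definition of $P \parallel Q$ can be checked by brute force over a bounded trace length (a sketch; the exhaustive enumeration with `itertools.product` is my own device, not part of the theory):

```python
from itertools import product

def restrict(t, A):
    """t ↾ A: keep only the events of t that belong to A."""
    return tuple(e for e in t if e in A)

def parallel_traces(TP, aP, TQ, aQ, n):
    """traces(P ∥ Q) up to length n: every t over αP ∪ αQ whose
    restriction to αP is a trace of P and to αQ is a trace of Q."""
    return {t for k in range(n + 1)
            for t in product(sorted(aP | aQ), repeat=k)
            if restrict(t, aP) in TP and restrict(t, aQ) in TQ}

# P = (a -> b -> STOP), Q = (b -> c -> STOP); they synchronize on b.
TP, aP = {(), ("a",), ("a", "b")}, {"a", "b"}
TQ, aQ = {(), ("b",), ("b", "c")}, {"b", "c"}

print(sorted(parallel_traces(TP, aP, TQ, aQ, 3), key=len))
# [(), ('a',), ('a', 'b'), ('a', 'b', 'c')]
```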

Given composition between two processes that engage in communication: $(c!v \rightarrow P) \parallel (c?x \rightarrow Q(x)) = (c!v \rightarrow (P \parallel Q(v)))$. Notice the act of sending the message remains in the process definition (think of it as a log that can be concealed if desired).

Given sequential processes $P$ and $Q$, $(P \parallel Q)$ is only valid if $\alpha P \subseteq \alpha Q \lor \alpha Q \subseteq \alpha P \lor \checkmark \in ((\alpha P \cap \alpha Q) \cup (\overline{\alpha P} \cap \overline{\alpha Q}))$.

The failures of a composed process are defined as:

failures(PQ)={(s,(XY))s(αPαQ)*(sαP,X)failures(P)(sαQ,Y)failures(Q)}{(s,X)sdivergences(PQ)}\begin{align} failures(P &\parallel Q) = \{ (s, (X \cup Y)) \mid s \in (\alpha P \cup \alpha Q)^{*} \\ &\land (s \restriction \alpha P, X) \in failures(P) \land (s \restriction \alpha Q, Y) \in failures(Q) \} \\ &\cup \{ (s, X) \mid s \in divergences(P \parallel Q) \} \end{align}

Divergences are defined as:

divergences(PQ)={stt(αPαQ)*(((sαPdivergences(P))(sαQtraces(Q)))((sαPtraces(P))(sαQdivergences(Q))))}\begin{align} divergences&(P \parallel Q) = \{ s \frown t \mid t \in (\alpha P \cup \alpha Q)^{*} \\ &\land (((s \restriction \alpha P \in divergences(P)) \land (s \restriction \alpha Q \in traces(Q))) \\ &\lor ((s \restriction \alpha P \in traces(P)) \land (s \restriction \alpha Q \in divergences(Q)))) \} \end{align}

Restart Processes

Given a process PP where αP\downarrow \notin \alpha P, P\overset{\frown}{P} is the process that behaves like PP until \downarrow occurs, and then behaves like PP again.

It is defined as P=μX(PX)=(PPP...)\overset{\frown}{P} = \mu X \bullet (P \overset{\frown}{\downarrow} X) = (P \overset{\frown}{\downarrow} P \overset{\frown}{\downarrow} P \overset{\frown}{\downarrow} ...) with an alphabet αP=αP{}\alpha \overset{\frown}{P} = \alpha P \cup \{ \downarrow \}. Notice that straces(P)P/(s)=P\forall s \in traces(P) \bullet \overset{\frown}{P} / (s \frown \langle \downarrow \rangle) = \overset{\frown}{P}.
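This reset behaviour means that consuming a trace against the restartable process only ever depends on the events after the most recent catastrophe. A small sketch, assuming traces are Python tuples and using the hypothetical name 'down' for the \downarrow event:

```python
# Sketch: since the restartable process satisfies
# P-hat / (s <down>) = P-hat, only the suffix of a trace after the last
# catastrophe matters. 'down' is a stand-in name for the event.
DOWN = 'down'

def restart_residue(s):
    """Return the suffix of s after its most recent 'down' event
    (or all of s if no catastrophe has occurred)."""
    if DOWN not in s:
        return s
    last = len(s) - 1 - s[::-1].index(DOWN)
    return s[last + 1:]
```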

A process can “save” its state by engaging on a checkpoint event \bigodot.

Given PP where αP\bigodot \in \alpha P, Ch(P)Ch(P) is the process that, upon engaging on \downarrow, goes back to the state after its most recent checkpoint, or starts all over again if no checkpoint has occurred. This is defined as straces(P)Ch(P)/(s)=Ch(P)\forall s \in traces(P) \bullet Ch(P) / (s \frown \langle \downarrow \rangle) = Ch(P) and straces(P)Ch(P)/(s)=Ch(P/(s))\forall s \in traces(P) \bullet Ch(P) / (s \frown \langle \bigodot \rangle) = Ch(P / (s \frown \langle \bigodot \rangle)).

Mch(P)Mch(P) is the process that goes back to the state just before the last \bigodot if it engages on \downarrow. Its alphabet is defined as αMch(P)=αP{,}\alpha Mch(P) = \alpha P \cup \{ \bigodot, \downarrow \} and its behaviour is defined as straces(P)Mch(P)/(s)=Mch(P)\forall s \in traces(P) \bullet Mch(P) / (s \frown \langle \downarrow \rangle) = Mch(P) and (sαP)ttraces(P)Mch(P)/(st)=Mch(P)/s\forall (s \restriction \alpha P) \frown t \in traces(P) \bullet Mch(P) / (s \frown \langle \bigodot \rangle \frown t \frown \langle \downarrow \rangle) = Mch(P) / s.

Sequential Processes

A sequential process has \checkmark in its alphabet, and engages on it upon successful termination. \checkmark can only be the last event a sequential process engages with.

A trace of a sequential process PP is a sentence if PP terminates successfully after it engages on it.

Given a composition of a non-sequential process with alphabet AA and a sequential process with alphabet BB, the sequential process dominates the result if its alphabet is a superset of the non-sequential one: AABSTOPASKIPB=SKIPB\checkmark \notin A \land A \subseteq B \implies STOP_{A} \parallel SKIP_{B} = SKIP_{B}.

Notice that a successfully terminating process (like SKIPASKIP_{A}) doesn’t participate in any other event offered by another concurrent process: ((x:BP(x))SKIPA)=(x:(BA)(P(x)SKIPA))((x : B \rightarrow P(x)) \parallel SKIP_{A}) = (x : (B - A) \rightarrow (P(x) \parallel SKIP_{A})).

Sequential Composition

Given sequential processes PP and QQ with the same alphabet, (P;Q)(P ; Q) is the process that behaves like PP until successful termination, and then behaves like QQ. If PP doesn’t terminate successfully, then neither does (P;Q)(P ; Q). Notice that ((s;t);u)=(s;(t;u))((s ; t) ; u) = (s ; (t ; u)), that (;t)=(\langle \rangle ; t) = \langle \rangle and similarly (;t)=t(\langle \checkmark \rangle ; t) = t. Of course, (s;)=s(s ; \langle \checkmark \rangle) = s.

An infinite loop consisting of sequential composition of a process with itself is defined as *P=μX(P;X)*P = \mu X \bullet (P ; X), which expands to (P;P;P;P;...)(P ; P ; P ; P ; ...). Given *P*P is an infinite loop and doesn’t terminate, then \checkmark is not part of its alphabet: α(*P)=αP{}\alpha (*P) = \alpha P - \{ \checkmark \}.

Notice ((x:BP(x));Q)=(x:B(P(x);Q))((x : B \rightarrow P(x)) ; Q) = (x : B \rightarrow (P(x) ; Q)). Given one choice: ((xP);Q)=(x(P;Q))((x \rightarrow P) ; Q) = (x \rightarrow (P ; Q)). Also, sequential composition is associative: ((P;Q);R)=(P;(Q;R))((P ; Q) ; R) = (P ; (Q ; R)).

Given PP and considering SKIPαPSKIP_{\alpha P} and STOPαPSTOP_{\alpha P}, then notice that (SKIPαP;P)=(P;SKIPαP)=P(SKIP_{\alpha P} ; P) = (P ; SKIP_{\alpha P}) = P, and (STOPαP;P)=STOPαP(STOP_{\alpha P} ; P) = STOP_{\alpha P}.

Given traces ss and tt where \checkmark is not in ss, then (s;t)=s(s ; t) = s and (s);t=st(s \frown \langle \checkmark \rangle) ; t = s \frown t. Also, events after the successful termination event are discarded: ((s0s1);t)=(s0t)((s_{0} \frown \langle \checkmark \rangle \frown s_{1}) ; t) = (s_{0} \frown t).
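These trace laws translate directly into code. A minimal sketch, assuming traces are Python tuples and using the hypothetical name 'tick' for the \checkmark event:

```python
# Sketch of the trace composition operator (s ; t): if s contains no
# tick the result is s; otherwise keep s up to (excluding) its first
# tick, append t, and discard anything after the tick.
TICK = 'tick'

def seq(s, t):
    if TICK not in s:
        return s
    i = s.index(TICK)
    return s[:i] + t  # events after the first tick are discarded
```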

The traces operation is defined like this: traces(P;Q)={s;tstraces(P)ttraces(Q)}traces(P ; Q) = \{ s ; t \mid s \in traces(P) \land t \in traces(Q) \}.

Given a deterministic process PP, notice that straces(P)P/s=SKIPαPs \frown \langle \checkmark \rangle \in traces(P) \implies P / s = SKIP_{\alpha P}. For non-deterministic processes, this observation is relaxed to straces(P)(P/s)SKIPαPs \frown \langle \checkmark \rangle \in traces(P) \implies (P / s) \sqsubseteq SKIP_{\alpha P}.

In the case of non-deterministic processes, ;; distributes over \sqcap. Its refusals are:

refusals(P;Q)={X(X{})refusals(P)}{Xtraces(P)Xrefusals(Q)}\begin{align} refusals&(P ; Q) = \\ &\{ X \mid (X \cup \{ \checkmark \}) \in refusals(P) \} \\ &\cup \\ &\{ X \mid \langle \checkmark \rangle \in traces(P) \land X \in refusals(Q) \} \end{align}

Notice that if PP can refuse XX, then it can also refuse X{}X \cup \{ \checkmark \}. Failures are defined as:

failures(P;Q)={(s,X)(s,X{})failures(P)}{(st,X)straces(P)(t,X)failures(Q)}{(s,X)sdivergences(P;Q)}\begin{align} failures&(P ; Q) = \\ &\{ (s, X) \mid (s, X \cup \{ \checkmark \}) \in failures(P) \} \\ &\cup \\ &\{ (s \frown t, X) \mid s \frown \langle \checkmark \rangle \in traces(P) \land (t, X) \in failures(Q) \} \\ &\cup \\ &\{ (s, X) \mid s \in divergences(P ; Q) \} \end{align}

For non-deterministic sequential composition, (CHAOS;P)=CHAOS(CHAOS ; P) = CHAOS, as a divergent process must remain divergent. (P;Q)(P ; Q) diverges when PP diverges or when PP completes successfully and then QQ diverges:

divergences(P;Q)={ssdivergences(P)¬(ins)}{ststraces(P)¬(ins)tdivergences(Q)}\begin{align} divergences&(P ; Q) = \\ &\{ s \mid s \in divergences(P) \land \lnot (\langle \checkmark \rangle \text{ in } s)\} \\ &\cup \\ &\{ s \frown t \mid s \frown \langle \checkmark \rangle \in traces(P) \land \lnot (\langle \checkmark \rangle \text{ in } s) \land t \in divergences(Q) \} \end{align}


Given sequential processes PP and QQ, (PQ)(P \triangle Q) is a type of sequential composition that behaves like PP up to an arbitrary event where execution is interrupted, and then behaves like QQ. It must hold that αP\checkmark \notin \alpha P.

QQ starts on an arbitrary event initially offered by QQ but not offered by PP at all (this ensures determinism): (x:BP(x))Q=Q(x:B(P(x)Q))(x : B \rightarrow P(x)) \triangle Q = Q \square (x : B \rightarrow (P(x) \triangle Q)).

\triangle is associative and distributes over \sqcap. Also (PSTOPαP)=(STOPαPP)=P(P \triangle STOP_{\alpha P}) = (STOP_{\alpha P} \triangle P) = P and (PCHAOSαP)=(CHAOSαPP)=CHAOSαP(P \triangle CHAOS_{\alpha P}) = (CHAOS_{\alpha P} \triangle P) = CHAOS_{\alpha P}.

Its alphabet is α(PQ)=αPαQ\alpha (P \triangle Q) = \alpha P \cup \alpha Q and its traces are traces(PQ)={ststraces(P)ttraces(Q)}traces(P \triangle Q) = \{ s \frown t \mid s \in traces(P) \land t \in traces(Q) \}.

The catastrophic interrupt event is denoted \downarrow. (PQ)=(P(Q))(P \overset{\frown}{\downarrow} Q) = (P \triangle (\downarrow \rightarrow Q)), given αP\downarrow \notin \alpha P, which describes a process that behaves like PP, until \downarrow arbitrarily occurs, and then behaves like QQ. It is defined as (x:BP(x))Q=(x:B(P(x)Q)Q)(x : B \rightarrow P(x)) \overset{\frown}{\downarrow} Q = (x : B \rightarrow (P(x) \overset{\frown}{\downarrow} Q) \mid \downarrow \rightarrow Q). With regards to traces, straces(P)(PQ)/(s)=Q\forall s \in traces(P) \bullet (P \overset{\frown}{\downarrow} Q) / (s \frown \langle \downarrow \rangle) = Q.

Random alternation between PP and QQ is denoted (PQ)(P \bigotimes Q). One of the processes will run at any given time, until it is arbitrarily interrupted with the \bigotimes event, and then the other process will run until the same event occurs again. This is defined as (x:BP(x))Q=(x:B(P(x)Q)(QP))(x : B \rightarrow P(x)) \bigotimes Q = (x : B \rightarrow (P(x) \bigotimes Q) \mid \bigotimes \rightarrow (Q \bigotimes P)). Notice that (α(PQ)αPαQ)\bigotimes \in (\alpha (P \bigotimes Q) - \alpha P - \alpha Q) and that (PQ)/=(QP)(P \bigotimes Q) / \langle \bigotimes \rangle = (Q \bigotimes P). Also straces(P)(PQ)/s=(P/s)Q\forall s \in traces(P) \bullet (P \bigotimes Q) / s = (P / s) \bigotimes Q.


Processes with only two channels, an input channel leftleft and an output channel rightright, are called pipes. Given two pipes that are non-stopping, their composition is also non-stopping: >>>> cannot introduce deadlock in pipes.

A pipe PP is left-guarded if it can never output an infinite sequence of messages to the right channel without inputs from the left channel. A pipe QQ is right-guarded if it can never input an infinite sequence of messages from the left channel without outputting to the right channel.

A pipe can be modelled as a relation between two sequences (left,right)(left, right) which represent a valid state of the pipe. Chaining pipes is then equivalent to relational composition. Given (P>>Q)(P >> Q), if PP is left-guarded or QQ is right-guarded, then: s(left,s)P(s,right)Q\exists s \bullet (left, s) \in P \land (s, right) \in Q.
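Chaining as relational composition can be sketched directly. The relations below are hypothetical finite approximations, with each pipe a set of (left, right) pairs of tuples:

```python
# Sketch: a pipe as a finite relation over (left, right) pairs of
# tuples; chaining P >> Q is relational composition through an
# intermediate sequence s.
def chain_pipes(p, q):
    return {(left, right)
            for (left, s) in p
            for (s2, right) in q
            if s == s2}
```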

Given pipes PP and QQ, sending PP’s output to QQ’s input is denoted P>>QP >> Q. The resulting process is a pipe. The alphabet of such pipe is α(P>>Q)=αleft(P)αright(Q)\alpha (P >> Q) = \alpha left(P) \cup \alpha right(Q). Of course, if P>>QP >> Q is valid, then αright(P)=αleft(Q)\alpha right(P) = \alpha left(Q). Notice that (P>>Q)>>R=P>>(Q>>R)(P >> Q) >> R = P >> (Q >> R).

Notice both pipes can keep communicating with each other indefinitely while not communicating with the external world at all, which is called livelock. Proving that P>>QP >> Q is free of livelock requires proving that PP is left-guarded or that QQ is right-guarded.

Given P>>QP >> Q where both pipes are ready to output data, the right side of the pipe takes precedence:

(right!xP)>>(right!wQ)=right!w((right!xP)>>Q)\begin{align} (right!x \rightarrow P) &>> (right!w \rightarrow Q) = \\ &right!w \rightarrow ((right!x \rightarrow P) >> Q) \end{align}

Conversely, if both pipes are ready to input data, the left side of the pipe takes precedence:

(left?xP(x))>>(left?yQ(y))=left?x(P(x)>>(left?yQ(y)))\begin{align} (left?x \rightarrow P(x)) &>> (left?y \rightarrow Q(y)) = \\ &left?x \rightarrow (P(x) >> (left?y \rightarrow Q(y))) \end{align}

If PP is ready to input and QQ is ready to output, the order is unspecified:

(left?xP(x))>>(right!wQ)=(left?x(P(x)>>(right!wQ)))(right!w((left?xP(x))>>Q))\begin{align} (left?x \rightarrow P(x)) &>> (right!w \rightarrow Q) = \\ &(left?x \rightarrow (P(x) >> (right!w \rightarrow Q))) \\ &\mid \\ & (right!w \rightarrow ((left?x \rightarrow P(x)) >> Q)) \end{align}


A buffer is a special type of pipe that outputs the same sequence it received as input, potentially after some delay. Given the relational representation of a pipe, a buffer must obey the following laws:

It follows that all buffers are left-guarded.

If PP and QQ are buffers, then (P>>Q)(P >> Q) and (left?x(P>>(right!xQ)))(left?x \rightarrow (P >> (right!x \rightarrow Q))) are also buffers. (P>>Q)(P >> Q) is also a buffer if it is equal to (left?x(P>>(right!xQ)))(left?x \rightarrow (P >> (right!x \rightarrow Q))).
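The description of a buffer (it outputs the same sequence it received) implies that at every valid state, the output so far is a prefix of the input so far. A small sketch checking that property over a hypothetical finite relation:

```python
# Sketch: checking the prefix property implied by the buffer
# description: whenever (left, right) is a valid buffer state, the
# output so far must be a prefix of the input so far.
def is_prefix(r, l):
    return r == l[:len(r)]

def respects_buffer_prefix(relation):
    return all(is_prefix(right, left) for (left, right) in relation)
```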


A trace is a sequence of events that can be applied in order to an object. The empty trace \langle \rangle is a valid trace in every object (the shortest possible trace), and it is the default trace when an object hasn't engaged on any event.

The resulting process that occurs after process PP engages in trace ss is denoted as P/sP / s. This expression only makes sense if straces(P)s \in traces(P). Notice that P/=PP / \langle \rangle = P and that P/(st)=(P/s)/tP / (s \frown t) = (P / s) / t.

A function f:seqseqf : seq \rightarrow seq that maps a trace to a trace is strict if f()=f(\langle \rangle) = \langle \rangle, and distributive if f(st)=f(s)f(t)f(s \frown t) = f(s) \frown f(t). Notice that if the function is distributive, then it is also strict.

Given a function mm that maps an event to an event, m*m^{*} stands for the function that maps a sequence to another sequence using mm on every element. For example, double*(1,2,3)=2,4,6double^{*}(\langle 1, 2, 3 \rangle) = \langle 2, 4, 6 \rangle. Notice that m*m^{*} is always strict and distributive. Of course, the mapping operator preserves the sequence length: given sequence ss, #(m*(s))=#s\# (m^{*}(s)) = \# s. Also, m*(s0)=m(s0)m^{*}(\langle s_{0} \rangle) = \langle m(s_{0}) \rangle.
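A minimal sketch of the mapping operator, assuming traces are Python tuples:

```python
# Sketch: m* lifts an event-to-event map m to a trace-to-trace map,
# applying m to every element and preserving length. It is strict and
# distributive by construction.
def star(m):
    return lambda s: tuple(m(e) for e in s)

# Example: doubling every event of a numeric trace.
double = star(lambda x: 2 * x)
```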

Given a process defined as the function PP and a set of choices AA from its alphabet, its set of traces is defined as traces(x:AP(x))={}{ytyAttraces(P(y))}traces(x : A \rightarrow P(x)) = \{ \langle \rangle \} \cup \{ \langle y \rangle \frown t \mid y \in A \land t \in traces(P(y)) \}. Similarly, given a prefix with event xx and result PP, traces(xP)={}{xtttraces(P)}traces(x \rightarrow P) = \{ \langle \rangle \} \cup \{ \langle x \rangle \frown t \mid t \in traces(P) \}. Finally, given a process composed of choices, traces(xPyQ)=traces(xP)traces(yQ)traces(x \rightarrow P \mid y \rightarrow Q) = traces(x \rightarrow P) \cup traces(y \rightarrow Q).
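These trace sets can be enumerated to a bounded depth. A crude sketch, with a process modelled (hypothetically) as a dict from offered events to successor processes:

```python
# Sketch: enumerating the traces of a guarded process up to a given
# depth. The empty trace is always included; every offered event y
# prepends <y> to each trace of its successor process.
def traces_upto(process, depth):
    result = {()}
    if depth == 0:
        return result
    for event, nxt in process.items():
        for t in traces_upto(nxt, depth - 1):
            result.add((event,) + t)
    return result
```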

The traces of the resulting process after consuming a trace ss are the remainders of the traces of PP that extend ss: traces(P/s)={tsttraces(P)}traces(P / s) = \{ t \mid s \frown t \in traces(P) \}.
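A small sketch of computing traces(P/s)={tsttraces(P)}traces(P / s) = \{ t \mid s \frown t \in traces(P) \} over a finite trace set, with traces as Python tuples:

```python
# Sketch: the traces of P / s are the suffixes, after s, of every
# trace of P that has s as a prefix.
def traces_after(traces_p, s):
    n = len(s)
    return {t[n:] for t in traces_p if t[:n] == s}
```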



A chain is a set of processes {P1,P2,...,Pn,Pn+1}\{ P_1, P_2, ..., P_n, P_{n + 1} \} with equal alphabets, ordered by the \sqsubseteq operator. It must hold that nPnPn+1\forall n \bullet P_{n} \sqsubseteq P_{n + 1}.

The limit (or least upper bound) of a chain is the process that can consume the traces of all processes in the chain, defined as {P1,P2,...}=(αP1,itraces(Pi))\sqcup \{ P_1, P_2, ...\} = (\alpha P_1, \bigcup_i traces(P_i)). The alphabet of the resulting process is αP1\alpha P_1, which is arbitrary, because all the processes in the chain share the same alphabet, so the alphabet of any of its elements would do.
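A small sketch of the limit over a finite prefix of a chain, assuming each process is represented by its trace set:

```python
# Sketch: in the traces model, the limit of a chain is the process
# whose trace set is the union of the trace sets of all elements.
def chain_limit(chain_traces):
    limit = set()
    for ts in chain_traces:
        limit |= ts
    return limit
```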

Notice that given a process PP and a chain CC such that PCP \in C, then PCP \sqsubseteq \sqcup C. Given another process QQ, if every element of the chain satisfies cQc \sqsubseteq Q, then the limit of the chain satisfies it too: (cCcQ)CQ(\forall c \in C \bullet c \sqsubseteq Q) \implies \sqcup C \sqsubseteq Q.