\section{Related Work} \label{sect.relatedwork}

An approach for \GADTs consists of two components: a type information propagation component and a type conversion component. A type information propagation strategy determines which type information counts as known, based on user-supplied type signatures, and which type information is inferred. A type conversion strategy deals with the construction of coercion terms.

\subsection{Type information propagation strategies} \label{sect.relatedwork.prop}

\metacomment{ \item "Stratified type inference for generalized algebraic data types." (François Pottier and Yann Régis-Gianas) }

Pottier and Régis-Gianas~\cite{pottier06} use shape inference, where a shape represents known type information based on user-supplied signatures. These shapes are spread throughout the syntax tree in order to locate possible coercions. Their approach uses complex algorithms to spread the shapes as far as possible (depending on a quality-versus-performance trade-off parameter). The essential part concerning \GADTs is that incompatible shapes are normalized with respect to some equation system, which is not made explicit in their work. From an implementer's point of view, this equation system still has to be described concretely, and infrastructure for spreading shapes has to be built.

\metacomment{ \item "Simple unification-based type inference for \GADTs" (Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Geoffrey Washburn) }

Similarly, Peyton~Jones et al.~\cite{jones06, jones04} define a notion of wobbly types to combine type checking and type inference, based on earlier work on boxy types~\cite{vytiniotis06}. A rigid type represents known type information derived from user-supplied type signatures, whereas a wobbly type is based on inferred information. The idea is that type conversions are applied only to rigid types, for reasons of predictability and most-general typing. Aside from wobbly types, the authors use concepts such as ``fresh most general unifiers'' and lexically scoped type variables in their presentation. It is hard to distinguish which of these concepts are really required, and which are related to other language features covered by their approach (such as type families). We incorporate an equivalent notion of wobbly and rigid types in our specification: in an explicitly typed \SystemF-variant, the skolemized type constants are rigid types, and we allow type conversions only on those.

\metacomment{ \item propagation conclusion }

The exact choice of propagation strategy is an orthogonal issue. Since type conversions are allowed only on known type information, a better propagation strategy merely means that the programmer has to supply fewer explicit type signatures. This is the reason why we chose an explicitly typed source language in the first place. In fact, for our own implementation of this specification, we piggybacked on the infrastructure of the implementation of a higher-ranked, impredicative type system. It has two propagation strategies, of which the more advanced one has a concept of hard and soft type contexts, comparable to rigid and boxy types~\cite{dijkstra05phd}.
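To illustrate the role of known type information, consider the following GHC-style Haskell sketch (our own illustration; the \texttt{Expr} data type and \texttt{eval} function are standard examples and are not taken from the cited work). The user-supplied signature of \texttt{eval} is what makes the result type rigid, so that the equality introduced by each pattern match can be used; without the signature, the result type would only be inferred, i.e.\ wobbly, information:

\begin{verbatim}
{-# LANGUAGE GADTs #-}

-- A standard GADT of typed expressions.
data Expr a where
  IntLit  :: Int -> Expr Int
  BoolLit :: Bool -> Expr Bool
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The signature makes the result type 'a' known (rigid);
-- each pattern match then refines 'a' (a ~ Int, a ~ Bool),
-- which justifies returning 'n' and 'b' at type 'a'.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (If c t e)  = if eval c then eval t else eval e
\end{verbatim}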
\subsection{Type conversion strategies}

\metacomment{ \item "Simple unification-based type inference for \GADTs" (Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Geoffrey Washburn) }

Peyton~Jones et al.~\cite{jones06, jones04} use a unification-based strategy, in which type conversion is called type refinement. A fixpoint substitution is constructed that, for each type equation introduced by pattern matching, maps each (non-wobbly) type component of the left-hand side to the corresponding type component of the right-hand side. This substitution is then used to normalize the types under consideration. The presentation is intertwined with the type propagation strategy, which makes it hard to separate these concepts. A more recent continuation of this work by Sulzmann et al.~\cite{sulzmann07} treats the \GADT aspect in relative isolation.

\metacomment{ \item "Type inference and type error diagnosis for Hindley/Milner with extensions" \item no specific solver for \GADTs, complicated solver }

Wazny~\cite{wazny06} uses a constraint-based strategy. The \GADT aspect of this strategy is covered separately by Sulzmann et al.~\cite{sulzmann06, stuckey05}. They also formulate the typing problems in terms of solving constraints with CHRs. The difference is that we restrict ourselves to equality constraints and do not need the machinery required to solve implication constraints. Furthermore, their typing and translation rules do not mention how to deal with existential data types; this may be handled transparently by their approach, but it is of interest to the reader because \GADT examples~\cite{leftcorner} often use existentials. Finally, in contrast to Peyton~Jones et al., restrictions on type conversions are not mentioned. We rely on their encoding of \GADTs as qualified types in our approach, i.e., data constructors take a list of equality constraints (see the sketch at the end of this section). This encoding makes explicit which type variables are equated to which types, which gives slightly more information than an encoding in type-signature notation (although these encodings can be mapped to each other).

As an overall observation concerning the related work, we use a constraint-based strategy similar to that of Wazny, but use the unification-based strategy of Peyton~Jones et al.\ for the specific part of the implementation that deals with the decomposition of equality constraints.
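To make the qualified-types encoding concrete, the following GHC-style Haskell sketch (our own illustration; the \texttt{Expr} and \texttt{Expr'} names are chosen for presentation only) shows a small \GADT in type-signature notation next to an equivalent declaration in which every constructor carries explicit equality constraints:

\begin{verbatim}
{-# LANGUAGE GADTs #-}

-- Type-signature notation: each constructor fixes the
-- result type in its signature.
data Expr a where
  IntLit :: Int -> Expr Int
  Pair   :: Expr b -> Expr c -> Expr (b, c)

-- Encoding as qualified types: every constructor targets
-- the plain type "Expr' a" and carries explicit equality
-- constraints, making visible which type variables are
-- equated to which types.
data Expr' a where
  IntLit' :: (a ~ Int)    => Int -> Expr' a
  Pair'   :: (a ~ (b, c)) => Expr' b -> Expr' c -> Expr' a
\end{verbatim}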