
    Author | Title | Year | Journal/Proceedings | Reftype | DOI/URL
    Aït-Kaci, H. Warren's Abstract Machine 1999   book URL 
    BibTeX:
    @book{WAM99,
      author = {Hassan Aït-Kaci},
      title = {Warren's Abstract Machine},
      publisher = {MIT Press},
      year = {1999},
      url = {http://web.archive.org/web/20030213072337/http://www.vanx.org/archive/wam/wam.html}
    }
    
    Adams, M.D. & Dybvig, R.K. Efficient Nondestructive Equality Checking for Trees and Graphs 2008 International Conference on Functional Programming (ICFP), pp. 179-188  inproceedings DOI  
    Abstract: The Revised$^6$ Report on Scheme requires its generic equivalence predicate, equal?, to terminate even on cyclic inputs. While the terminating equal? can be implemented via a DFA-equivalence or union-find algorithm, these algorithms usually require an additional pointer to be stored in each object, are not suitable for multithreaded code due to their destructive nature, and may be unacceptably slow for the small acyclic values that are the most likely inputs to the predicate.

    This paper presents a variant of the union-find algorithm for equal? that addresses these issues. It performs well on large and small, cyclic and acyclic inputs by interleaving a low-overhead algorithm that terminates only for acyclic inputs with a more general algorithm that handles cyclic inputs. The algorithm terminates for all inputs while never being more than a small factor slower than whichever of the acyclic or union-find algorithms would have been faster. Several intermediate algorithms are also presented, each of which might be suitable for use in a particular application, though only the final algorithm is suitable for use in a library procedure, like equal?, that must work acceptably well for all inputs.

    BibTeX:
    @inproceedings{AD08,
      author = {Michael D. Adams and R. Kent Dybvig},
      title = {Efficient Nondestructive Equality Checking for Trees and Graphs},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {179--188},
      doi = {http://doi.acm.org/10.1145/1411204.1411230}
    }
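    The abstract above explains why a union-find pass makes structural equality terminate on cyclic data: two nodes are assumed equal (unioned) before their children are compared, so revisiting the pair cuts the cycle. A minimal Python sketch of that core idea, for lists only; this is not the paper's interleaved algorithm, which additionally runs a low-overhead acyclic pass:

    ```python
    # Terminating structural equality on possibly-cyclic lists via union-find,
    # a simplified sketch of the idea behind Adams and Dybvig's equal?.

    def equal(a, b):
        parent = {}                      # union-find over object identities

        def find(x):
            while id(x) in parent:
                x = parent[id(x)]
            return x

        def eq(a, b):
            if a is b:
                return True
            if isinstance(a, list) and isinstance(b, list):
                ra, rb = find(a), find(b)
                if ra is rb:             # already assumed equal: cut the cycle
                    return True
                parent[id(ra)] = rb      # union *before* descending
                return len(a) == len(b) and all(eq(x, y) for x, y in zip(a, b))
            return a == b

        return eq(a, b)
    ```

    Unioning before descending is what guarantees termination: each pair of nodes is compared at most once, even when the inputs contain cycles.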
    
    Ahlswede, R., Cai, N., Li, S.R. & Yeung, R.W. Network Information Flow 2000 IEEE Transactions on Information Theory
    Vol. 46, pp. 1204-1216 
    article DOI  
    Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. In this paper, we study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the Max-flow Min-cut Theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a ``fluid'' which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
    BibTeX:
    @article{ACLY00,
      author = {Rudolf Ahlswede and Ning Cai and Shuo-yen Robert Li and Raymond W. Yeung},
      title = {Network Information Flow},
      journal = {IEEE Transactions on Information Theory},
      year = {2000},
      volume = {46},
      pages = {1204--1216},
      doi = {http://dx.doi.org/10.1109/18.850663}
    }
    
    Aho, A.V. Indexed Grammars--An Extension of Context-Free Grammars 1968 Journal of the ACM
    Vol. 15, pp. 647-671 
    article DOI  
    Abstract: A new type of grammar for generating formal languages, called an indexed grammar, is presented. An indexed grammar is an extension of a context-free grammar, and the class of languages generated by indexed grammars has closure properties and decidability results similar to those for context-free languages. The class of languages generated by indexed grammars properly includes all context-free languages and is a proper subset of the class of context-sensitive languages. Several subclasses of indexed grammars generate interesting classes of languages.
    BibTeX:
    @article{Aho68,
      author = {Alfred V. Aho},
      title = {Indexed Grammars--An Extension of Context-Free Grammars},
      journal = {Journal of the ACM},
      year = {1968},
      volume = {15},
      pages = {647--671},
      doi = {http://doi.acm.org/10.1145/321479.321488}
    }
    
    Aho, A.V., Hopcroft, J.E. & Ullman, J.D. A General Theory of Translation 1969 Mathematical Systems Theory
    Vol. 3, pp. 193-221 
    article DOI  
    Abstract: The concept of a translation is fundamental to any theory of compiling. Formally, a translation is any set of pairs of words. Classes of finitely describable translations are considered in general, from the point of view of balloon automata [17, 18, 19].

    A translation can be defined by a transducer, a device with an input tape and an output terminal. If, with input x, the string y appears at the output terminal, then (x, y) is in the translation defined by the transducer. One can also define a translation by a two-input-tape recognizer. If x and y are placed on the two tapes, the recognizer tells if (x, y) is in the defined translation.

    One can define closed classes of transducers and recognizers by:

    (1) restricting the way in which infinite storage may be used (pushdown structure, stack structure, etc.),

    (2) allowing the finite control to be nondeterministic or deterministic,

    (3) allowing one way or two way motion on the input tapes.

    We have some results on classes of translations which can be categorized roughly into three types.

    (a) Translations defined by certain classes of transducers and recognizers are equivalent.

    (b) Translations of a given class are sometimes closed under composition and decomposition with a finite memory translation (gsm mapping).

    (c) A nondeterministically defined translation can be expressed as the composition of a finitely defined translation and a related deterministically defined translation in many cases.

    In addition, if C is a class of translations, then one can write a compiler-compiler to render any translation T in C if and only if the following question is solvable: For any translation T in C and string x, does there exist a y such that (x, y) is in T? We shall show that, in general, the decidability of this question is equivalent to the decidability of one or more questions from automata theory, depending upon the type of devices defining the class C.

    BibTeX:
    @article{AHU69,
      author = {A. V. Aho and J. E. Hopcroft and J. D. Ullman},
      title = {A General Theory of Translation},
      journal = {Mathematical Systems Theory},
      year = {1969},
      volume = {3},
      pages = {193--221},
      doi = {http://dx.doi.org/10.1007/BF01703920}
    }
    
    Aho, A.V. & Ullman, J.D. Translations on a Context-Free Grammar 1971 Information and Control
    Vol. 19, pp. 439-475 
    article DOI  
    Abstract: Two schemes for the specification of translations on a context-free grammar are proposed. The first scheme, called a generalized syntax directed translation (GSDT), consists of a context free grammar with a set of semantic rules associated with each production of the grammar. In a GSDT an input word is parsed according to the underlying context free grammar, and at each node of the tree, a finite number of translation strings are computed in terms of the translation strings defined at the descendants of that node. The functional relationship between the length of input and length of output for translations defined by GSDT's is investigated.

    The second method for the specification of translations is in terms of tree automata--finite automata with output, walking on derivation trees of a context free grammar. It is shown that tree automata provide an exact characterization for those GSDT's with a linear relationship between input and output length.

    BibTeX:
    @article{AU71,
      author = {Alfred V. Aho and Jeffrey D. Ullman},
      title = {Translations on a Context-Free Grammar},
      journal = {Information and Control},
      year = {1971},
      volume = {19},
      pages = {439--475},
      doi = {http://dx.doi.org/10.1016/S0019-9958(71)90706-6}
    }
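    As a toy illustration of the syntax-directed-translation idea summarized above -- a translation string computed at each tree node from the strings of its children -- consider emitting postfix notation from an expression tree. The tuple node encoding is invented for this sketch:

    ```python
    # One translation string per node, computed bottom-up from the children,
    # in the spirit of a (generalized) syntax-directed translation.
    # Nodes: ('num', k) or ('add'/'mul', left, right).

    def postfix(node):
        if node[0] == "num":
            return str(node[1])
        op = {"add": "+", "mul": "*"}[node[0]]
        # the node's translation is built from its descendants' translations
        return postfix(node[1]) + " " + postfix(node[2]) + " " + op
    ```

    Here the output length is linear in the input length, the case the paper characterizes via tree automata.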
    
    Alblas, H. Attribute Evaluation Methods 1991 Attribute Grammars, Applications and Systems, pp. 48-113  inproceedings DOI  
    Abstract: This paper surveys the main evaluation strategies for non-circular attribute grammars, i.e., passes, sweeps, visits and plans. Attention is also focused on the iteration of evaluation passes for circular attribute grammars.
    BibTeX:
    @inproceedings{Alblas91,
      author = {Henk Alblas},
      title = {Attribute Evaluation Methods},
      booktitle = {Attribute Grammars, Applications and Systems},
      year = {1991},
      pages = {48--113},
      doi = {http://dx.doi.org/10.1007/3-540-54572-7_3}
    }
    
    Allauzen, C. & Mohri, M. Finitely Subsequential Transducers 2003 International Journal of Foundations of Computer Science
    Vol. 14, pp. 983-994 
    article  
    Abstract: Finitely subsequential transducers are efficient finite-state transducers with a finite number of final outputs and are used in a variety of applications. Not all transducers admit equivalent finitely subsequential transducers, however. We briefly describe an existing generalized determinization algorithm for finitely subsequential transducers and give the first characterization of finitely subsequentiable transducers, transducers that admit equivalent finitely subsequential transducers. Our characterization shows the existence of an efficient algorithm for testing finite subsequentiability. We have fully implemented the generalized determinization algorithm and the algorithm for testing finite subsequentiability. We report experimental results showing that these algorithms are practical in large-vocabulary speech recognition applications. The theoretical formulation of our results is the equivalence of the following three properties for finite-state transducers: determinizability in the sense of the generalized algorithm, finite subsequentiability, and the twins property.
    BibTeX:
    @article{AM03,
      author = {Cyril Allauzen and Mehryar Mohri},
      title = {Finitely Subsequential Transducers},
      journal = {International Journal of Foundations of Computer Science},
      year = {2003},
      volume = {14},
      pages = {983--994}
    }
    
    Alur, R. & Madhusudan, P. Visibly Pushdown Languages 2004 ACM Symposium on Theory of Computing (STOC), pp. 202-211  inproceedings DOI  
    Abstract: We propose the class of visibly pushdown languages as embeddings of context-free languages that is rich enough to model program analysis questions and yet is tractable and robust like the class of regular languages. In our definition, the input symbol determines when the pushdown automaton can push or pop, and thus the stack depth at every position. We show that the resulting class Vpl of languages is closed under union, intersection, complementation, renaming, concatenation, and Kleene-*, and problems such as inclusion that are undecidable for context-free languages are Exptime-complete for visibly pushdown automata. Our framework explains, unifies, and generalizes many of the decision procedures in the program analysis literature, and allows algorithmic verification of recursive programs with respect to many context-free properties including access control properties via stack inspection and correctness of procedures with respect to pre and post conditions. We demonstrate that the class Vpl is robust by giving two alternative characterizations: a logical characterization using the monadic second order (MSO) theory over words augmented with a binary matching predicate, and a correspondence to regular tree languages. We also consider visibly pushdown languages of infinite words and show that the closure properties, MSO-characterization and the characterization in terms of regular trees carry over. The main difference with respect to the case of finite words turns out to be determinizability: nondeterministic Büchi visibly pushdown automata are strictly more expressive than deterministic Muller visibly pushdown automata.
    BibTeX:
    @inproceedings{AM04,
      author = {Rajeev Alur and P. Madhusudan},
      title = {Visibly Pushdown Languages},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {2004},
      pages = {202--211},
      doi = {http://doi.acm.org/10.1145/1007352.1007390}
    }
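    The defining restriction above -- the input symbol alone determines whether the automaton pushes or pops -- is easy to see in code. A minimal sketch with an assumed partition of the alphabet into calls '<', returns '>', and internals 'a' (the partition and the accepted property, well-matchedness, are illustrative choices, not from the paper):

    ```python
    # A visibly pushdown automaton sketch: the stack action is dictated
    # entirely by which partition class the input symbol belongs to.

    CALLS, RETURNS, INTERNALS = {"<"}, {">"}, {"a"}

    def vpa_accepts(word):
        stack = []
        for c in word:
            if c in CALLS:
                stack.append(c)        # a call symbol must push
            elif c in RETURNS:
                if not stack:          # unmatched return: reject
                    return False
                stack.pop()            # a return symbol must pop
            elif c in INTERNALS:
                pass                   # internal symbols leave the stack alone
            else:
                return False           # symbol outside the alphabet
        return not stack               # accept iff every call is matched
    ```

    Because the stack height at each position is fixed by the input, two such automata can be run in lockstep, which is what makes intersection and complementation (and hence inclusion checking) tractable.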
    
    Andoni, A. & Krauthgamer, R. The Smoothed Complexity of Edit Distance 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 357-369  inproceedings DOI  
    Abstract: We initiate the study of the smoothed complexity of sequence alignment, by proposing a semi-random model of edit distance between two input strings, generated as follows. First, an adversary chooses two binary strings of length d and a longest common subsequence A of them. Then, every character is perturbed independently with probability p, except that A is perturbed in exactly the same way inside the two strings.

    We design two efficient algorithms that compute the edit distance on smoothed instances up to a constant factor approximation. The first algorithm runs in near-linear time, namely $d^{1+\epsilon}$ for any fixed $\epsilon > 0$. The second one runs in time sublinear in d, assuming the edit distance is not too small. These approximation and runtime guarantees are significantly better than the bounds known for worst-case inputs, e.g. the near-linear time algorithm achieving approximation roughly $d^{1/3}$, due to Batu, Ergün, and Sahinalp [SODA 2006].

    Our technical contribution is twofold. First, we rely on finding matches between substrings in the two strings, where two substrings are considered a match if their edit distance is relatively small, a prevailing technique in commonly used heuristics, such as PatternHunter of Ma, Tromp and Li [Bioinformatics, 2002]. Second, we effectively reduce the smoothed edit distance to a simpler variant of (worst-case) edit distance, namely, edit distance on permutations (a.k.a. Ulam's metric). We are thus able to build on algorithms developed for the Ulam metric, whose much better algorithmic guarantees usually do not carry over to general edit distance.

    BibTeX:
    @inproceedings{AK08,
      author = {Alexandr Andoni and Robert Krauthgamer},
      title = {The Smoothed Complexity of Edit Distance},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {357--369},
      doi = {http://dx.doi.org/10.1007/978-3-540-70575-8_30}
    }
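    For contrast with the smoothed-case approximation guarantees above, the exact worst-case algorithm is the classic quadratic dynamic program for edit distance; a compact sketch:

    ```python
    # Classic O(len(s) * len(t)) Levenshtein edit distance, kept to two rows.

    def edit_distance(s, t):
        m, n = len(s), len(t)
        prev = list(range(n + 1))      # distances from "" to prefixes of t
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,        # delete s[i-1]
                             cur[j - 1] + 1,     # insert t[j-1]
                             prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
            prev = cur
        return prev[n]
    ```

    It is this quadratic barrier (in the worst case) that motivates approximation and smoothed-analysis results like the paper's.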
    
    Antimirov, V. Partial Derivatives of Regular Expressions and Finite Automata Constructions 1996 Theoretical Computer Science
    Vol. 155, pp. 291-319 
    article DOI  
    Abstract: We introduce a notion of partial derivative of a regular expression and apply it to finite automaton constructions. The notion is a generalization of the known notion of word derivative due to Brzozowski: partial derivatives are related to non-deterministic finite automata (NFA's) in the same natural way as derivatives are related to deterministic ones (DFA's). We give a constructive definition of partial derivatives and prove several facts, in particular:

    (1) any derivative of a regular expression r can be represented by a finite set of partial derivatives of r;

    (2) the set of all partial derivatives of r is finite and its cardinality is less than or equal to one plus the number of occurrences of letters from A appearing in r;

    (3) any partial derivative of r is either a regular unit, or a subterm of r, or a concatenation of several such subterms.

    These theoretical results lead us to a new algorithm for turning regular expressions into relatively small NFA's and allow us to provide certain improvements to Brzozowski's algorithm for constructing DFA's. We also report on a prototype implementation of our NFA construction and present several examples.

    BibTeX:
    @article{Antimirov96,
      author = {Valentin Antimirov},
      title = {Partial Derivatives of Regular Expressions and Finite Automata Constructions},
      journal = {Theoretical Computer Science},
      year = {1996},
      volume = {155},
      pages = {291--319},
      doi = {http://dx.doi.org/10.1016/0304-3975(95)00182-4}
    }
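    The constructive definition described above translates directly into an NFA-style matcher: each input symbol maps a set of expressions to the union of their partial derivatives, and a word is accepted when some resulting expression is nullable. A sketch over a tiny tuple-encoded regex AST (the encoding is invented for illustration):

    ```python
    # Antimirov-style partial derivatives over a minimal regex AST.
    # Nodes: ('eps',), ('sym', a), ('cat', r, s), ('alt', r, s), ('star', r).

    EPS = ("eps",)

    def nullable(r):
        tag = r[0]
        if tag in ("eps", "star"):
            return True
        if tag == "sym":
            return False
        if tag == "alt":
            return nullable(r[1]) or nullable(r[2])
        return nullable(r[1]) and nullable(r[2])      # cat

    def cat(r, s):
        return s if r == EPS else ("cat", r, s)

    def pd(r, a):
        """Set of partial derivatives of r with respect to symbol a."""
        tag = r[0]
        if tag == "sym":
            return {EPS} if r[1] == a else set()
        if tag == "alt":
            return pd(r[1], a) | pd(r[2], a)
        if tag == "star":
            return {cat(d, r) for d in pd(r[1], a)}
        if tag == "cat":
            out = {cat(d, r[2]) for d in pd(r[1], a)}
            if nullable(r[1]):
                out |= pd(r[2], a)
            return out
        return set()                                   # eps

    def matches(r, word):
        states = {r}                                   # NFA states = expressions
        for a in word:
            states = set().union(*[pd(s, a) for s in states])
        return any(nullable(s) for s in states)
    ```

    The finiteness result cited in the abstract (point 2) is what bounds the number of distinct expressions reachable this way, giving a small NFA.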
    
    Aoki, K. Abilities of Context Free Tree Grammar (ORIGINAL TITLE IN JAPANESE) 1983 IEICE Transactions on Information and Systems
    Vol. J66-D, pp. 1009-1014 
    article URL 
    BibTeX:
    @article{Aoki83,
      author = {Kyota Aoki},
      title = {Abilities of Context Free Tree Grammar (ORIGINAL TITLE IN JAPANESE)},
      journal = {IEICE Transactions on Information and Systems},
      year = {1983},
      volume = {J66-D},
      pages = {1009--1014},
      url = {http://search.ieice.org/bin/summary.php?id=j66-d_9_1009&category=D&year=1983&lang=J&abst=}
    }
    
    Aoki, K. & Matsuura, K. A Bottom-up Parsing Method for Complete Context-Free Tree Languages 1986 Systems and Computers in Japan
    Vol. 17, pp. 66-74 
    article DOI  
    BibTeX:
    @article{AM86,
      author = {Kyota Aoki and Kazumi Matsuura},
      title = {A Bottom-up Parsing Method for Complete Context-Free Tree Languages},
      journal = {Systems and Computers in Japan},
      year = {1986},
      volume = {17},
      pages = {66--74},
      doi = {http://dx.doi.org/10.1002/scj.4690170308}
    }
    
    Arenas, M., Barceló, P. & Libkin, L. Combining Temporal Logics for Querying XML Documents 2007 International Conference on Database Theory (ICDT)  inproceedings DOI  
    Abstract: Close relationships between XML navigation and temporal logics have been discovered recently, in particular between logics LTL and CTL* and XPath navigation, and between the μ-calculus and navigation based on regular expressions. This opened up the possibility of bringing model-checking techniques into the field of XML, as documents are naturally represented as labeled transition systems. Most known results of this kind, however, are limited to Boolean or unary queries, which are not always sufficient for complex querying tasks.

    Here we present a technique for combining temporal logics to capture n-ary XML queries expressible in two yardstick languages: FO and MSO. We show that by adding simple terms to the language, and combining a temporal logic for words together with a temporal logic for unary tree queries, one obtains logics that select arbitrary tuples of elements, and can thus be used as building blocks in complex query languages. We present general results on the expressiveness of such temporal logics, study their model-checking properties, and relate them to some common XML querying tasks.

    BibTeX:
    @inproceedings{ABL07,
      author = {Marcelo Arenas and Pablo Barceló and Leonid Libkin},
      title = {Combining Temporal Logics for Querying XML Documents},
      booktitle = {International Conference on Database Theory (ICDT)},
      year = {2007},
      doi = {http://dx.doi.org/10.1007/11965893_25}
    }
    
    Arnold, A. & Dauchet, M. Un Théorème de Duplication pour les Forêts Algébriques 1976 Journal of Computer and System Sciences
    Vol. 13, pp. 223-244 
    article DOI  
    Abstract: We characterize the algebraic forests all of whose trees are of the form $*(t, t)$. This characterization is used to show that the class of algebraic forests is not closed under non-linear homomorphisms, and that there exist generalized recognizable forests, in the sense of Maibaum, whose yield is not an algebraic forest.
    BibTeX:
    @article{AD76,
      author = {André Arnold and Max Dauchet},
      title = {Un Théorème de Duplication pour les Forêts Algébriques},
      journal = {Journal of Computer and System Sciences},
      year = {1976},
      volume = {13},
      pages = {223--244},
      doi = {http://dx.doi.org/10.1016/S0022-0000(76)80032-3}
    }
    
    Asai, K. & Kameyama, Y. Polymorphic Delimited Continuations 2007 Asian Symposium on Programming Languages and Systems (APLAS), pp. 239-254  inproceedings DOI URL 
    Abstract: This paper presents a polymorphic type system for a language with delimited control operators, shift and reset. Based on the monomorphic type system by Danvy and Filinski, the proposed type system allows pure expressions to be polymorphic. Thanks to the explicit presence of answer types, our type system satisfies various important properties, including strong type soundness, existence of principal types and an inference algorithm, and strong normalization. Relationship to CPS translation as well as extensions to impredicative polymorphism are also discussed. These technical results establish the foundation of polymorphic delimited continuations.
    BibTeX:
    @inproceedings{AK07,
      author = {Kenichi Asai and Yukiyoshi Kameyama},
      title = {Polymorphic Delimited Continuations},
      booktitle = {Asian Symposium on Programming Languages and Systems (APLAS)},
      year = {2007},
      pages = {239--254},
      url = {http://pllab.is.ocha.ac.jp/~asai/papers/papers.html},
      doi = {http://dx.doi.org/10.1007/978-3-540-76637-7_16}
    }
    
    Asveld, P.R. Time and Space Complexity of Inside-Out Macro Languages 1981 International Journal of Computer Mathematics
    Vol. 10, pp. 3-14 
    article URL 
    Abstract: Starting from Fischer's IO Standard Form Theorem we show that for each inside-out (or IO-) macro language $L$, there is a $\lambda$-free IO-macro grammar with the following property: for each $x$ in $L$, there is a derivation of $x$ of length at most linear in the length of $x$. Then we construct a nondeterministic log-space bounded auxiliary pushdown automaton which accepts $L$ in polynomial time. Therefore the IO-macro languages are (many-one) log-space reducible to the context-free languages. Consequently, the membership problem for IO-macro languages can be solved deterministically in polynomial time and in space $(\log n)^2$.
    BibTeX:
    @article{Asveld81,
      author = {Peter R.J. Asveld},
      title = {Time and Space Complexity of Inside-Out Macro Languages},
      journal = {International Journal of Computer Mathematics},
      year = {1981},
      volume = {10},
      pages = {3--14},
      url = {http://eprints.eemcs.utwente.nl/3663/}
    }
    
    Asveld, P.R. Abstract Grammars Based on Transductions 1991 Theoretical Computer Science
    Vol. 81, pp. 269-288 
    article DOI  
    Abstract: We study an abstract grammatical model in which the effect (or application) of a production -- determined by a so-called transduction -- plays the main part rather than the notion of production itself. Under appropriately chosen assumptions on the underlying family $T$ of transductions, we establish elementary, decidability, and complexity properties of the corresponding family $L(T)$ of languages generated by $T$-grammars. These results are special instances of slightly more general properties of so-called $\Gamma$-controlled $T$-grammars, since regular control does not increase the generating power of $T$-grammars. In a $\Gamma$-controlled $T$-grammar we restrict the iteration of $T$-transductions to those sequences of transductions that belong to a given control language, taken from a family $\Gamma$ of control languages.
    BibTeX:
    @article{Asveld91,
      author = {Peter R.J. Asveld},
      title = {Abstract Grammars Based on Transductions},
      journal = {Theoretical Computer Science},
      year = {1991},
      volume = {81},
      pages = {269--288},
      doi = {http://dx.doi.org/10.1016/0304-3975(91)90195-8}
    }
    
    Asveld, P.R. Fuzzy Context-Free Languages -- Part 1: Generalized Fuzzy Context-Free Grammars 2005 Theoretical Computer Science
    Vol. 347, pp. 167-190 
    article DOI  
    Abstract: Motivated by aspects of robustness in parsing a context-free language, we study generalized fuzzy context-free grammars. These fuzzy context-free K-grammars provide a general framework to describe correctly as well as erroneously derived sentences by a single generating mechanism. They model the situation of making a finite choice out of an infinity of possible grammatical errors during each context-free derivation step. Formally, a fuzzy context-free K-grammar is a fuzzy context-free grammar with a countable rather than a finite number of rules satisfying the following condition: for each symbol $\alpha$, the set containing all right-hand sides of rules with left-hand side equal to $\alpha$ forms a fuzzy language that belongs to a given family K of fuzzy languages. We investigate the generating power of fuzzy context-free K-grammars, and we show that under minor assumptions on the parameter K, the family of languages generated by fuzzy context-free K-grammars possesses closure properties very similar to those of the family of ordinary context-free languages.
    BibTeX:
    @article{Asveld05,
      author = {Peter R.J. Asveld},
      title = {Fuzzy Context-Free Languages -- Part 1: Generalized Fuzzy Context-Free Grammars},
      journal = {Theoretical Computer Science},
      year = {2005},
      volume = {347},
      pages = {167--190},
      doi = {http://dx.doi.org/10.1016/j.tcs.2005.06.012}
    }
    
    Asveld, P.R. Fuzzy Context-Free Languages -- Part 2: Recognition and Parsing Algorithms 2005 Theoretical Computer Science
    Vol. 347, pp. 191-213 
    article DOI  
    Abstract: In a companion paper [P.R.J. Asveld, Fuzzy context-free languages -- Part 1: Generalized fuzzy context-free grammars, Theoret. Comput. Sci., (2005).] we used fuzzy context-free grammars in order to model grammatical errors resulting in erroneous inputs for robust recognizing and parsing algorithms for fuzzy context-free languages. In particular, this approach enables us to distinguish between small errors (``tiny mistakes'') and big errors (``capital blunders'').

    In this paper, we present some algorithms to recognize fuzzy context-free languages: particularly, a modification of Cocke-Younger-Kasami's algorithm and some recursive descent algorithms. Then we extend these recognition algorithms to corresponding parsing algorithms for fuzzy context-free languages. These parsing algorithms happen to be robust in some very elementary sense.

    BibTeX:
    @article{Asveld05a,
      author = {Peter R.J. Asveld},
      title = {Fuzzy Context-Free Languages -- Part 2: Recognition and Parsing Algorithms},
      journal = {Theoretical Computer Science},
      year = {2005},
      volume = {347},
      pages = {191--213},
      doi = {http://dx.doi.org/10.1016/j.tcs.2005.06.013}
    }
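    The recognition algorithms above modify Cocke-Younger-Kasami's algorithm. For reference, a sketch of the plain (non-fuzzy) CYK recognizer for a grammar in Chomsky normal form, with an invented toy rule encoding; the test grammar below generates {a^n b^n}:

    ```python
    # Plain CYK recognition for a CNF grammar.
    # binary:   list of (A, (B, C)) rules; terminal: list of (A, a) rules.

    def cyk(word, binary, terminal, start):
        n = len(word)
        if n == 0:
            return False               # CNF here has no empty-word rule
        # table[i][span]: nonterminals deriving word[i : i + span]
        table = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, a in enumerate(word):
            table[i][1] = {A for (A, t) in terminal if t == a}
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                for k in range(1, span):               # split point
                    for (A, (B, C)) in binary:
                        if B in table[i][k] and C in table[i + k][span - k]:
                            table[i][span].add(A)
        return start in table[0][n]
    ```

    The fuzzy variant in the paper replaces the sets in the table by membership degrees, so the same cubic-time skeleton carries over.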
    
    Asveld, P.R. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form 2006 Theoretical Computer Science
    Vol. 354, pp. 118-130 
    article DOI  
    Abstract: Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n \geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n \geq 1}$, satisfying $L(G_n)=L_n$ for $n \geq 1$, with respect to their descriptional complexity, i.e. we determine the number of nonterminal symbols and the number of production rules of $G_n$ as functions of $n$.
    BibTeX:
    @article{Asveld06,
      author = {Peter R.J. Asveld},
      title = {Generating All Permutations by Context-Free Grammars in Chomsky Normal Form},
      journal = {Theoretical Computer Science},
      year = {2006},
      volume = {354},
      pages = {118--130},
      doi = {http://dx.doi.org/10.1016/j.tcs.2005.11.010}
    }
    
    Autebert, J., Berstel, J. & Boasson, L. Context-Free Languages and Push-Down Automata 1997 Handbook of Formal Languages, Vol 1: Word, Language, Grammar, pp. 111-174  incollection  
    BibTeX:
    @incollection{ABB97,
      author = {Jean-Michel Autebert and Jean Berstel and Luc Boasson},
      title = {Context-Free Languages and Push-Down Automata},
      booktitle = {Handbook of Formal Languages, Vol 1: Word, Language, Grammar},
      publisher = {Springer-Verlag},
      year = {1997},
      pages = {111--174}
    }
    
    Azuma, A. & Matsumoto, Y. A Generalization of Forward-Backward Algorithm for Sequential Labeling 2009 IPSJ SIG Technical Report, pp. 103-110  inproceedings URL 
    BibTeX:
    @inproceedings{AM09,
      author = {Ai Azuma and Yuji Matsumoto},
      title = {A Generalization of Forward-Backward Algorithm for Sequential Labeling},
      booktitle = {IPSJ SIG Technical Report},
      year = {2009},
      pages = {103--110},
      url = {http://cl.naist.jp/~ai-a/}
    }
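    The standard forward-backward algorithm that this paper generalizes computes per-position posterior label marginals on a chain. A minimal sketch for unnormalized potentials (the score tables in the test are made-up illustrative numbers, not from the paper):

    ```python
    # Forward-backward on a label chain: alpha sums over prefixes, beta over
    # suffixes; their product, normalized by Z, gives posterior marginals.

    def forward_backward(unary, trans):
        """unary[t][y]: score of label y at position t; trans[y][y2]: transition
        score. Returns marginals[t][y] = P(y_t = y | input)."""
        T, K = len(unary), len(unary[0])
        alpha = [unary[0][:]]                          # forward scores
        for t in range(1, T):
            alpha.append([unary[t][y] * sum(alpha[t - 1][yp] * trans[yp][y]
                                            for yp in range(K))
                          for y in range(K)])
        beta = [[1.0] * K for _ in range(T)]           # backward scores
        for t in range(T - 2, -1, -1):
            beta[t] = [sum(trans[y][yn] * unary[t + 1][yn] * beta[t + 1][yn]
                           for yn in range(K))
                       for y in range(K)]
        Z = sum(alpha[-1])                             # partition function
        return [[alpha[t][y] * beta[t][y] / Z for y in range(K)]
                for t in range(T)]
    ```

    Each position's marginals sum to one, since alpha[t][y] * beta[t][y] totals Z for every t.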
    
    Böhm, C. & Jacopini, G. Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules 1966 Communications of the ACM
    Vol. 9, pp. 366-371 
    article DOI  
    BibTeX:
    @article{BJ66,
      author = {Corrado Böhm and Giuseppe Jacopini},
      title = {Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules},
      journal = {Communications of the ACM},
      year = {1966},
      volume = {9},
      pages = {366--371},
      doi = {http://doi.acm.org/10.1145/355592.365646}
    }
    
    Badouel, E. & Tchendji, M.T. Merging Hierarchically-Structured Documents in Workflow Systems 2008 Coalgebraic Methods in Computer Science (CMCS), pp. 3-24  inproceedings DOI  
    Abstract: We consider the manipulation of hierarchically-structured documents within a complex workflow system. Such a system may consist of several subsystems distributed over a computer network. These subsystems can concurrently update partial views of the document. At some points in time we need to reconcile the various local updates by merging the partial views into a coherent global document. For that purpose, we represent the potentially-infinite set of documents compatible with a given partial view as a coinductive data structure. This set is a regular set of trees that can be obtained as the image of the partial view of the document by the canonical morphism (anamorphism) associated with a coalgebra (some kind of tree automaton). Merging partial views then amounts to computing the intersection of the corresponding regular sets of trees which can be obtained using a synchronization operation on coalgebras.
    BibTeX:
    @inproceedings{BT08,
      author = {Eric Badouel and Maurice Tchoupé Tchendji},
      title = {Merging Hierarchically-Structured Documents in Workflow Systems},
      booktitle = {Coalgebraic Methods in Computer Science (CMCS)},
      year = {2008},
      pages = {3--24},
      doi = {http://dx.doi.org/10.1016/j.entcs.2008.05.017}
    }
    
    Bagan, G. MSO Queries on Tree Decomposable Structures Are Computable with Linear Delay 2006 Computer Science Logic (CSL), pp. 167-181  inproceedings DOI  
    Abstract: Linear-Delay_lin is the class of enumeration problems computable in two steps: the first step is a precomputation in linear time in the size of the input and the second step computes successively all the solutions with a delay between two consecutive solutions y1 and y2 that is linear in |y2|. We prove that evaluating a fixed monadic second order (MSO) query (i.e. computing all the tuples that satisfy the MSO formula) in a binary tree is a Linear-Delay_lin problem. More precisely, we show that given a binary tree T and a tree automaton $A$ representing an MSO query $\varphi(\bar{X})$, we can evaluate $\varphi$ on T with a preprocessing in time and space complexity $O(|A|^3|T|)$ and an enumeration phase with a delay O(|S|) and space O(max|S|) where |S| is the size of the next solution and max|S| is the size of the largest solution. We introduce a new kind of algorithm with nice complexity properties for some algebraic operations on enumeration problems. In addition, we extend the precomputation (with the same complexity) such that the ith (with respect to a certain order) solution S is produced directly in time O(|S|log(|T|)). Finally, we generalize these results to bounded treewidth structures.
    BibTeX:
    @inproceedings{Bagan06,
      author = {Guillaume Bagan},
      title = {MSO Queries on Tree Decomposable Structures Are Computable with Linear Delay},
      booktitle = {Computer Science Logic (CSL)},
      year = {2006},
      pages = {167--181},
      doi = {http://dx.doi.org/10.1007/11874683_11}
    }
    
    Baker, B.S. Tree Transductions and Families of Tree Languages 1973 ACM Symposium on Theory of Computing (STOC), pp. 200-206  inproceedings DOI  
    Abstract: Interest in the study of sets of trees, tree languages, has led to the definition of finite automata which accept trees [2,11] and transducers which map trees into other trees [7,9,10]. These generalized machines may read trees either ``top-down'' (from the root toward the leaves) or ``bottom-up'' (from the leaves toward the root). Here it is shown that both the class of top-down transductions and the class of bottom-up transductions can be characterized in terms of two restricted classes of tree transductions. From these characterizations, it is shown that the composition of any n bottom-up transductions can be realized by the composition of n+1 top-down transductions, and similarly, the composition of any n top-down transductions can be realized by the composition of n+1 bottom-up transductions. Next, we study the families of tree languages which can be obtained from the recognizable sets (sets accepted by finite tree automata) by the composition of n top-down or bottom-up transductions, n>0. The yield operation, which concatenates the leaves of a tree from left to right to form a string, is used to obtain a hierarchy of families of string languages from the hierarchy of families of tree languages. It is shown that each family of string languages in this hierarchy is properly contained in the family of context-sensitive languages.
    BibTeX:
    @inproceedings{Baker73,
      author = {Brenda S. Baker},
      title = {Tree Transductions and Families of Tree Languages},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {1973},
      pages = {200--206},
      doi = {http://doi.acm.org/10.1145/800125.804051}
    }
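A top-down tree transduction of the kind described above can be sketched concretely. This is a hypothetical minimal encoding (trees as nested tuples, rules as a dict with ('call', state, child_index) placeholders), not Baker's formalism:

```python
# Minimal deterministic top-down tree transducer sketch.
# Rules map (state, symbol) to an output template; a template leaf of the
# form ('call', next_state, child_index) recurses into that subtree.

def transduce(rules, state, tree):
    sym, children = tree[0], tree[1:]
    template = rules[(state, sym)]

    def build(t):
        if isinstance(t, tuple) and t and t[0] == 'call':
            _, q, i = t
            return transduce(rules, q, children[i])
        # an output node: keep its symbol, build its children
        return (t[0],) + tuple(build(x) for x in t[1:])

    return build(template)

# Example: in state 'q', swap the children of every binary 'f' node.
RULES = {('q', 'f'): ('f', ('call', 'q', 1), ('call', 'q', 0)),
         ('q', 'a'): ('a',),
         ('q', 'b'): ('b',)}
```

Running `transduce(RULES, 'q', tree)` mirrors the tree, reading it from the root toward the leaves as the abstract describes.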
    
    Baker, B.S. Non-Context-Free Grammars Generating Context-Free Languages 1974 Information and Control
    Vol. 24, pp. 231-246 
    article DOI  
    Abstract: If G is a grammar such that in each non-context-free rule of G, the right side contains a string of terminals longer than any terminal string appearing between two nonterminals in the left side, then the language generated by G is context free. Six previous results follow as corollaries of this theorem.
    BibTeX:
    @article{Baker74,
      author = {Brenda S. Baker},
      title = {Non-Context-Free Grammars Generating Context-Free Languages},
      journal = {Information and Control},
      year = {1974},
      volume = {24},
      pages = {231--246},
      doi = {http://dx.doi.org/10.1016/S0019-9958(74)80038-0}
    }
    
    Baker, B.S. Tree Transducers and Tree Languages 1978 Information and Control
    Vol. 37, pp. 241-266 
    article DOI  
    Abstract: Tree transducers (automata which read finite labeled trees and output finite labeled trees) are used to define a hierarchy of families of ``tree languages'' (sets of trees). In this hierarchy, families generated by ``top-down'' tree transducers (which read trees from the root toward the leaves) alternate with families generated by ``bottom-up'' tree transducers (which read trees from the leaves toward the root). A hierarchy of families of string languages is obtained from the first hierarchy by the ``yield'' operation (concatenating the labels of the leaves of the trees). Both hierarchies are conjectured to be infinite, and some results are presented concerning this conjecture. A study is made of the closure properties of the top-down and bottom-up families in the hierarchies under various tree and string operations. The families are shown to be closed under certain operations if and only if the hierarchies are finite.
    BibTeX:
    @article{Baker78,
      author = {Brenda S. Baker},
      title = {Tree Transducers and Tree Languages},
      journal = {Information and Control},
      year = {1978},
      volume = {37},
      pages = {241--266},
      doi = {http://dx.doi.org/10.1016/S0019-9958(78)90538-7}
    }
    
    Baker, B.S. Generalized Syntax Directed Translation, Tree Transducers, and Linear Space 1978 SIAM Journal on Computing
    Vol. 7, pp. 376-391 
    article  
    BibTeX:
    @article{Baker78a,
      author = {Brenda S. Baker},
      title = {Generalized Syntax Directed Translation, Tree Transducers, and Linear Space},
      journal = {SIAM Journal on Computing},
      year = {1978},
      volume = {7},
      pages = {376--391}
    }
    
    Baker, B.S. Composition of Top-Down and Bottom-Up Tree Transductions 1979 Information and Control
    Vol. 41, pp. 186-213 
    article DOI  
    Abstract: Two classes of tree transducers (finite-state automata which read trees and output trees) are studied: ``top-down'' transducers, which read trees from the root toward the leaves, and ``bottom-up'' transducers, which read trees from the leaves toward the root. The closure properties of each class under composition are investigated, and some decomposition theorems are presented. It is shown that the two classes can be decomposed into the same two simple classes of transductions. Consequently, for every n, the composition of n transductions in one direction can always be realized by the composition of n + 1 transductions in the other direction. Finally, some results are presented concerning the role of the finite-state control in tree transductions.
    BibTeX:
    @article{Baker79,
      author = {Brenda S. Baker},
      title = {Composition of Top-Down and Bottom-Up Tree Transductions},
      journal = {Information and Control},
      year = {1979},
      volume = {41},
      pages = {186--213},
      doi = {http://dx.doi.org/10.1016/S0019-9958(79)90561-8}
    }
    
    Bar-Yossef, Z., Fontoura, M. & Josifovski, V. Buffering in Query Evaluation over XML Streams 2005 Principles of Database Systems (PODS), pp. 216-227  inproceedings DOI  
    Abstract: All known algorithms for evaluating advanced XPath queries (e.g., ones with predicates or with closure axes) on XML streams employ buffers to temporarily store fragments of the document stream. In many cases, these buffers grow very large and constitute a major memory bottleneck. In this paper, we identify two broad classes of evaluation problems that independently necessitate the use of large memory buffers in evaluation of queries over XML streams: (1) full-fledged evaluation (as opposed to just filtering) of queries with predicates; (2) evaluation (whether full-fledged or filtering) of queries with "multi-variate" predicates. We prove quantitative lower bounds on the amount of memory required in each of these scenarios. The bounds are stated in terms of novel document properties that we define. We show that these scenarios, in combination with query evaluation over recursive documents, cover the cases in which large buffers are required. Finally, we present algorithms that match the lower bounds for an important fragment of XPath.
    BibTeX:
    @inproceedings{BFJ05,
      author = {Ziv Bar-Yossef and Marcus Fontoura and Vanja Josifovski},
      title = {Buffering in Query Evaluation over XML Streams},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2005},
      pages = {216--227},
      doi = {http://doi.acm.org/10.1145/1065167.1065195}
    }
    
    Bar-Yossef, Z., Fontoura, M. & Josifovski, V. On the Memory Requirements of XPath Evaluation over XML Streams 2007 Journal of Computer and System Sciences
    Vol. 73, pp. 391-441 
    article DOI  
    Abstract: The important challenge of evaluating XPath queries over XML streams has sparked much interest in the past few years. A number of algorithms have been proposed, supporting wider fragments of the query language, and exhibiting better performance and memory utilization. Nevertheless, all the algorithms known to date use a prohibitively large amount of memory for certain types of queries. A natural question then is whether this memory bottleneck is inherent or just an artifact of the proposed algorithms.

    In this paper we initiate the first systematic and theoretical study of lower bounds on the amount of memory required to evaluate XPath queries over XML streams. We present a general lower bound technique, which given a query, specifies the minimum amount of memory that any algorithm evaluating the query on a stream would need to incur. The lower bounds are stated in terms of new graph-theoretic properties of queries. The proofs are based on tools from communication complexity.

    We then exploit insights learned from the lower bounds to obtain a new algorithm for XPath evaluation on streams. The algorithm uses space close to the optimum. Our algorithm deviates from the standard paradigm of using automata or transducers, thereby avoiding the need to store large transition tables.

    BibTeX:
    @article{BFJ07,
      author = {Ziv Bar-Yossef and Marcus Fontoura and Vanja Josifovski},
      title = {On the Memory Requirements of XPath Evaluation over XML Streams},
      journal = {Journal of Computer and System Sciences},
      year = {2007},
      volume = {73},
      pages = {391--441},
      doi = {http://dx.doi.org/10.1016/j.jcss.2006.10.002}
    }
    
    Barmpalias, G., Lewis, A.E. & Stephan, F. $\Pi^0_1$ Classes, LR Degrees and Turing Degrees 2008 Annals of Pure and Applied Logic
    Vol. 156, pp. 21-38 
    article DOI  
    Abstract: We say that $A \le_{LR} B$ if every B-random set is A-random with respect to Martin-Löf randomness. We study this relation and its interactions with Turing reducibility, $\Pi^0_1$ classes, hyperimmunity and other recursion theoretic notions.
    BibTeX:
    @article{BLS08,
      author = {George Barmpalias and Andrew E.M. Lewis and Frank Stephan},
      title = {$\Pi^0_1$ Classes, LR Degrees and Turing Degrees},
      journal = {Annals of Pure and Applied Logic},
      year = {2008},
      volume = {156},
      pages = {21--38},
      doi = {http://dx.doi.org/10.1016/j.apal.2008.06.004}
    }
    
    Barnes, B.H. A Two-Way Automaton with Fewer States than Any Equivalent One-Way Automaton 1971 IEEE Transactions on Computers
    Vol. 20, pp. 474-475 
    article DOI  
    Abstract: This correspondence presents an example of a two-way automaton which has significantly fewer states than any one-way automaton accepting the same set of tapes. Thus, in this particular case, memory space can be saved by using a two-way automaton. This savings in space, however, is accompanied by an increase in recognition time.
    Review: The language L = { w | ∀ i ≤ n. σ_i ∈ w } is recognizable by a two-way automaton with O(n) states, but a one-way automaton requires O(2^n) states.
    BibTeX:
    @article{Barnes71,
      author = {Bruce H. Barnes},
      title = {A Two-Way Automaton with Fewer States than Any Equivalent One-Way Automaton},
      journal = {IEEE Transactions on Computers},
      year = {1971},
      volume = {20},
      pages = {474--475},
      doi = {http://dx.doi.org/10.1109/T-C.1971.223273}
    }
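The state trade-off noted in the review can be made concrete with a short sketch (the integer encoding of letters and the function names are our illustration):

```python
# L_n = { w | every letter in {0, ..., n-1} occurs in w }.
# A one-way DFA must remember the set of letters seen so far,
# so it needs 2**n states (one per subset).

def one_way_states(n):
    return 2 ** n

# A two-way automaton instead makes n left-to-right passes, pass i
# looking only for letter i and rewinding between passes: O(n) states,
# at the cost of recognition time.

def two_way_accepts(w, n):
    return all(any(c == i for c in w) for i in range(n))
```

This is exactly the space-for-time trade-off the correspondence describes.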
    
    Berlea, A. Online Evaluation of Regular Tree Queries 2006 Nordic Journal of Computing
    Vol. 13, pp. 240-265 
    article URL 
    Abstract: Regular tree queries (RTQs) are a class of queries considered especially relevant for the expressiveness and evaluation of XML query languages. The algorithms proposed so far for evaluating queries online, while scanning the input data rather than by explicitly building the tree representation of the input beforehand, only cover restricted subsets of RTQs. In contrast, we introduce here an efficient algorithm for the online evaluation of unrestricted RTQs. We prove our algorithm is optimal in the sense that it finds matches at the earliest possible time for the query and the input document at hand. The time complexity of the algorithm is quadratic in the input size in the worst case and linear in many practical cases. Preliminary experimental evaluations of our practical implementation are very encouraging.

    BibTeX:
    @article{Berlea06,
      author = {Alexandru Berlea},
      title = {Online Evaluation of Regular Tree Queries},
      journal = {Nordic Journal of Computing},
      year = {2006},
      volume = {13},
      pages = {240--265},
      url = {http://www2.in.tum.de/~berlea/publications.html}
    }
    
    Berlea, A. & Seidl, H. Binary Queries for Document Trees 2004 Nordic Journal of Computing
    Vol. 11, pp. 41-71 
    article URL 
    Abstract: Motivated by XML applications, we address the problem of answering k-ary queries, i.e. simultaneously locating k nodes of an input tree as specified by a given relation. In particular, we discuss how binary queries can be used as a means of navigation in XML document transformations. We introduce a grammar-based approach to specifying k-ary queries. An efficient tree-automata based implementation of unary queries is reviewed and the extensions needed in order to implement k-ary queries are presented. In particular, an efficient solution for the evaluation of binary queries is provided and proven correct. We introduce fxgrep, a practical implementation of unary and binary queries for XML. By means of fxgrep and of the fxt XML transformation language we suggest how binary queries can be used in order to increase expressivity of rule-based transformations. We compare our work with other querying languages and discuss how our ideas can be used for other existing settings.
    BibTeX:
    @article{BS04,
      author = {Alexandru Berlea and Helmut Seidl},
      title = {Binary Queries for Document Trees},
      journal = {Nordic Journal of Computing},
      year = {2004},
      volume = {11},
      pages = {41--71},
      url = {http://www2.in.tum.de/~berlea/publications.html}
    }
    
    Bernardy, J., Jansson, P., Zalewski, M., Schupp, S. & Priesnitz, A. A Comparison of C++ Concepts and Haskell Type Classes 2008 Workshop on Generic Programming (WGP)  inproceedings URL 
    Abstract: Earlier studies have introduced a list of high-level evaluation criteria to assess how well a language supports generic programming. Since each language that meets all criteria is considered generic, those criteria are not fine-grained enough to differentiate between two languages for generic programming. We refine these criteria into a taxonomy that captures differences between type classes in Haskell and concepts in C++, and discuss which differences are incidental and which ones are due to other language features. The taxonomy allows for an improved understanding of language support for generic programming, and the comparison is useful for the ongoing discussions among language designers and users of both languages.
    BibTeX:
    @inproceedings{BJZSP08,
      author = {Jean-Philippe Bernardy and Patrik Jansson and Marcin Zalewski and Sibylle Schupp and Andreas Priesnitz},
      title = {A Comparison of C++ Concepts and Haskell Type Classes},
      booktitle = {Workshop on Generic Programming (WGP)},
      year = {2008},
      url = {http://publications.lib.chalmers.se/cpl/record/index.xsql?pubid=72479}
    }
    
    Berstel, J. Transductions and Context-Free Languages 1979   book URL 
    BibTeX:
    @book{Berstel79,
      author = {Jean Berstel},
      title = {Transductions and Context-Free Languages},
      publisher = {Teubner Studienbücher},
      year = {1979},
      url = {http://www-igm.univ-mlv.fr/~berstel/LivreTransductions/LivreTransductions.html}
    }
    
    Bezáková, I. & Pál, M. Planar Finite Automata 1999 Student Science Conference, Slovakia  inproceedings URL 
    Abstract: Several aspects of graph representations of finite automata have been studied in the literature, e.g., the complexity considerations involving the number of states. It is natural to ask whether we indeed need the full generality of possible interconnections among the states provided by the general definition of finite automata. We study the influence of restricting the interconnection network to planar graphs.

    We introduce planar finite automata as a type of automata whose graph representation is planar. We study families of languages defined by deterministic, nondeterministic, and epsilon-free nondeterministic planar finite automata and compare them to R, the family of regular languages. We show that the deterministic planar automata are weaker than finite automata, except for the case of a one-letter alphabet. We also prove that R is equivalent to the family of languages accepted by (epsilon-free) nondeterministic planar automata.

    In addition, we study the influence of restricting the graph representation to a particular architecture studied in parallel and distributed computing, the d-dimensional mesh. We show that for any k there is a language which cannot be accepted by a mesh k-epsilon-bounded nondeterministic finite automaton, i.e., an automaton whose graph representation can be embedded into a mesh and its maximal number of epsilon moves is bounded by k.

    BibTeX:
    @inproceedings{BP99,
      author = {Ivona Bezáková and Martin Pál},
      title = {Planar Finite Automata},
      booktitle = {Student Science Conference, Slovakia},
      year = {1999},
      url = {http://www.cs.rit.edu/~ib/publications.html}
    }
    
    Birman, A. & Ullman, J.D. Parsing Algorithms with Backtrack 1973 Information and Control
    Vol. 23, pp. 1-34 
    article DOI  
    Abstract: Two classes of restricted top-down parsing algorithms modeling "recursive descent" are considered. We show that the smaller class recognizes all deterministic context free languages, and that both classes can be simulated in linear time on a random access machine. Certain generalizations of these parsing algorithms are shown equivalent to the larger class. Finally, it is shown that the larger class has the property that loops and other "failures" can always be eliminated.
    BibTeX:
    @article{BU73,
      author = {Alexander Birman and Jeffrey D. Ullman},
      title = {Parsing Algorithms with Backtrack},
      journal = {Information and Control},
      year = {1973},
      volume = {23},
      pages = {1--34},
      doi = {http://dx.doi.org/10.1016/S0019-9958(73)90851-6}
    }
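The "recursive descent with backtrack" model, together with its linear-time simulation on a random access machine, is close in spirit to memoized (packrat-style) parsing. A hypothetical minimal sketch for the PEG-like grammar S <- 'a' S 'b' / 'ab' (the grammar and all names are our illustration, not Birman and Ullman's notation):

```python
from functools import lru_cache

def parse(s):
    """Recognize { a^n b^n | n >= 1 } by ordered choice with backtracking.
    Memoizing each (rule, position) call gives the linear-time bound."""
    @lru_cache(maxsize=None)
    def S(i):
        # alternative 1: 'a' S 'b'
        if i < len(s) and s[i] == 'a':
            j = S(i + 1)
            if j is not None and j < len(s) and s[j] == 'b':
                return j + 1
        # alternative 2 (tried only if alternative 1 fails): literal 'ab'
        if s[i:i + 2] == 'ab':
            return i + 2
        return None  # failure; the caller backtracks
    return S(0) == len(s)
```

The ordered choice is the restriction that models deterministic recursive descent: once an alternative succeeds at a position, later alternatives are never retried.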
    
    Bjørner, N.S. Minimal Typing Derivations 1994 Workshop on ML and its Applications, pp. 120-126  inproceedings  
    Abstract: We present an algorithm which finds typing derivations for ML typeable expressions, such that polymorphic abstraction is minimized where possible. Consequently, unnecessary boxing or code duplication can be avoided, allowing more efficient ML implementations.
    BibTeX:
    @inproceedings{Bjoerner94,
      author = {Nikolaj Skallerud Bjørner},
      title = {Minimal Typing Derivations},
      booktitle = {Workshop on ML and its Applications},
      year = {1994},
      pages = {120--126}
    }
    
    Bloem, R. & Engelfriet, J. A Comparison of Tree Transductions Defined by Monadic Second Order Logic and by Attribute Grammars 2000 Journal of Computer and System Sciences
    Vol. 61, pp. 1-50 
    article DOI URL 
    BibTeX:
    @article{BE00,
      author = {Roderick Bloem and Joost Engelfriet},
      title = {A Comparison of Tree Transductions Defined by Monadic Second Order Logic and by Attribute Grammars},
      journal = {Journal of Computer and System Sciences},
      year = {2000},
      volume = {61},
      pages = {1--50},
      url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.8268},
      doi = {http://dx.doi.org/10.1006/jcss.1999.1684}
    }
    
    Blum, M. A Machine-Independent Theory of the Complexity of Recursive Functions 1967 Journal of the ACM
    Vol. 14, pp. 322-336 
    article DOI  
    Abstract: The number of steps required to compute a function depends, in general, on the type of computer that is used, on the choice of computer program, and on the input-output code. Nevertheless, the results obtained in this paper are so general as to be nearly independent of these considerations. A function is exhibited that requires an enormous number of steps to be computed, yet has a ``nearly quickest'' program: Any other program for this function, no matter how ingeniously designed it may be, takes practically as many steps as this nearly quickest program. A different function is exhibited with the property that no matter how fast a program may be for computing this function another program exists for computing the function very much faster.
    BibTeX:
    @article{Blum67,
      author = {Manuel Blum},
      title = {A Machine-Independent Theory of the Complexity of Recursive Functions},
      journal = {Journal of the ACM},
      year = {1967},
      volume = {14},
      pages = {322--336},
      doi = {http://doi.acm.org/10.1145/321386.321395}
    }
    
    Bogaert, B. & Tison, S. Equality and Disequality Constraints on Direct Subterms in Tree Automata 1992 Symposium on Theoretical Aspects of Computer Science (STACS), pp. 161-171  inproceedings DOI  
    Abstract: We define an extension of tree automata by adding some tests in the rules. The goal is to handle non linearity. We obtain a family which has good closure and decidability properties and we give some applications.
    BibTeX:
    @inproceedings{BT92,
      author = {Bruno Bogaert and Sophie Tison},
      title = {Equality and Disequality Constraints on Direct Subterms in Tree Automata},
      booktitle = {Symposium on Theoretical Aspects of Computer Science (STACS)},
      year = {1992},
      pages = {161--171},
      doi = {http://dx.doi.org/10.1007/3-540-55210-3_181}
    }
    
    Boigelot, B., Brusten, J. & Bruyère, V. On the Sets of Real Numbers Recognized by Finite Automata in Multiple Bases 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 112-123  inproceedings DOI  
    Abstract: This paper studies the expressive power of finite automata recognizing sets of real numbers encoded in positional notation. We consider Muller automata as well as the restricted class of weak deterministic automata, used as symbolic set representations in actual applications. In previous work, it has been established that the sets of numbers that are recognizable by weak deterministic automata in two bases that do not share the same set of prime factors are exactly those that are definable in the first order additive theory of real and integer numbers (R,Z,+,<). This result extends Cobham's theorem, which characterizes the sets of integer numbers that are recognizable by finite automata in multiple bases.

    In this paper, we first generalize this result to multiplicatively independent bases, which brings it closer to the original statement of Cobham's theorem. Then, we study the sets of reals recognizable by Muller automata in two bases. We show with a counterexample that, in this setting, Cobham's theorem does not generalize to multiplicatively independent bases. Finally, we prove that the sets of reals that are recognizable by Muller automata in two bases that do not share the same set of prime factors are exactly those definable in (R,Z,+,<). These sets are thus also recognizable by weak deterministic automata. This result leads to a precise characterization of the sets of real numbers that are recognizable in multiple bases, and provides a theoretical justification to the use of weak automata as symbolic representations of sets.

    BibTeX:
    @inproceedings{BBB08,
      author = {Bernard Boigelot and Julien Brusten and Véronique Bruyère},
      title = {On the Sets of Real Numbers Recognized by Finite Automata in Multiple Bases},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {112--123},
      doi = {http://dx.doi.org/10.1007/978-3-540-70583-3_10}
    }
    
    Bojańczyk, M. & Colcombet, T. Tree-Walking Automata Do Not Recognize All Regular Languages 2005 ACM Symposium on Theory of Computing (STOC), pp. 234-243  inproceedings DOI  
    Abstract: Tree-walking automata are a natural sequential model for recognizing tree languages. Every tree language recognized by a tree-walking automaton is regular. In this paper, we present a tree language which is regular but not recognized by any (nondeterministic) tree-walking automaton. This settles a conjecture of Engelfriet, Hoogeboom and Van Best. Moreover, the separating tree language is definable already in first-order logic over a signature containing the left-son, right-son and ancestor relations.
    BibTeX:
    @inproceedings{BC05,
      author = {Mikolaj Bojańczyk and Thomas Colcombet},
      title = {Tree-Walking Automata Do Not Recognize All Regular Languages},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {2005},
      pages = {234--243},
      doi = {http://dx.doi.org/10.1145/1060590.1060626}
    }
    
    Bojańczyk, M. & Colcombet, T. Tree-walking Automata Cannot be Determinized 2006 Theoretical Computer Science
    Vol. 350, pp. 164-173 
    article DOI  
    Abstract: Tree-walking automata are a natural sequential model for recognizing languages of finite trees. Such automata walk around the tree and may decide in the end to accept it. It is shown that deterministic tree-walking automata are weaker than nondeterministic tree-walking automata.
    BibTeX:
    @article{BC06,
      author = {Mikolaj Bojańczyk and Thomas Colcombet},
      title = {Tree-walking Automata Cannot be Determinized},
      journal = {Theoretical Computer Science},
      year = {2006},
      volume = {350},
      pages = {164--173},
      doi = {http://dx.doi.org/10.1016/j.tcs.2005.10.031}
    }
    
    Bojańczyk, M. & Parys, P. XPath Evaluation in Linear Time 2008 Principles of Database Systems (PODS), pp. 241-250  inproceedings DOI  
    Abstract: We consider a fragment of XPath where attribute values can only be tested for equality. We show that for any fixed unary query in this fragment, the set of nodes that satisfy the query can be calculated in time linear in the document size.
    BibTeX:
    @inproceedings{BP08,
      author = {Mikolaj Bojańczyk and Pawel Parys},
      title = {XPath Evaluation in Linear Time},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2008},
      pages = {241--250},
      doi = {http://doi.acm.org/10.1145/1376916.1376951}
    }
    
    Bojańczyk, M. & Segoufin, L. Tree Languages Defined in First-Order Logic with One Quantifier Alternation 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 233-245  inproceedings DOI  
    Abstract: We study tree languages that can be defined in $\Delta_2$. These are tree languages definable by a first-order formula whose quantifier prefix is $\exists^*\forall^*$, and simultaneously by a first-order formula whose quantifier prefix is $\forall^*\exists^*$, both formulas over the signature with the descendant relation. We provide an effective characterization of tree languages definable in $\Delta_2$. This characterization is in terms of algebraic equations. Over words, the class of word languages definable in $\Delta_2$ forms a robust class, which was given an effective algebraic characterization by Pin and Weil [11].
    BibTeX:
    @inproceedings{BS08,
      author = {Mikolaj Bojańczyk and Luc Segoufin},
      title = {Tree Languages Defined in First-Order Logic with One Quantifier Alternation},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {233--245},
      doi = {http://dx.doi.org/10.1007/978-3-540-70583-3_20}
    }
    
    Boustani, N.E. & Hage, J. Improving Type Error Messages for Generic Java 2009 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 131-140  inproceedings DOI  
    Abstract: Since version 1.5, generics (parametric polymorphism) are part of the Java language. However, adding parametric polymorphism to a language that is built on inclusion polymorphism can be confusing to a novice programmer, because the typing rules are suddenly different and, in the case of Generic Java, quite complex. Indeed, the main Java compilers, Eclipse's EJC compiler and Sun's JAVAC, do not even accept the same set of programs. Moreover, experience with these compilers shows that the error messages provided by them leave more than a little to be desired. To alleviate the latter problem, we describe how to adapt the type inference process of Java to obtain better error diagnostics for generic method invocations. The extension has been implemented into the JastAdd extensible Java compiler.
    BibTeX:
    @inproceedings{BH09,
      author = {Nabil El Boustani and Jurriaan Hage},
      title = {Improving Type Error Messages for Generic Java},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2009},
      pages = {131--140},
      doi = {http://doi.acm.org/10.1145/1480945.1480964}
    }
    
    Brainerd, W.S. Tree Generating Regular Systems 1969 Information and Control
    Vol. 14, pp. 217-231 
    article DOI  
    Abstract: Trees are defined as mappings from tree structures (in the graph-theoretic sense) into sets of symbols.

    Regular systems are defined in which the production rules are of the form $\varphi \to \psi$, where $\varphi$ and $\psi$ are trees. An application of a rule involves replacing a subtree $\varphi$ by the tree $\psi$.

    The main result is that the sets of trees generated by regular systems are exactly those that are accepted by tree automata. This generalizes a theorem of Büchi, proved for strings.

    BibTeX:
    @article{Brainerd69,
      author = {Walter S. Brainerd},
      title = {Tree Generating Regular Systems},
      journal = {Information and Control},
      year = {1969},
      volume = {14},
      pages = {217--231},
      doi = {http://dx.doi.org/10.1016/S0019-9958(69)90065-5}
    }
    
    Brookes, S. A Semantics for Concurrent Separation Logic 2007 Theoretical Computer Science
    Vol. 375, pp. 227-270 
    article DOI  
    Abstract: We present a trace semantics for a language of parallel programs which share access to mutable data. We introduce a resource-sensitive logic for partial correctness, based on a recent proposal of O'Hearn, adapting separation logic to the concurrent setting. The logic allows proofs of parallel programs in which ``ownership'' of critical data, such as the right to access, update or deallocate a pointer, is transferred dynamically between concurrent processes. We prove soundness of the logic, using a novel ``local'' interpretation of traces which allows accurate reasoning about ownership. We show that every provable program is race-free.
    BibTeX:
    @article{Brookes07,
      author = {Stephen Brookes},
      title = {A Semantics for Concurrent Separation Logic},
      journal = {Theoretical Computer Science},
      year = {2007},
      volume = {375},
      pages = {227--270},
      doi = {http://dx.doi.org/10.1016/j.tcs.2006.12.034}
    }
    
    Brzozowski, J.A. Derivatives of Regular Expressions 1964 Journal of the ACM
    Vol. 11, pp. 481-494 
    article DOI  
    BibTeX:
    @article{Brzozowski64,
      author = {Janusz A. Brzozowski},
      title = {Derivatives of Regular Expressions},
      journal = {Journal of the ACM},
      year = {1964},
      volume = {11},
      pages = {481--494},
      doi = {http://doi.acm.org/10.1145/321239.321249}
    }
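Brzozowski's construction lends itself to a direct sketch: a regular expression matches a word iff the derivative with respect to each successive character is nullable at the end. The tuple encoding of expressions is our own:

```python
# Regular expressions as tuples: ('empty',), ('eps',), ('chr', c),
# ('alt', r, s), ('seq', r, s), ('star', r).

def nullable(r):
    """Does r accept the empty word?"""
    t = r[0]
    if t in ('eps', 'star'):
        return True
    if t == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if t == 'seq':
        return nullable(r[1]) and nullable(r[2])
    return False  # 'empty' and 'chr'

def deriv(r, c):
    """Brzozowski derivative: { w | c w is accepted by r }."""
    t = r[0]
    if t in ('empty', 'eps'):
        return ('empty',)
    if t == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if t == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if t == 'seq':
        d = ('seq', deriv(r[1], c), r[2])
        if nullable(r[1]):
            return ('alt', d, deriv(r[2], c))
        return d
    return ('seq', deriv(r[1], c), r)  # 'star'

def matches(r, w):
    for c in w:
        r = deriv(r, c)
    return nullable(r)

AB_STAR = ('star', ('seq', ('chr', 'a'), ('chr', 'b')))  # (ab)*
```

Iterating `deriv` over the distinct derivatives (up to similarity) yields Brzozowski's DFA construction; `matches` is the special case that follows a single path.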
    
    Buneman, P., Fernandez, M. & Suciu, D. UnQL: A Query Language and Algebra for Semistructured Data Based on Structural Recursion 2000 VLDB Journal
    Vol. 9, pp. 76-110 
    article DOI  
    Abstract: This paper presents structural recursion as the basis of the syntax and semantics of query languages for semistructured data and XML. We describe a simple and powerful query language based on pattern matching and show that it can be expressed using structural recursion, which is introduced as a top-down, recursive function, similar to the way XSL is defined on XML trees. On cyclic data, structural recursion can be defined in two equivalent ways: as a recursive function which evaluates the data top-down and remembers all its calls to avoid infinite loops, or as a bulk evaluation which processes the entire data in parallel using only traditional relational algebra operators. The latter makes it possible for optimization techniques in relational queries to be applied to structural recursion. We show that the composition of two structural recursion queries can be expressed as a single such query, and this is used as the basis of an optimization method for mediator systems. Several other formal properties are established: structural recursion can be expressed in first-order logic extended with transitive closure; its data complexity is PTIME; and over relational data it is a conservative extension of the relational calculus. The underlying data model is based on value equality, formally defined with bisimulation. Structural recursion is shown to be invariant with respect to value equality.
    BibTeX:
    @article{BFS00,
      author = {Peter Buneman and Mary Fernandez and Dan Suciu},
      title = {UnQL: A Query Language and Algebra for Semistructured Data Based on Structural Recursion},
      journal = {VLDB Journal},
      year = {2000},
      volume = {9},
      pages = {76--110},
      doi = {http://dx.doi.org/10.1007/s007780050084}
    }
    
    Burrows, M. & Wheeler, D.J. A Block-Sorting Lossless Data Compression Algorithm 1994 (124)  techreport URL 
    Abstract: We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware.

    The algorithm works by applying a reversible transformation to a block of input text. The transformation does not itself compress the data, but reorders it to make it easy to compress with simple algorithms such as move-to-front coding.

    Our algorithm achieves speed comparable to algorithms based on the techniques of Lempel and Ziv, but obtains compression close to the best statistical modelling techniques. The size of the input block must be large (a few kilobytes) to achieve good compression.

    BibTeX:
    @techreport{BW94,
      author = {M. Burrows and D. J. Wheeler},
      title = {A Block-Sorting Lossless Data Compression Algorithm},
      year = {1994},
      number = {124},
      url = {http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/abstracts/src-rr-124.html}
    }
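    The block-sorting transform is easy to demonstrate in miniature. Below is a toy sketch (ours, not the authors' implementation) that uses an explicit sentinel character instead of the report's row index, which keeps the inverse simple at the cost of efficiency and generality:

```python
# Toy Burrows-Wheeler transform and inverse, using a '\0' sentinel.
# Quadratic in the input length; fine for illustration only.

def bwt(s):
    """Forward transform: last column of the sorted rotation matrix."""
    s = s + "\0"                   # sentinel: unique and sorts first
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def ibwt(last):
    """Inverse transform by repeatedly re-sorting growing columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    # the row ending in the sentinel is the original string
    row = next(r for r in table if r.endswith("\0"))
    return row.rstrip("\0")

text = "banana"
assert ibwt(bwt(text)) == text
# Equal characters cluster into runs in the transformed output, which is
# what makes it easy to compress with move-to-front coding afterwards.
```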
    
    Busatto, G., Lohrey, M. & Maneth, S. Efficient Memory Representation of XML Document Trees 2008 Information Systems
    Vol. 33, pp. 456-474 
    article DOI  
    Abstract: Implementations that load XML documents and give access to them via, e.g., the DOM, suffer from huge memory demands: the space needed to load an XML document is usually many times larger than the size of the document. A considerable amount of memory is needed to store the tree structure of the XML document. In this paper, a technique is presented that allows one to represent the tree structure of an XML document in an efficient way. The representation exploits the high regularity in XML documents by compressing their tree structure; that is, by detecting and removing repetitions of tree patterns. Formally, context-free tree grammars that generate only a single tree are used for tree compression. The functionality of basic tree operations, like traversal along edges, is preserved under this compressed representation. This allows one to execute queries (and in particular, bulk operations) directly, without prior decompression. The complexity of certain computational problems like validation against XML types or testing equality is investigated for compressed input trees.
    BibTeX:
    @article{BLM08,
      author = {Giorgio Busatto and Markus Lohrey and Sebastian Maneth},
      title = {Efficient Memory Representation of XML Document Trees},
      journal = {Information Systems},
      year = {2008},
      volume = {33},
      pages = {456--474},
      doi = {http://dx.doi.org/10.1016/j.is.2008.01.004}
    }
    
    Calcagno, C., Dinsdale-Young, T. & Gardner, P. Adjunct Elimination in Context Logic for Trees 2007 Asian Symposium on Programming Languages and Systems (APLAS), pp. 255-270  inproceedings DOI  
    Abstract: We study adjunct-elimination results for Context Logic applied to trees, following previous results by Lozes for Separation Logic and Ambient Logic. In fact, it is not possible to prove such elimination results for the original single-holed formulation of Context Logic. Instead, we prove our results for multi-holed Context Logic.
    BibTeX:
    @inproceedings{CDG07,
      author = {Cristiano Calcagno and Thomas Dinsdale-Young and Philippa Gardner},
      title = {Adjunct Elimination in Context Logic for Trees},
      booktitle = {Asian Symposium on Programming Languages and Systems (APLAS)},
      year = {2007},
      pages = {255--270},
      doi = {http://dx.doi.org/10.1007/978-3-540-76637-7_17}
    }
    
    Calcagno, C., Gardner, P. & Zarfaty, U. Context Logic and Tree Update 2005 Principles of Programming Languages (POPL), pp. 271-282  inproceedings DOI  
    Abstract: Spatial logics have been used to describe properties of tree-like structures (Ambient Logic) and in a Hoare style to reason about dynamic updates of heap-like structures (Separation Logic). We integrate this work by analyzing dynamic updates to tree-like structures with pointers (such as XML with identifiers and idrefs). Naive adaptations of the Ambient Logic are not expressive enough to capture such local updates. Instead we must explicitly reason about arbitrary tree contexts in order to capture updates throughout the tree. We introduce Context Logic, study its proof theory and models, and show how it generalizes Separation Logic and its general theory BI. We use it to reason locally about a small imperative programming language for updating trees, using a Hoare logic in the style of O'Hearn, Reynolds and Yang, and show that weakest preconditions are derivable. We demonstrate the robustness of our approach by using Context Logic to capture the locality of term rewrite systems.
    BibTeX:
    @inproceedings{CGZ05,
      author = {Cristiano Calcagno and Philippa Gardner and Uri Zarfaty},
      title = {Context Logic and Tree Update},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2005},
      pages = {271--282},
      doi = {http://doi.acm.org/10.1145/1040305.1040328}
    }
    
    Cannarozzi, D.J., Plezbert, M.P. & Cytron, R.K. Contaminated Garbage Collection 2000 Programming Language Design and Implementation (PLDI), pp. 264-273  inproceedings DOI  
    Abstract: We describe a new method for determining when an object can be garbage collected. The method does not require marking live objects. Instead, each object X is dynamically associated with a stack frame M, such that X is collectable when M pops. Because X could have been dead earlier, our method is conservative. Our results demonstrate that the method nonetheless identifies a large percentage of collectable objects. The method has been implemented in Sun's Java Virtual Machine interpreter, and results are presented based on this implementation.
    BibTeX:
    @inproceedings{CPC00,
      author = {Dante J. Cannarozzi and Michael P. Plezbert and Ron K. Cytron},
      title = {Contaminated Garbage Collection},
      booktitle = {Programming Language Design and Implementation (PLDI)},
      year = {2000},
      pages = {264--273},
      doi = {http://doi.acm.org/10.1145/349299.349334}
    }
    
    Cantin, F., Legay, A. & Wolper, P. Computing Convex Hulls by Automata Iteration 2008 Conference on Implementation and Application of Automata (CIAA), pp. 112-121  inproceedings DOI URL 
    Abstract: This paper considers the problem of computing the real convex hull of a finite set of n-dimensional integer vectors. The starting point is a finite-automaton representation of the initial set of vectors. The proposed method consists in computing a sequence of automata representing approximations of the convex hull and using extrapolation techniques to compute the limit of this sequence. The convex hull can then be directly computed from this limit in the form of an automaton-based representation of the corresponding set of real vectors. The technique is quite general and has been implemented. Also, our result fits in a wider scheme whose objective is to improve the techniques for converting automata-based representation of constraints to formulas.
    BibTeX:
    @inproceedings{CLW08,
      author = {François Cantin and Axel Legay and Pierre Wolper},
      title = {Computing Convex Hulls by Automata Iteration},
      booktitle = {Conference on Implementation and Application of Automata (CIAA)},
      year = {2008},
      pages = {112--121},
      url = {http://www.montefiore.ulg.ac.be/~legay/papers/indexpubli.html},
      doi = {http://dx.doi.org/10.1007/978-3-540-70844-5_12}
    }
    
    ten Cate, B. & Segoufin, L. XPath, Transitive Closure Logic, and Nested Tree Walking Automata 2008 Principles of Database Systems (PODS), pp. 251-260  inproceedings DOI  
    Abstract: We consider the navigational core of XPath, extended with two operators: the Kleene star for taking the transitive closure of path expressions, and a subtree relativisation operator, allowing one to restrict attention to a specific subtree while evaluating a subexpression. We show that the expressive power of this XPath dialect equals that of FO(MTC), first-order logic extended with monadic transitive closure. We also give a characterization in terms of nested tree-walking automata. Using the latter we then proceed to show that the language is strictly less expressive than MSO. This solves an open question about the relative expressive power of FO(MTC) and MSO on trees. We also investigate the complexity of our XPath dialect. We show that query evaluation can be done in polynomial time (combined complexity), but that satisfiability and query containment (as well as emptiness for our automaton model) are 2ExpTime-complete (it is ExpTime-complete for Core XPath).
    BibTeX:
    @inproceedings{CS08,
      author = {Balder ten Cate and Luc Segoufin},
      title = {XPath, Transitive Closure Logic, and Nested Tree Walking Automata},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2008},
      pages = {251--260},
      doi = {http://doi.acm.org/10.1145/1376916.1376952}
    }
    
    Caucal, D. Boolean Algebras of Unambiguous Context-Free Languages 2008 Foundations of Software Technology and Theoretical Computer Science (FSTTCS)  inproceedings URL 
    Abstract: Several recent works have studied subfamilies of deterministic context-free languages with good closure properties, for instance the families of input-driven or visibly pushdown languages, or more generally families of languages accepted by pushdown automata whose stack height can be uniquely determined by the input word read so far. These ideas can be described as a notion of synchronization. In this paper we present an extension of synchronization to all context-free languages using graph grammars. This generalization allows one to define boolean algebras of non-deterministic but unambiguous context-free languages containing regular languages.

    BibTeX:
    @inproceedings{Caucal08,
      author = {Didier Caucal},
      title = {Boolean Algebras of Unambiguous Context-Free Languages},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2008},
      url = {http://drops.dagstuhl.de/portals/FSTTCS08/}
    }
    
    Chazelle, B. The Discrepancy Method: Randomness and Complexity 2001   book URL 
    BibTeX:
    @book{Chazelle01,
      author = {Bernard Chazelle},
      title = {The Discrepancy Method: Randomness and Complexity},
      publisher = {Cambridge University Press},
      year = {2001},
      url = {http://www.cs.princeton.edu/~chazelle/book.html}
    }
    
    Chin, W. Safe Fusion of Functional Expressions 1992 ACM SIGPLAN Lisp Pointers
    Vol. 5, pp. 11-20 
    article DOI  
    Abstract: Large functional programs are often constructed by decomposing each big task into smaller tasks which can be performed by simpler functions. This hierarchical style of developing programs has been found to improve programmers' productivity because smaller functions are easier to construct and reuse. However, programs written in this way tend to be less efficient. Unnecessary intermediate data structures may be created. More function invocations may be required. To reduce such performance penalties, Wadler proposed a transformation algorithm, called deforestation, which could automatically fuse certain composed expressions together in order to eliminate intermediate tree-like data structures. However, his technique is only applicable to a subset of first-order expressions. This paper will generalise the deforestation technique to make it applicable to all first-order and higher-order functional programs. Our generalisation is made possible by the adoption of a model for safe fusion which views each function as a producer and its parameters as consumers. Through this model, static program properties are proposed to classify producers and consumers as either safe or unsafe. This classification is used to identify sub-terms that can be safely fused/eliminated. We present the generalised transformation algorithm as a set of syntax-directed rewrite rules, illustrate it with examples, and provide an outline of its termination proof.
    BibTeX:
    @article{Chin92,
      author = {Wei-Ngan Chin},
      title = {Safe Fusion of Functional Expressions},
      journal = {ACM SIGPLAN Lisp Pointers},
      year = {1992},
      volume = {5},
      pages = {11--20},
      doi = {http://doi.acm.org/10.1145/141478.141494}
    }
    
    Chomsky, N. On Certain Formal Properties of Grammars 1959 Information and Control
    Vol. 2, pp. 137-167 
    article DOI  
    Abstract: A grammar can be regarded as a device that enumerates the sentences of a language. We study a sequence of restrictions that limit grammars first to Turing machines, then to two types of system from which a phrase structure description of the generated language can be drawn, and finally to finite state Markov sources (finite automata). These restrictions are shown to be increasingly heavy in the sense that the languages that can be generated by grammars meeting a given restriction constitute a proper subset of those that can be generated by grammars meeting the preceding restriction. Various formulations of phrase structure description are considered, and the source of their excess generative power over finite state sources is investigated in greater detail.
    BibTeX:
    @article{Chomsky59,
      author = {Noam Chomsky},
      title = {On Certain Formal Properties of Grammars},
      journal = {Information and Control},
      year = {1959},
      volume = {2},
      pages = {137--167},
      doi = {http://dx.doi.org/10.1016/S0019-9958(59)90362-6}
    }
    
    Cohen, R. & Harry, E. Automatic Generation of Near-Optimal Linear-Time Translators for Non-Circular Attribute Grammars 1979 Principles of Programming Languages (POPL), pp. 121-134  inproceedings DOI  
    Abstract: Attribute grammars are an extension of context-free grammars devised by Knuth as a formalism for specifying the semantics of a context-free language along with the syntax of the language. The syntactic phase of the translation process has been extensively studied and many techniques are available for automatically generating efficient parsers for context-free grammars. Attribute grammars offer the prospect of similarly automating the implementation of the semantic phase. In this paper we present a general method of constructing, for any non-circular attribute grammar, a deterministic translator which will perform the semantic evaluation of each syntax tree of the grammar in time linear in the size of the tree. Each tree is traversed in a manner particularly suited to the shape of the tree, yielding a near-optimal evaluation order for that tree. Basically, the translator consists of a finite set of "Local Control Automata", one for each production; these are ordinary finite-state acyclic automata augmented with some special features, which are used to regulate the evaluation process of each syntax tree. With each node in the tree there will be associated the Local Control Automaton of the production applying at the node. At any given time during the translation process all Local Control Automata are inactive, except for the one associated with the currently processed node, which is responsible for directing the next steps taken by the translator until control is finally passed to a neighbour node, reactivating its Local Control Automaton. The Local Control Automata of neighbour nodes communicate with each other. The construction of the translator is custom tailored to each individual attribute grammar. The dependencies among the attributes occurring in the semantic rules are analysed to produce a near-optimal evaluation strategy for that grammar. This strategy ensures that during the evaluation process, each time the translator enters some subtree of the syntax tree, at least one new attribute evaluation will occur at each node visited. It is this property which distinguishes the method presented here from previously known methods of generating translators for unrestricted attribute grammars, and which causes the translators to be near-optimal.
    BibTeX:
    @inproceedings{CH79,
      author = {Rina Cohen and Eli Harry},
      title = {Automatic Generation of Near-Optimal Linear-Time Translators for Non-Circular Attribute Grammars},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {1979},
      pages = {121--134},
      doi = {http://doi.acm.org/10.1145/567752.567764}
    }
    
    Colcombet, T. A Combinatorial Theorem for Trees 2007 International Colloquium on Automata, Languages and Programming (ICALP), pp. 901-912  inproceedings DOI  
    Abstract: Following the idea developed by I. Simon in his theorem of Ramseyan factorisation forests, we develop a result of `deterministic factorisations'. This extra determinism property makes it usable on trees (finite or infinite).

    We apply our result for proving that, over trees, every monadic interpretation is equivalent to the composition of a first-order interpretation (with access to the ancestor relation) and a monadic marking. Using this remark, we give new characterisations for prefix-recognisable structures and for the Caucal hierarchy.

    Furthermore, we believe that this approach has other potential applications.

    BibTeX:
    @inproceedings{Colcombet07,
      author = {Thomas Colcombet},
      title = {A Combinatorial Theorem for Trees},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2007},
      pages = {901--912},
      doi = {http://dx.doi.org/10.1007/978-3-540-73420-8_77}
    }
    
    Comon, H., Dauchet, M., Gilleron, R., Löding, C., Jacquemard, F., Lugiez, D., Tison, S. & Tommasi, M. Tree Automata Techniques and Applications 2007 http://www.grappa.univ-lille3.fr/tata  misc URL 
    BibTeX:
    @misc{TATA07,
      author = {H. Comon and M. Dauchet and R. Gilleron and C. Löding and F. Jacquemard and D. Lugiez and S. Tison and M. Tommasi},
      title = {Tree Automata Techniques and Applications},
      year = {2007},
      url = {http://www.grappa.univ-lille3.fr/tata}
    }
    
    Coquidé, J.-L., Dauchet, M., Gilleron, R. & Vágvölgyi, S. Bottom-up Tree Pushdown Automata: Classification and Connection with Rewrite Systems 1994 Theoretical Computer Science
    Vol. 127, pp. 69-98 
    article DOI  
    Abstract: We define different types of bottom-up tree pushdown automata and study their connections with rewrite systems. Along this line of research we complete and generalize the results of Gallier, Book and Salomaa. We define the notion of a tail-reduction-free (trf) rewrite system. Using the decidability of ground reducibility, we prove the decidability of the trf property. Monadic rewrite systems of Book, Gallier and Salomaa become a natural particular case of trf rewrite systems. We associate a deterministic bottom-up tree pushdown automaton with any left-linear trf rewrite system. Finally, we generalize monadic rewrite systems by introducing the notion of a semi-monadic rewrite system and show that, like a monadic rewrite system, it preserves recognizability.
    BibTeX:
    @article{CDGV94,
      author = {Jean-Luc Coquidé and Max Dauchet and Rémi Gilleron and Sándor Vágvölgyi},
      title = {Bottom-up Tree Pushdown Automata: Classification and Connection with Rewrite Systems},
      journal = {Theoretical Computer Science},
      year = {1994},
      volume = {127},
      pages = {69--98},
      doi = {http://dx.doi.org/10.1016/0304-3975(94)90101-5}
    }
    
    Courcelle, B. A Representation of Trees by Languages I 1978 Theoretical Computer Science
    Vol. 6, pp. 255-279 
    article DOI  
    Abstract: A tree can be represented by a language consisting of a suitable coding of its finite branches. We investigate this representation and derive a number of reductions between certain equivalence problems for context-free tree grammars and recursive program schemes and the (open) equivalence problem for DPDA's. This is the first part of this work: it is devoted to technical results on prefix-free languages and strict deterministic grammars. Application to context-free tree grammars will be published in the second part.
    BibTeX:
    @article{Courcelle78,
      author = {Bruno Courcelle},
      title = {A Representation of Trees by Languages I},
      journal = {Theoretical Computer Science},
      year = {1978},
      volume = {6},
      pages = {255--279},
      doi = {http://dx.doi.org/10.1016/0304-3975(78)90008-7}
    }
    
    Courcelle, B. A Representation of Trees by Languages II 1978 Theoretical Computer Science
    Vol. 7, pp. 25-55 
    article DOI  
    Abstract: We apply certain technical results, published in the first part of the present work, to obtain reductions between equivalence problems for simple deterministic tree grammars and recursive program schemes and the (open) equivalence problem for DPDA's.
    BibTeX:
    @article{Courcelle78a,
      author = {Bruno Courcelle},
      title = {A Representation of Trees by Languages II},
      journal = {Theoretical Computer Science},
      year = {1978},
      volume = {7},
      pages = {25--55},
      doi = {http://dx.doi.org/10.1016/0304-3975(78)90039-7}
    }
    
    Courcelle, B. Monadic Second-Order Definable Graph Transductions: A Survey 1994 Theoretical Computer Science
    Vol. 126, pp. 53-75 
    article DOI  
    Abstract: Formulas of monadic second-order logic can be used to specify graph transductions, i.e., multi-valued functions from graphs to graphs. We obtain in this way classes of graph transductions, called monadic second-order definable graph transductions (or, more simply, definable transductions) that are closed under composition and preserve the two known classes of context-free sets of graphs, namely the class of hyperedge replacement (HR) and the class of vertex replacement (VR) sets. These two classes can be characterized in terms of definable transductions and recognizable sets of finite trees, independently of the rewriting mechanisms used to define the HR and VR grammars. When restricted to words, the definable transductions are strictly more powerful than the rational transductions such that the image of every finite word is finite; they do not preserve context-free languages. We also describe the sets of discrete (edgeless) labelled graphs that are the images of HR and VR sets under definable transductions: this gives a version of Parikh's theorem (i.e., the characterization of the commutative images of context-free languages) which extends the classical one and applies to HR and VR sets of graphs.
    BibTeX:
    @article{Courcelle94,
      author = {Bruno Courcelle},
      title = {Monadic Second-Order Definable Graph Transductions: A Survey},
      journal = {Theoretical Computer Science},
      year = {1994},
      volume = {126},
      pages = {53--75},
      doi = {http://dx.doi.org/10.1016/0304-3975(94)90268-2}
    }
    
    Courcelle, B. Linear Delay Enumeration and Monadic Second-Order Logic 2009 Discrete Applied Mathematics  article DOI  
    Abstract: The results of a query expressed by a monadic second-order formula on a tree, on a graph or on a relational structure of tree-width at most k, can be enumerated with a delay between two outputs proportional to the size of the next output. This is possible by using a preprocessing that takes time O(n log n), where n is the number of vertices or elements. One can also output the i-th element directly with respect to a fixed ordering, however in more than linear time in its size. These results extend to graphs of bounded clique-width. We also consider the enumeration of finite parts of recognizable sets of terms specified by parameters such as size, height or Strahler number.
    BibTeX:
    @article{Courcelle09,
      author = {Bruno Courcelle},
      title = {Linear Delay Enumeration and Monadic Second-Order Logic},
      journal = {Discrete Applied Mathematics},
      year = {2009},
      doi = {http://dx.doi.org/10.1016/j.dam.2008.08.021}
    }
    
    Courcelle, B. & Franchi-Zannettacci, P. Attribute Grammars and Recursive Program Schemes I 1982 Theoretical Computer Science
    Vol. 17, pp. 163-191 
    article DOI  
    Abstract: We show that an attribute system can be translated (in a certain way) into a recursive program scheme if and only if it is strongly noncircular. This property introduced by Kennedy and Warren [20] is decidable in polynomial time. We obtain an algorithm to decide the equivalence problem for purely synthesized attribute systems.
    BibTeX:
    @article{CF82,
      author = {Bruno Courcelle and Paul Franchi-Zannettacci},
      title = {Attribute Grammars and Recursive Program Schemes I},
      journal = {Theoretical Computer Science},
      year = {1982},
      volume = {17},
      pages = {163--191},
      doi = {http://dx.doi.org/10.1016/0304-3975(82)90003-2}
    }
    
    Courcelle, B. & Franchi-Zannettacci, P. Attribute Grammars and Recursive Program Schemes II 1982 Theoretical Computer Science
    Vol. 17, pp. 235-257 
    article DOI  
    Abstract: We show that an attribute system can be translated (in a certain way) into a recursive program scheme if and only if it is strongly noncircular. This property introduced by Kennedy and Warren [20] is decidable in polynomial time. We obtain an algorithm to decide the equivalence problem for purely synthesized attribute systems.

    This is the second part of a work which has been divided for editorial reasons. Sections 1, 2, 3 and 4 can be found in the first part. The numbering of theorems, propositions, etc. indicates the section: Theorem 3.16 can be found in Section 3.

    BibTeX:
    @article{CF82a,
      author = {Bruno Courcelle and Paul Franchi-Zannettacci},
      title = {Attribute Grammars and Recursive Program Schemes II},
      journal = {Theoretical Computer Science},
      year = {1982},
      volume = {17},
      pages = {235--257},
      doi = {http://dx.doi.org/10.1016/0304-3975(82)90024-X}
    }
    
    Crawford, J.M. & Auton, L.D. Experimental Results on the Crossover Point in Random 3-SAT 1996 Artificial Intelligence
    Vol. 81, pp. 31-57 
    article DOI URL 
    Abstract: Determining whether a propositional theory is satisfiable is a prototypical example of an NP-complete problem. Further, a large number of problems that occur in knowledge representation, learning, planning, and other areas of AI are essentially satisfiability problems. This paper reports on the most extensive set of experiments to date on the location and nature of the crossover point in satisfiability problems. These experiments generally confirm previous results with two notable exceptions. First, we have found that neither of the functions previously proposed accurately models the location of the crossover point. Second, we have found no evidence of any hard problems in the underconstrained region. In fact the hardest problems found in the underconstrained region were many times easier than the easiest unsatisfiable problems found in the neighborhood of the crossover point. We offer explanations for these apparent contradictions of previous results.
    BibTeX:
    @article{CA96,
      author = {James M. Crawford and Larry D. Auton},
      title = {Experimental Results on the Crossover Point in Random 3-SAT},
      journal = {Artificial Intelligence},
      year = {1996},
      volume = {81},
      pages = {31--57},
      url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.9940},
      doi = {http://dx.doi.org/10.1016/0004-3702(95)00046-1}
    }
    
    Damm, W. The IO- and OI-hierarchies 1982 Theoretical Computer Science
    Vol. 20, pp. 95-207 
    article DOI  
    Abstract: An analysis of recursive procedures in ALGOL 68 with finite modes shows that a denotational semantics of this language can be described on the level of program schemes using a typed λ-calculus with fixed-point operators. In the first part of this paper, we derive classical schematological theorems for the resulting class of level-n schemes. In part two, we investigate the language families obtained by call-by-value and call-by-name interpretation of level-n schemes over the algebra of formal languages. It is proved that differentiating according to the functional level of recursion leads to two infinite hierarchies of recursive languages, the IO- and OI-hierarchies, which can be characterized as canonical extensions of the regular, context-free, and IO- and OI-macro languages, respectively. Sufficient conditions are derived to establish strictness of IO-like hierarchies. Finally, we derive that recursion on higher types induces an infinite hierarchy of control structures by proving that level-n schemes are strictly less powerful than level-(n+1) schemes.
    BibTeX:
    @article{Damm82,
      author = {Werner Damm},
      title = {The IO- and OI-hierarchies},
      journal = {Theoretical Computer Science},
      year = {1982},
      volume = {20},
      pages = {95--207},
      doi = {http://dx.doi.org/10.1016/0304-3975(82)90009-3}
    }
    
    Damm, W. & Goerdt, A. An Automata-Theoretic Characterization of the OI-hierarchy 1982 International Colloquium on Automata, Languages and Programming (ICALP), pp. 141-153  inproceedings DOI  
    Abstract: Though it was obvious to insiders that the level-n pds -- which circulated in unformalized versions prior to the knowledge of Maslov's papers -- just had to be the automata model fitting the level-n languages, the complexity of the encodings in both directions shows how far apart both concepts are. We hope that the techniques developed in establishing Theorem 5.1 ($n\text{-}L_{OI} = n\text{-}PDA$ for all $n \geq 1$) will turn out to be useful in further applications, e.g. reducing the equivalence problem of level-n schemes [Da 1] to that of deterministic n-pda's (cf. [Cou], [Gal] for the case n = 1).
    BibTeX:
    @inproceedings{DG82,
      author = {Werner Damm and Andreas Goerdt},
      title = {An Automata-Theoretic Characterization of the OI-hierarchy},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {1982},
      pages = {141--153},
      doi = {http://dx.doi.org/10.1007/BFb0012764}
    }
    
    Buneman, P., Davidson, S.B. & Suciu, D. A Query Language and Optimization Techniques for Unstructured Data 1996 ACM SIGMOD International Conference on Management of Data (SIGMOD), pp. 505-516  inproceedings DOI  
    Abstract: A new kind of data model has recently emerged in which the database is not constrained by a conventional schema. Systems like ACeDB, which has become very popular with biologists, and the recent Tsimmis proposal for data integration organize data in tree-like structures whose components can be used equally well to represent sets and tuples. Such structures allow great flexibility in data representation. What query language is appropriate for such structures? Here we propose a simple language UnQL for querying data organized as a rooted, edge-labeled graph. In this model, relational data may be represented as fixed-depth trees, and on such trees UnQL is equivalent to the relational algebra. The novelty of UnQL consists in its programming constructs for arbitrarily deep data and for cyclic structures. While strictly more powerful than query languages with path expressions like XSQL, UnQL can still be efficiently evaluated. We describe new optimization techniques for the deep or "vertical" dimension of UnQL queries. Furthermore, we show that known optimization techniques for operators on flat relations apply to the "horizontal" dimension of UnQL.
    BibTeX:
    @inproceedings{BDS96,
      author = {Peter Buneman and Susan Davidson and Dan Suciu},
      title = {A Query Language and Optimization Techniques for Unstructured Data},
      booktitle = {ACM SIGMOD International Conference on Management of Data (SIGMOD)},
      year = {1996},
      pages = {505--516},
      doi = {http://doi.acm.org/10.1145/233269.233368}
    }
    
    Davis, M., Logemann, G. & Loveland, D. A Machine Program for Theorem-Proving 1962 Communications of the ACM
    Vol. 5, pp. 394-397 
    article DOI  
    Abstract: The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
    BibTeX:
    @article{DLL62,
      author = {Martin Davis and George Logemann and Donald Loveland},
      title = {A Machine Program for Theorem-Proving},
      journal = {Communications of the ACM},
      year = {1962},
      volume = {5},
      pages = {394--397},
      doi = {http://doi.acm.org/10.1145/368273.368557}
    }
    
    Davis, M. & Putnam, H. A Computing Procedure for Quantification Theory 1960 Journal of the ACM
    Vol. 7, pp. 201-215 
    article DOI  
    Abstract: The hope that mathematical methods employed in the investigation of formal logic would lead to purely computational methods for obtaining mathematical theorems goes back to Leibniz and has been revived by Peano around the turn of the century and by Hilbert's school in the 1920's. Hilbert, noting that all of classical mathematics could be formalized within quantification theory, declared that the problem of finding an algorithm for determining whether or not a given formula of quantification theory is valid was the central problem of mathematical logic. And indeed, at one time it seemed as if investigations of this "decision" problem were on the verge of success. However, it was shown by Church and by Turing that such an algorithm can not exist. This result led to considerable pessimism regarding the possibility of using modern digital computers in deciding significant mathematical questions. However, recently there has been a revival of interest in the whole question. Specifically, it has been realized that while no decision procedure exists for quantification theory there are many proof procedures available-that is, uniform procedures which will ultimately locate a proof for any formula of quantification theory which is valid but which will usually involve seeking "forever" in the case of a formula which is not valid-and that some of these proof procedures could well turn out to be feasible for use with modern computing machinery. Hao Wang [9] and P. C. Gilmore [3] have each produced working programs which employ proof procedures in quantification theory. Gilmore's program employs a form of a basic theorem of mathematical logic due to Herbrand, and Wang's makes use of a formulation of quantification theory related to those studied by Gentzen. However, both programs encounter decisive difficulties with any but the simplest formulas of quantification theory, in connection with methods of doing propositional calculus. 
Wang's program, because of its use of Gentzen-like methods, involves exponentiation on the total number of truth-functional connectives, whereas Gilmore's program, using normal forms, involves exponentiation on the number of clauses present. Both methods are superior in many cases to truth table methods which involve exponentiation on the total number of variables present, and represent important initial contributions, but both run into difficulty with some fairly simple examples. In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation. The superiority of the present procedure over those previously available is indicated in part by the fact that a formula on which Gilmore's routine for the IBM 704 causes the machine to compute for 21 minutes without obtaining a result was worked successfully by hand computation using the present method in 30 minutes. Cf. Section 6, below. It should be mentioned that, before it can be hoped to employ proof procedures for quantification theory in obtaining proofs of theorems belonging to "genuine" mathematics, finite axiomatizations, which are "short," must be obtained for various branches of mathematics. This last question will not be pursued further here; cf., however, Davis and Putnam [2], where one solution to this problem is given for...
    BibTeX:
    @article{DP60,
      author = {Martin Davis and Hilary Putnam},
      title = {A Computing Procedure for Quantification Theory},
      journal = {Journal of the ACM},
      year = {1960},
      volume = {7},
      pages = {201--215},
      doi = {http://dx.doi.org/10.1145/321033.321034}
    }
    
    Deransart, P., Jourdan, M. & Lorho, B. Attribute Grammars: Definitions, Systems and Bibliography 1988   book DOI  
    BibTeX:
    @book{DJL88,
      author = {Pierre Deransart and Martin Jourdan and Bernard Lorho},
      title = {Attribute Grammars: Definitions, Systems and Bibliography},
      publisher = {Springer Berlin},
      year = {1988},
      doi = {http://dx.doi.org/10.1007/BFb0030509}
    }
    
    Diaz, J., Kirousis, L., Mitsche, D. & Perez-Gimenez, X. A New Upper Bound for 3-SAT 2008 Foundations of Software Technology and Theoretical Computer Science (FSTTCS)  inproceedings URL 
    Abstract: We show that a randomly chosen $3$-CNF formula over $n$ variables with clauses-to-variables ratio at least $4.4898$ is asymptotically almost surely unsatisfiable. The previous best such bound, due to Dubois in 1999, was $4.506$. The first such bound, independently discovered by many groups of researchers since 1983, was $5.19$. Several decreasing values between $5.19$ and $4.506$ were published in the years between. The probabilistic techniques we use for the proof are, we believe, of independent interest.
    BibTeX:
    @inproceedings{DKMP08,
      author = {Josep Diaz and Lefteris Kirousis and Dieter Mitsche and Xavier Perez-Gimenez},
      title = {A New Upper Bound for 3-SAT},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2008},
      url = {http://drops.dagstuhl.de/opus/volltexte/2008/1750/}
    }
    
    Diestel, R. Graph Theory (Third Edition) 2005   book URL 
    BibTeX:
    @book{Diestel05,
      author = {Reinhard Diestel},
      title = {Graph Theory (Third Edition)},
      publisher = {Springer-Verlag},
      year = {2005},
      url = {http://www.math.uni-hamburg.de/home/diestel/books/graph.theory/}
    }
    
    Dietz, P.F. Maintaining Order in a Linked List 1982 ACM Symposium on Theory of Computing (STOC), pp. 122-127  inproceedings DOI  
    Abstract: We present a new representation for linked lists. This representation allows one to efficiently insert objects into the list and to quickly determine the order of list elements. The basic data structure, called an indexed 2-3 tree, allows one to do n inserts in O(n log n) steps and to determine order in constant time. We speed up the algorithm by dividing the data structure up into log* n layers. The improved algorithm does n insertions and comparisons in O(n log* n) steps. The paper concludes with two applications: determining ancestor relationships in a growing tree and maintaining a tree structured environment (context tree).
    BibTeX:
    @inproceedings{Dietz82,
      author = {Paul F. Dietz},
      title = {Maintaining Order in a Linked List},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {1982},
      pages = {122--127},
      doi = {http://doi.acm.org/10.1145/800070.802184}
    }
    
    Dominus, M.J. Higher-Order Perl 2005   book URL 
    BibTeX:
    @book{Dominus05,
      author = {Mark Jason Dominus},
      title = {Higher-Order Perl},
      publisher = {Powell's Books},
      year = {2005},
      url = {http://hop.perl.plover.com/book/}
    }
    
    Dovier, A., Piazza, C. & Policriti, A. An Efficient Algorithm for Computing Bisimulation Equivalence 2004 Theoretical Computer Science
    Vol. 331, pp. 221-256 
    article DOI  
    Abstract: We propose an efficient algorithmic solution to the problem of determining a Bisimulation Relation on a finite structure working both on the explicit and on the implicit (symbolic) representation. As far as the explicit case is concerned, starting from a set-theoretic point of view we propose an algorithm that optimizes the solution to the Relational Coarsest Partition Problem given by Paige and Tarjan (SIAM J. Comput. 16(6) (1987) 973); its use in model-checking packages is also discussed and tested. For well-structured graphs our algorithm reaches a linear worst-case behaviour. The proposed algorithm is then re-elaborated to produce a symbolic version.
    BibTeX:
    @article{DPP04,
      author = {Agostino Dovier and Carla Piazza and Alberto Policriti},
      title = {An Efficient Algorithm for Computing Bisimulation Equivalence},
      journal = {Theoretical Computer Science},
      year = {2004},
      volume = {331},
      pages = {221--256},
      doi = {http://dx.doi.org/10.1016/S0304-3975(03)00361-X}
    }
    
    Drewes, F. Computation by Tree Transductions 1996 School: University of Bremen  phdthesis URL 
    BibTeX:
    @phdthesis{Drewes96,
      author = {Frank Drewes},
      title = {Computation by Tree Transductions},
      school = {University of Bremen},
      year = {1996},
      url = {http://www.cs.umu.se/~drewes/biblio/data/all.html}
    }
    
    Drewes, F. & Engelfriet, J. Decidability of the Finiteness of Ranges of Tree Transductions 1998 Information and Computation
    Vol. 145, pp. 1-50 
    article DOI  
    Abstract: The finiteness of ranges of tree transductions is shown to be decidable for TBY+, the composition closure of macro tree transductions. Furthermore, TBY+-definable sets and TBY+-computable relations are considered, which are obtained by viewing a tree as an expression that denotes an element of a given algebra. A sufficient condition on the considered algebra is formulated under which the finiteness problem is decidable for TBY+-definable sets and for the ranges of TBY+-computable relations. The obtained result applies in particular to the class of string languages that can be defined by TBY+-transductions via the yield mapping. This is a large class which is proved to form a substitution-closed full AFL.
    BibTeX:
    @article{DE98,
      author = {Frank Drewes and Joost Engelfriet},
      title = {Decidability of the Finiteness of Ranges of Tree Transductions},
      journal = {Information and Computation},
      year = {1998},
      volume = {145},
      pages = {1--50},
      doi = {http://dx.doi.org/10.1006/inco.1998.2715}
    }
    
    Duske, Jü., Parchmann, R., Sedello, M. & Specht, J. IO-Macrolanguages and Attributed Translations 1977 Information and Control
    Vol. 35, pp. 87-105 
    article DOI  
    Abstract: By specializing the semantic-rules for attributed context-free grammars introduced by Knuth (Math. Systems Theory 2, 127-145) simple-L-attributed grammars are defined. The translations determined by these grammars are languages called value-languages. In this paper the question of classifying value-languages is considered. It is shown that the value-languages of simple-L-attributed grammars are exactly the IO-macrolanguages.
    BibTeX:
    @article{DPSS77,
      author = {Jürgen Duske and Rainer Parchmann and M. Sedello and Johann Specht},
      title = {IO-Macrolanguages and Attributed Translations},
      journal = {Information and Control},
      year = {1977},
      volume = {35},
      pages = {87--105},
      doi = {http://dx.doi.org/10.1016/S0019-9958(77)90309-6}
    }
    
    Filiot, E., Talbot, J.-M. & Tison, S. Tree Automata with Global Constraints 2008 Developments in Language Theory (DLT), pp. 314-326  inproceedings DOI  
    Abstract: A tree automaton with global tree equality and disequality constraints, TAGED for short, is an automaton on trees which allows testing (dis)equalities between subtrees that may be arbitrarily far apart. In particular, it is equipped with a (dis)equality relation on states, so that whenever two subtrees t and t' evaluate (in an accepting run) to two states which are in the (dis)equality relation, they must be (dis)equal. We study several properties of TAGEDs, and prove decidability of emptiness of several classes. We give two applications of TAGEDs: decidability of an extension of Monadic Second Order Logic with tree isomorphism tests and of unification with membership constraints. These results significantly improve the results of [10].
    BibTeX:
    @inproceedings{ET08,
      author = {Emmanuel Filiot and Jean-Marc Talbot and Sophie Tison},
      title = {Tree Automata with Global Constraints},
      booktitle = {Developments in Language Theory (DLT)},
      year = {2008},
      pages = {314--326},
      doi = {http://dx.doi.org/10.1007/978-3-540-85780-8_25}
    }
    
    Engelfriet, J. Bottom-up and Top-down Tree Transformations -- A Comparison 1975 Mathematical Systems Theory
    Vol. 9, pp. 198-231 
    article DOI  
    Abstract: The top-down and bottom-up tree transducer are incomparable with respect to their transformation power. The difference between them is mainly caused by the different order in which they use the facilities of copying and nondeterminism. One can however define certain simple tree transformations, independent of the top-down/bottom-up distinction, such that each tree transformation, top-down or bottom-up, can be decomposed into a number of these simple transformations. This decomposition result is used to give simple proofs of composition results concerning bottom-up tree transformations.

    A new tree transformation model is introduced which generalizes both the top-down and the bottom-up tree transducer.

    BibTeX:
    @article{Engelfriet75,
      author = {Joost Engelfriet},
      title = {Bottom-up and Top-down Tree Transformations -- A Comparison},
      journal = {Mathematical Systems Theory},
      year = {1975},
      volume = {9},
      pages = {198--231},
      doi = {http://dx.doi.org/10.1007/BF01704020}
    }
    
    Engelfriet, J. Surface Tree Languages and Parallel Derivation Trees 1976 Theoretical Computer Science
    Vol. 2, pp. 9-27 
    article DOI  
    Abstract: The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.
    BibTeX:
    @article{Engelfriet76,
      author = {Joost Engelfriet},
      title = {Surface Tree Languages and Parallel Derivation Trees},
      journal = {Theoretical Computer Science},
      year = {1976},
      volume = {2},
      pages = {9--27},
      doi = {http://dx.doi.org/10.1016/0304-3975(76)90003-7}
    }
    
    Engelfriet, J. Top-down Tree Transducers with Regular Look-ahead 1977 Mathematical Systems Theory
    Vol. 10, pp. 289-303 
    article DOI  
    Abstract: Top-down tree transducers with regular look-ahead are introduced. It is shown how these can be decomposed and composed, and how this leads to closure properties of surface sets and tree transformation languages. Particular attention is paid to deterministic tree transducers.

    The research reported here was carried out during a one-year visit of the author to the Dept. of Computer Science of Aarhus University, Aarhus, Denmark.

    BibTeX:
    @article{Engelfriet77,
      author = {Joost Engelfriet},
      title = {Top-down Tree Transducers with Regular Look-ahead},
      journal = {Mathematical Systems Theory},
      year = {1977},
      volume = {10},
      pages = {289--303},
      doi = {http://dx.doi.org/10.1007/BF01683280}
    }
    
    Engelfriet, J. On Tree Transducers for Partial Functions 1978 Information Processing Letters
    Vol. 7, pp. 170-172 
    article DOI  
    BibTeX:
    @article{Engelfriet78,
      author = {Joost Engelfriet},
      title = {On Tree Transducers for Partial Functions},
      journal = {Information Processing Letters},
      year = {1978},
      volume = {7},
      pages = {170--172},
      doi = {http://dx.doi.org/10.1016/0020-0190(78)90060-1}
    }
    
    Engelfriet, J. The Complexity of Languages Generated by Attribute Grammars 1986 SIAM Journal on Computing
    Vol. 15, pp. 70-86 
    article DOI  
    Abstract: A string-valued attribute grammar (SAG) has a semantic domain of strings over some alphabet, with concatenation as basic operation. It is shown that the output language (i.e., the range of the translation) of a SAG is log-space reducible to a context-free language.
    BibTeX:
    @article{Engelfriet86,
      author = {Joost Engelfriet},
      title = {The Complexity of Languages Generated by Attribute Grammars},
      journal = {SIAM Journal on Computing},
      year = {1986},
      volume = {15},
      pages = {70--86},
      doi = {http://dx.doi.org/10.1137/0215005}
    }
    
    Engelfriet, J. The Complexity of Typechecking Tree-walking Tree Transducers 2008   techreport URL 
    BibTeX:
    @techreport{Engelfriet08,
      author = {Joost Engelfriet},
      title = {The Complexity of Typechecking Tree-walking Tree Transducers},
      year = {2008},
      url = {http://www.liacs.nl/~engelfri/}
    }
    
    Engelfriet, J. & Hoogeboom, H.J. MSO Definable String Transductions and Two-Way Finite State Transducers 2001 ACM Transactions on Computational Logic
    Vol. 2, pp. 216-254 
    article DOI  
    Abstract: We extend a classic result of Büchi, Elgot, and Trakhtenbrot: MSO definable string transductions, i.e., string-to-string functions that are definable by an interpretation using monadic second-order (MSO) logic, are exactly those realized by deterministic two-way finite-state transducers, i.e., finite-state automata with a two-way input tape and a one-way output tape. Consequently, the equivalence of two MSO definable string transductions is decidable. In the nondeterministic case however, MSO definable string transductions, i.e., binary relations on strings that are MSO definable by an interpretation with parameters, are incomparable to those realized by nondeterministic two-way finite-state transducers. This is a motivation to look for another machine model, and we show that both classes of MSO definable string transductions are characterized in terms of Hennie machines, i.e., two-way finite-state transducers that are allowed to rewrite their input tape, but may visit each position of their input only a bounded number of times.
    BibTeX:
    @article{EH01,
      author = {Joost Engelfriet and Hendrik Jan Hoogeboom},
      title = {MSO Definable String Transductions and Two-Way Finite State Transducers},
      journal = {ACM Transactions on Computational Logic},
      year = {2001},
      volume = {2},
      pages = {216--254},
      doi = {http://dx.doi.org/10.1145/371316.371512}
    }
    
    Engelfriet, J. & Hoogeboom, H.J. Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure 2007 Logical Methods in Computer Science (LMCS)
    Vol. 3, pp. 1-27 
    article DOI  
    Abstract: String languages recognizable in (deterministic) log-space are characterized either by two-way (deterministic) multi-head automata, or following Immerman, by first-order logic with (deterministic) transitive closure. Here we elaborate this result, and match the number of heads to the arity of the transitive closure. More precisely, first-order logic with k-ary deterministic transitive closure has the same power as deterministic automata walking on their input with k heads, additionally using a finite set of nested pebbles. This result is valid for strings, ordered trees, and in general for families of graphs having a fixed automaton that can be used to traverse the nodes of each of the graphs in the family. Other examples of such families are grids, toruses, and rectangular mazes. For nondeterministic automata, the logic is restricted to positive occurrences of transitive closure. The special case of k=1 for trees, shows that single-head deterministic tree-walking automata with nested pebbles are characterized by first-order logic with unary deterministic transitive closure. This refines our earlier result that placed these automata between first-order and monadic second-order logic on trees.
    BibTeX:
    @article{EH07,
      author = {Joost Engelfriet and Hendrik Jan Hoogeboom},
      title = {Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure},
      journal = {Logical Methods in Computer Science (LMCS)},
      year = {2007},
      volume = {3},
      pages = {1--27},
      doi = {http://dx.doi.org/10.2168/LMCS-3(2:3)2007}
    }
    
    Engelfriet, J., Hoogeboom, H.J. & Samwel, B. XML Transformation by Tree-walking Transducers with Invisible Pebbles 2007 Principles of Database Systems (PODS), pp. 63-72  inproceedings DOI  
    Abstract: The pebble tree automaton and the pebble tree transducer are enhanced by additionally allowing an unbounded number of "invisible" pebbles (as opposed to the usual "visible" ones). The resulting pebble tree automata recognize the regular tree languages (i.e., can validate all generalized DTD's) and hence can find all matches of MSO definable n-ary patterns. Moreover, when viewed as a navigational device, they lead to an XPath-like formalism that has a path expression for every MSO definable binary pattern. The resulting pebble tree transducers can apply arbitrary MSO definable tests to (the observable part of) their configurations, they (still) have a decidable typechecking problem, and they can model the recursion mechanism of XSLT. The time complexity of the typechecking problem for conjunctive queries that use MSO definable binary patterns can often be reduced through the use of invisible pebbles.
    BibTeX:
    @inproceedings{EHS07,
      author = {Joost Engelfriet and Hendrik Jan Hoogeboom and Bart Samwel},
      title = {XML Transformation by Tree-walking Transducers with Invisible Pebbles},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2007},
      pages = {63--72},
      doi = {http://dx.doi.org/10.1145/1265530.1265540}
    }
    
    Engelfriet, J., Lilin, E. & Maletti, A. Extended Multi Bottom-Up Tree Transducers 2008 Developments in Language Theory (DLT), pp. 289-300  inproceedings DOI  
    Abstract: Extended multi bottom-up tree transducers are defined and investigated. They are an extension of multi bottom-up tree transducers by arbitrary, not just shallow, left-hand sides of rules; this includes rules that do not consume input. It is shown that such transducers can compute any transformation that is computed by a linear extended top-down tree transducer. Moreover, the classical composition results for bottom-up tree transducers are generalized to extended multi bottom-up tree transducers. Finally, a characterization in terms of extended top-down tree transducers is presented.
    BibTeX:
    @inproceedings{ELM08,
      author = {Joost Engelfriet and Eric Lilin and Andreas Maletti},
      title = {Extended Multi Bottom-Up Tree Transducers},
      booktitle = {Developments in Language Theory (DLT)},
      year = {2008},
      pages = {289--300},
      doi = {http://dx.doi.org/10.1007/978-3-540-85780-8_23}
    }
    
    Engelfriet, J. & Maneth, S. Tree Languages Generated by Context-Free Graph Grammars 1998 Theory and Application of Graph Transformations (TAGT), pp. 15-29  inproceedings  
    BibTeX:
    @inproceedings{EM98,
      author = {Joost Engelfriet and Sebastian Maneth},
      title = {Tree Languages Generated by Context-Free Graph Grammars},
      booktitle = {Theory and Application of Graph Transformations (TAGT)},
      year = {1998},
      pages = {15--29}
    }
    
    Engelfriet, J. & Maneth, S. Macro Tree Transducers, Attribute Grammars, and MSO Definable Tree Translations 1999 Information and Computation
    Vol. 154, pp. 34-91 
    article DOI  
    Abstract: A characterization is given of the class of tree translations definable in monadic second-order logic (MSO), in terms of macro tree transducers. The first main result is that the MSO definable tree translations are exactly those tree translations realized by macro tree transducers (MTTs) with regular look-ahead that are single use restricted. For this the single use restriction known from attribute grammars is generalized to MTTs. Since MTTs are closed under regular look-ahead, this implies that every MSO definable tree translation can be realized by an MTT. The second main result is that the class of MSO definable tree translations can also be obtained by restricting MTTs with regular look-ahead to be finite copying, i.e., to require that each input subtree is processed only a bounded number of times. The single use restriction is a rather strong, static restriction on the rules of an MTT, whereas the finite copying restriction is a more liberal, dynamic restriction on the derivations of an MTT.
    BibTeX:
    @article{EM99,
      author = {Joost Engelfriet and Sebastian Maneth},
      title = {Macro Tree Transducers, Attribute Grammars, and MSO Definable Tree Translations},
      journal = {Information and Computation},
      year = {1999},
      volume = {154},
      pages = {34--91},
      doi = {http://dx.doi.org/10.1006/inco.1999.2807}
    }
    
    Engelfriet, J. & Maneth, S. Output String Languages of Compositions of Deterministic Macro Tree Transducers 2002 Journal of Computer and System Sciences
    Vol. 64, pp. 350-395 
    article DOI  
    Abstract: The composition of total deterministic macro tree transducers gives rise to a proper hierarchy with respect to their output string languages (these are the languages obtained by taking the yields of the output trees). There is a language not in this hierarchy which can be generated by a (quite restricted) nondeterministic string transducer, namely, a two-way generalized sequential machine. Similar results hold for attributed tree transducers, for controlled EDT0L systems, and for YIELD mappings (which proves properness of the IO-hierarchy). Witnesses for the properness of the macro tree transducer hierarchy can already be found in the latter three hierarchies.
    BibTeX:
    @article{EM02,
      author = {Joost Engelfriet and Sebastian Maneth},
      title = {Output String Languages of Compositions of Deterministic Macro Tree Transducers},
      journal = {Journal of Computer and System Sciences},
      year = {2002},
      volume = {64},
      pages = {350--395},
      doi = {http://dx.doi.org/10.1006/jcss.2001.1816}
    }
    
    Engelfriet, J. & Maneth, S. Macro Tree Translations of Linear Size Increase are MSO Definable 2003 SIAM Journal on Computing
    Vol. 32, pp. 950-1006 
    article DOI  
    Abstract: The first main result is that if a macro tree translation is of linear size increase, i.e., if the size of every output tree is linearly bounded by the size of the corresponding input tree, then the translation is MSO definable (i.e., definable in monadic second-order logic). This gives a new characterization of the MSO definable tree translations in terms of macro tree transducers: they are exactly the macro tree translations of linear size increase. The second main result is that given a macro tree transducer, it can be decided whether or not its translation is MSO definable, and if it is, then an equivalent MSO transducer can be constructed. Similar results hold for attribute grammars, which define a subclass of the macro tree translations.
    BibTeX:
    @article{EM03a,
      author = {Joost Engelfriet and Sebastian Maneth},
      title = {Macro Tree Translations of Linear Size Increase are MSO Definable},
      journal = {SIAM Journal on Computing},
      year = {2003},
      volume = {32},
      pages = {950-1006},
      doi = {http://dx.doi.org/10.1137/S0097539701394511}
    }
    
    Engelfriet, J. & Maneth, S. A Comparison of Pebble Tree Transducers with Macro Tree Transducers 2003 Acta Informatica
    Vol. 39, pp. 613-698 
    article DOI  
    Abstract: The n-pebble tree transducer was recently proposed as a model for XML query languages. The four main results on deterministic transducers are: First, (1) the translation of an n-pebble tree transducer can be realized by a composition of n+1 0-pebble tree transducers. Next, the pebble tree transducer is compared with the macro tree transducer, a well-known model for syntax-directed semantics, with decidable type checking. The n-pebble tree transducer can be simulated by the macro tree transducer, which, by the first result, implies that (2) it can be realized by an (n+1)-fold composition of macro tree transducers. Conversely, every macro tree transducer can be simulated by a composition of 0-pebble tree transducers. Together these simulations prove that (3) the composition closure of n-pebble tree transducers equals that of macro tree transducers (and that of 0-pebble tree transducers). Similar results hold in the nondeterministic case. Finally, (4) the output languages of deterministic n-pebble tree transducers form a hierarchy with respect to the number n of pebbles.
    BibTeX:
    @article{EM03,
      author = {Joost Engelfriet and Sebastian Maneth},
      title = {A Comparison of Pebble Tree Transducers with Macro Tree Transducers},
      journal = {Acta Informatica},
      year = {2003},
      volume = {39},
      pages = {613--698},
      doi = {http://dx.doi.org/10.1007/s00236-003-0120-0}
    }
    
    Engelfriet, J., Rozenberg, G. & Slutzki, G. Tree transducers, $L$ systems, and Two-Way Machines 1980 Journal of Computer and System Sciences
    Vol. 20, pp. 150-202 
    article DOI  
    Abstract: A relationship between parallel rewriting systems and two-way machines is investigated. Restrictions on the "copying power" of these devices endow them with rich structuring and give insight into the issues of determinism, parallelism, and copying. Among the parallel rewriting systems considered are the top-down tree transducer, the generalized syntax-directed translation scheme, and the ETOL system, and among the two-way machines are the tree-walking automaton, the two-way finite-state transducer, and (generalizations of) the one-way checking stack automaton. The relationship of these devices to macro grammars is also considered. An effort is made to provide a systematic survey of a number of existing results.
    BibTeX:
    @article{ERS80,
      author = {Joost Engelfriet and Grzegorz Rozenberg and Giora Slutzki},
      title = {Tree transducers, $L$ systems, and Two-Way Machines},
      journal = {Journal of Computer and System Sciences},
      year = {1980},
      volume = {20},
      pages = {150--202},
      doi = {http://dx.doi.org/10.1016/0022-0000(80)90058-6}
    }
    
    Engelfriet, J. & Schmidt, E.M. IO and OI. I 1977 Journal of Computer and System Sciences
    Vol. 15, pp. 328-353 
    article DOI  
    Abstract: A fixed-point characterization of the inside-out (IO) and outside-in (OI) context-free tree languages is given. This characterization is used to obtain a theory of nondeterministic systems of context-free equations with parameters. Several ``Mezei-and-Wright-like'' results are obtained which relate the context-free tree languages to recognizable tree languages and to nondeterministic recursive program(scheme)s (called by value and called by name). Closure properties of the context-free tree languages are discussed. Hierarchies of higher level equational subsets of an algebra are considered.
    BibTeX:
    @article{ES77,
      author = {Joost Engelfriet and Erik Meineche Schmidt},
      title = {IO and OI. I},
      journal = {Journal of Computer and System Sciences},
      year = {1977},
      volume = {15},
      pages = {328--353},
      doi = {http://dx.doi.org/10.1016/S0022-0000(77)80034-2}
    }
    
    Engelfriet, J. & Schmidt, E.M. IO and OI. II 1978 Journal of Computer and System Sciences
    Vol. 16, pp. 67-99 
    article DOI  
    Abstract: In Part 1 of this paper (J. Comput. System Sci. 15, Number 3 (1977)) we presented a fixed point characterization of the (IO and OI) context-free tree languages. We showed that a context-free tree grammar can be viewed as a system of regular equations over a tree language substitution algebra. In this part we shall use these results to obtain a theory of systems of context-free equations over arbitrary continuous algebras. We refer to the Introduction of Part 1 for a description of the contents of this part.
    BibTeX:
    @article{ES78,
      author = {Joost Engelfriet and Erik Meineche Schmidt},
      title = {IO and OI. II},
      journal = {Journal of Computer and System Sciences},
      year = {1978},
      volume = {16},
      pages = {67--99},
      doi = {http://dx.doi.org/10.1016/0022-0000(78)90051-X}
    }
    
    Engelfriet, J., Schmidt, E.M. & van Leeuwen, J. Stack Machines and Classes of Nonnested Macro Languages 1980 Journal of the ACM
    Vol. 27, pp. 96-117 
    article DOI  
    BibTeX:
    @article{ESL80,
      author = {Joost Engelfriet and Erik Meineche Schmidt and Jan van Leeuwen},
      title = {Stack Machines and Classes of Nonnested Macro Languages},
      journal = {Journal of the ACM},
      year = {1980},
      volume = {27},
      pages = {96--117},
      doi = {http://doi.acm.org/10.1145/322169.322178}
    }
    
    Engelfriet, J. & Vogler, H. Macro Tree Transducers 1985 Journal of Computer and System Sciences
    Vol. 31, pp. 71-146 
    article DOI  
    Abstract: Macro tree transducers are a combination of top-down tree transducers and macrogrammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata theoretical topics like composition, decomposition, domains, and ranges of the induced translation classes. The extension with regular look-ahead is considered.
    BibTeX:
    @article{EV85,
      author = {Joost Engelfriet and Heiko Vogler},
      title = {Macro Tree Transducers},
      journal = {Journal of Computer and System Sciences},
      year = {1985},
      volume = {31},
      pages = {71--146},
      doi = {http://dx.doi.org/10.1016/0022-0000(85)90066-2}
    }
    
    Engelfriet, J. & Vogler, H. Pushdown Machines for the Macro Tree Transducer 1986 Theoretical Computer Science
    Vol. 42, pp. 251-368 
    article DOI  
    BibTeX:
    @article{EV86,
      author = {Joost Engelfriet and Heiko Vogler},
      title = {Pushdown Machines for the Macro Tree Transducer},
      journal = {Theoretical Computer Science},
      year = {1986},
      volume = {42},
      pages = {251--368},
      doi = {http://dx.doi.org/10.1016/0304-3975(86)90052-6}
    }
    
    Engelfriet, J. & Vogler, H. High Level Tree Transducers and Iterated Pushdown Tree Transducers 1988 Acta Informatica
    Vol. 26, pp. 131-192 
    article DOI  
    Abstract: n-level tree transducers ($n \geq 0$) combine the features of n-level tree grammars and of top-down tree transducers in the sense that the derivations of the tree grammars are syntax-directed by input trees. For running n, the sequence of n-level tree transducers starts with top-down tree transducers (n=0) and macro tree transducers (n=1). In this paper the class of tree-to-tree translations computed by n-level tree transducers is characterized by n-iterated pushdown tree transducers. Such a transducer can be considered as a regular tree grammar of which the derivations are syntax-directed by n-iterated pushdowns of trees; an n-iterated pushdown of trees is a pushdown of pushdowns of ... of pushdowns (n times) of trees. In particular, we investigate the total deterministic case, which is relevant for syntax-directed semantics of programming languages.
    BibTeX:
    @article{EV88,
      author = {Joost Engelfriet and Heiko Vogler},
      title = {High Level Tree Transducers and Iterated Pushdown Tree Transducers},
      journal = {Acta Informatica},
      year = {1988},
      volume = {26},
      pages = {131--192},
      doi = {http://dx.doi.org/10.1007/BF02915449}
    }
    
    Engelfriet, J. & Vogler, H. The Translation Power of Top-Down Tree-to-Graph Transducers 1994 Journal of Computer and System Sciences
    Vol. 49, pp. 258-305 
    article DOI  
    Abstract: We introduce a new syntax-directed translation device called top-down tree-to-graph transducer. Such transducers are very similar to the usual (total deterministic) top-down tree transducers except that the right-hand sides of their rules are hypergraphs rather than trees. Since we are aiming at a device which also allows us to translate trees into objects different from graphs, we focus our attention on so-called tree-generating top-down tree-to-graph transducers. Then the result of every computation is a hypergraph which represents a tree, and in its turn the tree can be interpreted in any algebra of appropriate signature. Although for both devices, top-down tree transducers and tree-generating top-down tree-to-graph transducers, the translation of a subtree of an input tree does not depend on its context, the latter transducers have much more transformational power than the former. In this paper we prove that tree-generating top-down tree-to-graph transducers are equivalent to (total deterministic) macro tree transducers, which are transducers for which the translation of a subtree may depend upon its context. We also prove that tree-generating top-down tree-to-graph transducers are closed under regular look-ahead.
    BibTeX:
    @article{EV94,
      author = {Joost Engelfriet and Heiko Vogler},
      title = {The Translation Power of Top-Down Tree-to-Graph Transducers},
      journal = {Journal of Computer and System Sciences},
      year = {1994},
      volume = {49},
      pages = {258--305},
      doi = {http://dx.doi.org/10.1016/S0022-0000(05)80050-9}
    }
    
    Engelfriet, J. & Vogler, H. The Equivalence of Bottom-Up and Top-Down Tree-to-Graph Transducers 1998 Journal of Computer and System Sciences
    Vol. 56, pp. 332-356 
    article DOI  
    Abstract: We introduce the bottom-up tree-to-graph transducer, which is very similar to the usual (total deterministic) bottom-up tree transducer except that it translates trees into hypergraphs rather than trees, using hypergraph substitution instead of tree substitution. If every output hypergraph of the transducer is a jungle, i.e., a hypergraph that can be unfolded into a tree, then the tree-to-graph transducer is said to be tree-generating and naturally defines a tree-to-tree translation. We prove that bottom-up tree-to-graph transducers define the same tree-to-tree translations as the previously introduced top-down tree-to-graph transducers. This is in contrast with the well-known incomparability of the usual bottom-up and top-down tree transducers.
    BibTeX:
    @article{EV98,
      author = {Joost Engelfriet and Heiko Vogler},
      title = {The Equivalence of Bottom-Up and Top-Down Tree-to-Graph Transducers},
      journal = {Journal of Computer and System Sciences},
      year = {1998},
      volume = {56},
      pages = {332--356},
      doi = {http://dx.doi.org/10.1006/jcss.1998.1573}
    }
    
    Felleisen, M. On the Expressive Power of Programming Languages 1991 Science of Computer Programming
    Vol. 17, pp. 35-75 
    article DOI  
    Abstract: The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction.
    BibTeX:
    @article{Felleisen91,
      author = {Matthias Felleisen},
      title = {On the Expressive Power of Programming Languages},
      journal = {Science of Computer Programming},
      year = {1991},
      volume = {17},
      pages = {35--75},
      doi = {http://dx.doi.org/10.1016/0167-6423(91)90036-W}
    }
    
    Fernandes, J.P. & Saraiva, J. Tools and Libraries to Model and Manipulate Circular Programs 2007 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 102-111  inproceedings DOI  
    Abstract: This paper presents techniques to model circular lazy programs in a strict, purely functional setting. Circular lazy programs model any algorithm based on multiple traversals over a recursive data structure as a single traversal function. Such elegant and concise circular programs are defined in a (strict or lazy) functional language and they are transformed into efficient strict and deforested, multiple traversal programs by using attribute grammars-based techniques. Moreover, we use standard slicing techniques to slice such circular lazy programs.

    We have expressed these transformations as an Haskell library and two tools have been constructed: the HaCirc tool that refactors Haskell lazy circular programs into strict ones, and the OCirc tool that extends Ocaml with circular definitions allowing programmers to write circular programs in Ocaml notation, which are transformed into strict Ocaml programs before they are executed. The first benchmarks of the different implementations are presented and show that for algorithms relying on a large number of traversals the resulting strict, deforested programs are more efficient than the lazy ones, both in terms of runtime and memory consumption.

    BibTeX:
    @inproceedings{FS07,
      author = {João Paulo Fernandes and João Saraiva},
      title = {Tools and Libraries to Model and Manipulate Circular Programs},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2007},
      pages = {102--111},
      doi = {http://dx.doi.org/10.1145/1244381.1244399}
    }
    
    Ferragina, P., Nitto, I. & Venturini, R. On the Bit-Complexity of Lempel-Ziv Compression 2009 Symposium on Discrete Algorithms (SODA), pp. 768-777  inproceedings URL 
    BibTeX:
    @inproceedings{FNV09,
      author = {Paolo Ferragina and Igor Nitto and Rossano Venturini},
      title = {On the Bit-Complexity of Lempel-Ziv Compression},
      booktitle = {Symposium on Discrete Algorithms (SODA)},
      year = {2009},
      pages = {768--777},
      url = {http://www.siam.org/proceedings/soda/2009/soda09.php}
    }
    
    Filiot, E. Logics for $n$-ary Queries in Trees 2008 School: Université des Sciences et Technologies de Lille  phdthesis URL 
    BibTeX:
    @phdthesis{Filiot08,
      author = {Emmanuel Filiot},
      title = {Logics for $n$-ary Queries in Trees},
      school = {Université des Sciences et Technologies de Lille},
      year = {2008},
      url = {http://www.ulb.ac.be/di/ssd/filiot/}
    }
    
    Filiot, E. & Tison, S. Regular $n$-ary Queries in Trees and Variable Independence 2008 International Conference on Theoretical Computer Science (IFIP TCS), pp. 429-443  inproceedings DOI  
    Abstract: Regular n-ary queries in trees are queries which are definable by an MSO formula with n free first-order variables. We investigate the variable independence problem -- originally introduced for databases -- in the context of trees. In particular, we show how to decide whether a regular query is equivalent to a union of cartesian products, independently of the input tree. As an intermediate step, we reduce this problem to the problem of deciding whether the number of answers to a regular query is bounded by some constant, independently of the input tree. As a (non-trivial) generalization, we introduce variable independence w.r.t. a dependence forest between blocks of variables, which we prove to be decidable.
    BibTeX:
    @inproceedings{FT08,
      author = {Emmanuel Filiot and Sophie Tison},
      title = {Regular $n$-ary Queries in Trees and Variable Independence},
      booktitle = {International Conference on Theoretical Computer Science (IFIP TCS)},
      year = {2008},
      pages = {429--443},
      doi = {http://dx.doi.org/10.1007/978-0-387-09680-3_29}
    }
    
    Finch, S.R. On the Regularity of Certain 1-Additive Sequences 1992 Journal of Combinatorial Theory, Series A
    Vol. 60, pp. 123-130 
    article DOI  
    Abstract: Queneau observed that certain 1-additive sequences (defined by Ulam) are regular in the sense that differences between adjacent terms are eventually periodic. Formulas for the period in one case are herein derived, based on Niederreiter's work on the distribution properties of linear recurring sequences and subject to the truth of a highly plausible conjecture.
    BibTeX:
    @article{Finch92,
      author = {Steven R. Finch},
      title = {On the Regularity of Certain 1-Additive Sequences},
      journal = {Journal of Combinatorial Theory, Series A},
      year = {1992},
      volume = {60},
      pages = {123--130},
      doi = {http://dx.doi.org/10.1016/0097-3165(92)90042-S}
    }
    
    Finger, M. SAT Solvers A Brief Introduction http://www.mat.unb.br/~ayala/MFingerSAT.pdf  misc URL 
    BibTeX:
    @misc{MFingerSAT,
      author = {Marcelo Finger},
      title = {SAT Solvers A Brief Introduction},
      url = {http://www.mat.unb.br/~ayala/MFingerSAT.pdf}
    }
    
    Fiore, M. & Leinster, T. Objects of Categories as Complex Numbers 2005 Advances in Mathematics
    Vol. 190, pp. 264-277 
    article DOI URL 
    Abstract: In many everyday categories (sets, spaces, modules, etc.) objects can be both added and multiplied. The arithmetic of such objects is a challenge because there is usually no subtraction. We prove a family of cases of the following principle: if an arithmetic statement about the objects can be proved by pretending that they are complex numbers, then there also exists an honest proof.
    BibTeX:
    @article{FL05,
      author = {Marcelo Fiore and Tom Leinster},
      title = {Objects of Categories as Complex Numbers},
      journal = {Advances in Mathematics},
      year = {2005},
      volume = {190},
      pages = {264--277},
      url = {http://arxiv.org/abs/math.CT/0212377},
      doi = {http://dx.doi.org/10.1016/j.aim.2004.01.002}
    }
    
    Fischer, M.J. Grammars with Macro-Like Productions 1968 IEEE Conference Record of 9th Annual Symposium on Switching and Automata Theory, pp. 131-142  inproceedings DOI  
    Abstract: Two new classes of grammars based on programming macros are studied. Both involve appending arguments to the intermediate symbols of a context-free grammar. They differ only in the order in which nested terms may be expanded: IO is expansion from the inside-out; OI from the outside-in. Both classes, in common with the context-free, have decidable emptiness and derivation problems, and both are closed under the operations of union, concatenation, Kleene closure (star), reversal, intersection with a regular set, and arbitrary homomorphism. OI languages are also closed under inverse homomorphism while IO languages are not. We exhibit two languages, one of which is IO but not OI and the other OI but not IO, showing that neither class contains the other. However, both trivially contain the class of context-free languages, and both are contained in the class of context-sensitive languages. Finally, the class of OI languages is identical to the class of indexed languages studied by Aho, and indeed many of the above theorems about OI languages follow directly from the equivalence.
    BibTeX:
    @inproceedings{Fischer68a,
      author = {Michael J. Fischer},
      title = {Grammars with Macro-Like Productions},
      booktitle = {IEEE Conference Record of 9th Annual Symposium on Switching and Automata Theory},
      year = {1968},
      pages = {131--142},
      doi = {http://dx.doi.org/10.1109/SWAT.1968.12}
    }
    
    Fischer, M.J. Grammars with Macro-Like Productions 1968 School: Harvard University, Cambridge  phdthesis  
    BibTeX:
    @phdthesis{Fischer68,
      author = {Michael J. Fischer},
      title = {Grammars with Macro-Like Productions},
      school = {Harvard University, Cambridge},
      year = {1968}
    }
    
    Flum, Jö., Frick, M. & Grohe, M. Query Evaluation via Tree-Decompositions 2002 Journal of the ACM
    Vol. 49, pp. 716-752 
    article DOI  
    Abstract: A number of efficient methods for evaluating first-order and monadic-second order queries on finite relational structures are based on tree-decompositions of structures or queries. We systematically study these methods. In the first-part of the paper we consider tree-like structures. We generalize a theorem of Courcelle [1990] by showing that on such structures a monadic second-order formula (with free first-order and second-order variables) can be evaluated in time linear in the structure size plus the size of the output. In the second part we study tree-like formulas. We generalize the notions of acyclicity and bounded tree-width from conjunctive queries to arbitrary first-order formulas in a straightforward way and analyze the complexity of evaluating formulas of these fragments. Moreover, we show that the acyclic and bounded tree-width fragments have the same expressive power as the well-known guarded fragment and the finite-variable fragments of first-order logic, respectively.
    BibTeX:
    @article{FFG02,
      author = {Jörg Flum and Markus Frick and Martin Grohe},
      title = {Query Evaluation via Tree-Decompositions},
      journal = {Journal of the ACM},
      year = {2002},
      volume = {49},
      pages = {716--752},
      doi = {http://doi.acm.org/10.1145/602220.602222}
    }
    
    Ford, B. Parsing Expression Grammars: A Recognition-Based Syntactic Foundation 2004 Principles of Programming Languages (POPL), pp. 111-122  inproceedings URL 
    Abstract: For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS/TDPL and gTS/GTDPL, which are here proven equivalent in effective recognition power.
    BibTeX:
    @inproceedings{Ford04,
      author = {Bryan Ford},
      title = {Parsing Expression Grammars: A Recognition-Based Syntactic Foundation},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2004},
      pages = {111--122},
      url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6583}
    }
    
    Fridlender, D. & Indrika, M. Do We Need Dependent Types? 2000 Journal of Functional Programming
    Vol. 10, pp. 409-415 
    article DOI  
    Abstract: This pearl is about some functions whose definitions seem to require a language with dependent types. We describe a technique for defining them in Haskell or ML, which are languages without dependent types.

    Consider, for example, the scheme defining zipWith in figure 1. When this scheme is instantiated with n equal to 1 we obtain the standard function map. In practice, other instances of the scheme are often useful as well.

    Figure 1 cannot be used as a definition of a function in Haskell because of the ellipses `...'. More importantly, the type of zipWith is parameterized by n, which seems to indicate the need for dependent types. However, as mentioned above, Haskell does not allow dependent types.

    BibTeX:
    @article{FI00,
      author = {Daniel Fridlender and Mia Indrika},
      title = {Do We Need Dependent Types?},
      journal = {Journal of Functional Programming},
      year = {2000},
      volume = {10},
      pages = {409--415},
      doi = {http://dx.doi.org/10.1017/S0956796800003658}
    }
    
    Frisch, A. & Hosoya, H. Towards Practical Typechecking for Macro Tree Transducers 2007 Database Programming Languages (DBPL), pp. 246-260  inproceedings DOI  
    Abstract: Macro tree transducers (mtt) are an important model that both covers many useful XML transformations and allows decidable exact typechecking. This paper reports our first step toward an implementation of mtt typechecker that has a practical efficiency. Our approach is to represent an input type obtained from a backward inference as an alternating tree automaton, in a style similar to Tozawa's XSLT0 typechecking. In this approach, typechecking reduces to checking emptiness of an alternating tree automaton. We propose several optimizations (Cartesian factorization, state partitioning) on the backward inference process in order to produce much smaller alternating tree automata than the naive algorithm, and we present our efficient algorithm for checking emptiness of alternating tree automata, where we exploit the explicit representation of alternation for local optimizations. Our preliminary experiments confirm that our algorithm has a practical performance that can typecheck simple transformations with respect to the full XHTML in a reasonable time.
    BibTeX:
    @inproceedings{FH07,
      author = {Alain Frisch and Haruo Hosoya},
      title = {Towards Practical Typechecking for Macro Tree Transducers},
      booktitle = {Database Programming Languages (DBPL)},
      year = {2007},
      pages = {246--260},
      doi = {http://dx.doi.org/10.1007/978-3-540-75987-4_17}
    }
    
    Frisch, A. & Hosoya, H. Towards Practical Typechecking for Macro Tree Transducers 2007 (RR-6107)  techreport URL 
    Abstract: Macro tree transducers (mtt) are an important model that both covers many useful XML transformations and allows decidable exact typechecking. This paper reports our first step toward an implementation of mtt typechecker that has a practical efficiency. Our approach is to represent an input type obtained from a backward inference as an alternating tree automaton, in a style similar to Tozawa's XSLT0 typechecking. In this approach, typechecking reduces to checking emptiness of an alternating tree automaton. We propose several optimizations (Cartesian factorization, state partitioning) on the backward inference process in order to produce much smaller alternating tree automata than the naive algorithm, and we present our efficient algorithm for checking emptiness of alternating tree automata, where we exploit the explicit representation of alternation for local optimizations. Our preliminary experiments confirm that our algorithm has a practical performance that can typecheck simple transformations with respect to the full XHTML in a reasonable time.
    BibTeX:
    @techreport{FH07a,
      author = {Alain Frisch and Haruo Hosoya},
      title = {Towards Practical Typechecking for Macro Tree Transducers},
      year = {2007},
      number = {RR-6107},
      url = {http://hal.inria.fr/inria-00126895/en/}
    }
    
    Frisch, A. & Nakano, K. Streaming XML Transformation Using Term Rewriting 2007 Programming Language Technologies for XML (PLAN-X), pp. 2-13  inproceedings URL 
    BibTeX:
    @inproceedings{FN07,
      author = {Alain Frisch and Keisuke Nakano},
      title = {Streaming XML Transformation Using Term Rewriting},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2007},
      pages = {2--13},
      url = {http://www.plan-x-2007.org/program/}
    }
    
    Fujiyoshi, A. Multi-Phase Tree Transformations 1997 IEICE Transactions on Information and Systems
    Vol. E80-A, pp. 761-768 
    article URL 
    Abstract: In this paper, we introduce a computational model of a tree transducer called a bi-stage transducer and study its properties. We consider a mapping on trees realized by composition of any sequence of top-down transducers and bottom-up transducers, and call such a mapping a multi-phase tree transformation. We think a multi-phase tree transformation is sufficiently powerful. It is shown that in the case of rank-preserving transducers, a multi-phase tree transformation is realized by a bi-stage transducer.
    Review: If 'rank-preserving', T^* and B^* collapse to T^2 or B^2. I think this should already have been mentioned by Baker...
    BibTeX:
    @article{Fujiyoshi97,
      author = {Akio Fujiyoshi},
      title = {Multi-Phase Tree Transformations},
      journal = {IEICE Transactions on Information and Systems},
      year = {1997},
      volume = {E80-A},
      pages = {761--768},
      url = {http://ci.nii.ac.jp/naid/110003225399/}
    }
    
    Fujiyoshi, A. Restrictions on Monadic Context-Free Tree Grammars 2004 Computational Linguistics (COLING), pp. 78-84  inproceedings DOI  
    Abstract: In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTG is also examined.
    Review: Non-erasure of LM-CFTG. I think it has already been solved in a more general setting (i.e., for arbitrary IO-grammars) by
    BibTeX:
    @inproceedings{Fujiyoshi04,
      author = {Akio Fujiyoshi},
      title = {Restrictions on Monadic Context-Free Tree Grammars},
      booktitle = {Computational Linguistics (COLING)},
      year = {2004},
      pages = {78--84},
      doi = {http://dx.doi.org/10.3115/1220355.1220367}
    }
    
    Fujiyoshi, A. Linearity and Nondeletion on Monadic Context-Free Tree Grammars 2005 Information Processing Letters
    Vol. 93, pp. 103-107 
    article DOI  
    Abstract: In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages.
    BibTeX:
    @article{Fujiyoshi05,
      author = {Akio Fujiyoshi},
      title = {Linearity and Nondeletion on Monadic Context-Free Tree Grammars},
      journal = {Information Processing Letters},
      year = {2005},
      volume = {93},
      pages = {103--107},
      doi = {http://dx.doi.org/10.1016/j.ipl.2004.10.008}
    }
    
    Fujiyoshi, A. Analogical Conception of Chomsky Normal Form and Greibach Normal Form for Linear, Monadic Context-Free Tree Grammars 2006 IEICE Transactions on Information and Systems
    Vol. E89-D, pp. 2933-2938 
    article DOI  
    Abstract: This paper presents the analogical conception of Chomsky normal form and Greibach normal form for linear, monadic context-free tree grammars (LM-CFTGs). LM-CFTGs generate the same class of languages as four well-known mildly context-sensitive grammars. It will be shown that any LM-CFTG can be transformed into equivalent ones in both normal forms. As Chomsky normal form and Greibach normal form for context-free grammars (CFGs) play a very important role in the study of formal properties of CFGs, it is expected that the Chomsky-like normal form and the Greibach-like normal form for LM-CFTGs will provide deeper analyses of the class of languages generated by mildly context-sensitive grammars.
    BibTeX:
    @article{Fujiyoshi06,
      author = {Akio Fujiyoshi},
      title = {Analogical Conception of Chomsky Normal Form and Greibach Normal Form for Linear, Monadic Context-Free Tree Grammars},
      journal = {IEICE Transactions on Information and Systems},
      year = {2006},
      volume = {E89-D},
      pages = {2933--2938},
      doi = {http://dx.doi.org/10.1093/ietisy/e89-d.12.2933}
    }
    
    Fujiyoshi, A. Application of the CKY Algorithm to Recognition of Tree Structures for Linear, Monadic Context-Free Tree Grammars 2007 IEICE Transactions on Information and Systems
    Vol. E90-D, pp. 388-394 
    article DOI  
    Abstract: In this paper, a recognition algorithm for the class of tree languages generated by linear, monadic context-free tree grammars (LM-CFTGs) is proposed. LM-CFTGs define an important class of tree languages because LM-CFTGs are weakly equivalent to tree adjoining grammars (TAGs). The algorithm uses the CKY algorithm as a subprogram and recognizes whether an input tree can be derived from a given LM-CFTG in O($n^4$) time, where n is the number of nodes of the input tree.
    BibTeX:
    @article{Fujiyoshi07,
      author = {Akio Fujiyoshi},
      title = {Application of the CKY Algorithm to Recognition of Tree Structures for Linear, Monadic Context-Free Tree Grammars},
      journal = {IEICE Transactions on Information and Systems},
      year = {2007},
      volume = {E90-D},
      pages = {388--394},
      doi = {http://dx.doi.org/10.1093/ietisy/e90-d.2.388}
    }
    
    Fujiyoshi, A. & Kawaharada, I. Deterministic Recognition of Trees Accepted by a Linear Pushdown Tree Automaton 2005 Conference on Implementation and Application of Automata (CIAA), pp. 129-140  inproceedings DOI  
    Abstract: In this paper, a deterministic recognition algorithm for the class of tree languages accepted by (nondeterministic) linear pushdown tree automata (L-PDTAs) is proposed. L-PDTAs accept an important class of tree languages since the class of their yield languages coincides with the class of yield languages generated by tree adjoining grammars (TAGs). The proposed algorithm is obtained by combining a bottom-up parsing procedure on trees with the CKY (Cocke-Kasami-Younger) algorithm. The running time of the algorithm is O($n^4$), where n is the number of nodes of an input tree.
    BibTeX:
    @inproceedings{FK05,
      author = {Akio Fujiyoshi and Ikuo Kawaharada},
      title = {Deterministic Recognition of Trees Accepted by a Linear Pushdown Tree Automaton},
      booktitle = {Conference on Implementation and Application of Automata (CIAA)},
      year = {2005},
      pages = {129--140},
      doi = {http://dx.doi.org/10.1007/11605157_11}
    }
    
    Fülöp, Z. On Attributed Tree Transducers 1981 Acta Cybernetica
    Vol. 5, pp. 261-279 
    article URL 
    BibTeX:
    @article{Fulop81,
      author = {Zoltán Fülöp},
      title = {On Attributed Tree Transducers},
      journal = {Acta Cybernetica},
      year = {1981},
      volume = {5},
      pages = {261--279},
      url = {http://www.inf.u-szeged.hu/actacybernetica/prevvols.xml}
    }
    
    Fülöp, Z. Undecidable Properties of Deterministic Top-Down Tree Transducers 1994 Theoretical Computer Science
    Vol. 134, pp. 311-328 
    article DOI  
    Abstract: Decidability questions concerning ranges of deterministic top-down tree transducers are considered. It is shown that the following nine problems are undecidable, for the ranges $L_1$ and $L_2$ of two arbitrary deterministic, nondeleting and finite-copying top-down tree transducers: Is $L_1 \cap L_2$ empty (infinite, recognizable)? Is the complement of $L_1$ empty (infinite, recognizable)? Is $L_1$ recognizable? Is $L_1 = L_2$ ($L_1 \subseteq L_2$)?

    A deterministic top-down tree transducer is a special terminating and confluent term rewriting system. Hence, its range is the set of irreducible elements derivable from a recognizable tree language, namely from its domain. The questions corresponding to the nine above are considered and shown to be undecidable for terminating and confluent term rewriting systems as well. For example, the result corresponding to the undecidability of ``Is $L_1$ recognizable?'' is as follows. It is undecidable, for an arbitrary terminating and confluent term rewriting system R and a recognizable tree language L, whether the set of elements irreducible with respect to R derivable from L is recognizable or not.

    BibTeX:
    @article{Fulop94,
      author = {Zoltán Fülöp},
      title = {Undecidable Properties of Deterministic Top-Down Tree Transducers},
      journal = {Theoretical Computer Science},
      year = {1994},
      volume = {134},
      pages = {311--328},
      doi = {http://dx.doi.org/10.1016/0304-3975(94)90241-0}
    }
    
    Fülöp, Z. & Gyenizse, P. On Injectivity of Deterministic Top-Down Tree Transducers 1993 Information Processing Letters
    Vol. 48, pp. 183-188 
    article DOI  
    Abstract: We give a simple proof for the decidability of injectivity of linear deterministic top-down tree transducers. Moreover, we show that injectivity is undecidable even for homomorphism tree transducers.
    BibTeX:
    @article{FG93,
      author = {Zoltán Fülöp and Pál Gyenizse},
      title = {On Injectivity of Deterministic Top-Down Tree Transducers},
      journal = {Information Processing Letters},
      year = {1993},
      volume = {48},
      pages = {183--188},
      doi = {http://dx.doi.org/10.1016/0020-0190(93)90143-W}
    }
    
    Fülöp, Z., Kühnemann, A. & Vogler, H. A Bottom-up Characterization of Deterministic Top-down Tree Transducers with Regular Look-ahead 2004 Information Processing Letters
    Vol. 91, pp. 57-67 
    article DOI  
    BibTeX:
    @article{FKV04,
      author = {Zoltán Fülöp and Armin Kühnemann and Heiko Vogler},
      title = {A Bottom-up Characterization of Deterministic Top-down Tree Transducers with Regular Look-ahead},
      journal = {Information Processing Letters},
      year = {2004},
      volume = {91},
      pages = {57--67},
      doi = {http://dx.doi.org/10.1016/j.ipl.2004.03.014}
    }
    
    Gabbay, M.J. & Mathijssen, A. One-and-a-halfth-order Logic 2008 Journal of Logic and Computation
    Vol. 18, pp. 521-562 
    article DOI URL 
    Abstract: The practice of first-order logic is replete with meta-level concepts. Most notably there are meta-variables ranging over formulae, variables, and terms, and properties of syntax such as alpha-equivalence, capture-avoiding substitution and assumptions about freshness of variables with respect to meta-variables. We present one-and-a-halfth-order logic, in which these concepts are made explicit. We exhibit both sequent and algebraic specifications of one-and-a-halfth-order logic derivability, show them equivalent, show that the derivations satisfy cut-elimination, and prove correctness of an interpretation of first-order logic within it.

    We discuss the technicalities in a wider context as a case-study for nominal algebra, as a logic in its own right, as an algebraisation of logic, as an example of how other systems might be treated, and also as a theoretical foundation for future implementation.

    BibTeX:
    @article{GM08,
      author = {Murdoch J. Gabbay and Aad Mathijssen},
      title = {One-and-a-halfth-order Logic},
      journal = {Journal of Logic and Computation},
      year = {2008},
      volume = {18},
      pages = {521--562},
      url = {http://www.gabbay.org.uk/papers.html},
      doi = {http://logcom.oxfordjournals.org/cgi/reprint/exm064}
    }
    
    Gabbay, M.J. & Pitts, A.M. A New Approach to Abstract Syntax with Variable Binding 2001 Formal Aspects of Computing
    Vol. 13, pp. 341-363 
    article DOI URL 
    Abstract: The permutation model of set theory with atoms (FM-sets), devised by Fraenkel and Mostowski in the 1930s, supports notions of `name-abstraction' and `fresh name' that provide a new way to represent, compute with, and reason about the syntax of formal systems involving variable-binding operations. Inductively defined FM-sets involving the name-abstraction set former (together with cartesian product and disjoint union) can correctly encode syntax modulo renaming of bound variables. In this way, the standard theory of algebraic data types can be extended to encompass signatures involving binding operators. In particular, there is an associated notion of structural recursion for defining syntax-manipulating functions (such as capture avoiding substitution, set of free variables, etc) and a notion of proof by structural induction, both of which remain pleasingly close to informal practice in computer science.
    BibTeX:
    @article{GP01,
      author = {Murdoch J. Gabbay and A. M. Pitts},
      title = {A New Approach to Abstract Syntax with Variable Binding},
      journal = {Formal Aspects of Computing},
      year = {2001},
      volume = {13},
      pages = {341--363},
      url = {http://www.gabbay.org.uk/papers.html},
      doi = {http://doi.ieeecomputersociety.org/10.1109/LICS.1999.782617}
    }
    
    Gallier, J.H. & Book, R.V. Reductions in Tree Replacement Systems 1985 Theoretical Computer Science
    Vol. 37, pp. 123-150 
    article DOI  
    BibTeX:
    @article{GB85,
      author = {Jean H. Gallier and Ronald V. Book},
      title = {Reductions in Tree Replacement Systems},
      journal = {Theoretical Computer Science},
      year = {1985},
      volume = {37},
      pages = {123--150},
      doi = {http://dx.doi.org/10.1016/0304-3975(85)90089-1}
    }
    
    Gardner, P.A., Smith, G.D., Wheelhouse, M.J. & Zarfaty, U.D. Local Hoare Reasoning about DOM 2008 Principles of Database Systems (PODS), pp. 261-270  inproceedings DOI  
    Abstract: The W3C Document Object Model (DOM) specifies an XML update library. DOM is written in English, and is therefore not compositional and not complete. We provide a first step towards a compositional specification of DOM. Unlike DOM, we are able to work with a minimal set of commands and obtain a complete reasoning for straight-line code. Our work transfers O'Hearn, Reynolds and Yang's local Hoare reasoning for analysing heaps to XML, viewing XML as an in-place memory store as does DOM. In particular, we apply recent work by Calcagno, Gardner and Zarfaty on local Hoare reasoning about simple tree update to this real-world DOM application. Our reasoning not only formally specifies a significant subset of DOM Core Level 1, but can also be used to verify, for example, invariant properties of simple Javascript programs.
    BibTeX:
    @inproceedings{GSWZ08a,
      author = {Philippa A. Gardner and Gareth D. Smith and Mark J. Wheelhouse and Uri D. Zarfaty},
      title = {Local Hoare Reasoning about DOM},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2008},
      pages = {261--270},
      doi = {http://doi.acm.org/10.1145/1376916.1376953}
    }
    
    Gardner, P., Smith, G., Wheelhouse, M. & Zarfaty, U. DOM: Towards a Formal Specification 2008 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    Abstract: The W3C Document Object Model (DOM) specifies an XML update library. DOM is written in English, and is therefore not compositional and not complete. We provide a first step towards a compositional specification of DOM. Unlike DOM, we are able to work with a minimal set of commands and obtain a complete reasoning for straight-line code. Our work transfers O'Hearn, Reynolds and Yang's local Hoare reasoning for analysing heaps to XML, viewing XML as an in-place memory store as does DOM. In particular, we apply recent work by Calcagno, Gardner and Zarfaty on local Hoare reasoning about a simple tree-update language to DOM, showing that our reasoning scales to DOM. Our reasoning not only formally specifies a significant subset of DOM Core Level 1, but can also be used to verify e.g. invariant properties of simple Javascript programs.
    BibTeX:
    @inproceedings{GSWZ08,
      author = {Philippa Gardner and Gareth Smith and Mark Wheelhouse and Uri Zarfaty},
      title = {DOM: Towards a Formal Specification},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2008},
      url = {http://gemo.futurs.inria.fr/events/PLANX2008/?page=accepted}
    }
    
    Gauwin, O., Caron, A., Niehren, J. & Tison, S. Complexity of Earliest Query Answering with Streaming Tree Automata 2008 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    BibTeX:
    @inproceedings{GCNT08,
      author = {Olivier Gauwin and Anne-Cécile Caron and Joachim Niehren and Sophie Tison},
      title = {Complexity of Earliest Query Answering with Streaming Tree Automata},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2008},
      url = {http://gemo.futurs.inria.fr/events/PLANX2008/?page=accepted}
    }
    
    Giegerich, R. Composition and Evaluation of Attribute Coupled Grammars 1988 Acta Informatica
    Vol. 25, pp. 355-423 
    article DOI  
    Abstract: With attribute coupled grammars, descriptions of subsequent compilation phases can be composed to a single phase on the description level. This creates the opportunity for independent compiler modularization on the description versus the implementation level. The composition introduces a nontrivial reshaping of attribute dependencies, and hence affects the overall strategy for attribute evaluation. For a hierarchy of evaluation classes reaching from S-attributed to noncircular attribute couplings (ACs), we investigate whether they are closed under composition. Where closure does not hold, we identify subclasses which have the closure property. We show that closure of 1-ordered and simpler classes can only be achieved when there is only a single syntactic attribute allowed, while in the case of absolutely noncircular attribute couplings, an arbitrary number of synthesized syntactic attributes can be used. We show how (suboptimal) evaluators for the composed description are obtained directly from the evaluators of the separate phases. We also investigate the complementary problem of reducing the attribute evaluation complexity of a given, monolithic phase specification by finding an appropriate decomposition into subphases which all belong to simpler evaluation classes. In particular, we find that the closed subclass of sweep-evaluable ACs is generated by a proper subclass of L-attributed ACs, while a similar characterization of 1-ordered ACs is proved to be impossible. Finally, we relate our observations to known results about descriptive power of attribute grammars and tree transducers.
    BibTeX:
    @article{Giegerich88,
      author = {Robert Giegerich},
      title = {Composition and Evaluation of Attribute Coupled Grammars},
      journal = {Acta Informatica},
      year = {1988},
      volume = {25},
      pages = {355--423},
      doi = {http://dx.doi.org/10.1007/BF02737108}
    }
    
    Gil, J. (Y.) & Zibin, Y. Efficient Subtyping Tests with PQ-Encoding 2005 ACM Transactions on Programming Languages and Systems
    Vol. 27, pp. 819-856 
    article DOI  
    Abstract: Given a type hierarchy, a subtyping test determines whether one type is a direct or indirect descendant of another type. Such tests are a frequent operation during the execution of object-oriented programs. The implementation challenge is in a space-efficient encoding of the type hierarchy that simultaneously permits efficient subtyping tests. We present a new scheme for encoding multiple- and single-inheritance hierarchies, which, in the standard benchmark hierarchies, reduces the footprint of all previously published schemes. Our scheme is called PQ-encoding (PQE) after PQ-trees, a data structure previously used in graph theory for finding the orderings that satisfy a collection of constraints. In particular, we show that in the traditional object layout model, the extra memory requirements for single-inheritance hierarchies are zero. In PQE, subtyping tests are constant time and use only two comparisons. The encoding creation time of PQE also compares favorably with previous results. It is less than 1 s on all standard benchmarks on a contemporary architecture, while the average time for processing a type is less than 1 ms. However, PQE is not an incremental algorithm. Other than PQ-trees, PQE employs several novel optimization techniques. These techniques are also applicable to improving the performance of other, previously published encoding schemes.
    BibTeX:
    @article{GZ05,
      author = {Joseph (Yossi) Gil and Yoav Zibin},
      title = {Efficient Subtyping Tests with PQ-Encoding},
      journal = {ACM Transactions on Programming Languages and Systems},
      year = {2005},
      volume = {27},
      pages = {819--856},
      doi = {http://doi.acm.org/10.1145/1086642.1086643}
    }
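    The constant-time, two-comparison style of test that PQE generalizes can be illustrated by the classic interval ("relative numbering") scheme for single-inheritance hierarchies. The sketch below is our own illustration of that simpler baseline, not the PQE algorithm itself.

```python
# Interval-based subtyping for a single-inheritance hierarchy:
# number types by DFS so the descendants of each type occupy a
# contiguous id range; a subtype test is then two comparisons.
# Hypothetical names and encoding, for illustration only.
def number_hierarchy(children, root):
    """children: {type: [direct subtypes]} -> {type: (id, max_descendant_id)}."""
    intervals, counter = {}, 0
    def dfs(t):
        nonlocal counter
        my_id = counter
        counter += 1
        for c in children.get(t, ()):
            dfs(c)
        intervals[t] = (my_id, counter - 1)
    dfs(root)
    return intervals

def is_subtype(intervals, s, t):
    """Is s a (reflexive) descendant of t?  Two comparisons."""
    sid = intervals[s][0]
    lo, hi = intervals[t]
    return lo <= sid <= hi
```

    PQE's contribution is making this kind of range test work space-efficiently for multiple inheritance, where no single DFS numbering suffices.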
    
    Gill, A., Launchbury, J. & Jones, S.L.P. A Short Cut to Deforestation 1993 Functional Programming Languages and Computer Architecture (FPCA), pp. 223-232  inproceedings DOI  
    Abstract: Lists are often used as ``glue'' to connect separate parts of a program together. We propose an automatic technique for improving the efficiency of such programs, by removing many of these intermediate lists, based on a single, simple, local transformation. We have implemented the method in the Glasgow Haskell compiler.

    BibTeX:
    @inproceedings{GLJ93,
      author = {Andrew Gill and John Launchbury and Simon L. Peyton Jones},
      title = {A Short Cut to Deforestation},
      booktitle = {Functional Programming Languages and Computer Architecture (FPCA)},
      year = {1993},
      pages = {223--232},
      doi = {http://doi.acm.org/10.1145/165180.165214}
    }
    
    Glück, R. Is There a Fourth Futamura Projection? 2009 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 51-60  inproceedings DOI  
    Abstract: The three classic Futamura projections stand as a cornerstone in the development of partial evaluation. The observation by Futamura [1983], that compiler generators produced by his third projection are self-generating, and the insight by Klimov and Romanenko [1987], that Futamura's abstraction scheme can be continued beyond the three projections, are systematically investigated, and several new applications for compiler generators are proposed. Possible applications include the generation of quasi-online compiler generators and of compiler generators for domain-specific languages, and the bootstrapping of compiler generators from program specializers. From a theoretical viewpoint, there is equality between the class of self-generating compiler generators and the class of compiler generators produced by the third Futamura projection. This exposition may lead to new practical applications of compiler generators, as well as deepen our theoretical understanding of program specialization.
    BibTeX:
    @inproceedings{Gluck09,
      author = {Robert Glück},
      title = {Is There a Fourth Futamura Projection?},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2009},
      pages = {51--60},
      doi = {http://doi.acm.org/10.1145/1480945.1480954}
    }
    
    Godoy, G., Maneth, S. & Tison, S. Classes of Tree Homomorphisms with Decidable Preservation of Regularity 2008 Foundations of Software Science and Computation Structures (FoSSaCS), pp. 127-141  inproceedings DOI  
    Abstract: Decidability of regularity preservation by a homomorphism is a well known open problem for regular tree languages. Two interesting subclasses of this problem are considered: first, it is proved that regularity preservation is decidable in polynomial time when the domain language is constructed over a monadic signature, i.e., over a signature where all symbols have arity 0 or 1. Second, decidability is proved for the case where non-linearity of the homomorphism is restricted to the root node (or nodes of bounded depth) of any input term. The latter result is obtained by proving decidability of this problem: Given a set of terms with regular constraints on the variables, is its set of ground instances regular? This extends previous results where regular constraints were not considered.
    BibTeX:
    @inproceedings{GMT08,
      author = {Guillem Godoy and Sebastian Maneth and Sophie Tison},
      title = {Classes of Tree Homomorphisms with Decidable Preservation of Regularity},
      booktitle = {Foundations of Software Science and Computation Structures (FoSSaCS)},
      year = {2008},
      pages = {127--141},
      doi = {http://dx.doi.org/10.1007/978-3-540-78499-9_10}
    }
    
    Gonthier, G. A Computer-Checked Proof of the Four Colour Theorem 2005   unpublished URL 
    BibTeX:
    @unpublished{Gonthier05,
      author = {Georges Gonthier},
      title = {A Computer-Checked Proof of the Four Colour Theorem},
      year = {2005},
      note = {http://research.microsoft.com/en-us/um/people/gonthier/4colproof.pdf},
      url = {http://research.microsoft.com/en-us/um/people/gonthier/4colproof.pdf}
    }
    
    Gottlob, G. & Koch, C. Monadic Datalog and the Expressive Power of Languages for Web Information Extraction 2004 Journal of the ACM
    Vol. 51, pp. 74-113 
    article DOI  
    Abstract: Research on information extraction from Web pages (wrapping) has seen much activity recently (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping. In this paper, we first study monadic datalog over trees as a wrapping language. We show that this simple language is equivalent to monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO has the right expressiveness required for Web information extraction and propose MSO as a yardstick for evaluating and comparing wrappers. Along the way, several other results on the complexity of query evaluation and query containment for monadic datalog over trees are established, and a simple normal form for this language is presented. Using the above results, we subsequently study the kernel fragment Elog- of the Elog wrapping language used in the Lixto system (a visual wrapper generator). Curiously, Elog- exactly captures MSO, yet is easier to use. Indeed, programs in this language can be entirely visually specified.
    BibTeX:
    @article{GK04,
      author = {Georg Gottlob and Christoph Koch},
      title = {Monadic Datalog and the Expressive Power of Languages for Web Information Extraction},
      journal = {Journal of the ACM},
      year = {2004},
      volume = {51},
      pages = {74--113},
      doi = {http://doi.acm.org/10.1145/962446.962450}
    }
    
    Gottlob, G., Koch, C. & Pichler, R. Efficient Algorithms for Processing XPath Queries 2005 ACM Transactions on Database Systems
    Vol. 30, pp. 444-491 
    article DOI  
    Abstract: Our experimental analysis of several popular XPath processors reveals a striking fact: Query evaluation in each of the systems requires time exponential in the size of queries in the worst case. We show that XPath can be processed much more efficiently, and propose main-memory algorithms for this problem with polynomial-time combined query evaluation complexity. Moreover, we present two fragments of XPath for which linear-time query processing algorithms exist.
    BibTeX:
    @article{GKP05,
      author = {Georg Gottlob and Christoph Koch and Reinhard Pichler},
      title = {Efficient Algorithms for Processing XPath Queries},
      journal = {ACM Transactions on Database Systems},
      year = {2005},
      volume = {30},
      pages = {444--491},
      doi = {http://doi.acm.org/10.1145/1071610.1071614}
    }
    
    Gou, G. & Chirkova, R. Efficient Algorithms for Evaluating XPath over Streams 2007 ACM SIGMOD International Conference on Management of Data (SIGMOD), pp. 269-280  inproceedings DOI  
    Abstract: In this paper we address the problem of evaluating XPath queries over streaming XML data. We consider a practical XPath fragment called Univariate XPath, which includes the commonly used '/' and '//' axes and allows *-node tests and arbitrarily nested predicates. It is well known that this XPath fragment can be efficiently evaluated in O($|D||Q|$) time in the non-streaming environment, where $|D|$ is the document size and $|Q|$ is the query size. However, this is not necessarily true in the streaming environment, since streaming algorithms have to satisfy a stricter requirement than non-streaming algorithms, in that all data must be read sequentially in one pass. Therefore, it is not surprising that state-of-the-art stream-querying algorithms have higher time complexity than O($|D||Q|$).

    In this paper we revisit the XPath stream-querying problem, and show that Univariate XPath can be efficiently evaluated in O($|D||Q|$) time in the streaming environment. Specifically, we propose two O($|D||Q|$)-time stream-querying algorithms, LQ and EQ, which are based on the lazy strategy and on the eager strategy, respectively. To the best of our knowledge, LQ and EQ are the first XPath stream-querying algorithms that achieve O($|D||Q|$) time performance. Further, our algorithms achieve O($|D||Q|$) time performance without trading off space performance. Instead, they have better buffering-space performance than state-of-the-art stream-querying algorithms. In particular, EQ achieves optimal buffering-space performance. Our experimental results show that our algorithms have not only good theoretical complexity but also considerable practical performance advantages over existing algorithms.

    BibTeX:
    @inproceedings{GC07,
      author = {Gang Gou and Rada Chirkova},
      title = {Efficient Algorithms for Evaluating XPath over Streams},
      booktitle = {ACM SIGMOD International Conference on Management of Data (SIGMOD)},
      year = {2007},
      pages = {269--280},
      doi = {http://doi.acm.org/10.1145/1247480.1247512}
    }
    
    Graham, S.L., Harrison, M.A. & Ruzzo, W.L. On Line Context Free Language Recognition in less than Cubic Time 1976 ACM Symposium on Theory of Computing (STOC), pp. 112-120  inproceedings DOI  
    Abstract: A new on-line context free language recognition algorithm is presented which is derived from Earley's algorithm and has several advantages over the original. First, the new algorithm not only is conceptually simpler than Earley's, but also allows significant speed improvements. Second, our algorithm serves to explain the connections between Earley's algorithm and the Cocke-Kasami-Younger algorithm. Third, our algorithm allows an implementation which uses only $O(n^2/\log n)$ operations on bit vectors of length $n$, or $O(n^3/\log n)$ operations on a RAM. This makes it the fastest known on-line context free language recognition algorithm.
    BibTeX:
    @inproceedings{GHR76,
      author = {Susan L. Graham and Michael A. Harrison and Walter L. Ruzzo},
      title = {On Line Context Free Language Recognition in less than Cubic Time},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {1976},
      pages = {112--120},
      doi = {http://doi.acm.org/10.1145/800113.803638}
    }
    
    Greenlaw, R., Hoover, H.J. & Ruzzo, W.L. Limits to Parallel Computation: P-Completeness Theory 1995   book  
    BibTeX:
    @book{GHR95,
      author = {Raymond Greenlaw and H. James Hoover and Walter L. Ruzzo},
      title = {Limits to Parallel Computation: P-Completeness Theory},
      publisher = {Oxford University Press},
      year = {1995}
    }
    
    Guessarian, I. Pushdown Tree Automata 1983 Mathematical Systems Theory
    Vol. 16, pp. 237-263 
    article DOI  
    Abstract: We define topdown pushdown tree automata (PDTA's) which extend the usual string pushdown automata by allowing trees instead of strings in both the input and the stack. We prove that PDTA's recognize the class of context-free tree languages. (Quasi)realtime and deterministic PDTA's accept the classes of Greibach and deterministic tree languages, respectively. Finally, PDTA's are shown to be equivalent to restricted PDTA's, whose stack is linear: this both yields a more operational way of recognizing context-free tree languages and connects them with the class of indexed languages.
    BibTeX:
    @article{Guessarian83,
      author = {Irène Guessarian},
      title = {Pushdown Tree Automata},
      journal = {Mathematical Systems Theory},
      year = {1983},
      volume = {16},
      pages = {237--263},
      doi = {http://dx.doi.org/10.1007/BF01744582}
    }
    
    Haeupler, B., Kavitha, T., Mathew, R., Sen, S. & Tarjan, R.E. Faster Algorithms for Incremental Topological Ordering 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 421-433  inproceedings DOI  
    Abstract: We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O($m^{1/2}$) amortized time per arc and our second algorithm takes O($n^{2.5}/m$) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O($m^{1/2}$) bound improves the best previous bound by a factor of $\log n$ and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O($n^{2.5}/m$) bound improves the best previously published bound by a factor of $n^{1/4}$ and a recent bound obtained independently of our work by a factor of $\log n$. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.

    This paper combines the best results of [12] and [7]. The former gives two incremental topological ordering algorithms, with amortized time bounds per arc addition of O($m^{1/2} + (n \log n)/m^{1/2}$) and O($n^{2.5}/m$). The latter, which was written with knowledge of the former, gives an algorithm with a bound of O($m^{1/2}$).

    BibTeX:
    @inproceedings{HKMST08,
      author = {Bernhard Haeupler and Telikepalli Kavitha and Rogers Mathew and Siddhartha Sen and Robert E. Tarjan},
      title = {Faster Algorithms for Incremental Topological Ordering},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {421--433},
      doi = {http://dx.doi.org/10.1007/978-3-540-70575-8_35}
    }
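    As a baseline for the problem this entry improves on, the sketch below maintains a topological order under arc insertions by simply recomputing with Kahn's algorithm after each insertion, reporting a cycle when one appears. This costs O(n + m) per arc, which is what the paper's amortized bounds beat; the class and method names are our own, not the paper's.

```python
# Naive incremental topological ordering: full Kahn recompute per arc.
from collections import deque

class IncrementalTopo:
    def __init__(self, n):
        self.n = n
        self.succ = [[] for _ in range(n)]

    def add_arc(self, u, v):
        """Add arc u -> v; return a topological order, or None
        (and roll the arc back) if the arc creates a cycle."""
        self.succ[u].append(v)
        indeg = [0] * self.n
        for ws in self.succ:
            for w in ws:
                indeg[w] += 1
        queue = deque(i for i in range(self.n) if indeg[i] == 0)
        order = []
        while queue:
            x = queue.popleft()
            order.append(x)
            for w in self.succ[x]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        if len(order) < self.n:      # some vertex stayed on a cycle
            self.succ[u].pop()       # undo the offending arc
            return None
        return order
```

    The paper's algorithms avoid this full recompute by two-way search (sparse case) or by searching the order space directly (dense case).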
    
    Harel, D. On Folk Theorems 1980 Communications of the ACM
    Vol. 23, pp. 379-389 
    article DOI  
    BibTeX:
    @article{Harel80,
      author = {David Harel},
      title = {On Folk Theorems},
      journal = {Communications of the ACM},
      year = {1980},
      volume = {23},
      pages = {379--389},
      doi = {http://doi.acm.org/10.1145/358886.358892}
    }
    
    Henglein, F. Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time 2008 International Conference on Functional Programming (ICFP), pp. 91-102  inproceedings DOI  
    Abstract: We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that worst-case linear-time discrimination functions (discriminators) can be defined generically, by (co-)induction on an expressive language of order denotations. The generic definition yields discriminators that generalize both distributive sorting and multiset discrimination. The generic discriminator can be coded compactly using list comprehensions, with order denotations specified using Generalized Algebraic Data Types (GADTs). A GADT-free combinator formulation of discriminators is also given.

    We give some examples of the uses of discriminators, including a new most-significant-digit lexicographic sorting algorithm.

    Discriminators generalize binary comparison functions: They operate on n arguments at a time, but do not expose more information than the underlying equivalence, respectively ordering relation on the arguments. We argue that primitive types with equality (such as references in ML) and ordered types (such as the machine integer type), should expose their equality, respectively standard ordering relation, as discriminators: Having only a binary equality test on a type requires $\Theta(n^2)$ time to find all the occurrences of an element in a list of length n, for each element in the list, even if the equality test takes only constant time. A discriminator accomplishes this in linear time. Likewise, having only a (constant-time) comparison function requires $\Theta(n \log n)$ time to sort a list of n elements. A discriminator can do this in linear time.

    BibTeX:
    @inproceedings{Henglein08,
      author = {Fritz Henglein},
      title = {Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {91--102},
      doi = {http://doi.acm.org/10.1145/1411203.1411220}
    }
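
    Henglein's complexity gap can be illustrated with a small sketch (the function name and hashing-based bucketing are illustrative choices of mine, not Henglein's generic construction, which works by (co-)induction on order denotations): with only a binary equality test, collecting all occurrences of each element needs pairwise comparisons, while a discriminator-style single pass groups the satellite values of equal keys in linear time.

    ```python
    def discriminate(pairs):
        """Multiset-discrimination sketch: group the values of (key, value)
        pairs by key in one linear pass, one dict lookup per element."""
        groups = {}
        for key, value in pairs:
            groups.setdefault(key, []).append(value)
        return list(groups.values())

    # Group the positions of equal elements in linear time, something a
    # binary equality test alone cannot do in fewer than ~n^2 comparisons.
    xs = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    print(discriminate((x, i) for i, x in enumerate(xs)))
    # -> [[0, 9], [1, 3], [2], [4, 8], [5], [6], [7]]
    ```

    Like a discriminator, this exposes only the induced grouping, not any ordering on the keys beyond first occurrence.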
    
    Henzinger, M.R., Henzinger, T.A. & Kopke, P.W. Computing Simulations on Finite and Infinite Graphs 1995 Foundations of Computer Science (FOCS), pp. 453-462  inproceedings DOI  
    Abstract: We present algorithms for computing similarity relations of labeled graphs. Similarity relations have applications for the refinement and verification of reactive systems. For finite graphs, we present an O(mn) algorithm for computing the similarity relation of a graph with n vertices and m edges (assuming m/spl ges/n). For effectively presented infinite graphs, we present a symbolic similarity-checking procedure that terminates if a finite similarity relation exists. We show that 2D rectangular automata, which model discrete reactive systems with continuous environments, define effectively presented infinite graphs with finite similarity relations. It follows that the refinement problem and the /spl forall/CTL* model-checking problem are decidable for 2D rectangular automata.
    BibTeX:
    @inproceedings{HHK95a,
      author = {Monika R. Henzinger and Thomas A. Henzinger and Peter W. Kopke},
      title = {Computing Simulations on Finite and Infinite Graphs},
      booktitle = {Foundations of Computer Science (FOCS)},
      year = {1995},
      pages = {453--462},
      doi = {http://dx.doi.org/10.1109/SFCS.1995.492576}
    }
    
    Hidaka, S., Hu, Z., Kato, H. & Nakano, K. Towards Compositional Approach to Model Transformations 2008 (GRACE-TR-2008-01)  techreport URL 
    BibTeX:
    @techreport{HHKN08,
      author = {Soichiro Hidaka and Zhenjiang Hu and Hiroyuki Kato and Keisuke Nakano},
      title = {Towards Compositional Approach to Model Transformations},
      year = {2008},
      number = {GRACE-TR-2008-01},
      url = {http://research.nii.ac.jp/~hidaka/big/www/papers.html}
    }
    
    Hidaka, S., Hu, Z., Kato, H. & Nakano, K. Towards Compositional Approach to Model Transformation for Software Development 2009 ACM Symposium on Applied Computing (SAC)  inproceedings URL 
    BibTeX:
    @inproceedings{HHKN09,
      author = {Soichiro Hidaka and Zhenjiang Hu and Hiroyuki Kato and Keisuke Nakano},
      title = {Towards Compositional Approach to Model Transformation for Software Development},
      booktitle = {ACM Symposium on Applied Computing (SAC)},
      year = {2009},
      url = {http://research.nii.ac.jp/~hidaka/big/www/papers.html}
    }
    
    Hidaka, S., Hu, Z., Kato, H. & Nakano, K. A Compositional Approach to Bidirectional Model Transformation 2009 International Conference on Software Engineering (ICSE)  inproceedings URL 
    BibTeX:
    @inproceedings{HHKN09a,
      author = {Soichiro Hidaka and Zhenjiang Hu and Hiroyuki Kato and Keisuke Nakano},
      title = {A Compositional Approach to Bidirectional Model Transformation},
      booktitle = {International Conference on Software Engineering (ICSE)},
      year = {2009},
      url = {http://research.nii.ac.jp/~hidaka/big/www/papers.html}
    }
    
    Hillebrand, G.G. Finite Model Theory in the Simply Typed Lambda Calculus 1994 School: Brown University  phdthesis  
    BibTeX:
    @phdthesis{Hillebrand94,
      author = {Gerd G. Hillebrand},
      title = {Finite Model Theory in the Simply Typed Lambda Calculus},
      school = {Brown University},
      year = {1994}
    }
    
    Hinze, R. Functional Pearl: Streams and Unique Fixed Points 2008 International Conference on Functional Programming (ICFP), pp. 189-200  inproceedings DOI  
    Abstract: Streams, infinite sequences of elements, live in a coworld: they are given by a coinductive data type, operations on streams are implemented by corecursive programs, and proofs are conducted using coinduction. But there is more to it: suitably restricted, stream equations possess unique solutions, a fact that is not very widely appreciated. We show that this property gives rise to a simple and attractive proof technique essentially bringing equational reasoning to the coworld. In fact, we redevelop the theory of recurrences, finite calculus and generating functions using streams and stream operators building on the cornerstone of unique solutions. The development is constructive: streams and stream operators are implemented in Haskell, usually by one-liners. The resulting calculus or library, if you wish, is elegant and fun to use. Finally, we rephrase the proof of uniqueness using generalised algebraic data types.
    BibTeX:
    @inproceedings{Hinze08,
      author = {Ralf Hinze},
      title = {Functional Pearl: Streams and Unique Fixed Points},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {189--200},
      doi = {http://doi.acm.org/10.1145/1411204.1411232}
    }
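
    Hinze develops the theory in Haskell, where a stream equation such as fib = 0 ≺ 1 ≺ (fib + tail fib) has a unique solution. A rough Python rendering of that self-referential definition uses a generator that consumes two copies of itself (inefficient, but it makes the fixed-point reading concrete; the encoding is mine, not Hinze's):

    ```python
    from itertools import islice

    def fib():
        """Realize the stream equation  fib = 0 < 1 < (fib + tail fib)
        lazily: yield the two head elements, then the pointwise sum of
        the stream with its own tail."""
        yield 0
        yield 1
        a, b = fib(), fib()
        next(b)  # b = tail fib
        for x, y in zip(a, b):
            yield x + y

    print(list(islice(fib(), 10)))  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    ```

    Uniqueness of the solution is what licenses equational proofs about such definitions: any stream satisfying the same equation is this one.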
    
    Hodges, W. Compositional Semantics for a Language of Imperfect Information 1997 Logic Journal of IGPL
    Vol. 5, pp. 539-563 
    article DOI  
    Abstract: We describe a logic which is the same as first-order logic except that it allows control over the information that passes down from formulas to subformulas. For example the logic is adequate to express branching quantifiers. We describe a compositional semantics for this logic; in particular this gives a compositional meaning to formulas of the 'information-friendly' language of Hintikka and Sandu. For first-order formulas the semantics reduces to Tarski's semantics for first-order logic. We prove that two formulas have the same interpretation in all structures if and only if replacing an occurrence of one by an occurrence of the other in a sentence never alters the truth-value of the sentence in any structure.
    BibTeX:
    @article{Hodges97,
      author = {Wilfrid Hodges},
      title = {Compositional Semantics for a Language of Imperfect Information},
      journal = {Logic Journal of IGPL},
      year = {1997},
      volume = {5},
      pages = {539--563},
      doi = {http://dx.doi.org/10.1093/jigpal/5.4.539}
    }
    
    Hofbauer, D., Huber, M. & Kucherov, G. Some Results on Top-Context-Free Tree Languages 1994 Colloquium on Trees in Algebra and Programming (CAAP), pp. 157-171  inproceedings  
    BibTeX:
    @inproceedings{HHK94,
      author = {Dieter Hofbauer and Maria Huber and Gregory Kucherov},
      title = {Some Results on Top-Context-Free Tree Languages},
      booktitle = {Colloquium on Trees in Algebra and Programming (CAAP)},
      year = {1994},
      pages = {157--171}
    }
    
    Hofbauer, D., Huber, M. & Kucherov, G. How to Get Rid of Projection Rules in Context-Free Tree Grammars 1995 Symposium on Language, Logic and Computation, pp. 235-247  inproceedings  
    BibTeX:
    @inproceedings{HHK95,
      author = {Dieter Hofbauer and Maria Huber and Gregory Kucherov},
      title = {How to Get Rid of Projection Rules in Context-Free Tree Grammars},
      booktitle = {Symposium on Language, Logic and Computation},
      year = {1995},
      pages = {235--247}
    }
    
    Hopcroft, J., Paul, W. & Valiant, L. On Time Versus Space 1977 Journal of the ACM
    Vol. 24, pp. 332-337 
    article DOI  
    Abstract: It is shown that every deterministic multitape Turing machine of time complexity t(n) can be simulated by a deterministic Turing machine of tape complexity t(n)/logt(n). Consequently, for tape constructable t(n), the class of languages recognizable by multitape Turing machines of time complexity t(n) is strictly contained in the class of languages recognized by Turing machines of tape complexity t(n). In particular the context-sensitive languages cannot be recognized in linear time by deterministic multitape Turing machines.
    BibTeX:
    @article{HPV77,
      author = {John Hopcroft and Wolfgang Paul and Leslie Valiant},
      title = {On Time Versus Space},
      journal = {Journal of the ACM},
      year = {1977},
      volume = {24},
      pages = {332--337},
      doi = {http://doi.acm.org/10.1145/322003.322015}
    }
    
    Hopcroft, J.E. An $n \log n$ Algorithm for Minimizing States in a Finite Automaton 1971 (CS-TR-71-190)  techreport URL 
    BibTeX:
    @techreport{Hopcroft71,
      author = {John E. Hopcroft},
      title = {An $n \log n$ Algorithm for Minimizing States in a Finite Automaton},
      year = {1971},
      number = {CS-TR-71-190},
      url = {http://infolab.stanford.edu/pub/cstr/reports/cs/tr/71/190/?C=M;O=D}
    }
    
    Hosoya, H. Regular Expression Types for XML 2000 School: The University of Tokyo  phdthesis URL 
    BibTeX:
    @phdthesis{Hosoya00,
      author = {Haruo Hosoya},
      title = {Regular Expression Types for XML},
      school = {The University of Tokyo},
      year = {2000},
      url = {http://arbre.is.s.u-tokyo.ac.jp/~hahosoya/publ.html}
    }
    
    Hosoya, H. Foundations of XML Processing 2007 http://arbre.is.s.u-tokyo.ac.jp/~hahosoya/xmlbook/  misc URL 
    BibTeX:
    @misc{Hosoya07,
      author = {Haruo Hosoya},
      title = {Foundations of XML Processing},
      year = {2007},
      url = {http://arbre.is.s.u-tokyo.ac.jp/~hahosoya/xmlbook/}
    }
    
    Hosoya, H. & Pierce, B.C. Regular Expression Pattern Matching for XML 2003 Journal of Functional Programming
    Vol. 13, pp. 961-1004 
    article DOI  
    Abstract: We propose regular expression pattern matching as a core feature of programming languages for manipulating XML. We extend conventional pattern-matching facilities (as in ML) with regular expression operators such as repetition (*), alternation (|), etc., that can match arbitrarily long sequences of subtrees, allowing a compact pattern to extract data from the middle of a complex sequence. We then show how to check standard notions of exhaustiveness and redundancy for these patterns. Regular expression patterns are intended to be used in languages with type systems based on regular expression types. To avoid excessive type annotations, we develop a type inference scheme that propagates type constraints to pattern variables from the type of input values. The type inference algorithm translates types and patterns into regular tree automata, and then works in terms of standard closure operations (union, intersection, and difference) on tree automata. The main technical challenge is dealing with the interaction of repetition and alternation patterns with the first-match policy, which gives rise to subtleties concerning both the termination and precision of the analysis. We address these issues by introducing a data structure representing these closure operations lazily.
    BibTeX:
    @article{HP03,
      author = {Haruo Hosoya and Benjamin C. Pierce},
      title = {Regular Expression Pattern Matching for XML},
      journal = {Journal of Functional Programming},
      year = {2003},
      volume = {13},
      pages = {961--1004},
      doi = {http://dx.doi.org/10.1017/S0956796802004410}
    }
    
    Huet, G. The Zipper 1997 Journal of Functional Programming
    Vol. 7, pp. 549-554 
    article DOI  
    Abstract: Almost every programmer has faced the problem of representing a tree together with a subtree that is the focus of attention, where that focus may move left, right, up or down the tree. The Zipper is Huet's nifty name for a nifty data structure which fulfills this need. I wish I had known of it when I faced this task, because the solution I came up with was not quite so efficient or elegant as the Zipper.

    BibTeX:
    @article{Huet97,
      author = {Gérard Huet},
      title = {The Zipper},
      journal = {Journal of Functional Programming},
      year = {1997},
      volume = {7},
      pages = {549--554},
      doi = {http://dx.doi.org/10.1017/S0956796897002864}
    }
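
    Huet presents the zipper in ML; a minimal Python sketch (class and method names are mine) captures the idea: the focused subtree travels with a path of contexts, each recording the parent's value and the sibling left behind, so moving and editing are purely applicative and never mutate the original tree.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: str
        left: Optional['Node'] = None
        right: Optional['Node'] = None

    @dataclass
    class Zipper:
        focus: Node
        path: list  # contexts: ('L' or 'R', parent value, untouched sibling)

        def down_left(self):
            # Descend to the left child, remembering how to rebuild the parent.
            return Zipper(self.focus.left,
                          self.path + [('L', self.focus.value, self.focus.right)])

        def up(self):
            # Rebuild the parent from the top context and the (possibly edited) focus.
            side, value, sibling = self.path[-1]
            node = Node(value, self.focus, sibling) if side == 'L' \
                else Node(value, sibling, self.focus)
            return Zipper(node, self.path[:-1])

        def replace(self, node):
            # Swap out the focused subtree without touching the original tree.
            return Zipper(node, self.path)

    t = Node('a', Node('b'), Node('c'))
    z = Zipper(t, []).down_left().replace(Node('B')).up()
    print(z.focus.value, z.focus.left.value, z.focus.right.value)  # a B c
    print(t.left.value)  # b -- the original tree is unchanged
    ```

    Each movement is O(1), and sharing the untouched sibling keeps edits cheap, which is exactly the efficiency Huet's note is about.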
    
    Ierusalimschy, R., de Figueiredo, L.H. & Celes, W. The implementation of Lua 5.0 2005 Journal of Universal Computer Science
    Vol. 11, pp. 1159-1176 
    article URL 
    BibTeX:
    @article{IFC05,
      author = {R. Ierusalimschy and L. H. de Figueiredo and W. Celes},
      title = {The implementation of Lua 5.0},
      journal = {Journal of Universal Computer Science},
      year = {2005},
      volume = {11},
      pages = {1159--1176},
      url = {http://www.lua.org/papers.html}
    }
    
    Igarashi, A., Pierce, B.C. & Wadler, P. Featherweight Java: a Minimal Core Calculus for Java and GJ 2001 ACM Transactions on Programming Languages and Systems
    Vol. 23, pp. 396-450 
    article DOI URL 
    Abstract: Several recent studies have introduced lightweight versions of Java: reduced languages in which complex features like threads and reflection are dropped to enable rigorous arguments about key properties such as type safety. We carry this process a step further, omitting almost all features of the full language (including interfaces and even assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus does to languages such as ML and Haskell. It offers a similar computational "feel," providing classes, methods, fields, inheritance, and dynamic typecasts with a semantics closely following Java's. A proof of type safety for Featherweight Java thus illustrates many of the interesting features of a safety proof for the full language, while remaining pleasingly compact. The minimal syntax, typing rules, and operational semantics of Featherweight Java make it a handy tool for studying the consequences of extensions and variations. As an illustration of its utility in this regard, we extend Featherweight Java with generic classes in the style of GJ (Bracha, Odersky, Stoutamire, and Wadler) and give a detailed proof of type safety. The extended system formalizes for the first time some of the key features of GJ.
    BibTeX:
    @article{IPW01,
      author = {Atsushi Igarashi and Benjamin C. Pierce and Philip Wadler},
      title = {Featherweight Java: a Minimal Core Calculus for Java and GJ},
      journal = {ACM Transactions on Programming Languages and Systems},
      year = {2001},
      volume = {23},
      pages = {396--450},
      url = {http://homepages.inf.ed.ac.uk/wadler/topics/gj.html},
      doi = {http://doi.acm.org/10.1145/503502.503505}
    }
    
    Lewis II, P.M. & Stearns, R.E. Syntax-Directed Transduction 1968 Journal of the ACM
    Vol. 15, pp. 465-488 
    article DOI  
    Abstract: A transduction is a mapping from one set of sequences to another. A syntax-directed transduction is a particular type of transduction which is defined on the grammar of a context-free language and which is meant to be a model of part of the translation process used in many compilers. The transduction is considered from an automata theory viewpoint as specifying the input-output relation of a machine. Special consideration is given to machines called translators which both transduce and recognize. In particular, some special conditions are investigated under which syntax-directed translations can be performed on (deterministic) pushdown machines. In addition, some time bounds for translations on Turing machines are derived.
    BibTeX:
    @article{LS68,
      author = {Philip M. Lewis II and Richard Edwin Stearns},
      title = {Syntax-Directed Transduction},
      journal = {Journal of the ACM},
      year = {1968},
      volume = {15},
      pages = {465--488},
      doi = {http://doi.acm.org/10.1145/321466.321477}
    }
    
    Hunt III, H.B. On the Complexity of Finite, Pushdown, and Stack Automata 1976 Mathematical Systems Theory
    Vol. 10, pp. 33-52 
    article DOI  
    Abstract: The complexity of predicates on several classes of formal languages is studied. For finite automata, pushdown automata, and several classes of stack automata, every nontrivial predicate on the languages accepted by the 1-way devices requires as much time and space as the recognition problem for any language accepted by the corresponding 2-way devices. Moreover there are nontrivial predicates on the languages accepted by the 1-way devices such that is the accepted language of some corresponding one or two head 2-way device. Thus our lower bounds are fairly tight.
    BibTeX:
    @article{Hunt76,
      author = {H. B. Hunt III},
      title = {On the Complexity of Finite, Pushdown, and Stack Automata},
      journal = {Mathematical Systems Theory},
      year = {1976},
      volume = {10},
      pages = {33--52},
      doi = {http://dx.doi.org/10.1007/BF01683261}
    }
    
    Inaba, K. Purely Applicative XML Cursor 2004 Senior Thesis, The University of Tokyo  misc URL 
    Abstract: Cursor model is a relatively new approach for XML processing. In this model, a cursor acts like a lens that focuses on one node. You can freely move the cursor back and forth in an XML document, and edit the node it indicates. This model can be easily implemented in imperative language like C or Java, by using a pointer to subtree in the XML tree as the cursor. In a fully applicative setting, however, this simple scheme does not work since subtree modification through pointers breaks the principle of referential transparency.

    We propose a purely functional data structure named ``Slit'' to realize a cursor on a tree efficiently in applicative manner. Slit is similar to the zipper data structure introduced by Huet, but has some improvements compared to the zipper in terms of efficiency and expressiveness while handling a tree with variadic child nodes. Using the slit, we implement an XML processing framework based on the cursor model. We also show a generalization of this framework to give an XML view for non XML data.

    BibTeX:
    @misc{Inaba04,
      author = {Kazuhiro Inaba},
      title = {Purely Applicative XML Cursor},
      year = {2004},
      url = {http://www.kmonos.net/pub/}
    }
    
    Inaba, K. XML Transformation Language Based on Monadic Second Order Logic 2006 School: The University of Tokyo  mastersthesis URL 
    BibTeX:
    @mastersthesis{Inaba06,
      author = {Kazuhiro Inaba},
      title = {XML Transformation Language Based on Monadic Second Order Logic},
      school = {The University of Tokyo},
      year = {2006},
      url = {http://www.kmonos.net/pub/}
    }
    
    Inaba, K. Complexity and Expressiveness of Models of XML Translations 2009 School: The University of Tokyo  phdthesis URL 
    BibTeX:
    @phdthesis{Inaba09,
      author = {Kazuhiro Inaba},
      title = {Complexity and Expressiveness of Models of XML Translations},
      school = {The University of Tokyo},
      year = {2009},
      url = {http://www.kmonos.net/pub/}
    }
    
    Inaba, K. & Hosoya, H. XML Transformation Language Based on Monadic Second Order Logic 2007 Programming Language Technologies for XML (PLAN-X), pp. 49-60  inproceedings URL 
    Abstract: Although monadic second-order logic (MSO) has been a foundation of XML queries, little work has attempted to take MSO formulae themselves as a programming construct. Indeed, MSO formulae are capable of expressing (1) all regular queries, (2) deep matching without explicit recursion, (3) queries in a ``don't-care semantics'' for unmentioned nodes, and (4) $n$-ary queries for locating $n$-tuples of nodes. While previous frameworks for subtree extraction (path expressions, pattern matches, etc.) each had some of these properties, none has satisfied all of them.

    In this paper, we have designed and implemented a practical XML transformation language called MTran that fully exploits the expressiveness of MSO. MTran is a language based on ``select-and-transform'' templates similar in spirit to XSLT. However, we design our templates specially suitable for expressing structure-preserving transformation, eliminating the need for explicit recursive calls to be written. Moreover, we allow templates to be nested so as to make use of an $n$-ary query that depends on the $n-1$ nodes selected by the preceding templates.

    For the implementation of the MTran language, we have developed, as the core part, an efficient evaluation strategy for $n$-ary MSO queries. This consists of (a) an exploitation of the existing MONA system for the translation from MSO formulae to tree automata and (b) a linear time query evaluation algorithm for tree automata. For the latter, our algorithm is similar to Flum-Frick-Grohe algorithm for MSO queries locating $n$-tuples of sets of nodes, except that ours is specialized to queries for tuples of nodes and employs a partially lazy set operations for attaining a simpler implementation with a fewer number of tree traversals. We have made experiments and confirmed that our strategy yields a practical performance.

    BibTeX:
    @inproceedings{IH07,
      author = {Kazuhiro Inaba and Haruo Hosoya},
      title = {XML Transformation Language Based on Monadic Second Order Logic},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2007},
      pages = {49--60},
      url = {http://www.plan-x-2007.org/program/}
    }
    
    Inaba, K. & Hosoya, H. Multi-Return Macro Tree Transducers 2008 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    Abstract: Macro tree transducers are a simple yet expressive formal model for XML transformation languages. The power of this model comes from its accumulating parameters, which allow to carry around several output tree fragments in addition to the input tree. However, while each procedure is enabled by this facility to propagate intermediate results in a top-down direction, it still cannot do it in a bottom-up direction since it is restricted to return only a single tree and such tree cannot be decomposed once created. In this paper, we introduce multi-return macro tree transducers as a mild extension of macro tree transducers with the capability of each procedure to return more than one tree at the same time, thus attaining symmetry between top-down and bottom-up propagation of information. We illustrate the usefulness of this capability for writing practically meaningful transformations. Our main technical contributions consists of two formal comparisons of the expressivenesses of macro tree transducers and its multi-return extension: (1) in the deterministic case, the expressive powers of these two coincide (2) in the nondeterministic case (with the call-by-value evaluation strategy) multi-return macro tree transducers are strictly more expressive.
    BibTeX:
    @inproceedings{IH08,
      author = {Kazuhiro Inaba and Haruo Hosoya},
      title = {Multi-Return Macro Tree Transducers},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2008},
      url = {http://gemo.futurs.inria.fr/events/PLANX2008/?page=accepted}
    }
    
    Inaba, K., Hosoya, H. & Maneth, S. Multi-Return Macro Tree Transducers 2008 Conference on Implementation and Application of Automata (CIAA), pp. 102-111  inproceedings DOI  
    Abstract: An extension of macro tree transducers is introduced with the capability of states to return multiple trees at the same time. Under call-by-value semantics, the new model is strictly more expressive than call-by-value macro tree transducers, and moreover, it has better closure properties under composition.
    BibTeX:
    @inproceedings{IHM08,
      author = {Kazuhiro Inaba and Haruo Hosoya and Sebastian Maneth},
      title = {Multi-Return Macro Tree Transducers},
      booktitle = {Conference on Implementation and Application of Automata (CIAA)},
      year = {2008},
      pages = {102--111},
      doi = {http://dx.doi.org/10.1007/978-3-540-70844-5_11}
    }
    
    Inaba, K. & Maneth, S. The Complexity of Tree Transducer Output Languages 2008 Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pp. 244-255  inproceedings URL 
    Abstract: Two complexity results are shown for the output languages generated by compositions of macro tree transducers. They are in NSPACE(n) and hence are context-sensitive, and the class is NP-complete.
    BibTeX:
    @inproceedings{IM08,
      author = {Kazuhiro Inaba and Sebastian Maneth},
      title = {The Complexity of Tree Transducer Output Languages},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2008},
      pages = {244--255},
      url = {http://drops.dagstuhl.de/portals/FSTTCS08/}
    }
    
    Inaba, K. & Maneth, S. The Complexity of Translation Membership for Macro Tree Transducers 2009 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    Abstract: Macro tree transducers (mtts) are a useful formal model for XML query and transformation languages. In this paper one of the fundamental decision problems on translations, namely the ``translation membership problem'' is studied for mtts. For a fixed translation, the translation membership problem asks whether a given input/output pair is element of the translation. For call-by-name mtts this problem is shown to be NP-complete. The main result is that translation membership for call-by-value mtts is in polynomial time. For several extensions, such as addition of regular look-ahead or the generalization to multi-return mtts, it is shown that translation membership still remains in PTIME.
    BibTeX:
    @inproceedings{IM09,
      author = {Kazuhiro Inaba and Sebastian Maneth},
      title = {The Complexity of Translation Membership for Macro Tree Transducers},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2009},
      url = {http://db.ucsd.edu/planx2009/papers.html}
    }
    
    Ishtiaq, S.S. & O'Hearn, P.W. BI as an Assertion Language for Mutable Data Structures 2004 Principles of Programming Languages (POPL), pp. 14-26  inproceedings DOI  
    Abstract: Reynolds has developed a logic for reasoning about mutable data structures in which the pre- and postconditions are written in an intuitionistic logic enriched with a spatial form of conjunction. We investigate the approach from the point of view of the logic BI of bunched implications of O'Hearn and Pym. We begin by giving a model in which the law of the excluded middle holds, thus showing that the approach is compatible with classical logic. The relationship between the intuitionistic and classical versions of the system is established by a translation, analogous to a translation from intuitionistic logic into the modal logic S4. We also consider the question of completeness of the axioms. BI's spatial implication is used to express weakest preconditions for object-component assignments, and an axiom for allocating a cons cell is shown to be complete under an interpretation of triples that allows a command to be applied to states with dangling pointers. We make this latter a feature, by incorporating an operation, and axiom, for disposing of memory. Finally, we describe a local character enjoyed by specifications in the logic, and show how this enables a class of frame axioms, which say what parts of the heap don't change, to be inferred automatically.
    BibTeX:
    @inproceedings{IO04,
      author = {Samin S. Ishtiaq and Peter W. O'Hearn},
      title = {BI as an Assertion Language for Mutable Data Structures},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2004},
      pages = {14--26},
      doi = {http://doi.acm.org/10.1145/373243.375719}
    }
    
    Johnsson, T. Attribute Grammars as a Functional Programming Paradigm 1987 Functional Programming Languages and Computer Architecture, pp. 154-173  inproceedings  
    BibTeX:
    @inproceedings{Johnsson87,
      author = {Thomas Johnsson},
      title = {Attribute Grammars as a Functional Programming Paradigm},
      booktitle = {Functional Programming Languages and Computer Architecture},
      year = {1987},
      pages = {154--173}
    }
    
    Kameyama, Y. & Yonezawa, T. Typed Dynamic Control Operators for Delimited Continuations 2008 Functional and Logic Programming (FLOPS), pp. 239-254  inproceedings DOI URL 
    Abstract: We study the dynamic control operators for delimited continuations, Control and Prompt. Based on recent developments on purely functional CPS translations for them, we introduce a polymorphically typed calculus for these control operators which allows answer-type modification. We show that our calculus enjoys type soundness and is compatible with the CPS translation. We also show that the typed dynamic control operators can macro-express the typed static ones (Shift and Reset), while the converse direction is not possible, which exhibits a sharp contrast with the type-free case.
    BibTeX:
    @inproceedings{KY08,
      author = {Yukiyoshi Kameyama and Takuo Yonezawa},
      title = {Typed Dynamic Control Operators for Delimited Continuations},
      booktitle = {Functional and Logic Programming (FLOPS)},
      year = {2008},
      pages = {239--254},
      url = {http://logic.cs.tsukuba.ac.jp/~kam/publication.html},
      doi = {http://dx.doi.org/10.1007/978-3-540-78969-7_18}
    }
    
    Kamimura, T. & Slutzki, G. Parallel and Two-Way Automata on Directed Ordered Acyclic Graphs 1981 Information and Control
    Vol. 49, pp. 10-51 
    article DOI  
    Abstract: In this paper we study automata which work on directed ordered acyclic graphs, in particular those graphs, called derivation dags (d-dags), which model derivations of phrase-structure grammars. A rather complete characterization of the relative power of the following features of automata on d-dags is obtained: parallel versus sequential, deterministic versus nondeterministic and finite state versus a (restricted type of) pushdown store. New results concerning trees follows as special cases. Closure properties of classes of d-dag languages definable by various automata are studied for some basic operations. Characterization of general directed ordered acyclic graphs by these automata is also given.
    BibTeX:
    @article{KS81,
      author = {Tsutomu Kamimura and Giora Slutzki},
      title = {Parallel and Two-Way Automata on Directed Ordered Acyclic Graphs},
      journal = {Information and Control},
      year = {1981},
      volume = {49},
      pages = {10--51},
      doi = {http://dx.doi.org/10.1016/S0019-9958(81)90438-1}
    }
    
    Kato, H., Hidaka, S., Hu, Z., Ishihara, Y. & Nakano, K. Rewriting XQuery to Avoid Redundant Expressions Based on Static Emulation of XML Store 2009 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    BibTeX:
    @inproceedings{KHHIN09,
      author = {Hiroyuki Kato and Soichiro Hidaka and Zhenjiang Hu and Yasunori Ishihara and Keisuke Nakano},
      title = {Rewriting XQuery to Avoid Redundant Expressions Based on Static Emulation of XML Store},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2009},
      url = {http://db.ucsd.edu/planx2009/papers.html}
    }
    
    Keisler, H.J. Elementary Calculus: An Approach Using Infinitesimals 2000   book URL 
    BibTeX:
    @book{Keisler00,
      author = {H. Jerome Keisler},
      title = {Elementary Calculus: An Approach Using Infinitesimals},
      publisher = {Prindle, Weber and Schmidt},
      year = {2000},
      url = {http://www.math.wisc.edu/~keisler/calc.html}
    }
    
    Kennedy, K. & Ramanathan, J. A Deterministic Attribute Grammar Evaluator Based on Dynamic Scheduling 1979 ACM Transactions on Programming Languages and Systems
    Vol. 1, pp. 142-160 
    article DOI  
    Abstract: The problem of semantic evaluation in a compiler-generating system can be addressed by specifying language semantics in an attribute grammar [19], a context-free grammar augmented with ``attributes'' for the nonterminals and ``semantic functions'' to compute the attributes. A deterministic method for evaluating all attributes in a ``semantic'' parse tree is derived and shown to have time and space complexities which are essentially linear in the size of the tree. In a prepass through the parse tree, the method determines an evaluation sequence for the attributes; thus it is somewhat analogous to dynamic programming. The constructor-evaluator system described should be suitable for inclusion in a general translator-writing system. for inclusion in a general translator-writing system.
    BibTeX:
    @article{KR79,
      author = {Ken Kennedy and Jayashree Ramanathan},
      title = {A Deterministic Attribute Grammar Evaluator Based on Dynamic Scheduling},
      journal = {ACM Transactions on Programming Languages and Systems},
      year = {1979},
      volume = {1},
      pages = {142--160},
      doi = {http://doi.acm.org/10.1145/357062.357072}
    }
    
    Kepser, S. & Mönnich, U. Closure Properties of Linear Context-free Tree Languages with an Application to Optimality Theory 2006 Theoretical Computer Science
    Vol. 354, pp. 82-97 
    article DOI  
    BibTeX:
    @article{KM06,
      author = {Stephan Kepser and Uwe Mönnich},
      title = {Closure Properties of Linear Context-free Tree Languages with an Application to Optimality Theory},
      journal = {Theoretical Computer Science},
      year = {2006},
      volume = {354},
      pages = {82--97},
      doi = {http://dx.doi.org/10.1016/j.tcs.2005.11.024}
    }
    
    Kimura, D. Computation in Classical Logic and Dual Calculus 2007 School: The Graduate University for Advanced Studies  phdthesis URL 
    BibTeX:
    @phdthesis{Kimura07,
      author = {Daisuke Kimura},
      title = {Computation in Classical Logic and Dual Calculus},
      school = {The Graduate University for Advanced Studies},
      year = {2007},
      url = {http://www.nii.ac.jp/graduate/thesis/index_e.html}
    }
    
    Klarlund, N. & Møller, A. MONA Version 1.4 User Manual   manual URL 
    Abstract: This manual describes MONA version 1.4. Section 1 contains an introductory example and a brief description of MONA applications. In Section 2, the basic features of the MONA tool are described through a number of examples. Section 3 discusses the automaton-logic connection and the MONA compilation semantics. Section 4 describes DAGification, formula reduction, and separate compilation. In Section 5, it is shown how to make MONA produce detailed information about the processing and the resulting automata. Section 6 describes some more advanced constructs, such as, exporting and importing automata, controlling restrictions, and emulating Presburger arithmetic and M2L-Str. In Section 7, the decision procedure for WS2S is presented along with the MONA concept of Guided Tree Automata and an extension with recursive types. Section 8 discusses our plans for future work. In the appendices, the full syntax is defined, the command-line usage of MONA is shown, and the MONA DFA, GTA, and BDD packages are described.
    BibTeX:
    @manual{MONA,
      author = {Nils Klarlund and Anders Møller},
      title = {MONA Version 1.4 User Manual},
      url = {http://www.brics.dk/~amoeller/mona/publications.html}
    }
    
    Knuth, D.E. On the Translation of Languages from Left to Right 1965 Information and Control
    Vol. 8, pp. 607-639 
    article DOI  
    Abstract: There has been much recent interest in languages whose grammar is sufficiently simple that an efficient left-to-right parsing algorithm can be mechanically produced from the grammar. In this paper, we define LR(k) grammars, which are perhaps the most general ones of this type, and they provide the basis for understanding all of the special tricks which have been used in the construction of parsing algorithms for languages with simple structure, e.g. algebraic languages. We give algorithms for deciding if a given grammar satisfies the LR(k) condition, for given k, and also give methods for generating recognizers for LR(k) grammars. It is shown that the problem of whether or not a grammar is LR(k) for some k is undecidable, and the paper concludes by establishing various connections between LR(k) grammars and deterministic languages. In particular, the LR(k) condition is a natural analogue, for grammars, of the deterministic condition, for languages.
    BibTeX:
    @article{Knuth65,
      author = {Donald E. Knuth},
      title = {On the Translation of Languages from Left to Right},
      journal = {Information and Control},
      year = {1965},
      volume = {8},
      pages = {607--639},
      doi = {http://dx.doi.org/10.1016/S0019-9958(65)90426-2}
    }
    
    Knuth, D.E. Semantics of Context-Free Languages 1968 Mathematical Systems Theory
    Vol. 2, pp. 127-145 
    article DOI  
    Abstract: ``Meaning'' may be assigned to a string in a context-free language by defining attributes of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are synthesized, i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are inherited, i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature.
    BibTeX:
    @article{Knuth68,
      author = {Donald E. Knuth},
      title = {Semantics of Context-Free Languages},
      journal = {Mathematical Systems Theory},
      year = {1968},
      volume = {2},
      pages = {127--145},
      doi = {http://dx.doi.org/10.1007/BF01692511}
    }
    
    Kobayashi, N. Types and Higher-Order Recursion Schemes for Verification of Higher-Order Programs 2009 Principles of Programming Languages (POPL), pp. 416-428  inproceedings DOI  
    Abstract: We propose a new verification method for temporal properties of higher-order functional programs, which takes advantage of Ong's recent result on the decidability of the model-checking problem for higher-order recursion schemes (HORS's). A program is transformed to an HORS that generates a tree representing all the possible event sequences of the program, and then the HORS is model-checked. Unlike most of the previous methods for verification of higher-order programs, our verification method is sound and complete. Moreover, this new verification framework allows a smooth integration of abstract model checking techniques into verification of higher-order programs. We also present a type-based verification algorithm for HORS's. The algorithm can deal with only a fragment of the properties expressed by modal mu-calculus, but the algorithm and its correctness proof are (arguably) much simpler than those of Ong's game-semantics-based algorithm. Moreover, while the HORS model checking problem is n-EXPTIME in general, our algorithm is linear in the size of HORS, under the assumption that the sizes of types and specification formulas are bounded by a constant.
    BibTeX:
    @inproceedings{Kobayashi09,
      author = {Naoki Kobayashi},
      title = {Types and Higher-Order Recursion Schemes for Verification of Higher-Order Programs},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2009},
      pages = {416--428},
      doi = {http://doi.acm.org/10.1145/1480881.1480933}
    }
    
    Kobayashi, N. & Ohsaki, H. Tree Automata for Non-linear Arithmetic 2008 Rewriting Techniques and Applications (RTA), pp. 291-305  inproceedings DOI  
    Abstract: Tree automata modulo associativity and commutativity axioms, called AC tree automata, accept trees by iterating the transition modulo equational reasoning. The class of languages accepted by monotone AC tree automata is known to include the solution set of the inequality $x \times y \geq z$, which implies that the class properly includes the AC closure of regular tree languages. In the paper, we characterize more precisely the expressiveness of monotone AC tree automata, based on the observation that, in addition to polynomials, a class of exponential constraints (called monotone exponential Diophantine inequalities) can be expressed by monotone AC tree automata with a minimal signature. Moreover, we show that a class of arithmetic logic consisting of monotone exponential Diophantine inequalities is definable by monotone AC tree automata. The results presented in the paper are obtained by applying our novel tree automata technique, called linearly bounded projection.
    BibTeX:
    @inproceedings{KO08,
      author = {Naoki Kobayashi and Hitoshi Ohsaki},
      title = {Tree Automata for Non-linear Arithmetic},
      booktitle = {Rewriting Techniques and Applications (RTA)},
      year = {2008},
      pages = {291--305},
      doi = {http://dx.doi.org/10.1007/978-3-540-70590-1_20}
    }
    
    Koivisto, M. An O*($2^n$) Algorithm for Graph Coloring and Other Partitioning Problems via Inclusion--Exclusion 2006 Foundations of Computer Science (FOCS), pp. 583-590  inproceedings DOI  
    Abstract: We use the principle of inclusion and exclusion, combined with polynomial time segmentation and fast Möbius transform, to solve the generic problem of summing or optimizing over the partitions of n elements into a given number of weighted subsets. This problem subsumes various classical graph partitioning problems, such as graph coloring, domatic partitioning, and MAX k-CUT, as well as machine learning problems like decision graph learning and model-based data clustering. Our algorithm runs in O*($2^n$) time, thus substantially improving on the usual O*($3^n$)-time dynamic programming algorithm; the notation O* suppresses factors polynomial in $n$. This result improves, e.g., Byskov's recent record for graph coloring from O*($2.4023^n$) to O*($2^n$). We note that twenty five years ago, R. M. Karp used inclusion--exclusion in a similar fashion to reduce the space requirement of the usual dynamic programming algorithms from exponential to polynomial.
    BibTeX:
    @inproceedings{Koivisto06,
      author = {Mikko Koivisto},
      title = {An O*($2^n$) Algorithm for Graph Coloring and Other Partitioning Problems via Inclusion--Exclusion},
      booktitle = {Foundations of Computer Science (FOCS)},
      year = {2006},
      pages = {583--590},
      doi = {http://doi.ieeecomputersociety.org/10.1109/FOCS.2006.11}
    }
    
    Krauth, W. Introduction To Monte Carlo Algorithms 1996   electronic URL 
    Abstract: In these lectures, given in '96 summer schools in Beg-Rohu (France) and Budapest, I discuss the fundamental principles of thermodynamic and dynamic Monte Carlo methods in a simple light-weight fashion. The keywords are MARKOV CHAINS, SAMPLING, DETAILED BALANCE, A PRIORI PROBABILITIES, REJECTIONS, ERGODICITY, "FASTER THAN THE CLOCK ALGORITHMS".

    The emphasis is on ORIENTATION, which is difficult to obtain (all the mathematics being simple). A firm sense of orientation helps to avoid getting lost, especially if you want to leave safe trodden-out paths established by common usage.

    Even though I remain quite basic (and, I hope, readable), I make every effort to drive home the essential messages, which are easily explained: the crystal-clearness of detail balance, the main problem with Markov chains, the great algorithmic freedom, both in thermodynamic and dynamic Monte Carlo, and the fundamental differences between the two problems.

    BibTeX:
    @electronic{Krauth96,
      author = {Werner Krauth},
      title = {Introduction To Monte Carlo Algorithms},
      year = {1996},
      url = {http://arxiv.org/abs/cond-mat/9612186}
    }
    
    Kufleitner, M. A Proof of the Factorization Forest Theorem 2007   techreport URL 
    BibTeX:
    @techreport{Kufleitner07,
      author = {Manfred Kufleitner},
      title = {A Proof of the Factorization Forest Theorem},
      year = {2007},
      url = {http://www.fmi.uni-stuttgart.de/tibin/veroeff.pl?Kufleitner}
    }
    
    Kuiper, M.F. & Swierstra, S.D. Using Attribute Grammars to Derive Efficient Functional Programs 1987 (RUU-CS-86-16)  techreport  
    BibTeX:
    @techreport{KS87,
      author = {Matthijs F. Kuiper and S. Doaitse Swierstra},
      title = {Using Attribute Grammars to Derive Efficient Functional Programs},
      year = {1987},
      number = {RUU-CS-86-16}
    }
    
    Kuroda, S. Classes of Languages and Linear-Bounded Automata 1964 Information and Control
    Vol. 7, pp. 207-223 
    article DOI  
    BibTeX:
    @article{Kuroda64,
      author = {Sige-Yuki Kuroda},
      title = {Classes of Languages and Linear-Bounded Automata},
      journal = {Information and Control},
      year = {1964},
      volume = {7},
      pages = {207--223},
      doi = {http://dx.doi.org/10.1016/S0019-9958(64)90120-2}
    }
    
    Kähler, D. & Wilke, T. Complementation, Disambiguation, and Determinization of Büchi Automata Unified 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 724-735  inproceedings DOI  
    Abstract: We present a uniform framework for (1) complementing Büchi automata, (2) turning Büchi automata into equivalent unambiguous Büchi automata, and (3) turning Büchi automata into equivalent deterministic automata. We present the first solution to (2) which does not make use of McNaughton's theorem (determinization) and an intuitive and conceptually simple solution to (3).

    Our results are based on Muller and Schupp's procedure for turning alternating tree automata into non-deterministic ones.

    BibTeX:
    @inproceedings{KW08,
      author = {Detlef Kähler and Thomas Wilke},
      title = {Complementation, Disambiguation, and Determinization of Büchi Automata Unified},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {724--735},
      doi = {http://dx.doi.org/10.1007/978-3-540-70575-8_59}
    }
    
    Kühnemann, A. A Pumping Lemma for Output Languages of Macro Tree Transducers 1996 Colloquium on Trees in Algebra and Programming (CAAP), pp. 44-58  inproceedings DOI  
    Abstract: The concept of macro tree transducer is a formal model for studying properties of syntax-directed translations. In this paper, for output languages of producing, nondeleting, and noncopying macro tree transducers, we introduce a pumping lemma. We apply the pumping lemma to gain the following result: there is no producing and nondeleting macro tree transducer which computes the set of all monadic trees with double exponential height as output.
    BibTeX:
    @inproceedings{Kuhnemann96,
      author = {Armin Kühnemann},
      title = {A Pumping Lemma for Output Languages of Macro Tree Transducers},
      booktitle = {Colloquium on Trees in Algebra and Programming (CAAP)},
      year = {1996},
      pages = {44--58},
      doi = {http://dx.doi.org/10.1007/3-540-61064-2_28}
    }
    
    Kühnemann, A. A Two-Dimensional Hierarchy for Attributed Tree Transducers 1997 International Symposium on Fundamentals of Computation Theory, pp. 281-292  inproceedings  
    BibTeX:
    @inproceedings{Kuhnemann97,
      author = {Armin Kühnemann},
      title = {A Two-Dimensional Hierarchy for Attributed Tree Transducers},
      booktitle = {International Symposium on Fundamentals of Computation Theory},
      year = {1997},
      pages = {281--292}
    }
    
    Kühnemann, A. Benefits of Tree Transducers for Optimizing Functional Programs 1998 Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pp. 146-157  inproceedings  
    BibTeX:
    @inproceedings{Kuhnemann98,
      author = {Armin Kühnemann},
      title = {Benefits of Tree Transducers for Optimizing Functional Programs},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {1998},
      pages = {146--157}
    }
    
    Kühnemann, A. & Vogler, H. Synthesized and Inherited Functions: A New Computational Model for Syntax-Directed Semantics 1994 Acta Informatica
    Vol. 31, pp. 431-477 
    article DOI  
    BibTeX:
    @article{KV94,
      author = {Armin Kühnemann and Heiko Vogler},
      title = {Synthesized and Inherited Functions: A New Computational Model for Syntax-Directed Semantics},
      journal = {Acta Informatica},
      year = {1994},
      volume = {31},
      pages = {431--477},
      doi = {http://dx.doi.org/10.1007/BF01178667}
    }
    
    Lamping, J. An Algorithm for Optimal Lambda Calculus Reduction 1990 Principles of Programming Languages (POPL), pp. 16-30  inproceedings DOI  
    BibTeX:
    @inproceedings{Lamping90,
      author = {John Lamping},
      title = {An Algorithm for Optimal Lambda Calculus Reduction},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {1990},
      pages = {16--30},
      doi = {http://doi.acm.org/10.1145/96709.96711}
    }
    
    Latour, L. From Automata To Formulas 2004 Logic in Computer Science (LICS), pp. 120-129  inproceedings DOI  
    Abstract: Automata-based representations have recently been investigated as a tool for representing and manipulating sets of integer vectors. In this paper, we study some structural properties of automata accepting the encodings (most significant digit first) of the natural solutions of systems of linear Diophantine inequations, i.e., convex polyhedra in $N^n$. Based on those structural properties, we develop an algorithm that takes as input an automaton and generates a quantifier-free formula that represents exactly the set of integer vectors accepted by the automaton. In addition, our algorithm generates the minimal Hilbert basis of the linear system. In experiments made with a prototype implementation, we have been able to synthesize in seconds formulas and Hilbert bases from automata with more than 10,000 states.
    BibTeX:
    @inproceedings{Latour04,
      author = {Louis Latour},
      title = {From Automata To Formulas},
      booktitle = {Logic in Computer Science (LICS)},
      year = {2004},
      pages = {120--129},
      doi = {http://dx.doi.org/10.1109/LICS.2004.1319606}
    }
    
    Lee, L. Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication 2002 Journal of the ACM
    Vol. 49, pp. 1-15 
    article DOI  
    Abstract: In 1975, Valiant showed that Boolean matrix multiplication can be used for parsing context-free grammars (CFGs), yielding the asymptotically fastest (although not practical) CFG parsing algorithm known. We prove a dual result: any CFG parser with time complexity $O(g n^{3-\epsilon})$, where $g$ is the size of the grammar and $n$ is the length of the input string, can be efficiently converted into an algorithm to multiply $m \times m$ Boolean matrices in time $O(m^{3-\epsilon/3})$. Given that practical, substantially subcubic Boolean matrix multiplication algorithms have been quite difficult to find, we thus explain why there has been little progress in developing practical, substantially subcubic general CFG parsers. In proving this result, we also develop a formalization of the notion of parsing.
    BibTeX:
    @article{Lee02,
      author = {Lillian Lee},
      title = {Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication},
      journal = {Journal of the ACM},
      year = {2002},
      volume = {49},
      pages = {1--15},
      doi = {http://doi.acm.org/10.1145/505241.505242}
    }
    
    Leguy, B. Grammars without Erasing Rules. the OI Case 1981 Trees in Algebra and Programming, pp. 268-279  inproceedings DOI  
    BibTeX:
    @inproceedings{Leguy81,
      author = {Bernard Leguy},
      title = {Grammars without Erasing Rules. the OI Case},
      booktitle = {Trees in Algebra and Programming},
      year = {1981},
      pages = {268--279},
      doi = {http://dx.doi.org/10.1007/3-540-10828-9_68}
    }
    
    Leijen, D. HMF: Simple Type Inference for First-Class Polymorphism 2008 International Conference on Functional Programming (ICFP), pp. 283-294  inproceedings DOI  
    Abstract: HMF is a conservative extension of Hindley-Milner type inference with first-class polymorphism. In contrast to other proposals, HMF uses regular System F types and has a simple type inference algorithm that is just a small extension of the usual Damas-Milner algorithm W. Given the relative simplicity and expressive power, we feel that HMF can be an attractive type system in practice. There is a reference implementation of the type system available online together with a technical report containing proofs (Leijen 2007a,b).
    BibTeX:
    @inproceedings{Leijen08,
      author = {Daan Leijen},
      title = {HMF: Simple Type Inference for First-Class Polymorphism},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {283--294},
      doi = {http://doi.acm.org/10.1145/1411204.1411245}
    }
    
    Leiß, H. & de Rougemont, M. Automata on Lempel-Ziv Compressed Strings 2003 Computer Science Logic (CSL), pp. 384-396  inproceedings DOI  
    Abstract: Using the Lempel-Ziv-78 compression algorithm to compress a string yields a dictionary of substrings, i.e. an edge-labelled tree with an order-compatible enumeration, here called an LZ-trie. Queries about strings translate to queries about LZ-tries and hence can in principle be answered without decompression. We compare notions of automata accepting LZ-tries and consider the relation between acceptable and MSO-definable classes of LZ-tries. It turns out that regular properties of strings can be checked efficiently on compressed strings by LZ-trie automata.
    BibTeX:
    @inproceedings{LR03,
      author = {Hans Leiß and Michel de Rougemont},
      title = {Automata on Lempel-Ziv Compressed Strings},
      booktitle = {Computer Science Logic (CSL)},
      year = {2003},
      pages = {384--396},
      doi = {http://dx.doi.org/10.1007/b13224}
    }
    
    Lempp, S. Priority Arguments in Computability Theory, Model Theory, and Complexity Theory http://www.math.wisc.edu/~lempp/papers/list.html  unpublished URL 
    Abstract: These notes (to be extended over the next few years) are intended to present various priority arguments in classical computability theory, effective model theory, and complexity theory in a uniform style.
    BibTeX:
    @unpublished{LemppPA,
      author = {Steffen Lempp},
      title = {Priority Arguments in Computability Theory, Model Theory, and Complexity Theory},
      note = {http://www.math.wisc.edu/~lempp/papers/list.html},
      url = {http://www.math.wisc.edu/~lempp/papers/list.html}
    }
    
    Leo, J.M.I.M. A General Context-free Parsing Algorithm Running in Linear Time on Every LR(k) Grammar without Using Lookahead 1991 Theoretical Computer Science
    Vol. 82, pp. 165-176 
    article DOI  
    BibTeX:
    @article{Leo91,
      author = {Joop M. I. M. Leo},
      title = {A General Context-free Parsing Algorithm Running in Linear Time on Every LR(k) Grammar without Using Lookahead},
      journal = {Theoretical Computer Science},
      year = {1991},
      volume = {82},
      pages = {165--176},
      doi = {http://dx.doi.org/10.1016/0304-3975(91)90180-A}
    }
    
    Leroux, J. A Polynomial Time Presburger Criterion and Synthesis for Number Decision Diagrams 2005 Logic in Computer Science (LICS), pp. 147-156  inproceedings DOI  
    BibTeX:
    @inproceedings{Leroux05,
      author = {Jerome Leroux},
      title = {A Polynomial Time Presburger Criterion and Synthesis for Number Decision Diagrams},
      booktitle = {Logic in Computer Science (LICS)},
      year = {2005},
      pages = {147--156},
      doi = {http://dx.doi.org/10.1109/LICS.2005.2}
    }
    
    Lewis, P., Rosenkrantz, D. & Stearns, R. Attributed Translations 1974 Journal of Computer and System Sciences
    Vol. 9, pp. 279-307 
    article DOI  
    Abstract: Attributed translation grammars are introduced as a means of specifying a translation from strings of input symbols to strings of output symbols. Each of these symbols can have a finite set of attributes, each of which can take on a value from a possibly infinite set. Attributed translation grammars can be applied in depth to practical compiling problems.

    Certain augmented pushdown machines are defined and characterizations are given of the attributed translations they can perform both deterministically and non-deterministically. Classes of attributed translation grammars are defined whose translation can be performed deterministically while parsing top down or bottom up.

    Review: See Theorem 10. This is the earliest paper I could find that states evaluation of attributes can be done in linear time with respect to the size of the tree.
    BibTeX:
    @article{LRS74,
      author = {P.M. Lewis and D.J. Rosenkrantz and R.E. Stearns},
      title = {Attributed Translations},
      journal = {Journal of Computer and System Sciences},
      year = {1974},
      volume = {9},
      pages = {279--307},
      doi = {http://dx.doi.org/10.1016/S0022-0000(74)80045-0}
    }
    
    Li, S.R., Yeung, R.W. & Cai, N. Linear Network Coding 2003 IEEE Transactions on Information Theory
    Vol. 49, pp. 371-381 
    article DOI  
    Abstract: Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.
    BibTeX:
    @article{LYC03,
      author = {Shuo-yen Robert Li and Raymond W. Yeung and Ning Cai},
      title = {Linear Network Coding},
      journal = {IEEE Transactions on Information Theory},
      year = {2003},
      volume = {49},
      pages = {371--381},
      doi = {http://dx.doi.org/10.1109/TIT.2002.807285}
    }
    
    Libkin, L. Logics for Unranked Trees: An Overview 2006 Logical Methods in Computer Science (LMCS)
    Vol. 2, pp. 1-31 
    article DOI  
    Abstract: Labeled unranked trees are used as a model of XML documents, and logical languages for them have been studied actively over the past several years. Such logics have different purposes: some are better suited for extracting data, some for expressing navigational properties, and some make it easy to relate complex properties of trees to the existence of tree automata for those properties. Furthermore, logics differ significantly in their model-checking properties, their automata models, and their behavior on ordered and unordered trees. In this paper we present a survey of logics for unranked trees.
    BibTeX:
    @article{Libkin06,
      author = {Leonid Libkin},
      title = {Logics for Unranked Trees: An Overview},
      journal = {Logical Methods in Computer Science (LMCS)},
      year = {2006},
      volume = {2},
      pages = {1--31},
      doi = {http://dx.doi.org/10.2168/LMCS-2(3:2)2006}
    }
    
    Lierler, Y. CMODELS - SAT-Based Disjunctive Answer Set Solver 2005 Logic Programming and Nonmonotonic Reasoning, pp. 447-451  inproceedings DOI  
    Abstract: Disjunctive logic programming under the stable model semantics [GL91] is a new methodology called answer set programming (ASP) for solving combinatorial search problems. This programming method uses answer set solvers, such as dlv [Lea05], gnt [Jea05], smodels [SS05], assat [LZ02], cmodels [Lie05a]. Systems dlv and gnt are more general as they work with the class of disjunctive logic programs, while other systems cover only normal programs. dlv is uniquely designed to find the answer sets for disjunctive logic programs. On the other hand, gnt first generates possible stable model candidates and then tests the candidates for minimality, using smodels as an inference engine for both tasks. Systems assat and cmodels use SAT solvers as search engines. They are based on the relationship between the completion semantics [Cla78], loop formulas [LZ02] and answer set semantics for logic programs. Here we present the implementation of a SAT-based algorithm for finding answer sets for disjunctive logic programs within cmodels. The work is based on the definition of completion for disjunctive programs [LL03] and the generalisation of loop formulas [LZ02] to the case of disjunctive programs [LL03]. We propose the necessary modifications to the SAT based algorithm [LZ02] as well as to the generate and test algorithm from [GLM04] in order to adapt them to the case of disjunctive programs. We implement the algorithms in cmodels and demonstrate the experimental results.
    BibTeX:
    @inproceedings{Lierler05,
      author = {Yuliya Lierler},
      title = {CMODELS - SAT-Based Disjunctive Answer Set Solver},
      booktitle = {Logic Programming and Nonmonotonic Reasoning},
      year = {2005},
      pages = {447--451},
      doi = {http://dx.doi.org/10.1007/11546207_44}
    }
    
    Lin, F. & Zhao, Y. ASSAT: Computing Answer Sets of a Logic Program by SAT Solvers 2004 Artificial Intelligence
    Vol. 157, pp. 115-137 
    article DOI  
    Abstract: We propose a new translation from normal logic programs with constraints under the answer set semantics to propositional logic. Given a normal logic program, we show that by adding, for each loop in the program, a corresponding loop formula to the program's completion, we obtain a one-to-one correspondence between the answer sets of the program and the models of the resulting propositional theory. In the worst case, there may be an exponential number of loops in a logic program. To address this problem, we propose an approach that adds loop formulas a few at a time, selectively. Based on these results, we implement a system called ASSAT(X), depending on the SAT solver X used, for computing one answer set of a normal logic program with constraints. We test the system on a variety of benchmarks including the graph coloring, the blocks world planning, and Hamiltonian Circuit domains. Our experimental results show that in these domains, for the task of generating one answer set of a normal logic program, our system has a clear edge over the state-of-art answer set programming systems Smodels and DLV.
    BibTeX:
    @article{LZ04,
      author = {Fangzhen Lin and Yuting Zhao},
      title = {ASSAT: Computing Answer Sets of a Logic Program by SAT Solvers},
      journal = {Artificial Intelligence},
      year = {2004},
      volume = {157},
      pages = {115--137},
      doi = {http://dx.doi.org/10.1016/j.artint.2004.04.004}
    }
    
    Lindenmayer, A. Mathematical Models for Cellular Interactions in Development I. Filaments with One-Sided Inputs 1968 Journal of Theoretical Biology
    Vol. 18, pp. 280-299 
    article DOI  
    Abstract: A theory is proposed for the development of filamentous organisms, based on the assumptions that the filaments are composed of cells which undergo changes of state under inputs they receive from their neighbors, and the cells produce outputs as determined by their state and the input they receive. Cell division is accounted for by inserting two new cells in the filament to replace a cell of a specified state and input. Thus growing filaments are obtained which exhibit various developmental patterns, like constant apical pattern, non-dividing apical zone, and banded patterns. In this first part of this study the inputs are considered to pass only in one direction along the filament. Formal set-theoretical statement of the assumptions, and of some of the theorems derivable from them, is included.
    BibTeX:
    @article{Lindenmayer68,
      author = {Aristid Lindenmayer},
      title = {Mathematical Models for Cellular Interactions in Development I. Filaments with One-Sided Inputs},
      journal = {Journal of Theoretical Biology},
      year = {1968},
      volume = {18},
      pages = {280--299},
      doi = {http://dx.doi.org/10.1016/0022-5193(68)90079-9}
    }
    
    Lindenmayer, A. Mathematical Models for Cellular Interactions in Development II. Simple and Branching Filaments with Two-Sided Inputs 1968 Journal of Theoretical Biology, Vol. 18, pp. 300-315  article DOI  
    Abstract: Continuing the presentation of a theory of growth models for filamentous organisms, the treatment is extended to cases where inputs are received by each cell from both directions along the filament, and the change of state and the output of a cell are determined by its present state and the two inputs it receives. Further symbolism is introduced to take care of branching filaments as well. Two entirely different models are constructed for a particular branching organism, resembling one of the red algae. These models are compared with reference to the number of states employed, and the presence or absence of instructions for unequal divisions and for inductive relationships among the cells. The importance of a morphogenetic control theory concerning these relationships is emphasized.
    BibTeX:
    @article{Lindenmayer68a,
      author = {Aristid Lindenmayer},
      title = {Mathematical Models for Cellular Interactions in Development II. Simple and Branching Filaments with Two-Sided Inputs},
      journal = {Journal of Theoretical Biology},
      year = {1968},
      volume = {18},
      pages = {300--315},
      doi = {http://dx.doi.org/10.1016/0022-5193(68)90080-5}
    }
    
    Lindenmayer, A. Developmental Systems without Cellular Interactions, Their Languages and Grammars 1971 Journal of Theoretical Biology, Vol. 30, pp. 455-484  article DOI  
    Abstract: Formal systems are proposed and constructed to generate cellular arrays corresponding to developmental stages of some simple organisms: lower plants, snail embryos and leaves. Sets of these arrays are construed as developmental languages, and their complexity properties and generating grammars are compared with the classes of languages in the Chomsky hierarchy. Various branching patterns are compared with respect to such complexity classes. Theorems were obtained concerning partial characterizations of the class of developmental systems without cellular interactions, and some of the mathematical properties of this class are discussed.
    BibTeX:
    @article{Lindenmayer71,
      author = {Aristid Lindenmayer},
      title = {Developmental Systems without Cellular Interactions, Their Languages and Grammars},
      journal = {Journal of Theoretical Biology},
      year = {1971},
      volume = {30},
      pages = {455--484},
      doi = {http://dx.doi.org/10.1016/0022-5193(71)90002-6}
    }
    
    Liu, L.Y. & Weiner, P. An Infinite Hierarchy of Intersections of Context-Free Languages 1973 Mathematical Systems Theory, Vol. 7, pp. 185-192  article DOI  
    Abstract: The class of languages expressible as the intersection of k context-free languages is shown to be properly contained within the class of languages expressible as the intersection of k + 1 context-free languages. Hence an infinite hierarchy of classes of languages is exhibited between the class of context-sensitive languages and the class of context-free languages.
    BibTeX:
    @article{LW73,
      author = {Leonard Y. Liu and Peter Weiner},
      title = {An Infinite Hierarchy of Intersections of Context-Free Languages},
      journal = {Mathematical Systems Theory},
      year = {1973},
      volume = {7},
      pages = {185--192},
      doi = {http://dx.doi.org/10.1007/BF01762237}
    }
    
    Lohrey, M. & Maneth, S. The Complexity of Tree Automata and XPath on Grammar-Compressed Trees 2006 Theoretical Computer Science, Vol. 363, pp. 196-210  article DOI  
    Abstract: The complexity of various membership problems for tree automata on compressed trees is analyzed. Two compressed representations are considered: dags, which allow sharing of identical subtrees in a tree, and straight-line context-free tree grammars, which moreover allow sharing of identical intermediate parts in a tree. Several completeness results for the classes NL, P, and PSPACE are obtained. Finally, the complexity of the evaluation problem for (structural) XPath queries on trees that are compressed via straight-line context-free tree grammars is investigated.
    BibTeX:
    @article{LM06,
      author = {Markus Lohrey and Sebastian Maneth},
      title = {The Complexity of Tree Automata and XPath on Grammar-Compressed Trees},
      journal = {Theoretical Computer Science},
      year = {2006},
      volume = {363},
      pages = {196--210},
      doi = {http://dx.doi.org/10.1016/j.tcs.2006.07.024}
    }
    
    Longley, J. When is a Functional Program not a Functional Program? 1999 International Conference on Functional Programming (ICFP), pp. 1-7  inproceedings DOI  
    Abstract: In an impure functional language, there are programs whose behaviour is completely functional (in that they behave extensionally on inputs), but the functions they compute cannot be written in the purely functional fragment of the language. That is, the class of programs with functional behaviour is more expressive than the usual class of pure functional programs. In this paper we introduce this extended class of "functional" programs by means of examples in Standard ML, and explore what they might have to offer to programmers and language implementors. After reviewing some theoretical background, we present some examples of functions of the above kind, and discuss how they may be implemented. We then consider two possible programming applications for these functions: the implementation of a search algorithm, and an algorithm for exact real-number integration. We discuss the advantages and limitations of this style of programming relative to other approaches. We also consider the increased scope for compiler optimizations that these functions would offer.
    BibTeX:
    @inproceedings{Longley99,
      author = {John Longley},
      title = {When is a Functional Program not a Functional Program?},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {1999},
      pages = {1--7},
      doi = {http://doi.acm.org/10.1145/317636.317775}
    }
    
    Madsen, O.L. On Defining Semantics by Means of Extended Attribute Grammars 1980 Semantics-Directed Compiler Generation, pp. 259-299  inproceedings DOI  
    BibTeX:
    @inproceedings{Madsen80,
      author = {Ole Lehrmann Madsen},
      title = {On Defining Semantics by Means of Extended Attribute Grammars},
      booktitle = {Semantics-Directed Compiler Generation},
      year = {1980},
      pages = {259--299},
      doi = {http://dx.doi.org/10.1007/3-540-10250-7_25}
    }
    
    Man, K. A No-Frills Introduction to Lua 5.1 VM Instructions 2006 http://luaforge.net/docman/?group_id=83  misc URL 
    BibTeX:
    @misc{Man06,
      author = {Kein-Hong Man},
      title = {A No-Frills Introduction to Lua 5.1 VM Instructions},
      year = {2006},
      url = {http://luaforge.net/docman/?group_id=83}
    }
    
    Maneth, S. The Generating Power of Total Deterministic Tree Transducers 1998 Information and Computation, Vol. 147, pp. 111-144  article DOI  
    Abstract: Attributed tree transducers are abstract models used to study properties of attribute grammars. One abstraction which occurs when modeling attribute grammars by attributed tree transducers is that arbitrary trees over a ranked alphabet are taken as input, instead of derivation trees of a context-free grammar. In this paper we show that with respect to the generating power this is not an abstraction; i.e., we show that attributed tree transducers and attribute grammars generate the same class of term (or tree) languages. To prove this, a number of results concerning the generating power of top-down tree transducers are established, which are interesting in their own right. We also show that the classes of output languages of attributed tree transducers form a hierarchy with respect to the number of attributes. The latter result is achieved by proving a hierarchy of classes of tree languages generated by context-free hypergraph grammars with respect to their rank.
    BibTeX:
    @article{Maneth98,
      author = {Sebastian Maneth},
      title = {The Generating Power of Total Deterministic Tree Transducers},
      journal = {Information and Computation},
      year = {1998},
      volume = {147},
      pages = {111--144},
      doi = {http://dx.doi.org/10.1006/inco.1998.2736}
    }
    
    Maneth, S. The Complexity of Compositions of Deterministic Tree Transducers 2002 Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pp. 265-276  inproceedings DOI  
    BibTeX:
    @inproceedings{Maneth02,
      author = {Sebastian Maneth},
      title = {The Complexity of Compositions of Deterministic Tree Transducers},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2002},
      pages = {265--276},
      doi = {http://dx.doi.org/10.1007/3-540-36206-1_24}
    }
    
    Maneth, S. The Macro Tree Transducer Hierarchy Collapses for Functions of Linear Size Increase 2003 Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pp. 326-337  inproceedings DOI  
    BibTeX:
    @inproceedings{Maneth03,
      author = {Sebastian Maneth},
      title = {The Macro Tree Transducer Hierarchy Collapses for Functions of Linear Size Increase},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2003},
      pages = {326--337},
      doi = {http://dx.doi.org/10.1007/b94618}
    }
    
    Maneth, S. Tree Transducers and Their Applications to XML 2004 XIX Tarragona Seminar on Formal Syntax and Semantics  misc URL 
    BibTeX:
    @misc{Maneth04,
      author = {Sebastian Maneth},
      title = {Tree Transducers and Their Applications to XML},
      year = {2004},
      url = {http://www.cse.unsw.edu.au/~smaneth/}
    }
    
    Maneth, S., Berlea, A., Perst, T. & Seidl, H. XML Type Checking with Macro Tree Transducers 2005 Principles of Database Systems (PODS), pp. 283-294  inproceedings DOI  
    BibTeX:
    @inproceedings{MBPS05,
      author = {Sebastian Maneth and Alexandru Berlea and Thomas Perst and Helmut Seidl},
      title = {XML Type Checking with Macro Tree Transducers},
      booktitle = {Principles of Database Systems (PODS)},
      year = {2005},
      pages = {283--294},
      doi = {http://dx.doi.org/10.1145/1065167.1065203}
    }
    
    Maneth, S. & Busatto, G. Tree Transducers and Tree Compressions 2004 Foundations of Software Science and Computation Structures (FoSSaCS), pp. 363-377  inproceedings DOI  
    BibTeX:
    @inproceedings{MB04,
      author = {Sebastian Maneth and Giorgio Busatto},
      title = {Tree Transducers and Tree Compressions},
      booktitle = {Foundations of Software Science and Computation Structures (FoSSaCS)},
      year = {2004},
      pages = {363--377},
      doi = {http://dx.doi.org/10.1007/b95995}
    }
    
    Maneth, S. & Nakano, K. XML Type Checking for Macro Tree Transducers with Holes 2008 Programming Language Technologies for XML (PLAN-X)  inproceedings URL 
    BibTeX:
    @inproceedings{MN08,
      author = {Sebastian Maneth and Keisuke Nakano},
      title = {XML Type Checking for Macro Tree Transducers with Holes},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2008},
      url = {http://gemo.futurs.inria.fr/events/PLANX2008/?page=accepted}
    }
    
    Maneth, S., Perst, T. & Seidl, H. Exact XML Type Checking in Polynomial Time 2007 International Conference on Database Theory (ICDT), pp. 254-268  inproceedings URL 
    BibTeX:
    @inproceedings{MPS07,
      author = {Sebastian Maneth and Thomas Perst and Helmut Seidl},
      title = {Exact XML Type Checking in Polynomial Time},
      booktitle = {International Conference on Database Theory (ICDT)},
      year = {2007},
      pages = {254--268},
      url = {http://www.cse.unsw.edu.au/~smaneth/}
    }
    
    Maneth, S. & Seidl, H. Deciding Equivalence of Top-Down XML Transformations in Polynomial Time 2007 Programming Language Technologies for XML (PLAN-X), pp. 73-79  inproceedings URL 
    BibTeX:
    @inproceedings{MS07,
      author = {Sebastian Maneth and Helmut Seidl},
      title = {Deciding Equivalence of Top-Down XML Transformations in Polynomial Time},
      booktitle = {Programming Language Technologies for XML (PLAN-X)},
      year = {2007},
      pages = {73--79},
      url = {http://www.plan-x-2007.org/program/}
    }
    
    Marin, M. & Kutsia, T. Computational Methods in an Algebra of Regular Hedge Expressions 2009 ({RISC} Report Series 09-03)  techreport URL 
    Abstract: We propose an algebra of regular hedge expressions built on top of regular hedge grammars as a framework for the analysis and manipulation of hedge languages. We show how linear systems of hedge language equations (LS for short) can be used as an intermediate representation on which to perform the computation of quotient, intersection, product derivative, and factor matrix of regular hedge languages. Regular hedge grammars and LSs are shown to be formalisms of the same expressive power for the representation of hedge languages, and we give algorithms to convert between these two formalisms.
    BibTeX:
    @techreport{MK09,
      author = {Mircea Marin and Temur Kutsia},
      title = {Computational Methods in an Algebra of Regular Hedge Expressions},
      year = {2009},
      number = {RISC Report Series 09-03},
      url = {http://www2.score.cs.tsukuba.ac.jp/publications/published_articles/techreportreference.2009-03-09.0132524332}
    }
    
    Martel, M. Program Transformation for Numerical Precision 2009 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 101-110  inproceedings DOI  
    Abstract: This article introduces a new program transformation in order to enhance the numerical accuracy of floating-point computations. We consider that a program would return an exact result if the computations were carried out using real numbers. In practice, roundoff errors due to the finite representation of values arise during the execution. These errors are closely related to the way formulas are evaluated. Indeed, mathematically equivalent formulas, obtained using laws like associativity, distributivity, etc., may lead to very different numerical results in the computer arithmetic. We propose a semantics-based transformation in order to optimize the numerical accuracy of programs. This transformation is expressed in the abstract interpretation framework and it aims at rewriting pieces of numerical codes in order to obtain results closer to what the computer would output if it used the exact arithmetic.
    BibTeX:
    @inproceedings{Martel09,
      author = {Matthieu Martel},
      title = {Program Transformation for Numerical Precision},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2009},
      pages = {101--110},
      doi = {http://doi.acm.org/10.1145/1480945.1480960}
    }
    
    Martens, W. & Neven, F. On the Complexity of Typechecking Top-Down XML Transformations 2005 Theoretical Computer Science, Vol. 336, pp. 153-180  article DOI  
    Abstract: We investigate the typechecking problem for XML transformations: statically verifying that every answer to a transformation conforms to a given output schema, for inputs satisfying a given input schema. As typechecking quickly turns undecidable for query languages capable of testing equality of data values, we return to the limited framework where we abstract XML documents as labeled ordered trees. We focus on simple top-down recursive transformations motivated by XSLT and structural recursion on trees. We parameterize the problem by several restrictions on the transformations (deleting, non-deleting, bounded width) and consider both tree automata and DTDs as input and output schemas. The complexity of the typechecking problems in this scenario ranges from PTIME to EXPTIME.
    BibTeX:
    @article{MN05,
      author = {Wim Martens and Frank Neven},
      title = {On the Complexity of Typechecking Top-Down XML Transformations},
      journal = {Theoretical Computer Science},
      year = {2005},
      volume = {336},
      pages = {153--180},
      doi = {http://dx.doi.org/10.1016/j.tcs.2004.10.035}
    }
    
    Martens, W. & Neven, F. Frontiers of Tractability for Typechecking Simple XML Transformations 2007 Journal of Computer and System Sciences, Vol. 73, pp. 362-390  article DOI  
    Abstract: Typechecking consists of statically verifying whether the output of an XML transformation always conforms to an output type for documents satisfying a given input type. We focus on complete algorithms which always produce the correct answer. We consider top-down XML transformations incorporating XPath expressions and abstract document types by grammars and tree automata. By restricting schema languages and transformations, we identify several practical settings for which typechecking can be done in polynomial time. Moreover, the resulting framework provides a rather complete picture, as we show that most scenarios cannot be enlarged without rendering the typechecking problem intractable. So, the present research sheds light on when to use fast complete algorithms and when to resort to sound but incomplete ones.
    BibTeX:
    @article{MN07,
      author = {Wim Martens and Frank Neven},
      title = {Frontiers of Tractability for Typechecking Simple XML Transformations},
      journal = {Journal of Computer and System Sciences},
      year = {2007},
      volume = {73},
      pages = {362--390},
      doi = {http://dx.doi.org/10.1016/j.jcss.2006.10.005}
    }
    
    Marx, M. Conditional XPath 2005 ACM Transactions on Database Systems, Vol. 30, pp. 929-959  article DOI  
    Abstract: XPath 1.0 is a variable-free language designed to specify paths between nodes in XML documents. Such paths can alternatively be specified in first-order logic. The logical abstraction of XPath 1.0, usually called Navigational or Core XPath, is not powerful enough to express every first-order definable path. In this article, we show that there exists a natural expansion of Core XPath in which every first-order definable path in XML document trees is expressible. This expansion is called Conditional XPath. It contains additional axis relations of the form (child::n[F])+, denoting the transitive closure of the path expressed by child::n[F]. The difference with XPath's descendant::n[F] is that the path (child::n[F])+ is conditional on the fact that all nodes in between the start and end node of the path should also be labeled by n and should make the predicate F true. This result can be viewed as the XPath analogue of the expressive completeness of the relational algebra with respect to first-order logic.
    BibTeX:
    @article{Marx05,
      author = {Maarten Marx},
      title = {Conditional XPath},
      journal = {ACM Transactions on Database Systems},
      year = {2005},
      volume = {30},
      pages = {929--959},
      doi = {http://doi.acm.org/10.1145/1114244.1114247}
    }
    
    Matsuda, K., Hu, Z. & Takeichi, M. Type-Based Specialization of XML Transformations 2009 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 61-72  inproceedings DOI  
    Abstract: It is often convenient to write a function and apply it to a specific input. However, a program developed in this way may be inefficient to evaluate and difficult to analyze due to its generality. In this paper, we propose a new specialization technique for a class of XML transformations, in which no output of a function can be decomposed or traversed. Our specialization is type-based in the sense that it uses the structures of input types; types are described by regular hedge grammars and subtyping is defined set-theoretically. The specialization always terminates, resulting in a program where every function is fully specialized and only accepts its rigid input. We present several interesting applications of our new specialization, especially for injectivity analysis.
    BibTeX:
    @inproceedings{MHT09,
      author = {Kazutaka Matsuda and Zhenjiang Hu and Masato Takeichi},
      title = {Type-Based Specialization of XML Transformations},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2009},
      pages = {61--72},
      doi = {http://doi.acm.org/10.1145/1480945.1480955}
    }
    
    Mcadam, B. Y in Practical Programs 2001 Fixed Points in Computer Science (FICS)  conference  
    Abstract: For typical working programmers, the Y combinator for finding the fixed point of higher order functions is seen, at best, as an idiosyncratic example of the features of functional programming languages or, worse, not understood at all. We are going to see that it is actually useful in practical program development.

    If we program recursive functions in a form that uses Y instead of the recursive constructs built in at the language level, then we gain control over, and information about, how programs are executed. The examples we will investigate include creating memo functions (in languages with mutable data), providing dummy or default results for failed function calls, and building call trees annotated with arguments and results.

    As well as demonstrating the practical properties of this programming technique, we will see an interesting property relating to the theory of sequential realisability [Lon99].

    BibTeX:
    @conference{Mcadam01,
      author = {Bruce Mcadam},
      title = {Y in Practical Programs},
      booktitle = {Fixed Points in Computer Science (FICS)},
      year = {2001}
    }
    
    McBride, C. The Derivative of a Regular Type is its Type of One-Hole Contexts 2001   misc  
    BibTeX:
    @misc{McBride01,
      author = {Conor McBride},
      title = {The Derivative of a Regular Type is its Type of One-Hole Contexts},
      year = {2001}
    }
    
    McBride, C. Clowns to the Left of me, Jokers to the Right 2008 Principles of Programming Languages (POPL), pp. 287-295  inproceedings DOI  
    BibTeX:
    @inproceedings{McBride08,
      author = {Conor McBride},
      title = {Clowns to the Left of me, Jokers to the Right},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2008},
      pages = {287--295},
      doi = {http://doi.acm.org/10.1145/1328897.1328474}
    }
    
    McCarthy, J. Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I 1960 Communications of the ACM, Vol. 3, pp. 184-195  article DOI  
    BibTeX:
    @article{McCarthy60,
      author = {John McCarthy},
      title = {Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I},
      journal = {Communications of the ACM},
      year = {1960},
      volume = {3},
      pages = {184--195},
      doi = {http://doi.acm.org/10.1145/367177.367199}
    }
    
    Meuss, H., Schulz, K.U. & Bry, F. Towards Aggregated Answers for Semistructured Data 2001 International Conference on Database Theory (ICDT), pp. 346-360  inproceedings DOI  
    Abstract: Semistructured data [5],[34],[23],[31],[1] are used to model data transferred on the Web for applications such as e-commerce [18], biomolecular biology [8], document management [2],[21], linguistics [32], thesauri and ontologies [17]. They are formalized as trees or more generally as (multi-)graphs [23],[1]. Query languages for semistructured data have been proposed [6],[11],[1],[4],[10] that, like SQL, can be seen as involving a number of variables [35], but, in contrast to SQL, give rise to arranging the variables in trees or graphs reflecting the structure of the semistructured data to be retrieved. Leaving aside the ``construct'' parts of queries, answers can be formalized as mappings represented as tuples, hence called answer tuples, that assign database nodes to query variables. These answer tuples underlie the semistructured data delivered as answers.
    BibTeX:
    @inproceedings{MSB01,
      author = {Holger Meuss and Klaus U. Schulz and François Bry},
      title = {Towards Aggregated Answers for Semistructured Data},
      booktitle = {International Conference on Database Theory (ICDT)},
      year = {2001},
      pages = {346--360},
      doi = {http://dx.doi.org/10.1007/3-540-44503-X_22}
    }
    
    Mezei, J. & Wright, J.B. Algebraic Automata and Context-Free Sets 1967 Information and Control, Vol. 11, pp. 3-29  article DOI  
    BibTeX:
    @article{MW67,
      author = {J. Mezei and Jesse B. Wright},
      title = {Algebraic Automata and Context-Free Sets},
      journal = {Information and Control},
      year = {1967},
      volume = {11},
      pages = {3--29},
      doi = {http://dx.doi.org/10.1016/S0019-9958(67)90353-1}
    }
    
    Miklau, G. & Suciu, D. Containment and Equivalence for a Fragment of XPath 2004 Journal of the ACM, Vol. 51, pp. 2-45  article DOI  
    BibTeX:
    @article{MS04,
      author = {Gerome Miklau and Dan Suciu},
      title = {Containment and Equivalence for a Fragment of XPath},
      journal = {Journal of the ACM},
      year = {2004},
      volume = {51},
      pages = {2--45},
      doi = {http://doi.acm.org/10.1145/962446.962448}
    }
    
    Milner, R., Parrow, J. & Walker, D. A Calculus of Mobile Processes, I 1992 Information and Computation, Vol. 100, pp. 1-40  article DOI  
    Abstract: We present the π-calculus, a calculus of communicating systems in which one can naturally express processes which have changing structure. Not only may the component agents of a system be arbitrarily linked, but a communication between neighbours may carry information which changes that linkage. The calculus is an extension of the process algebra CCS, following work by Engberg and Nielsen, who added mobility to CCS while preserving its algebraic properties. The π-calculus gains simplicity by removing all distinction between variables and constants; communication links are identified by names, and computation is represented purely as the communication of names across links. After an illustrated description of how the π-calculus generalises conventional process algebras in treating mobility, several examples exploiting mobility are given in some detail. The important examples are the encoding into the π-calculus of higher-order functions (the λ-calculus and combinatory algebra), the transmission of processes as values, and the representation of data structures as processes. The paper continues by presenting the algebraic theory of strong bisimilarity and strong equivalence, including a new notion of equivalence indexed by distinctions--i.e., assumptions of inequality among names. These theories are based upon a semantics in terms of a labeled transition system and a notion of strong bisimulation, both of which are expounded in detail in a companion paper. We also report briefly on work-in-progress based upon the corresponding notion of weak bisimulation, in which internal actions cannot be observed.
    BibTeX:
    @article{MPW92,
      author = {Robin Milner and Joachim Parrow and David Walker},
      title = {A Calculus of Mobile Processes, I},
      journal = {Information and Computation},
      year = {1992},
      volume = {100},
      pages = {1--40},
      doi = {http://dx.doi.org/10.1016/0890-5401(92)90008-4}
    }
    
    Milner, R., Parrow, J. & Walker, D. A Calculus of Mobile Processes, II 1992 Information and Computation, Vol. 100, pp. 41-77  article DOI  
    Abstract: This is the second of two papers in which we present the π-calculus, a calculus of mobile processes. We provide a detailed presentation of some of the theory of the calculus developed to date, and in particular we establish most of the results stated in the companion paper.
    BibTeX:
    @article{MPW92a,
      author = {Robin Milner and Joachim Parrow and David Walker},
      title = {A Calculus of Mobile Processes, II},
      journal = {Information and Computation},
      year = {1992},
      volume = {100},
      pages = {41--77},
      doi = {http://dx.doi.org/10.1016/0890-5401(92)90009-5}
    }
    
    Milo, T., Suciu, D. & Vianu, V. Typechecking for XML Transformers 2003 Journal of Computer and System Sciences, Vol. 66 (Principles of Database Systems (PODS)), pp. 66-97  article DOI  
    Abstract: We study the typechecking problem for XML (eXtensible Markup Language) transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT.
    BibTeX:
    @article{MSV03,
      author = {Tova Milo and Dan Suciu and Victor Vianu},
      title = {Typechecking for XML Transformers},
      booktitle = {Principles of Database Systems (PODS)},
      journal = {Journal of Computer and System Sciences},
      year = {2003},
      volume = {66},
      pages = {66--97},
      doi = {http://dx.doi.org/10.1016/S0022-0000(02)00030-2}
    }
    
    Mohri, M. String-Matching with Automata 1997 Nordic Journal of Computing, Vol. 4, pp. 217-231  article  
    Review: How can one efficiently generate a DFA recognizing Sigma*L(A) from a DFA A? Constructing an NFA by adding a Sigma-loop and determinizing it may not be optimal, so an Aho-Corasick-like algorithm is introduced. State splitting is necessary for assigning a fine-grained failure function.
    BibTeX:
    @article{Mohri97,
      author = {Mehryar Mohri},
      title = {String-Matching with Automata},
      journal = {Nordic Journal of Computing},
      year = {1997},
      volume = {4},
      pages = {217--231}
    }
    
    Mohri, M. Minimization Algorithms for Sequential Transducers 2000 Theoretical Computer Science, Vol. 234, pp. 177-201  article DOI  
    BibTeX:
    @article{Mohri00,
      author = {Mehryar Mohri},
      title = {Minimization Algorithms for Sequential Transducers},
      journal = {Theoretical Computer Science},
      year = {2000},
      volume = {234},
      pages = {177--201},
      doi = {http://dx.doi.org/10.1016/S0304-3975(98)00115-7}
    }
    
    Mohri, M. Semiring Frameworks and Algorithms for Shortest-Distance Problems 2002 Journal of Automata, Languages and Combinatorics, Vol. 7, pp. 321-350  article  
    Review: General characterization of shortest-distance algorithms (e.g. Bellman-Ford), also treating the k-shortest and k-distinct-shortest distance problems. When the distance addition (+) and the minimization (*/min) form a semiring, this framework can be applied.
    BibTeX:
    @article{Mohri02,
      author = {Mehryar Mohri},
      title = {Semiring Frameworks and Algorithms for Shortest-Distance Problems},
      journal = {Journal of Automata, Languages and Combinatorics},
      year = {2002},
      volume = {7},
      pages = {321--350}
    }
    
    Mohri, M. Weighted Finite-State Transducer Algorithms: An Overview 2004 Formal Languages and Applications, Vol. 148, pp. 551-564  article URL 
    BibTeX:
    @article{Mohri04,
      author = {Mehryar Mohri},
      title = {Weighted Finite-State Transducer Algorithms: An Overview},
      journal = {Formal Languages and Applications},
      year = {2004},
      volume = {148},
      pages = {551--564},
      url = {http://www.cs.nyu.edu/~mohri/pub.html}
    }
    
    Mohri, M. & Nederhof, M. Regular Approximation of Context-Free Grammars through Transformation 2001 Robustness in Language and Speech Technology, pp. 153-163  incollection  
    Review: Rewrites a rule A->wBvCu into A->wB, B'->vC, C'->uA', and A'->eps. A nonterminal N' intuitively approximates the follow(N) set. Note that if, for example, B is not mutually recursive with A, then we do not have to split the rhs at B (in other words, w, v, or u may contain nonterminals that are not mutually recursive with A). This algorithm is obviously linear. A more fine-grained, quadratic algorithm by Nederhof is also referred to and explained in the paper.
    BibTeX:
    @incollection{MN01,
      author = {Mehryar Mohri and Mark-Jan Nederhof},
      title = {Regular Approximation of Context-Free Grammars through Transformation},
      booktitle = {Robustness in Language and Speech Technology},
      publisher = {Kluwer Academic Publishers},
      year = {2001},
      pages = {153--163}
    }
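    The splitting step described in the review can be sketched directly (a toy reading of the transformation, not the paper's full algorithm; it assumes the left-hand side A belongs to a mutually recursive set, and leaves the A'->eps rules to the caller):

```python
def split_rule(lhs, rhs, recursive):
    """Split A -> w B v C u at nonterminals mutually recursive with A.

    rhs is a list of symbols; `recursive` is the set of nonterminals
    mutually recursive with lhs. Each chunk is closed at a recursive
    nonterminal B and continued from B' (the follow(B) approximation);
    the trailing chunk is followed by A'. The rules A' -> eps, B' -> eps,
    ... are added separately, once per nonterminal.
    """
    rules, head, chunk = [], lhs, []
    for sym in rhs:
        if sym in recursive:
            rules.append((head, chunk + [sym]))   # close current chunk at B
            head, chunk = sym + "'", []           # continue from B'
        else:
            chunk.append(sym)                     # w, v, u pass through
    rules.append((head, chunk + [lhs + "'"]))     # trailing u followed by A'
    return rules
```

    On the review's schematic rule, `split_rule("A", ["w", "B", "v", "C", "u"], {"B", "C"})` produces A->wB, B'->vC, C'->uA', matching the rewriting described above.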
    
    Montagu, B. & Rémy, D. Modeling Abstract Types in Modules with Open Existential Types 2009 Principles of Programming Languages (POPL), pp. 354-365  inproceedings DOI  
    Abstract: We propose F-zip, a calculus of open existential types that is an extension of System F obtained by decomposing the introduction and elimination of existential types into more atomic constructs. Open existential types model modular type abstraction as done in module systems. The static semantics of F-zip adapts standard techniques to deal with linearity of typing contexts, its dynamic semantics is a small-step reduction semantics that performs extrusion of type abstraction as needed during reduction, and the two are related by subject reduction and progress lemmas. Applying the Curry-Howard isomorphism, F-zip can be also read back as a logic with the same expressive power as second-order logic but with more modular ways of assembling partial proofs. We also extend the core calculus to handle the double vision problem as well as type-level and term-level recursion. The resulting language turns out to be a new formalization of (a minor variant of) Dreyer's internal language for recursive and mixin modules.
    BibTeX:
    @inproceedings{MR09,
      author = {Benoît Montagu and Didier Rémy},
      title = {Modeling Abstract Types in Modules with Open Existential Types},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2009},
      pages = {354--365},
      doi = {http://doi.acm.org/10.1145/1480881.1480926}
    }
    
    Morihata, A. Calculational Approach to Automatic Algorithm Construction 2009 School: The University of Tokyo  phdthesis URL 
    BibTeX:
    @phdthesis{Morihata09,
      author = {Akimasa Morihata},
      title = {Calculational Approach to Automatic Algorithm Construction},
      school = {The University of Tokyo},
      year = {2009},
      url = {http://www.ipl.t.u-tokyo.ac.jp/~morihata/}
    }
    
    Moriya, E. On Two-Way Tree Automata 1994 Information Processing Letters
    Vol. 50, pp. 117-121 
    article DOI  
    BibTeX:
    @article{Moriya94,
      author = {Etsuro Moriya},
      title = {On Two-Way Tree Automata},
      journal = {Information Processing Letters},
      year = {1994},
      volume = {50},
      pages = {117--121},
      doi = {http://dx.doi.org/10.1016/0020-0190(94)00022-0}
    }
    
    Muschevici, R., Potanin, A., Tempero, E. & Noble, J. Multiple Dispatch in Practice 2008 Object Oriented Programming Systems Languages and Applications (OOPSLA), pp. 563-582  inproceedings DOI  
    Abstract: Multiple dispatch uses the run time types of more than one argument to a method call to determine which method body to run. While several languages over the last 20 years have provided multiple dispatch, most object-oriented languages still support only single dispatch forcing programmers to implement multiple dispatch manually when required. This paper presents an empirical study of the use of multiple dispatch in practice, considering six languages that support multiple dispatch, and also investigating the potential for multiple dispatch in Java programs. We hope that this study will help programmers understand the uses and abuses of multiple dispatch; virtual machine implementors optimise multiple dispatch; and language designers to evaluate the choice of providing multiple dispatch in new programming languages.
    BibTeX:
    @inproceedings{MPTN08,
      author = {Radu Muschevici and Alex Potanin and Ewan Tempero and James Noble},
      title = {Multiple Dispatch in Practice},
      booktitle = {Object Oriented Programming Systems Languages and Applications (OOPSLA)},
      year = {2008},
      pages = {563--582},
      doi = {http://doi.acm.org/10.1145/1449764.1449808}
    }
    
    Naini, M.M. A New Efficient Incremental Evaluator for Attribute Grammars Allowing Conditional Semantic Equations 1988 SoutheastCon, pp. 386-390  inproceedings DOI  
    Abstract: The evaluator presented performs a depth-first search of the (static) reverse dependency graph associated with a parse tree, interleaved with the execution of semantic rules. The full compound dependency graph is not constructed. Instead, it is implicitly represented by the semantic tree and the dependency graph of the productions. The semantic rules are precompiled as programs written in intermediate code and called semantic modules. Evaluation is a call-by-need evaluation and it is optimal in the number of attribute instances evaluated.
    BibTeX:
    @inproceedings{Naini88,
      author = {M. M. Naini},
      title = {A New Efficient Incremental Evaluator for Attribute Grammars Allowing Conditional Semantic Equations},
      booktitle = {SoutheastCon},
      year = {1988},
      pages = {386--390},
      doi = {http://dx.doi.org/10.1109/SECON.1988.194883}
    }
    
    Nakano, K. Composing Stack-Attributed Tree Transducers 2009 Theory of Computing Systems
    Vol. 44, pp. 1-38 
    article DOI  
    Abstract: Stack-attributed tree transducers extend attributed tree transducers with a pushdown stack device for attribute values, which make them strictly more powerful. This paper presents an algorithm for the composition of stack-attributed tree transducers with attributed tree transducers. The algorithm is an extension of the existing method to compose attributed tree transducers. It leads to some natural closure properties of the corresponding classes of tree transformations.
    BibTeX:
    @article{Nakano09,
      author = {Keisuke Nakano},
      title = {Composing Stack-Attributed Tree Transducers},
      journal = {Theory of Computing Systems},
      year = {2009},
      volume = {44},
      pages = {1--38},
      doi = {http://dx.doi.org/10.1007/s00224-008-9125-y}
    }
    
    Nanevski, A., Morrisett, G., Shinnar, A., Govereau, P. & Birkedal, L. Ynot: Dependent Types for Imperative Programs 2008 International Conference on Functional Programming (ICFP), pp. 229-240  inproceedings DOI  
    Abstract: We describe an axiomatic extension to the Coq proof assistant, that supports writing, reasoning about, and extracting higher-order, dependently-typed programs with side-effects. Coq already includes a powerful functional language that supports dependent types, but that language is limited to pure, total functions. The key contribution of our extension, which we call Ynot, is the added support for computations that may have effects such as non-termination, accessing a mutable store, and throwing/catching exceptions.

    The axioms of Ynot form a small trusted computing base which has been formally justified in our previous work on Hoare Type Theory (HTT). We show how these axioms can be combined with the powerful type and abstraction mechanisms of Coq to build higher-level reasoning mechanisms which in turn can be used to build realistic, verified software components. To substantiate this claim, we describe here a representative series of modules that implement imperative finite maps, including support for a higher-order (effectful) iterator. The implementations range from simple (e.g., association lists) to complex (e.g., hash tables) but share a common interface which abstracts the implementation details and ensures that the modules properly implement the finite map abstraction.

    BibTeX:
    @inproceedings{NMSGB08,
      author = {Aleksandar Nanevski and Greg Morrisett and Avraham Shinnar and Paul Govereau and Lars Birkedal},
      title = {Ynot: Dependent Types for Imperative Programs},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {229--240},
      doi = {http://doi.acm.org/10.1145/1411204.1411237}
    }
    
    Neven, F. & Bussche, J.V.D. Expressiveness of Structured Document Query Languages Based on Attribute Grammars 2002 Journal of the ACM
    Vol. 49, pp. 56-100 
    article DOI  
    Abstract: Structured document databases can be naturally viewed as derivation trees of a context-free grammar. Under this view, the classical formalism of attribute grammars becomes a formalism for structured document query languages. From this perspective, we study the expressive power of BAGs: Boolean-valued attribute grammars with propositional logic formulas as semantic rules, and RAGs: relation-valued attribute grammars with first-order logic formulas as semantic rules. BAGs can express only unary queries; RAGs can express queries of any arity. We first show that the (unary) queries expressible by BAGs are precisely those definable in monadic second-order logic. We then show that the queries expressible by RAGs are precisely those definable by first-order inductions of linear depth, or, equivalently, those computable in linear time on a parallel machine with polynomially many processors. Further, we show that RAGs that only use synthesized attributes are strictly weaker than RAGs that use both synthesized and inherited attributes. We show that RAGs are more expressive than monadic second-order logic for queries of any arity. Finally, we discuss relational attribute grammars in the context of BAGs and RAGs. We show that in the case of BAGs this does not increase the expressive power, while different semantics for relational RAGs capture the complexity classes NP, coNP and UP $\cap$ coUP.
    BibTeX:
    @article{NB02,
      author = {Frank Neven and Jan Van Den Bussche},
      title = {Expressiveness of Structured Document Query Languages Based on Attribute Grammars},
      journal = {Journal of the ACM},
      year = {2002},
      volume = {49},
      pages = {56--100},
      doi = {http://doi.acm.org/10.1145/505241.505245}
    }
    
    Nielson, H.R. & Nielson, F. Semantics with Applications: A Formal Introduction 1999   book URL 
    BibTeX:
    @book{NN99,
      author = {Hanne Riis Nielson and Flemming Nielson},
      title = {Semantics with Applications: A Formal Introduction},
      publisher = {Wiley Professional Computing},
      year = {1999},
      url = {http://www.daimi.au.dk/~bra8130/Wiley_book/wiley.html}
    }
    
    Niwinski, D. Fixed Points vs. Infinite Generation 1988 Logic in Computer Science (LICS), pp. 402-409  inproceedings DOI  
    Abstract: The author characterizes Rabin definability (see M.O. Rabin, 1969) of properties of infinite trees in terms of fixed-point definitions based on the basic operations of a standard powerset algebra of trees and involving the least and greatest fixed-point operators as well as the finite union operator and functional composition. A strict connection is established between a hierarchy resulting from alternating the least and greatest fixed-point operators and the hierarchy induced by Rabin indices of automata. The characterization result is actually proved on a more general level, namely, for an arbitrary powerset algebra, where the concept of Rabin automaton is replaced by the more general concept of an infinite grammar.
    BibTeX:
    @inproceedings{Niwinski88,
      author = {Damian Niwinski},
      title = {Fixed Points vs. Infinite Generation},
      booktitle = {Logic in Computer Science (LICS)},
      year = {1988},
      pages = {402--409},
      doi = {http://dx.doi.org/10.1109/LICS.1988.5137}
    }
    
    O'Hearn, P.W. Resources, Concurrency, and Local Reasoning 2007 Theoretical Computer Science
    Vol. 375, pp. 271-307 
    article DOI  
    Abstract: In this paper we show how a resource-oriented logic, separation logic, can be used to reason about the usage of resources in concurrent programs.
    BibTeX:
    @article{OHearn07,
      author = {Peter W. O'Hearn},
      title = {Resources, Concurrency, and Local Reasoning},
      journal = {Theoretical Computer Science},
      year = {2007},
      volume = {375},
      pages = {271--307},
      doi = {http://dx.doi.org/10.1016/j.tcs.2006.12.035}
    }
    
    O'Hearn, P.W., Reynolds, J.C. & Yang, H. Local Reasoning about Programs that Alter Data Structures 2001 Computer Science Logic (CSL), pp. 1-19  inproceedings DOI  
    Abstract: We describe an extension of Hoare's logic for reasoning about programs that alter data structures. We consider a low-level storage model based on a heap with associated lookup, update, allocation and deallocation operations, and unrestricted address arithmetic. The assertion language is based on a possible worlds model of the logic of bunched implications, and includes spatial conjunction and implication connectives alongside those of classical logic. Heap operations are axiomatized using what we call the ``small axioms'', each of which mentions only those cells accessed by a particular command. Through these and a number of examples we show that the formalism supports local reasoning: A specification and proof can concentrate on only those cells in memory that a program accesses.

    This paper builds on earlier work by Burstall, Reynolds, Ishtiaq and O'Hearn on reasoning about data structures.

    BibTeX:
    @inproceedings{ORY01,
      author = {Peter W. O'Hearn and John C. Reynolds and Hongseok Yang},
      title = {Local Reasoning about Programs that Alter Data Structures},
      booktitle = {Computer Science Logic (CSL)},
      year = {2001},
      pages = {1--19},
      doi = {http://dx.doi.org/10.1007/3-540-44802-0_1}
    }
    
    O'Keefe, R.A. O(1) Reversible Tree Navigation without Cycles 2001 Theory and Practice of Logic Programming
    Vol. 1, pp. 617-630 
    article DOI URL 
    BibTeX:
    @article{OKeefe01,
      author = {Richard A. O'Keefe},
      title = {O(1) Reversible Tree Navigation without Cycles},
      journal = {Theory and Practice of Logic Programming},
      year = {2001},
      volume = {1},
      pages = {617--630},
      url = {http://arxiv.org/abs/cs.PL/0406014},
      doi = {http://dx.doi.org/10.1017/S1471068401001065}
    }
    
    Odersky, M., Spoon, L. & Venners, B. Programming in Scala 2008   book URL 
    Abstract: Scala is an object-oriented programming language for the Java Virtual Machine. In addition to being object-oriented, Scala is also a functional language, and combines the best approaches to OO and functional programming.

    In Italian, Scala means a stairway, or steps--indeed, Scala lets you step up to a programming environment that incorporates some of the best recent thinking in programming language design while also letting you use all your existing Java code.

    Artima is very pleased to publish the first book on Scala, written by the designer of the language, Martin Odersky. Co-authored by Lex Spoon and Bill Venners, this book takes a step-by-step tutorial approach to teaching you Scala. Starting with the fundamental elements of the language, Programming in Scala introduces functional programming from the practitioner's perspective, and describes advanced language features that can make you a better, more productive developer.

    BibTeX:
    @book{OSV08,
      author = {Martin Odersky and Lex Spoon and Bill Venners},
      title = {Programming in Scala},
      publisher = {Artima},
      year = {2008},
      url = {http://www.artima.com/shop/programming_in_scala}
    }
    
    Ogden, W.F. & Rounds, W.C. Compositions of $n$ Tree Transducers 1972 ACM Symposium on Theory of Computing (STOC), pp. 198-206  inproceedings DOI  
    BibTeX:
    @inproceedings{OR72,
      author = {W. F. Ogden and William C. Rounds},
      title = {Compositions of $n$ Tree Transducers},
      booktitle = {ACM Symposium on Theory of Computing (STOC)},
      year = {1972},
      pages = {198--206},
      doi = {http://doi.acm.org/10.1145/800152.804915}
    }
    
    Ohori, A. & Sasano, I. Lightweight Fusion by Fixed Point Promotion 2007 Principles of Programming Languages (POPL), pp. 143-154  inproceedings DOI  
    Abstract: This paper proposes a lightweight fusion method for general recursive function definitions. Compared with existing proposals, our method has several significant practical features: it works for general recursive functions on general algebraic data types; it does not produce extra runtime overhead (except for possible code size increase due to the success of fusion); and it is readily incorporated in standard inlining optimization. This is achieved by extending the ordinary inlining process with a new fusion law that transforms a term of the form $f \circ (\mathrm{fix}\ g.\lambda x.E)$ to a new fixed point term $\mathrm{fix}\ h.E'$ by promoting the function f through the fixed point operator. This is a sound syntactic transformation rule that is not sensitive to the types of f and g. This property makes our method applicable to a wide range of functions including those with multiple parameters in both curried and uncurried forms. Although this method does not guarantee any form of completeness, it fuses typical examples discussed in the literature and others that involve accumulating parameters, either in foldl-like specific forms or in general recursive forms, without any additional machinery. In order to substantiate our claim, we have implemented our method in a compiler. Although it is preliminary, it demonstrates practical feasibility of this method.
    BibTeX:
    @inproceedings{OS07,
      author = {Atsushi Ohori and Isao Sasano},
      title = {Lightweight Fusion by Fixed Point Promotion},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2007},
      pages = {143--154},
      doi = {http://doi.acm.org/10.1145/1190215.1190241}
    }
    
    Olteanu, D. SPEX: Streamed and Progressive Evaluation of XPath 2007 IEEE Transactions on Knowledge and Data Engineering
    Vol. 19, pp. 934-949 
    article DOI  
    BibTeX:
    @article{Olteanu07,
      author = {Dan Olteanu},
      title = {SPEX: Streamed and Progressive Evaluation of XPath},
      journal = {IEEE Transactions on Knowledge and Data Engineering},
      year = {2007},
      volume = {19},
      pages = {934--949},
      doi = {http://doi.ieeecomputersociety.org/10.1109/TKDE.2007.1063}
    }
    
    Pătraşcu, M. Succincter 2008 Foundations of Computer Science (FOCS), pp. 305-313  inproceedings DOI URL 
    Abstract: We can represent an array of n values from {0,1,2} using ceil(n log_2 3) bits (arithmetic coding), but then we cannot retrieve a single element efficiently. Instead, we can encode every block of t elements using ceil(t log_2 3) bits, and bound the retrieval time by t. This gives a linear trade-off between the redundancy of the representation and the query time. In fact, this type of linear trade-off is ubiquitous in known succinct data structures, and in data compression. The folk wisdom is that if we want to waste one bit per block, the encoding is so constrained that it cannot help the query in any way. Thus, the only thing a query can do is to read the entire block and unpack it. We break this limitation and show how to use recursion to improve redundancy. It turns out that if a block is encoded with two (!) bits of redundancy, we can decode a single element, and answer many other interesting queries, in time logarithmic in the block size. Our technique allows us to revisit classic problems in succinct data structures, and give surprising new upper bounds. We also construct a locally-decodable version of arithmetic coding.
    BibTeX:
    @inproceedings{Pvatracscu08,
      author = {Mihai Pătraşcu},
      title = {Succincter},
      booktitle = {Foundations of Computer Science (FOCS)},
      year = {2008},
      pages = {305--313},
      url = {http://people.csail.mit.edu/mip/papers/index.html},
      doi = {http://dx.doi.org/10.1109/FOCS.2008.83}
    }
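    The block-wise encoding the abstract starts from is easy to demonstrate (a toy sketch of base-3 block packing only, not Pătraşcu's recursive construction; block size and names are illustrative):

```python
def pack_trits(trits, block=10):
    """Block-wise base-3 packing of values in {0,1,2}.

    Each block of t trits becomes one integer below 3**t, which fits in
    ceil(t * log2 3) bits. Decoding an element touches only its own
    block, illustrating the redundancy/query-time trade-off: larger
    blocks waste fewer bits per element but cost more time to unpack.
    """
    words = []
    for i in range(0, len(trits), block):
        w = 0
        for v in reversed(trits[i:i + block]):  # Horner's rule in base 3
            w = w * 3 + v
        words.append(w)
    return words

def get_trit(words, i, block=10):
    """Retrieve element i by unpacking only its block."""
    return (words[i // block] // 3 ** (i % block)) % 3
```

    A round-trip check: packing any trit array and reading each index back with `get_trit` recovers the original values, while each query inspects a single word.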
    
    Paakki, J. Attribute Grammar Paradigms--A High-Level Methodology in Language Implementation 1995 ACM Computing Surveys
    Vol. 27, pp. 196-255 
    article DOI  
    Abstract: Attribute grammars are a formalism for specifying programming languages. They have been applied to a great number of systems automatically producing language implementations from their specifications. The systems and their specification languages can be evaluated and classified according to their level of application support, linguistic characteristics, and degree of automation. A survey of attribute grammar-based specification languages is given. The modern advanced specification languages extend the core attribute grammar model with concepts and primitives from established programming paradigms. The main ideas behind the developed attribute grammar paradigms are discussed, and representative specification languages are presented with a common example grammar. The presentation is founded on mapping elements of attribute grammars to their counterparts in programming languages. This methodology of integrating two problem-solving disciplines together is explored with a classification of the paradigms into structured, modular, object-oriented, logic, and functional attribute grammars. The taxonomy is complemented by introducing approaches based on an implicit parallel or incremental attribute evaluation paradigm.
    BibTeX:
    @article{Paakki95,
      author = {Jukka Paakki},
      title = {Attribute Grammar Paradigms--A High-Level Methodology in Language Implementation},
      journal = {ACM Computing Surveys},
      year = {1995},
      volume = {27},
      pages = {196--255},
      doi = {http://doi.acm.org/10.1145/210376.197409}
    }
    
    Papi, M.M., Ali, M., Correa Jr., T.L., Perkins, J.H. & Ernst, M.D. Practical Pluggable Types for Java 2008 International Symposium on Software Testing and Analysis (ISSTA), pp. 201-212  inproceedings URL 
    BibTeX:
    @inproceedings{PACPE08,
      author = {Matthew M. Papi and Mahmood Ali and Telmo Luis Correa Jr. and Jeff H. Perkins and Michael D. Ernst},
      title = {Practical Pluggable Types for Java},
      booktitle = {International Symposium on Software Testing and Analysis (ISSTA)},
      year = {2008},
      pages = {201--212},
      url = {http://people.csail.mit.edu/mernst/pubs/pluggable-checkers-issta2008-abstract.html}
    }
    
    Perst, T. & Seidl, H. Macro Forest Transducers 2004 Information Processing Letters
    Vol. 89, pp. 141-149 
    article DOI  
    BibTeX:
    @article{PS04,
      author = {Thomas Perst and Helmut Seidl},
      title = {Macro Forest Transducers},
      journal = {Information Processing Letters},
      year = {2004},
      volume = {89},
      pages = {141--149},
      doi = {http://dx.doi.org/10.1016/j.ipl.2003.05.001}
    }
    
    Post, E.L. A Variant of a Recursively Unsolvable Problem 1946 Bulletin of the American Mathematical Society
    Vol. 52, pp. 264-269 
    article URL 
    BibTeX:
    @article{Post46,
      author = {Emil L. Post},
      title = {A Variant of a Recursively Unsolvable Problem},
      journal = {Bulletin of the American Mathematical Society},
      year = {1946},
      volume = {52},
      pages = {264--269},
      url = {http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.bams/1183507843&page=record}
    }
    
    Rajasekaran, S. & Yooseph, S. TAL recognition in O(M($n^2$)) time 1995 Annual Meeting of the Association for Computational Linguistics (ACL), pp. 166-173  inproceedings  
    Abstract: We propose an O(M($n^2$)) time algorithm for the recognition of Tree Adjoining Languages (TALs), where n is the size of the input string and M(k) is the time needed to multiply two $k \times k$ boolean matrices. Tree Adjoining Grammars (TAGs) are formalisms suitable for natural language processing and have received enormous attention in the past among not only natural language processing researchers but also algorithms designers. The first polynomial time algorithm for TAL parsing was proposed in 1986 and had a run time of O($n^6$).
    BibTeX:
    @inproceedings{RY95,
      author = {Sanguthevar Rajasekaran and Shibu Yooseph},
      title = {TAL recognition in O(M($n^2$)) time},
      booktitle = {Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {1995},
      pages = {166--173}
    }
    
    Rayward-Smith, V.J. Hypergrammars: An Extension of Macrogrammars 1977 Journal of Computer and System Sciences
    Vol. 14, pp. 130-149 
    article DOI  
    Abstract: A new class of generative grammars called hypergrammars is introduced. They are described as a natural extension of Fischer's macrogrammars. Three modes of derivation, inside-out, outside-in, and unrestricted are considered, and the classes of languages so defined are compared with other known classes. It is shown that the outside-in hyper-languages are the same as the outside-in macrolanguages but that inside-out hyperlanguages are the same as Fischer's quoted languages. Various closure properties are considered as well as generalizations of the original definitions. Three new hierarchies of languages each embedded in the class of quoted languages are discovered. It is claimed that this new approach to Fischer's work is more understandable and also mathematically elegant.
    BibTeX:
    @article{RaywardSmith77,
      author = {Victor J. Rayward-Smith},
      title = {Hypergrammars: An Extension of Macrogrammars},
      journal = {Journal of Computer and System Sciences},
      year = {1977},
      volume = {14},
      pages = {130--149},
      doi = {http://dx.doi.org/10.1016/S0022-0000(77)80043-3}
    }
    
    Raza, M. & Gardner, P. Footprints in Local Reasoning 2008 Foundations of Software Science and Computation Structures (FoSSaCS), pp. 201-215  inproceedings DOI  
    Abstract: Local reasoning about programs exploits the natural local behaviour common in programs by focussing on the footprint - that part of the resource accessed by the program. We address the problem of formally characterising and analysing the footprint notion for abstract local functions introduced by Calcagno, O'Hearn and Yang. With our definition, we prove that the footprints are the only essential elements required for a complete specification of a local function. We also show that, for well-founded models (which is usually the case in practice), a smallest specification always exists that only includes the footprints, thus formalising the notion of small axioms in local reasoning. We also present results for the non-well-founded case, and introduce the natural class of one-step local functions for which the footprints are the smallest safe states.
    BibTeX:
    @inproceedings{RG08,
      author = {Mohammad Raza and Philippa Gardner},
      title = {Footprints in Local Reasoning},
      booktitle = {Foundations of Software Science and Computation Structures (FoSSaCS)},
      year = {2008},
      pages = {201--215},
      doi = {http://dx.doi.org/10.1007/978-3-540-78499-9_15}
    }
    
    Reynolds, C.W. Flocks, Herds and Schools: A Distributed Behavioral Model 1987 Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 25-34  inproceedings DOI URL 
    Abstract: The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the "animator." The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.
    BibTeX:
    @inproceedings{Reynolds87,
      author = {Craig W. Reynolds},
      title = {Flocks, Herds and Schools: A Distributed Behavioral Model},
      booktitle = {Computer Graphics and Interactive Techniques (SIGGRAPH)},
      year = {1987},
      pages = {25--34},
      url = {http://www.red3d.com/cwr/boids/},
      doi = {http://doi.acm.org/10.1145/37401.37406}
    }
    
    Rigo, A. Representation-Based Just-In-Time Specialization and the Psyco Prototype for Python 2004 Partial Evaluation and Semantics-Based Program Manipulation (PEPM), pp. 15-26  inproceedings DOI  
    BibTeX:
    @inproceedings{Rigo04,
      author = {Armin Rigo},
      title = {Representation-Based Just-In-Time Specialization and the Psyco Prototype for Python},
      booktitle = {Partial Evaluation and Semantics-Based Program Manipulation (PEPM)},
      year = {2004},
      pages = {15--26},
      doi = {http://dx.doi.org/10.1145/1014007.1014010}
    }
    
    Rodriguez-Hortala, J. A Hierarchy of Semantics for Non-deterministic Term Rewriting Systems 2008 Foundations of Software Technology and Theoretical Computer Science (FSTTCS)  inproceedings URL 
    Abstract: Formalisms involving some degree of nondeterminism are frequent in computer science. In particular, various programming or specification languages are based on term rewriting systems where confluence is not required. In this paper we examine three concrete possible semantics for non-determinism that can be assigned to those programs. Two of them --call-time choice and run-time choice-- are quite well-known, while the third one --plural semantics-- is investigated for the first time in the context of term rewriting based programming languages. We investigate some basic intrinsic properties of the semantics and establish some relationships between them: we show that the three semantics form a hierarchy in the sense of set inclusion, and we prove that call-time choice and plural semantics enjoy a remarkable compositionality property that fails for run-time choice; finally, we show how to express plural semantics within run-time choice by means of a program transformation, for which we prove its adequacy.
    BibTeX:
    @inproceedings{RH08,
      author = {Juan Rodriguez-Hortala},
      title = {A Hierarchy of Semantics for Non-deterministic Term Rewriting Systems},
      booktitle = {Foundations of Software Technology and Theoretical Computer Science (FSTTCS)},
      year = {2008},
      url = {http://drops.dagstuhl.de/portals/FSTTCS08/}
    }
    
    Rounds, W.C. Mappings and Grammars on Trees 1970 Mathematical Systems Theory
    Vol. 4, pp. 257-287 
    article DOI  
    BibTeX:
    @article{Rounds70,
      author = {William C. Rounds},
      title = {Mappings and Grammars on Trees},
      journal = {Mathematical Systems Theory},
      year = {1970},
      volume = {4},
      pages = {257--287},
      doi = {http://dx.doi.org/10.1007/BF01695769}
    }
    
    Rounds, W.C. Complexity of Recognition in Intermediate-Level Languages 1973 Foundations of Computer Science (FOCS), pp. 145-158  inproceedings  
    BibTeX:
    @inproceedings{Rounds73,
      author = {William C. Rounds},
      title = {Complexity of Recognition in Intermediate-Level Languages},
      booktitle = {Foundations of Computer Science (FOCS)},
      year = {1973},
      pages = {145--158}
    }
    
    Rozenberg, G. Extension of Tabled 0$L$-Systems and Languages 1973 International Journal of Computer and Information Sciences
    Vol. 2, pp. 311-336 
    article DOI  
    Abstract: This paper introduces a new family of languages which originated from a study of some mathematical models for the development of biological organisms. Various properties of this family are established and in particular it is proved that it forms a full abstract family of languages. It is compared with some other families of languages which have already been studied and which either originated from the study of models for biological development or belong to the now standard Chomsky hierarchy. A characterization theorem for context-free languages is also established.
    BibTeX:
    @article{Rozenberg73,
      author = {Grzegorz Rozenberg},
      title = {Extension of Tabled 0$L$-Systems and Languages},
      journal = {International Journal of Computer and Information Sciences},
      year = {1973},
      volume = {2},
      pages = {311--336},
      doi = {http://dx.doi.org/10.1007/BF00985664}
    }
    
    Rozenberg, G. T0L Systems and Languages 1973 Information and Control
    Vol. 23, pp. 357-381 
    article DOI  
    Abstract: We discuss a family of systems and languages (called TOL) which have originally arisen from the study of mathematical models for the development of some biological organisms. From a formal language theory point of view, a TOL system is a rewriting system where at each step of a derivation every symbol in a string is rewritten in a context-free way, but different rewriting steps may use different sets of production rules and the language consists of all strings derivable from the single fixed string (the axiom).

    The family of TOL languages (as well as its different subfamilies considered here) is not closed with respect to usually considered operations; it is "incomparable" with context-free languages, but it is contained in the family of context-free programmed languages. TOL languages form an infinite hierarchy with respect to "natural" complexity measures introduced in this paper.

    BibTeX:
    @article{Rozenberg73a,
      author = {Grzegorz Rozenberg},
      title = {T0L Systems and Languages},
      journal = {Information and Control},
      year = {1973},
      volume = {23},
      pages = {357--381},
      doi = {http://dx.doi.org/10.1016/S0019-9958(73)80004-X}
    }
    
    Rozenberg, G. & Doucet, P. On 0L-Languages 1971 Information and Control
    Vol. 19, pp. 302-318 
    article DOI  
    Abstract: In 0L-languages, words are produced from each other by the simultaneous transition of all letters according to a set of production rules; the context is ignored.

    (i) 0L-languages are not closed under the operations usually considered.

    (ii) 0L-languages over a one-letter alphabet are discussed separately; a characterization is given of a subclass.

    (iii) 0L-languages are incomparable with regular sets, incomparable with context-free languages, and strictly included in context-sensitive languages

    BibTeX:
    @article{RD71,
      author = {Grzegorz Rozenberg and P.G. Doucet},
      title = {On 0L-Languages},
      journal = {Information and Control},
      year = {1971},
      volume = {19},
      pages = {302--318},
      doi = {http://dx.doi.org/10.1016/S0019-9958(71)90164-1}
    }
    
    Rémy, D. & Yakobowski, B. From ML to ML$^F$: Graphic Type Constraints with Efficient Type Inference 2008 International Conference on Functional Programming (ICFP), pp. 63-74  inproceedings DOI  
    Abstract: ML$^F$ is a type system that seamlessly merges ML-style type inference with System-F polymorphism. We propose a system of graphic (type) constraints that can be used to perform type inference in both ML or ML$^F$. We show that this constraint system is a small extension of the formalism of graphic types, originally introduced to represent ML$^F$ types. We give a few semantic preserving transformations on constraints and propose a strategy for applying them to solve constraints. We show that the resulting algorithm has optimal complexity for ML$^F$ type inference, and argue that, as for ML, this complexity is linear under reasonable assumptions.
    BibTeX:
    @inproceedings{RY08,
      author = {Didier Rémy and Boris Yakobowski},
      title = {From ML to ML$^F$: Graphic Type Constraints with Efficient Type Inference},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {63--74},
      doi = {http://doi.acm.org/10.1145/1411204.1411216}
    }
    
    Saito, T.L. Purifying XML Structures 2006 School: The University of Tokyo  phdthesis URL 
    BibTeX:
    @phdthesis{Saito06,
      author = {Taro Leo Saito},
      title = {Purifying XML Structures},
      school = {The University of Tokyo},
      year = {2006},
      url = {http://www.xerial.org/trac/Xerial/wiki/Publications}
    }
    
    Salomaa, K. Deterministic Tree Pushdown Automata and Monadic Tree Rewriting Systems 1988 Journal of Computer and System Sciences
    Vol. 37, pp. 367-394 
    article DOI  
    Review: read rule. sigma(gamma1(xs1), ..., gammak(xsk)) mapsto gamma(gamma1(xs1), ..., gammak(xsk))

    ex-tpa eps rule. Tree(Gamma cup X) mapsto gamma(X or Gamma0 ...)

    del-tpa eps rule. Tree(Gamma cup X) mapsto gamma(x1, ..., xn) where X=x1,...,xn,xn+1,...xm

    ba-tpa eps rule. Tree(Gamma cup X) mapsto gamma(x1, ..., xn) where X=x1,...,xn exactly

    BibTeX:
    @article{Salomaa88,
      author = {Kai Salomaa},
      title = {Deterministic Tree Pushdown Automata and Monadic Tree Rewriting Systems},
      journal = {Journal of Computer and System Sciences},
      year = {1988},
      volume = {37},
      pages = {367--394},
      doi = {http://dx.doi.org/10.1016/0022-0000(88)90014-1}
    }
    
    Salomaa, K. Yield-languages of Two-way Pushdown Tree Automata 1996 Information Processing Letters
    Vol. 58, pp. 195-199 
    article DOI  
    BibTeX:
    @article{Salomaa96,
      author = {Kai Salomaa},
      title = {Yield-languages of Two-way Pushdown Tree Automata},
      journal = {Information Processing Letters},
      year = {1996},
      volume = {58},
      pages = {195--199},
      doi = {http://dx.doi.org/10.1016/0020-0190(96)00048-8}
    }
    
    Sanders, P., Egner, S. & Tolhuizen, L. Polynomial Time Algorithms for Network Information Flow 2003 Symposium on Parallel Algorithms and Architectures (SPAA), pp. 286-294  inproceedings DOI  
    Abstract: The famous max-flow min-cut theorem states that a source node s can send information through a network (V,E) to a sink node t at a data rate determined by the min-cut separating s and t. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. In contrast, we present graphs where without coding the rate must be a factor $\Omega(\log |V|)$ smaller. However, so far no fast algorithms for constructing appropriate coding schemes were known. Our main results are polynomial time algorithms for constructing coding schemes for multicasting at the maximal data rate.
    BibTeX:
    @inproceedings{SET03,
      author = {Peter Sanders and Sebastian Egner and Ludo Tolhuizen},
      title = {Polynomial Time Algorithms for Network Information Flow},
      booktitle = {Symposium on Parallel Algorithms and Architectures (SPAA)},
      year = {2003},
      pages = {286--294},
      doi = {http://doi.acm.org/10.1145/777412.777464}
    }
    
    Sato, K. The Strength of Extensionality I -- Weak Weak Set Theories with Infinity 2009 Annals of Pure and Applied Logic
    Vol. 157, pp. 234-268 
    article DOI  
    Abstract: We measure, in the presence of the axiom of infinity, the proof-theoretic strength of the axioms of set theory which make the theory look really like a ``theory of sets'', namely, the axiom of extensionality Ext, separation axioms and the axiom of regularity Reg (and the axiom of choice AC). We first introduce a weak weak set theory (which has the axioms of infinity and of collapsing) as a base over which to clarify the strength of these axioms. We then prove the following results about proof-theoretic ordinals:

    1. |Basic| = $\omega^\omega$ and |Basic + Ext| = $\varepsilon_0$.

    2. |Basic+$\Delta_0$-Sep| = $\varepsilon_0$ and |Basic+$\Delta_0$-Sep+Ext| = $\Gamma_0$.

    We also show that neither Reg nor AC affects the proof-theoretic strength, i.e., |T|=|T+Reg|=|T+AC|=|T+Reg+AC|, where T is Basic plus any combination of Ext and $\Delta_0$-Sep.

    BibTeX:
    @article{Sato09,
      author = {Kentaro Sato},
      title = {The Strength of Extensionality I -- Weak Weak Set Theories with Infinity},
      journal = {Annals of Pure and Applied Logic},
      year = {2009},
      volume = {157},
      pages = {234--268},
      doi = {http://dx.doi.org/10.1016/j.apal.2008.09.010}
    }
    
    Savitch, W.J. Relationships between Nondeterministic and Deterministic Tape Complexities 1970 Journal of Computer and System Sciences
    Vol. 4, pp. 177-192 
    article DOI  
    BibTeX:
    @article{Savitch70,
      author = {Walter J. Savitch},
      title = {Relationships between Nondeterministic and Deterministic Tape Complexities},
      journal = {Journal of Computer and System Sciences},
      year = {1970},
      volume = {4},
      pages = {177--192},
      doi = {http://dx.doi.org/10.1016/S0022-0000(70)80006-X}
    }
    
    Schützenberger, M.P. On Context-Free Languages and Push-Down Automata 1963 Information and Control
    Vol. 6, pp. 246-264 
    article DOI  
    Abstract: This note describes a special type of one-way, one-tape automata in the sense of Rabin and Scott that idealizes some of the elementary formal features used in the so-called "push-down store" programming techniques. It is verified that the sets of words accepted by these automata form a proper subset of the family of the unambiguous context-free languages of Chomsky's and that this property admits a weak converse.
    BibTeX:
    @article{Schutzenberger63,
      author = {M. P. Schützenberger},
      title = {On Context-Free Languages and Push-Down Automata},
      journal = {Information and Control},
      year = {1963},
      volume = {6},
      pages = {246--264},
      doi = {http://dx.doi.org/10.1016/S0019-9958(63)90306-1}
    }
    
    Schimpf, K.M. & Gallier, J.H. Tree Pushdown Automata 1985 Journal of Computer and System Sciences
    Vol. 30, pp. 25-40 
    article DOI  
    Abstract: This paper presents a new type of automaton called a tree pushdown automaton (a bottom-up tree automaton augmented with internal memory in the form of a tree, similar to the way a stack is added to a finite state machine to produce a pushdown automaton) and shows that the class of languages recognized by such automata is identical to the class of context-free tree languages.
    BibTeX:
    @article{SG85,
      author = {Karl M. Schimpf and Jean H. Gallier},
      title = {Tree Pushdown Automata},
      journal = {Journal of Computer and System Sciences},
      year = {1985},
      volume = {30},
      pages = {25--40},
      doi = {http://dx.doi.org/10.1016/0022-0000(85)90002-9}
    }
    
    Seidel, R. & Sharir, M. Top-Down Analysis of Path Compression 2005 SIAM Journal on Computing
    Vol. 34, pp. 515-525 
    article DOI  
    Abstract: We present a new analysis of the worst-case cost of path compression, which is an operation that is used in various well-known "union-find" algorithms. In contrast to previous analyses which are essentially based on bottom-up approaches, our method proceeds top-down, yielding recurrence relations from which the various bounds arise naturally. In particular the famous quasi-linear bound involving the inverse Ackermann function can be derived without having to introduce the Ackermann function itself.
    BibTeX:
    @article{SS05,
      author = {Raimund Seidel and Micha Sharir},
      title = {Top-Down Analysis of Path Compression},
      journal = {SIAM Journal on Computing},
      year = {2005},
      volume = {34},
      pages = {515--525},
      doi = {http://dx.doi.org/10.1137/S0097539703439088}
    }
    
    Seidl, H. A Quadratic Regularity Test for Non-deleting Macro S Grammars 1985 Fundamentals of Computation Theory (FCT), pp. 422-430  inproceedings DOI  
    BibTeX:
    @inproceedings{Seidl85,
      author = {Helmut Seidl},
      title = {A Quadratic Regularity Test for Non-deleting Macro S Grammars},
      booktitle = {Fundamentals of Computation Theory (FCT)},
      year = {1985},
      pages = {422--430},
      doi = {http://dx.doi.org/10.1007/BFb0028826}
    }
    
    Seidl, H. Single-Valuedness of Tree Transducers is Decidable in Polynomial Time 1992 Theoretical Computer Science
    Vol. 106, pp. 135-181 
    article DOI  
    Abstract: A bottom-up finite-state tree transducer (FST) A is called single-valued iff for every input tree there is at most one output tree.

    We give a polynomial-time algorithm which decides whether or not a given FST is single-valued.

    - the freedom of the submonoid of trees which contain at least one occurrence of one variable *;

    - the succinct representation of trees by graphs;

    - a sequence of normalizing transformations of the given transducer; and

    - a polynomially decidable characterization of pairs of equivalent output functions.

    We apply these methods to show that finite-valuedness is decidable in polynomial time as well.

    BibTeX:
    @article{Seidl92,
      author = {Helmut Seidl},
      title = {Single-Valuedness of Tree Transducers is Decidable in Polynomial Time},
      journal = {Theoretical Computer Science},
      year = {1992},
      volume = {106},
      pages = {135--181},
      doi = {http://dx.doi.org/10.1016/0304-3975(92)90281-J}
    }
    
    Seidl, H. When is a Functional Tree Transduction Deterministic? 1993 Theory and Practice of Software Development (TAPSOFT)  inproceedings DOI  
    Abstract: We give a decision procedure to determine whether or not the transduction of a functional transducer can be realized by a deterministic (resp. reduced deterministic) transducer. In case this is possible we exhibit a general construction to build this transducer.
    BibTeX:
    @inproceedings{Seidl93,
      author = {Helmut Seidl},
      title = {When is a Functional Tree Transduction Deterministic?},
      booktitle = {Theory and Practice of Software Development (TAPSOFT)},
      year = {1993},
      doi = {http://dx.doi.org/10.1007/3-540-56610-4_69}
    }
    
    Seidl, H. Haskell Overloading is DEXPTIME-complete 1994 Information Processing Letters
    Vol. 52, pp. 57-60 
    article DOI  
    BibTeX:
    @article{Seidl94,
      author = {Helmut Seidl},
      title = {Haskell Overloading is DEXPTIME-complete},
      journal = {Information Processing Letters},
      year = {1994},
      volume = {52},
      pages = {57--60},
      doi = {http://dx.doi.org/10.1016/0020-0190(94)00130-8}
    }
    
    Seki, H. & Kato, Y. On the Generative Power of Multiple Context-Free Grammars and Macro Grammars 2008 IEICE Transactions on Information and Systems
    Vol. E91-D, pp. 209-221 
    article DOI  
    Abstract: Several grammars whose generative power lies between context-free grammars and context-sensitive grammars have been proposed. Among them are macro grammars and tree adjoining grammars. The multiple context-free grammar is also a natural extension of context-free grammars, and is known to be stronger in its generative power than tree adjoining grammars and yet to be recognizable in polynomial time. In this paper, the generative power of several subclasses of variable-linear macro grammars and that of multiple context-free grammars are compared in detail.
    BibTeX:
    @article{SK08,
      author = {Hiroyuki Seki and Yuki Kato},
      title = {On the Generative Power of Multiple Context-Free Grammars and Macro Grammars},
      journal = {IEICE Transactions on Information and Systems},
      year = {2008},
      volume = {E91-D},
      pages = {209--221},
      doi = {http://dx.doi.org/10.1093/ietisy/e91-d.2.209}
    }
    
    Seki, H., Matsumura, T., Fujii, M. & Kasami, T. On Multiple Context-Free Grammars 1991 Theoretical Computer Science
    Vol. 88, pp. 191-229 
    article DOI  
    Abstract: Multiple context-free grammars (mcfg's) is a subclass of generalized context-free grammars introduced by Pollard (1984) in order to describe the syntax of natural languages. The class of languages generated by mcfg's (called multiple context-free languages or, shortly, mcfl's) properly includes the class of context-free languages and is properly included in the class of context-sensitive languages. First, the paper presents results on the generative capacity of mcfg's and also on the properties of mcfl's such as formal language-theoretic closure properties. Next, it is shown that the time complexity of the membership problem for multiple context-free languages is O(ne), where n is the length of an input string and e is a constant called the degree of a given mcfg. Head grammars (hg's) introduced by Pollard and tree adjoining grammars (tag's) introduced by Joshi et al. (1975) are also grammatical formalisms to describe the syntax of natural languages. The paper also presents the following results on the generative capacities of hg's, tag's and 2-mcfg's, which are a subclass of mcfg's: (1) The class HL of languages generated by hg's is the same as the one generated by tag's; (2) HL is the same as the one generated by left-wrapping hg's (or right-wrapping hg's) which is a proper subclass of hg's; (3) HL is properly included in the one generated by 2-mcfg's. As a corollary of (1), it is also shown that HL is a substitution-closed full AFL.
    BibTeX:
    @article{SMFK91,
      author = {Hiroyuki Seki and Takashi Matsumura and Mamoru Fujii and Tadao Kasami},
      title = {On Multiple Context-Free Grammars},
      journal = {Theoretical Computer Science},
      year = {1991},
      volume = {88},
      pages = {191--229},
      doi = {http://dx.doi.org/10.1016/0304-3975(91)90374-B}
    }
    
    Downey, P.J., Sethi, R. & Tarjan, R.E. Variations on the Common Subexpression Problem 1980 Journal of the ACM
    Vol. 27, pp. 758-771 
    article DOI  
    BibTeX:
    @article{DST80,
      author = {Peter J. Downey and Ravi Sethi and Robert Endre Tarjan},
      title = {Variations on the Common Subexpression Problem},
      journal = {Journal of the ACM},
      year = {1980},
      volume = {27},
      pages = {758--771},
      doi = {http://doi.acm.org/10.1145/322217.322228}
    }
    
    Shepherdson, J.C. The Reduction of Two-Way Automata to One-Way Automata 1959 IBM Journal of Research and Development
    Vol. 3, pp. 198-200 
    article  
    BibTeX:
    @article{Shepherdson59,
      author = {J. C. Shepherdson},
      title = {The Reduction of Two-Way Automata to One-Way Automata},
      journal = {IBM Journal of Research and Development},
      year = {1959},
      volume = {3},
      pages = {198--200}
    }
    
    Shivers, O. & Fisher, D. Multi-Return Function Call 2004 International Conference on Functional Programming (ICFP), pp. 79-86  inproceedings DOI  
    Abstract: It is possible to extend the basic notion of "function call" to allow functions to have multiple return points. This turns out to be a surprisingly useful mechanism. This paper conducts a fairly wide-ranging tour of such a feature: a formal semantics for a minimal $\lambda$-calculus capturing the mechanism; a motivating example; a static type system; useful transformations; implementation concerns and experience with an implementation; and comparison to related mechanisms, such as exceptions, sum-types and explicit continuations. We conclude that multiple-return function call is not only a useful and expressive mechanism, both at the source-code and intermediate-representation level, but is also quite inexpensive to implement.
    BibTeX:
    @inproceedings{SF04,
      author = {Olin Shivers and David Fisher},
      title = {Multi-Return Function Call},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2004},
      pages = {79--86},
      doi = {http://doi.acm.org/10.1145/1016850.1016864}
    }
    
    Shivers, O. & Wand, M. Bottom-Up $\beta$-Substitution: Uplinks and $\lambda$-DAGs 2004 (RS-04-38)  techreport URL 
    Abstract: Terms of the lambda-calculus are one of the most important data structures we have in computer science. Among their uses are representing program terms, advanced type systems, and proofs in theorem provers. Unfortunately, heavy use of this data structure can become intractable in time and space; the typical culprit is the fundamental operation of beta reduction.

    If we represent a lambda-calculus term as a DAG rather than a tree, we can efficiently represent the sharing that arises from beta reduction, thus avoiding combinatorial explosion in space. By adding uplinks from a child to its parents, we can efficiently implement beta reduction in a bottom-up manner, thus avoiding combinatorial explosion in time required to search the term in a top-down fashion.

    We present an algorithm for performing beta reduction on lambda terms represented as uplinked DAGs; describe its proof of correctness; discuss its relation to alternate techniques such as Lamping graphs, the suspension lambda-calculus (SLC) and director strings; and present some timings of an implementation.

    Besides being both fast and parsimonious of space, the algorithm is particularly suited to applications such as compilers, theorem provers, and type-manipulation systems that may need to examine terms in-between reductions - i.e., the ``readback'' problem for our representation is trivial.

    Like Lamping graphs, and unlike director strings or the suspension lambda-calculus, the algorithm functions by side-effecting the term containing the redex; the representation is not a ``persistent'' one.

    The algorithm additionally has the charm of being quite simple; a complete implementation of the core data structures and algorithms is 180 lines of fairly straightforward SML.

    BibTeX:
    @techreport{SW04,
      author = {Olin Shivers and Mitchell Wand},
      title = {Bottom-Up $\beta$-Substitution: Uplinks and $\lambda$-DAGs},
      year = {2004},
      number = {RS-04-38},
      url = {http://www.brics.dk/RS/04/38/}
    }
    
    Silva, J. A Comparative Study of Algorithmic Debugging Strategies 2006 Logic-Based Program Synthesis and Transformation (LOPSTR), pp. 143-159  inproceedings DOI  
    BibTeX:
    @inproceedings{Silva06,
      author = {Josep Silva},
      title = {A Comparative Study of Algorithmic Debugging Strategies},
      booktitle = {Logic-Based Program Synthesis and Transformation (LOPSTR)},
      year = {2006},
      pages = {143--159},
      doi = {http://dx.doi.org/10.1007/978-3-540-71410-1_11}
    }
    
    Simon, I. Factorization Forests of Finite Height 1990 Theoretical Computer Science
    Vol. 72, pp. 65-94 
    article DOI  
    BibTeX:
    @article{Simon90,
      author = {Imre Simon},
      title = {Factorization Forests of Finite Height},
      journal = {Theoretical Computer Science},
      year = {1990},
      volume = {72},
      pages = {65--94},
      doi = {http://dx.doi.org/10.1016/0304-3975(90)90047-L}
    }
    
    Storer, J.A. & Szymanski, T.G. Data Compression via Textual Substitution 1982 Journal of the ACM
    Vol. 29, pp. 928-951 
    article DOI  
    BibTeX:
    @article{SS82,
      author = {James A. Storer and Thomas G. Szymanski},
      title = {Data Compression via Textual Substitution},
      journal = {Journal of the ACM},
      year = {1982},
      volume = {29},
      pages = {928--951},
      doi = {http://doi.acm.org/10.1145/322344.322346}
    }
    
    Syme, D., Neverov, G. & Margetson, J. Extensible Pattern Matching via a Lightweight Language Extension 2007 ACM SIGPLAN Notices
    Vol. 42, pp. 29-40 
    article DOI  
    Abstract: Pattern matching of algebraic data types (ADTs) is a standard feature in typed functional programming languages, but it is well known that it interacts poorly with abstraction. While several partial solutions to this problem have been proposed, few have been implemented or used. This paper describes an extension to the .NET language F# called active patterns, which supports pattern matching over abstract representations of generic heterogeneous data such as XML and term structures, including where these are represented via object models in other .NET languages. Our design is the first to incorporate both ad hoc pattern matching functions for partial decompositions and "views" for total decompositions, and yet remains a simple and lightweight extension. We give a description of the language extension along with numerous motivating examples. Finally we describe how this feature would interact with other reasonable and related language extensions: existential types quantified at data discrimination tags, GADTs, and monadic generalizations of pattern matching.
    BibTeX:
    @article{SNM07,
      author = {Don Syme and Gregory Neverov and James Margetson},
      title = {Extensible Pattern Matching via a Lightweight Language Extension},
      journal = {ACM SIGPLAN Notices},
      year = {2007},
      volume = {42},
      pages = {29--40},
      doi = {http://doi.acm.org/10.1145/1291220.1291159}
    }
    
    Sénizergues, G. L(A)=L(B)? a Simplified Decidability Proof 2002 Theoretical Computer Science
    Vol. 281, pp. 555-608 
    article DOI  
    BibTeX:
    @article{Senizergues02,
      author = {Géraud Sénizergues},
      title = {L(A)=L(B)? a Simplified Decidability Proof},
      journal = {Theoretical Computer Science},
      year = {2002},
      volume = {281},
      pages = {555--608},
      doi = {http://dx.doi.org/10.1016/S0304-3975(02)00027-0}
    }
    
    Takahashi, M. Generalizations of Regular Sets and Their Application to a Study of Context-Free Languages 1975 Information and Control
    Vol. 27, pp. 1-36 
    article DOI  
    Abstract: We extend the notion of regular sets of strings to those of trees and of forests in a unified mathematical approach, and investigate their properties. Then by taking certain one-dimensional expressions of these objects, we come to an interesting subclass of CF languages defined over paired alphabets. They are shown to form a Boolean algebra with the Dyck set as the universe, and to play an important role in the whole class of CF languages. In particular, using the subclass we prove a refinement of the well-known Chomsky-Schützenberger Theorem, and also prove that the decision procedure for parenthesis grammars can be extended to a broader class of CF grammars.
    BibTeX:
    @article{Takahashi75,
      author = {Masako Takahashi},
      title = {Generalizations of Regular Sets and Their Application to a Study of Context-Free Languages},
      journal = {Information and Control},
      year = {1975},
      volume = {27},
      pages = {1--36},
      doi = {http://dx.doi.org/10.1016/S0019-9958(75)90058-3}
    }
    
    Takano, A. & Meijer, E. Shortcut Deforestation in Calculational Form 1995 Functional Programming Languages and Computer Architecture (FPCA), pp. 306-313  inproceedings DOI  
    BibTeX:
    @inproceedings{TM95,
      author = {Akihiko Takano and Erik Meijer},
      title = {Shortcut Deforestation in Calculational Form},
      booktitle = {Functional Programming Languages and Computer Architecture (FPCA)},
      year = {1995},
      pages = {306--313},
      doi = {http://doi.acm.org/10.1145/224164.224221}
    }
    
    Thatcher, J.W. Generalized$^2$ Sequential Machine Maps 1970 Journal of Computer and System Sciences
    Vol. 4, pp. 339-367 
    article DOI  
    BibTeX:
    @article{Thatcher70,
      author = {James W. Thatcher},
      title = {Generalized$^2$ Sequential Machine Maps},
      journal = {Journal of Computer and System Sciences},
      year = {1970},
      volume = {4},
      pages = {339--367},
      doi = {http://dx.doi.org/10.1016/S0022-0000(70)80017-4}
    }
    
    Thatcher, J.W. & Wright, J.B. Generalized Finite Automata Theory with an Application to a Decision Problem of Second-Order Logic 1968 Mathematical Systems Theory
    Vol. 2, pp. 57-81 
    article DOI  
    Abstract: Many of the important concepts and results of conventional finite automata theory are developed for a generalization in which finite algebras take the place of finite automata. The standard closure theorems are proved for the class of sets recognizable by finite algebras, and a generalization of Kleene's regularity theory is presented. The theorems of the generalized theory are then applied to obtain a positive solution to a decision problem of second-order logic.
    BibTeX:
    @article{TW68,
      author = {James W. Thatcher and Jesse B. Wright},
      title = {Generalized Finite Automata Theory with an Application to a Decision Problem of Second-Order Logic},
      journal = {Mathematical Systems Theory},
      year = {1968},
      volume = {2},
      pages = {57--81},
      doi = {http://dx.doi.org/10.1007/BF01691346}
    }
    
    Tiede, H. & Kepser, S. Monadic Second-Order Logic and Transitive Closure Logics over Trees 2006 Workshop on Logic, Language, Information and Computation (WoLLIC), pp. 189-199  inproceedings DOI  
    Abstract: Model theoretic syntax is concerned with studying the descriptive complexity of grammar formalisms for natural languages by defining their derivation trees in suitable logical formalisms. The central tool for model theoretic syntax has been monadic second-order logic (MSO). Much of the recent research in this area has been concerned with finding more expressive logics to capture the derivation trees of grammar formalisms that generate non-context-free languages. The motivation behind this search for more expressive logics is to describe formally certain mildly context-sensitive phenomena of natural languages. Several extensions to MSO have been proposed, most of which no longer define the derivation trees of grammar formalisms directly, while others introduce logically odd restrictions. We therefore propose to consider first-order transitive closure logic. In this logic, derivation trees can be defined in a direct way. Our main result is that transitive closure logic, even deterministic transitive closure logic, is more expressive in defining classes of tree languages than MSO. (Deterministic) transitive closure logics are capable of defining non-regular tree languages that are of interest to linguistics.
    BibTeX:
    @inproceedings{TK06,
      author = {Hans-Jörg Tiede and Stephan Kepser},
      title = {Monadic Second-Order Logic and Transitive Closure Logics over Trees},
      booktitle = {Workshop on Logic, Language, Information and Computation (WoLLIC)},
      year = {2006},
      pages = {189--199},
      doi = {http://dx.doi.org/10.1016/j.entcs.2006.05.044}
    }
    
    Tozawa, A. Towards Static Type Checking for XSLT 2001 ACM Symposium on Document Engineering, pp. 18-27  inproceedings DOI  
    BibTeX:
    @inproceedings{Tozawa01,
      author = {Akihiko Tozawa},
      title = {Towards Static Type Checking for XSLT},
      booktitle = {ACM Symposium on Document Engineering},
      year = {2001},
      pages = {18--27},
      doi = {http://dx.doi.org/10.1145/502187.502191}
    }
    
    Tozawa, A. XML Type Checking Using High-Level Tree Transducer 2006 Functional and Logic Programming (FLOPS), pp. 81-96  inproceedings  
    BibTeX:
    @inproceedings{Tozawa06,
      author = {Akihiko Tozawa},
      title = {XML Type Checking Using High-Level Tree Transducer},
      booktitle = {Functional and Logic Programming (FLOPS)},
      year = {2006},
      pages = {81--96}
    }
    
    Vafeiadis, V. & Parkinson, M. A Marriage of Rely/Guarantee and Separation Logic 2007 Concurrency Theory (CONCUR), pp. 256-271  inproceedings DOI  
    Abstract: In the quest for tractable methods for reasoning about concurrent algorithms both rely/guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely-guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses.

    We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely/guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes/frees removed nodes.

    BibTeX:
    @inproceedings{VP07,
      author = {Viktor Vafeiadis and Matthew Parkinson},
      title = {A Marriage of Rely/Guarantee and Separation Logic},
      booktitle = {Concurrency Theory (CONCUR)},
      year = {2007},
      pages = {256--271},
      doi = {http://dx.doi.org/10.1007/978-3-540-74407-8_18}
    }
    
    Valiant, L.G. General Context-Free Recognition in Less Than Cubic Time 1975 Journal of Computer and System Sciences
    Vol. 10, pp. 308-314 
    article DOI  
    Abstract: An algorithm for general context-free recognition is given that requires less than $n^3$ time asymptotically for input strings of length $n$.
    BibTeX:
    @article{Valiant75a,
      author = {Leslie G. Valiant},
      title = {General Context-Free Recognition in Less Than Cubic Time},
      journal = {Journal of Computer and System Sciences},
      year = {1975},
      volume = {10},
      pages = {308--314},
      doi = {http://dx.doi.org/10.1016/S0022-0000(75)80046-8}
    }
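Valiant obtains the sub-cubic bound by reducing recognition to Boolean matrix multiplication; the classical $O(n^3)$ dynamic-programming recognizer it improves on can be sketched as follows (a minimal illustration, assuming a grammar already in Chomsky normal form; the rule encoding is invented for this sketch):

```python
# Classical O(n^3) CYK recognition for a grammar in Chomsky normal form.
# Valiant's result improves on this cubic bound by expressing the
# closure of the parse table as Boolean matrix multiplication.

def cyk_recognize(word, start, unary, binary):
    """unary: set of (A, a) terminal rules; binary: set of (A, B, C) rules A -> B C."""
    n = len(word)
    if n == 0:
        return False
    # table[i][j]: set of nonterminals deriving word[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][i] = {A for (A, t) in unary if t == a}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):          # split point between the two halves
                for (A, B, C) in binary:
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return start in table[0][n - 1]
```

For example, the non-regular language $a^n b^n$ can be encoded in CNF as S -> A B | A T, T -> S B, A -> a, B -> b, under which `cyk_recognize("aabb", 'S', ...)` accepts and `"aab"` is rejected.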
    
    Valiant, L.G. Regularity and Related Problems for Deterministic Pushdown Automata 1975 Journal of the ACM
    Vol. 22, pp. 1-10 
    article DOI  
    BibTeX:
    @article{Valiant75,
      author = {Leslie G. Valiant},
      title = {Regularity and Related Problems for Deterministic Pushdown Automata},
      journal = {Journal of the ACM},
      year = {1975},
      volume = {22},
      pages = {1--10},
      doi = {http://dx.doi.org/10.1145/321864.321865}
    }
    
    Valiant, L.G. Relative Complexity of Checking and Evaluating 1976 Information Processing Letters
    Vol. 5, pp. 20-23 
    article DOI  
    BibTeX:
    @article{Valiant76,
      author = {Leslie G. Valiant},
      title = {Relative Complexity of Checking and Evaluating},
      journal = {Information Processing Letters},
      year = {1976},
      volume = {5},
      pages = {20--23},
      doi = {http://dx.doi.org/10.1016/0020-0190(76)90097-1}
    }
    
    Valiant, L.G. The complexity of computing the permanent 1979 Theoretical Computer Science
    Vol. 8, pp. 189-201 
    article DOI  
    Abstract: It is shown that the permanent function of (0, 1)-matrices is a complete problem for the class of counting problems associated with nondeterministic polynomial time computations. Related counting problems are also considered. The reductions used are characterized by their nontrivial use of arithmetic.
    BibTeX:
    @article{Valiant79,
      author = {Leslie G. Valiant},
      title = {The complexity of computing the permanent},
      journal = {Theoretical Computer Science},
      year = {1979},
      volume = {8},
      pages = {189--201},
      doi = {http://dx.doi.org/10.1016/0304-3975(79)90044-6}
    }
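The permanent is the determinant's signless analogue, summing over all permutations without alternating signs; a brute-force sketch of the function whose counting complexity the paper settles (exponential by construction, consistent with the #P-hardness result):

```python
# Brute-force permanent: sum over all permutations sigma of the products
# A[i][sigma(i)], with no sign term. Unlike the determinant, no
# polynomial-time algorithm is known, and Valiant shows the problem is
# complete for the counting class #P even on (0,1)-matrices.
from itertools import permutations
from math import prod

def permanent(matrix):
    n = len(matrix)
    return sum(
        prod(matrix[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n)))
```

On the all-ones n-by-n matrix this counts the n! permutations, matching the combinatorial reading of the permanent as the number of perfect matchings in a bipartite graph.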
    
    Vardi, M.Y. A Note on the Reduction of Two-Way Automata to One-Way Automata 1989 Information Processing Letters
    Vol. 30, pp. 261-264 
    article DOI  
    BibTeX:
    @article{Vardi89,
      author = {Moshe Y. Vardi},
      title = {A Note on the Reduction of Two-Way Automata to One-Way Automata},
      journal = {Information Processing Letters},
      year = {1989},
      volume = {30},
      pages = {261--264},
      doi = {http://dx.doi.org/10.1016/0020-0190(89)90205-6}
    }
    
    Vijay-Shanker, K. & Weir, D.J. The Equivalence of Four Extensions of Context-Free Grammars 1994 Mathematical Systems Theory
    Vol. 27, pp. 511-546 
    article DOI  
    Abstract: There is currently considerable interest among computational linguists in grammatical formalisms with highly restricted generative power. This paper concerns the relationship between the class of string languages generated by several such formalisms, namely, combinatory categorial grammars, head grammars, linear indexed grammars, and tree adjoining grammars. Each of these formalisms is known to generate a larger class of languages than context-free grammars. The four formalisms under consideration were developed independently and appear superficially to be quite different from one another. The result presented in this paper is that all four of the formalisms under consideration generate exactly the same class of string languages.
    BibTeX:
    @article{VW94,
      author = {K. Vijay-Shanker and D. J. Weir},
      title = {The Equivalence of Four Extensions of Context-Free Grammars},
      journal = {Mathematical Systems Theory},
      year = {1994},
      volume = {27},
      pages = {511--546},
      doi = {http://dx.doi.org/10.1007/BF01191624}
    }
    
    Vogler, H. The OI-hierarchy is Closed under Control 1986 Mathematical Foundations of Computer Science, pp. 611-619  inproceedings DOI  
    BibTeX:
    @inproceedings{Vogler86,
      author = {Heiko Vogler},
      title = {The OI-hierarchy is Closed under Control},
      booktitle = {Mathematical Foundations of Computer Science},
      year = {1986},
      pages = {611--619},
      doi = {http://dx.doi.org/10.1007/BFb0016288}
    }
    
    Voigtländer, J. Using Circular Programs to Deforest in Accumulating Parameters 2004 Higher-Order and Symbolic Computation
    Vol. 17, pp. 129-163 
    article  
    BibTeX:
    @article{Voigtlander04,
      author = {Janis Voigtländer},
      title = {Using Circular Programs to Deforest in Accumulating Parameters},
      journal = {Higher-Order and Symbolic Computation},
      year = {2004},
      volume = {17},
      pages = {129--163}
    }
    
    Voigtländer, J. Tree Transducer Composition as Program Transformation 2005 School: Technische Universität Dresden  phdthesis URL 
    Abstract: Nonstrict, purely functional programming languages offer a high potential for the modularization of software. But beside their advantages with respect to reliability and reusability, modularly specified programs often have the disadvantage of low execution efficiency, caused in particular by the creation and consumption of structured intermediate results. One possible approach to cure this conflict is the automatic, semantics-preserving optimization of programs, for which purely functional languages are again particularly suited due to their mathematical foundation. This dissertation studies a specific transformation for the elimination of intermediate results (for so called deforestation) regarding its impact on the program efficiency under nonstrict evaluation. The formal framework is provided by concepts from the theory of tree transducers. One special feature of the transformation under consideration is the successful handling of accumulating parameters, which find frequent use in functional programs. The core of the thesis is the derivation of effectively decidable, syntactic conditions on the original program under which the transformed program is to be preferred over it with respect to efficiency.
    BibTeX:
    @phdthesis{Voigtlander05,
      author = {Janis Voigtländer},
      title = {Tree Transducer Composition as Program Transformation},
      school = {Technische Universität Dresden},
      year = {2005},
      url = {http://wwwtcs.inf.tu-dresden.de/~voigt/}
    }
    
    Voigtländer, J. Bidirectionalization for Free! 2009 Principles of Programming Languages (POPL)  inproceedings URL 
    Abstract: A bidirectional transformation consists of a function get that takes a source (document or value) to a view and a function put that takes an updated view and the original source back to an updated source, governed by certain consistency conditions relating the two functions. Both the database and programming language communities have studied techniques that essentially allow a user to specify only one of get and put and have the other inferred automatically. All approaches so far to this bidirectionalization task have been syntactic in nature, either proposing a domain-specific language with limited expressiveness but built-in (and composable) backward components, or restricting get to a simple syntactic form from which some algorithm can synthesize an appropriate definition for put . Here we present a semantic approach instead. The idea is to take a general-purpose language, Haskell, and write a higher-order function that takes (polymorphic) get-functions as arguments and returns appropriate put-functions. All this on the level of semantic values, without being willing, or even able, to inspect the definition of get, and thus liberated from syntactic restraints. Our solution is inspired by relational parametricity and uses free theorems for proving the consistency conditions. It works beautifully.
    BibTeX:
    @inproceedings{Voigtlander09,
      author = {Janis Voigtländer},
      title = {Bidirectionalization for Free!},
      booktitle = {Principles of Programming Languages (POPL)},
      year = {2009},
      url = {http://wwwtcs.inf.tu-dresden.de/~voigt/}
    }
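The consistency conditions mentioned in the abstract (often called GetPut and PutGet) can be made concrete with a small hand-written get/put pair; this sketch only checks the laws on an example pair and is not Voigtländer's free-theorem construction, whose point is to derive such a put automatically from any polymorphic get:

```python
# A hand-written bidirectional pair and the two round-trip laws it must
# satisfy. The names get/put/check_laws are invented for this sketch.

def get(src):
    return src[:2]                 # view: the first two elements

def put(src, view):
    assert len(view) == 2
    return list(view) + src[2:]    # updated source keeps the untouched tail

def check_laws(src, view):
    getput = put(src, get(src)) == src      # putting back an unchanged view is a no-op
    putget = get(put(src, view)) == view    # the updated view is reflected in the source
    return getput and putget
```

Running `check_laws([1, 2, 3, 4], [9, 8])` confirms both conditions hold for this pair.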
    
    Vytiniotis, D., Weirich, S. & Jones, S.P. FPH: First-Class Polymorphism for Haskell 2008 International Conference on Functional Programming (ICFP), pp. 295-306  inproceedings DOI  
    Abstract: Languages supporting polymorphism typically have ad-hoc restrictions on where polymorphic types may occur. Supporting "firstclass" polymorphism, by lifting those restrictions, is obviously desirable, but it is hard to achieve this without sacrificing type inference. We present a new type system for higher-rank and impredicative polymorphism that improves on earlier proposals: it is an extension of Damas-Milner; it relies only on System F types; it has a simple, declarative specification; it is robust to program transformations; and it enjoys a complete and decidable type inference algorithm.
    BibTeX:
    @inproceedings{VWJ08,
      author = {Dimitrios Vytiniotis and Stephanie Weirich and Simon Peyton Jones},
      title = {FPH: First-Class Polymorphism for Haskell},
      booktitle = {International Conference on Functional Programming (ICFP)},
      year = {2008},
      pages = {295--306},
      doi = {http://doi.acm.org/10.1145/1411204.1411246}
    }
    
    Wadler, P. Deforestation: Transforming Programs to Eliminate Trees 1990 Theoretical Computer Science
    Vol. 73, pp. 231-248 
    article DOI URL 
    Abstract: An algorithm that transforms programs to eliminate intermediate trees is presented. The algorithm applies to any term containing only functions with definitions in a given syntactic form, and is suitable for incorporation in an optimizing compiler.
    BibTeX:
    @article{Wadler90,
      author = {Philip Wadler},
      title = {Deforestation: Transforming Programs to Eliminate Trees},
      journal = {Theoretical Computer Science},
      year = {1990},
      volume = {73},
      pages = {231--248},
      url = {http://homepages.inf.ed.ac.uk/wadler/topics/deforestation.html},
      doi = {http://dx.doi.org/10.1016/0304-3975(90)90147-A}
    }
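The effect of deforestation can be illustrated with a before/after pair (an illustrative sketch of the kind of program the transformation targets, not Wadler's algorithm itself):

```python
# Before: a modular pipeline that allocates intermediate structures
# (the enumeration list and the mapped list) just to consume them.
def sum_squares_modular(n):
    return sum([x * x for x in list(range(1, n + 1))])

# After: the deforested equivalent, a single fused loop in which no
# intermediate "tree" (here, list) is ever built.
def sum_squares_deforested(n):
    acc = 0
    x = 1
    while x <= n:
        acc += x * x
        x += 1
    return acc
```

Both compute the same result (e.g. 385 for n = 10); the point of the transformation is that the fused form is derived mechanically from the modular one.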
    
    Walters, D.A. Deterministic Context-Sensitive Languages: Part I 1970 Information and Control
    Vol. 17, pp. 14-40 
    article DOI  
    Abstract: A context-sensitive grammar G is said to be CS(k) iff a particular kind of table-driven parser for it exists. Corresponding to each such parser, we define a class of processors; the parser is itself one of these processors. The main results are:

    1. Any processor for a CS(k) grammar G accepts exactly the sentences of G.

    2. The set of languages generable by CS(k) grammars is exactly the set of languages accepted by deterministic linear-bounded automata (DLBA's).

    3.(a) It is undecidable whether there exists any k ≥ 0 such that an arbitrary CSG is CS(k).

    (b) For every fixed k ≥ 0, there is no algorithm that will decide if G is CS(k) and also construct the parser if it exists.

    4. For any DLBA M, algorithms are given to (i) construct a CS(k) grammar GM that generates the language accepted by M, and (ii) construct a processor for it.

    5. CS(k) grammars are unambiguous.

    6. The sentences of a CS(k) grammar can be parsed in a time proportional to the length of their derivations.

    BibTeX:
    @article{Walters70,
      author = {Daniel A. Walters},
      title = {Deterministic Context-Sensitive Languages: Part I},
      journal = {Information and Control},
      year = {1970},
      volume = {17},
      pages = {14--40},
      doi = {http://dx.doi.org/10.1016/S0019-9958(70)80004-3}
    }
    
    Walters, D.A. Deterministic Context-Sensitive Languages: Part II 1970 Information and Control
    Vol. 17, pp. 41-61 
    article DOI  
    BibTeX:
    @article{Walters70a,
      author = {Daniel A. Walters},
      title = {Deterministic Context-Sensitive Languages: Part II},
      journal = {Information and Control},
      year = {1970},
      volume = {17},
      pages = {41--61},
      doi = {http://dx.doi.org/10.1016/S0019-9958(70)80005-5}
    }
    
    Warth, A. Experimenting with Programming Languages 2008 School: University of California, Los Angeles  phdthesis URL 
    BibTeX:
    @phdthesis{Warth08,
      author = {Alessandro Warth},
      title = {Experimenting with Programming Languages},
      school = {University of California, Los Angeles},
      year = {2008},
      url = {http://www.tinlizzie.org/~awarth/}
    }
    
    Warth, A. & Kay, A. Worlds: Controlling the Scope of Side Effects 2008   techreport URL 
    BibTeX:
    @techreport{WK08,
      author = {Alessandro Warth and Alan Kay},
      title = {Worlds: Controlling the Scope of Side Effects},
      year = {2008},
      url = {http://www.cs.ucla.edu/~awarth/}
    }
    
    Warth, A. & Piumarta, I. OMeta: an Object-Oriented Language for Pattern Matching 2007 Dynamic Languages Symposium (DLS), pp. 11-19  inproceedings DOI  
    Abstract: This paper introduces OMeta, a new object-oriented language for pattern matching. OMeta is based on a variant of Parsing Expression Grammars (PEGs) [5]---a recognition-based foundation for describing syntax---which we have extended to handle arbitrary kinds of data. We show that OMeta's general-purpose pattern matching provides a natural and convenient way for programmers to implement tokenizers, parsers, visitors, and tree transformers, all of which can be extended in interesting ways using familiar object-oriented mechanisms. This makes OMeta particularly well-suited as a medium for experimenting with new designs for programming languages and extensions to existing languages.
    BibTeX:
    @inproceedings{WP07,
      author = {Alessandro Warth and Ian Piumarta},
      title = {OMeta: an Object-Oriented Language for Pattern Matching},
      booktitle = {Dynamic Languages Symposium (DLS)},
      year = {2007},
      pages = {11--19},
      doi = {http://doi.acm.org/10.1145/1297081.1297086}
    }
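The PEG foundation that OMeta extends can be sketched as a handful of combinators (a minimal illustration of prioritized choice and repetition; the function names are invented for this sketch, and this is not OMeta's implementation):

```python
# Minimal PEG core: each parser maps (text, pos) to the new position on
# success or None on failure. Prioritized choice -- the first matching
# alternative wins, with no backtracking into it -- is what
# distinguishes PEGs from context-free grammars.

def lit(s):
    def p(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return p

def seq(*ps):
    def p(text, pos):
        for q in ps:
            pos = q(text, pos)
            if pos is None:
                return None
        return pos
    return p

def choice(*ps):
    def p(text, pos):
        for q in ps:
            r = q(text, pos)
            if r is not None:      # first success wins (prioritized choice)
                return r
        return None
    return p

def star(q):
    def p(text, pos):
        while True:
            r = q(text, pos)
            if r is None or r == pos:   # stop on failure or empty match
                return pos
            pos = r
    return p
```

For instance `seq(lit("a"), star(lit("b")))` consumes all of `"abbb"`, while `choice(lit("a"), lit("ab"))` on `"ab"` commits to the first alternative and consumes only one character.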
    
    Watson, B.W. A Taxonomy of Finite Automata Minimization Algorithms 1994   techreport  
    Abstract: This paper presents a taxonomy of finite automata minimization algorithms. Brzozowski's elegant minimization algorithm differs from all other known minimization algorithms, and is derived separately. All of the remaining algorithms depend upon computing an equivalence relation on states. We define the equivalence relation, the partition that it induces, and its complement. Additionally, some useful properties are derived. It is shown that the equivalence relation is the greatest fixed point of an equation, providing a useful characterization of the required computation. We derive an upperbound on the number of approximation steps required to compute the fixed point. Algorithms computing the equivalence relation (or the partition, or its complement) are derived systematically in the same framework. The algorithms include Hopcroft's, several algorithms from text-books (including Hopcroft and Ullman's [HU79], Wood's [Wood87], and Aho, Sethi, and Ullman's [ASU86]), and several new algorithms or variants of existing algorithms.
    BibTeX:
    @techreport{Watson94,
      author = {Bruce W. Watson},
      title = {A Taxonomy of Finite Automata Minimization Algorithms},
      year = {1994}
    }
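One member of the equivalence-relation family the taxonomy covers, Moore-style partition refinement, can be sketched as follows (an illustrative sketch, not Watson's formulation; Hopcroft's algorithm is an asymptotically faster refinement of the same fixed-point computation):

```python
# Moore-style DFA minimization: start from the accepting/rejecting
# split and repeatedly split blocks whose states disagree on which
# block some input symbol sends them to, until a fixed point.

def minimize(states, alphabet, delta, accepting):
    """delta: dict (state, symbol) -> state, over a complete DFA."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by the tuple of blocks they transition into.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition)
                         if delta[(s, a)] in b)
                    for a in alphabet)
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition
```

The resulting blocks are the equivalence classes of states; each class becomes one state of the minimal automaton.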
    
    Welch, T.A. A Technique for High-Performance Data Compression 1984 Computer
    Vol. 17, pp. 8-19 
    article DOI  
    BibTeX:
    @article{Welch84,
      author = {Terry A. Welch},
      title = {A Technique for High-Performance Data Compression},
      journal = {Computer},
      year = {1984},
      volume = {17},
      pages = {8--19},
      doi = {http://dx.doi.org/10.1109/MC.1984.1659158}
    }
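The technique, now widely known as LZW, can be sketched by its compression half (a minimal illustration emitting integer codes directly, rather than the fixed-width code stream Welch describes):

```python
# LZW compression sketch: the dictionary starts with all single bytes
# and grows by one entry -- the current phrase plus the next symbol --
# each time the input reveals a phrase not yet in the dictionary.

def lzw_compress(data):
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                              # keep extending the match
        else:
            out.append(dictionary[w])           # emit code for longest match
            dictionary[wc] = len(dictionary)    # new phrase gets the next code
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

On `b"ababab"` this emits `[97, 98, 256, 256]`: the codes for `a` and `b`, then twice the freshly learned code 256 for the phrase `ab`.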
    
    Wotschke, D. The Boolean Closures of the Deterministic and Nondeterministic Context-Free Languages 1973 Gesellschaft für Informatik e.V., 3. Jahrestagung, pp. 113-121  inproceedings DOI  
    Abstract: The Boolean closure of the deterministic context-free languages is properly contained in the intersection-closure of the context-free languages and the latter is properly contained in the Boolean closure of the context-free languages. The class of context-free languages and the Boolean closure of the deterministic context-free languages are incomparable in the sense that neither one is contained in the other. The intersection-closures of the deterministic and nondeterministic context-free languages are not principal AFDL's.

    The research in this paper was supported in part by the National Science Foundation under Grant GJ-803.

    BibTeX:
    @inproceedings{Wotschke73,
      author = {Detlef Wotschke},
      title = {The Boolean Closures of the Deterministic and Nondeterministic Context-Free Languages},
      booktitle = {Gesellschaft für Informatik e.V., 3. Jahrestagung},
      year = {1973},
      pages = {113--121},
      doi = {http://dx.doi.org/10.1007/3-540-06473-7_11}
    }
    
    Yamasaki, K. & Sodeshima, Y. A Comparison of Bottom-Up Pushdown Tree Transducers and Top-Down Pushdown Tree Transducers 1999 SUT Journal of Mathematics
    Vol. 36, pp. 43-82 
    article URL 
    Abstract: In this paper we introduce a bottom-up pushdown tree transducer (b-PDTT) which is a bottom-up tree transducer with pushdown storage (where the pushdown storage stores the trees) and may be considered as a dual concept of the top-down pushdown tree transducer (t-PDTT). After proving some fundamental properties of b-PDTT, for example, that any b-PDTT can be realized by a linear stack with a single state and converted into G-type normal form which corresponds to Greibach normal form in a context-free grammar, and so on, we compare the translational capability of a b-PDTT with that of a t-PDTT.
    BibTeX:
    @article{YS99,
      author = {Katsunori Yamasaki and Yoshichika Sodeshima},
      title = {A Comparison of Bottom-Up Pushdown Tree Transducers and Top-Down Pushdown Tree Transducers},
      journal = {SUT Journal of Mathematics},
      year = {1999},
      volume = {36},
      pages = {43--82},
      url = {http://ci.nii.ac.jp/naid/110003210627/}
    }
    
    Yang, H. & O'Hearn, P. A Semantic Basis for Local Reasoning 2002 Foundations of Software Science and Computation Structures (FoSSaCS), pp. 281-335  inproceedings DOI  
    Abstract: We present a semantic analysis of a recently proposed formalism for local reasoning, where a specification (and hence proof) can concentrate on only those cells that a program accesses. Our main results are the soundness and, in a sense, completeness of a rule that allows frame axioms, which describe invariant properties of portions of heap memory, to be inferred automatically; thus, these axioms can be avoided when writing specifications.
    BibTeX:
    @inproceedings{YO02,
      author = {Hongseok Yang and Peter O'Hearn},
      title = {A Semantic Basis for Local Reasoning},
      booktitle = {Foundations of Software Science and Computation Structures (FoSSaCS)},
      year = {2002},
      pages = {281--335},
      doi = {http://dx.doi.org/10.1007/3-540-45931-6_28}
    }
    
    Yoshida, Y. & Ito, H. Property Testing on k-Vertex-Connectivity of Graphs 2008 International Colloquium on Automata, Languages and Programming (ICALP), pp. 539-550  inproceedings DOI  
    Abstract: We present an algorithm for testing the k-vertex-connectivity of graphs with given maximum degree. The time complexity of the algorithm is independent of the number of vertices and edges of graphs. A graph G with n vertices and maximum degree at most d is called $epsilon$-far from k-vertex-connectivity when at least $epsilon d n$ edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is $epsilon$-far from k-vertex-connectivity with a probability of at least 2/3. The algorithm runs in $O(d (c/epsilon d)^k log(1/epsilon d))$ time (c > 1 is a constant) for given (k-1)-vertex-connected graphs, and $O(d (ck/epsilon d)^k log(k/epsilon d))$ time (c > 1 is a constant) for given general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general $k geq 4$.
    BibTeX:
    @inproceedings{YI08,
      author = {Yuichi Yoshida and Hiro Ito},
      title = {Property Testing on k-Vertex-Connectivity of Graphs},
      booktitle = {International Colloquium on Automata, Languages and Programming (ICALP)},
      year = {2008},
      pages = {539--550},
      doi = {http://dx.doi.org/10.1007/978-3-540-70575-8_44}
    }
    
    Ziv, J. & Lempel, A. A Universal Algorithm for Sequential Data Compression 1977 IEEE Transactions on Information Theory
    Vol. 23, pp. 337-343 
    article URL 
    Abstract: A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
    BibTeX:
    @article{LZ77,
      author = {Jacob Ziv and Abraham Lempel},
      title = {A Universal Algorithm for Sequential Data Compression},
      journal = {IEEE Transactions on Information Theory},
      year = {1977},
      volume = {23},
      pages = {337--343},
      url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1055714}
    }
    
    Ziv, J. & Lempel, A. Compression of Individual Sequences via Variable-Rate Coding 1978 IEEE Transactions on Information Theory
    Vol. 24, pp. 530-536 
    article URL 
    Abstract: Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence $x$ a quantity $rho(x)$ is defined, called the compressibility of $x$, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for $x$ by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of $rho(x)$ allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.
    BibTeX:
    @article{LZ78,
      author = {Jacob Ziv and Abraham Lempel},
      title = {Compression of Individual Sequences via Variable-Rate Coding},
      journal = {IEEE Transactions on Information Theory},
      year = {1978},
      volume = {24},
      pages = {530--536},
      url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1055934}
    }
    

    Created by JabRef on 26/05/2009.