first commit

Aleksey Chichenkov 2019-01-28 15:08:59 +03:00
commit b21727a3fe
20 changed files with 18126 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,84 @@
# LEMON.JS - LALR(1) Parser Generator for JavaScript
Lemon.JS is an LALR(1) parser generator for JavaScript, based on the Lemon parser generator for C that is included in the SQLite source distribution.
## Parser Code Base
Files `lemon.c`, `lempar.c`, and `lemon.html` are extracted from SQLite v3.17.0. The original parser generator code is slightly modified to produce JavaScript-compatible statements, and the parser template is translated from C to JavaScript. Source comments are mostly left untouched to keep diffs against the original files easy.
Both the original C version and the patched JS version are included for side-by-side comparison and reference.
## Installation
Compile `lemon-js.c` with any C compiler and place the resulting binary anywhere, keeping `lempar.js` next to it.
## Compilation
Prerequisites: C compiler, for example GCC.
```bash
gcc -o lemon-js -O2 lemon-js.c
```
## Usage
```bash
lemon-js <filename>.y
```
See http://www.hwaci.com/sw/lemon/lemon.html for more details.
## Special Directives
See lemon.html for additional documentation.
- %name - Set parser class name (default is "Parse")
- %include - Include code at the beginning of the generated file (useful for imports)
- %code - Include code at the end of the generated file (useful for exports or main code)
- %token_destructor - Define code which will be executed on token destruction.
- %default_destructor - Define a default destructor for non-terminals that have no %destructor of their own.
- %token_prefix - Define token name prefix.
- %syntax_error - Define custom error handler for syntax errors.
- %parse_accept - Define handler executed when the parser accepts its input.
- %parse_failure - Define handler executed when the parser fails to recover from a syntax error.
- %stack_overflow - Define handler for stack overflow.
- %extra_argument - **NOT SUPPORTED**
- %token_type - **NOT SUPPORTED**
- %default_type - **NOT SUPPORTED**
- %stack_size - Set default stack size.
- %start_symbol - Set the start symbol for the grammar.
- %left - Set left associative tokens.
- %right - Set right associative tokens.
- %nonassoc - Set non associative tokens.
- %destructor - Define a destructor for a non-terminal symbol.
- %type - **NOT SUPPORTED**
- %fallback - Define fallback logic for tokens.
- %wildcard - Define a wildcard token that matches any input token.
- %token_class - **NOT SUPPORTED**
Notes:
- Some expressions, for example the regular expression `/\*/`, could break the lemon parser inside `%code` or `%include` sections.
- The best place to put something like `module.exports = ParserName;` or `export default ParserName;` is the `%code` section (see the example below).
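For example, a minimal grammar sketch that uses `%name`, `%include` and `%code` might look like the following (the `Calc` name and the rules are placeholders, and it assumes `%name Calc` makes the generated constructor available as `Calc`):
```
%name Calc
%left PLUS MINUS.

%include {
    // copied to the top of the generated file (imports go here)
}

%syntax_error {
    console.error("Syntax error");
}

program ::= expr(A).               { console.log("Result=" + A); }
expr(A) ::= expr(B) PLUS expr(C).  { A = B + C; }
expr(A) ::= expr(B) MINUS expr(C). { A = B - C; }
expr(A) ::= INTEGER(B).            { A = B; }

%code {
    // copied to the bottom of the generated file (exports go here)
    module.exports = Calc;
}
```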
## TODO
- add some tests for different options
- document variables
- YYNOERRORRECOVERY ?
- YYERRORSYMBOL ?
- rename methods, variables, get rid of YY prefixes?
- enable asserts, could be useful for testing
## Alternative Lexers
- https://github.com/tantaman/lexed.js
- https://github.com/aaditmshah/lexer
- https://github.com/YuhangGe/jslex
## Alternative Parsers
- https://github.com/sormy/flex-js
- http://jscc.brobston.com
- http://zaach.github.io/jison/
- https://pegjs.org

documentation/lemon.html Normal file

@@ -0,0 +1,987 @@
<html>
<head>
<title>The Lemon Parser Generator</title>
</head>
<body bgcolor=white>
<h1 align=center>The Lemon Parser Generator</h1>
<p>Lemon is an LALR(1) parser generator for C.
It does the same job as "bison" and "yacc".
But lemon is not a bison or yacc clone. Lemon
uses a different grammar syntax which is designed to
reduce the number of coding errors. Lemon also uses a
parsing engine that is faster than yacc and
bison and which is both reentrant and threadsafe.
(Update: Since the previous sentence was written, bison
has also been updated so that it too can generate a
reentrant and threadsafe parser.)
Lemon also implements features that can be used
to eliminate resource leaks, making it suitable for use
in long-running programs such as graphical user interfaces
or embedded controllers.</p>
<p>This document is an introduction to the Lemon
parser generator.</p>
<h2>Theory of Operation</h2>
<p>The main goal of Lemon is to translate a context free grammar (CFG)
for a particular language into C code that implements a parser for
that language.
The program has two inputs:
<ul>
<li>The grammar specification.
<li>A parser template file.
</ul>
Typically, only the grammar specification is supplied by the programmer.
Lemon comes with a default parser template which works fine for most
applications. But the user is free to substitute a different parser
template if desired.</p>
<p>Depending on command-line options, Lemon will generate between
one and three files of outputs.
<ul>
<li>C code to implement the parser.
<li>A header file defining an integer ID for each terminal symbol.
<li>An information file that describes the states of the generated parser
automaton.
</ul>
By default, all three of these output files are generated.
The header file is suppressed if the "-m" command-line option is
used and the report file is omitted when "-q" is selected.</p>
<p>The grammar specification file uses a ".y" suffix, by convention.
In the examples used in this document, we'll assume the name of the
grammar file is "gram.y". A typical use of Lemon would be the
following command:
<pre>
lemon gram.y
</pre>
This command will generate three output files named "gram.c",
"gram.h" and "gram.out".
The first is C code to implement the parser. The second
is the header file that defines numerical values for all
terminal symbols, and the last is the report that explains
the states used by the parser automaton.</p>
<h3>Command Line Options</h3>
<p>The behavior of Lemon can be modified using command-line options.
You can obtain a list of the available command-line options together
with a brief explanation of what each does by typing
<pre>
lemon -?
</pre>
As of this writing, the following command-line options are supported:
<ul>
<li><b>-b</b>
Show only the basis for each parser state in the report file.
<li><b>-c</b>
Do not compress the generated action tables.
<li><b>-D<i>name</i></b>
Define C preprocessor macro <i>name</i>. This macro is useable by
"%ifdef" lines in the grammar file.
<li><b>-g</b>
Do not generate a parser. Instead write the input grammar to standard
output with all comments, actions, and other extraneous text removed.
<li><b>-l</b>
Omit "#line" directives in the generated parser C code.
<li><b>-m</b>
Cause the output C source code to be compatible with the "makeheaders"
program.
<li><b>-p</b>
Display all conflicts that are resolved by
<a href='#precrules'>precedence rules</a>.
<li><b>-q</b>
Suppress generation of the report file.
<li><b>-r</b>
Do not sort or renumber the parser states as part of optimization.
<li><b>-s</b>
Show parser statistics before exiting.
<li><b>-T<i>file</i></b>
Use <i>file</i> as the template for the generated C-code parser implementation.
<li><b>-x</b>
Print the Lemon version number.
</ul>
<h3>The Parser Interface</h3>
<p>Lemon doesn't generate a complete, working program. It only generates
a few subroutines that implement a parser. This section describes
the interface to those subroutines. It is up to the programmer to
call these subroutines in an appropriate way in order to produce a
complete system.</p>
<p>Before a program begins using a Lemon-generated parser, the program
must first create the parser.
A new parser is created as follows:
<pre>
void *pParser = ParseAlloc( malloc );
</pre>
The ParseAlloc() routine allocates and initializes a new parser and
returns a pointer to it.
The actual data structure used to represent a parser is opaque &mdash;
its internal structure is not visible or usable by the calling routine.
For this reason, the ParseAlloc() routine returns a pointer to void
rather than a pointer to some particular structure.
The sole argument to the ParseAlloc() routine is a pointer to the
subroutine used to allocate memory. Typically this means malloc().</p>
<p>After a program is finished using a parser, it can reclaim all
memory allocated by that parser by calling
<pre>
ParseFree(pParser, free);
</pre>
The first argument is the same pointer returned by ParseAlloc(). The
second argument is a pointer to the function used to release bulk
memory back to the system.</p>
<p>After a parser has been allocated using ParseAlloc(), the programmer
must supply the parser with a sequence of tokens (terminal symbols) to
be parsed. This is accomplished by calling the following function
once for each token:
<pre>
Parse(pParser, hTokenID, sTokenData, pArg);
</pre>
The first argument to the Parse() routine is the pointer returned by
ParseAlloc().
The second argument is a small positive integer that tells the parser the
type of the next token in the data stream.
There is one token type for each terminal symbol in the grammar.
The gram.h file generated by Lemon contains #define statements that
map symbolic terminal symbol names into appropriate integer values.
A value of 0 for the second argument is a special flag to the
parser to indicate that the end of input has been reached.
The third argument is the value of the given token. By default,
the type of the third argument is integer, but the grammar will
usually redefine this type to be some kind of structure.
Typically the second argument will be a broad category of tokens
such as "identifier" or "number" and the third argument will
be the name of the identifier or the value of the number.</p>
<p>The Parse() function may have either three or four arguments,
depending on the grammar. If the grammar specification file requests
it (via the <a href='#extraarg'><tt>extra_argument</tt> directive</a>),
the Parse() function will have a fourth parameter that can be
of any type chosen by the programmer. The parser doesn't do anything
with this argument except to pass it through to action routines.
This is a convenient mechanism for passing state information down
to the action routines without having to use global variables.</p>
<p>A typical use of a Lemon parser might look something like the
following:
<pre>
01 ParseTree *ParseFile(const char *zFilename){
02 Tokenizer *pTokenizer;
03 void *pParser;
04 Token sToken;
05 int hTokenId;
06 ParserState sState;
07
08 pTokenizer = TokenizerCreate(zFilename);
09 pParser = ParseAlloc( malloc );
10 InitParserState(&sState);
11 while( GetNextToken(pTokenizer, &hTokenId, &sToken) ){
12 Parse(pParser, hTokenId, sToken, &sState);
13 }
14 Parse(pParser, 0, sToken, &sState);
15 ParseFree(pParser, free );
16 TokenizerFree(pTokenizer);
17 return sState.treeRoot;
18 }
</pre>
This example shows a user-written routine that parses a file of
text and returns a pointer to the parse tree.
(All error-handling code is omitted from this example to keep it
simple.)
We assume the existence of some kind of tokenizer which is created
using TokenizerCreate() on line 8 and deleted by TokenizerFree()
on line 16. The GetNextToken() function on line 11 retrieves the
next token from the input file and puts its type in the
integer variable hTokenId. The sToken variable is assumed to be
some kind of structure that contains details about each token,
such as its complete text, what line it occurs on, etc. </p>
<p>This example also assumes the existence of a structure of type
ParserState that holds state information about a particular parse.
An instance of such a structure is created on line 6 and initialized
on line 10. A pointer to this structure is passed into the Parse()
routine as the optional 4th argument.
The action routine specified by the grammar for the parser can use
the ParserState structure to hold whatever information is useful and
appropriate. In the example, we note that the treeRoot field of
the ParserState structure is left pointing to the root of the parse
tree.</p>
<p>The core of this example as it relates to Lemon is as follows:
<pre>
ParseFile(){
pParser = ParseAlloc( malloc );
while( GetNextToken(pTokenizer,&hTokenId, &sToken) ){
Parse(pParser, hTokenId, sToken);
}
Parse(pParser, 0, sToken);
ParseFree(pParser, free );
}
</pre>
Basically, what a program has to do to use a Lemon-generated parser
is first create the parser, then send it lots of tokens obtained by
tokenizing an input source. When the end of input is reached, the
Parse() routine should be called one last time with a token type
of 0. This step is necessary to inform the parser that the end of
input has been reached. Finally, we reclaim memory used by the
parser by calling ParseFree().</p>
<p>There is one other interface routine that should be mentioned
before we move on.
The ParseTrace() function can be used to generate debugging output
from the parser. A prototype for this routine is as follows:
<pre>
ParseTrace(FILE *stream, char *zPrefix);
</pre>
After this routine is called, a short (one-line) message is written
to the designated output stream every time the parser changes states
or calls an action routine. Each such message is prefaced using
the text given by zPrefix. This debugging output can be turned off
by calling ParseTrace() again with a first argument of NULL (0).</p>
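<p>For example, a program might turn tracing on before its parsing loop and
turn it back off again afterwards, as in this short sketch:
<pre>
ParseTrace(stderr, "parser: ");
/* ... calls to Parse() ... */
ParseTrace(0, 0);
</pre>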
<h3>Differences With YACC and BISON</h3>
<p>Programmers who have previously used the yacc or bison parser
generator will notice several important differences between yacc and/or
bison and Lemon.
<ul>
<li>In yacc and bison, the parser calls the tokenizer. In Lemon,
the tokenizer calls the parser.
<li>Lemon uses no global variables. Yacc and bison use global variables
to pass information between the tokenizer and parser.
<li>Lemon allows multiple parsers to be running simultaneously. Yacc
and bison do not.
</ul>
These differences may cause some initial confusion for programmers
with prior yacc and bison experience.
But after years of experience using Lemon, I firmly
believe that the Lemon way of doing things is better.</p>
<p><i>Updated as of 2016-02-16:</i>
The text above was written in the 1990s.
We are told that Bison has lately been enhanced to support the
tokenizer-calls-parser paradigm used by Lemon, and to obviate the
need for global variables.</p>
<h2>Input File Syntax</h2>
<p>The main purpose of the grammar specification file for Lemon is
to define the grammar for the parser. But the input file also
specifies additional information Lemon requires to do its job.
Most of the work in using Lemon is in writing an appropriate
grammar file.</p>
<p>The grammar file for lemon is, for the most part, free format.
It does not have sections or divisions like yacc or bison. Any
declaration can occur at any point in the file.
Lemon ignores whitespace (except where it is needed to separate
tokens) and it honors the same commenting conventions as C and C++.</p>
<h3>Terminals and Nonterminals</h3>
<p>A terminal symbol (token) is any string of alphanumeric
and/or underscore characters
that begins with an upper case letter.
A terminal can contain lowercase letters after the first character,
but the usual convention is to make terminals all upper case.
A nonterminal, on the other hand, is any string of alphanumeric
and underscore characters that begins with a lower case letter.
Again, the usual convention is to make nonterminals use all lower
case letters.</p>
<p>In Lemon, terminal and nonterminal symbols do not need to
be declared or identified in a separate section of the grammar file.
Lemon is able to generate a list of all terminals and nonterminals
by examining the grammar rules, and it can always distinguish a
terminal from a nonterminal by checking the case of the first
character of the name.</p>
<p>Yacc and bison allow terminal symbols to have either alphanumeric
names or to be individual characters included in single quotes, like
this: ')' or '$'. Lemon does not allow this alternative form for
terminal symbols. With Lemon, all symbols, terminals and nonterminals,
must have alphanumeric names.</p>
<h3>Grammar Rules</h3>
<p>The main component of a Lemon grammar file is a sequence of grammar
rules.
Each grammar rule consists of a nonterminal symbol followed by
the special symbol "::=" and then a list of terminals and/or nonterminals.
The rule is terminated by a period.
The list of terminals and nonterminals on the right-hand side of the
rule can be empty.
Rules can occur in any order, except that the left-hand side of the
first rule is assumed to be the start symbol for the grammar (unless
specified otherwise using the <tt>%start_symbol</tt> directive described below.)
A typical sequence of grammar rules might look something like this:
<pre>
expr ::= expr PLUS expr.
expr ::= expr TIMES expr.
expr ::= LPAREN expr RPAREN.
expr ::= VALUE.
</pre>
</p>
<p>There is one non-terminal in this example, "expr", and five
terminal symbols or tokens: "PLUS", "TIMES", "LPAREN",
"RPAREN" and "VALUE".</p>
<p>Like yacc and bison, Lemon allows the grammar to specify a block
of C code that will be executed whenever a grammar rule is reduced
by the parser.
In Lemon, this action is specified by putting the C code (contained
within curly braces <tt>{...}</tt>) immediately after the
period that closes the rule.
For example:
<pre>
expr ::= expr PLUS expr. { printf("Doing an addition...\n"); }
</pre>
</p>
<p>In order to be useful, grammar actions must normally be linked to
their associated grammar rules.
In yacc and bison, this is accomplished by embedding a "$$" in the
action to stand for the value of the left-hand side of the rule and
symbols "$1", "$2", and so forth to stand for the value of
the terminal or nonterminal at position 1, 2 and so forth on the
right-hand side of the rule.
This idea is very powerful, but it is also very error-prone. The
single most common source of errors in a yacc or bison grammar is
to miscount the number of symbols on the right-hand side of a grammar
rule and say "$7" when you really mean "$8".</p>
<p>Lemon avoids the need to count grammar symbols by assigning symbolic
names to each symbol in a grammar rule and then using those symbolic
names in the action.
In yacc or bison, one would write this:
<pre>
expr -> expr PLUS expr { $$ = $1 + $3; };
</pre>
But in Lemon, the same rule becomes the following:
<pre>
expr(A) ::= expr(B) PLUS expr(C). { A = B+C; }
</pre>
In the Lemon rule, any symbol in parentheses after a grammar rule
symbol becomes a place holder for that symbol in the grammar rule.
This place holder can then be used in the associated C action to
stand for the value of that symbol.</p>
<p>The Lemon notation for linking a grammar rule with its reduce
action is superior to yacc/bison on several counts.
First, as mentioned above, the Lemon method avoids the need to
count grammar symbols.
Secondly, if a terminal or nonterminal in a Lemon grammar rule
includes a linking symbol in parentheses but that linking symbol
is not actually used in the reduce action, then an error message
is generated.
For example, the rule
<pre>
expr(A) ::= expr(B) PLUS expr(C). { A = B; }
</pre>
will generate an error because the linking symbol "C" is used
in the grammar rule but not in the reduce action.</p>
<p>The Lemon notation for linking grammar rules to reduce actions
also facilitates the use of destructors for reclaiming memory
allocated by the values of terminals and nonterminals on the
right-hand side of a rule.</p>
<a name='precrules'></a>
<h3>Precedence Rules</h3>
<p>Lemon resolves parsing ambiguities in exactly the same way as
yacc and bison. A shift-reduce conflict is resolved in favor
of the shift, and a reduce-reduce conflict is resolved by reducing
whichever rule comes first in the grammar file.</p>
<p>Just like in
yacc and bison, Lemon allows a measure of control
over the resolution of parsing conflicts using precedence rules.
A precedence value can be assigned to any terminal symbol
using the
<a href='#pleft'>%left</a>,
<a href='#pright'>%right</a> or
<a href='#pnonassoc'>%nonassoc</a> directives. Terminal symbols
mentioned in earlier directives have a lower precedence than
terminal symbols mentioned in later directives. For example:</p>
<p><pre>
%left AND.
%left OR.
%nonassoc EQ NE GT GE LT LE.
%left PLUS MINUS.
%left TIMES DIVIDE MOD.
%right EXP NOT.
</pre></p>
<p>In the preceding sequence of directives, the AND operator is
defined to have the lowest precedence. The OR operator is one
precedence level higher. And so forth. Hence, the parser would
attempt to group the ambiguous expression
<pre>
a AND b OR c
</pre>
like this
<pre>
a AND (b OR c).
</pre>
The associativity (left, right or nonassoc) is used to determine
the grouping when the precedence is the same. AND is left-associative
in our example, so
<pre>
a AND b AND c
</pre>
is parsed like this
<pre>
(a AND b) AND c.
</pre>
The EXP operator is right-associative, though, so
<pre>
a EXP b EXP c
</pre>
is parsed like this
<pre>
a EXP (b EXP c).
</pre>
The nonassoc precedence is used for non-associative operators.
So
<pre>
a EQ b EQ c
</pre>
is an error.</p>
<p>The precedence of non-terminals is transferred to rules as follows:
The precedence of a grammar rule is equal to the precedence of the
left-most terminal symbol in the rule for which a precedence is
defined. This is normally what you want, but in those cases where
you want the precedence of a grammar rule to be something different,
you can specify an alternative precedence symbol by putting the
symbol in square braces after the period at the end of the rule and
before any C-code. For example:</p>
<p><pre>
expr ::= MINUS expr. [NOT]
</pre></p>
<p>This rule has a precedence equal to that of the NOT symbol, not the
MINUS symbol as would have been the case by default.</p>
<p>With the knowledge of how precedence is assigned to terminal
symbols and individual
grammar rules, we can now explain precisely how parsing conflicts
are resolved in Lemon. Shift-reduce conflicts are resolved
as follows:
<ul>
<li> If either the token to be shifted or the rule to be reduced
lacks precedence information, then resolve in favor of the
shift, but report a parsing conflict.
<li> If the precedence of the token to be shifted is greater than
the precedence of the rule to reduce, then resolve in favor
of the shift. No parsing conflict is reported.
<li> If the precedence of the token to be shifted is less than the
precedence of the rule to reduce, then resolve in favor of the
reduce action. No parsing conflict is reported.
<li> If the precedences are the same and the shift token is
right-associative, then resolve in favor of the shift.
No parsing conflict is reported.
<li> If the precedences are the same and the shift token is
left-associative, then resolve in favor of the reduce.
No parsing conflict is reported.
<li> Otherwise, resolve the conflict by doing the shift and
report the parsing conflict.
</ul>
Reduce-reduce conflicts are resolved this way:
<ul>
<li> If either reduce rule
lacks precedence information, then resolve in favor of the
rule that appears first in the grammar and report a parsing
conflict.
<li> If both rules have precedence and the precedence is different
then resolve the dispute in favor of the rule with the highest
precedence and do not report a conflict.
<li> Otherwise, resolve the conflict by reducing by the rule that
appears first in the grammar and report a parsing conflict.
</ul>
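<p>The shift-reduce portion of this procedure can be summarized by the
following illustrative C sketch. This is not code taken from Lemon itself;
the function name and the encoding of precedence and associativity are
invented for the example:
<pre>
#define NO_PREC (-1)   /* marker meaning "no precedence assigned" */
typedef enum { ASSOC_NONE, ASSOC_LEFT, ASSOC_RIGHT } Assoc;

/* Return 1 to shift, 0 to reduce.  *pConflict is set to 1 when a
** parsing conflict should be reported. */
static int resolveShiftReduce(int tokPrec, Assoc tokAssoc,
                              int rulePrec, int *pConflict){
  *pConflict = 0;
  if( tokPrec==NO_PREC || rulePrec==NO_PREC ){
    *pConflict = 1;                       /* no precedence info: shift and report */
    return 1;
  }
  if( tokPrec>rulePrec ) return 1;        /* token binds tighter: shift */
  if( rulePrec>tokPrec ) return 0;        /* rule binds tighter: reduce */
  if( tokAssoc==ASSOC_RIGHT ) return 1;   /* equal precedence, right-assoc: shift */
  if( tokAssoc==ASSOC_LEFT ) return 0;    /* equal precedence, left-assoc: reduce */
  *pConflict = 1;                         /* nonassoc at equal precedence: */
  return 1;                               /* shift and report */
}
</pre>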
<h3>Special Directives</h3>
<p>The input grammar to Lemon consists of grammar rules and special
directives. We've described all the grammar rules, so now we'll
talk about the special directives.</p>
<p>Directives in lemon can occur in any order. You can put them before
the grammar rules, or after the grammar rules, or in the midst of the
grammar rules. It doesn't matter. The relative order of
directives used to assign precedence to terminals is important, but
other than that, the order of directives in Lemon is arbitrary.</p>
<p>Lemon supports the following special directives:
<ul>
<li><tt>%code</tt>
<li><tt>%default_destructor</tt>
<li><tt>%default_type</tt>
<li><tt>%destructor</tt>
<li><tt>%endif</tt>
<li><tt>%extra_argument</tt>
<li><tt>%fallback</tt>
<li><tt>%ifdef</tt>
<li><tt>%ifndef</tt>
<li><tt>%include</tt>
<li><tt>%left</tt>
<li><tt>%name</tt>
<li><tt>%nonassoc</tt>
<li><tt>%parse_accept</tt>
<li><tt>%parse_failure </tt>
<li><tt>%right</tt>
<li><tt>%stack_overflow</tt>
<li><tt>%stack_size</tt>
<li><tt>%start_symbol</tt>
<li><tt>%syntax_error</tt>
<li><tt>%token_class</tt>
<li><tt>%token_destructor</tt>
<li><tt>%token_prefix</tt>
<li><tt>%token_type</tt>
<li><tt>%type</tt>
<li><tt>%wildcard</tt>
</ul>
Each of these directives will be described separately in the
following sections:</p>
<a name='pcode'></a>
<h4>The <tt>%code</tt> directive</h4>
<p>The %code directive is used to specify additional C code that
is added to the end of the main output file. This is similar to
the <a href='#pinclude'>%include</a> directive except that %include
is inserted at the beginning of the main output file.</p>
<p>%code is typically used to include some action routines or perhaps
a tokenizer or even the "main()" function
as part of the output file.</p>
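<p>For example, the grammar for a small standalone tool might use %code to
append a driver function. The body below is only a sketch and assumes that
&lt;stdlib.h&gt; has been pulled in via %include:</p>
<p><pre>
%code {
   int main(void){
     void *pParser = ParseAlloc( malloc );
     /* ... call Parse() once per token, then once with a token type of 0 ... */
     Parse(pParser, 0, 0);
     ParseFree(pParser, free);
     return 0;
   }
}
</pre></p>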
<a name='default_destructor'></a>
<h4>The <tt>%default_destructor</tt> directive</h4>
<p>The %default_destructor directive specifies a destructor to
use for non-terminals that do not have their own destructor
specified by a separate %destructor directive. See the documentation
on the <a href='#destructor'>%destructor</a> directive below for
additional information.</p>
<p>In some grammars, many different non-terminal symbols have the
same datatype and hence the same destructor. This directive is
a convenient way to specify the same destructor for all those
non-terminals using a single statement.</p>
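<p>For example, if most non-terminal values are pointers obtained from
malloc(), a single directive can release them all (a sketch):</p>
<p><pre>
%default_destructor { free($$); }
</pre></p>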
<a name='default_type'></a>
<h4>The <tt>%default_type</tt> directive</h4>
<p>The %default_type directive specifies the datatype of non-terminal
symbols that do not have their own datatype defined using a separate
<a href='#ptype'>%type</a> directive.
</p>
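<p>For example, a grammar in which most non-terminals carry a pointer to a
hypothetical Expr parse-tree structure might declare:</p>
<p><pre>
%default_type {Expr*}
</pre></p>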
<a name='destructor'></a>
<h4>The <tt>%destructor</tt> directive</h4>
<p>The %destructor directive is used to specify a destructor for
a non-terminal symbol.
(See also the <a href='#token_destructor'>%token_destructor</a>
directive which is used to specify a destructor for terminal symbols.)</p>
<p>A non-terminal's destructor is called to dispose of the
non-terminal's value whenever the non-terminal is popped from
the stack. This includes all of the following circumstances:
<ul>
<li> When a rule reduces and the value of a non-terminal on
the right-hand side is not linked to C code.
<li> When the stack is popped during error processing.
<li> When the ParseFree() function runs.
</ul>
The destructor can do whatever it wants with the value of
the non-terminal, but its intended purpose is to deallocate memory
or other resources held by that non-terminal.</p>
<p>Consider an example:
<pre>
%type nt {void*}
%destructor nt { free($$); }
nt(A) ::= ID NUM. { A = malloc( 100 ); }
</pre>
This example is a bit contrived but it serves to illustrate how
destructors work. The example shows a non-terminal named
"nt" that holds values of type "void*". When the rule for
an "nt" reduces, it sets the value of the non-terminal to
space obtained from malloc(). Later, when the nt non-terminal
is popped from the stack, the destructor will fire and call
free() on this malloced space, thus avoiding a memory leak.
(Note that the symbol "$$" in the destructor code is replaced
by the value of the non-terminal.)</p>
<p>It is important to note that the value of a non-terminal is passed
to the destructor whenever the non-terminal is removed from the
stack, unless the non-terminal is used in a C-code action. If
the non-terminal is used by C-code, then it is assumed that the
C-code will take care of destroying it.
More commonly, the value is used to build some
larger structure and we don't want to destroy it, which is why
the destructor is not called in this circumstance.</p>
<p>Destructors help avoid memory leaks by automatically freeing
allocated objects when they go out of scope.
To do the same using yacc or bison is much more difficult.</p>
<a name="extraarg"></a>
<h4>The <tt>%extra_argument</tt> directive</h4>
<p>The %extra_argument directive instructs Lemon to add a 4th parameter
to the parameter list of the Parse() function it generates. Lemon
doesn't do anything itself with this extra argument, but it does
make the argument available to C-code action routines, destructors,
and so forth. For example, if the grammar file contains:</p>
<p><pre>
%extra_argument { MyStruct *pAbc }
</pre></p>
<p>Then the Parse() function generated will have a 4th parameter
of type "MyStruct*" and all action routines will have access to
a variable named "pAbc" that is the value of the 4th parameter
in the most recent call to Parse().</p>
<a name='pfallback'></a>
<h4>The <tt>%fallback</tt> directive</h4>
<p>The %fallback directive specifies an alternative meaning for one
or more tokens. The alternative meaning is tried if the original token
would have generated a syntax error.
<p>The %fallback directive was added to support robust parsing of SQL
syntax in <a href="https://www.sqlite.org/">SQLite</a>.
The SQL language contains a large assortment of keywords, each of which
appears as a different token to the language parser. SQL contains so
many keywords, that it can be difficult for programmers to keep up with
them all. Programmers will, therefore, sometimes mistakenly use an
obscure language keyword for an identifier. The %fallback directive
provides a mechanism to tell the parser: "If you are unable to parse
this keyword, try treating it as an identifier instead."
<p>The syntax of %fallback is as follows:
<blockquote>
<tt>%fallback</tt> <i>ID</i> <i>TOKEN...</i> <b>.</b>
</blockquote>
<p>In words, the %fallback directive is followed by a list of token names
terminated by a period. The first token name is the fallback token - the
token to which all the other tokens fall back. The second and subsequent
arguments are tokens which fall back to the token identified by the first
argument.
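<p>For example, a grammar with hypothetical keyword tokens BEGIN and END
that should be treated as ordinary identifiers when they do not parse as
keywords could declare:</p>
<p><pre>
%fallback ID BEGIN END.
</pre></p>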
<a name='pifdef'></a>
<h4>The <tt>%ifdef</tt>, <tt>%ifndef</tt>, and <tt>%endif</tt> directives.</h4>
<p>The %ifdef, %ifndef, and %endif directives are similar to
#ifdef, #ifndef, and #endif in the C-preprocessor, just not as general.
Each of these directives must begin at the left margin. No whitespace
is allowed between the "%" and the directive name.
<p>Grammar text in between "%ifdef MACRO" and the next nested "%endif" is
ignored unless the "-DMACRO" command-line option is used. Grammar text
betwen "%ifndef MACRO" and the next nested "%endif" is included except when
the "-DMACRO" command-line option is used.
<p>Note that the argument to %ifdef and %ifndef must be a single
preprocessor symbol name, not a general expression. There is no "%else"
directive.
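<p>For example, an optional grammar rule can be guarded by a macro (the
EXTRA macro and the MOD token are hypothetical) so that it is only included
when the "-DEXTRA" command-line option is given:</p>
<p><pre>
%ifdef EXTRA
expr ::= expr MOD expr.
%endif
</pre></p>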
<a name='pinclude'></a>
<h4>The <tt>%include</tt> directive</h4>
<p>The %include directive specifies C code that is included at the
top of the generated parser. You can include any text you want --
the Lemon parser generator copies it blindly. If you have multiple
%include directives in your grammar file, their values are concatenated
so that all %include code ultimately appears near the top of the
generated parser, in the same order as it appeared in the grammar.</p>
<p>The %include directive is very handy for getting some extra #include
preprocessor statements at the beginning of the generated parser.
For example:</p>
<p><pre>
%include {#include &lt;unistd.h&gt;}
</pre></p>
<p>This might be needed, for example, if some of the C actions in the
grammar call functions that are prototyped in unistd.h.</p>
<a name='pleft'></a>
<h4>The <tt>%left</tt> directive</h4>
<p>The %left directive is used (along with the <a href='#pright'>%right</a> and
<a href='#pnonassoc'>%nonassoc</a> directives) to declare precedences of
terminal symbols. Every terminal symbol whose name appears after
a %left directive but before the next period (".") is
given the same left-associative precedence value. Subsequent
%left directives have higher precedence. For example:</p>
<p><pre>
%left AND.
%left OR.
%nonassoc EQ NE GT GE LT LE.
%left PLUS MINUS.
%left TIMES DIVIDE MOD.
%right EXP NOT.
</pre></p>
<p>Note the period that terminates each %left, %right or %nonassoc
directive.</p>
<p>LALR(1) grammars can get into a situation where they require
a large amount of stack space if you make heavy use of right-associative
operators. For this reason, it is recommended that you use %left
rather than %right whenever possible.</p>
<a name='pname'></a>
<h4>The <tt>%name</tt> directive</h4>
<p>By default, the functions generated by Lemon all begin with the
five-character string "Parse". You can change this string to something
different using the %name directive. For instance:</p>
<p><pre>
%name Abcde
</pre></p>
<p>Putting this directive in the grammar file will cause Lemon to generate
functions named
<ul>
<li> AbcdeAlloc(),
<li> AbcdeFree(),
<li> AbcdeTrace(), and
<li> Abcde().
</ul>
The %name directive allows you to generate two or more different
parsers and link them all into the same executable.
</p>
<a name='pnonassoc'></a>
<h4>The <tt>%nonassoc</tt> directive</h4>
<p>This directive is used to assign non-associative precedence to
one or more terminal symbols. See the section on
<a href='#precrules'>precedence rules</a>
or on the <a href='#pleft'>%left</a> directive for additional information.</p>
<a name='parse_accept'></a>
<h4>The <tt>%parse_accept</tt> directive</h4>
<p>The %parse_accept directive specifies a block of C code that is
executed whenever the parser accepts its input string. To "accept"
an input string means that the parser was able to process all tokens
without error.</p>
<p>For example:</p>
<p><pre>
%parse_accept {
printf("parsing complete!\n");
}
</pre></p>
<a name='parse_failure'></a>
<h4>The <tt>%parse_failure</tt> directive</h4>
<p>The %parse_failure directive specifies a block of C code that
is executed whenever the parser fails to complete. This code is not
executed until the parser has tried and failed to resolve an input
error using its usual error recovery strategy. The routine is
only invoked when parsing is unable to continue.</p>
<p><pre>
%parse_failure {
fprintf(stderr,"Giving up. Parser is hopelessly lost...\n");
}
</pre></p>
<a name='pright'></a>
<h4>The <tt>%right</tt> directive</h4>
<p>This directive is used to assign right-associative precedence to
one or more terminal symbols. See the section on
<a href='#precrules'>precedence rules</a>
or on the <a href='#pleft'>%left</a> directive for additional information.</p>
<a name='stack_overflow'></a>
<h4>The <tt>%stack_overflow</tt> directive</h4>
<p>The %stack_overflow directive specifies a block of C code that
is executed if the parser's internal stack ever overflows. Typically
this just prints an error message. After a stack overflow, the parser
will be unable to continue and must be reset.</p>
<p><pre>
%stack_overflow {
fprintf(stderr,"Giving up. Parser stack overflow\n");
}
</pre></p>
<p>You can help prevent parser stack overflows by avoiding the use
of right recursion and right-precedence operators in your grammar.
Use left recursion and left-precedence operators instead, to
encourage rules to reduce sooner and keep the stack size down.
For example, do rules like this:
<pre>
list ::= list element. // left-recursion. Good!
list ::= .
</pre>
Not like this:
<pre>
list ::= element list. // right-recursion. Bad!
list ::= .
</pre>
<a name='stack_size'></a>
<h4>The <tt>%stack_size</tt> directive</h4>
<p>If stack overflow is a problem and you can't resolve the trouble
by using left-recursion, then you might want to increase the size
of the parser's stack using this directive. Put a positive integer
after the %stack_size directive and Lemon will generate a parser
with a stack of the requested size. The default value is 100.</p>
<p><pre>
%stack_size 2000
</pre></p>
<a name='start_symbol'></a>
<h4>The <tt>%start_symbol</tt> directive</h4>
<p>By default, the start-symbol for the grammar that Lemon generates
is the first non-terminal that appears in the grammar file. But you
can choose a different start-symbol using the %start_symbol directive.</p>
<p><pre>
%start_symbol prog
</pre></p>
<a name='token_destructor'></a>
<h4>The <tt>%token_destructor</tt> directive</h4>
<p>The %destructor directive assigns a destructor to a non-terminal
symbol. (See the description of the %destructor directive above.)
This directive does the same thing for all terminal symbols.</p>
<p>Unlike non-terminal symbols which may each have a different data type
for their values, terminals all use the same data type (defined by
the %token_type directive) and so they use a common destructor. Other
than that, the token destructor works just like the non-terminal
destructors.</p>
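<p>For example, if every token value is a pointer to a heap-allocated
structure, the grammar might contain the following sketch (TokenFree() is a
hypothetical function):</p>
<p><pre>
%token_type {Token*}
%token_destructor { TokenFree($$); }
</pre></p>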
<a name='token_prefix'></a>
<h4>The <tt>%token_prefix</tt> directive</h4>
<p>Lemon generates #defines that assign small integer constants
to each terminal symbol in the grammar. If desired, Lemon will
add a prefix specified by this directive
to each of the #defines it generates.
So if the default output of Lemon looked like this:
<pre>
#define AND 1
#define MINUS 2
#define OR 3
#define PLUS 4
</pre>
You can insert a statement into the grammar like this:
<pre>
%token_prefix TOKEN_
</pre>
to cause Lemon to produce these symbols instead:
<pre>
#define TOKEN_AND 1
#define TOKEN_MINUS 2
#define TOKEN_OR 3
#define TOKEN_PLUS 4
</pre>
<a name='token_type'></a><a name='ptype'></a>
<h4>The <tt>%token_type</tt> and <tt>%type</tt> directives</h4>
<p>These directives are used to specify the data types for values
on the parser's stack associated with terminal and non-terminal
symbols. The values of all terminal symbols must be of the same
type. This turns out to be the same data type as the 3rd parameter
to the Parse() function generated by Lemon. Typically, you will
make the value of a terminal symbol be a pointer to some kind of
token structure. Like this:</p>
<p><pre>
%token_type {Token*}
</pre></p>
<p>If the data type of terminals is not specified, the default value
is "void*".</p>
<p>Non-terminal symbols can each have their own data types. Typically
the data type of a non-terminal is a pointer to the root of a parse-tree
structure that contains all information about that non-terminal.
For example:</p>
<p><pre>
%type expr {Expr*}
</pre></p>
<p>Each entry on the parser's stack is actually a union containing
instances of all data types for every non-terminal and terminal symbol.
Lemon will automatically use the correct element of this union depending
on what the corresponding non-terminal or terminal symbol is. But
the grammar designer should keep in mind that the size of the union
will be the size of its largest element. So if you have a single
non-terminal whose data type requires 1K of storage, then your 100
entry parser stack will require 100K of heap space. If you are willing
and able to pay that price, fine. You just need to know.</p>
<a name='pwildcard'></a>
<h4>The <tt>%wildcard</tt> directive</h4>
<p>The %wildcard directive is followed by a single token name and a
period. This directive specifies that the identified token should
match any input token.
<p>When the generated parser has the choice of matching an input against
the wildcard token and some other token, the other token is always used.
The wildcard token is only matched if there are no other alternatives.
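<p>For example (ANY and SHOW are arbitrary token names chosen only for
illustration):</p>
<p><pre>
%wildcard ANY.
cmd ::= SHOW ANY.
</pre></p>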
<h3>Error Processing</h3>
<p>After extensive experimentation over several years, it has been
discovered that the error recovery strategy used by yacc is about
as good as it gets. And so that is what Lemon uses.</p>
<p>When a Lemon-generated parser encounters a syntax error, it
first invokes the code specified by the %syntax_error directive, if
any. It then enters its error recovery strategy. The error recovery
strategy is to begin popping the parser's stack until it enters a
state where it is permitted to shift a special non-terminal symbol
named "error". It then shifts this non-terminal and continues
parsing. But the %syntax_error routine will not be called again
until at least three new tokens have been successfully shifted.</p>
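<p>For example, a grammar for a line-oriented language might allow the
parser to resynchronize at end-of-line by including a rule that mentions
the "error" non-terminal (NEWLINE is a hypothetical token):</p>
<p><pre>
cmd ::= error NEWLINE.
</pre></p>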
<p>If the parser pops its stack until the stack is empty, and it still
is unable to shift the error symbol, then the %parse_failure routine
is invoked and the parser resets itself to its start state, ready
to begin parsing a new file. This is what will happen at the very
first syntax error, of course, if there are no instances of the
"error" non-terminal in your grammar.</p>
</body>
</html>

examples/calculator-c.y Normal file

@@ -0,0 +1,45 @@
%token_type {int}
%left PLUS MINUS.
%left DIVIDE TIMES.
%include {
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include "calculator-c.h"
}
%code {
int main()
{
void* pParser = ParseAlloc(malloc);
ParseTrace(stderr, "> ");
Parse(pParser, INTEGER, 1);
Parse(pParser, PLUS, 0);
Parse(pParser, INTEGER, 2);
Parse(pParser, TIMES, 0);
Parse(pParser, INTEGER, 10);
Parse(pParser, DIVIDE, 0);
Parse(pParser, INTEGER, 2);
Parse(pParser, 0, 0);
ParseFree(pParser, free);
}
}
%syntax_error {
fprintf(stderr, "Syntax error\n");
}
program ::= expr(A). { printf("Result=%d\n", A); }
expr(A) ::= expr(B) MINUS expr(C). { A = B - C; }
expr(A) ::= expr(B) PLUS expr(C). { A = B + C; }
expr(A) ::= expr(B) TIMES expr(C). { A = B * C; }
expr(A) ::= expr(B) DIVIDE expr(C). {
if (C != 0) {
A = B / C;
} else {
fprintf(stderr, "Divide by zero\n");
}
}
expr(A) ::= INTEGER(B). { A = B; }

examples/calculator-js.js Normal file

@@ -0,0 +1,917 @@
/*
** 2000-05-29
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
** Based on SQLite distribution v3.17.0
** Adopted for JavaScript by Artem Butusov <art.sormy@gmail.com>
**
*************************************************************************
** Driver template for the LEMON parser generator.
**
** The "lemon" program processes an LALR(1) input grammar file, then uses
** this template to construct a parser. The "lemon" program inserts text
** at each "%%" line. Also, any "P-a-r-s-e" identifier prefix (without the
** interstitial "-" characters) contained in this template is changed into
** the value of the %name directive from the grammar. Otherwise, the content
** of this template is copied straight through into the generated parser
** source file.
**
** The following is the concatenation of all %include directives from the
** input grammar file:
*/
/************ Begin %include sections from the grammar ************************/
// line 8 "examples/calculator-js.y"
// include something
// line 33 "examples/calculator-js.js"
/**************** End of %include directives **********************************/
function Parser() {
/* These constants specify the various numeric values for terminal symbols
** in a format understandable to "makeheaders".
***************** Begin makeheaders token definitions *************************/
this.TOKEN_PLUS = 1;
this.TOKEN_MINUS = 2;
this.TOKEN_DIVIDE = 3;
this.TOKEN_TIMES = 4;
this.TOKEN_INTEGER = 5;
/**************** End makeheaders token definitions ***************************/
/* The next section is a series of control #defines that determine
** various aspects of the generated parser.
** YYNOCODE is a number of type YYCODETYPE that is not used for
** any terminal or nonterminal symbol.
** YYFALLBACK If defined, this indicates that one or more tokens
** (also known as: "terminal symbols") have fall-back
** values which should be used if the original symbol
** would not parse. This permits keywords to sometimes
** be used as identifiers, for example.
** YYSTACKDEPTH is the maximum depth of the parser's stack. If
** zero the stack is dynamically sized using realloc()
** YYERRORSYMBOL is the code number of the error symbol. If not
** defined, then do no error processing.
** YYNSTATE the combined number of states.
** YYNRULE the number of rules in the grammar
** YY_MAX_SHIFT Maximum value for shift actions
** YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions
** YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions
** YY_MIN_REDUCE Minimum value for reduce actions
** YY_ERROR_ACTION The yy_action[] code for syntax error
** YY_ACCEPT_ACTION The yy_action[] code for accept
** YY_NO_ACTION The yy_action[] code for no-op
*/
/************* Begin control #defines *****************************************/
this.YYNOCODE = 10;
this.YYSTACKDEPTH = 100;
this.YYFALLBACK = false;
this.YYNSTATE = 8;
this.YYNRULE = 6;
this.YY_MAX_SHIFT = 7;
this.YY_MIN_SHIFTREDUCE = 11;
this.YY_MAX_SHIFTREDUCE = 16;
this.YY_MIN_REDUCE = 17;
this.YY_MAX_REDUCE = 22;
this.YY_ERROR_ACTION = 23;
this.YY_ACCEPT_ACTION = 24;
this.YY_NO_ACTION = 25;
/************* End control #defines *******************************************/
/* Define the yytestcase() macro to be a no-op if it is not already
** defined.
**
** Applications can choose to define yytestcase() in the %include section
** to a macro that can assist in verifying code coverage. For production
** code the yytestcase() macro should be turned off. But it is useful
** for testing.
*/
if (!this.yytestcase) {
this.yytestcase = function () {};
}
/* Next are the tables used to determine what action to take based on the
** current state and lookahead token. These tables are used to implement
** functions that take a state number and lookahead value and return an
** action integer.
**
** Suppose the action integer is N. Then the action is determined as
** follows
**
** 0 <= N <= YY_MAX_SHIFT Shift N. That is, push the lookahead
** token onto the stack and goto state N.
**
** N between YY_MIN_SHIFTREDUCE Shift to an arbitrary state then
** and YY_MAX_SHIFTREDUCE reduce by rule N-YY_MIN_SHIFTREDUCE.
**
** N between YY_MIN_REDUCE Reduce by rule N-YY_MIN_REDUCE
** and YY_MAX_REDUCE
**
** N == YY_ERROR_ACTION A syntax error has occurred.
**
** N == YY_ACCEPT_ACTION The parser accepts its input.
**
** N == YY_NO_ACTION No such action. Denotes unused
** slots in the yy_action[] table.
**
** The action table is constructed as a single large table named yy_action[].
** Given state S and lookahead X, the action is computed as either:
**
** (A) N = yy_action[ yy_shift_ofst[S] + X ]
** (B) N = yy_default[S]
**
** The (A) formula is preferred. The B formula is used instead if:
** (1) The yy_shift_ofst[S]+X value is out of range, or
** (2) yy_lookahead[yy_shift_ofst[S]+X] is not equal to X, or
** (3) yy_shift_ofst[S] equal YY_SHIFT_USE_DFLT.
** (Implementation note: YY_SHIFT_USE_DFLT is chosen so that
** YY_SHIFT_USE_DFLT+X will be out of range for all possible lookaheads X.
** Hence only tests (1) and (2) need to be evaluated.)
**
** The formulas above are for computing the action when the lookahead is
** a terminal symbol. If the lookahead is a non-terminal (as occurs after
** a reduce action) then the yy_reduce_ofst[] array is used in place of
** the yy_shift_ofst[] array and YY_REDUCE_USE_DFLT is used in place of
** YY_SHIFT_USE_DFLT.
**
** The following are the tables generated in this section:
**
** yy_action[] A single table containing all actions.
** yy_lookahead[] A table containing the lookahead for each entry in
** yy_action. Used to detect hash collisions.
** yy_shift_ofst[] For each state, the offset into yy_action for
** shifting terminals.
** yy_reduce_ofst[] For each state, the offset into yy_action for
** shifting non-terminals after a reduce.
** yy_default[] Default action for each state.
**
*********** Begin parsing tables **********************************************/
this.yy_action = [
/* 0 */ 17, 3, 4, 1, 2, 24, 5, 1, 2, 15,
/* 10 */ 16, 14, 19, 19, 6, 7,
];
this.yy_lookahead = [
/* 0 */ 0, 1, 2, 3, 4, 7, 8, 3, 4, 8,
/* 10 */ 5, 8, 9, 9, 8, 8,
];
this.YY_SHIFT_USE_DFLT = 16;
this.YY_SHIFT_COUNT = 7;
this.YY_SHIFT_MIN = 0;
this.YY_SHIFT_MAX = 5;
this.yy_shift_ofst = [
/* 0 */ 5, 5, 5, 5, 5, 0, 4, 4,
];
this.YY_REDUCE_USE_DFLT = -3;
this.YY_REDUCE_COUNT = 4;
this.YY_REDUCE_MIN = -2;
this.YY_REDUCE_MAX = 7;
this.yy_reduce_ofst = [
/* 0 */ -2, 1, 3, 6, 7,
];
this.yy_default = [
/* 0 */ 23, 23, 23, 23, 23, 23, 19, 18,
];
/********** End of lemon-generated parsing tables *****************************/
/* The next table maps tokens (terminal symbols) into fallback tokens.
** If a construct like the following:
**
** %fallback ID X Y Z.
**
** appears in the grammar, then ID becomes a fallback token for X, Y,
** and Z. Whenever one of the tokens X, Y, or Z is input to the parser
** but it does not parse, the type of the token is changed to ID and
** the parse is retried before an error is thrown.
**
** This feature can be used, for example, to cause some keywords in a language
** to revert to identifiers if the keyword does not apply in the context where
** it appears.
*/
this.yyFallback = [
];
/* The following structure represents a single element of the
** parser's stack. Information stored includes:
**
** + The state number for the parser at this level of the stack.
**
** + The value of the token stored at this level of the stack.
** (In other words, the "major" token.)
**
** + The semantic value stored at this level of the stack. This is
** the information used by the action routines in the grammar.
** It is sometimes called the "minor" token.
**
** After the "shift" half of a SHIFTREDUCE action, the stateno field
** actually contains the reduce action for the second half of the
** SHIFTREDUCE.
*/
//{
// stateno, /* The state-number, or reduce action in SHIFTREDUCE */
// major, /* The major token value. This is the code
// ** number for the token at this stack level */
// minor, /* The user-supplied minor token value. This
// ** is the value of the token */
//}
/* The state of the parser is completely contained in an instance of
** the following structure */
this.yyhwm = 0; /* High-water mark of the stack */
this.yyerrcnt = -1; /* Shifts left before out of the error */
this.yystack = null; /* The parser's stack */
this.yyidx = -1; /* Stack index of current element in the stack */
this.yyTraceCallback = null;
this.yyTracePrompt = "";
/*
** Turn parser tracing on by giving a stream to which to write the trace
** and a prompt to preface each trace message. Tracing is turned off
** by making either argument NULL
**
** Inputs:
** <ul>
** <li> A callback to which trace output should be written.
** If NULL, then tracing is turned off.
** <li> A prefix string written at the beginning of every
** line of trace output. Default is "".
** </ul>
**
** Outputs:
** None.
*/
this.setTraceCallback = function (callback, prompt) {
this.yyTraceCallback = callback;
this.yyTracePrompt = prompt || "";
}
this.trace = function (message) {
this.yyTraceCallback(this.yyTracePrompt + message + "\n");
}
/* For tracing shifts, the names of all terminals and nonterminals
** are required. The following table supplies these names */
this.yyTokenName = [
"$", "PLUS", "MINUS", "DIVIDE",
"TIMES", "INTEGER", "error", "program",
"expr",
];
/* For tracing reduce actions, the names of all rules are required.
*/
this.yyRuleName = [
/* 0 */ "program ::= expr",
/* 1 */ "expr ::= expr MINUS expr",
/* 2 */ "expr ::= expr PLUS expr",
/* 3 */ "expr ::= expr TIMES expr",
/* 4 */ "expr ::= expr DIVIDE expr",
/* 5 */ "expr ::= INTEGER",
];
/*
** Try to increase the size of the parser stack. Return the number
** of errors. Return 0 on success.
*/
this.yyGrowStack = function () {
// fix me: yystksz*2 + 100
this.yystack.push({
stateno: undefined,
major: undefined,
minor: undefined
});
}
/* Initialize a new parser that has already been allocated.
*/
this.init = function () {
this.yyhwm = 0;
this.yyerrcnt = -1;
this.yyidx = 0;
if (this.YYSTACKDEPTH <= 0) {
this.yystack = [];
this.yyGrowStack();
} else {
this.yystack = new Array(this.YYSTACKDEPTH);
for (var i = 0; i < this.YYSTACKDEPTH; i++) {
this.yystack[i] = {
stateno: undefined,
major: undefined,
minor: undefined
};
}
}
var yytos = this.yystack[0];
yytos.stateno = 0;
yytos.major = 0;
}
/* The following function deletes the "minor type" or semantic value
** associated with a symbol. The symbol can be either a terminal
** or nonterminal. "yymajor" is the symbol code, and "yypminor" is
** a pointer to the value to be deleted. The code used to do the
** deletions is derived from the %destructor and/or %token_destructor
** directives of the input grammar.
*/
this.yy_destructor = function (
yymajor, /* Type code for object to destroy */
yyminor /* The object to be destroyed */
) {
switch (yymajor) {
/* Here is inserted the actions which take place when a
** terminal or non-terminal is destroyed. This can happen
** when the symbol is popped from the stack during a
** reduce or during error processing or when a parser is
** being destroyed before it is finished parsing.
**
** Note: during a reduce, the only symbols destroyed are those
** which appear on the RHS of the rule, but which are *not* used
** inside the C code.
*/
/********* Begin destructor definitions ***************************************/
/********* End destructor definitions *****************************************/
default: break; /* If no destructor action specified: do nothing */
}
}
/*
** Pop the parser's stack once.
**
** If there is a destructor routine associated with the token which
** is popped from the stack, then call it.
*/
this.yy_pop_parser_stack = function () {
// assert( pParser->yytos!=0 );
// assert( pParser->yytos > pParser->yystack );
var yytos = this.yystack[this.yyidx];
if (this.yyTraceCallback) {
this.trace("Popping " + this.yyTokenName[yytos.major]);
}
this.yy_destructor(yytos.major, yytos.minor);
this.yyidx--;
}
/*
** Clear all secondary memory allocations from the parser
*/
this.finalize = function () {
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
this.yystack = null;
}
/*
** Return the peak depth of the stack for a parser.
*/
this.getStackPeak = function () {
return this.yyhwm;
}
/*
** Find the appropriate action for a parser given the terminal
** look-ahead token iLookAhead.
*/
this.yy_find_shift_action = function (
iLookAhead /* The look-ahead token */
) {
var yytos = this.yystack[this.yyidx];
var stateno = yytos.stateno;
if (stateno >= this.YY_MIN_REDUCE) {
return stateno;
}
// assert( stateno <= YY_SHIFT_COUNT );
do {
var i = this.yy_shift_ofst[stateno];
// assert( iLookAhead!=YYNOCODE );
i += iLookAhead;
if (i < 0 || i >= this.yy_action.length || this.yy_lookahead[i] != iLookAhead) {
if (this.YYFALLBACK) {
var iFallback; /* Fallback token */
if ((iLookAhead < this.yyFallback.length)
&& (iFallback = this.yyFallback[iLookAhead]) != 0
) {
if (this.yyTraceCallback) {
this.trace("FALLBACK " + this.yyTokenName[iLookAhead] + " => " + this.yyTokenName[iFallback]);
}
// assert( yyFallback[iFallback]==0 ); /* Fallback loop must terminate */
iLookAhead = iFallback;
continue;
}
}
if (this.YYWILDCARD) {
var j = i - iLookAhead + this.YYWILDCARD;
var cond1 = (this.YY_SHIFT_MIN + this.YYWILDCARD) < 0 ? j >= 0 : true;
var cond2 = (this.YY_SHIFT_MAX + this.YYWILDCARD) >= this.yy_action.length ? j < this.yy_action.length : true;
if (cond1 && cond2 && this.yy_lookahead[j] == this.YYWILDCARD && iLookAhead > 0) {
if (this.yyTraceCallback) {
this.trace("WILDCARD " + this.yyTokenName[iLookAhead] + " => " + this.yyTokenName[this.YYWILDCARD]);
}
return this.yy_action[j];
}
}
return this.yy_default[stateno];
} else {
return this.yy_action[i];
}
} while (true);
}
/*
** Find the appropriate action for a parser given the non-terminal
** look-ahead token iLookAhead.
*/
this.yy_find_reduce_action = function (
stateno, /* Current state number */
iLookAhead /* The look-ahead token */
) {
if (this.YYERRORSYMBOL) {
if (stateno > this.YY_REDUCE_COUNT) {
return this.yy_default[stateno];
}
} else {
// assert( stateno<=YY_REDUCE_COUNT );
}
var i = this.yy_reduce_ofst[stateno];
// assert( i!=YY_REDUCE_USE_DFLT );
// assert( iLookAhead!=YYNOCODE );
i += iLookAhead;
if (this.YYERRORSYMBOL) {
if (i < 0 || i >= this.yy_action.length || this.yy_lookahead[i] != iLookAhead) {
return this.yy_default[stateno];
}
} else {
// assert( i>=0 && i<YY_ACTTAB_COUNT );
// assert( yy_lookahead[i]==iLookAhead );
}
return this.yy_action[i];
}
/*
** The following routine is called if the stack overflows.
*/
this.yyStackOverflow = function () {
if (this.yyTraceCallback) {
this.trace("Stack Overflow!");
}
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
/* Here code is inserted which will execute if the parser
** stack every overflows */
/******** Begin %stack_overflow code ******************************************/
/******** End %stack_overflow code ********************************************/
}
/*
** Print tracing information for a SHIFT action
*/
this.yyTraceShift = function (yyNewState) {
if (this.yyTraceCallback) {
var yytos = this.yystack[this.yyidx];
if (yyNewState < this.YYNSTATE) {
this.trace("Shift '" + this.yyTokenName[yytos.major] + "', go to state " + yyNewState);
} else {
this.trace("Shift '" + this.yyTokenName[yytos.major] + "'");
}
}
}
/*
** Perform a shift action.
*/
this.yy_shift = function (
yyNewState, /* The new state to shift in */
yyMajor, /* The major token to shift in */
yyMinor /* The minor token to shift in */
) {
this.yyidx++;
if (this.yyidx > this.yyhwm) {
this.yyhwm++;
// assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack) );
}
if (this.YYSTACKDEPTH > 0) {
if (this.yyidx >= this.YYSTACKDEPTH) {
this.yyidx--;
this.yyStackOverflow();
return;
}
} else {
if (this.yyidx >= this.yystack.length) {
this.yyGrowStack();
}
}
if (yyNewState > this.YY_MAX_SHIFT) {
yyNewState += this.YY_MIN_REDUCE - this.YY_MIN_SHIFTREDUCE;
}
var yytos = this.yystack[this.yyidx];
yytos.stateno = yyNewState;
yytos.major = yyMajor;
yytos.minor = yyMinor;
this.yyTraceShift(yyNewState);
}
/* The following table contains information about every rule that
** is used during the reduce.
*/
//{
// lhs, /* Symbol on the left-hand side of the rule */
// nrhs, /* Number of right-hand side symbols in the rule */
//}
this.yyRuleInfo = [
{ lhs: 7, nrhs: 1 },
{ lhs: 8, nrhs: 3 },
{ lhs: 8, nrhs: 3 },
{ lhs: 8, nrhs: 3 },
{ lhs: 8, nrhs: 3 },
{ lhs: 8, nrhs: 1 },
];
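// Editorial note: with the symbol numbering from examples/calculator-js.out
// (7 = program, 8 = expr), these six entries correspond, in order, to
// grammar rules 0-5: program ::= expr, the four binary expr rules
// (MINUS, PLUS, TIMES, DIVIDE), and expr ::= INTEGER.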
/*
** Perform a reduce action and the shift that must immediately
** follow the reduce.
*/
this.yy_reduce = function (
yyruleno /* Number of the rule by which to reduce */
){
var yymsp = this.yystack[this.yyidx]; /* The top of the parser's stack */
if (yyruleno < this.yyRuleName.length) {
var yysize = this.yyRuleInfo[yyruleno].nrhs;
var ruleName = this.yyRuleName[yyruleno];
var newStateNo = this.yystack[this.yyidx - yysize].stateno;
if (this.yyTraceCallback) {
this.trace("Reduce [" + ruleName + "], go to state " + newStateNo + ".");
}
}
/* Check that the stack is large enough to grow by a single entry
** if the RHS of the rule is empty. This ensures that there is room
** enough on the stack to push the LHS value */
if (this.yyRuleInfo[yyruleno].nrhs == 0) {
if (this.yyidx > this.yyhwm) {
this.yyhwm++;
// assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack));
}
if (this.YYSTACKDEPTH > 0) {
if (this.yyidx >= this.YYSTACKDEPTH - 1) {
this.yyStackOverflow();
return;
}
} else {
if (this.yyidx >= this.yystack.length - 1) {
this.yyGrowStack();
yymsp = this.yystack[this.yyidx];
}
}
}
var yylhsminor;
switch (yyruleno) {
/* Beginning here are the reduction cases. A typical example
** follows:
** case 0:
** #line <lineno> <grammarfile>
** { ... } // User supplied code
** #line <lineno> <thisfile>
** break;
*/
/********** Begin reduce actions **********************************************/
case 0: /* program ::= expr */
// line 63 "examples/calculator-js.y"
{ console.log("Result=" + this.yystack[this.yyidx + 0].minor); }
// line 602 "examples/calculator-js.js"
break;
case 1: /* expr ::= expr MINUS expr */
// line 64 "examples/calculator-js.y"
{ yylhsminor = this.yystack[this.yyidx + -2].minor - this.yystack[this.yyidx + 0].minor; }
// line 607 "examples/calculator-js.js"
this.yystack[this.yyidx + -2].minor = yylhsminor;
break;
case 2: /* expr ::= expr PLUS expr */
// line 65 "examples/calculator-js.y"
{ yylhsminor = this.yystack[this.yyidx + -2].minor + this.yystack[this.yyidx + 0].minor; }
// line 613 "examples/calculator-js.js"
this.yystack[this.yyidx + -2].minor = yylhsminor;
break;
case 3: /* expr ::= expr TIMES expr */
// line 66 "examples/calculator-js.y"
{ yylhsminor = this.yystack[this.yyidx + -2].minor * this.yystack[this.yyidx + 0].minor; }
// line 619 "examples/calculator-js.js"
this.yystack[this.yyidx + -2].minor = yylhsminor;
break;
case 4: /* expr ::= expr DIVIDE expr */
// line 67 "examples/calculator-js.y"
{
if (this.yystack[this.yyidx + 0].minor != 0) {
yylhsminor = this.yystack[this.yyidx + -2].minor / this.yystack[this.yyidx + 0].minor;
} else {
throw new Error("Divide by zero");
}
}
// line 631 "examples/calculator-js.js"
this.yystack[this.yyidx + -2].minor = yylhsminor;
break;
case 5: /* expr ::= INTEGER */
// line 74 "examples/calculator-js.y"
{ yylhsminor = this.yystack[this.yyidx + 0].minor; }
// line 637 "examples/calculator-js.js"
this.yystack[this.yyidx + 0].minor = yylhsminor;
break;
default:
break;
/********** End reduce actions ************************************************/
};
// assert( yyruleno<sizeof(yyRuleInfo)/sizeof(yyRuleInfo[0]) );
var yygoto = this.yyRuleInfo[yyruleno].lhs; /* The next state */
var yysize = this.yyRuleInfo[yyruleno].nrhs; /* Amount to pop the stack */
var yyact = this.yy_find_reduce_action( /* The next action */
this.yystack[this.yyidx - yysize].stateno,
yygoto
);
if (yyact <= this.YY_MAX_SHIFTREDUCE) {
if (yyact > this.YY_MAX_SHIFT) {
yyact += this.YY_MIN_REDUCE - this.YY_MIN_SHIFTREDUCE;
}
this.yyidx -= yysize - 1;
yymsp = this.yystack[this.yyidx];
yymsp.stateno = yyact;
yymsp.major = yygoto;
this.yyTraceShift(yyact);
} else {
// assert( yyact == YY_ACCEPT_ACTION );
this.yyidx -= yysize;
this.yy_accept();
}
}
/*
** The following code executes when the parse fails
*/
this.yy_parse_failed = function () {
if (this.yyTraceCallback) {
this.trace("Fail!");
}
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
/* Here code is inserted which will be executed whenever the
** parser fails */
/************ Begin %parse_failure code ***************************************/
/************ End %parse_failure code *****************************************/
}
/*
** The following code executes when a syntax error first occurs.
*/
this.yy_syntax_error = function (
yymajor, /* The major type of the error token */
yyminor /* The minor type of the error token */
) {
var TOKEN = yyminor;
/************ Begin %syntax_error code ****************************************/
// line 59 "examples/calculator-js.y"
console.log("Syntax error");
// line 696 "examples/calculator-js.js"
/************ End %syntax_error code ******************************************/
}
/*
** The following is executed when the parser accepts
*/
this.yy_accept = function () {
if (this.yyTraceCallback) {
this.trace("Accept!");
}
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
// assert( yypParser->yytos==yypParser->yystack );
/* Here code is inserted which will be executed whenever the
** parser accepts */
/*********** Begin %parse_accept code *****************************************/
/*********** End %parse_accept code *******************************************/
}
/* The main parser program.
** In this JavaScript port the parser state lives on the Parser instance
** itself, so there is no separate "ParserAlloc" structure. The first
** argument is the major token number and the second is the minor
** (semantic) token value, which is whatever the user wants (as specified
** in the grammar) and is available for use by the action routines.
**
** Inputs:
** <ul>
** <li> The major token number.
** <li> The minor token value (of a grammar-specified type).
** </ul>
**
** Outputs:
** None.
*/
this.parse = function (
yymajor, /* The major token code number */
yyminor /* The value for the token */
) {
var yyact; /* The parser action. */
var yyendofinput; /* True if we are at the end of input */
var yyerrorhit = 0; /* True if yymajor has invoked an error */
//assert( yypParser->yytos!=0 );
if (yymajor === undefined || yymajor === null) {
yymajor = 0;
}
yyendofinput = yymajor == 0;
if (this.yyTraceCallback) {
this.trace("Input '" + this.yyTokenName[yymajor] + "'");
}
do {
yyact = this.yy_find_shift_action(yymajor);
if (yyact <= this.YY_MAX_SHIFTREDUCE) { // check me?
this.yy_shift(yyact, yymajor, yyminor);
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt--;
}
yymajor = this.YYNOCODE;
} else if (yyact <= this.YY_MAX_REDUCE) { // check me?
this.yy_reduce(yyact - this.YY_MIN_REDUCE); // check me?
} else {
// assert( yyact == YY_ERROR_ACTION );
if (this.yyTraceCallback) {
this.trace("Syntax Error!");
}
if (this.YYERRORSYMBOL) {
/* A syntax error has occurred.
** The response to an error depends upon whether or not the
** grammar defines an error token "ERROR".
**
** This is what we do if the grammar does define ERROR:
**
** * Call the %syntax_error function.
**
** * Begin popping the stack until we enter a state where
** it is legal to shift the error symbol, then shift
** the error symbol.
**
** * Set the error count to three.
**
** * Begin accepting and shifting new tokens. No new error
** processing will occur until three tokens have been
** shifted successfully.
**
*/
if (this.yyerrcnt < 0) {
this.yy_syntax_error(yymajor, yyminor);
}
var yymx = this.yystack[this.yyidx].major;
if (yymx == this.YYERRORSYMBOL || yyerrorhit) {
if (this.yyTraceCallback) {
this.trace("Discard input token " + this.yyTokenName[yymajor]);
}
this.yy_destructor(yymajor, yyminor);
yymajor = this.YYNOCODE;
} else {
while (this.yyidx >= 0
&& yymx != this.YYERRORSYMBOL
&& (yyact = this.yy_find_reduce_action(
this.yystack[this.yyidx].stateno,
this.YYERRORSYMBOL)) >= this.YY_MIN_REDUCE // check me?
) {
this.yy_pop_parser_stack();
}
if (this.yyidx < 0 || yymajor == 0) {
this.yy_destructor(yymajor, yyminor);
this.yy_parse_failed();
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
yymajor = this.YYNOCODE;
} else if (yymx != this.YYERRORSYMBOL) {
this.yy_shift(yyact, this.YYERRORSYMBOL, yyminor); // check me?
}
}
this.yyerrcnt = 3;
yyerrorhit = 1;
} else if (this.YYNOERRORRECOVERY) {
/* If the YYNOERRORRECOVERY macro is defined, then do not attempt to
** do any kind of error recovery. Instead, simply invoke the syntax
** error routine and continue going as if nothing had happened.
**
** Applications can set this macro (for example inside %include) if
** they intend to abandon the parse upon the first syntax error seen.
*/
this.yy_syntax_error(yymajor, yyminor);
this.yy_destructor(yymajor, yyminor);
yymajor = this.YYNOCODE;
} else { /* YYERRORSYMBOL is not defined */
/* This is what we do if the grammar does not define ERROR:
**
** * Report an error message, and throw away the input token.
**
** * If the input token is $, then fail the parse.
**
** As before, subsequent error messages are suppressed until
** three input tokens have been successfully shifted.
*/
if (this.yyerrcnt <= 0) {
this.yy_syntax_error(yymajor, yyminor);
}
this.yyerrcnt = 3;
this.yy_destructor(yymajor, yyminor);
if (yyendofinput) {
this.yy_parse_failed();
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
}
yymajor = this.YYNOCODE;
}
}
} while (yymajor != this.YYNOCODE && this.yyidx > 0);
if (this.yyTraceCallback) {
var remainingTokens = [];
for (var i = 1; i <= this.yyidx; i++) {
remainingTokens.push(this.yyTokenName[this.yystack[i].major]);
}
this.trace("Return. Stack=[" + remainingTokens.join(" ") + "]");
}
}
this.init();
} // function Parser()
// line 12 "examples/calculator-js.y"
var Lexer = require('../lexer/lexer');
var parser = new Parser();
parser.setTraceCallback(function (value) {
process.stdout.write(value);
}, '> ');
var lexer = new Lexer();
lexer.addRule(/\d+/, function (value) {
return { major: parser.TOKEN_INTEGER, minor: parseInt(value, 10) };
});
lexer.addRule('+', function (value) {
return { major: parser.TOKEN_PLUS, minor: null };
});
lexer.addRule('-', function (value) {
return { major: parser.TOKEN_MINUS, minor: null };
});
lexer.addRule('/', function (value) {
return { major: parser.TOKEN_DIVIDE, minor: null };
});
lexer.addRule('*', function (value) {
return { major: parser.TOKEN_TIMES, minor: null };
});
lexer.addRule(/\s+/, function () {});
var data = '';
process.stdin.on('data', function (chunk) {
data += chunk;
});
process.stdin.on('end', function () {
var token;
lexer.setInput(data);
while (token = lexer.lex()) {
parser.parse(token.major, token.minor);
}
parser.parse();
});
// line 918 "examples/calculator-js.js"
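// Editorial sketch (not produced by lemon-js): the same parser driven with a
// hard-coded token stream instead of stdin, to show the parse() calling
// convention directly. It uses only names defined above (Parser and the
// TOKEN_* constants) and is kept commented out so the stdin-driven example
// above is unchanged. For "3 + 4 * 2" the accept action prints "Result=11".
//
// var p = new Parser();
// [
//     { major: p.TOKEN_INTEGER, minor: 3 },
//     { major: p.TOKEN_PLUS, minor: null },
//     { major: p.TOKEN_INTEGER, minor: 4 },
//     { major: p.TOKEN_TIMES, minor: null },
//     { major: p.TOKEN_INTEGER, minor: 2 }
// ].forEach(function (t) { p.parse(t.major, t.minor); });
// p.parse(); // calling parse() with no arguments signals end of input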

102
examples/calculator-js.out Normal file
View File

@ -0,0 +1,102 @@
State 0:
program ::= * expr
expr ::= * expr MINUS expr
expr ::= * expr PLUS expr
expr ::= * expr TIMES expr
expr ::= * expr DIVIDE expr
expr ::= * INTEGER
INTEGER shift-reduce 5 expr ::= INTEGER
program accept
expr shift 5
State 1:
expr ::= * expr MINUS expr
expr ::= * expr PLUS expr
expr ::= * expr TIMES expr
expr ::= * expr DIVIDE expr
expr ::= expr DIVIDE * expr
expr ::= * INTEGER
INTEGER shift-reduce 5 expr ::= INTEGER
expr shift-reduce 4 expr ::= expr DIVIDE expr
State 2:
expr ::= * expr MINUS expr
expr ::= * expr PLUS expr
expr ::= * expr TIMES expr
expr ::= expr TIMES * expr
expr ::= * expr DIVIDE expr
expr ::= * INTEGER
INTEGER shift-reduce 5 expr ::= INTEGER
expr shift-reduce 3 expr ::= expr TIMES expr
State 3:
expr ::= * expr MINUS expr
expr ::= * expr PLUS expr
expr ::= expr PLUS * expr
expr ::= * expr TIMES expr
expr ::= * expr DIVIDE expr
expr ::= * INTEGER
INTEGER shift-reduce 5 expr ::= INTEGER
expr shift 6
State 4:
expr ::= * expr MINUS expr
expr ::= expr MINUS * expr
expr ::= * expr PLUS expr
expr ::= * expr TIMES expr
expr ::= * expr DIVIDE expr
expr ::= * INTEGER
INTEGER shift-reduce 5 expr ::= INTEGER
expr shift 7
State 5:
(0) program ::= expr *
expr ::= expr * MINUS expr
expr ::= expr * PLUS expr
expr ::= expr * TIMES expr
expr ::= expr * DIVIDE expr
$ reduce 0 program ::= expr
PLUS shift 3
MINUS shift 4
DIVIDE shift 1
TIMES shift 2
State 6:
expr ::= expr * MINUS expr
expr ::= expr * PLUS expr
(2) expr ::= expr PLUS expr *
expr ::= expr * TIMES expr
expr ::= expr * DIVIDE expr
DIVIDE shift 1
TIMES shift 2
{default} reduce 2 expr ::= expr PLUS expr
State 7:
expr ::= expr * MINUS expr
(1) expr ::= expr MINUS expr *
expr ::= expr * PLUS expr
expr ::= expr * TIMES expr
expr ::= expr * DIVIDE expr
DIVIDE shift 1
TIMES shift 2
{default} reduce 1 expr ::= expr MINUS expr
----------------------------------------------------
Symbols:
0: $:
1: PLUS
2: MINUS
3: DIVIDE
4: TIMES
5: INTEGER
6: error:
7: program: INTEGER
8: expr: INTEGER

74
examples/calculator-js.y Normal file
View File

@ -0,0 +1,74 @@
%name Parser
%token_prefix TOKEN_
%left PLUS MINUS.
%left DIVIDE TIMES.
%include {
// include something
}
%code {
var Lexer = require('../lexer/lexer');
var parser = new Parser();
parser.setTraceCallback(function (value) {
process.stdout.write(value);
}, '> ');
var lexer = new Lexer();
lexer.addRule(/\d+/, function (value) {
return { major: parser.TOKEN_INTEGER, minor: parseInt(value, 10) };
});
lexer.addRule('+', function (value) {
return { major: parser.TOKEN_PLUS, minor: null };
});
lexer.addRule('-', function (value) {
return { major: parser.TOKEN_MINUS, minor: null };
});
lexer.addRule('/', function (value) {
return { major: parser.TOKEN_DIVIDE, minor: null };
});
lexer.addRule('*', function (value) {
return { major: parser.TOKEN_TIMES, minor: null };
});
lexer.addRule(/\s+/, function () {});
var data = '';
process.stdin.on('data', function (chunk) {
data += chunk;
});
process.stdin.on('end', function () {
var token;
lexer.setInput(data);
while (token = lexer.lex()) {
parser.parse(token.major, token.minor);
}
parser.parse();
});
}
%syntax_error {
console.log("Syntax error");
}
program ::= expr(A). { console.log("Result=" + A); }
expr(A) ::= expr(B) MINUS expr(C). { A = B - C; }
expr(A) ::= expr(B) PLUS expr(C). { A = B + C; }
expr(A) ::= expr(B) TIMES expr(C). { A = B * C; }
expr(A) ::= expr(B) DIVIDE expr(C). {
if (C != 0) {
A = B / C;
} else {
throw new Error("Divide by zero");
}
}
expr(A) ::= INTEGER(B). { A = B; }

BIN
lemon-src/lemon-js Executable file

Binary file not shown.

5442
lemon-src/lemon-js.c Normal file

File diff suppressed because it is too large

5436
lemon-src/lemon.c Normal file

File diff suppressed because it is too large

775
lemon-src/lempar.js Normal file
View File

@ -0,0 +1,775 @@
/*
** 2000-05-29
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
** Based on SQLite distribution v3.17.0
** Adopted for JavaScript by Artem Butusov <art.sormy@gmail.com>
**
*************************************************************************
** Driver template for the LEMON parser generator.
**
** The "lemon" program processes an LALR(1) input grammar file, then uses
** this template to construct a parser. The "lemon" program inserts text
** at each "%%" line. Also, any "P-a-r-s-e" identifer prefix (without the
** interstitial "-" characters) contained in this template is changed into
** the value of the %name directive from the grammar. Otherwise, the content
** of this template is copied straight through into the generated parser
** source file.
**
** The following is the concatenation of all %include directives from the
** input grammar file:
*/
/************ Begin %include sections from the grammar ************************/
%%
/**************** End of %include directives **********************************/
function Parse() {
/* These constants specify the various numeric values for terminal symbols
** in a format understandable to "makeheaders".
***************** Begin makeheaders token definitions *************************/
%%
/**************** End makeheaders token definitions ***************************/
/* The next section is a series of control #defines that govern
** various aspects of the generated parser.
** YYNOCODE is a number of type YYCODETYPE that is not used for
** any terminal or nonterminal symbol.
** YYFALLBACK If defined, this indicates that one or more tokens
** (also known as: "terminal symbols") have fall-back
** values which should be used if the original symbol
** would not parse. This permits keywords to sometimes
** be used as identifiers, for example.
** YYSTACKDEPTH is the maximum depth of the parser's stack. If
** zero the stack is dynamically sized using realloc()
** YYERRORSYMBOL is the code number of the error symbol. If not
** defined, then do no error processing.
** YYNSTATE the combined number of states.
** YYNRULE the number of rules in the grammar
** YY_MAX_SHIFT Maximum value for shift actions
** YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions
** YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions
** YY_MIN_REDUCE Minimum value for reduce actions
** YY_ERROR_ACTION The yy_action[] code for syntax error
** YY_ACCEPT_ACTION The yy_action[] code for accept
** YY_NO_ACTION The yy_action[] code for no-op
*/
/************* Begin control #defines *****************************************/
%%
/************* End control #defines *******************************************/
/* Define the yytestcase() macro to be a no-op if it is not already
** defined.
**
** Applications can choose to define yytestcase() in the %include section
** to a macro that can assist in verifying code coverage. For production
** code the yytestcase() macro should be turned off. But it is useful
** for testing.
*/
if (!this.yytestcase) {
this.yytestcase = function () {};
}
/* Next are the tables used to determine what action to take based on the
** current state and lookahead token. These tables are used to implement
** functions that take a state number and lookahead value and return an
** action integer.
**
** Suppose the action integer is N. Then the action is determined as
** follows
**
** 0 <= N <= YY_MAX_SHIFT Shift N. That is, push the lookahead
** token onto the stack and goto state N.
**
** N between YY_MIN_SHIFTREDUCE Shift to an arbitrary state then
** and YY_MAX_SHIFTREDUCE reduce by rule N-YY_MIN_SHIFTREDUCE.
**
** N between YY_MIN_REDUCE Reduce by rule N-YY_MIN_REDUCE
** and YY_MAX_REDUCE
**
** N == YY_ERROR_ACTION A syntax error has occurred.
**
** N == YY_ACCEPT_ACTION The parser accepts its input.
**
** N == YY_NO_ACTION No such action. Denotes unused
** slots in the yy_action[] table.
**
** The action table is constructed as a single large table named yy_action[].
** Given state S and lookahead X, the action is computed as either:
**
** (A) N = yy_action[ yy_shift_ofst[S] + X ]
** (B) N = yy_default[S]
**
** The (A) formula is preferred. The B formula is used instead if:
** (1) The yy_shift_ofst[S]+X value is out of range, or
** (2) yy_lookahead[yy_shift_ofst[S]+X] is not equal to X, or
** (3) yy_shift_ofst[S] equals YY_SHIFT_USE_DFLT.
** (Implementation note: YY_SHIFT_USE_DFLT is chosen so that
** YY_SHIFT_USE_DFLT+X will be out of range for all possible lookaheads X.
** Hence only tests (1) and (2) need to be evaluated.)
**
** The formulas above are for computing the action when the lookahead is
** a terminal symbol. If the lookahead is a non-terminal (as occurs after
** a reduce action) then the yy_reduce_ofst[] array is used in place of
** the yy_shift_ofst[] array and YY_REDUCE_USE_DFLT is used in place of
** YY_SHIFT_USE_DFLT.
**
** The following are the tables generated in this section:
**
** yy_action[] A single table containing all actions.
** yy_lookahead[] A table containing the lookahead for each entry in
** yy_action. Used to detect hash collisions.
** yy_shift_ofst[] For each state, the offset into yy_action for
** shifting terminals.
** yy_reduce_ofst[] For each state, the offset into yy_action for
** shifting non-terminals after a reduce.
** yy_default[] Default action for each state.
**
*********** Begin parsing tables **********************************************/
%%
/********** End of lemon-generated parsing tables *****************************/
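/* Editorial sketch (not part of the lemon template): the lookup described in
** the comment above, written out for a terminal lookahead X in state S. The
** real implementation is yy_find_shift_action() below, which additionally
** handles %fallback and %wildcard tokens.
**
**   var i = this.yy_shift_ofst[S] + X;
**   if (i < 0 || i >= this.yy_action.length || this.yy_lookahead[i] != X) {
**       return this.yy_default[S];   // formula (B)
**   }
**   return this.yy_action[i];        // formula (A)
*/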
/* The next table maps tokens (terminal symbols) into fallback tokens.
** If a construct like the following:
**
** %fallback ID X Y Z.
**
** appears in the grammar, then ID becomes a fallback token for X, Y,
** and Z. Whenever one of the tokens X, Y, or Z is input to the parser
** but it does not parse, the type of the token is changed to ID and
** the parse is retried before an error is thrown.
**
** This feature can be used, for example, to cause some keywords in a language
** to revert to identifiers if the keyword does not apply in the context where
** it appears.
*/
this.yyFallback = [
%%
];
/* The following structure represents a single element of the
** parser's stack. Information stored includes:
**
** + The state number for the parser at this level of the stack.
**
** + The value of the token stored at this level of the stack.
** (In other words, the "major" token.)
**
** + The semantic value stored at this level of the stack. This is
** the information used by the action routines in the grammar.
** It is sometimes called the "minor" token.
**
** After the "shift" half of a SHIFTREDUCE action, the stateno field
** actually contains the reduce action for the second half of the
** SHIFTREDUCE.
*/
//{
// stateno, /* The state-number, or reduce action in SHIFTREDUCE */
// major, /* The major token value. This is the code
// ** number for the token at this stack level */
// minor, /* The user-supplied minor token value. This
// ** is the value of the token */
//}
/* The state of the parser is completely contained in an instance of
** the following structure */
this.yyhwm = 0; /* High-water mark of the stack */
this.yyerrcnt = -1; /* Shifts left before leaving error-recovery mode */
this.yystack = null; /* The parser's stack */
this.yyidx = -1; /* Stack index of current element in the stack */
this.yyTraceCallback = null;
this.yyTracePrompt = "";
/*
** Turn parser tracing on by giving a stream to which to write the trace
** and a prompt to preface each trace message. Tracing is turned off
** by making either argument NULL
**
** Inputs:
** <ul>
** <li> A callback to which trace output should be written.
** If NULL, then tracing is turned off.
** <li> A prefix string written at the beginning of every
** line of trace output. Default is "".
** </ul>
**
** Outputs:
** None.
*/
this.setTraceCallback = function (callback, prompt) {
this.yyTraceCallback = callback;
this.yyTracePrompt = prompt || "";
}
this.trace = function (message) {
this.yyTraceCallback(this.yyTracePrompt + message + "\n");
}
/* For tracing shifts, the names of all terminals and nonterminals
** are required. The following table supplies these names */
this.yyTokenName = [
%%
];
/* For tracing reduce actions, the names of all rules are required.
*/
this.yyRuleName = [
%%
];
/*
** Try to increase the size of the parser stack. Return the number
** of errors. Return 0 on success.
*/
this.yyGrowStack = function () {
// fix me: yystksz*2 + 100
this.yystack.push({
stateno: undefined,
major: undefined,
minor: undefined
});
}
/* Initialize a new parser that has already been allocated.
*/
this.init = function () {
this.yyhwm = 0;
this.yyerrcnt = -1;
this.yyidx = 0;
if (this.YYSTACKDEPTH <= 0) {
this.yystack = [];
this.yyGrowStack();
} else {
this.yystack = new Array(this.YYSTACKDEPTH);
for (var i = 0; i < this.YYSTACKDEPTH; i++) {
this.yystack[i] = {
stateno: undefined,
major: undefined,
minor: undefined
};
}
}
var yytos = this.yystack[0];
yytos.stateno = 0;
yytos.major = 0;
}
/* The following function deletes the "minor type" or semantic value
** associated with a symbol. The symbol can be either a terminal
** or nonterminal. "yymajor" is the symbol code, and "yyminor" is
** the value to be deleted. The code used to do the
** deletions is derived from the %destructor and/or %token_destructor
** directives of the input grammar.
*/
this.yy_destructor = function (
yymajor, /* Type code for object to destroy */
yyminor /* The object to be destroyed */
) {
switch (yymajor) {
/* Here is inserted the actions which take place when a
** terminal or non-terminal is destroyed. This can happen
** when the symbol is popped from the stack during a
** reduce or during error processing or when a parser is
** being destroyed before it is finished parsing.
**
** Note: during a reduce, the only symbols destroyed are those
** which appear on the RHS of the rule, but which are *not* used
** inside the action code.
*/
/********* Begin destructor definitions ***************************************/
%%
/********* End destructor definitions *****************************************/
default: break; /* If no destructor action specified: do nothing */
}
}
/*
** Pop the parser's stack once.
**
** If there is a destructor routine associated with the token which
** is popped from the stack, then call it.
*/
this.yy_pop_parser_stack = function () {
// assert( pParser->yytos!=0 );
// assert( pParser->yytos > pParser->yystack );
var yytos = this.yystack[this.yyidx];
if (this.yyTraceCallback) {
this.trace("Popping " + this.yyTokenName[yytos.major]);
}
this.yy_destructor(yytos.major, yytos.minor);
this.yyidx--;
}
/*
** Clear all secondary memory allocations from the parser
*/
this.finalize = function () {
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
this.yystack = null;
}
/*
** Return the peak depth of the stack for a parser.
*/
this.getStackPeak = function () {
return this.yyhwm;
}
/*
** Find the appropriate action for a parser given the terminal
** look-ahead token iLookAhead.
*/
this.yy_find_shift_action = function (
iLookAhead /* The look-ahead token */
) {
var yytos = this.yystack[this.yyidx];
var stateno = yytos.stateno;
if (stateno >= this.YY_MIN_REDUCE) {
return stateno;
}
// assert( stateno <= YY_SHIFT_COUNT );
do {
var i = this.yy_shift_ofst[stateno];
// assert( iLookAhead!=YYNOCODE );
i += iLookAhead;
if (i < 0 || i >= this.yy_action.length || this.yy_lookahead[i] != iLookAhead) {
if (this.YYFALLBACK) {
var iFallback; /* Fallback token */
if ((iLookAhead < this.yyFallback.length)
&& (iFallback = this.yyFallback[iLookAhead]) != 0
) {
if (this.yyTraceCallback) {
this.trace("FALLBACK " + this.yyTokenName[iLookAhead] + " => " + this.yyTokenName[iFallback]);
}
// assert( yyFallback[iFallback]==0 ); /* Fallback loop must terminate */
iLookAhead = iFallback;
continue;
}
}
if (this.YYWILDCARD) {
var j = i - iLookAhead + this.YYWILDCARD;
var cond1 = (this.YY_SHIFT_MIN + this.YYWILDCARD) < 0 ? j >= 0 : true;
var cond2 = (this.YY_SHIFT_MAX + this.YYWILDCARD) >= this.yy_action.length ? j < this.yy_action.length : true;
if (cond1 && cond2 && this.yy_lookahead[j] == this.YYWILDCARD && iLookAhead > 0) {
if (this.yyTraceCallback) {
this.trace("WILDCARD " + this.yyTokenName[iLookAhead] + " => " + this.yyTokenName[this.YYWILDCARD]);
}
return this.yy_action[j];
}
}
return this.yy_default[stateno];
} else {
return this.yy_action[i];
}
} while (true);
}
/*
** Find the appropriate action for a parser given the non-terminal
** look-ahead token iLookAhead.
*/
this.yy_find_reduce_action = function (
stateno, /* Current state number */
iLookAhead /* The look-ahead token */
) {
if (this.YYERRORSYMBOL) {
if (stateno > this.YY_REDUCE_COUNT) {
return this.yy_default[stateno];
}
} else {
// assert( stateno<=YY_REDUCE_COUNT );
}
var i = this.yy_reduce_ofst[stateno];
// assert( i!=YY_REDUCE_USE_DFLT );
// assert( iLookAhead!=YYNOCODE );
i += iLookAhead;
if (this.YYERRORSYMBOL) {
if (i < 0 || i >= this.yy_action.length || this.yy_lookahead[i] != iLookAhead) {
return this.yy_default[stateno];
}
} else {
// assert( i>=0 && i<YY_ACTTAB_COUNT );
// assert( yy_lookahead[i]==iLookAhead );
}
return this.yy_action[i];
}
/*
** The following routine is called if the stack overflows.
*/
this.yyStackOverflow = function () {
if (this.yyTraceCallback) {
this.trace("Stack Overflow!");
}
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
/* Here code is inserted which will execute if the parser
** stack ever overflows */
/******** Begin %stack_overflow code ******************************************/
%%
/******** End %stack_overflow code ********************************************/
}
/*
** Print tracing information for a SHIFT action
*/
this.yyTraceShift = function (yyNewState) {
if (this.yyTraceCallback) {
var yytos = this.yystack[this.yyidx];
if (yyNewState < this.YYNSTATE) {
this.trace("Shift '" + this.yyTokenName[yytos.major] + "', go to state " + yyNewState);
} else {
this.trace("Shift '" + this.yyTokenName[yytos.major] + "'");
}
}
}
/*
** Perform a shift action.
*/
this.yy_shift = function (
yyNewState, /* The new state to shift in */
yyMajor, /* The major token to shift in */
yyMinor /* The minor token to shift in */
) {
this.yyidx++;
if (this.yyidx > this.yyhwm) {
this.yyhwm++;
// assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack) );
}
if (this.YYSTACKDEPTH > 0) {
if (this.yyidx >= this.YYSTACKDEPTH) {
this.yyidx--;
this.yyStackOverflow();
return;
}
} else {
if (this.yyidx >= this.yystack.length) {
this.yyGrowStack();
}
}
if (yyNewState > this.YY_MAX_SHIFT) {
yyNewState += this.YY_MIN_REDUCE - this.YY_MIN_SHIFTREDUCE;
}
var yytos = this.yystack[this.yyidx];
yytos.stateno = yyNewState;
yytos.major = yyMajor;
yytos.minor = yyMinor;
this.yyTraceShift(yyNewState);
}
/* The following table contains information about every rule that
** is used during the reduce.
*/
//{
// lhs, /* Symbol on the left-hand side of the rule */
// nrhs, /* Number of right-hand side symbols in the rule */
//}
this.yyRuleInfo = [
%%
];
/*
** Perform a reduce action and the shift that must immediately
** follow the reduce.
*/
this.yy_reduce = function (
yyruleno /* Number of the rule by which to reduce */
){
var yymsp = this.yystack[this.yyidx]; /* The top of the parser's stack */
if (yyruleno < this.yyRuleName.length) {
var yysize = this.yyRuleInfo[yyruleno].nrhs;
var ruleName = this.yyRuleName[yyruleno];
var newStateNo = this.yystack[this.yyidx - yysize].stateno;
if (this.yyTraceCallback) {
this.trace("Reduce [" + ruleName + "], go to state " + newStateNo + ".");
}
}
/* Check that the stack is large enough to grow by a single entry
** if the RHS of the rule is empty. This ensures that there is room
** enough on the stack to push the LHS value */
if (this.yyRuleInfo[yyruleno].nrhs == 0) {
if (this.yyidx > this.yyhwm) {
this.yyhwm++;
// assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack));
}
if (this.YYSTACKDEPTH > 0) {
if (this.yyidx >= this.YYSTACKDEPTH - 1) {
this.yyStackOverflow();
return;
}
} else {
if (this.yyidx >= this.yystack.length - 1) {
this.yyGrowStack();
yymsp = this.yystack[this.yyidx];
}
}
}
var yylhsminor;
switch (yyruleno) {
/* Beginning here are the reduction cases. A typical example
** follows:
** case 0:
** #line <lineno> <grammarfile>
** { ... } // User supplied code
** #line <lineno> <thisfile>
** break;
*/
/********** Begin reduce actions **********************************************/
%%
/********** End reduce actions ************************************************/
};
// assert( yyruleno<sizeof(yyRuleInfo)/sizeof(yyRuleInfo[0]) );
var yygoto = this.yyRuleInfo[yyruleno].lhs; /* The next state */
var yysize = this.yyRuleInfo[yyruleno].nrhs; /* Amount to pop the stack */
var yyact = this.yy_find_reduce_action( /* The next action */
this.yystack[this.yyidx - yysize].stateno,
yygoto
);
if (yyact <= this.YY_MAX_SHIFTREDUCE) {
if (yyact > this.YY_MAX_SHIFT) {
yyact += this.YY_MIN_REDUCE - this.YY_MIN_SHIFTREDUCE;
}
this.yyidx -= yysize - 1;
yymsp = this.yystack[this.yyidx];
yymsp.stateno = yyact;
yymsp.major = yygoto;
this.yyTraceShift(yyact);
} else {
// assert( yyact == YY_ACCEPT_ACTION );
this.yyidx -= yysize;
this.yy_accept();
}
}
/*
** The following code executes when the parse fails
*/
this.yy_parse_failed = function () {
if (this.yyTraceCallback) {
this.trace("Fail!");
}
while (this.yyidx > 0) {
this.yy_pop_parser_stack();
}
/* Here code is inserted which will be executed whenever the
** parser fails */
/************ Begin %parse_failure code ***************************************/
%%
/************ End %parse_failure code *****************************************/
}
/*
** The following code executes when a syntax error first occurs.
*/
this.yy_syntax_error = function (
yymajor, /* The major type of the error token */
yyminor /* The minor type of the error token */
) {
var TOKEN = yyminor;
/************ Begin %syntax_error code ****************************************/
%%
/************ End %syntax_error code ******************************************/
}
/*
** The following is executed when the parser accepts
*/
this.yy_accept = function () {
if (this.yyTraceCallback) {
this.trace("Accept!");
}
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
// assert( yypParser->yytos==yypParser->yystack );
/* Here code is inserted which will be executed whenever the
** parser accepts */
/*********** Begin %parse_accept code *****************************************/
%%
/*********** End %parse_accept code *******************************************/
}
/* The main parser program.
** In this JavaScript port the parser state lives on the Parse instance
** itself, so there is no separate "ParseAlloc" structure. The first
** argument is the major token number and the second is the minor
** (semantic) token value, which is whatever the user wants (as specified
** in the grammar) and is available for use by the action routines.
**
** Inputs:
** <ul>
** <li> The major token number.
** <li> The minor token value (of a grammar-specified type).
** </ul>
**
** Outputs:
** None.
*/
this.parse = function (
yymajor, /* The major token code number */
yyminor /* The value for the token */
) {
var yyact; /* The parser action. */
var yyendofinput; /* True if we are at the end of input */
var yyerrorhit = 0; /* True if yymajor has invoked an error */
//assert( yypParser->yytos!=0 );
if (yymajor === undefined || yymajor === null) {
yymajor = 0;
}
yyendofinput = yymajor == 0;
if (this.yyTraceCallback) {
this.trace("Input '" + this.yyTokenName[yymajor] + "'");
}
do {
yyact = this.yy_find_shift_action(yymajor);
if (yyact <= this.YY_MAX_SHIFTREDUCE) { // check me?
this.yy_shift(yyact, yymajor, yyminor);
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt--;
}
yymajor = this.YYNOCODE;
} else if (yyact <= this.YY_MAX_REDUCE) { // check me?
this.yy_reduce(yyact - this.YY_MIN_REDUCE); // check me?
} else {
// assert( yyact == YY_ERROR_ACTION );
if (this.yyTraceCallback) {
this.trace("Syntax Error!");
}
if (this.YYERRORSYMBOL) {
/* A syntax error has occurred.
** The response to an error depends upon whether or not the
** grammar defines an error token "ERROR".
**
** This is what we do if the grammar does define ERROR:
**
** * Call the %syntax_error function.
**
** * Begin popping the stack until we enter a state where
** it is legal to shift the error symbol, then shift
** the error symbol.
**
** * Set the error count to three.
**
** * Begin accepting and shifting new tokens. No new error
** processing will occur until three tokens have been
** shifted successfully.
**
*/
if (this.yyerrcnt < 0) {
this.yy_syntax_error(yymajor, yyminor);
}
var yymx = this.yystack[this.yyidx].major;
if (yymx == this.YYERRORSYMBOL || yyerrorhit) {
if (this.yyTraceCallback) {
this.trace("Discard input token " + this.yyTokenName[yymajor]);
}
this.yy_destructor(yymajor, yyminor);
yymajor = this.YYNOCODE;
} else {
while (this.yyidx >= 0
&& yymx != this.YYERRORSYMBOL
&& (yyact = this.yy_find_reduce_action(
this.yystack[this.yyidx].stateno,
this.YYERRORSYMBOL)) >= this.YY_MIN_REDUCE // check me?
) {
this.yy_pop_parser_stack();
}
if (this.yyidx < 0 || yymajor == 0) {
this.yy_destructor(yymajor, yyminor);
this.yy_parse_failed();
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
yymajor = this.YYNOCODE;
} else if (yymx != this.YYERRORSYMBOL) {
this.yy_shift(yyact, this.YYERRORSYMBOL, yyminor); // check me?
}
}
this.yyerrcnt = 3;
yyerrorhit = 1;
} else if (this.YYNOERRORRECOVERY) {
/* If the YYNOERRORRECOVERY macro is defined, then do not attempt to
** do any kind of error recovery. Instead, simply invoke the syntax
** error routine and continue going as if nothing had happened.
**
** Applications can set this macro (for example inside %include) if
** they intend to abandon the parse upon the first syntax error seen.
*/
this.yy_syntax_error(yymajor, yyminor);
this.yy_destructor(yymajor, yyminor);
yymajor = this.YYNOCODE;
} else { /* YYERRORSYMBOL is not defined */
/* This is what we do if the grammar does not define ERROR:
**
** * Report an error message, and throw away the input token.
**
** * If the input token is $, then fail the parse.
**
** As before, subsequent error messages are suppressed until
** three input tokens have been successfully shifted.
*/
if (this.yyerrcnt <= 0) {
this.yy_syntax_error(yymajor, yyminor);
}
this.yyerrcnt = 3;
this.yy_destructor(yymajor, yyminor);
if (yyendofinput) {
this.yy_parse_failed();
if (!this.YYNOERRORRECOVERY) {
this.yyerrcnt = -1;
}
}
yymajor = this.YYNOCODE;
}
}
} while (yymajor != this.YYNOCODE && this.yyidx > 0);
if (this.yyTraceCallback) {
var remainingTokens = [];
for (var i = 1; i <= this.yyidx; i++) {
remainingTokens.push(this.yyTokenName[this.yystack[i].major]);
}
this.trace("Return. Stack=[" + remainingTokens.join(" ") + "]");
}
}
this.init();
} // function Parse()

54
main.js Normal file
View File

@ -0,0 +1,54 @@
/**
* Created by Aleksey Chichenkov <a.chichenkov@initi.ru> on 1/28/19.
*/
var js_beautify = require("js-beautify");
var args = require("args-parser")(process.argv);
var fs = require("fs");
var exec = require('child_process').exec;
var program_path = "./lemon-src/lemon-js";
var parser_path = "parsers/filters/";
var file_name = "parser.y";
var temp_file_name = "temp_parser.y";
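// Expand the &&REPLACER{file}&& placeholder in parser.y: read the referenced
// file and splice its contents into a temporary grammar (temp_parser.y)
// before lemon-js is run.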
var update_parser_y = function () {
var source_parser_y = fs.readFileSync(parser_path + file_name, "utf8");
var result = /&&.*?REPLACER\{(.*?)\}&&/gm.exec(source_parser_y);
if(result) {
var file_path = result[1];
var process_code = fs.readFileSync(file_path, "utf8");
source_parser_y = source_parser_y.replace(/&&.*?REPLACER\{(.*?)\}&&/gm, process_code);
fs.writeFileSync(parser_path + temp_file_name, source_parser_y);
}
};
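// Beautify the generated temp_parser.js into parser.js and copy the
// temp_parser.out state report to parser.out.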
var post_process_parser = function () {
var out_js = fs.readFileSync(parser_path + "temp_parser.js", "utf8");
out_js = js_beautify(out_js, {indent_size: 4, space_in_empty_paren: true});
fs.writeFileSync(parser_path + "parser.js", out_js);
var temp_parser_out = fs.readFileSync(parser_path + "temp_parser.out", "utf8");
fs.writeFileSync(parser_path + "parser.out", temp_parser_out);
};
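// Rebuild the filter parser: expand the placeholder, run lemon-js with the
// -l flag (suppresses line markers in the output), post-process the result
// and remove the temporary files.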
var start = function () {
update_parser_y();
exec(program_path + " " + parser_path + temp_file_name + " -l", function(err, stdout, stderr) {
err && console.log("ERROR: ", err);
err && process.exit(1);
post_process_parser();
fs.unlinkSync(parser_path + temp_file_name);
fs.unlinkSync(parser_path + "temp_parser.js");
fs.unlinkSync(parser_path + "temp_parser.out");
});
};
start();

223
package-lock.json generated Normal file
View File

@ -0,0 +1,223 @@
{
"name": "lemon-js-generator",
"requires": true,
"lockfileVersion": 1,
"dependencies": {
"@types/node": {
"version": "10.12.18",
"resolved": "https://registry.npmjs.org/@types/node/-/node-10.12.18.tgz",
"integrity": "sha512-fh+pAqt4xRzPfqA6eh3Z2y6fyZavRIumvjhaCL753+TVkGKGhpPeyrJG2JftD0T9q4GF00KjefsQ+PQNDdWQaQ=="
},
"@types/semver": {
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/@types/semver/-/semver-5.5.0.tgz",
"integrity": "sha512-41qEJgBH/TWgo5NFSvBCJ1qkoi3Q6ONSF2avrHq1LVEZfYpdHmj0y9SuTK+u9ZhG1sYQKBL1AWXKyLWP4RaUoQ=="
},
"abbrev": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/abbrev/-/abbrev-1.1.1.tgz",
"integrity": "sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q=="
},
"args-parser": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/args-parser/-/args-parser-1.1.0.tgz",
"integrity": "sha1-YlO/zWlNJ5/mPqr9eNYo0UoF/6k="
},
"balanced-match": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.0.tgz",
"integrity": "sha1-ibTRmasr7kneFk6gK4nORi1xt2c="
},
"brace-expansion": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
"integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
"requires": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
}
},
"commander": {
"version": "2.19.0",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.19.0.tgz",
"integrity": "sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg=="
},
"concat-map": {
"version": "0.0.1",
"resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
"integrity": "sha1-2Klr13/Wjfd5OnMDajug1UBdR3s="
},
"config-chain": {
"version": "1.1.12",
"resolved": "https://registry.npmjs.org/config-chain/-/config-chain-1.1.12.tgz",
"integrity": "sha512-a1eOIcu8+7lUInge4Rpf/n4Krkf3Dd9lqhljRzII1/Zno/kRtUWnznPO3jOKBmTEktkt3fkxisUcivoj0ebzoA==",
"requires": {
"ini": "^1.3.4",
"proto-list": "~1.2.1"
}
},
"editorconfig": {
"version": "0.15.2",
"resolved": "https://registry.npmjs.org/editorconfig/-/editorconfig-0.15.2.tgz",
"integrity": "sha512-GWjSI19PVJAM9IZRGOS+YKI8LN+/sjkSjNyvxL5ucqP9/IqtYNXBaQ/6c/hkPNYQHyOHra2KoXZI/JVpuqwmcQ==",
"requires": {
"@types/node": "^10.11.7",
"@types/semver": "^5.5.0",
"commander": "^2.19.0",
"lru-cache": "^4.1.3",
"semver": "^5.6.0",
"sigmund": "^1.0.1"
}
},
"fs.realpath": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
"integrity": "sha1-FQStJSMVjKpA20onh8sBQRmU6k8="
},
"glob": {
"version": "7.1.3",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.1.3.tgz",
"integrity": "sha512-vcfuiIxogLV4DlGBHIUOwI0IbrJ8HWPc4MU7HzviGeNho/UJDfi6B5p3sHeWIQ0KGIU0Jpxi5ZHxemQfLkkAwQ==",
"requires": {
"fs.realpath": "^1.0.0",
"inflight": "^1.0.4",
"inherits": "2",
"minimatch": "^3.0.4",
"once": "^1.3.0",
"path-is-absolute": "^1.0.0"
}
},
"inflight": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
"integrity": "sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=",
"requires": {
"once": "^1.3.0",
"wrappy": "1"
}
},
"inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4="
},
"ini": {
"version": "1.3.5",
"resolved": "https://registry.npmjs.org/ini/-/ini-1.3.5.tgz",
"integrity": "sha512-RZY5huIKCMRWDUqZlEi72f/lmXKMvuszcMBduliQ3nnWbx9X/ZBQO7DijMEYS9EhHBb2qacRUMtC7svLwe0lcw=="
},
"js-beautify": {
"version": "1.8.9",
"resolved": "https://registry.npmjs.org/js-beautify/-/js-beautify-1.8.9.tgz",
"integrity": "sha512-MwPmLywK9RSX0SPsUJjN7i+RQY9w/yC17Lbrq9ViEefpLRgqAR2BgrMN2AbifkUuhDV8tRauLhLda/9+bE0YQA==",
"requires": {
"config-chain": "^1.1.12",
"editorconfig": "^0.15.2",
"glob": "^7.1.3",
"mkdirp": "~0.5.0",
"nopt": "~4.0.1"
}
},
"lru-cache": {
"version": "4.1.5",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-4.1.5.tgz",
"integrity": "sha512-sWZlbEP2OsHNkXrMl5GYk/jKk70MBng6UU4YI/qGDYbgf6YbP4EvmqISbXCoJiRKs+1bSpFHVgQxvJ17F2li5g==",
"requires": {
"pseudomap": "^1.0.2",
"yallist": "^2.1.2"
}
},
"minimatch": {
"version": "3.0.4",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz",
"integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==",
"requires": {
"brace-expansion": "^1.1.7"
}
},
"minimist": {
"version": "0.0.8",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz",
"integrity": "sha1-hX/Kv8M5fSYluCKCYuhqp6ARsF0="
},
"mkdirp": {
"version": "0.5.1",
"resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz",
"integrity": "sha1-MAV0OOrGz3+MR2fzhkjWaX11yQM=",
"requires": {
"minimist": "0.0.8"
}
},
"nopt": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/nopt/-/nopt-4.0.1.tgz",
"integrity": "sha1-0NRoWv1UFRk8jHUFYC0NF81kR00=",
"requires": {
"abbrev": "1",
"osenv": "^0.1.4"
}
},
"once": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz",
"integrity": "sha1-WDsap3WWHUsROsF9nFC6753Xa9E=",
"requires": {
"wrappy": "1"
}
},
"os-homedir": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/os-homedir/-/os-homedir-1.0.2.tgz",
"integrity": "sha1-/7xJiDNuDoM94MFox+8VISGqf7M="
},
"os-tmpdir": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/os-tmpdir/-/os-tmpdir-1.0.2.tgz",
"integrity": "sha1-u+Z0BseaqFxc/sdm/lc0VV36EnQ="
},
"osenv": {
"version": "0.1.5",
"resolved": "https://registry.npmjs.org/osenv/-/osenv-0.1.5.tgz",
"integrity": "sha512-0CWcCECdMVc2Rw3U5w9ZjqX6ga6ubk1xDVKxtBQPK7wis/0F2r9T6k4ydGYhecl7YUBxBVxhL5oisPsNxAPe2g==",
"requires": {
"os-homedir": "^1.0.0",
"os-tmpdir": "^1.0.0"
}
},
"path-is-absolute": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz",
"integrity": "sha1-F0uSaHNVNP+8es5r9TpanhtcX18="
},
"proto-list": {
"version": "1.2.4",
"resolved": "https://registry.npmjs.org/proto-list/-/proto-list-1.2.4.tgz",
"integrity": "sha1-IS1b/hMYMGpCD2QCuOJv85ZHqEk="
},
"pseudomap": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/pseudomap/-/pseudomap-1.0.2.tgz",
"integrity": "sha1-8FKijacOYYkX7wqKw0wa5aaChrM="
},
"semver": {
"version": "5.6.0",
"resolved": "https://registry.npmjs.org/semver/-/semver-5.6.0.tgz",
"integrity": "sha512-RS9R6R35NYgQn++fkDWaOmqGoj4Ek9gGs+DPxNUZKuwE183xjJroKvyo1IzVFeXvUrvmALy6FWD5xrdJT25gMg=="
},
"sigmund": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/sigmund/-/sigmund-1.0.1.tgz",
"integrity": "sha1-P/IfGYytIXX587eBhT/ZTQ0ZtZA="
},
"wrappy": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8="
},
"yallist": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/yallist/-/yallist-2.1.2.tgz",
"integrity": "sha1-HBH5IY8HYImkfdUS+TxmmaaoHVI="
}
}
}

14
package.json Normal file
View File

@ -0,0 +1,14 @@
{
"name": "lemon-js-generator",
"requires": true,
"lockfileVersion": 1,
"license": "MIT",
"author": {
"name": "chichenkov",
"email": "rolahd@yandex.ru"
},
"dependencies": {
"args-parser": "^1.1.0",
"js-beautify": "^1.8.9"
}
}

1922
parsers/filters/lexer.js Normal file

File diff suppressed because it is too large

1152
parsers/filters/parser.js Normal file

File diff suppressed because it is too large

155
parsers/filters/parser.out Normal file
View File

@ -0,0 +1,155 @@
State 0:
main ::= * expr
string ::= * STRING_LITERAL
id ::= * string
id ::= * ID
eq ::= * id EQ literal
and ::= * expr AND expr
expr ::= * eq
expr ::= * and
expr ::= * LCB expr RCB
STRING_LITERAL shift-reduce 3 string ::= STRING_LITERAL
ID shift-reduce 5 id ::= ID
LCB shift 1
main accept
expr shift 6
string shift-reduce 4 id ::= string
id shift 11
eq shift-reduce 8 expr ::= eq
and shift-reduce 9 expr ::= and
State 1:
string ::= * STRING_LITERAL
id ::= * string
id ::= * ID
eq ::= * id EQ literal
and ::= * expr AND expr
expr ::= * eq
expr ::= * and
expr ::= * LCB expr RCB
expr ::= LCB * expr RCB
STRING_LITERAL shift-reduce 3 string ::= STRING_LITERAL
ID shift-reduce 5 id ::= ID
LCB shift 1
expr shift 5
string shift-reduce 4 id ::= string
id shift 11
eq shift-reduce 8 expr ::= eq
and shift-reduce 9 expr ::= and
State 2:
string ::= * STRING_LITERAL
id ::= * string
id ::= * ID
eq ::= * id EQ literal
and ::= * expr AND expr
and ::= expr AND * expr
expr ::= * eq
expr ::= * and
expr ::= * LCB expr RCB
STRING_LITERAL shift-reduce 3 string ::= STRING_LITERAL
ID shift-reduce 5 id ::= ID
LCB shift 1
expr shift-reduce 7 and ::= expr AND expr
string shift-reduce 4 id ::= string
id shift 11
eq shift-reduce 8 expr ::= eq
and shift-reduce 9 expr ::= and
State 3:
integer ::= * INTEGER_LITERAL
literal ::= * integer
eq ::= id EQ * literal
address_literal ::= * ADDRESS LSB address_literal_content_or_empty RSB
literal ::= * address_literal
INTEGER_LITERAL shift-reduce 1 integer ::= INTEGER_LITERAL
ADDRESS shift 10
integer shift-reduce 2 literal ::= integer
literal shift-reduce 6 eq ::= id EQ literal
address_literal shift-reduce 16 literal ::= address_literal
State 4:
address_literal_content ::= * STRING_LITERAL
address_literal_content ::= * address_literal_content COMMA STRING_LITERAL
address_literal_content_or_empty ::= * address_literal_content
(14) address_literal_content_or_empty ::= *
address_literal ::= ADDRESS LSB * address_literal_content_or_empty RSB
STRING_LITERAL shift-reduce 11 address_literal_content ::= STRING_LITERAL
address_literal_content shift 9
address_literal_content_or_empty shift 7
{default} reduce 14 address_literal_content_or_empty ::=
State 5:
and ::= expr * AND expr
expr ::= LCB expr * RCB
AND shift 2
RCB shift-reduce 10 expr ::= LCB expr RCB
State 6:
(0) main ::= expr *
and ::= expr * AND expr
$ reduce 0 main ::= expr
AND shift 2
State 7:
address_literal ::= ADDRESS LSB address_literal_content_or_empty * RSB
RSB shift-reduce 15 address_literal ::= ADDRESS LSB address_literal_content_or_empty RSB
State 8:
address_literal_content ::= address_literal_content COMMA * STRING_LITERAL
STRING_LITERAL shift-reduce 12 address_literal_content ::= address_literal_content COMMA STRING_LITERAL
State 9:
address_literal_content ::= address_literal_content * COMMA STRING_LITERAL
(13) address_literal_content_or_empty ::= address_literal_content *
COMMA shift 8
{default} reduce 13 address_literal_content_or_empty ::= address_literal_content
State 10:
address_literal ::= ADDRESS * LSB address_literal_content_or_empty RSB
LSB shift 4
State 11:
eq ::= id * EQ literal
EQ shift 3
----------------------------------------------------
Symbols:
0: $:
1: OR
2: AND
3: NOT
4: INTEGER_LITERAL
5: STRING_LITERAL
6: ID
7: EQ
8: LCB
9: RCB
10: COMMA
11: ADDRESS
12: LSB
13: RSB
14: error:
15: main: STRING_LITERAL ID LCB
16: expr: STRING_LITERAL ID LCB
17: integer: INTEGER_LITERAL
18: literal: INTEGER_LITERAL ADDRESS
19: string: STRING_LITERAL
20: id: STRING_LITERAL ID
21: eq: STRING_LITERAL ID
22: and: STRING_LITERAL ID LCB
23: address_literal_content: STRING_LITERAL
24: address_literal_content_or_empty: <lambda> STRING_LITERAL
25: address_literal: ADDRESS

152
parsers/filters/parser.y Normal file
View File

@ -0,0 +1,152 @@
%name Parser
%token_prefix TOKEN_
%left OR.
%left AND.
%right NOT.
%include {
// include something
}
%code {
&&REPLACER{process.js}&&
}
%syntax_error {
console.log("Syntax error");
}
main ::= expr(A) . {
_result.root_node = A
}
integer(A) ::= INTEGER_LITERAL(B) . {
A = new Node({
type: "INTEGER_LITERAL",
lexeme: B.lexeme,
start: B.start,
end: B.end
})
}
literal(A) ::= integer(B) . {
A = new Node({
type: "literal",
children: [B]
})
}
string(A) ::= STRING_LITERAL(B) . {
A = new Node({
type: "STRING_LITERAL",
lexeme: B.lexeme,
start: B.start,
end: B.end
})
}
id(A) ::= string(B) . {
A = new Node({
type: "id",
children: [B]
});
}
id(A) ::= ID(B) . {
A = new Node({
type: "ID",
lexeme: B.lexeme,
start: B.start,
end: B.end
})
}
eq(A) ::= id(B) EQ(C) literal(D) . {
A = new Node({
type: "eq",
children: [
B,
new Node({
type: "EQ",
lexeme: C.lexeme,
start: C.start,
end: C.end
}),
D
]
})
}
and(A) ::= expr(B) AND expr(D) . {
A = new Node({
type: "and",
children: [
B,
D
]
})
}
expr(A) ::= eq(B) . {
A = new Node({
type: "expr",
children: [B]
})
}
expr(A) ::= and(B) . {
A = B;
}
expr(A) ::= LCB expr(C) RCB . {
A = C;
}
address_literal_content(A) ::= STRING_LITERAL(B) . {
A = new Node({
children: [
new Node({
type: "STRING_LITERAL",
lexeme: B.lexeme,
start: B.start,
end: B.end
})
]
});
}
address_literal_content(A) ::= address_literal_content(B) COMMA STRING_LITERAL(C) . {
B.add(new Node({
type: "STRING_LITERAL",
lexeme: C.lexeme,
start: C.start,
end: C.end
}));
A = B;
}
address_literal_content_or_empty(A) ::= address_literal_content(B) . {
A = B;
}
address_literal_content_or_empty(A) ::= . {
A = new Node({
type: "address_literal_content"
});
}
address_literal(A) ::= ADDRESS LSB address_literal_content_or_empty(C) RSB . {
A = new Node({
type: "address_literal",
children: C.children
});
}
literal(A) ::= address_literal(B) . {
A = new Node({
type: "literal",
children: [B]
});
}

370
process.js Normal file
View File

@ -0,0 +1,370 @@
/**
* Created by Aleksey Chichenkov <a.chichenkov@initi.ru> on 1/28/19.
*/
var fs = require("fs");
var Lexer = require('./lexer.js');
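// Note: this file is not executed on its own. main.js splices it into the
// %code section of parsers/filters/parser.y through the
// &&REPLACER{process.js}&& placeholder, so this code ends up at the bottom
// of the generated parser.js, where the Parser class used below is defined.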
var tokens = (function () {
var std = (function () {
var protos = "__protos__";
var keys = "__keys__";
/**
* Return unique data
*
* @param {Object[]} _arr - prototypes of inheritance classes
* @param {Object} _main - prototype of resulting class
*
* @return {Object}
* */
var unique = function (_arr, _main) {
var result = Object.create(null);
var to_remove = [];
for (var i = 0, e = _arr.length; i != e; ++i) {
var item = _arr[i];
for (var key in item) {
if (key in result) {
to_remove.push(key);
continue;
}
result[key] = item[key];
}
if (keys in item) {
for (var ii = 0, ee = item[keys].length; ii != ee; ++ii) {
var key = item[keys][ii];
if (key in result) {
to_remove.push(key);
continue;
}
result[key] = item[key];
}
}
}
for (var i = 0; i != to_remove.length; ++i) {
delete result[to_remove[i]];
}
for (var key in _main) {
result[key] = _main[key];
}
return result;
};
/**
* Create OOP class
*
* @param {Function[]} _constrs - inheritance classes
* @param {Object} _proto - prototype of resulting class
* @param {Object?} _static - static data
*
* @return {Function}
* */
var class_creator = function (_constrs, _proto, _static) {
_constrs = _constrs || [];
_proto = _proto || [];
_static = _static || [];
var constr;
if (_proto && _proto.hasOwnProperty("constructor")) {
constr = _proto.constructor;
delete _proto.constructor;
} else {
constr = function () {
for (var i = 0; i != _constrs.length; ++i) {
_constrs[i].apply(this, arguments);
}
};
}
var proto = Object.create(null);
Object.defineProperty(proto, protos, {
"value": []
});
Object.defineProperty(proto, keys, {
"value": []
});
/************************FOR MEMBERS*******************************/
for (var i = 0, e = _constrs.length; i != e; ++i) {
proto[protos].push(_constrs[i].prototype);
}
var m_un = unique(proto[protos], _proto);
for (var key in m_un) {
proto[keys].push(key);
Object.defineProperty(proto, key, {
"value": m_un[key]
});
}
/************************FOR MEMBERS END***************************/
/************************FOR STATICS*******************************/
var s_un = unique(_constrs, _static);
for (var key in s_un) {
Object.defineProperty(constr, key, {
"value": s_un[key],
"enumerable": true
});
}
/************************FOR STATICS END***************************/
Object.defineProperties(constr, {
"pr": {
"value": proto
},
"prototype": {
"value": proto
}
});
Object.freeze(proto);
Object.freeze(constr);
return constr;
};
/**
* Check if target has prototype
*
* @param {Object} _target - checkable instance
* @param {Object} _proto - possible prototype
*
* */
var check = function (_target, _proto) {
for (var i = 0; i != _target[protos].length; ++i) {
var t_proto = _target[protos][i];
if (t_proto == _proto) {
return true;
}
if (t_proto[protos]) {
if (check(t_proto, _proto))
return true;
}
}
return false;
};
/**
* Check if target is instance of class
*
* @param {Object} _target - checkable instance
* @param {Function} _constr - possible constructor
*
* */
var class_check = function (_target, _constr) {
if (_target instanceof _constr) {
return true;
}
return check(_target, _constr.prototype);
};
return {
class: class_creator,
class_check: class_check
};
})();
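// Option helper: tools.merge copies the given option objects left to right into a
// fresh object, so later arguments override earlier defaults.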
var tools = {
merge: function (_obj) {
var target = Object.create(null);
var i = 0, e = arguments.length;
for (; i != e; ++i) {
var options = arguments[i];
for (var key in options) {
if (options[key] === undefined || target === options[key])
continue;
target[key] = options[key];
}
}
return target;
}
};
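// Base parse-tree node: keeps an ordered list of child nodes.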
var Node = std.class([], {
constructor: function Node(_options) {
var base = tools.merge({
children: []
}, _options);
this.children = base.children;
},
add: function (_n) {
this.children.push(_n);
return this;
}
});
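// Terminal node: wraps a lexer token together with its type, value and source offsets.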
var Lexeme = std.class([Node], {
constructor: function Lexeme(_options) {
var base = tools.merge({
start: -1,
end: -1,
type: null,
value: null
}, _options);
Node.call(this, base);
this.start = base.start;
this.end = base.end;
this.type = base.type;
this.value = base.value;
}
});
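// Base class for non-terminal (rule) nodes.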
var Rule = std.class([Node], {
constructor: function Rule(_options) {
var base = tools.merge({}, _options);
Node.call(this, base);
}
});
var string_literal = std.class([Rule], {
constructor: function string_literal(_options) {
var base = tools.merge({}, _options);
Rule.call(this, base);
}
});
var integer_literal = std.class([Rule], {
constructor: function integer_literal(_options) {
var base = tools.merge({}, _options);
Rule.call(this, base);
}
});
var id = std.class([Rule], {
constructor: function id(_options) {
var base = tools.merge({}, _options);
Rule.call(this, base);
}
});
var literal = std.class([Rule], {
constructor: function literal(_options) {
var base = tools.merge({}, _options);
Rule.call(this, base);
}
});
var eq = std.class([Rule], {
constructor: function eq(_options) {
var base = tools.merge({
id: null,
EQ: null,
literal: null
}, _options);
Rule.call(this, base);
this.id = base.id;
this.EQ = base.EQ;
this.literal = base.literal;
},
set_id: function (_n) {
this.id = _n;
},
set_EQ: function (_n) {
this.EQ = _n;
},
set_literal: function (_n) {
this.literal = _n;
}
});
var and = std.class([Rule], {
constructor: function and(_options) {
var base = tools.merge({
lexpr: null,
AND: null,
rexpr: null
}, _options);
Rule.call(this, base);
this.lexpr = base.lexpr;
this.AND = base.AND;
this.rexpr = base.rexpr;
},
set_lexpr: function (_n) {
this.lexpr = _n;
},
set_AND: function (_n) {
this.AND = _n;
},
set_rexpr: function (_n) {
this.rexpr = _n;
}
});
var expr = std.class([Rule], {
constructor: function expr(_options) {
var base = tools.merge({}, _options);
Rule.call(this, base);
}
});
return {
// terminal
LEXEME: Lexeme,
// non terminal
string_literal: string_literal,
integer_literal: integer_literal,
id: id,
literal: literal,
eq: eq,
and: and,
expr: expr,
}
})();
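// Parse result container; presumably populated with the root node by the grammar's
// accept action (compare tests/out_test_and.json).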
var _result = {};
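// Driver: feed every token produced by the lexer into the parser, then call parse()
// with no arguments to signal end of input.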
var LemonJS = function (_input) {
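// NOTE: Parser is assumed to come from the module that lemon-js generates from the
// grammar; the corresponding require is not shown in this file.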
var parser = new Parser();
var lexer = new Lexer(_input);
var token;
while (token = lexer.next()) {
console.log("PARSE", token.lexeme);
parser.parse(parser["TOKEN_" + token.lexeme], token);
}
parser.parse();
return _result;
};
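// Smoke tests: parse two sample inputs and dump the resulting parse trees into tests/.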
if (!fs.existsSync("tests")) {
fs.mkdirSync("tests");
}
var test_and = LemonJS("abc == 1 and abc1 == 2 and (bbc == 5)");
fs.writeFileSync("tests/out_test_and.json", JSON.stringify(test_and, true, 3));
var test_address = LemonJS('abc == Address ["a", "b", "c"]');
fs.writeFileSync("tests/out_tree_address.json", JSON.stringify(test_address, true, 3));

153
tests/out_test_and.json Normal file

@ -0,0 +1,153 @@
{
"root_node": {
"type": "and",
"children": [
{
"type": "and",
"children": [
{
"type": "expr",
"children": [
{
"type": "eq",
"children": [
{
"type": "ID",
"children": [],
"lexeme": "ID",
"start": 0,
"end": 3
},
{
"type": "EQ",
"children": [],
"lexeme": "EQ",
"start": 4,
"end": 6
},
{
"type": "literal",
"children": [
{
"type": "INTEGER_LITERAL",
"children": [],
"lexeme": "INTEGER_LITERAL",
"start": 7,
"end": 8
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
},
{
"type": "expr",
"children": [
{
"type": "eq",
"children": [
{
"type": "ID",
"children": [],
"lexeme": "ID",
"start": 13,
"end": 17
},
{
"type": "EQ",
"children": [],
"lexeme": "EQ",
"start": 18,
"end": 20
},
{
"type": "literal",
"children": [
{
"type": "INTEGER_LITERAL",
"children": [],
"lexeme": "INTEGER_LITERAL",
"start": 21,
"end": 22
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
},
{
"type": "expr",
"children": [
{
"type": "eq",
"children": [
{
"type": "ID",
"children": [],
"lexeme": "ID",
"start": 28,
"end": 31
},
{
"type": "EQ",
"children": [],
"lexeme": "EQ",
"start": 32,
"end": 34
},
{
"type": "literal",
"children": [
{
"type": "INTEGER_LITERAL",
"children": [],
"lexeme": "INTEGER_LITERAL",
"start": 35,
"end": 36
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
}

69
tests/out_tree_address.json Normal file

@ -0,0 +1,69 @@
{
"root_node": {
"type": "expr",
"children": [
{
"type": "eq",
"children": [
{
"type": "ID",
"children": [],
"lexeme": "ID",
"start": 0,
"end": 3
},
{
"type": "EQ",
"children": [],
"lexeme": "EQ",
"start": 4,
"end": 6
},
{
"type": "literal",
"children": [
{
"type": "address_literal",
"children": [
{
"type": "STRING_LITERAL",
"children": [],
"lexeme": "STRING_LITERAL",
"start": 16,
"end": 19
},
{
"type": "STRING_LITERAL",
"children": [],
"lexeme": "STRING_LITERAL",
"start": 21,
"end": 24
},
{
"type": "STRING_LITERAL",
"children": [],
"lexeme": "STRING_LITERAL",
"start": 26,
"end": 29
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
],
"lexeme": null,
"start": 0,
"end": 0
}
}