This section explains how to use the parts of the LALRLib package: the library itself, the GUI, the parse stepper, and the command-line tool.
These are all available at bricologica.com or at github.com. They are licensed under GPLv3. The build files are for Microsoft Visual Studio Express C#.
This section lists the deliverables for the LALRLib package.
LALRLib.dll contains the code described below.
LALRGUI is a GUI application that uses the Designer classes to let the user run a grammar.
The parse stepper is a GUI application that lets a user step through each parse step of a grammar and inspect the parse and push stacks. The user can move both forward and backward.
lalrgen.exe is a command-line application that generates the serialized files produced by the generators. This lets a user skip the time-consuming process of generating the tables. It can also generate the code to build the table as a compile-time object.
LALRLib is the workhorse of the package. It contains everything needed to create LALR scanners and parsers.
The top level is LALRManager, which aggregates the components needed to parse a file: the grammar, scanner, converter, reducer, and parser. It is an abstract class, so the user must create a class with their more specific operations. An example is the Designer, which supports a user in designing a new grammar, converter, and reducer.
The Manager Template
The top level of the library is the LALRManager. Every component is publicly available, but the LALRManager is a convenient way to manage the whole process.
The three generics are user defined objects derived from TokenCreatorBase, ConverterBase, and ReducerBase, respectively. There is a set of Designer classes that can be used here. Applications that use only the Designer versions may simply use the Designer class which derives from the LALRManager.
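As a sketch of how the generics fit together (the class and type names below are illustrative, not taken from the library):

```csharp
// Hypothetical sketch: a manager supplying the three user-defined components.
// MyTokenCreator, MyConverter, and MyReducer are illustrative names derived
// from TokenCreatorBase, ConverterBase, and ReducerBase respectively.
public class MyManager : LALRManager<MyTokenCreator, MyConverter, MyReducer>
{
    // Application-specific operations go here, e.g. convenience methods
    // that run a full scan/convert/parse pass over a source file.
}
```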
The token creator contains a Process function that is called for each terminal type to process the token. Process takes an IContext object, the terminal ID, and a ScanStateManager object, and uses them to process each incoming token. The Process function may do any of the following:
- Read the matched string to calculate literals
- Alter the scan state
- Null out the token so that no token is returned
- Alter the token (e.g. turn an 'int' into a 'long' based on the length of the string)
- Use or update the context
- Determine whether certain keywords are actually identifiers
- Determine whether certain identifiers are actually keywords
- Anything else that needs to be done
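A hedged sketch of a Process override covering a few of the behaviors above. The signature is inferred from the description; WhitespaceId, CommentId, KeywordId, MyContext, IdentifierToken, and the Text property are all illustrative assumptions:

```csharp
// Illustrative only: signature inferred from the text above.
public override TokenBase Process(IContext context, int terminalId,
                                  ScanStateManager scanState)
{
    // Null out whitespace and comments: no token reaches the parser.
    if (terminalId == WhitespaceId || terminalId == CommentId)
        return null;

    // Use context to decide that a keyword is actually an identifier.
    if (terminalId == KeywordId && context is MyContext ctx
        && ctx.IsDeclaredIdentifier(Text))
        return new IdentifierToken(Text);

    // Default: return the token itself unchanged.
    return this;
}
```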
The converter is a module with the same interface as the scanner. It sits between the scanner and the parser and contains modules that watch the token stream for the start of patterns that may require a conversion. When a pattern starts, the converter pulls in tokens until the pattern is completed or it is determined that the tokens do not match the expected pattern.
If a match is found, the converter can manipulate the tokens it examined, doing substitutions or adding delimiters. The converter then delivers tokens from this token queue.
The reducer is an object with one function: Reduce. Reduce takes the rule ID and an array of symbols to reduce by that rule.
4.3.2 Manager Elements
The grammar object holds the terminals, non-terminals, and rules. It is configurable from a .lang file or a serializable GrammarInfo object, but does not require any programming.
The scanner object holds the scanner. It can be used as a standalone scanner or will automatically feed the converter or parser. It is configurable by either an IGrammar or by the serializable ScannerInfo, which holds the ScanState-indexed ScanTables.
The parser object holds the parser. It can be used as a standalone parser or will automatically pull tokens from the scanner, or from the converter if one exists. It is configurable by either an IGrammar or by the serializable ParseTable.
User Written Manager Elements
A user-derived token creator is needed to create scanned tokens. A token's type is determined from its ITerminal and may be a user-defined TokenBase-derived class. The token creator simply contains an array of Factory objects that create the tokens, indexed by ITerminal.Id (IToken.Symbol.Id). This has to be a derived class because the Factory is a compile-time operation.
The factory creates a token of type T from the generic constructor. T must be derived from TokenBase. There is an abstract Process function that takes all the information from the scanner and initializes the token. Process returns a token. Usually it simply returns itself, but it can also return null if no token should be passed to the parser, or a different token, such as a more specific token determined from the scanned text.
Converters intercept tokens on the way from the scanner to the parser and do any manipulations that need to be done to support parsing. This is part of the design of the grammar, so this will require user created code. It is possible to have multiple converters cascaded together with a higher level converter that contains the cascaded converters.
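A minimal converter sketch, assuming a ConverterBase with a Next-style delivery method and a Source that refers to the scanner or the previous converter in a cascade. All member names here are assumptions, not the library's actual API:

```csharp
// Illustrative sketch: watches for a token that may begin a pattern, buffers
// tokens until the pattern is decided, rewrites them on a match, and then
// delivers tokens from its queue. The pattern predicates are grammar-specific.
public class AngleBracketConverter : ConverterBase
{
    private readonly Queue<IToken> _queue = new Queue<IToken>();

    public override IToken Next(ContextBase context)
    {
        if (_queue.Count == 0)
        {
            IToken tok = Source.Next(context);
            if (!IsPatternStart(tok))
                return tok;                     // not our pattern; pass through

            // Possible pattern start: buffer tokens until decided either way.
            _queue.Enqueue(tok);
            while (!PatternDecided(_queue))
                _queue.Enqueue(Source.Next(context));

            if (PatternMatched(_queue))
                RewriteTokens(_queue);          // substitutions or delimiters
        }
        return _queue.Dequeue();
    }

    // Grammar-specific details stubbed out for the sketch.
    private bool IsPatternStart(IToken t) { return false; }
    private bool PatternDecided(Queue<IToken> q) { return true; }
    private bool PatternMatched(Queue<IToken> q) { return false; }
    private void RewriteTokens(Queue<IToken> q) { }
}
```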
The user-derived reducer is the object that reduces a rule using tree objects that may be user-defined TokenBase- or NodeBase-derived classes, and returns a node that may be a user-defined NodeBase. There is just one function; it takes an IRule, the right-hand side as an array, and the context object.
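A sketch of a Reduce implementation for a hypothetical expression grammar. The exact parameter types, the rule-ID constants, and the node classes are illustrative assumptions based on the description above:

```csharp
// Illustrative only: builds an AddNode for a rule like  Expr: Expr '+' Term.
public override NodeBase Reduce(IRule rule, ITreeObject[] rhs, ContextBase context)
{
    switch (rule.Id)
    {
        case AddRuleId:
            // rhs[0] and rhs[2] are the operands; rhs[1] is the '+' token.
            return new AddNode(rhs[0], rhs[2]);
        case PassThroughRuleId:
            // Single-symbol rules can simply forward the child.
            return new PassNode(rhs[0]);
        default:
            // Fallback: a generic node holding the rule and its children.
            return new GenericNode(rule, rhs);
    }
}
```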
The GrammarGenerator is exposed by the Designer. It takes lines that represent a grammar and converts them to a GrammarInfo.
GrammarInfo GrammarGenerator.Generate(String lines);
GrammarInfo is a serializable class that results from a GrammarGenerator and is sufficient to initialize a Grammar object.
The ScannerGenerator is exposed by the Designer. It takes a Grammar and calculates the ScannerInfo which is a ScanState indexed set of ScanTables.
ScannerInfo ScannerGenerator.Generate(IGrammar grammar);
ScannerInfo is a serializable class that results from the ScannerGenerator and is sufficient to initialize a Scanner object.
Parser Generator
The ParserGenerator is exposed by the Designer. It takes a Grammar and calculates the ParseTable.
ParseTable ParserGenerator.Generate(IGrammar grammar);
ParseTable is a serializable class that results from ParserGenerator and is sufficient to initialize a Parser object. Using a serialized version of the class can be significantly faster than generating the ParseTable from a Grammar object.
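Putting the three generators together, a sketch of the full pipeline. The Grammar constructor and the file handling are assumptions; the generator signatures follow the descriptions above:

```csharp
// Sketch of the full table-generation pipeline. Generating the ParseTable is
// the slow step, so its serialized form is what lalrgen.exe saves for reuse.
String lines = File.ReadAllText("MyLanguage.lang");

GrammarInfo grammarInfo = GrammarGenerator.Generate(lines);
Grammar grammar = new Grammar(grammarInfo);     // assumed constructor

ScannerInfo scannerInfo = ScannerGenerator.Generate(grammar);
ParseTable parseTable = ParserGenerator.Generate(grammar);

// Serialize parseTable here; later runs deserialize it instead of
// regenerating, which can be significantly faster.
```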
User tokens are derived from TokenBase. TokenBase is typically the root of a gradually derived token class tree. Extra methods and properties can be added to aid in reduction and in whatever purpose the parse tree will serve (literal data, code generation, search and manipulation).
User nodes are derived from NodeBase. NodeBase is typically the root of a gradually derived node class tree. Extra methods and properties can be added to aid in reduction and in whatever purpose the parse tree will serve (code generation, search and manipulation).
The DesignerConverter takes a DLL name and the name of a class in that DLL and creates an instance of that class in an AppDomain. Loading through an AppDomain allows the DLL to be unloaded when the user wants to recompile, supporting iterative grammar design and troubleshooting.
How it Works
A LiteToken contains only the UniqueId, the Abbrev, and the SymbolId. This makes for fast transport across the AppDomain boundary, but if the converters are to be used in the Designer, they can only use these three pieces of information to make decisions.
The host creates the AppDomain and loads the client. It then manages communication to the client, which actually controls the ConverterBase-derived class in the loaded DLL.
bool Load(String path, String clientTypeName, String[] abbrevArray);
The Load function loads the DLL at 'path' and creates an instance of the class whose fully qualified type name is clientTypeName. The array of abbreviations is how the DLL ties each abbreviation to its SymbolId.
The Unload function unloads the AppDomain, allowing the DLL to be recompiled.
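A hedged usage sketch of the host side. The host class name, the Unload signature, and the paths are assumptions for illustration:

```csharp
// Illustrative: load a freshly compiled converter DLL, use it, then unload
// the AppDomain so the DLL can be recompiled without restarting the Designer.
var host = new ConverterHost();                  // assumed host class name
bool ok = host.Load(@"C:\build\MyConverter.dll",
                    "MyNamespace.MyConverter",   // fully qualified type name
                    abbrevArray);                // ties abbrevs to SymbolIds

if (ok)
{
    // ... run the Designer against the loaded converter ...
}

host.Unload();   // AppDomain torn down; the DLL file is no longer locked
```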
The client is the MarshalByRefObject derived object that straddles the AppDomain.
When deserializing, it is necessary to pass the LALRSerializationContext to the Deserialize function. Some objects need temporary stores on this object to reconstruct themselves. The user is not required to add anything to it, though they may if they plan on building a larger serializer.
Both token creation and reduction may make use of a context object supplied by the user. This is optional, but any context object passed to IParser.Parse(…) or ITokenizable.Next(…) should be derived from ContextBase.
Grammar by File
Scanner by Grammar
Plug in Token Stream Converter
Parser by Grammar
Interface User Controls
Terminals User Control
The terminals control takes the IGrammar.TerminalArray as input and displays information about the terminals: the id, name, abbreviation, regular expression, precedence, and associativity.
NonTerminals User Control
The nonterminals control takes the IGrammar.NonTerminalArray as input and displays information about the non-terminals: the id, name, and abbreviation.
Rules User Control
The rules control takes IGrammar.RuleArray as input and displays the id, left hand side, right hand side abbreviations, and precedence.
CFSM User Control
The CFSM user control has three elements: a parse stack that contains a series of states, a configuration section, and a transition section. When the user double-clicks a reducible configuration, a number of parse states equal to the number of rhs elements is popped off and the new state is shown. If a transition is double-clicked, the new state is placed on the parse stack.
Symbol Stack/Goal Object User Control
The symbol stack and goal object user control shows either the goal object after a successful parse or the shift stack after a failure. The inputs are GoalObject and ShiftStack.
GUI Interactions
The control presents the goal object or shift stack as a tree view. The inputs are set with:
void SetGoalObject(ITreeObject goalObject)
void SetShiftStack(ITreeObject shiftStack, int parseStack)
ShiftLevel is an index into the ShiftStack identifying which level the user has double-clicked.
When the user double-clicks a CFSM node in a shift stack tree, this event fires so the CFSM control can show the corresponding CFSM state.
Token Stream Converter Dialog
4.5 Designing a Grammar
Lang file format
Configuration for code generation
Non-empty lines after a line containing "grammar usings" will be placed into the generated code files after the default usings, which are Com.Bricologica.Types and Com.Bricologica.Designer.
Grammar namespace ns
Sets the namespace that all generated code will be in.
Non-empty lines following "scanstates" will be scanstates. The first one is the default scanstate.
Grammar tokenbase tb
Sets the default token base class for when the base class is not specified after : in the Terminals section.
Grammar nodebase nb
Sets the default node base class for when the base class is not specified after : in the NonTerminals section.
Grammar module md
Sets the module name which is prepended to elements in generated code.
Grammar match mt
Sets a tokenbase that is a match-only token base. If a terminal is derived from this class, IToken.Process(…) will still run, but no token will be generated by the scanner.
Configuration for grammar
Non-empty lines following Terminals specify terminals. The format is:
name abbrev /regex/ [ScanState] :baseclass [LRN]:[0-9]+
where each part after name is optional.
If abbrev is not included, name is used. If regex is not included, name is converted to a regex. If ScanState is not included, the default scan state is used. If baseclass is not included, the default tokenbase is used.
Non-empty lines following NonTerminals specify non-terminals. The format is:
Name abbrev :baseclass [LRN]:[0-9]+
Non-empty lines following Rules specify rules. The format is:
Lhs ([0-9]+): rhs…
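A small illustrative .lang fragment combining the directives above. The exact casing of the directive lines, the whitespace rules, and the use of abbreviations on rule right-hand sides are assumptions; the terminal and rule names are made up for the example:

```
grammar namespace Com.Example.Calc
grammar tokenbase CalcToken
grammar nodebase CalcNode

scanstates
Default

Terminals
plus   +    /\+/              L:1
times  *    /\*/              L:2
number num  /[0-9]+/  :NumberToken

NonTerminals
Expr e

Rules
e: e + e
e: e * e
e: num
```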
Parse Results
On Success
On Error