What do you understand by L-attributed definition? Give example. Describe with diagram the working process of Lexical Analyzer. Describe LR parsing with block diagram.

Ans

L-attributes: In a syntax-directed definition, each grammar symbol has one or more attributes associated with it, and semantic rules compute their values.

A→BCD.

A.s=f(B.s, C.s, D.s).

A gets its value from its children. Such an attribute is called a synthesized attribute, because its value is synthesized from the attributes of the children.

In A→BCD, if C gets its value from its parent A or from its left sibling B, the attribute is called an inherited attribute: C.i = A.i or C.i = B.s.

An L-attributed SDT (Syntax-Directed Translation):

Uses both inherited and synthesized attributes.

Semantic actions can be placed anywhere on the RHS: A→{}BC | D{}E | FG{}

Attributes are evaluated in a single depth-first, left-to-right traversal of the parse tree.

Ex: A→BC {B.i = A.i}

Here B's attribute is inherited from its parent only, which an L-attributed definition allows. So, it is L-attributed.
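The left-to-right evaluation order can be sketched in code. Below is a minimal illustration (the tree shape, attribute names, and semantic rules like A.s = C.s are made up for the example, not taken from any fixed grammar): inherited attributes are set just before descending into a child, and synthesized attributes are set after all children return, in one depth-first pass.

```python
# A minimal sketch of L-attributed evaluation for the production A -> B C,
# with illustrative rules: B.i = A.i, C.i = B.s, A.s = C.s.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.i = None   # inherited attribute
        self.s = None   # synthesized attribute

def evaluate(node):
    """One left-to-right depth-first pass: inherited attributes are set
    before descending into a child, synthesized attributes afterwards."""
    if node.label == "A":
        b, c = node.children
        b.i = node.i          # B.i = A.i  (inherited, from parent)
        evaluate(b)
        c.i = b.s             # C.i = B.s  (inherited, from left sibling)
        evaluate(c)
        node.s = c.s          # A.s = C.s  (synthesized, from children)
    elif node.label in ("B", "C"):
        node.s = node.i + 1   # leaf rule, for illustration only

root = Node("A", [Node("B"), Node("C")])
root.i = 0
evaluate(root)
print(root.s)  # -> 2: B.i=0, B.s=1, C.i=1, C.s=2, A.s=2
```

Note that C.i uses B.s, which is already available at that point in the traversal; this is exactly why L-attributed definitions restrict inherited attributes to the parent and left siblings.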

 

Lexical analysis reads characters from left to right and groups them into tokens. A simple way to build a lexical analyzer is to construct a transition diagram that illustrates the structure of the tokens of the source program. We can also produce a lexical analyzer automatically by specifying the lexeme patterns to a lexical-analyzer generator and compiling those patterns into code that functions as a lexical analyzer. This approach makes it easier to modify a lexical analyzer, since we have only to rewrite the affected patterns, not the entire program. Three general approaches for implementing a lexical analyzer are:

  • Use a lexical-analyzer generator (LEX) from a regular-expression-based specification that provides routines for reading and buffering the input.
  • Write the lexical analyzer in a conventional language, using its I/O facilities to read input.
  • Write the lexical analyzer in assembly language and explicitly manage the reading of input.
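The second approach can be sketched briefly. Below is a hand-written lexical analyzer in a conventional language, using Python's re module in place of a LEX-generated scanner; the token names and patterns here are illustrative, not a complete specification.

```python
# A minimal hand-written lexer: a LEX-style list of (token, pattern) pairs
# compiled into one master regular expression.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"[ \t]+"),      # whitespace separates tokens, not emitted
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(source):
    """Scan left to right, grouping characters into (token, lexeme) pairs."""
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = y + 42")))
# -> [('ID', 'x'), ('OP', '='), ('ID', 'y'), ('OP', '+'), ('NUMBER', '42')]
```

Changing the token set only means editing TOKEN_SPEC, which mirrors the point above about rewriting affected patterns rather than the whole program.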

The speed of lexical analysis is a concern in compiler design, since only this phase reads the source program character by character.

Since the lexical analyzer is the part of the compiler that reads the source text, it may perform certain other tasks besides identification of lexemes.

  • One such task is stripping out comments and whitespace (blank, newline, tab, and perhaps other characters that are used to separate tokens in the input).
  • Another task is correlating error messages generated by the compiler with the source program. For instance, the lexical analyzer may keep track of the number of newline characters seen, so it can associate a line number with each error message. In some compilers, the lexical analyzer makes a copy of the source program with the error messages inserted at the appropriate positions.
  • If the source program uses a macro-preprocessor, the expansion of macros may also be performed by the lexical analyzer. Lexical analysis is the first phase of a compiler: it reads source code as input and produces a sequence of tokens as output, which the parser uses during syntax analysis. Upon receiving a ‘getNextToken’ request from the parser, the lexical analyzer searches for the next token.
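The bookkeeping tasks above can be combined into one sketch: a scanner that strips comments and whitespace while counting newlines, so every token (and any error message) can be tagged with a line number, and that exposes a get_next_token method mirroring the parser's ‘getNextToken’ request. All names and patterns here are assumptions for illustration.

```python
# A sketch of a lexer that strips comments/whitespace and tracks line
# numbers for error reporting; get_next_token serves one token per call.
import re

PATTERNS = [
    ("COMMENT", r"//[^\n]*"),    # stripped, never returned
    ("NEWLINE", r"\n"),
    ("SKIP",    r"[ \t]+"),
    ("NUMBER",  r"\d+"),
    ("ID",      r"[A-Za-z_]\w*"),
]
SCAN = re.compile("|".join(f"(?P<{n}>{p})" for n, p in PATTERNS))

class Lexer:
    def __init__(self, src):
        self.matches = SCAN.finditer(src)
        self.line = 1                    # current line number

    def get_next_token(self):
        """Return the next (token, lexeme, line) triple, or None at EOF."""
        for m in self.matches:
            kind = m.lastgroup
            if kind == "NEWLINE":
                self.line += 1           # correlate tokens with lines
            elif kind not in ("SKIP", "COMMENT"):
                return (kind, m.group(), self.line)
        return None

lex = Lexer("a // note\n42")
print(lex.get_next_token())  # -> ('ID', 'a', 1)
print(lex.get_next_token())  # -> ('NUMBER', '42', 2)
print(lex.get_next_token())  # -> None
```

The comment and the newline between the two tokens are consumed silently, but the newline still advances the line counter, which is exactly the correlation with error messages described above.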
