<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>characters to tokens | Student Projects</title>
	<atom:link href="https://studentprojects.in/tag/characters-to-tokens/feed/" rel="self" type="application/rss+xml" />
	<link>https://studentprojects.in</link>
	<description>Microcontroller projects, Circuit Diagrams, Project Ideas</description>
	<lastBuildDate>Thu, 20 Nov 2008 07:37:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.7</generator>
	<item>
		<title>Lexical Analyzer</title>
		<link>https://studentprojects.in/software-development/software-projects/software/lexical-analyzer/</link>
					<comments>https://studentprojects.in/software-development/software-projects/software/lexical-analyzer/#comments</comments>
		
		<dc:creator><![CDATA[Editorial Team]]></dc:creator>
		<pubDate>Thu, 20 Nov 2008 07:37:32 +0000</pubDate>
				<category><![CDATA[Software Engineering Projects]]></category>
		<category><![CDATA[Lexical Analyzer]]></category>
		<category><![CDATA[characters to tokens]]></category>
		<category><![CDATA[C codes]]></category>
		<category><![CDATA[Algorithms]]></category>
		<guid isPermaLink="false">http://studentprojects.in/?p=223</guid>

					<description><![CDATA[<p>A lexical analyzer converts a stream of input characters into a stream of tokens. The different tokens that our lexical analyzer identifies are as follows: KEYWORDS: int, char, float, double, if, for, while, else, switch, struct, printf, scanf, case, break, return, typedef, void IDENTIFIERS: main, fopen, getch etc NUMBERS: positive and negative integers, positive and negative floating</p>
<p>The post <a href="https://studentprojects.in/software-development/software-projects/software/lexical-analyzer/">Lexical Analyzer</a> first appeared on <a href="https://studentprojects.in">Student Projects</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>A lexical analyzer converts a stream of input characters into a stream of tokens. The different tokens that our lexical analyzer identifies are as follows:</p>
<p>KEYWORDS: int, char, float, double, if, for, while, else, switch, struct, printf, scanf, case, break, return, typedef, void</p>
<p>IDENTIFIERS: main, fopen, getch, etc.</p>
<p>NUMBERS: positive and negative integers, positive and negative floating point numbers.</p>
<p>OPERATORS: +, ++, -, --, ||, *, ?, /, &gt;, &gt;=, &lt;, &lt;=, =, ==, &amp;, &amp;&amp;.</p>
<p>BRACKETS: [ ], { }, ( ).</p>
<p>STRINGS: a set of characters enclosed within double quotes.</p>
<p>COMMENT LINES: single-line and multi-line comments, which are ignored.</p>
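<p>As an illustrative sketch (not part of the original project's code), the token categories listed above could be represented in C roughly as follows; the type and helper names are assumptions:</p>

```c
#include <string.h>

/* Token classes matching the categories listed above. */
typedef enum {
    TOK_KEYWORD,
    TOK_IDENTIFIER,
    TOK_NUMBER,
    TOK_OPERATOR,
    TOK_BRACKET,
    TOK_STRING,
    TOK_COMMENT   /* recognized only so it can be discarded */
} TokenClass;

/* A token pairs its class with the lexeme text. */
typedef struct {
    TokenClass cls;
    char lexeme[64];
} Token;

/* Helper: build a token from a class and a lexeme string. */
Token make_token(TokenClass cls, const char *lexeme) {
    Token t;
    t.cls = cls;
    strncpy(t.lexeme, lexeme, sizeof t.lexeme - 1);
    t.lexeme[sizeof t.lexeme - 1] = '\0';
    return t;
}
```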
<p>For tokenizing into identifiers and keywords, we incorporate a symbol table that initially contains the predefined keywords. Tokens are read from an input file. If the token encountered is an identifier or a keyword, the lexical analyzer looks it up in the symbol table to check whether it already exists. If an entry exists, we proceed to the next token; if not, that token, along with its token value, is written into the symbol table. All other tokens are written directly to an output file.</p>
<p>The output file will consist of all the tokens present in our input file along with their respective token values.</p>
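<p>The symbol-table scheme described above might look roughly like this in C. This is a minimal sketch assuming a fixed-size array; the names <code>st_lookup</code> and <code>st_insert</code> are illustrative, not taken from the project:</p>

```c
#include <string.h>

#define MAX_SYMBOLS 256

/* One symbol-table entry: the lexeme and its integer token value. */
struct Symbol { char name[32]; int value; };

static struct Symbol table[MAX_SYMBOLS];
static int table_size = 0;

/* Return the token value recorded for name, or -1 if absent. */
int st_lookup(const char *name) {
    for (int i = 0; i < table_size; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].value;
    return -1;
}

/* Look name up; if absent, enter it with the next free token value.
   Either way, return the token value (-1 if the table is full). */
int st_insert(const char *name) {
    int v = st_lookup(name);
    if (v != -1) return v;
    if (table_size >= MAX_SYMBOLS) return -1;
    strncpy(table[table_size].name, name, sizeof table[0].name - 1);
    table[table_size].name[sizeof table[0].name - 1] = '\0';
    table[table_size].value = table_size;
    return table[table_size++].value;
}
```

<p>In the real project the table would be preloaded with the keyword strings, so a lookup that succeeds against one of those entries marks the lexeme as a keyword rather than an identifier.</p>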
<p><strong>INTRODUCTION:</strong></p>
<p>Lexical analysis involves scanning the program to be compiled and recognizing the tokens that make up the source statements. Scanners, or lexical analyzers, are usually designed to recognize keywords, operators, and identifiers, as well as integers, floating-point numbers, character strings, and other similar items that are written as part of the source program. The exact set of tokens to be recognized depends, of course, on the programming language being compiled.</p>
<p>A sequence of input characters that comprises a single token is called a lexeme. A lexical analyzer can insulate a parser from the lexeme representation of tokens. The following are functions that lexical analyzers perform.</p>
<p><strong>Removal of white space and comments:</strong></p>
<p>Many languages allow “white space” to appear between tokens. Comments can likewise be ignored by the parser and translator, so they may also be treated as white space. If white space is eliminated by the lexical analyzer, the parser will never have to consider it.</p>
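<p>Skipping white space and comments before the parser ever sees them can be sketched as below. This assumes C-style single-line and multi-line comments and operates on an in-memory string for illustration; the function name is not from the project:</p>

```c
#include <ctype.h>

/* Advance *pp past any run of white space and C comments, so the
   caller (the tokenizer proper) resumes at the next real character. */
void skip_blanks_and_comments(const char **pp) {
    const char *p = *pp;
    for (;;) {
        while (isspace((unsigned char)*p)) p++;       /* white space */
        if (p[0] == '/' && p[1] == '/') {             /* single-line */
            while (*p && *p != '\n') p++;
        } else if (p[0] == '/' && p[1] == '*') {      /* multi-line  */
            p += 2;
            while (*p && !(p[0] == '*' && p[1] == '/')) p++;
            if (*p) p += 2;                           /* skip closing star-slash */
        } else {
            break;                                    /* a real token starts here */
        }
    }
    *pp = p;
}
```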
<p><strong>Constants:</strong></p>
<p>An integer constant is a sequence of digits. Integer constants can be allowed by adding productions to the grammar for expressions, or by creating a token for such constants. The job of collecting digits into integers is generally given to a lexical analyzer, because numbers can be treated as single units during translation. The lexical analyzer passes both the token and its attribute to the parser.</p>
<p><strong>Recognizing identifiers and keywords:</strong></p>
<p>Languages use identifiers as names of variables, arrays, and functions. A grammar for a language often treats an identifier as a token. Many languages use fixed character strings such as begin, end, if, and so on, as punctuation marks or to identify certain constructs. These character strings, called keywords, generally satisfy the rules for forming identifiers.</p>
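<p>Collecting an identifier-shaped lexeme and then checking it against the fixed keyword strings might be sketched as follows. The keyword list here is a subset of the one given earlier, and the function names are illustrative:</p>

```c
#include <ctype.h>
#include <string.h>

/* Keywords assumed by this sketch (a subset of the list above). */
static const char *keywords[] = {
    "int", "char", "float", "double", "if", "for", "while", "else",
    "switch", "struct", "case", "break", "return", "typedef", "void"
};

/* Return 1 if word is a keyword, 0 if it is an ordinary identifier. */
int is_keyword(const char *word) {
    for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++)
        if (strcmp(keywords[i], word) == 0)
            return 1;
    return 0;
}

/* Collect an identifier-shaped lexeme (letter or '_' followed by
   letters, digits, '_') from *pp into buf, advancing the pointer.
   Returns the length collected, or 0 if *pp does not start one. */
int collect_word(const char **pp, char *buf, int cap) {
    const char *p = *pp;
    int n = 0;
    if (!(isalpha((unsigned char)*p) || *p == '_')) return 0;
    while ((isalnum((unsigned char)*p) || *p == '_') && n < cap - 1)
        buf[n++] = *p++;
    buf[n] = '\0';
    *pp = p;
    return n;
}
```

<p>Because keywords satisfy the rules for forming identifiers, both are collected by the same loop; only the table check afterwards tells them apart.</p>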
<p><strong>SYSTEM ANALYSIS:</strong></p>
<p>The operating system used is MS-DOS.</p>
<p><strong>MS-DOS: </strong>MS-DOS is a single-user, single-process, single-processor operating system. Because the device-dependent code is confined to one layer, porting MS-DOS has theoretically been reduced to writing the BIOS code for the new hardware. At the command level it provides a hierarchical file system, input/output redirection, pipes, and filters. User-written commands can be invoked in the same way as the standard system commands, giving the appearance that the basic system functionality has been extended.</p>
<p>Being a single user system, it provides only file protection and access control. The disk space is allocated in terms of a cluster of consecutive sectors. The command language of MS-DOS has been designed to allow the user to interact directly with the operating system using a CRT terminal. The principal use of the command language is to initiate the execution of commands and programs. A user program can be executed under MS-DOS by simply typing the name of the executable file of the user program at the DOS command prompt.</p>
<p>The programming language used is C.</p>
<p><strong>SYSTEM DESIGN:</strong></p>
<p><strong>Process:</strong></p>
<p>The lexical analyzer is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis. This interaction is summarized schematically in fig. a.</p>
<figure id="attachment_224" aria-describedby="caption-attachment-224" style="width: 486px" class="wp-caption aligncenter"><img decoding="async" class="size-full wp-image-224" title="fig a. Process of lexical analyzer " src="https://studentprojects.in/wp-content/uploads/2008/11/block_1.gif" alt="fig a. Process of lexical analyzer " width="486" height="179" /><figcaption id="caption-attachment-224" class="wp-caption-text">fig a. Process of lexical analyzer </figcaption></figure>
<p>Upon receiving a “get next token” command from the parser, the lexical analyzer reads the input characters until it can identify the next token.</p>
<p>Sometimes, lexical analyzers are divided into a cascade of two phases: the first called “scanning” and the second “lexical analysis”.</p>
<p>The scanner is responsible for doing simple tasks, while the lexical analyzer proper does the more complex operations.</p>
<p>The lexical analyzer we have designed takes its input from an input file. It reads one character at a time, and continues reading until the end of the file is reached. It recognizes valid identifiers and keywords and specifies the token values of the keywords.</p>
<p>It also identifies header files, #define statements, numbers, special characters, and various relational and logical operators; it ignores white space and comments. It prints the output to a separate file, specifying the line number.</p>
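<p>The character-at-a-time driver described above could be sketched like this. It is a deliberately simplified illustration, not the project's actual code: operators, strings, and comments are folded into a catch-all “special character” case, and only the line number and a rough classification are written out:</p>

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Read one character at a time from `in` until EOF, classify runs of
   letters/digits, and write each lexeme with its line number to `out`. */
void scan_file(FILE *in, FILE *out) {
    int c, line = 1;
    while ((c = fgetc(in)) != EOF) {
        if (c == '\n') { line++; continue; }
        if (isspace(c)) continue;                    /* ignore blanks */
        if (isalpha(c) || c == '_') {                /* identifier/keyword */
            fputc(c, out);
            while ((c = fgetc(in)) != EOF && (isalnum(c) || c == '_'))
                fputc(c, out);
            if (c != EOF) ungetc(c, in);             /* push back the stopper */
            fprintf(out, "\tidentifier or keyword, line %d\n", line);
        } else if (isdigit(c)) {                     /* number */
            fputc(c, out);
            while ((c = fgetc(in)) != EOF && isdigit(c))
                fputc(c, out);
            if (c != EOF) ungetc(c, in);
            fprintf(out, "\tnumber, line %d\n", line);
        } else {                                     /* everything else */
            fprintf(out, "%c\tspecial character, line %d\n", c, line);
        }
    }
}
```

<p>The <code>ungetc</code> calls implement the one-character pushback the design requires: the character that ends a lexeme is returned to the stream so the next iteration can classify it in turn.</p>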
<p><strong>BLOCK DIAGRAM:</strong></p>
<figure id="attachment_225" aria-describedby="caption-attachment-225" style="width: 450px" class="wp-caption aligncenter"><img decoding="async" loading="lazy" class="size-full wp-image-225" title="fig b. Block diagram of the project" src="https://studentprojects.in/wp-content/uploads/2008/11/block_2.gif" alt="fig b. Block diagram of the project" width="450" height="198" /><figcaption id="caption-attachment-225" class="wp-caption-text">fig b. Block diagram of the project</figcaption></figure><p>The post <a href="https://studentprojects.in/software-development/software-projects/software/lexical-analyzer/">Lexical Analyzer</a> first appeared on <a href="https://studentprojects.in">Student Projects</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://studentprojects.in/software-development/software-projects/software/lexical-analyzer/feed/</wfw:commentRss>
			<slash:comments>27</slash:comments>
		
		
			</item>
	</channel>
</rss>
