In linguistics, lexical-syntactical analysis involves separating a text's words and analyzing each for its meaning in order to determine its literal usage. For example, linguists may consider whether the word "good" in a passage functions as an adjective or a noun. They also analyze the words to determine whether they are used figuratively or, in the case of certain religious texts, apocalyptically.
Linguists then place the words they have studied back in order to determine the author's intended meaning of a passage or to work out its syntax. In this part of the analysis, considerations such as the status of the church at the time the text was written are used to establish context, which helps researchers better understand the text's meaning.
Computer scientists use lexical-syntactical analysis to break a linear stream of computer code into its components, which are called tokens. Tokens are the smallest meaningful units of a programming language. Syntactical analysis then checks whether the tokens are arranged according to the grammar rules of the programming language; this set of rules is called the syntax. The relationships of the tokens to one another become clear in this stage.
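The lexical half of the process can be sketched briefly. The example below is a minimal, illustrative tokenizer written in Python; the token categories (NUMBER, IDENT, OP) and the tiny expression language are assumptions for demonstration, not taken from any real language specification.

```python
import re

# Illustrative token categories for a toy expression language.
# Real lexers are generated from a full language specification.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),          # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"), # identifiers such as variable names
    ("OP",     r"[+\-*/=]"),     # single-character operators
    ("SKIP",   r"\s+"),          # whitespace: separates tokens, then discarded
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code):
    """Break a linear string of code into a list of (kind, text) tokens."""
    tokens = []
    for match in TOKEN_RE.finditer(code):
        kind = match.lastgroup
        if kind != "SKIP":
            tokens.append((kind, match.group()))
    return tokens

print(tokenize("total = price + 42"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '+'), ('NUMBER', '42')]
```

At this point the code is no longer a flat string of characters; each token carries a category that the syntactical stage can check against the grammar.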
Computer scientists use the analysis method to verify whether or not computer code follows a programming language's rules, which helps them generate error messages for invalid code. For instance, they may use the method to learn whether or not a piece of code is valid Java. The two-part analysis, also known as parsing, is also used in the process of translating code from one programming language into another.
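The verification step can be sketched with a small syntactic check over a token stream. This is an illustrative recursive check against a toy grammar (an expression is a term, optionally followed by operator-term pairs); the grammar, the token format, and the error messages are assumptions for demonstration, not drawn from any real compiler.

```python
def parse_expr(tokens):
    """Check a list of (kind, text) tokens against the toy grammar
    expr := term (OP term)* and raise SyntaxError on the first violation."""
    pos = 0

    def expect_term():
        nonlocal pos
        # A term is a single number or identifier token.
        if pos < len(tokens) and tokens[pos][0] in ("NUMBER", "IDENT"):
            pos += 1
        else:
            raise SyntaxError(f"expected a number or identifier at token {pos}")

    expect_term()
    while pos < len(tokens) and tokens[pos][0] == "OP":
        pos += 1       # consume the operator
        expect_term()  # an operator must be followed by another term
    if pos != len(tokens):
        raise SyntaxError(f"unexpected token {tokens[pos]!r} at position {pos}")
    return True

# Valid: "price + 42" parses cleanly.
print(parse_expr([("IDENT", "price"), ("OP", "+"), ("NUMBER", "42")]))
```

A stream such as `+ 42` would raise a SyntaxError here, which is the raw material from which a compiler builds its user-facing error messages.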