
Take-home Exam in Advanced Programming
Deadline: Friday, 11 November 2022 at 15:00 Version 1.0
This is the exam set for the individual, written take-home exam on the course Advanced Programming, B1-2022. This document consists of 26 pages; make sure you have them all. Please read the entire preamble carefully.
The exam consists of 2 questions. Your solution will be graded as a whole, on the 7-point grading scale, with an external examiner. The questions each count for 50%. However, note that you must have some non-trivial working code in both Haskell and Erlang to get a passing grade.


In the event of errors or ambiguities in an exam question, you are expected to state your assumptions as to the intended meaning in your report. You may ask for clarifications in the discussion forum on Absalon, but do not expect an immediate reply. If there is no time to resolve a case, you should proceed according to your chosen (documented) interpretation.
What To Hand In
To pass this exam you must hand in both a report and your source code:
• The report should be around 5–10 pages, not counting appendices, presenting (at least) your solutions, reflections, and assumptions, if any. The report should contain all your source code in appendices. The report must be a PDF document.
• The source code should be in a .ZIP file called code.zip, archiving one directory called code, and following the structure of the handout skeleton files.
Make sure that you follow the format specifications (PDF and .ZIP). If you don't, the hand-in will not be assessed and will be treated as a blank hand-in. The hand-in is done via the Digital Exam system (eksamen.ku.dk).
Learning Objectives
To get a passing grade you must demonstrate that you are both able to program a solution using the techniques taught in the course and write up your reflections and assessments of

Advanced Programming DIKU, B1-2022/2023
your own work.
• For each question your report should give an overview of your solution, including an assessment of how good you think your solution is and on which grounds you base your assessment. Likewise, it is important to document all relevant design decisions you have made.
• In your programming solutions, emphasis should be on correctness, on demonstrating that you have understood the principles taught in the course, and on clear separation of concerns.
• It is important that you implement the required API, as your programs might be subjected to automated testing as part of the grading. Failure to implement the correct API may influence your grade.
• To get a passing grade, you must have some non-trivial working code in both Haskell and Erlang.
Exam Fraud
This is a strictly individual exam, thus you are not allowed to discuss any part of the exam with anyone on or outside the course. Submitting answers (code and/or text) you have not written entirely by yourself, or sharing your answers with others, is considered exam fraud.
You are allowed to ask (not answer) how an exam question is to be interpreted on the course discussion forum on Absalon. That is, you may ask for official clarification of what constitutes a proper solution to one of the exam problems, if this seems either underspecified or inconsistently specified in the exam text. But note that this permission does not extend to discussion of any particular solution approaches or strategies, whether concrete or abstract.
This is an open-book exam, and so you are welcome to make use of any reading material from the course, or elsewhere. However, make sure to use proper and specific citations for any material from which you draw considerable inspiration – including what you may find on the Internet, such as snippets of code. Similarly, if you reuse any significant amount of code from the course assignments that you did not develop entirely on your own, remember to clearly identify the extent of any such code by suitable comments in the source.
Also note that it is not allowed to copy any part of the exam text (or supplementary skeleton files) and publish it on forums other than the course discussion forum (e.g., StackOverflow, IRC, exam banks, chatrooms, or suchlike), whether during or after the exam, without explicit permission of the author(s).
During the exam period, students are not allowed to answer questions on the discussion forum; only teachers and teaching assistants are allowed to answer questions.
Breaches of the above policy will be handled in accordance with the Faculty of Science’s disciplinary procedures.

Question 1: APpy: A simple parser generator
Don't worry, be 'APpy!
In the course we have looked primarily at language-integrated parsing in Haskell. In many other languages, parser combinators cannot be expressed so concisely and elegantly; in particular, backtracking may be quite hard to accommodate in a language that cannot immediately support it as an instance of the monadic framework. In such languages, it is common to instead write parsers and/or lexers in a dedicated, domain-specific language. Such a language provides a concrete, machine-processable notation for expressing context-free grammars, together with a way to specify how to build the intended structured representation of the parsed text, typically as some form of abstract syntax tree.
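For reference, the language-integrated style meant here is the ReadP style from the course, where a small grammar is expressed directly as Haskell combinators. A minimal sketch (names `number` and `sums` are illustrative, not part of the exam):

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- A number is a maximal run of digits, read as an Int.
number :: ReadP Int
number = fmap read (munch1 isDigit)

-- Left-associative chains of "+"; backtracking comes for free
-- from the ReadP monad.
sums :: ReadP Int
sums = chainl1 number (char '+' >> return (+))
```

Filtering for full parses, `[ v | (v, "") <- readP_to_S sums "1+2+3" ]` yields `[6]`.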
This approach was popularized for C in the 1970s by the yacc and lex tools, subsequently reimplemented as the Free variants bison and flex, and with more or less direct ports to other languages, both imperative and functional. A parser specification in this style combines a grammar specification with snippets of host-language code that get embedded directly into the generated parser, and get executed when that parser is then run on some concrete input.
A main advantage of a dedicated parser-specification language is that it may allow more grammars to be expressed directly, or enable more efficient parsers. A disadvantage is that it complicates the development process somewhat, in that it is now necessary to first run the parser generator to obtain the parser program, and then compile that parser program to actually get a runnable parser. This means that any syntactic or semantic issues with the embedded host-language snippets are only discovered when the constructed parser is compiled, at which time it may be hard to trace back a compile-time error to the specific location in the parser-specification file that caused it, and that needs to be fixed.
In this task, you will implement a simple parser generator for Haskell along these lines. An open-source parser generator for Haskell, with an interface quite similar to the yacc tool (but with many Haskell-specific adaptations), exists under the name of Happy. Accordingly, our much less ambitious version will be called APpy. Note, however, that apart from the similar name and overall goal, APpy is very substantially different from both yacc and Happy; in particular, it is highly unlikely that you will find anything of value for this task in either the Happy documentation or its source code.
A tour of APpy
This section gives an informal overview of the APpy features from a user's point of view. The next section goes into more detail on what you are asked to implement.
Basic BNFs The grammar is specified in a syntax similar to the one in the parser notes, with literal tokens written between double quotes, and all rules terminated by a period:
-- A simple expression grammar:
Exp ::= Exp "+" Term | Exp "-" Term | Term.
Term ::= num | "(" Term ")".

A nonterminal name can be any sequence of letters, digits, and underscores, starting with a letter. (The case of the letter matters; see the paragraph "Tokens and whitespace" below.)
A literal can be any non-empty sequence of printable characters between double-quotes. If the literal itself contains a double-quote, it must be doubled; for example, the 3-character token ="= would be written as "=""=".
𝜖-alternatives are written as simply an empty sequence of symbols, e.g.
Digits ::= Digit Digitz.
Digitz ::= | Digit Digitz.
(Note that these examples are not yet valid APpy inputs because they are missing the semantic actions, discussed next.)
Comments in the grammar specification start with -- and end with the next newline.
Semantic actions The purpose of APpy is to generate a parser that not only checks whether an input string can be generated by a grammar, but also constructs the semantic meaning of the string, typically as an abstract syntax tree. For this purpose, every alternative in a rule ends with a semantic action specifying how to construct the meaning of this alternative from the meanings of the constituents in the sequence. For example, we could specify:
Exp ::= Exp "+" Term {Plus _1 _3}
      | Exp "-" Term {Minus _1 _3}
Term ::= num
      | "(" Term ")"
A semantic action is simply an arbitrary Haskell expression, in which the special variables _1, _2, … are bound to the meanings constructed by the corresponding elements of the preceding sequence of symbols.
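To make the binding scheme concrete, here is a hypothetical sketch of the kind of code a generated parser might contain for an assumed rule `Pair ::= "(" num "," num ")" {(_2, _4)}.` (the rule, the `lit` helper, and `p_num` are illustrative assumptions, not part of the required API):

```haskell
import qualified Text.ParserCombinators.ReadP as RP
import Data.Char (isDigit)

-- Assumed helper: a literal token parser, whose meaning is ().
lit :: String -> RP.ReadP ()
lit s = RP.string s >> RP.skipSpaces

-- Assumed token parser for num.
p_num :: RP.ReadP Int
p_num = do ds <- RP.munch1 isDigit; RP.skipSpaces; return (read ds)

-- Hypothetical generated code for:  Pair ::= "(" num "," num ")" {(_2, _4)}.
-- Each element of the sequence binds _1, _2, ... in order; the
-- semantic action becomes the final return expression.
p_Pair :: RP.ReadP (Int, Int)
p_Pair = do
  _1 <- lit "("
  _2 <- p_num
  _3 <- lit ","
  _4 <- p_num
  _5 <- lit ")"
  return (_2, _4)
```

Note how the literal tokens bind `_1`, `_3`, and `_5` to `()`, so they still count for numbering even though they are never referenced.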
The action will always be treated as a complete, parenthesized expression, and should not rely on indentation; instead, if needed, semicolons should be used to separate bindings in a do or let, alternatives in a case, etc. The semantic action can contain any number of arbitrary characters, and will not be modified or analyzed by APpy itself. However, if an action itself is to contain the characters { or } anywhere (even in string constants, etc.), they must be doubled. For example,
Exp ::= …
| "let" id "=" Exp "in" Exp {Let{{var=_2,def=_4,body=_6}}}.
Such doubled braces are replaced by single ones in the actual Haskell code. A sole } immediately terminates the action, whereas a sole { is illegal.
Literal tokens, such as “+”, have the semantic meaning (). They would normally not be explicitly referenced in the semantic action, but they are still counted for the purpose of numbering.

Conversely, if some phrase has no semantic meaning (for example, a comment, as discussed), its semantic action should be given as {()}.
All semantic actions in a single rule must be of the same type. (If the exact semantic action for a particular alternative has not been determined yet, it may be written as {undefined}, which makes it typable in any context.)
If a sequence of grammar elements in an alternative is of length exactly one, the action may be omitted and corresponds to simply propagating the meaning of the single element in the sequence, as if the action had been written as {_1}.
Finally, mainly to aid the human reader (since Haskell’s type inferencer can almost always deduce it), the type of a nonterminal’s meaning can be optionally given on the left-hand side of the rule, as follows:
Exp {:Expr} ::= Exp "+" Term {Plus _1 _3} | …
Again, the text in {: … } can be any well-formed Haskell type, using the same convention for brace-doubling as in the semantic actions.
Nested choices Sometimes it is convenient to specify a simple choice in the middle of a grammar rule, as in EBNF, by enclosing the alternatives in parentheses. Correspondingly, in APpy we can write:
Exp ::= …
| Exp ("+" {Plus} | "-" {Minus}) Term {_2 _1 _3}
Variables in nested actions refer to constituents of the preceding sequence, not those in the outer rule. For example, in
Addr ::= reg ({\r -> RegPlain r} | "+" num {\r -> RegOffset r _2}) {_2 _1}
the first occurrence of _2 refers to the value of the numeric constant num, whereas the second _2 is the function resulting from the parenthesized choice.
Also like in EBNF, there are convenient shorthands for introducing optional and repeated grammar elements. Their meanings have Maybe and list types, respectively. For example:
Exps{:[Expr]} ::= Exp Exp* {_1 : _2}.
Addr ::= reg ("+" num {_2} | "-" num {negate _2})?
{case _2 of Nothing -> RegPlain _1; Just n -> RegOffset _1 n}.
(Note how we must separate the case-branches with ; instead of indentation in the semantic action.)
Thus, the EBNF notation "[…]" can be expressed as (…)? in APpy, and "{…}" as (…)*.
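In a combinator backend, these shorthands map naturally onto existing ReadP machinery. A sketch (the names `optionMaybe` and `star` are assumptions for illustration, not ReadP exports):

```haskell
import qualified Text.ParserCombinators.ReadP as RP

-- (e)? : the meaning has a Maybe type.
optionMaybe :: RP.ReadP a -> RP.ReadP (Maybe a)
optionMaybe p = RP.option Nothing (fmap Just p)

-- (e)* : the meaning has a list type; RP.many already provides this.
star :: RP.ReadP a -> RP.ReadP [a]
star = RP.many
```

For example, filtering for full parses, `optionMaybe (RP.char 'x')` on "x" yields `Just 'x'`, and `star (RP.char 'a')` on "aa" yields "aa".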

Predicated and negated parsers Semantic actions and nested choices do not change what kinds of languages we can parse; they merely allow us to express the meanings of various phrases as we parse them, or to specify the syntax more compactly. However, APpy also contains two extensions to the context-free paradigm specifically meant for expressing practical parsers.
The first, predicated parsers, allow the semantic meaning of a phrase to affect whether it is syntactically well formed. It allows any element in a sequence to be followed by a predicate (expressed in general Haskell, like the semantic actions). For example,
var ::= Word{?`notElem` ["let", "in", "if", "then", "else"]}.
Word ::= Char{?isAlpha}*.
The Haskell expression in {? … } uses the same conventions for brace-doubling as semantic actions. It should have type t -> Bool, where t is the type of the meaning of the element it follows.
Since predicates (like ? and *) are always attached to the preceding grammar element, the convention is to write them immediately next to the preceding element without intervening whitespace (but such whitespace is still formally allowed).
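One way a backend might realize predication is to run the element's parser and then fail unless the predicate holds of its meaning. A sketch (`predicated`, `word`, and `var` are assumed names for illustration):

```haskell
import qualified Text.ParserCombinators.ReadP as RP
import Data.Char (isAlpha)

-- Element{?pred}: parse the element, then fail unless the
-- predicate holds of its semantic value.
predicated :: RP.ReadP a -> (a -> Bool) -> RP.ReadP a
predicated p f = do
  x <- p
  if f x then return x else RP.pfail

-- Roughly the var/Word rules from the example above:
word :: RP.ReadP String
word = RP.many1 (RP.satisfy isAlpha)

var :: RP.ReadP String
var = predicated word (`notElem` ["let", "in", "if", "then", "else"])
```

Filtering for full parses, `var` accepts "foo" but has no complete parse of "let".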
The second extension is negated parsers. A sequence element can be preceded by an exclamation mark (!), meaning that it should not be possible to successfully call that parser at that point in the input. For example, we can say
Num ::= Digit Digit* !Digit {_1:_2}.
This will parse a maximal sequence of digits (similarly to munch1 isDigit in ReadP).
Again, the negation is conventionally written immediately before the element it negates. Only a single ?, *, or ! decoration is allowed on a grammar element, since combinations of them very rarely make sense in practice.
For efficiency the parser to be negated should be simple, often just a single character. The semantic value of a negated parser is always () (like for token literals), regardless of the type of the parser being negated.
In a grammar specification, predication groups tighter than the prefix/postfix decorations; for example, !A{?p} parses as !(A{?p}), not (!A){?p} (which wouldn't make much sense, anyway). Nested predications are allowed, as in Char{?isAscii}{?isLetter}.
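Negation can be sketched on top of ReadP's look, which inspects the remaining input without consuming it (`notP` and `numTok` are assumed names; a real implementation may need to be more careful about efficiency):

```haskell
import qualified Text.ParserCombinators.ReadP as RP
import Data.Char (isDigit)

-- !p: succeeds (with meaning ()) exactly when p cannot succeed here.
notP :: RP.ReadP a -> RP.ReadP ()
notP p = do
  rest <- RP.look
  if null (RP.readP_to_S p rest) then return () else RP.pfail

-- Roughly the Num rule above: a maximal run of digits.
numTok :: RP.ReadP String
numTok = do
  d  <- RP.satisfy isDigit
  ds <- RP.many (RP.satisfy isDigit)
  notP (RP.satisfy isDigit)
  return (d:ds)
```

On the input "12x", `numTok` has exactly one result, "12" with remainder "x"; the shorter candidate "1" is rejected because a digit follows it.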
Tokens and whitespace Like parser combinators, APpy can be used to specify both lexical and syntactic grammars in a unified framework. This requires careful attention to where whitespace can or must occur. APpy uses the standard convention of letting designated token parsers (only) skip any following whitespace. Tokens in the grammar are either literals (i.e., strings between double quotes) or symbols with names starting with a lowercase letter. Note that whitespace-skipping is added to each token parser at its definition (like with lexeme), not at its uses.
Character parsers Since double-quoted (i.e. token) literals skip following whitespace, we also need a way to express that whitespace is not allowed after some characters, such

as the digits within a number, or the letters in an identifier. For that purpose, APpy allows character literals, written between single quotes:
num ::= '-'? Digits {maybe _2 (\_ -> negate _2) _1}
(where Data.Maybe.maybe is a concise way of writing a case-expression over a Maybe type.) Here, there can be no whitespace between the optional sign and the digits.
A character literal consists of exactly one printable character between single quotes. (So a literal single-quote character can be written directly as ''' without any doubling or escaping.)
Quite commonly, we want to allow a choice between a fair-sized number of characters. While we could write, e.g., ('0'|'1'|…|'9') (with … replaced by an actual enumeration of the remaining digits), this gets cumbersome. Instead, we can use predication with a general single-character parser:
Digit ::= @{?isDigit}.
Here, @ accepts any single character, further constrained by the predicate. This also allows us to specify individual non-printable characters by escaping to Haskell’s general character syntax, e.g.,
Newline ::= @{?=='\n'}.
Bell ::= @{?=='\7'}.
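In ReadP terms, @ corresponds to get, and @{?p} collapses to satisfy p. A sketch (the names `anyCharP` and `p_Digit` are assumptions):

```haskell
import qualified Text.ParserCombinators.ReadP as RP
import Data.Char (isDigit)

-- @ : any single character.
anyCharP :: RP.ReadP Char
anyCharP = RP.get

-- Digit ::= @{?isDigit}.  could compile to:
p_Digit :: RP.ReadP Char
p_Digit = RP.satisfy isDigit
```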
Token-separation parser The character parsers can be used to express the precise whitespace and comment rules of the grammar as a nonterminal. Then, this designated token-separation parser named _ (suggesting whitespace) is automatically called after each token parser, as well as initially. It is not possible to invoke _ directly as part of a grammar rule.
However, there is a complication in that certain literal tokens (typically keywords) often must not be followed by additional letters or digits, while others (such as symbols) have no such restrictions; yet this difference is not evident from the token literals, and we don't want to hard-code any particular conventions into the parser generator.
Therefore, as a special case, the separation parser is told which literal it follows, so that it can make any special arrangements. The literal is available in the special String-typed variable _0, and may be used in predicates to influence the behavior of the parser. When the separation parser is invoked after a nonterminal designated as a token, as well as when skipping initial whitespace, _0 is bound to the empty string (recall that token literals must be non-empty.)
For example, to ensure that keyword literals are not immediately followed by a letter, we can say:
_ ::= !Letter{?\_ -> _0 `elem` keywords} {()}.
When _0 is not a keyword, the predicate \_ -> … will always return False regardless of its argument, so the predicated parser Letter{?…} will fail, and therefore the negated parser !Letter{?…} will succeed, and go on to skip any following space characters. Similarly, if _0 is a keyword but the next character is not a letter, the negated parser will succeed.
But when _0 is a keyword (e.g., "not") and the next character in the input is a letter (e.g., 'x'), the negated parser will correctly fail, since the input string "notx" should not parse as a negation, but only as an identifier.
Note that _0 is not bound in any parsers invoked from the separation parser; this includes any parsers arising from nested choices or sequences.
Preamble and top-level parsers In many cases, the semantic actions in the parser specification use task-specific abstract-syntax datatypes, which need to be defined or imported. Similarly, predicates often refer to the various character-classification isXxx predicates from Data.Char. Finally, the generated parser will usually be a separate module with a suitable header and export list, which includes the generated parsers.
To accommodate this, an APpy grammar specification starts with a preamble, which is an arbitrary fragment of Haskell code that will be copied verbatim (without even treating open/close braces specially) to the generated parser file. For example,
module Expressions (Exp(..), parseExp) where

import qualified Text.ParserCombinators.ReadP as RP -- needed by backend
import Data.Char (isDigit)

data Exp = Cst Int | Plus Exp Exp | Minus Exp Exp
  deriving Show

parseExp :: String -> Either String Exp
parseExp = parseTop p_E
---
E ::= T {_1} | E "+" T {_1 `Plus` _3} | E "-" T {_1 `Minus` _3}.
T ::= num {Cst _1} | "(" E ")" {_2}.
num ::= Digit Digit* {read (_1:_2) :: Int}.
Digit ::= @{?isDigit}.
_ ::= ' '* {()}.
The preamble is separated from the grammar specification by a line containing only --- (three consecutive dashes). The generated code expects ReadP (or a work-alike) to be available as RP.
The code in the preamble can use the function
parseTop :: RP.ReadP a -> String -> Either String a
to invoke one of the generated parsers, with p_ prepended to its name. (For debugging, all parsers can be invoked directly with readP_to_S as
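The handout skeleton defines parseTop; one plausible shape, under the assumption that a successful parse must consume the entire input, is:

```haskell
import qualified Text.ParserCombinators.ReadP as RP

-- One plausible definition of parseTop (the handout's may differ):
-- keep only parses that consume the entire input, and reject
-- both failing and ambiguous parses.
parseTop :: RP.ReadP a -> String -> Either String a
parseTop p s =
  case [ a | (a, "") <- RP.readP_to_S p s ] of
    [a] -> Right a
    []  -> Left "no parse"
    _   -> Left "ambiguous parse"
```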
