Question
For the following questions, provide clear answers. Give the respective coding for each.
A context-free grammar for a fragment of English is shown below:

S -> NP VP
NP -> Det N
N -> N N
VP -> rumbles, rusts
Det -> the, a, every
N -> bus, car, train, park, airport, station

(a) Show the parse trees for the two parses that the grammar assigns to sentence S1.
S1: the train station bus rumbles [3 marks]
(b) Give an algorithm for a bottom-up passive chart parser without packing. Illustrate your answer by showing the edges constructed when parsing sentence S1. [11 marks]
(c) Describe how this algorithm could be modified so that edges may be packed, illustrating your answer by considering sentences S1 and S2. What effect does packing have on parsing efficiency?
S2: the airport car park bus rumbles [6 marks]

(a) The function fix is the least fixed point operator from (D → D) to D, for a domain D.
(i) Show that λf. f^n(⊥) is a continuous function from (D → D) to D for any natural number n. [Hint: Use induction on n. You may assume the evaluation function (f, d) ↦ f(d) and the function f ↦ (f, f), where f ∈ (D → D) and d ∈ D, are continuous.] [7 marks]
(ii) Now argue briefly why fix = ⊔_{n≥0} λf. f^n(⊥), to deduce that fix is itself a continuous function. [3 marks]
(b) In this part you are asked to consider a variant PCFrec of the programming language PCF in which there are terms rec x : τ. t, recursively defining x to be t, instead of fix terms.
(i) Write down a typing rule for rec x : τ. t. [2 marks]
(ii) Write down a rule for the evaluation of rec x : τ. t. [2 marks]
(iii) Write down the clause in the denotational semantics which describes the denotation of rec x : τ. t. (This will involve the denotation of t, which you may assume.) [3 marks]
(iv) Write down a term in PCFrec whose denotation is the least fixed point operator of type (τ → τ) → τ. [3 marks]
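Part (b) of the chart-parsing question above asks for a bottom-up passive chart parser without packing. The Python sketch below is only an illustration of the idea on the toy grammar, not a model answer; the names GRAMMAR, LEXICON, parse and the edge representation (category, start, end, children) are assumptions made for this example.

# Bottom-up passive chart parser, no packing: every distinct analysis of a
# span gets its own edge (category, start, end, children).  Each new edge is
# combined exhaustively with the edges already in the chart.
# Unary lexical rules (e.g. VP -> rumbles) are folded into LEXICON.

GRAMMAR = [            # binary rules: (mother, (daughter1, daughter2))
    ("S",  ("NP", "VP")),
    ("NP", ("Det", "N")),
    ("N",  ("N", "N")),
]
LEXICON = {
    "rumbles": "VP", "rusts": "VP",
    "the": "Det", "a": "Det", "every": "Det",
    "bus": "N", "car": "N", "train": "N",
    "park": "N", "airport": "N", "station": "N",
}

def parse(words):
    chart = []                                  # list of passive edges

    def add(cat, start, end, kids):
        edge = (cat, start, end, tuple(kids))
        if edge in chart:                       # identical edge already present
            return
        chart.append(edge)
        # try to combine the new edge with everything already in the chart
        for other in list(chart):
            for mother, (d1, d2) in GRAMMAR:
                if other[0] == d1 and other[2] == start and cat == d2:
                    add(mother, other[1], end, [other, edge])
                if cat == d1 and end == other[1] and other[0] == d2:
                    add(mother, start, other[2], [edge, other])

    for i, w in enumerate(words):               # seed the chart with lexical edges
        add(LEXICON[w], i, i + 1, [w])
    return chart

words = "the train station bus rumbles".split()          # sentence S1
chart = parse(words)
complete = [e for e in chart if e[:3] == ("S", 0, len(words))]
print(len(chart), "edges,", len(complete), "complete S analyses")   # two analyses, as in part (a)

Packing, as asked about in part (c), would amount to keeping one edge per (category, start, end) triple and recording the alternative daughter lists inside it, so the two analyses of "train station bus" would share a single N edge rather than producing two.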
(a) A transaction processing system is using a write-ahead log as a mechanism for providing persistent storage. What information must be written to the log (i) when a transaction starts; (ii) when a transaction performs an operation on a persistent object; (iii) when a transaction commits? [2 marks each]
(b) Describe how the log can be used to roll back a transaction that has aborted or become deadlocked. [6 marks]
(c) The log can also be used to recover after some kinds of system failure.
(i) Describe how the log can be used to recover after a fail-stop crash. [2 marks]
(ii) What is meant by checkpointing? How will using it affect the structure of the log and the recovery procedure after a crash? [6 marks]

The ARM processor allows the second operand to be shifted by an arbitrary amount. In order to improve performance, a six-stage pipeline is proposed with the following stages:

instruction fetch | decode and register fetch | shift operand 2 | execute | memory access | register write back

(a) What are control hazards and how could they be resolved in the above pipeline? [4 marks]
(b) What are data hazards and how could they be resolved in the above pipeline? [4 marks]
(c) What are feed-forward paths and where could they be added to the above pipeline to improve performance? [6 marks]
(d) Why might a branch instruction result in pipeline bubbles, and how many bubbles will appear in the above pipeline as a result of taking a branch instruction? [6 marks]
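For the write-ahead log question above, the Python toy below gives a rough sketch of parts (a) and (b): what a start/update/commit record might contain, and how scanning the log backwards over a transaction's update records undoes its effects. The record layout and the names store, log and rollback are illustrative assumptions, not a prescribed format.

# Toy write-ahead log: each transaction writes a START record, one record per
# operation on a persistent object (holding the old and new values), and a
# COMMIT record.  Rolling back an aborted or deadlocked transaction means
# scanning the log backwards and restoring the old values of its updates.

store = {"x": 0, "y": 0}          # the "persistent" objects
log = []                          # append-only; in reality forced to disk first

def start(tid):
    log.append(("START", tid))

def write(tid, obj, new_value):
    log.append(("UPDATE", tid, obj, store[obj], new_value))  # old and new value
    store[obj] = new_value        # WAL rule: log record written before the update

def commit(tid):
    log.append(("COMMIT", tid))

def rollback(tid):
    # undo this transaction's updates in reverse order, then mark it aborted
    for rec in reversed(log):
        if rec[0] == "UPDATE" and rec[1] == tid:
            _, _, obj, old, _ = rec
            store[obj] = old
    log.append(("ABORT", tid))

start("T1"); write("T1", "x", 42); write("T1", "y", 7)
rollback("T1")                    # e.g. chosen as a deadlock victim
print(store)                      # back to {'x': 0, 'y': 0}

Recovery after a fail-stop crash, as in part (c), follows the same idea: scan the log, redoing transactions that have a COMMIT record and undoing those that do not; a checkpoint record bounds how far back that scan has to go.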
In the following, N is a feedforward neural network architecture taking a vector x^T = (x_1 x_2 ... x_n) of n inputs. The complete collection of weights for the network is denoted w, and the output produced by the network when applied to input x using weights w is denoted N(w, x). The number of outputs is arbitrary. We have a sequence s of m labelled training examples

s = ((x_1, l_1), (x_2, l_2), ..., (x_m, l_m))

where the l_i denote vectors of desired outputs. Let E(w; (x_i, l_i)) denote some measure of the error that N makes when applied to the ith labelled training example. Assuming that each node in the network computes a weighted summation of its inputs followed by an activation function, such that node j in the network computes a function

g( w_0^(j) + Σ_{i=1}^{k} w_i^(j) input(i) )

of its k inputs, where g is some activation function, derive in full the backpropagation algorithm for calculating the gradient

∂E/∂w = ( ∂E/∂w_1  ∂E/∂w_2  ...  ∂E/∂w_W )^T

for the ith labelled example, where w_1, ..., w_W denotes the complete collection of W weights in the network. [20 marks]

A new programming language has the notion of "statically scoped exceptions", in which the program

exception foo;
void f()
{
    try {
        void g() { raise foo; }
        try { g(); } except (foo) { C2 }
    } except (foo) { C1 }
}

would execute C1 rather than C2, as the former was in scope at the raise point. By analogy with statically scoped variables, or otherwise, explain how such exceptions might be implemented on a stack.
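The backpropagation question above is a pen-and-paper derivation; as a numerical sanity check of the resulting formulas (output deltas, hidden deltas computed from the weighted sum of successor deltas, and gradient equal to delta times input), here is a sketch for a one-hidden-layer network. The logistic activation, the squared-error E and the layer sizes are assumptions made purely for this example.

import numpy as np

# One labelled example (x, l) for a network with one hidden layer.
# Node j computes g(w0_j + sum_i w_ij * input_i); here g is the logistic
# sigmoid and E(w; (x, l)) = 0.5 * ||N(w, x) - l||^2 (both assumptions).

def g(a):  return 1.0 / (1.0 + np.exp(-a))
def dg(a): return g(a) * (1.0 - g(a))

rng = np.random.default_rng(0)
n, h, o = 3, 4, 2                      # input, hidden, output sizes (illustrative)
W1, b1 = rng.normal(size=(h, n)), rng.normal(size=h)
W2, b2 = rng.normal(size=(o, h)), rng.normal(size=o)
x, l = rng.normal(size=n), rng.normal(size=o)

# Forward pass: keep the pre-activations a, since the deltas need g'(a).
a1 = b1 + W1 @ x;  z1 = g(a1)
a2 = b2 + W2 @ z1; y  = g(a2)

# Backward pass: delta_j = dE/da_j.  For output nodes, delta = (y - l) * g'(a);
# for hidden nodes, delta = (sum over successors of w * delta_successor) * g'(a).
delta2 = (y - l) * dg(a2)
delta1 = (W2.T @ delta2) * dg(a1)

# Gradients: dE/dw_ij = delta_j * input_i, and dE/dw0_j = delta_j.
dW2, db2 = np.outer(delta2, z1), delta2
dW1, db1 = np.outer(delta1, x),  delta1

# Finite-difference check on one weight, to confirm the formula.
eps = 1e-6
E = lambda y_: 0.5 * np.sum((y_ - l) ** 2)
W1p = W1.copy(); W1p[0, 0] += eps
y_p = g(b2 + W2 @ g(b1 + W1p @ x))
print(dW1[0, 0], (E(y_p) - E(y)) / eps)   # the two numbers should agree closely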