
Question


- Please DO NOT use any AI tools like ChatGPT to generate code/answers, as that would be a violation.

1. Peng et al., in "Towards Neural Network-based Reasoning", considered reasoning tasks consisting of a question to be answered and n premises to be used in deriving an answer to the question. They proposed to tackle such tasks by providing a separate recurrent NN (a GRU) for each premise sentence and for the question (thus, n+1 GRUs) at the input side, followed by several further layers of n+1 GRUs. The successive layers were thought of as performing successive inference steps, and at the end an answer module produced the answer to the question based on the final inference layer.

Thus the NN architecture reflected the expected reasoning structure. Also, training of the entire network was based on large amounts of synthetic examples (such as the room-to-room navigation examples in the bAbI reasoning problem sets). While the approach achieved very high success scores on bAbI problems, it did not get us very far in terms of the goal of building AI systems with general reasoning abilities.
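To make the described architecture concrete, here is a minimal PyTorch sketch of that kind of network. This is an illustrative reconstruction, not Peng et al.'s actual code: the class name, layer sizes, and the way reasoning layers combine premise and question states are all assumptions.

```python
import torch
import torch.nn as nn

class GRUReasoner(nn.Module):
    """Sketch of a Peng et al.-style reasoner: a GRU encoder applied to each
    of the n premises and to the question (n+1 encodings), a stack of
    'reasoning' layers (one inference step each), and an answer module."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128,
                 num_reasoning_layers=3, num_answers=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Input side: the same GRU is run separately over each sentence
        # (each premise and the question), yielding n+1 hidden-state vectors.
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Reasoning side: each layer updates every premise state conditioned
        # on the question state, playing the role of one inference step.
        self.reasoning = nn.ModuleList(
            [nn.GRUCell(hidden_dim, hidden_dim)
             for _ in range(num_reasoning_layers)])
        self.answer = nn.Linear(hidden_dim, num_answers)

    def forward(self, premises, question):
        # premises: (n, max_len) token ids; question: (1, max_len) token ids.
        _, prem = self.encoder(self.embed(premises))   # (1, n, hidden)
        _, ques = self.encoder(self.embed(question))   # (1, 1, hidden)
        prem, ques = prem.squeeze(0), ques.squeeze(0)  # (n, hidden), (1, hidden)
        for step in self.reasoning:
            # One inference step: each premise state is updated with the
            # question state as input (a simplification of the paper's scheme).
            prem = step(ques.expand_as(prem), prem)
        # Answer module: classify over pooled final-layer states.
        return self.answer(prem.mean(dim=0) + ques.squeeze(0))
```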

What do you see as some weaknesses of this (pre-LLM) approach to machine reasoning, in terms of the scope of reasoning problems that can be handled, the NN architecture, and the training? Keep in mind how later systems such as REFLEX are constructed and trained, and make some comparisons in answering this question.

2. ChatGPT and Reasoning. Give ChatGPT (or a similar system) some small room-to-room navigation problems of the sort Peng et al. addressed. I.e., describe a layout something like the one on the slides (but also somewhat more complicated ones), and then ask how a person would go from one of the locations to another. ChatGPT will probably succeed on one- or two-step problems, but see how far you can push the complexity before answers become hit-and-miss. Report your results, and on that basis give your brief assessment of ChatGPT's abilities in this sort of problem domain. (A sketch for generating such problems, with ground-truth routes, follows.)
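To run such experiments systematically, a small helper like the one below can compute the ground-truth shortest route for any layout you describe to ChatGPT, so its answers can be scored. The layout and room names here are made-up examples.

```python
from collections import deque

def shortest_path(doors, start, goal):
    """Breadth-first search over a room graph given as (room, room) door
    pairs. Returns the shortest room-to-room route, or None if unreachable."""
    graph = {}
    for a, b in doors:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical 5-room layout.
doors = [("kitchen", "hallway"), ("hallway", "bedroom"),
         ("hallway", "garden"), ("garden", "office")]
print(shortest_path(doors, "kitchen", "office"))
# -> ['kitchen', 'hallway', 'garden', 'office']
```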

3. Planning and the Frame Problem.

a. We can approach action modeling within a logical framework in either of two ways (as discussed in class):

- by treating actions as predicates; e.g., Walk(Robbie,Loc1,Loc2,T1), where T1 might be a start time, or perhaps a time interval, or perhaps the event of walking (there are variants, such as making 'Walk' monadic and writing {Walk(E1), Agent(E1,Robbie), Start(E1,Loc1), Destin(E1,Loc2)}, etc.); or
- by treating actions as functions that transform one situation (world state) into another; e.g., walk(Robbie,Loc1,Loc2,S1), where this is the situation resulting from Robbie walking from Loc1 to Loc2, starting in situation S1.

McCarthy & Hayes, in proposing the "Situation Calculus", chose the latter option, slightly reformulating action functions in two parts: an action type, like walk(Loc1,Loc2), and a function for doing a type of action in a given situation, like do(walk(Loc1,Loc2),S1).

For purposes of deductively deriving a plan, what is the advantage of the latter, functional approach? Couldn't we use deduction in the same way in the predicative approach? If so, how; if not, why not?
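As a hint at what is at stake, here is an illustrative sketch using the 'do' notation above (the effect axiom and constants are made up, and preconditions are ignored): in the functional approach, constructively proving an existential goal yields a situation term whose nesting spells out the plan.

```
Effect axiom:  (all x,y,s)  At(Robbie, y, do(walk(x,y), s))
Goal:          (exists s)   At(Robbie, Loc3, s)
Witness term:  s = do(walk(Loc2,Loc3), do(walk(Loc1,Loc2), S0))
```

Reading the witness term inside-out gives the action sequence.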

b. The Frame Problem -- the problem of succinctly axiomatizing NON-change -- arises in both approaches. What prevents logical planning from succeeding, if we just axiomatize the effects of actions (what becomes true or false as a consequence of performing the actions) in a given situation (or at a given time), ignoring NON-change? Illustrate with a simple example of your own.
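For concreteness, here is an illustrative sketch, with made-up fluents, of the kind of gap part (b) is pointing at. Suppose we axiomatize only the effect of walking:

```
Effect axiom:  (all x,y,s)  At(Robbie, y, do(walk(x,y), s))
Given:         On(Light1, S0)
Query:         On(Light1, do(walk(Loc1,Loc2), S0))   -- not derivable
```

The effect axiom says what becomes true, but leaves the light's status in the successor situation entirely unconstrained, so the query (and any plan depending on it) cannot be derived.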

c. Suggest a frame axiom (of the type suggested by McCarthy and Hayes) for ensuring that if Robbie walks from a location x to a location y in a situation s (in some room -- but ignore this), and the light is on when he starts at x, the light is still on when he arrives at y. Use McCarthy & Hayes's functional formulation of actions based on the 'do' function (also sometimes called 'result', as it gives a state/situation as value when applied to an action type and a state). (Note that a similar axiom for inferring non-change of the light would be needed for Robbie picking up an object, or releasing it, or pushing an object, or recharging himself at an outlet, etc.)
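For the general shape being asked for, here is the analogous axiom for the pickup case mentioned in the parenthetical note (a sketch only; 'Light1' is an assumed constant, and the variables are universally quantified):

```
(all x,s)  On(Light1, s) -> On(Light1, do(pickup(x), s))
```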

d. i. Assuming that Robbie is the only agent in the room, suggest an explanation closure axiom that states that if the light is on in some situation and Robbie does some type of action 'a', and the light is not on in the state resulting from the action, then 'a' must be the type of action where Robbie toggles the light switch (assume we have a unique, named switch).

ii. The EC axiom actually isn't quite enough to enable the desired non-change inference about the light when Robbie walks. That's because logically, doing the 'walk' and 'toggle' actions could be the same! Suggest a supplementary axiom to rule out this logical possibility.
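To illustrate the pattern for a different fluent, Robbie's location (a sketch with assumed predicate and function names, not the answer to d.i or d.ii): explanation closure says that if a fluent changes, the action performed must be of one of the types that can change it, and a unique-names axiom keeps syntactically distinct action types from denoting the same action.

```
EC axiom (location):  (all x,a,s)
    At(Robbie,x,s) & ~At(Robbie,x,do(a,s)) -> (exists y) a = walk(x,y)

Unique-names axiom:   (all x,y,z)  walk(x,y) /= pickup(z)
```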

4. The way actions and change are modeled in the STRIPS approach to planning makes very strong assumptions about non-change. What real-world circumstances in a robot's world could invalidate these assumptions?
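To make those assumptions explicit, here is a minimal sketch of the STRIPS state-update rule, with states as sets of ground fluents (the fluent strings are made-up examples): everything not named in an operator's add or delete list is assumed, by fiat, not to change.

```python
def apply_op(state, precond, add_list, delete_list):
    """Apply a STRIPS operator if its preconditions hold; else return None.
    Fluents outside add_list/delete_list persist -- the strong non-change
    assumption the question asks about."""
    if not precond <= state:          # all preconditions must be present
        return None
    return (state - delete_list) | add_list

state = {"At(Robbie,Loc1)", "On(Light1)"}
new_state = apply_op(
    state,
    precond={"At(Robbie,Loc1)"},
    add_list={"At(Robbie,Loc2)"},
    delete_list={"At(Robbie,Loc1)"},
)
print(new_state)  # At(Robbie,Loc2) and On(Light1): the light 'persists' by fiat
```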

- Please answer all the questions/parts, as they are all part of a single assignment and I cannot ask them separately.

