Questions and Answers of Introduction To The Mathematics
(a) Show that if \(G\) is an abelian group and \(n\) is an integer, then \((ab)^{n}=a^{n}b^{n}\) for all \(a, b \in G\). (b) Give an example of a group \(G\), an integer \(n\), and elements
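A quick numerical spot-check of part (a), in one concrete abelian group of my own choosing (the multiplicative group of units modulo 13 — the exercise itself asks for a general proof, which this does not replace):

```python
# Spot-check (ab)^n == a^n * b^n in the abelian group of units mod 13.
# This only illustrates the identity; the exercise wants a proof.
p = 13
units = range(1, p)

for a in units:
    for b in units:
        for n in range(6):
            lhs = pow(a * b, n, p)                    # (ab)^n mod p
            rhs = (pow(a, n, p) * pow(b, n, p)) % p   # a^n b^n mod p
            assert lhs == rhs, (a, b, n)

print("identity holds for all tested a, b, n")
```

A group where the identity fails, as part (b) asks, must of course be non-abelian.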
Let \(G\) be a finite group with an even number of elements. Prove that there is an element \(x \in G\) with the property that \(x \neq e\) and \(x^{2}=e\).
Still a film critic by night, Ivor Smallbrain has taken up a day job as Head of Binary Operations for the huge poetry production company Identity in Verse. At the end of one sunny Tuesday, he notices
In the American option valuation problem, suppose that at time 0, the share price is \(x_{1}=\$20\). Return rates are \(u=.07\) and \(d=0\). Use the remark in the section to compute the probability
For the scenario in Exercise 1, redo the problem for values \(u=.08\) and \(u=.09\), and comment on how the solution depends on \(u\).
For the scenario in Exercise 1, redo the problem for values of the exercise price \(E=\$ 22\) and \(E=\$ 24\), and comment on how the solution depends on \(E\).
For the scenario in Exercise 1, redo the problem for values of the discount factor \(\alpha=.95\) and \(\alpha=.9\), and comment on how the solution depends on \(\alpha\).
A put option is an option that allows its owner to sell, rather than buy, an asset at an agreed-upon exercise price \(E\) by a certain expiration time \(T\). This option is valuable to the holder if
For the dishwasher inventory problem, Example 2, relabel the state space such that states \((0,0),(0,1)\), and \((0,2)\) are lumped together as, say, state \(0^{*}\), and states \((1,1)\) and
Redo the inventory problem, Example 2, for (a) parameter value \(r=4\); (b) parameter value \(r=6\).
In the inventory problem, Example 2, try to find the smallest value of \(r>2\) that you can for which for some state it is optimal to reorder some positive number of dishwashers. Obtain the \(r\)
In the inventory problem, Example 2, try to find the largest value of \(r
Suppose in the inventory problem of Example 2 that we are only interested in controlling the system during a particular ad campaign that ends after the third week. No additional order is placed at
Use the method of policy improvement to find the optimal policy in the dishwasher inventory example of Example 2, beginning with the policy that orders only enough to restore the inventory level to 1
Consider an inventory problem as in Example 2, but with finite time horizon \(T=2\) and no discount factor. Let \(r\) remain as a parameter and work through the backward programming method for times
Describe the possible actions of the investor and the transition probabilities under those actions. What are the per-period and terminal reward functions for the finite horizon problem? Describe the
Write in general the DP equation for the problem formulated in Exercise 13. Suppose that at time 0, the share price \(x_{0}=\$5\). The investor holds exactly 10 shares initially and the checking
To solve the problem, it is convenient at this stage to impose a continuous approximation to the actual problem: we suppose that the investor can buy and sell fractional shares of stock so that all
Redo the investment problem with \(q=.4, u=.05\), \(d=-.04, T=4\), and explain the result intuitively.
For the investment problem, show that as long as the expected change in stock price from one time to the next is positive, the optimal strategy is to invest all wealth in the stock.
Solve Example 1 of Section 6.3 by policy improvement, starting with the policy that takes action 1 at both states.
Redo Example 2 on van servicing with problem parameters: (a) \(c=400, l=300\); (b) \(c=500, l=200\).
Solve the advertising problem (Exercise 5 of Section 6.1) viewed as an infinite horizon problem with \(\alpha=.9, r=10\), and \(c=2\) by policy improvement, starting with the policy that never
Solve the reservoir problem (Exercise 13 of Section 6.3) by policy improvement, starting with the policy that releases all water present.
Write your own version of the command PolicyImprovementOneStep as described in the section.
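The book's PolicyImprovementOneStep is a Mathematica command; a Python analogue of one such step is sketched below. The data layout (per-action transition matrices `P[a]`, rewards `r[i][a]`, discount `alpha`) and the function name are my own choices, not the book's.

```python
# One step of policy improvement, sketched in Python.
# Policy evaluation is done by fixed-point iteration rather than by
# solving the linear system directly, to keep the sketch dependency-free.
def policy_improvement_one_step(P, r, alpha, policy, tol=1e-10):
    n = len(policy)
    actions = range(len(P))
    # Evaluate: V(i) = r(i, u(i)) + alpha * sum_j P[u(i)](i, j) V(j)
    V = [0.0] * n
    while True:
        V_new = [r[i][policy[i]]
                 + alpha * sum(P[policy[i]][i][j] * V[j] for j in range(n))
                 for i in range(n)]
        done = max(abs(V_new[i] - V[i]) for i in range(n)) < tol
        V = V_new
        if done:
            break
    # Improve: take the greedy action against V at each state
    improved = [max(actions,
                    key=lambda a: r[i][a]
                    + alpha * sum(P[a][i][j] * V[j] for j in range(n)))
                for i in range(n)]
    return improved, V

# Tiny hypothetical 2-state, 2-action check: action 0 stays put, action 1
# switches states; rewards favor switching out of state 1.
P = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.0, 1.0], [1.0, 0.0]]]
r = [[1.0, 0.0], [0.0, 2.0]]
policy, V = policy_improvement_one_step(P, r, 0.5, [0, 0])
print(policy)   # -> [0, 1]
```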
We have a machine that is in one of five possible conditions at each time. State 1 is the best condition, etc. down to state 5, which is the worst condition. We can replace a machine with one that
Redo the machine replacement example Exercise 6 by policy improvement, beginning with the initial policy of replacing the current machine if it is in condition 4 or worse.
Consider the machine replacement example Exercise 6, in which all costs \(C_{i}, i=1, \ldots, 6\) are replaced by \(b \cdot C_{i}\), where \(b\) is a positive constant. Show that the policy that
Create an infinite horizon problem in which the optimal policy is not unique.
Consider again Example 3 of Section 6.2 on fishery planning, but in an infinite horizon context with discount factor \(\alpha=.99\). Assume the same per-period reward function \(r\) and transition
Does the optimal policy in the fishery example (see Exercise 10) change if the reward function \(r(i, a)\) is changed to \(10 a\) ? If not, can you provide an intuitive reason for it?
Consider an infinite horizon Markov decision problem with the usual notation, and let \(V\) be a function on the state space satisfying the linear programming problem (8). Adjoin to \(E\) an
Use the linear programming formulation (8) in the remark at the end of the section to solve for the optimal value function in Example 1.Once the optimal value function is in hand, discuss how you
For the chain with transition matrix below, the rewards for states \(1-5\) are, respectively, \(1, 0, 5, 2\), and \(3\). Draw the transition diagram, and use your intuition to guess at the optimal stopping
For the chain with the transition diagram below, the number of the state is equal to its reward. Find the optimal stopping time. (The transition diagram is not reproduced here; the probabilities visible in it were 3/4, 3/4, 1/4, and 1/4.)
Consider the Markov chain of Figure 4.8, whose transition matrix is reproduced below. Suppose that the reward function is \(f(i)=7-i\), \(i=1, \ldots, 6\). Find intuitively the optimal stopping time.
Let \(\left(X_{n}\right)\) be a Markov chain with the transition matrix below. Show that the constant function \(f=2\) is excessive (i.e., \(f \geqslant T \cdot f\)). More generally, show that for
A shady character has a sports betting operation. Each month he either makes one more monetary unit, or else he is closed down by the police and all profits are confiscated. The latter occurs with
You have a contract called an option to purchase a share of stock when you desire, at the fixed price of 3 monetary units. The day-to-day price of the stock follows a Markov chain with the transition
On a television game show, a contestant is offered a sequence of prizes, which are independent and identically distributed random variables taking possible values \(\$ 1000, \$ 2000\), and \(\$
Suppose, in the optimal stopping problem, that there is a discount factor of \(\alpha \in(0,1)\) per period. That is, the reward collected when the game is stopped at time \(S\) is only \(\alpha^{S}
Suppose that the reward function \(f\) itself in an optimal stopping problem is excessive. Show that the optimal strategy is to stop immediately.
Can you develop a policy improvement approach to the optimal stopping problem? Try it on the problem of Figure 6.4.
In the game show "Who Wants To Be A Millionaire?," a contestant is given a sequence of multiple choice questions with four alternative answers. The contestant can choose to keep the amount of money
Argue that for fixed \(i\), the maximum in the optimal value function \(W(i)=\max _{\mathbf{u}} W(i, \mathbf{u})\) among only all stationary policies must be assumed by some policy. Does your
Show that if the reward function \(r\) of a Markov decision problem is bounded in absolute value by a constant \(c\), then for any policy \(\mathbf{u}\), the infinite horizon discounted value
Write an expression similar to (1) relating the value of a policy \(\mathbf{u}=\left(u_{0}, u_{1}, u_{2}, u_{3}, \ldots\right)\) to that of \(\mathbf{u}^{2}=\left(u_{2}, u_{3}, \ldots\right)\).
Prove inequality (6) as suggested in the proof of Theorem 2.
For the two-state, two-action problem (Example 1), write a Mathematica program to compute the sequence of functions generated by the method of successive approximations, until a termination condition
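The exercise asks for a Mathematica program; the same successive-approximation loop is sketched here in Python instead. The rewards are the ones quoted for the two-state example later in this listing (\(r(A,1)=4\), \(r(B,1)=3\), \(r(A,2)=2\), \(r(B,2)=5\)), but the transition probabilities below are placeholders, since Example 1's matrices are not reproduced on this page.

```python
# Successive approximations (value iteration) for a 2-state, 2-action
# MDP. Transition probabilities are hypothetical stand-ins.
alpha = 0.9
r = {('A', 1): 4, ('B', 1): 3, ('A', 2): 2, ('B', 2): 5}
P = {1: {'A': {'A': 0.5, 'B': 0.5}, 'B': {'A': 0.25, 'B': 0.75}},
     2: {'A': {'A': 0.8, 'B': 0.2}, 'B': {'A': 0.6, 'B': 0.4}}}

w = {'A': 0.0, 'B': 0.0}
while True:
    w_new = {i: max(r[(i, a)] + alpha * sum(P[a][i][j] * w[j] for j in w)
                    for a in (1, 2))
             for i in w}
    diff = max(abs(w_new[i] - w[i]) for i in w)
    w = w_new
    if diff < 1e-8:       # termination condition
        break
print(w)
```

Since the rewards are bounded by 5, the limit is bounded by \(5/(1-\alpha)=50\), which gives a sanity check on the output.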
Using the same problem parameters as in Example 1 and the initial function \(w_{0}(1)=w_{0}(2)=0\), estimate analytically how large \(n\) must be so that \(w_{n}\) is within .1 of the optimal value
Redo Example 1, changing \(T_{1}(2,1)\) to \(7 / 8\) and \(r(1,1)\) to 8 .
Redo Example 2, changing the costs to \(C_{1}=4, C_{2}=6, C_{3}=3, C_{4}=5\). Keep \(\alpha\) set at .9.
(a) Redo Example 2, changing \(\alpha\) to .5 .(b) Redo Example 2, changing \(\alpha\) to .95 .(c) What happens to the solution \(W(1)\) of the dynamic programming equation as \(\alpha
Let us expand the model of Example 2. Suppose now that the machine can be in "like new" condition (state 0), "badly deteriorated" condition (state 4), or one of three intermediate states of
(a) Recall the advertising problem (Exercise 5 of Section 6.1). Considering the problem as an infinite horizon discounted reward problem with discount factor \(\alpha=.9\), write the DP equation.(b)
For the advertising problem (Exercise 5 of Section 6.1) viewed as an infinite horizon problem with discount factor \(\alpha=.9\), use the parameters in Exercise 11(c) to find the value of the policy
A reservoir holds 3 units of water. We will control the chain defined by \(X_{n}=\#\) units of water in the reservoir at the beginning of month \(n\), by deciding how much water to release from the
Working by hand (rather than using the DPEquation command), find the optimal policy for the rocket production problem, Example 2, if the cost function \(r(x, a)\) is changed to 16+8a if x 1 and a > 0
(a) Use the DPEquation command to confirm the computations in Example 2.(b) Repeat the solution of Example 2 holding \(r\) as it is, but with values of the terminal cost function of (i) 50 ; (ii) 45
In Section 1 we introduced an example with two states and two actions in which the reward function was \(r(A, 1)=4, r(B, 1)=3\), \(r(A, 2)=2, r(B, 2)=5\) and the transition matrices were as below.
A house has a simple thermostat that can be set at 1 to turn the furnace on, and 0 to turn it off. Potential changes in setting take effect every 10 minutes. If the thermostat is set to 0, in a 10
In Exercise 1 of Section 6.1, suppose that the single period reward function is \(r(i, a)=i-a\), and at the terminal time \(T=4\), a final reward \(R\left(X_{4}\right)=X_{4}\) is received. Find the
In the fishery example of Example 3, assume that there are only two fishing seasons under study, and that the net benefit per unit of fish harvested is 5 and the net benefit per unit remaining at the
Using the original problem parameters of Example 3, find the smallest time horizon \(T\) such that it is beneficial to harvest fish at some time prior to that horizon.
For the two-state, two-action Markov decision process with transition matrices and per period reward function as below, consider the finite horizon problem with time horizon \(T=4\) and terminal
Let us presume that the dynamic programming equation (6) still holds when the state and action spaces are not finite, for the purposes of the following problem. An owner of a baseball team can spend
A person has \(\$ 4000\) available initially for investment in two risky ventures A and B. Venture A will return nothing in a time period with probability \(2 / 3\), and will return \(\$ 3000\) per
The following is a deterministic dynamic programming problem. A company is planning a marketing strategy for a new product. There are three phases of the plan: (1) an introductory low price; (2) a
Solve Exercise 12 of Section 6.1 on immigration if the time horizon is \(T=4\) and the probability of population increase is \(p=1 / 2\).
Consider the Markov chain with the transition diagram below. Suppose that there are two possible actions, labelled 0 and 1. Under action 0, the chain moves according to this transition diagram, and
Prove formula (4)(b).
If the state space of a Markov decision process has size 4, the action space has size 3, and all actions are admissible at all states, then how many stationary policies are there? How many admissible
Below is a directed graph seen earlier in the book as Figure 1.26. Find the shortest paths in the graph from vertices 7, 8, and 9 to vertex 10, and use these to find the shortest paths from vertices
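Figure 1.26 is not reproduced on this page, so the graph below uses made-up edge lengths; it only illustrates the backward-recursion idea the exercise is driving at: once the shortest distances from vertices 7, 8, 9 to vertex 10 are known, distances from the earlier vertices follow by minimizing over a single outgoing edge.

```python
# Backward shortest-path recursion on a hypothetical staged graph
# (edge lengths invented for illustration; not Figure 1.26's data).
edges = {4: {7: 2, 8: 5}, 5: {7: 4, 8: 1, 9: 6}, 6: {8: 3, 9: 2},
         7: {10: 7}, 8: {10: 4}, 9: {10: 5}}

dist = {10: 0}
for v in (9, 8, 7, 6, 5, 4):      # process vertices in reverse stages
    dist[v] = min(w + dist[u] for u, w in edges[v].items())
print(dist)   # e.g. dist[4] = 9 via 4 -> 7 -> 10
```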
An advertising agency will conduct a campaign for a new soft drink. The agency follows the share of the market possessed by the soft drink, in increments of \(5 \%\), from month to month. Each month
Consider the Markov decision process illustrated by Figure 6.2. Suppose that the time horizon is \(T=3\), the terminal reward function is \(R(x)=0 ; x=A, B\), and the per period reward function is
How many stationary policies are there in the problem of Exercise 5?
How many admissible feedback policies are there in the fish harvesting problem of Example 3?
For the Markov decision process of Exercise 1, find the transition matrix corresponding to the stationary policy \(\mathbf{u}\), and calculate \(E_{\mathbf{u}}\left[\sum_{n=0}^{2} R_{n} \mid
For the Markov decision process of Exercise 1, calculate the expectation \(E_{\mathbf{u}}\left[\sum_{n=0}^{2} R_{n} \mid X_{0}=3\right]\) for the non-stationary policy for which \(u_{0}(i)=0\) for
At a small cellular phone company, servers must spend a half hour discussing options with each possible customer who comes in. During any half-hour period, either 0, 1, or 2 customers will come in,
The population of a country can be approximately modelled so that it has a value of either 0 units, 1 unit, 2, 3, 4, or 5 units. The population undergoes a natural change from one time period to the
(a) Show that if \(A\) is a countable set and \(B\) is a finite set, then \(A \cup B\) is countable.(b) Show that if \(A\) and \(B\) are both countable sets, then \(A \cup B\) is countable.
(a) Show that if each of the sets \(S_{n}(n=1,2,3, \ldots)\) is countable, then the union \(S=\bigcup_{n=1}^{\infty} S_{n}\) is also countable.(b) Show that if \(S\) and \(T\) are countable sets,
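The standard proof of part (a) lists the elements of \(S=\bigcup_n S_n\) along diagonals. Writing the \(k\)-th element of \(S_n\) as the pair \((n,k)\), the enumeration below walks the diagonals \(n+k=\text{const}\), so every pair is reached after finitely many steps (the function name is my own):

```python
# Diagonal (Cantor-style) enumeration of the pairs (n, k), n, k >= 1.
def diagonal_pairs(limit):
    """Return the first `limit` pairs, listed along diagonals n + k = s."""
    out = []
    s = 2
    while len(out) < limit:
        for n in range(1, s):
            out.append((n, s - n))
            if len(out) == limit:
                break
        s += 1
    return out

print(diagonal_pairs(6))   # -> [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]
```

Because pair \((n,k)\) appears no later than position \(\tfrac{1}{2}(n+k-1)(n+k-2)+n\), this really is a surjection from \(\mathbb{N}\) onto the union.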
Write down a sequence \(z_{1}, z_{2}, z_{3}, \ldots\) of complex numbers with the following property: for any complex number \(w\) and any positive real number \(\varepsilon\), there exists \(N\)
Let \(S\) be the set consisting of all infinite sequences of 0s and 1s (so a typical member of \(S\) is \(010011011100110 \ldots\), going on forever). Use Cantor's
(a) Let \(S\) be the set consisting of all the finite subsets of \(\mathbb{N}\). Prove that \(S\) is countable.(b) Let \(T\) be the set consisting of all the infinite subsets of \(\mathbb{N}\). Prove
Every Tuesday, critic Ivor Smallbrain drinks a little too much, staggers out of the pub, and performs a kind of random walk towards his home. At each step of this walk, he stumbles either forwards or
Which of the following sets \(S\) have an upper bound and which have a lower bound? In the cases where these exist, state what the least upper bounds and greatest lower bounds are.(i)
Write down proofs of the following statements about sets \(A\) and \(B\) of real numbers:(a) If \(x\) is an upper bound for \(A\), and \(x \in A\), then \(x\) is a least upper bound for \(A\).(b) If
Prove that if \(S\) is a set of real numbers, then \(S\) cannot have two different least upper bounds or greatest lower bounds.
Find the LUB and GLB of the following sets:(i) \(\left\{x \mid x=2^{-p}+3^{-q}\right.\) for some \(\left.p, q \in \mathbb{N}\right\}\)(ii) \(\left\{x \in \mathbb{R} \mid 3 x^{2}-4 x
(a) Find a set of rationals having rational LUB.(b) Find a set of rationals having irrational LUB.(c) Find a set of irrationals having rational LUB.
Which of the following statements are true and which are false?(a) Every set of real numbers has a GLB.(b) For any real number \(r\), there is a set of rationals having GLB equal to \(r\).(c) Let \(S
Prove that the cubic equation \(x^{3}-x-1=0\) has a real root (i.e., prove that there exists a real number \(c\) such that \(c^{3}-c-1=0\) ).
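A numerical companion to the existence proof: since \(f(x)=x^{3}-x-1\) satisfies \(f(1)=-1<0<5=f(2)\), the intermediate value theorem gives a root in \([1,2]\), and bisection locates it (approximately 1.3247):

```python
# Bisection on f(x) = x^3 - x - 1 over [1, 2], where f changes sign.
def f(x):
    return x**3 - x - 1

lo, hi = 1.0, 2.0        # f(lo) < 0 < f(hi)
for _ in range(60):      # each step halves the bracket
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))      # -> 1.3247
```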
Here is an exercise, not for the faint-hearted, leading you through the rigorous construction of the real numbers from the rationals \(\mathbb{Q}\) and proving the Completeness Axiom. Call a subset
Let \(x_{1}, x_{2}, x_{3}, \ldots\) be a sequence of real numbers (going on forever). For any integer \(n \geq 1\), define \(T_{n}\) to be the set \(\left\{x_{n}, x_{n+1}, \ldots\right\}\). (So, for
Which of the following sequences \(\left(a_{n}\right)\) are convergent and which are not? For the convergent sequences, find the limit.(i) \(a_{n}=\frac{n}{n+5}\).(ii)
Prove that the limit of a sequence, if it exists, is unique: in other words, if \(\left(a_{n}\right)\) is a sequence such that \(a_{n} \rightarrow a\) and \(a_{n} \rightarrow b\), then \(a=b\).
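One standard route for this exercise, sketched in LaTeX (the choice of \(\varepsilon=\tfrac{1}{2}|a-b|\) is the usual one, not the only one):

```latex
% Sketch: assume a \neq b and derive a contradiction.
Suppose $a_n \to a$ and $a_n \to b$ with $a \neq b$, and set
$\varepsilon = \tfrac{1}{2}|a-b| > 0$. Choose $N_1$ with
$|a_n - a| < \varepsilon$ for all $n \geq N_1$, and $N_2$ with
$|a_n - b| < \varepsilon$ for all $n \geq N_2$. Then for any
$n \geq \max(N_1, N_2)$ the triangle inequality gives
\[
  |a - b| \leq |a - a_n| + |a_n - b| < 2\varepsilon = |a - b|,
\]
a contradiction. Hence $a = b$.
```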
Let \(S\) be a non-empty set of real numbers, and suppose \(S\) has least upper bound \(c\). Prove that there exists a sequence \(\left(s_{n}\right)\) such that \(s_{n} \in S\) for all \(n\) and
For each of the following sequences \(\left(a_{n}\right)\), decide whether it is (a) bounded, (b) increasing, (c) decreasing, (d) convergent:(i) \(a_{n}=\frac{n^{3}}{n^{3}-1}\).(ii) \(a_{n}=2^{1 /
Showing 1-100 of 575