Question
Miskien & Co.
Trial balance at 31 December 2009

                                      Dr        Cr
Cash                               6,700
Debtors                           23,800
Insurance                          3,400
Stock, 1 January 2009              1,950
Land                              50,000
Building at cost                 141,500
Accumulated depr - buildings                91,700
Equipment at cost                 90,100
Accumulated depr - equipment                65,300
Creditors                                    7,500
Interest receivable                          6,000
Capital account - Miskien                   81,500
Drawings - Miskien                10,000
Sales                                      218,400
Purchases                         80,200
Electricity and gas               28,200
Advertising                       19,000
Repairs                           11,500
Salaries                           4,050
                                 -------   -------
                                 470,400   470,400

The following year-end adjustments need to be made:

(i)   Prepaid insurance at 31 December 2009                    800
(ii)  Stock on hand at 31 December 2009                        450
(iii) Depreciation of buildings for the year                 1,620
(iv)  Depreciation of equipment for the year                 5,500
(v)   Interest due to Miskien at 31 December 2009              500
(vi)  Accrued salaries and wages at 31 December 2009         2,000
(vii) Sales delivered but not invoiced at 31 December 2009   2,750

You are required to show the above adjustments as journal entries and then to prepare an adjusted trial balance. You are then required to prepare the profit and loss account and balance sheet for the year.
Each of the expressions on the right-hand side is already known. The value of P(E) is 6/11,
and the value of P(E|D) is 5/6. Furthermore, the prior probability P(D) before knowing
the age is 6/11. Consequently, the posterior probability may be estimated as follows:
$$P(D|E) = \frac{(5/6)\,(6/11)}{6/11} = 5/6. \qquad (10.17)$$
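As a quick arithmetic check, the quantities quoted above can be plugged into the Bayes identity directly; the few lines of Python below reproduce Equation 10.17 using exact rational arithmetic.

```python
from fractions import Fraction

# Quantities quoted in the text above
p_E = Fraction(6, 11)          # P(E): fraction of individuals with Age > 50
p_D = Fraction(6, 11)          # P(D): prior fraction of donors
p_E_given_D = Fraction(5, 6)   # P(E|D): fraction of donors with Age > 50

# Bayes' theorem: P(D|E) = P(E|D) * P(D) / P(E)
p_D_given_E = p_E_given_D * p_D / p_E
print(p_D_given_E)             # 5/6, matching Equation 10.17
```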
Therefore, if we had 1-dimensional training data containing only the Age, along with the
class variable, the probabilities could be estimated using this approach. Table 10.1 contains
an example with training instances satisfying the aforementioned conditions. It is also easy
to verify from Table 10.1 that the fraction of individuals above age 50 who are donors is
5/6, which is in agreement with the Bayes theorem. In this particular case, the Bayes theorem
is not really essential because the classes can be predicted directly from a single attribute of
the training data. A question arises as to why the indirect route of using the Bayes theorem
is useful if the posterior probability P(D|E) could be estimated directly from the training
data (Table 10.1) in the first place. The reason is that the conditional event E usually
corresponds to a combination of constraints on d different feature variables, rather than a
single one. This makes the direct estimation of P(D|E) much more difficult. For example, the
probability P(Donor|Age > 50, Salary > 50,000) is harder to robustly estimate from the
training data because there are fewer instances in Table 10.1 that satisfy both the conditions
on age and salary. This problem increases with increasing dimensionality. In general, for a d-dimensional test instance with d conditions, it may be the case that not even a single tuple
in the training data satisfies all these conditions. Bayes rule provides a way of expressing
P(Donor|Age > 50, Salary > 50,000) in terms of P(Age > 50, Salary > 50,000|Donor).
The latter is much easier to estimate with the use of a product-wise approximation known
as the naive Bayes approximation, whereas the former is not.
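To see the sparsity problem concretely, the sketch below estimates P(Donor|Age > 50, Salary > 50,000) by direct counting on a small data set. Since Table 10.1 is not reproduced here, the records used are a hypothetical stand-in for it, not its actual contents.

```python
# Hypothetical stand-in records (age, salary, is_donor); Table 10.1
# itself is not reproduced here.
data = [
    (55, 60000, True), (62, 45000, True), (70, 80000, True),
    (51, 30000, False), (58, 90000, True), (66, 55000, True),
    (25, 40000, False), (30, 70000, False), (45, 20000, False),
    (38, 65000, True), (49, 35000, False),
]

# Direct estimate of P(Donor | Age > 50, Salary > 50,000): only the
# records satisfying BOTH conditions contribute to the estimate.
matching = [donor for (age, salary, donor) in data
            if age > 50 and salary > 50000]
print(len(matching))                  # few qualifying records
print(sum(matching) / len(matching))  # fragile direct estimate

# With d conditions instead of two, `matching` is frequently empty and
# the direct estimate is undefined: the motivation for using Bayes rule.
```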
For ease in discussion, it will be assumed that all feature variables are categorical. The
numeric case is discussed later. Let C be the random variable representing the class variable
of an unseen test instance with d-dimensional feature values X = (a1, ..., ad). The goal is to
estimate P(C = c|X = (a1, ..., ad)). Let the random variables for the individual dimensions of
X be denoted by X = (x1, ..., xd). Then, it is desired to estimate the conditional probability
P(C = c|x1 = a1, ..., xd = ad). This is difficult to estimate directly from the training
data because the training data may not contain even a single record with attribute values
(a1, ..., ad). Then, by using the Bayes theorem, the following equivalence can be inferred:
$$P(C = c \,|\, x_1 = a_1, \ldots, x_d = a_d) = \frac{P(C = c)\, P(x_1 = a_1, \ldots, x_d = a_d \,|\, C = c)}{P(x_1 = a_1, \ldots, x_d = a_d)} \qquad (10.18)$$

$$\propto P(C = c)\, P(x_1 = a_1, \ldots, x_d = a_d \,|\, C = c). \qquad (10.19)$$
The second relationship above is based on the fact that the term P(x1 = a1, ..., xd = ad)
in the denominator of the first relationship is independent of the class. Therefore, it
suffices to only compute the numerator to determine the class with the maximum conditional
probability. The value of P(C = c) is the prior probability of the class identifier c and
can be estimated as the fraction of the training data points belonging to class c. The key
usefulness of the Bayes rule is that the terms on the right-hand side can now be effectively
approximated from the training data with the use of a naive Bayes approximation. The
naive Bayes approximation assumes that the values of the different attributes x1, ..., xd are
independent of one another conditional on the class. When two random events A and B are
independent of one another conditional on a third event F, it follows that P(A ∩ B|F) =
P(A|F)P(B|F).
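Putting Equation 10.19 together with this conditional independence assumption, a naive Bayes classifier for categorical data reduces to frequency counting. The sketch below is a minimal illustration of that idea under hypothetical feature names and data (it is not code from the text): each class c is scored by P(C = c) multiplied by the product of the per-dimension conditionals P(xj = aj|C = c).

```python
from collections import Counter, defaultdict

def train(records, labels):
    """Estimate P(C=c) and P(x_j=a_j | C=c) by frequency counts."""
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (class, dimension) -> value counts
    for x, c in zip(records, labels):
        for j, a in enumerate(x):
            cond[(c, j)][a] += 1
    return prior, cond

def predict(x, prior, cond, n):
    """Pick the class maximizing P(C=c) * prod_j P(x_j=a_j | C=c) (Eq. 10.19)."""
    best_c, best_score = None, -1.0
    for c, count_c in prior.items():
        score = count_c / n  # prior: fraction of training points in class c
        for j, a in enumerate(x):
            score *= cond[(c, j)][a] / count_c  # per-dimension conditional
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# Hypothetical categorical training data: (age bracket, salary bracket).
X = [("old", "high"), ("old", "low"), ("young", "high"),
     ("old", "high"), ("young", "low"), ("old", "high")]
y = ["donor", "donor", "non-donor", "donor", "non-donor", "donor"]

prior, cond = train(X, y)
print(predict(("old", "high"), prior, cond, len(X)))  # -> 'donor'
```

A practical implementation would additionally smooth the counts (for example, with Laplacian smoothing) so that a single unseen attribute value does not drive an entire class score to zero.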
Problem 1-09 (Algorithmic)

Suppose the following is the mathematical model:

Max 13x
s.t. ax <= 40
     x >= 0

where a is the number of hours required for each unit produced. With a = 5, the optimal solution is x = 8.00. If we have a stochastic model with a = 3, a = 4, a = 5, or a = 6 as the possible values for the number of hours required per unit, what is the optimal value for x? Round your answers for the optimal solution to two decimal places. Round the answers for profit to the nearest dollar.

If a = 3, x = ___ and profit = $___
If a = 4, x = ___ and profit = $___
If a = 5, x = ___ and profit = $___
If a = 6, x = ___ and profit = $___

What problems does this stochastic model cause? The problem with this stochastic model is ___ and therefore the values of ___ are not known with certainty.
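Because the objective 13x increases in x and the only restriction is ax <= 40, the optimum lies where the constraint binds, i.e. x* = 40/a with profit 13x*. The snippet below is a sketch based on that observation; it evaluates each candidate value of a with the rounding the problem asks for.

```python
# max 13x  s.t.  a*x <= 40,  x >= 0.
# The objective grows with x, so the constraint binds at the optimum: x* = 40/a.
for a in (3, 4, 5, 6):
    x = 40 / a
    profit = 13 * x
    print(f"a = {a}: x = {x:.2f}, profit = ${profit:.0f}")
```

The spread in these values illustrates what the final question is probing: when a is uncertain, the optimal x and the resulting profit vary with it.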