
Question

7.2 (8*) Write a computer program, using the algorithm in Example 7.2, for simulating the hard-core model on a general $k \times k$ square grid. Then do some simulation experiments.

Algorithm in Example 7.2 (the single-site chain is reversible with respect to the hard-core distribution $\mu_G$):

$\mu_G(\xi) \, P_{\xi,\xi'} = \frac{1}{Z_G} \cdot \frac{1}{2k^2} = \mu_G(\xi') \, P_{\xi',\xi}$
When you have managed to do this for, say, a $10 \times 10$ square lattice, consider the following: Think back to Example 2.3 (the Internet as a Markov chain). Did that example seem to have a ridiculously huge state space? Well, you have just simulated a Markov chain whose state space is even bigger! It is not hard to show that the state space $S$ as defined in (43) contains at least $2^{k^2/2} = 2^{50} \approx 1.1 \times 10^{15}$ elements, much larger than the number of web pages on the Internet today.
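As a starting point for such a program, here is a minimal Python sketch (not part of the original question) of a single-site update chain consistent with the detailed-balance relation above: at each step a vertex of the $k \times k$ grid is chosen uniformly at random, a fair coin is tossed, and the vertex is set to 1 only if the coin shows heads and all of its neighbours are 0. The function name hard_core_gibbs, the all-zero starting configuration, and the choice of Python are illustrative assumptions.

import random

def hard_core_gibbs(k, n_steps, seed=None):
    """Simulate the hard-core model on a k x k grid with the assumed
    single-site update rule: pick a vertex uniformly at random, toss a
    fair coin, and occupy the vertex only if the coin is heads and all
    of its neighbours are currently unoccupied."""
    rng = random.Random(seed)
    grid = [[0] * k for _ in range(k)]  # all-zero start is always feasible
    for _ in range(n_steps):
        i, j = rng.randrange(k), rng.randrange(k)
        heads = rng.random() < 0.5
        neighbours_empty = all(
            grid[i + di][j + dj] == 0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= i + di < k and 0 <= j + dj < k
        )
        grid[i][j] = 1 if heads and neighbours_empty else 0
    return grid

if __name__ == "__main__":
    # A simple simulation experiment: run the chain on a 10 x 10 grid
    # and count how many sites are occupied in the resulting sample.
    sample = hard_core_gibbs(k=10, n_steps=100_000, seed=0)
    print(sum(map(sum, sample)))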
Example 2.3: The Internet as a Markov chain. Imagine that you are surfing on
the Internet, and that each time that you encounter a web page, you click on one
of its hyperlinks chosen at random (uniformly). If $X_n$ denotes where you are after
$n$ clicks, then $(X_0, X_1, \ldots)$ may be described as a Markov chain with state space
$S$ equal to the set of all web pages on the Internet, and transition matrix $P$ given
by

$P_{ij} = \begin{cases} \frac{1}{d_i} & \text{if page } s_i \text{ has a link to page } s_j \\ 0 & \text{otherwise,} \end{cases}$

where $d_i$ is the number of links from page $s_i$. (To make this chain well-defined,
we also need to define what happens if there are no links at all from $s_i$. We may,
for instance, set $P_{ii} = 1$ (and $P_{ij} = 0$ for all $j \neq i$) in that case, meaning that
when you encounter a page with no links, you are stuck.) This is of course a very
complicated Markov chain (especially compared to Examples 2.1 and 2.2), but it
has nevertheless turned out to be a useful model which under various simplifying
assumptions admits interesting analysis.
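As a concrete illustration of this transition rule (not part of the book's text), here is a small Python sketch of one step of the random surfer on a toy link structure; the dictionary links and the four page names are hypothetical stand-ins for the real web graph, and a page with no outgoing links keeps the surfer in place, matching the $P_{ii} = 1$ convention above.

import random

def surf_step(current, links, rng=random):
    # Move to a uniformly chosen outgoing link of the current page,
    # i.e. P_ij = 1/d_i for linked pages; stay put if there are no links.
    out = links.get(current, [])
    return rng.choice(out) if out else current

# Hypothetical toy web of four pages, for illustration only.
links = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b", "d"], "d": []}
page = "a"
for _ in range(10):
    page = surf_step(page, links)
print(page)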
A recent variant (see Fagin et al. [Fa]) of this model is to take into account
also the possibility to use back buttons in web browsers. However, the resulting
process $(X_0, X_1, \ldots)$ is then no longer a Markov chain, since what happens when
the back button is pressed depends not only on the present state $X_n$, but in general
also on $X_0, \ldots, X_{n-1}$. Nevertheless, it turns out that this variant can be studied
by a number of techniques from the theory of Markov chains. We will not say
anything more about this model here.
A useful way to picture a Markov chain is its so-called transition graph. The
transition graph consists of nodes representing the states of the Markov chain,
and arrows between the nodes, representing transition probabilities. This is
most easily explained by just showing the transition graphs of the examples
considered so far. See Figure 2.
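Figure 2 is not reproduced in this excerpt, but the construction it illustrates can be sketched in a few lines of Python: for a given transition matrix, the transition graph simply has one arrow from state i to state j, labelled with $P_{ij}$, for each nonzero entry. The two-state matrix below is a hypothetical example, not one of the book's.

def transition_graph_edges(P, states):
    # One labelled arrow per nonzero transition probability.
    return [
        (states[i], states[j], P[i][j])
        for i in range(len(P))
        for j in range(len(P))
        if P[i][j] > 0
    ]

# Hypothetical two-state chain, just to show the output format.
P = [[0.75, 0.25],
     [0.50, 0.50]]
for src, dst, p in transition_graph_edges(P, ["s1", "s2"]):
    print(f"{src} -> {dst}: {p}")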
In all examples above, as well as in Definition 2.1, the rule for obtaining
$X_{n+1}$ from $X_n$ did not change with time. In some situations, it is more realistic,
or for other reasons more desirable, to let this rule change with time. This
brings us to the topic of inhomogeneous Markov chains, and the following
definition, which generalizes Definition 2.1.
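The definition itself is not reproduced in this excerpt, but the idea of a time-dependent update rule can be made concrete with a short Python sketch in which the transition matrix used at step n depends on n; the alternating matrices P_even and P_odd are purely hypothetical.

import random

def inhomogeneous_step(state, n, matrices, rng=random):
    # At time n the chain uses the matrix matrices[n], so the
    # transition rule is allowed to change from step to step.
    row = matrices[n][state]
    return rng.choices(range(len(row)), weights=row)[0]

# Hypothetical example: two different transition matrices,
# used at even and odd time steps respectively.
P_even = [[0.9, 0.1], [0.1, 0.9]]
P_odd = [[0.5, 0.5], [0.5, 0.5]]
matrices = [P_even if n % 2 == 0 else P_odd for n in range(10)]

x = 0
for n in range(10):
    x = inhomogeneous_step(x, n, matrices)
print(x)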
