Questions and Answers of Fundamentals, Simulations and Advanced Topics
5.18 This exercise guides you through a direct proof of the impossibility of 1-resilient consensus for shared memory. Assume A is a 1-resilient consensus algorithm for n processors in shared memory.
5.17 In the transaction commit problem for distributed databases, each of n processors forms an independent opinion whether to commit or abort a distributed transaction. The processors must come to
5.16 Consider a variation of the consensus problem in which the validity condition is the following: There must be at least one admissible execution with decision value 0, and there must be at least
5.15 Prove Lemma 5.16. That is, assume there is a wait-free consensus algorithm for the asynchronous shared memory system and prove that it has a bivalent initial configuration.
5.14 Assuming n is sufficiently large, modify the polynomial message algorithm of Section 5.2.5 to satisfy the stronger validity condition of Exercise 5.12.
5.13 Assuming n is sufficiently large, modify the exponential message algorithm of Section 5.2.4 to satisfy the stronger validity condition of Exercise 5.12.
5.12 Show that to satisfy the stronger validity condition (every nonfaulty decision is some nonfaulty input) for Byzantine failures, n must be greater than max(3, m)f, where m is the size of the
5.11 Modify the exponential information gathering algorithm in Section 5.2.4 to reduce the number of messages to be O(f + n).
12.13 (a) Show that if A can simulate B, then A can simulate B with respect to the nonfaulty processors. (b) Show that if A can simulate B with respect to the nonfaulty processors, then A can
12.12 Show that the time complexity of Algorithm 39 is O(1). That is, a message broadcast by a nonfaulty processor is received by all nonfaulty processors within O(1) time.
12.11 Modify the algorithm of Section 12.7.3 to simulate synchronous identical Byzantine faults assuming n > 3f and using three rounds for each simulated round.
12.10 Modify the algorithm of Section 12.3.2 to simulate asynchronous identical Byzantine faults using only two types of messages. Assume n > 4f. What is the asynchronous time complexity of this
12.9 Show how to avoid validation of messages and use the simulation of identical Byzantine on top of Byzantine to get a simulation of Algorithm 15 with smaller messages. Hint: Note that in this
12.8 Show a simulation of crash failures on top of send omission failures that assumes only that n > f. (Informally speaking, in the send omission failure model, a faulty processor can either crash
12.7 Prove that the algorithm for consensus in the presence of crash failures (Algorithm 15) is correct even in the presence of omission failures.
12.6 In the simulation of crash failures on omission failures (Section 12.5), why do we need processors to accept messages echoed by other processors?
12.5 What happens in the simulation of crash failures on omission failures (Section 12.5) if n
12.4 Show how to reduce the size of messages in the synchronous simulation of identical Byzantine failures (Algorithm 36).
12.3 Explain why the following synchronous algorithm does not solve the consensus problem: Each processor broadcasts its input using Algorithm 36. Each processor waits for two rounds and then
12.2 Show that assuming processors are nonfaulty and the network corrupts messages is equivalent to assuming processors are faulty and the network does not corrupt messages.
12.1 Show that there is no loss of generality in assuming that at each round a processor sends the same message to all processors.
11.8 If a specification is not internal, does that mean it cannot be implemented in an asynchronous system?
11.7 Is mutual exclusion an internal problem?
11.6 Apply synchronizer ALPHA to both of the synchronous leader election algorithms in Chapter 3. What are the resulting time and message complexities? How do they compare to the lower bounds for
11.5 What are the worst-case time and message complexities of the asynchronous BFS tree algorithm that results from applying synchronizer ALPHA? What network topology exhibits this worst-case
11.4 Prove Lemma 11.3.
11.3 Show how to bound the space complexity of synchronizer ALPHA.
11.2 Is wait-free consensus possible in the system SynchP? What if there is at most one failure? Hint: Think about Exercise 11.1.
11.1 Does logical buffering work for simulating system SynchP by system AsynchP in the presence of crash failures? If so, why? If not, then modify it to do so. What about Byzantine failures?
10.13 Show a direct simulation of a single-writer multi-reader register from message passing extending the algorithm of Section 10.4, without using an extra layer of sequence numbers. Prove the
10.12 Show how to combine Algorithm 27 and the simulation of a single-writer single-reader register from message passing (Section 10.4) to obtain a simulation of a single-writer multi-reader
10.11 Construct an execution in which multiple readers run the simulation of a single-writer single-reader register in the message-passing model (Section 10.4) and experience a new-old inversion.
10.10 Prove Theorem 10.21.
10.9 Prove Lemma 10.20.
10.8 Prove Lemma 10.17.
10.7 Prove the properties in Theorem 10.10 when i and j are reversed.
10.6 Prove Lemma 10.6.
10.5 Does there exist a wait-free simulation of an n-reader register from single-reader registers in which only one reader writes, when n > 2? Does there exist such a simulation for n > 2 readers in
10.4 In the proof of Theorem 10.3, show that w_j must be a write to a register in S_i, for i = 1, 2.
10.3 Suppose we attempt to fix the straw man multi-reader algorithm of Section 10.2.2 without having the readers write, by having each reader read the array B twice. Show a counterexample to this
10.2 Show that the two definitions of wait-free simulation discussed in Section 10.1 are equivalent.
10.1 Expand on the critical section idea for simulating shared memory (in a non-fault-tolerant way).
9.11 (a) An operation of a data type is called an accessor if, informally speaking, it does not change the "state" of the object. Make a formal definition of accessor using the notion of legal
9.10 This exercise asks you to generalize the proof of Theorem 9.8 (the lower bound on t_write). (a) Consider a shared object (data type) specification with the following property: There exists a sequence p of
9.9 Develop a linearizable algorithm for implementing shared objects of other data types besides read/write registers, for instance, stacks and queues. Try to get the best time complexity for
9.8 What happens to Theorem ... if the assumption about the number of distinct readers and writers is removed?
9.7 Show that if u = 0, then local read and local write algorithms are possible for linearizability.
9.6 For each c between 0 and u, describe an algorithm for sequential consistency in which reads take time d-c and writes take time c.
9.5 Present a schedule of the local writes algorithm (Algorithm 25) that is sequentially consistent but is not linearizable.
9.4 Prove that the response time of the totally ordered broadcast algorithm of Section 8.2.3.2 (Algorithm 21) is 2d.
9.3 Prove that sequential consistency is not composable. That is, present a schedule that is not sequentially consistent but whose projection on each object is sequentially consistent.
9.2 Prove that linearizability is local, that is, if we compose separate implementations of linearizable shared variables x and y, the result is also linearizable.
9.1 Prove that an algorithm that locally simulates a linearizable shared memory provides sequential consistency.
8.11 Show that broadcast with total ordering implies multiple-group ordering.
8.10 Prove that totally ordered reliable broadcast cannot be implemented on top of an asynchronous point-to-point message system. Hint: Use reduction to the consensus problem.
8.9 Show that Algorithm 22 does not provide total ordering, by explicitly constructing an execution in which (concurrent) messages are not received in the same order by all processors.
8.8 Can the vector timestamps used in Algorithm 22 be replaced with ordinary (scalar) timestamps?
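The crux of this question is that vector timestamps can witness concurrency, while any scalar clock imposes a total order on all events. A minimal illustrative sketch in Python (this is not Algorithm 22 itself; the function names are hypothetical):

    # Vector timestamps: v1 "happened before" v2 iff v1 <= v2 componentwise
    # and v1 != v2; if neither dominates, the events are concurrent.
    def vt_leq(v1, v2):
        return all(a <= b for a, b in zip(v1, v2))

    def vt_relation(v1, v2):
        if v1 == v2:
            return "equal"
        if vt_leq(v1, v2):
            return "before"
        if vt_leq(v2, v1):
            return "after"
        return "concurrent"

    # Two concurrent events at different processors:
    print(vt_relation([1, 0], [0, 1]))   # -> "concurrent"
    # A scalar (Lamport) timestamp assigns each event a single integer,
    # so one of these two events would always appear to precede the other.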
8.7 Prove Lemma 8.6.
8.6 Show that Algorithm 21 provides the causal ordering property, if each point-to-point link delivers messages in FIFO order.
8.5 Show that the symmetric algorithm of Section 8.2.3.2 (Algorithm 21) provides FIFO ordering.
8.4 Extend the asymmetric algorithm of Section 8.2.3 to provide FIFO ordering. Hint: Force a FIFO order on the messages from each processor to the central site.
8.3 Write pseudocode for the single-source FIFO broadcast algorithm described in Section 8.2.2; prove its correctness.
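One standard shape for such an algorithm, shown as a hedged Python sketch rather than the book's pseudocode: the source numbers its messages, and each receiver holds back out-of-order arrivals (the bc_send and deliver hooks are assumed, not part of the book's notation):

    # Sketch: single-source FIFO broadcast on top of basic broadcast.
    # The sender tags each message with a sequence number; a receiver
    # delivers messages from a given sender strictly in sequence,
    # buffering anything that arrives early.
    class FifoBroadcast:
        def __init__(self, my_id, bc_send):
            self.my_id = my_id
            self.bc_send = bc_send      # underlying basic broadcast (assumed)
            self.next_seq = 0           # next sequence number to send
            self.expected = {}          # sender id -> next seq to deliver
            self.held = {}              # sender id -> {seq: message}

        def fifo_send(self, m):
            self.bc_send((self.my_id, self.next_seq, m))
            self.next_seq += 1

        def on_basic_deliver(self, sender, seq, m, deliver):
            self.held.setdefault(sender, {})[seq] = m
            exp = self.expected.setdefault(sender, 0)
            # Deliver any consecutive run starting at the expected number.
            while exp in self.held[sender]:
                deliver(sender, self.held[sender].pop(exp))
                exp += 1
            self.expected[sender] = exp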
8.2 Write pseudocode for the basic broadcast algorithm described in Section 8.2.1; prove its correctness.
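A minimal sketch of what that basic broadcast can look like, assuming reliable point-to-point channels with a send method (the channel API here is an assumption, not the book's notation):

    # Sketch: basic broadcast on top of reliable point-to-point links.
    # bc-send simply sends the message on every link (including to the
    # sender itself, for uniformity); bc-deliver fires on each receipt,
    # with no ordering guarantees beyond those of the links.
    class BasicBroadcast:
        def __init__(self, my_id, peers, channels):
            self.my_id = my_id
            self.peers = peers          # all processor ids, including my_id
            self.channels = channels    # channels[j].send(m) is assumed

        def bc_send(self, m):
            for j in self.peers:
                self.channels[j].send((self.my_id, m))

        def on_receive(self, sender, m, bc_deliver):
            bc_deliver(sender, m)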
8.1 Prove that if a broadcast service provides both single-source FIFO ordering and total ordering, then it is also causal.
Prove that global simulation is transitive, that is, if A globally simulates B, and B globally simulates C, then A globally simulates C. Is the same true of local simulation?
Prove that global simulation implies local simulation.
Using the model presented in this chapter, specify the no deadlock and no lockout versions of the mutual exclusion problem.
6.14 Suppose that p_0 has access to some external source of time, so that its adjusted clock can be considered correct and should not be altered. How can the two-processor algorithm from Section
6.13 Modify Algorithm 20 for synchronizing the clocks of n processors to use the improved clock difference estimation technique in Section 6.3.6. Analyze the worst-case skew achieved by your
6.12 Explain how a processor can calculate the round-trip delay of a query-response message pair when an arbitrary amount of time can elapse between the receipt of the query and the sending of the
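A sketch of the usual technique, under the assumption that the responder measures and reports its own processing time so the querier can subtract it out (all names below are hypothetical):

    # Sketch: round-trip delay estimation when the responder may wait
    # arbitrarily long between receiving the query and replying.
    import time

    def querier_round_trip(send, recv):
        t_send = time.monotonic()          # local time the query leaves
        send("query")
        reply, processing_time = recv()    # responder reports its delay
        t_recv = time.monotonic()          # local time the reply arrives
        # Network delay only: total elapsed time minus the responder's
        # internal processing time.
        return (t_recv - t_send) - processing_time

    def responder(recv, send, handle):
        query = recv()
        t_in = time.monotonic()            # time the query was received
        result = handle(query)             # arbitrary local computation
        t_out = time.monotonic()           # time the reply is sent
        send((result, t_out - t_in))       # piggyback processing time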
6.11 Suppose we have a distributed system whose topology is a tree instead of a clique. Assume the message delay on every link is in the range [d-u, d]. What is the tight bound on the skew obtainable
6.10 Devise an algorithm to synchronize clocks when there is no upper bound on message delays.
6.9 Show that if the requirement of termination is dropped from the definition of achieving clock synchronization, a skew of 0 is obtainable. Hint: The adjusted clocks in this scheme are not very
6.8 In the proof of Theorem 6.11, verify that y' is a causal shuffle of y.
6.7 Modify the snapshot algorithm to record the channel states as well as the processor states. Prove that your algorithm is correct.
6.6 Prove that the algorithm for finding a maximal consistent cut is correct.
6.5 Prove that there is a unique maximal consistent cut preceding any given cut.
6.4 Prove that there is no loss of generality in assuming that at each computation event a processor receives exactly one message.
6.3 Extend the notion of a causal shuffle and prove Lemmas 6.1 and 6.2 for the shared memory model.
6.2 Suggest improvements in the message complexity of vector clocks.
2.1 Code one of the simple algorithms in state transitions.
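One generic way to set such an exercise up, sketched in Python: represent the algorithm as a pure transition function from (local state, event) to (new state, messages to send). The encoding below is an illustration on simple flooding of a message (M), not the book's formal notation:

    # Sketch: coding an algorithm as state transitions. A transition
    # consumes an event (initialization, or receipt of (M)) and yields
    # a new local state plus the messages to send.
    def flooding_transition(state, event):
        # state: {"received": bool, "neighbors": [...]}
        # event: ("init",) or ("receive", from_id, "M")
        kind = event[0]
        if kind == "init" or (kind == "receive" and not state["received"]):
            sender = event[1] if kind == "receive" else None
            out = [(j, "M") for j in state["neighbors"] if j != sender]
            return dict(state, received=True), out
        return state, []                   # already received: ignore

    # Example: a node with neighbors 1 and 2 first receives (M) from 1.
    s = {"received": False, "neighbors": [1, 2]}
    s, msgs = flooding_transition(s, ("receive", 1, "M"))
    print(msgs)                            # -> [(2, 'M')]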
2.13 Prove that the time complexity of Algorithm 3 is O(m).
2.14 Modify Algorithm 3 so it constructs a DFS numbering of the nodes, indicating the order in which the message (M) arrives at the nodes.
2.15 Modify Algorithm 3 to obtain an algorithm that constructs a DFS tree with O(n) time complexity. Hint: When a node receives the message (M) for the first time, it notifies all its neighbors but
2.16 Prove Theorem 2.12.
3.1 Prove that there is no anonymous leader election algorithm for asynchronous ring systems.
4.13 Construct an execution of the algorithm from Exercise 4.12 in which there are two processors in the entry section and both read at least Ω(n) variables before entering the critical section.
4.12 Write the pseudocode for the algorithm described in Figure 4.12, and prove that it satisfies the mutual exclusion and the no deadlock properties. Which properties should the embedded components
4.11 Show a simplified version of the lower bound presented in Section 4.4.4 for the case n = 2. That is, prove that any mutual exclusion algorithm for two processors requires at least two shared variables.
2.12 Prove that Algorithm 3 constructs a DFS tree of the network rooted at p_r.
2.11 Modify Algorithm 3 so that all nodes terminate.
2.10 Modify Algorithm 3 so that it handles correctly the case where the distinguished node has no neighbors.
2.2 Analyze the time complexity of the convergecast algorithm of Section 2.2 when communication is synchronous and when communication is asynchronous. Hint: For the synchronous case, prove that
2.3 Generalize the convergecast algorithm of Section 2.2 to collect all the information. That is, when the algorithm terminates, the root should have the input values of all the processors. Analyze
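A hedged sketch of this generalization, with the spanning tree given and the message-passing execution collapsed into recursion: every node forwards to its parent the set of (id, input) pairs gathered from its subtree, so the root ends up with every input.

    # Sketch: convergecast that collects all inputs up a rooted spanning
    # tree. A node is processed only after all of its children are done,
    # mirroring the wait-for-all-children rule of the real algorithm.
    def convergecast_all(tree_children, inputs, root):
        # tree_children[v] = list of v's children; inputs[v] = v's input.
        def subtree(v):
            collected = {v: inputs[v]}           # own input
            for c in tree_children.get(v, []):   # one "message" per child
                collected.update(subtree(c))
            return collected                     # sent to v's parent
        return subtree(root)

    # Example on a 4-node tree rooted at 0:
    children = {0: [1, 2], 1: [3]}
    print(convergecast_all(children, {0: "a", 1: "b", 2: "c", 3: "d"}, 0))
    # The root collects every (id, input) pair in the tree.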
2.4 Prove the claim used in the proof of Lemma 2.6 that a processor is reachable from p_r in G if and only if it ever sets its parent variable.
2.5 Describe an execution of the modified flooding algorithm (Algorithm 2) in an asynchronous system with n nodes that does not construct a BFS tree.
2.6 Describe an execution of Algorithm 2 in some asynchronous system, where the message is sent twice on communication channels that do not connect a parent and its children in the spanning tree.
2.7 Perform a precise analysis of the time complexity of the modified flooding algorithm (Algorithm 2), for the synchronous and the asynchronous models.
2.8 Explain how to eliminate the (already) messages from the modified flooding algorithm (Algorithm 2) in the synchronous case and still have a correct algorithm. What is the message complexity of
2.9 Do the broadcast and convergecast algorithms rely on knowledge of the number of nodes in the system?