Question
I am trying to understand what this question is asking exactly.
From Chapter 2 (Error and Computer Arithmetic), Section 2.1.1, Accuracy of Floating-Point Representation:

Consider how accurately a number can be stored in the floating-point representation. This is measured in various ways, with the machine epsilon being the most popular. The machine epsilon for any particular floating-point format is the difference between 1 and the next larger number that can be stored in that format. In single precision IEEE format, the next larger binary number is

    (1.00000000000000000000001)_2 = 1 + 2^-23    (2.8)

with the final binary digit 1 in position 23 to the right of the binary point. Thus, the machine epsilon in single precision IEEE format is 2^-23. As an example, it follows that the number 1 + 2^-24 cannot be stored exactly in IEEE single precision format. From

    2^-23 ≈ 1.19 × 10^-7    (2.9)

we say that IEEE single precision format can be used to store approximately 7 decimal digits of a number x when it is written in decimal format. In a similar fashion, the machine epsilon in double precision IEEE format is 2^-52 ≈ 2.22 × 10^-16; IEEE double precision format can be used to store approximately 16 decimal digits of a number x. In MATLAB, the machine epsilon is available as the constant named eps.

As another way to measure the accuracy of a floating-point format, we look for the largest integer M having the property that any integer x satisfying 0 ≤ x ≤ M can be stored exactly in that format.
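To see what the passage is claiming, here is a minimal Python sketch (using NumPy, which is my own choice; the excerpt itself only refers to MATLAB's eps). It prints the single- and double-precision machine epsilons and shows that 1 + 2^-24 is rounded back to 1 in single precision, exactly as the text states:

```python
import numpy as np

# Machine epsilon: the gap between 1 and the next larger representable number.
eps_single = np.finfo(np.float32).eps   # 2^-23, about 1.19e-07 (~7 decimal digits)
eps_double = np.finfo(np.float64).eps   # 2^-52, about 2.22e-16 (~16 decimal digits)
print(eps_single, eps_single == 2.0**-23)   # 1.1920929e-07 True
print(eps_double, eps_double == 2.0**-52)   # 2.220446049250313e-16 True

# 1 + 2^-24 cannot be stored exactly in single precision: the sum rounds back to 1.
x = np.float32(1.0) + np.float32(2.0**-24)
print(x == np.float32(1.0))                 # True: the added term is lost

# Adding the machine epsilon itself, by contrast, gives a representable number > 1.
print(np.float32(1.0) + np.float32(2.0**-23) > np.float32(1.0))   # True
```

So the question is essentially asking you to work with these two measures of floating-point accuracy: the machine epsilon of a format, and the largest integer M below which every integer is stored exactly.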