# Lecture 2 (Basic Error and Some Definitions)

## Round-off Error

Suppose that \(p^* \) approximates \(p\):

- \(p-p^* \) is the actual error
- \( \frac{|{p-p^* }|}{|p|} \) is the relative error (assuming \(p \neq 0\))
- \(|p-p^* |\) is the absolute error

The relative error is usually the more meaningful measure, since it takes the magnitude of \(p\) into account.

### Example

- \(p = 0.003\), \(p^* = 0.0031\): actual error \(-0.0001\), relative error \(\approx 0.0333\), absolute error \(0.0001\)
- \(p = 3000\), \(p^* = 3100\): actual error \(-100\), relative error \(\approx 0.0333\), absolute error \(100\)

Note that both examples have the same relative error even though their absolute errors differ by a factor of \(10^6\).
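These figures can be reproduced with a short sketch (the helper name `errors` is ours, not from the notes):

```python
def errors(p, p_star):
    """Return (actual, absolute, relative) error of approximating p by p_star."""
    actual = p - p_star
    return actual, abs(actual), abs(actual) / abs(p)

# Both examples share (approximately) the same relative error,
# while the absolute errors differ by a factor of 10^6.
print(errors(0.003, 0.0031))
print(errors(3000, 3100))
```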

## Finite-digit number representation (floating-point numbers)

\[ r=\pm m* 10^c \]

where \(m\) is the mantissa and \(c\) is the characteristic (the exponent). This format is used to store most floating-point numbers in computers. The mantissa has the form \( m = 0.d_1 d_2 d_3 \ldots d_k \) with \( 1 \leq d_1 \leq 9 \), where \(k\) is the number of significant digits in the number being represented.

We need a way to handle numbers that do not fit in \(k\) digits, for example \(0.3178124655789\). Say we want \(k = 4\) significant digits. One method is to truncate (chop), which gives \(0.3178\). The other is to round based on the first discarded digit: if that digit is \(\geq 5\), round up, otherwise round down. For example, rounding the number above to 3 significant digits gives \(0.318\), since the first discarded digit is \(8 \geq 5\).
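Both methods can be sketched as follows (a minimal sketch for positive inputs; the function names are ours):

```python
import math

def chop(x, k):
    """Truncate a positive x to k significant digits."""
    e = math.floor(math.log10(x)) + 1   # decimal position of the leading digit
    f = 10 ** (k - e)
    return math.floor(x * f) / f

def round_sig(x, k):
    """Round a positive x to k significant digits (round half up, as in the notes)."""
    e = math.floor(math.log10(x)) + 1
    f = 10 ** (k - e)
    return math.floor(x * f + 0.5) / f

x = 0.3178124655789
print(chop(x, 4))       # 0.3178
print(round_sig(x, 3))  # 0.318
```

Note that Python's built-in `round` uses round-half-to-even ("banker's rounding"), so `round_sig` implements the round-half-up rule from the notes explicitly instead.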

## Errors

Both of these methods “lose” information, which causes errors. These errors are carried over into future operations as well.

For example, \[ \frac{1}{3}+\frac{3}{11}+\frac{3}{20}=0.7560606\ldots \] If you round each term to four significant digits, you get \[ 0.3333 + 0.2727+0.15 = 0.756, \] which is not exact. It is essential, then, to characterize the accuracy of your approximation, i.e. to know how many digits of your answer can be trusted.
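This accumulation can be demonstrated by comparing exact rational arithmetic against four-digit rounded arithmetic (a sketch; `round4` is our helper, using round half up):

```python
import math
from fractions import Fraction

def round4(x):
    """Round a positive value to 4 significant digits (round half up)."""
    e = math.floor(math.log10(x)) + 1
    f = 10 ** (4 - e)
    return math.floor(x * f + 0.5) / f

exact = Fraction(1, 3) + Fraction(3, 11) + Fraction(3, 20)
approx = round4(1 / 3) + round4(3 / 11) + round4(3 / 20)

print(float(exact))   # ≈ 0.7560606...
print(approx)         # ≈ 0.756 — the last digits have been lost
```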

### Definition

\( p^* \) approximates \( p \) to \( t \) significant digits if:

\[ \frac{|p-p^* |}{ |p| } \leq 5* 10^{-t} \]

Example: for \(p=900\), find the largest interval of values \(p^* \) that approximate \(p\) to \(t = 10\) significant digits.

1. \[ \frac{|900-p^* |}{|900|} \leq 5* 10^{-10} \]
2. \[ |900-p^* | \leq 5* 10^{-10}* |900| \]
3. Split into two inequalities to remove the absolute value: \[ 900-p^* \leq 900* 5* 10^{-10} \textrm{ AND } 900-p^* \geq -900* 5* 10^{-10} \]

Solving for the range of \(p^* \) from here is straightforward, and is left to the reader.

## Algorithms

An algorithm is a procedure that describes, in an unambiguous way, a finite sequence of steps to be performed in a specific order. Properties to look for in a good algorithm:

- “Short running time”, aka FAST
- Returns the correct result, aka ACCURATE
- Does not take up too many system resources, aka EFFICIENT
- Handles bad input gracefully, aka ROBUST/STABLE
- In other words, small errors in the data should produce only small errors in the result, not large ones

There are two main types of error

- The error is LINEAR (cumulative error): \[ \epsilon_n=Cn\epsilon_0 \] Where \(\epsilon_n\) is the error at the \(n^{th}\) step, \(C\) is some constant independent of \(n\), and \(\epsilon_0\) is the error at step 0.
- The error is EXPONENTIAL:
\[
\epsilon_n = C^n\epsilon_0
\]
Exponential error grows with each step (for \(C > 1\)), and is therefore almost always worse than linear error.
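The difference between the two growth laws is easy to tabulate (a sketch; the values of \(\epsilon_0\) and \(C\) are assumed, for illustration only):

```python
eps0 = 1e-6    # initial error (assumed value)
C = 2.0        # growth constant (assumed)
steps = range(1, 11)

linear = [C * n * eps0 for n in steps]        # epsilon_n = C * n * epsilon_0
exponential = [C**n * eps0 for n in steps]    # epsilon_n = C^n * epsilon_0

# After 10 steps the linear error has been multiplied by C*10 = 20,
# while the exponential error has been multiplied by 2^10 = 1024.
print(linear[-1], exponential[-1])
```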

## Rate of Convergence

The rate of convergence characterizes how fast (in how many steps) an algorithm arrives at an answer, or more specifically, how rapidly it converges. This is equivalent to measuring how fast the error between the approximation and the exact solution goes to \(0\).

We will use Big-O notation to define rates of convergence. Suppose we have sequences such that \[ \beta_{n} \rightarrow0 \textrm{ as } n \rightarrow \infty \textrm{ and } \alpha_n \rightarrow \alpha \textrm{ as } n \rightarrow \infty. \] If there exists a constant \(C\) such that \[ |\alpha_n - \alpha| \leq C|\beta_n| \] for sufficiently large \(n\), then we write \( \alpha_n = \alpha + O(\beta_n) \) and say that \(\alpha_n\) converges to \(\alpha\) with a rate of \(O(\beta_n)\).
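As a concrete illustration of the definition (our example, not from the notes): \(\alpha_n = \frac{n+1}{n} \rightarrow 1\) with rate \(O(1/n)\), since \(|\alpha_n - 1| = \frac{1}{n} \leq C|\beta_n|\) with \(\beta_n = \frac{1}{n}\) and \(C = 1\).

```python
# alpha_n = (n + 1) / n converges to alpha = 1 with rate O(1/n):
# the ratio |alpha_n - alpha| / |beta_n| stays bounded by C = 1.
alpha = 1.0
ratios = []
for n in (10, 100, 1000):
    alpha_n = (n + 1) / n
    beta_n = 1.0 / n
    ratios.append(abs(alpha_n - alpha) / beta_n)
print(ratios)   # each ratio ≈ 1, confirming the rate O(1/n)
```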