Wednesday, 21 November 2012

Limit (Mathematics)

In mathematics, a limit is the value that a function or sequence "approaches" as the input or index approaches some value. Limits are essential to calculus (and mathematical analysis in general) and are used to define continuity, derivatives, and integrals.
The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit is usually abbreviated as lim, as in lim(a_n) = a, and the fact of approaching a limit is represented by the right arrow (→), as in a_n → a.

Limit of a function

Suppose f(x) is a real-valued function and c is a real number. The expression
 \lim_{x \to c}f(x) = L
means that f(x) can be made to be as close to L as desired by making x sufficiently close to c. In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L".
Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function; it became known as the (ε, δ)-definition of limit in the 19th century. The definition uses ε (the lowercase Greek letter epsilon) to represent a small positive number, so that "f(x) becomes arbitrarily close to L" means that f(x) eventually lies in the interval (L - ε, L + ε), which can also be written using the absolute value sign as |f(x) - L| < ε. The phrase "as x approaches c" then indicates that we refer to values of x whose distance from c is less than some positive number δ (the lowercase Greek letter delta); that is, values of x within either (c - δ, c) or (c, c + δ), which can be expressed with 0 < |x - c| < δ. The first inequality means that the distance between x and c is greater than 0 and that x ≠ c, while the second indicates that x is within distance δ of c.
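In symbols, the (ε, δ)-definition reads:
 \lim_{x \to c} f(x) = L \iff \forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x : \; 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon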
Note that the above definition of a limit holds even when f(c) ≠ L. Indeed, the function f(x) need not even be defined at c.
For example, if
 f(x) = \frac{x^2 - 1}{x - 1}
then f(1) is not defined, yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2:
x      0.9    0.99   0.999  1.0        1.001  1.01   1.1
f(x)   1.900  1.990  1.999  undefined  2.001  2.010  2.100
Thus, f(x) can be made arbitrarily close to the limit of 2 just by making x sufficiently close to 1.
In other words,  \lim_{x \to 1} \frac{x^2-1}{x-1} = 2
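This can be checked numerically. A minimal sketch in Python (added here only for illustration; the name f is simply the function above):

def f(x):
    # f is undefined at x = 1: numerator and denominator are both zero there.
    return (x**2 - 1) / (x - 1)

# Evaluate f at points approaching 1 from both sides; the values cluster around 2.
for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(f"f({x}) = {f(x):.4f}")

# Calling f(1.0) raises ZeroDivisionError, yet the limit as x approaches 1 is 2.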
In addition to limits at finite values, functions can also have limits at infinity. For example, consider
f(x) = {2x-1 \over x}
  • f(100) = 1.9900
  • f(1000) = 1.9990
  • f(10000) = 1.99990
As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish just by picking x sufficiently large. In this case, the limit of f(x) as x approaches infinity is 2. In mathematical notation,
 \lim_{x \to \infty} \frac{2x-1}{x} = 2.
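The reason can be seen by rewriting the quotient: since
 \frac{2x-1}{x} = 2 - \frac{1}{x},
and 1/x becomes arbitrarily small as x grows, the difference between f(x) and 2 can be made as small as desired.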

Limit of a sequence

Consider the following sequence: 1.79, 1.799, 1.7999,... It can be observed that the numbers are "approaching" 1.8, the limit of the sequence.
Formally, suppose a1, a2, ... is a sequence of real numbers. The real number L is said to be the limit of this sequence, written
 \lim_{n \to \infty} a_n = L
to mean
For every real number ε > 0, there exists a natural number n0 such that for all n > n0, |an − L| < ε.
Intuitively, this means that eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent, and if it does not, it is divergent. One can show that a convergent sequence has only one limit.
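For the sequence 1.79, 1.799, 1.7999, ... used above, this definition can be tried out directly. A minimal sketch in Python, assuming the closed form a_n = 1.8 − 10^(−(n+1)) for that sequence:

# a_n = 1.8 - 10**(-(n+1)) gives 1.79, 1.799, 1.7999, ... with limit L = 1.8.
def a(n):
    return 1.8 - 10 ** -(n + 1)

L = 1.8
epsilon = 1e-6

# Find an index n0 beyond which every term is within epsilon of L.
# The distances |a_n - L| = 10**(-(n+1)) shrink monotonically, so it suffices
# to find the first index whose successor already satisfies the inequality.
n0 = 1
while abs(a(n0 + 1) - L) >= epsilon:
    n0 += 1
print(n0, abs(a(n0 + 1) - L))   # beyond n0, every term is within epsilon of L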
The limit of a sequence and the limit of a function are closely related. On one hand, the limit of a sequence a_n as n goes to infinity is simply the limit at infinity of a function a(n) defined on the natural numbers. On the other hand, if the limit of a function f(x) as x goes to infinity is L, then f(x_n) approaches L for any sequence x_n that goes to infinity. As with functions, a sequence can approach its limit without ever being equal to it; one such sequence is a_n = L + 1/n.

Limit as standard part

In the context of a hyperreal enlargement of the number system, the limit of a sequence (a_n) can be expressed as the standard part of the value a_H of the natural extension of the sequence at an infinite hypernatural index n=H. Thus,
 \lim_{n \to \infty} a_n = \text{st}(a_H) .
Here the standard part function "st" associates to each finite hyperreal the unique real number infinitely close to it (i.e., the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [a_n], represented in the ultrapower construction by a Cauchy sequence (a_n), is simply the limit of that sequence:
 \text{st}(a)=\lim_{n \to \infty} a_n .
In this sense, taking the limit and taking the standard part are equivalent procedures.
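For instance, for the sequence a_n = 1/n and any infinite hypernatural index H, the extended term a_H = 1/H is a positive infinitesimal, so
 \lim_{n \to \infty} \frac{1}{n} = \text{st}\left(\frac{1}{H}\right) = 0.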

Convergence and fixed point

A formal definition of convergence can be stated as follows. Suppose (p_n), for n going from 0 to \infty, is a sequence that converges to a fixed point p, with p_n \neq p for all n. If positive constants \lambda and \alpha exist with
 \lim_{n \to \infty} \frac{\left| p_{n+1} - p \right|}{\left| p_n - p \right|^{\alpha}} = \lambda,
then (p_n) converges to p of order \alpha, with asymptotic error constant \lambda.
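For example, the sequence p_n = 2^{-n} converges to p = 0 with order \alpha = 1 and asymptotic error constant \lambda = 1/2, since
 \frac{\left| p_{n+1} - 0 \right|}{\left| p_n - 0 \right|^{1}} = \frac{2^{-(n+1)}}{2^{-n}} = \frac{1}{2}
for every n, while p_n = 2^{-2^n} converges to 0 with order \alpha = 2 and \lambda = 1, since p_{n+1} = p_n^2.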
Given a fixed-point iteration p_{n+1} = f(p_n) for an equation of the form f(x) = x, with fixed point p, there is a useful checklist for checking the convergence of the iteration to p (a numerical sketch follows the checklist).
1) First check that p is indeed a fixed point:
 f(p) = p
2) Check for linear convergence. Start by finding \left| f^\prime(p) \right|. If
  • \left| f^\prime(p) \right| \in (0,1), then there is linear convergence;
  • \left| f^\prime(p) \right| > 1, then the iteration diverges;
  • \left| f^\prime(p) \right| = 0, then there is at least linear convergence and possibly something better; the expression should be checked for quadratic convergence.
3) If it is found that there is something better than linear convergence, the expression should be checked for quadratic convergence. Start by finding \left| f^{\prime\prime}(p) \right|. If
  • \left| f^{\prime\prime}(p) \right| \neq 0, then there is quadratic convergence, provided that f^{\prime\prime} is continuous at p;
  • \left| f^{\prime\prime}(p) \right| = 0, then there is something even better than quadratic convergence;
  • \left| f^{\prime\prime}(p) \right| does not exist, then there is convergence that is better than linear but still not quadratic.
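As a concrete sketch of this checklist, consider the hypothetical iteration p_{n+1} = g(p_n) with g(x) = (x + 2/x)/2, which has the fixed point p = √2 (an example chosen here for illustration, not from the text above). Here g′(p) = 0 and g″(p) ≠ 0, so step 3 predicts quadratic convergence, and the ratio |p_{n+1} − p| / |p_n − p|² should settle near a constant:

import math

def g(x):
    # Fixed-point map with g(sqrt(2)) = sqrt(2); g'(sqrt(2)) = 0 and g''(sqrt(2)) != 0.
    return (x + 2 / x) / 2

p = math.sqrt(2)   # the fixed point
x = 1.0            # starting guess
for _ in range(5):
    x_next = g(x)
    err, err_next = abs(x - p), abs(x_next - p)
    if err > 0 and err_next > 0:
        # For quadratic convergence this ratio approaches the asymptotic error constant.
        print(f"{x_next:.12f}   ratio = {err_next / err ** 2:.4f}")
    x = x_next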
