Precalc Summary
Publish Date: June 21, 2024
This isn't completely related to computer science, so I didn't include the tag :P. Functions in this article take a single variable.
$$\lim_{x \rightarrow a^{+}}{f(x)} = b$$
represents the fact that as $x$ approaches $a$ from the right, $f$ approaches $b$; this is called a "right limit".
$$\lim_{x \rightarrow a^{-}}{f(x)} = b$$
represents the fact that as $x$ approaches $a$ from the left, $f$ approaches $b$; this is called a "left limit".
When both one-sided limits are the same, that is,
$$\lim_{x \rightarrow a^{+}}{f(x)} = \lim_{x \rightarrow a^{-}}{f(x)} = b,$$
we say that as $x$ approaches $a$, $f$ approaches $b$, written
$$\lim_{x \rightarrow a}{f(x)} = b.$$
This also shows that $f$ has a limit at $x = a$.
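A quick numeric sketch of this (the function and helper names here are my own, not from any library): $f(x) = \frac{x^2-1}{x-1}$ is undefined at $x = 1$, yet both one-sided limits equal $2$, so the limit at $x = 1$ exists.

```python
# Probe the one-sided limits of f(x) = (x^2 - 1)/(x - 1) at x = 1.
# f is undefined at x = 1, but approaching from either side gives 2.

def f(x):
    return (x ** 2 - 1) / (x - 1)

def one_sided_limit(g, a, side, steps=30):
    """Evaluate g at points ever closer to a from the given side ('+' or '-')."""
    h = 0.1
    value = None
    for _ in range(steps):
        x = a + h if side == "+" else a - h
        value = g(x)
        h /= 2
    return value

right = one_sided_limit(f, 1.0, "+")  # right limit, close to 2
left = one_sided_limit(f, 1.0, "-")   # left limit, also close to 2
```

Since both sides agree, $\lim_{x \rightarrow 1} f(x) = 2$ even though $f(1)$ itself doesn't exist.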
For a function to be continuous on an interval, it must first have a limit everywhere in that interval. If $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous for $x \in [a,b]$, then it must satisfy the following:
- For all $c$ in $[a,b]$, $f$ must have a limit at $x = c$, and $f(c)$ must be defined.
- For all $c$ in $[a,b]$, $f(c) = \lim_{x \rightarrow c}{f(x)}$.
A function's derivative with respect to a specific variable measures the function's change relative to the change of that variable. For example, the derivative of $f: \mathbb{R} \rightarrow \mathbb{R}$ with respect to $x$ is $\frac{df}{dx}$. It can be defined like this:
$$\frac{df}{dx} = \lim_{h \rightarrow 0}{\frac{f(x+h) - f(x)}{h}}$$
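The limit definition translates directly into a numeric sketch: evaluate the difference quotient $\frac{f(x+h) - f(x)}{h}$ for a small $h$ (the helper below is my own illustration, not a standard API).

```python
# Approximate df/dx via the difference quotient from the limit definition.
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

def approx_derivative(f, x, h=1e-6):
    # Small fixed h stands in for the limit h -> 0.
    return difference_quotient(f, x, h)

# Example: f(x) = x^2 has derivative 2x, so at x = 3 we expect about 6.
d = approx_derivative(lambda x: x * x, 3.0)
```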
For a function to be differentiable on an interval, it must first be continuous on that interval, and to be continuous it must have a limit there, so finally we can say that
$$\text{Differentiable} \implies \text{Continuous} \implies \text{Limit Exists}$$
Note that the arrows don't reverse: a function can be continuous somewhere without being differentiable there.
For a continuous function $f$ on the interval $[a,b]$, the following must hold (the Intermediate Value Theorem): $\forall c \in [\min(f(a), f(b)), \max(f(a), f(b))]$, there exists at least one $r \in [a,b]$ that fulfills $f(r) = c$.
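This theorem is what makes bisection root-finding work: if a continuous $f$ changes sign on $[a,b]$, then $0$ lies between $f(a)$ and $f(b)$, so some $r \in [a,b]$ has $f(r) = 0$. A minimal sketch (my own implementation, for illustration):

```python
# Bisection: repeatedly halve the interval, keeping the half where f
# changes sign. The IVT guarantees a root stays inside the interval.
def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change in [a, m]
            b = m
        else:                # sign change in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

# x^2 - 2 changes sign on [1, 2], so bisection converges to sqrt(2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```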
For a bounded function $f$ on the interval $[a,b]$, let $U$ denote the set of upper bounds (scalars) and $L$ the set of lower bounds (scalars). Then for all $v \in [a, b]$,
$$l \leq f(v) \leq u, \quad l \in L,\ u \in U.$$
When we restrict the upper-bound and lower-bound sets to values that $f$ actually attains as output, we get the Extreme Value Theorem (EVT), which holds when $f$ is continuous on $[a,b]$: $L$ and $U$ are each reduced to a single attained value, the minimum $f(c)$ and the maximum $f(d)$.
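A coarse numeric sketch of EVT (sampling can only approximate the true extremes, but on a fine grid it illustrates that the min and max are attained):

```python
import math

# Sample f on a fine grid over [a, b] and take the extreme sampled
# values. EVT says a continuous f on a closed interval actually attains
# its minimum f(c) and maximum f(d) somewhere in the interval.
def extremes_on_grid(f, a, b, n=10_000):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

# sin on [0, pi] attains its minimum 0 (endpoints) and maximum 1 (pi/2).
lo, hi = extremes_on_grid(math.sin, 0.0, math.pi)
```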
Let $e$ be Euler's number and $b$ a constant scalar. Then
$$f(x) = e^{bx}, \qquad \frac{df}{dx} = be^{bx} = b \cdot f(x).$$
From this we can say that for all $a \in \mathbb{R}$, $a > 0$,
$$f(x) = a^x = e^{\ln(a)x}, \qquad \frac{df}{dx} = \ln(a) \cdot e^{\ln(a)x} = \ln(a) \cdot f(x).$$
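A numeric spot-check of $\frac{d}{dx} a^x = \ln(a) \cdot a^x$ (using a central difference quotient of my own choosing for better accuracy):

```python
import math

# Check that f(x) = a^x satisfies df/dx = ln(a) * a^x at a sample point.
def central_difference(g, x, h=1e-6):
    # Symmetric quotient (g(x+h) - g(x-h)) / 2h approximates g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

a = 3.0
f = lambda x: a ** x
x0 = 1.5
numeric = central_difference(f, x0)   # derivative via difference quotient
exact = math.log(a) * f(x0)           # derivative via the formula
```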
We know that
$$\frac{d(\ln x)}{dx} = \frac{1}{x},$$
so for a function $f$,
$$\frac{d(\ln f(x))}{dx} = \frac{d(\ln f(x))}{df} \cdot \frac{df}{dx} = \frac{1}{f(x)} \cdot \frac{df}{dx}.$$
The above technique is actually the chain rule if you look closely!
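Checking this identity numerically for a concrete choice $f(x) = x^2 + 1$ (so $f'(x) = 2x$, both picked just for illustration):

```python
import math

# Verify d/dx ln(f(x)) = f'(x) / f(x) for f(x) = x^2 + 1.
def f(x):
    return x * x + 1

def fprime(x):
    return 2 * x

def log_derivative(x, h=1e-7):
    # Central difference quotient of ln(f(x)).
    return (math.log(f(x + h)) - math.log(f(x - h))) / (2 * h)

x0 = 2.0
lhs = log_derivative(x0)        # numeric d/dx ln(f(x))
rhs = fprime(x0) / f(x0)        # f'(x)/f(x) = 4/5
```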
Given a function $f(x)$, the linear approximation (tangent line) at $(r, f(r))$ is
$$y = f'(r)(x-r)+f(r) = g(x),$$
where $f'$ is the derivative of $f$.
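The formula builds a line directly from $f(r)$ and the slope $f'(r)$; a small sketch with $f(x) = x^2$ at $r = 1$ (names are my own):

```python
# Tangent line g(x) = f'(r)(x - r) + f(r) at the point (r, f(r)).
def tangent_line(f, fprime, r):
    return lambda x: fprime(r) * (x - r) + f(r)

f = lambda x: x * x        # f(x) = x^2
fprime = lambda x: 2 * x   # its derivative
g = tangent_line(f, fprime, 1.0)

# g touches f at x = 1 and stays close nearby.
```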
$$(1+x)^r \approx 1 + rx \text{ where } x \rightarrow 0$$
$$\sin(x) \approx x \text{ where } x \rightarrow 0$$
$$\cos(x) \approx 1 \text{ where } x \rightarrow 0$$
$$e^x \approx 1 + x \text{ where } x \rightarrow 0$$
$$\ln(1+x) \approx x \text{ where } x \rightarrow 0$$
Each of these follows from the linear approximation at $r = 0$:
$$(1+x)^r \approx (1+0)^r + r(1+0)^{r-1}x = 1+rx$$
$$\sin(x) \approx \sin(0) + \cos(0)x = x$$
$$\cos(x) \approx \cos(0) - \sin(0)x = 1$$
$$e^x \approx e^0 + e^0 x = 1 + x$$
$$\ln(1+x) \approx \ln(1) + \frac{x}{1+0} = x$$
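All five approximations can be spot-checked numerically at a small $x$; each error should be on the order of $x^2$ (the exponent $r = 2.5$ below is an arbitrary sample value):

```python
import math

# Spot-check the small-x approximations at x = 0.01. Each pair is
# (actual value, linear approximation); errors should be ~x^2.
x = 0.01
checks = [
    ((1 + x) ** 2.5, 1 + 2.5 * x),   # (1+x)^r ~ 1 + rx, with r = 2.5
    (math.sin(x), x),                # sin x ~ x
    (math.cos(x), 1.0),              # cos x ~ 1
    (math.exp(x), 1 + x),            # e^x ~ 1 + x
    (math.log(1 + x), x),            # ln(1+x) ~ x
]
errors = [abs(actual - approx) for actual, approx in checks]
```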
Think of a function $f(x)=ax^2+bx+c$; we want to see its derivatives at $x=0$:
$$f(0)=c, \quad f'(0)=b, \quad f''(0)=2a,$$
which lets us represent $f(x)$ as $f(0) + f'(0)x + f''(0)x^2 \cdot (2!)^{-1}$.
Generalizing this observation, we get that the quadratic approximation at $x=r$ is
$$f(x) \approx f(r) + \frac{f'(r)(x-r)^1}{1!} + \frac{f''(r)(x-r)^2}{2!}$$
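Taking $f = e^x$ at $r = 0$ (where $f = f' = f''$, so all three coefficients are $1$) shows the quadratic term tightening the fit over the plain tangent line:

```python
import math

# Quadratic approximation f(r) + f'(r)(x-r) + f''(r)(x-r)^2 / 2!.
def quadratic_approx(f0, f1, f2, r):
    # f0, f1, f2 are f(r), f'(r), f''(r).
    return lambda x: f0 + f1 * (x - r) + f2 * (x - r) ** 2 / 2

q = quadratic_approx(1.0, 1.0, 1.0, 0.0)   # e^x: all derivatives at 0 are 1

# Near 0 the quadratic hugs e^x more tightly than the tangent 1 + x.
x = 0.1
quad_err = abs(math.exp(x) - q(x))
lin_err = abs(math.exp(x) - (1 + x))
```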
The notation for the quadratic approximation of a function $f$ is $Q(f)$. It satisfies
$$Q(f(x) \cdot g(x)) = Q(Q(f(x))\cdot Q(g(x)))$$
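In other words: to get the quadratic approximation of a product, multiply the two quadratic approximations and drop every term above degree 2. A sketch with coefficient lists `[c0, c1, c2]` around $r = 0$ (the representation and example functions are my own):

```python
# Multiply two polynomials given as coefficient lists [c0, c1, c2],
# truncating the product at the given degree -- this is Q(Q(f) * Q(g)).
def truncate_product(p, q, degree=2):
    out = [0.0] * (degree + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= degree:
                out[i + j] += a * b
    return out

# Around 0: Q(e^x) = 1 + x + x^2/2 and Q(sin x) = x. Their truncated
# product is x + x^2, which is also Q(e^x * sin x) computed directly
# (value 0, first derivative 1, second derivative 2 at x = 0).
qf = [1.0, 1.0, 0.5]   # e^x
qg = [0.0, 1.0, 0.0]   # sin x
```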
It's not particularly difficult to see the pattern in the approximations above. For a function $f$, its approximation at $x=r$ is
$$f(x) = f(r) + \sum^{\infty}_{n=1}{\frac{f^{(n)}(r) \cdot (x-r)^n}{n!}},$$
which is called the Taylor series.
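Truncating the series after finitely many terms gives better and better approximations. For $f = e^x$ at $r = 0$ every derivative $f^{(n)}(0)$ equals $1$, so the partial sums are easy to compute:

```python
import math

# Partial sums of the Taylor series of e^x at r = 0:
# sum over n of x^n / n!, since every derivative of e^x at 0 is 1.
def exp_taylor(x, terms):
    total = 0.0
    for n in range(terms):
        total += x ** n / math.factorial(n)
    return total

# With 12 terms the series already matches e^1 very closely.
approx = exp_taylor(1.0, 12)
```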