The Inverse Function Theorem

Revision of Injectivity

Recall that a function is injective if it doesn't map two distinct points to the same point.

We express this formally using the following definition:

Definition: If $X$ and $Y$ are sets and $f:X \rightarrow Y$ is a function, then $f$ is injective iff for any two points $x,x'\in X$ such that $x\neq x'$ we have $f(x)\neq f(x')$.

Recall that if $X$ and $Y$ are sets and $f:X \rightarrow Y$ is a function between them then $f$ is injective iff for any two points $x,x'\in X$ such that $f(x)=f(x')$ we have in fact that $x=x'$. In the case of continuous functions $f:\mathbb{R} \rightarrow \mathbb{R}$, a function is injective iff it is strictly increasing or strictly decreasing. For instance in the following diagram

the blue line $f(x)=x+1$ is injective but the flat green line $g(x)=3$ is not. To see that $g$ is not injective we only need to exhibit two distinct points at which $g$ takes the same value: for instance $g(1)=3=g(-2)$ but clearly $1\neq -2$. The following diagram shows the graph of $f(x)=\frac{1}{10}x^3+3$ in blue and the graph of $g(x)=-x^2+2$ in green:
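The counterexample above can be spot-checked numerically. The sketch below (with assumed helper name `is_injective_on`) tests whether a function takes distinct values at distinct sample points; a repeated value witnesses a failure of injectivity.

```python
# Numerical spot-check of injectivity on a finite sample of points.
# f(x) = x + 1 is injective; g(x) = 3 is constant, hence not injective.

def is_injective_on(func, points):
    """Return True if func takes distinct values at all distinct sample points."""
    values = [func(x) for x in points]
    return len(set(values)) == len(values)

f = lambda x: x + 1
g = lambda x: 3

sample = [-2, -1, 0, 1, 2]
print(is_injective_on(f, sample))  # True
print(is_injective_on(g, sample))  # False: e.g. g(1) == 3 == g(-2)
```

Of course a finite check can only refute injectivity, never prove it; here $g(1)=g(-2)$ is exactly the witness used in the text.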

The Definition of Derivative

If $f:\mathbb{R}^n \rightarrow \mathbb{R}^m$ is a function and $\vec{x}_0\in \mathbb{R}^n$ then a linear map $\alpha$ is a derivative for $f$ at $\vec{x}_0$ iff there exists a neighbourhood $U$ of $\vec{x}_0$ and a function $\epsilon:U \rightarrow \mathbb{R}^m$ such that for all $\vec{x}\in U$: $$f(\vec{x}) = f(\vec{x}_0) + \alpha(\vec{x}-\vec{x}_0) + \epsilon(\vec{x})\|\vec{x}-\vec{x}_0\|$$ and $\epsilon(\vec{x}) \to \vec{0}$ as $\vec{x} \to \vec{x}_0$.
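The definition can be checked numerically for a concrete map. As an assumed example, take $f(x,y)=(x^2,\,xy)$ at $\vec{x}_0=(1,1)$, whose Jacobian there is $\alpha = \begin{pmatrix}2 & 0\\ 1 & 1\end{pmatrix}$. Solving the displayed equation for $\epsilon$ and approaching $\vec{x}_0$ along the diagonal, the error term should shrink to zero:

```python
import math

# Sketch: f(x, y) = (x**2, x*y) at x0 = (1, 1), with Jacobian
# alpha = [[2, 0], [1, 1]].  The definition predicts that
#   eps(x) = (f(x) - f(x0) - alpha(x - x0)) / ||x - x0||
# tends to (0, 0) as x tends to x0.

def f(x, y):
    return (x * x, x * y)

def alpha(h, k):                    # the linear map alpha applied to (h, k)
    return (2 * h, 1 * h + 1 * k)

x0, y0 = 1.0, 1.0
fx0 = f(x0, y0)

for t in [1.0, 0.1, 0.01, 0.001]:
    h, k = t, t                     # approach x0 along the diagonal
    fx = f(x0 + h, y0 + k)
    ax = alpha(h, k)
    norm = math.hypot(h, k)
    eps = tuple((fx[i] - fx0[i] - ax[i]) / norm for i in range(2))
    print(t, eps)                   # eps shrinks linearly with t
```

Here each component of $\epsilon$ works out to $t/\sqrt{2}$, which indeed tends to $0$.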

The One Dimensional Inverse Function Theorem

Let $f:\mathbb{R}\rightarrow \mathbb{R}$ and let $\alpha\in \mathbb{R}$ be the derivative of $f$ at a point $x_0$. Then the following equation holds $$f({x}) = f({x}_0) + \alpha\cdot({x}-{x}_0) + \epsilon({x})|{x}-{x}_0|$$ where $\epsilon$ is a function of $x$ such that $\epsilon(x)\to 0$ as $x\to x_0$. In the diagram below the blue curve is the function $f(x)$ and we further assume that the derivative of $f$ at $x_0=-1$ is not zero. The black line is the linear approximation to $f(x)$ at $x_0$. The red line shows the size of the error term $\epsilon(a)$ at a point $a$. Move the red point to see how the error changes with $a$.
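The one-dimensional equation can be verified for a concrete curve. As an assumed example, take $f(x)=x^3$ at $x_0=-1$, where $\alpha=f'(-1)=3\neq 0$; solving the equation for $\epsilon$ shows the error vanishing as $x$ approaches $x_0$:

```python
# Sketch: f(x) = x**3 at x0 = -1, where alpha = f'(-1) = 3.
# Solving the displayed equation for eps gives
#   eps(x) = (f(x) - f(x0) - alpha*(x - x0)) / |x - x0|,
# which should shrink as x approaches x0.

f = lambda x: x ** 3
x0, alpha = -1.0, 3.0

for x in [-0.5, -0.9, -0.99, -0.999]:
    eps = (f(x) - f(x0) - alpha * (x - x0)) / abs(x - x0)
    print(x, eps)                   # eps tends to 0 as x tends to -1
```

Algebraically, $x^3+1-3(x+1)=(x+1)^2(x-2)$, so for $x>-1$ we get $\epsilon(x)=(x+1)(x-2)$, which indeed tends to $0$.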

Notice that we have chosen the green interval small enough so that the red error is never large enough to make the blue curve 'bend back on itself'. More precisely we have chosen the green interval small enough so that the function $f$ is injective on the green interval.

But can we always do this? In general we cannot. For instance consider the function $f(x)=x^2$ at the point $x=0$, where the derivative is zero:
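The failure is easy to witness numerically: on every interval $(-\delta,\delta)$, the function $f(x)=x^2$ identifies $x$ with $-x$, so no neighbourhood of $0$ makes it injective.

```python
# f(x) = x**2 has derivative 0 at x = 0, and on every interval (-d, d)
# it sends x and -x to the same value, so it is injective on no
# neighbourhood of 0.

f = lambda x: x * x

for d in [1.0, 0.1, 0.001]:
    x = d / 2                       # a point inside (-d, d)
    print(f(x) == f(-x), x != -x)   # True True: two distinct points, one value
```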

The inverse function theorem tells us that if $f$ is a continuously differentiable function and $x$ is a point in the domain of $f$ such that the derivative of $f$ at $x$ is invertible, then we can always find an open set around $x$ on which $f$ is injective.
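The conclusion can be spot-checked on a concrete example (assumed here, not from the text): $f(x)=\sin x$ has $f'(0)=\cos 0 = 1 \neq 0$, so the theorem guarantees an open set around $0$ on which $f$ is injective. Sampling a grid in $(-1,1)$ finds no repeated values:

```python
import math

# f(x) = sin(x) has f'(0) = cos(0) = 1 != 0, so the inverse function
# theorem guarantees an open set around 0 on which f is injective.
# Spot-check: no repeated values on a grid in [-1, 1].

f = math.sin

samples = [-1 + i * 0.01 for i in range(201)]   # grid in [-1, 1]
values = [f(x) for x in samples]
print(len(set(values)) == len(values))  # True: all sampled values distinct
```

Indeed $\sin$ is strictly increasing on $[-1,1]$, so the grid check agrees with what the theorem predicts (a finite check is only illustrative, of course).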