I stumbled upon this question, which I think was answered incorrectly. Consider the Dirac delta function
\delta\colon\mathbb R\to[0,\infty],\qquad x\mapsto\begin{cases}\infty,&\text{if }x=0,\\0,&\text{otherwise,}\end{cases}
as a mapping from \mathbb R to [0,\infty] (which is perfectly well-defined) and NOT as a distribution \mathcal S(\mathbb R)\to\mathbb R or C_c^\infty(\mathbb R)\to\mathbb R. I commonly see people use the sequence \delta_n:=n\cdot1_{[-\frac{1}{2n},\frac{1}{2n}]} to "prove" \int\delta\,d\lambda=1, but this sequence isn't even increasing. If that argument actually held, what would prevent me from using \delta_n:=\alpha\cdot n\cdot1_{[-\frac{1}{2n},\frac{1}{2n}]} with \alpha>0 to "prove" \int\delta\,d\lambda=\alpha (the computation is spelled out below)? Now my question is whether the function \delta as defined above is actually Lebesgue-integrable.
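To spell out the computation behind that objection: for every n\geq1 we have
\int\delta_n\,d\lambda=\alpha\cdot n\cdot\lambda\left(\left[-\tfrac{1}{2n},\tfrac{1}{2n}\right]\right)=\alpha\cdot n\cdot\tfrac{1}{n}=\alpha,
so \lim_{n\to\infty}\int\delta_n\,d\lambda=\alpha for any choice of \alpha>0.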
Here's my shot at proving that it is Lebesgue-integrable (maybe I just missed some detail): \delta\colon\mathbb R\to[0,\infty] is \mathcal B(\mathbb R)-\mathcal B(\overline{\mathbb R})-measurable, because we have \delta^{-1}(\mathcal B(\overline{\mathbb R}))=\{\emptyset,\{0\},\mathbb R\setminus\{0\},\mathbb R\}\subseteq\mathcal B(\mathbb R). Furthermore, \delta_n:=n\cdot1_{\{0\}}\colon\mathbb R\to[0,\infty[ is a simple function (non-negative, bounded, measurable and with finite image) for every n\geq1, the sequence (\delta_n)_{n\geq1} is monotonically increasing, it converges pointwise to \delta, and it holds that \lim_{n\to\infty}\int\delta_n\,d\lambda=\lim_{n\to\infty}n\cdot\lambda(\{0\})=0<\infty. Hence, \delta is Lebesgue-integrable by definition and we have \int\delta\,d\lambda=0.
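For completeness, the claims about (\delta_n)_{n\geq1} spelled out: the sequence is increasing because n\cdot1_{\{0\}}\leq(n+1)\cdot1_{\{0\}} pointwise, and it converges pointwise to \delta because \delta_n(0)=n\to\infty=\delta(0) while \delta_n(x)=0=\delta(x) for every x\neq0.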
APPENDIX: Here is the definition of Lebesgue-integrability that I've learned. Let (\Omega,\mathcal A,\mu) be some measure space. A function f\colon\Omega\to[0,\infty) is called simple if it is \mathcal A-\mathcal B(\mathbb R)-measurable and has a finite image, i.e. f=\sum_{i=1}^k\alpha_i1_{A_i} for some \alpha_1,\dots,\alpha_k\in[0,\infty[ and A_1,\dots,A_k\in\mathcal A (not necessarily disjoint). Its integral is defined as \int f\,d\mu:=\sum_{i=1}^k\alpha_i\mu(A_i)\in[0,\infty]. We then showed that this is well-defined, i.e. independent of the choice of \alpha_1,\dots,\alpha_k and A_1,\dots,A_k.

Now let f\colon\Omega\to[0,\infty] be \mathcal A-\mathcal B(\overline{\mathbb R})-measurable. We proceeded to show that there exists an increasing sequence (f_n)_{n\in\mathbb N} of simple functions f_n\colon\Omega\to[0,\infty) such that f_n\to f pointwise. We then defined the integral of f to be \int f\,d\mu:=\lim_{n\to\infty}\int f_n\,d\mu\in[0,\infty] and showed that this is well-defined too, i.e. independent of the choice of (f_n)_{n\in\mathbb N}. We called f integrable if \int f\,d\mu<\infty.

Finally, an \mathcal A-\mathcal B(\overline{\mathbb R})-measurable function f\colon\Omega\to\overline{\mathbb R} is called integrable if f^+:=f\cdot1_{\{f\geq0\}}\colon\Omega\to[0,\infty] and f^-:=-f\cdot1_{\{f\leq0\}}\colon\Omega\to[0,\infty] are integrable. In this case we define \int f\,d\mu:=\int f^+\,d\mu-\int f^-\,d\mu.
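To make the approximation step concrete, one standard choice of such a sequence (not necessarily the one from the lecture; by well-definedness any choice gives the same integral) is
f_n:=\sum_{k=0}^{n2^n-1}\frac{k}{2^n}\cdot1_{\{k/2^n\leq f<(k+1)/2^n\}}+n\cdot1_{\{f\geq n\}}.
For f=\delta this yields exactly f_n=n\cdot1_{\{0\}}, the sequence used in my attempt above.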
Using this definition, it is easy to show by measure-theoretic induction that an \mathcal A-\mathcal B(\overline{\mathbb R})-measurable function f\colon\Omega\to\overline{\mathbb R} with \mu(\{f\neq0\})=0 is integrable and has integral 0.
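A sketch of that induction: for a simple function g with \mu(\{g\neq0\})=0, write g=\sum_{c\in g(\Omega)\setminus\{0\}}c\cdot1_{g^{-1}(\{c\})}; each g^{-1}(\{c\})\subseteq\{g\neq0\} is a null set, so \int g\,d\mu=0. For measurable f\geq0 with \mu(\{f\neq0\})=0, pick simple g_n\uparrow f; then \{g_n\neq0\}\subseteq\{f\neq0\}, so \int f\,d\mu=\lim_{n\to\infty}\int g_n\,d\mu=0<\infty. For general f, apply this to f^+ and f^-, both of which vanish outside \{f\neq0\}.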
\delta\colon\mathbb R\to[0,\infty] clearly is \mathcal B(\mathbb R)-\mathcal B(\overline{\mathbb R})-measurable and satisfies \lambda(\{\delta\neq0\})=\lambda(\{0\})=0. Hence \delta\colon\mathbb R\to[0,\infty] is integrable and \int\delta\,d\lambda=0.
Answer
It depends on your definition of Lebesgue Integrable...
If one defines \int \delta \, d\lambda := \lim_{n \to \infty} \int \delta_n \, d\lambda with your \delta_n := n \cdot 1_{[-\frac{1}{2n},\frac{1}{2n}]}, then, since the limit exists and equals 1, \delta is integrable.
However, usually the Lebesgue integral is defined as \int \delta d \lambda := \sup\left\{ \sum_{i=1}^n (b_i-a_i)\alpha_i \right\}, where the \sup runs over all functions \varphi(t) := \sum_{i=1}^n \alpha_i \cdot 1_{(a_i,b_i)}(t) that satisfy 0 \leq \varphi(t) \leq \delta(t). (Here 1_S(t) is the indicator function of the set S.) In this definition, \int \delta d \lambda = 0, so \delta is integrable for a stupid reason: it's 0 almost everywhere.
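To spell out why the supremum is 0 (a sketch): any admissible \varphi satisfies 0 \leq \varphi(x) \leq \delta(x) = 0 for every x \neq 0, so \varphi vanishes off \{0\}. Since \varphi is constant on each open interval of the common refinement of the (a_i,b_i), and every nonempty open interval contains points other than 0, \varphi vanishes on all of these intervals, and hence \sum_{i=1}^n (b_i-a_i)\alpha_i = 0 for every admissible \varphi.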
PS: A generally difficult problem in analysis is to identify when \lim \int f_n = \int \lim f_n. With your \delta and \delta_n functions, you have an example of when the two sides are not equal.
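Concretely, with your original \delta_n := n \cdot 1_{[-\frac{1}{2n},\frac{1}{2n}]}:
\lim_{n\to\infty}\int\delta_n\,d\lambda=\lim_{n\to\infty}n\cdot\tfrac{1}{n}=1\neq0=\int\delta\,d\lambda=\int\lim_{n\to\infty}\delta_n\,d\lambda.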