<p><strong>Defining the value of a distribution at a point</strong> (MathOverflow question 323753, asked by B K, 2019-02-21)</p>
<p>Let <span class="math-container">$\omega\in D'(\mathbb R^n)$</span> be a distribution and <span class="math-container">$p\in \mathbb R^n$</span>. If there is an open set <span class="math-container">$U\subset \mathbb R^n$</span> containing <span class="math-container">$p$</span> such that <span class="math-container">$\omega|_U$</span> is given by a continuous function <span class="math-container">$f\in C(U)$</span>, then for every <span class="math-container">$\phi\in C^\infty_c(\mathbb R^n)$</span> with <span class="math-container">$\int_{\mathbb R^n}\phi(x)d x=1$</span> we can define a Dirac sequence <span class="math-container">$\{\phi^p_j\}_{j\in \mathbb N}\subset D(\mathbb R^n)$</span> by <span class="math-container">$\phi^p_j(x):=j^n\phi(j(x-p))$</span> which fulfills
<span class="math-container">$$
\omega(\phi^p_j)\to f(p)\quad \text{ as }j\to \infty.
$$</span>
This shows that we can recover the value <span class="math-container">$\omega(p)\equiv f(p)$</span> of the distribution <span class="math-container">$\omega$</span> at the point <span class="math-container">$p$</span> via a limit of such Dirac sequences.</p>
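For <span class="math-container">$n=1$</span> this recovery can be checked numerically. The sketch below (my own illustration; the specific bump function is a standard mollifier, not something from the question) approximates <span class="math-container">$\omega(\phi_j^p)$</span> by quadrature when <span class="math-container">$\omega$</span> is given by a continuous <span class="math-container">$f$</span>:

```python
import numpy as np

def trapezoid(fx, x):
    """Composite trapezoidal rule (kept explicit to avoid NumPy version issues)."""
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x)))

def bump(t):
    """Unnormalised C-infinity bump supported on (-1, 1)."""
    out = np.zeros_like(t)
    m = np.abs(t) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - t[m] ** 2))
    return out

t = np.linspace(-1.0, 1.0, 400001)
Z = trapezoid(bump(t), t)          # normalisation so that ∫ phi = 1
phi = lambda s: bump(s) / Z

def pair(f, p, j, grid=400001):
    """omega(phi_j^p) = ∫ f(x) * j * phi(j (x - p)) dx, omega given by f."""
    x = np.linspace(p - 1.0 / j, p + 1.0 / j, grid)   # support of phi_j^p
    return trapezoid(f(x) * j * phi(j * (x - p)), x)

vals = [pair(np.cos, 0.3, j) for j in (1, 10, 100)]
# vals converges to cos(0.3) as j grows
```

Since the bump is even, the error is of order <span class="math-container">$j^{-2}$</span> (the linear term in the Taylor expansion of <span class="math-container">$f$</span> integrates to zero).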
<p>Now, suppose that for some <span class="math-container">$\omega\in D'(\mathbb R^n)$</span> and <span class="math-container">$p\in \mathbb R^n$</span> we just know that
<span class="math-container">$
\lim_{j\to \infty}\omega(\phi^p_j)
$</span>
exists for every <span class="math-container">$\phi\in C^\infty_c(\mathbb R^n)$</span> with <span class="math-container">$\int_{\mathbb R^n}\phi(x)d x=1$</span> and is independent of <span class="math-container">$\phi$</span>. In view of the above it then seems reasonable to define <span class="math-container">$\omega(p):=\lim_{j\to \infty}\omega(\phi^p_j)$</span> and to say that <span class="math-container">$\omega$</span> has a well-defined value at the point <span class="math-container">$p$</span>.</p>
<p><strong>Q:</strong> Is this definition useful in any sense? I have the feeling that it might be fundamentally flawed. In that case, I'd find it interesting to know the greatest generality in which one can make sense of "the value of a distribution at a point".</p>
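One instructive test case (my own illustration, not part of the question): for the Heaviside step <span class="math-container">$H$</span> and <span class="math-container">$p=0$</span>, the substitution <span class="math-container">$y=jx$</span> gives <span class="math-container">$\omega(\phi_j^0)=\int_0^\infty \phi(y)\,dy$</span> for every <span class="math-container">$j$</span>, so the limit exists for each admissible <span class="math-container">$\phi$</span> but depends on <span class="math-container">$\phi$</span>; hence <span class="math-container">$H$</span> has no well-defined value at <span class="math-container">$0$</span> under this definition. A numerical sketch:

```python
import numpy as np

def trapezoid(fx, x):
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x)))

def bump(t, c=0.0):
    """Unnormalised C-infinity bump supported on (c - 1, c + 1)."""
    out = np.zeros_like(t)
    m = np.abs(t - c) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - (t[m] - c) ** 2))
    return out

x = np.linspace(-4.0, 4.0, 800001)
Z0 = trapezoid(bump(x, 0.0), x)
Z1 = trapezoid(bump(x, 0.5), x)
phi_sym  = lambda s: bump(s, 0.0) / Z0    # ∫ phi_sym = 1, even bump
phi_skew = lambda s: bump(s, 0.5) / Z1    # ∫ phi_skew = 1, shifted right

H = 0.5 * (np.sign(x) + 1.0)              # Heaviside (value 1/2 at 0 is harmless)

def pair(phi, j):
    """omega(phi_j^0) = ∫ H(x) * j * phi(j x) dx for the Heaviside step H."""
    return trapezoid(H * j * phi(j * x), x)

lims_sym  = [pair(phi_sym, j)  for j in (5, 50)]   # ≈ 0.5, independent of j
lims_skew = [pair(phi_skew, j) for j in (5, 50)]   # a different j-independent value
```

The two sequences stabilise at different numbers (<span class="math-container">$\int_0^\infty\phi$</span> for each <span class="math-container">$\phi$</span>), which is exactly the failure of <span class="math-container">$\phi$</span>-independence.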
<p><strong>Additional thoughts after 1st edit:</strong> Some "consistency checks" for the definition would in my opinion be the following: </p>
<ol>
<li><p>If the value of <span class="math-container">$\omega$</span> exists at every point in some open set <span class="math-container">$U\subset \mathbb R^n$</span> and the function <span class="math-container">$f$</span> defined on <span class="math-container">$U$</span> by these values is continuous, then <span class="math-container">$\omega|_U$</span> is given by <span class="math-container">$f$</span>.</p></li>
<li><p>If the value of <span class="math-container">$\omega$</span> exists at Lebesgue-almost every point in some open set <span class="math-container">$U\subset \mathbb R^n$</span> and the values define a function <span class="math-container">$f\in L^1_{\mathrm{loc}}(U)$</span>, then <span class="math-container">$\omega|_U$</span> is given by <span class="math-container">$f$</span>.</p></li>
</ol>
<p>I believe that at least property 1 should be true and I'll check it once I find the time.</p>
<p><strong>2nd edit</strong>: My question is related to <a href="https://mathoverflow.net/questions/262606/distribution-that-vanishes-against-approximated-delta-is-zero">this MO question</a>, which corresponds to the case <span class="math-container">$f\equiv 0$</span>.</p>
<p><strong>Answer by Abdelmalek Abdesselam</strong> (2019-02-21)</p>
<p>It's not a bad definition and I think it is better to think of it as a particular case of the "restriction problem", i.e., trying to define the restriction <span class="math-container">$\omega|_{\Gamma}$</span> of <span class="math-container">$\omega$</span> to some subset <span class="math-container">$\Gamma\subset\mathbb{R}^n$</span>.
When one succeeds, the result is called a <em>trace theorem</em>. This usually requires a quantitative regularity hypothesis on <span class="math-container">$\omega$</span>, e.g., membership in a Sobolev space <span class="math-container">$H^s$</span> with <span class="math-container">$s$</span> above some threshold.</p>
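For concreteness, here is my recollection of the classical Sobolev trace theorem (a standard statement, added for the reader; it is not spelled out in the answer):

```latex
% Restriction to a hyperplane loses half a derivative: for s > 1/2 the map
%   C^\infty_c(\mathbb{R}^n) \ni u \longmapsto u|_{x_n = 0}
% extends to a bounded operator
\[
  \operatorname{tr} : H^s(\mathbb{R}^n) \longrightarrow H^{s-1/2}(\mathbb{R}^{n-1}),
  \qquad s > \tfrac{1}{2}.
\]
% Iterating n - m times: restriction to an m-dimensional subspace is bounded
% from H^s(\mathbb{R}^n) to H^{s-(n-m)/2}(\mathbb{R}^m) for s > (n-m)/2.
% For a point (m = 0) this needs s > n/2, which by Sobolev embedding already
% makes the distribution continuous: point evaluation is the hardest case
% of the restriction problem.
```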
<p>A particularly important case is when <span class="math-container">$\Gamma$</span> is an affine subspace, or say for simplicity a linear subspace like <span class="math-container">$\Gamma =\mathbb{R}^m\times\{0\}^{n-m}\subset\mathbb{R}^n$</span>.
A rather standard approach is to start with <span class="math-container">$\omega\in\mathcal{D}'(\mathbb{R}^n)$</span>.
The convolution <span class="math-container">$\omega\ast \phi_j^0$</span> is in the space of <span class="math-container">$C^{\infty}$</span> functions
<span class="math-container">$\mathcal{E}(\mathbb{R}^n)\subset
\mathcal{D}'(\mathbb{R}^n)$</span> and converges to <span class="math-container">$\omega$</span>
in the topology of <span class="math-container">$\mathcal{D}'(\mathbb{R}^n)$</span> (the strong topology).
The ordinary restriction <span class="math-container">$\omega\ast \phi_j^0|_{\Gamma}$</span> makes sense
and you can ask if the limit <span class="math-container">$\lim_{j\rightarrow\infty}\omega\ast \phi_j^0|_{\Gamma}$</span>
exists inside <span class="math-container">$\mathcal{D}'(\mathbb{R}^m)$</span>.</p>
<p>Your particular case <span class="math-container">$p=0$</span> corresponds to mine with <span class="math-container">$m=0$</span>.</p>
<p>Another problem of this kind is pointwise multiplication. If <span class="math-container">$\omega_1(x)$</span> and <span class="math-container">$\omega_2(x)$</span> are two distributions, then there is no problem defining <span class="math-container">$\omega_1(x_1)\omega_2(x_2)$</span> (tensor product), but the issue is how to restrict to the diagonal <span class="math-container">$\Gamma=\{x_1=x_2\}$</span>.</p>
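A quick computation (mine, not the answerer's) showing how diagonal restriction fails in the simplest bad case <span class="math-container">$\omega_1=\omega_2=\delta$</span>: mollify and watch the mass blow up.

```latex
% With \phi_j(x) = j\,\phi(jx) as in the question, \delta * \phi_j = \phi_j,
% so the candidate for "\delta^2" on the diagonal is the limit of \phi_j^2.
% But for any test function \psi with \psi(0) \ne 0, substituting y = jx,
\[
  \int_{\mathbb{R}} \phi_j(x)^2\,\psi(x)\,dx
  = j \int_{\mathbb{R}} \phi(y)^2\,\psi(y/j)\,dy
  \;\sim\; j\,\psi(0) \int_{\mathbb{R}} \phi(y)^2\,dy
  \;\longrightarrow\; \infty ,
\]
% so \phi_j^2 has no limit in \mathcal{D}'(\mathbb{R}): the tensor product
% \delta(x_1)\delta(x_2) admits no restriction to the diagonal \{x_1 = x_2\}.
```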
<p>Finally, note that all of these problems become much more interesting for random distributions, because it's like magic: you can sometimes do the (deterministically) impossible.</p>
<hr>
<p><strong>Small addendum:</strong> Suppose that for some reason one has a trace theorem but only for large enough <span class="math-container">$m$</span> and one cannot do the <span class="math-container">$m=0$</span> or the point restriction case. Then one can still do the following "stabilization" trick: change <span class="math-container">$\omega$</span> to <span class="math-container">$\omega\otimes 1$</span> where one tensors with the constant function equal to one seen as a distribution in say <span class="math-container">$p$</span> new variables. If you can restrict it from <span class="math-container">$\mathbb{R}^{n+p}$</span> to a subspace of dimension <span class="math-container">$p$</span>, then you will have your point evaluation after factoring out the <span class="math-container">$\otimes 1$</span>.
The last step of course needs your restriction construction to be invariant/covariant by translation along <span class="math-container">$\Gamma$</span>.</p>
<p><strong>Answer by Daniele Tampieri</strong> (2019-02-22)</p>
<p>The definition of the value of a distribution at a point you describe in your question does not seem flawed to me since, at least from the point of view of independence of the choice of <span class="math-container">$\delta$</span>-sequence, it follows the path traced years ago by Stanisław Łojasiewicz in the paper [1], so I describe his approach to the problem below.</p>
<p>Łojasiewicz analyzes the problem for functions of one variable, i.e. <span class="math-container">$n=1$</span>: by using the definition of <em>change of variables in a distribution</em> (see for example [2], §1.9 pp. 21-22) and considering the change of variable <span class="math-container">$y=x_0+\lambda x$</span>, for <span class="math-container">$ x,x_0,\lambda \in\Bbb R$</span>, i.e.
<span class="math-container">$$
\begin{split}
T(x_0+\lambda x)&\triangleq \langle T(x_0+\lambda x),\varphi(x)\rangle\\
&=\left\langle T(y),\frac{\varphi\big(\lambda^{-1} (y-x_0)\big)}{|\lambda|}\right\rangle
\end{split}
\quad \varphi\in\mathscr{D}(\Bbb R)
$$</span>
he defines the <em>limit of a distribution at a point <span class="math-container">$x_0$</span></em> as ([1], §1 pp. 2-3)
<span class="math-container">$$
\lim_{x\to x_0} T\triangleq \lim_{\lambda\to 0} T(x_0+\lambda x)
\label{1}\tag{1}
$$</span>
and proves that</p>
<ul>
<li><span class="math-container">$\lim_{x\to x_0} T=\lim_{x\to x_0^+} T=\lim_{x\to x_0^-} T$</span></li>
<li>by using an earlier result of Zieleźny, if the limit \eqref{1} exists, it is necessarily a constant <span class="math-container">$C\in \Bbb C$</span>, or more precisely a <em>constant distribution</em> <span class="math-container">$C$</span>.</li>
<li>a necessary and sufficient condition for the limit \eqref{1} to exist is (see [1], §2, theorem 2.2, pp. 5-7) that <span class="math-container">$T=f^{(n)}$</span>, where <span class="math-container">$f\in C^0(\Bbb R)$</span> and
<span class="math-container">$$
\lim_{x\to x_0}\frac{f(x)}{(x-x_0)^n}=\frac{C}{n!}.
$$</span></li>
</ul>
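As a sanity check of this criterion (my own worked example, not from [1]): the Dirac <span class="math-container">$\delta$</span> has no point value at <span class="math-container">$x_0=0$</span>.

```latex
% For n = 1 no continuous f with f' = \delta exists at all.  For n \ge 2,
% any continuous f with f^{(n)} = \delta has the form
%   f(x) = x_+^{\,n-1}/(n-1)! + p(x),  deg p \le n-1,  x_+ := \max(x,0).
% On x < 0 we have f = p, and p(x)/x^n has a finite limit at 0 only if the
% polynomial p vanishes to order n, i.e. p \equiv 0.  But then, from the right,
\[
  \lim_{x \to 0^+} \frac{f(x)}{x^n}
  = \lim_{x \to 0^+} \frac{x^{\,n-1}}{(n-1)!\,x^n}
  = \lim_{x \to 0^+} \frac{1}{(n-1)!\,x} = +\infty ,
\]
% so the criterion fails for every n, and \delta has no value at 0.
```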
<p>Then <strong><em>Łojasiewicz assumes \eqref{1} as the definition of the value of a distribution at a point</em></strong>: note that this definition does not rely on any particular test function (or sequence of test functions) <span class="math-container">$\varphi\in\mathscr{D}(\Bbb R)$</span>, as stated above. Now a few observations:</p>
<ol>
<li>Łojasiewicz ([1], §1 p. 1) states that the case <span class="math-container">$n>1$</span> will be analyzed in a subsequent paper, which to my knowledge has never been published. However (but this is only my opinion), a generalization of \eqref{1} could perhaps be attempted by using the Stolz condition as described, for example, in the textbook of Griffith Bailey Price (1984), <em>Multivariable Analysis</em>, Springer-Verlag.</li>
<li>Łojasiewicz gives another necessary and sufficient condition for the limit \eqref{1} to exist, in terms of Denjoy differentials ([1], §2, corollary to theorem 2.2, p. 7).</li>
<li>The term <span class="math-container">$\lambda^{-1}$</span>, more or less intrinsically used in \eqref{1}, suggests the possible use of the <a href="https://en.wikipedia.org/wiki/Mellin_transform" rel="nofollow noreferrer">Mellin transform</a>: this suggestion was followed by Bogdan Ziemian in [3], §12 pp. 41-42. He defines a <em>(generalized) spectral value of a function/distribution at a point</em> and proves ([3], §12 p. 43) that it coincides with the Łojasiewicz point value \eqref{1} when the latter exists (by using the necessary and sufficient condition above); the construction of Ziemian, however, does not apply to all distributions.</li>
</ol>
<p>[1] Stanisław Łojasiewicz (1957-1958), "<a href="http://matwbn.icm.edu.pl/ksiazki/sm/sm16/sm1611.pdf" rel="nofollow noreferrer">Sur la valeur et la limite d'une distribution en un point</a>" (French), Studia Mathematica, Vol. 16, Issue 1, pp. 1-36, <a href="http://www.ams.org/mathscinet-getitem?mr=MR0087905" rel="nofollow noreferrer">MR0087905</a>, <a href="https://zbmath.org/?q=an%3A0086.09405" rel="nofollow noreferrer">Zbl 0086.09405</a>.</p>
<p>[2] V. S. Vladimirov (2002), <em><a href="https://books.google.it/books?id=6UpZDwAAQBAJ&printsec=frontcover&hl=it" rel="nofollow noreferrer">Methods of the theory of generalized functions</a></em>, Analytical Methods and Special Functions, Vol. 6, London–New York: Taylor & Francis, pp. XII+353, ISBN 0-415-27356-0, <a href="http://www.ams.org/mathscinet-getitem?mr=MR2012831" rel="nofollow noreferrer">MR2012831</a>, <a href="https://zbmath.org/?q=an%3A1078.46029" rel="nofollow noreferrer">Zbl 1078.46029</a>.</p>
<p>[3] Bogdan Ziemian (1988), "<a href="http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.zamlynska-12bb5d6a-bf89-45ae-82f6-d4843817119b/c/rm26401.pdf" rel="nofollow noreferrer">Taylor formula for distributions</a>", Rozprawy Matematyczne 264, 56 pp., ISBN 83-01-07898-7, ISSN 0012-3862, <a href="http://www.ams.org/mathscinet-getitem?mr=MR0931848" rel="nofollow noreferrer">MR0931848</a>, <a href="https://zbmath.org/?q=an%3A0685.46025" rel="nofollow noreferrer">Zbl 0685.46025</a>.</p>
<p><strong>Answer by user131781</strong> (2019-02-22)</p>
<p>As indicated above, the concept of the limit, resp. the value, of a distribution at a point was studied intensively over 50 years ago. Here is a very elementary and natural definition due to Sebastião e Silva (it is Definition 6.9 in his paper "On integrals and orders of growth of distributions"; I will not give a reference since it can be found online just by googling the title).</p>
<p>A distribution <span class="math-container">$s$</span> on an interval <span class="math-container">$I$</span> is said to be continuous at a point <span class="math-container">$c$</span> if there is a natural number <span class="math-container">$p$</span> and a continuous function <span class="math-container">$F$</span> on <span class="math-container">$I$</span> so that <span class="math-container">$s=D^pF$</span> (distributional derivative) and <span class="math-container">$\dfrac {F(x)}{(x-c)^p}$</span> converges in the usual sense as <span class="math-container">$x$</span> goes to <span class="math-container">$c$</span>. Then we write <span class="math-container">$s(c)$</span> for <span class="math-container">$p!$</span> times this limit and call it the value of the distribution at <span class="math-container">$c$</span>. As an example, he shows that <span class="math-container">$\cos \frac 1 x$</span> has the value <span class="math-container">$0$</span> at <span class="math-container">$0$</span>.</p>
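A short verification of that example in this framework (my own computation, with <span class="math-container">$p=1$</span>):

```latex
% Since (x^2 \sin(1/x))' = 2x\sin(1/x) - \cos(1/x) for x \ne 0, the function
%   F(x) := \int_0^x 2t\,\sin(1/t)\,dt \;-\; x^2 \sin(1/x),  F(0) := 0,
% is Lipschitz with F' = \cos(1/x) a.e., hence DF = \cos(1/x) as distributions.
% Both terms are O(x^2), so
\[
  \left|\frac{F(x)}{x}\right| \le \frac{x^2 + x^2}{|x|} = 2|x| \longrightarrow 0
  \quad (x \to 0),
\]
% and the value of \cos(1/x) at 0 is 1! \cdot 0 = 0, as claimed.
```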
<p><strong>Answer by Gro-Tsen</strong> (2019-02-22)</p>
<p>This is not an answer, and maybe even marginally off-topic, but I'd like to point out the following example, which might be useful to keep in mind when trying to define the value of a distribution at a point (and which is too long to fit in a comment):</p>
<p>Let <span class="math-container">$g\colon\mathbb{R}\to\mathbb{R}$</span> be <span class="math-container">$g(x) = x^2\sin(\frac{1}{x})$</span> (obviously extended by <span class="math-container">$g(0)=0$</span>). This is a continuous, indeed everywhere differentiable, function on <span class="math-container">$\mathbb{R}$</span>, so we can unproblematically identify it with a distribution, call it <span class="math-container">$T$</span>. Now since <span class="math-container">$g$</span> is differentiable, we probably want to identify its derivative <span class="math-container">$g'$</span>, as a real function, with the derivative <span class="math-container">$T'$</span> of the corresponding distribution <span class="math-container">$T$</span>, so we might want to conclude that the value at <span class="math-container">$0$</span> of the distribution <span class="math-container">$T'$</span> should be (well-defined and equal to) <span class="math-container">$g'(0) = 0$</span>. But since <span class="math-container">$g'$</span> is not continuous at <span class="math-container">$0$</span>, it is not easy to come up with a justification for why <span class="math-container">$T'$</span> takes that value at that point.</p>
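To connect this with the Dirac-sequence definition from the question (my own numerical sketch, not part of the answer): pairing <span class="math-container">$T'$</span> with <span class="math-container">$\phi_j=\phi_j^0$</span> via <span class="math-container">$\langle T',\phi_j\rangle = -\langle g,\phi_j'\rangle$</span> does drive the values toward <span class="math-container">$g'(0)=0$</span>, at least for a symmetric bump; the bound <span class="math-container">$|\langle T',\phi_j\rangle|\le j^{-1}\int y^2|\phi'(y)|\,dy$</span> follows from the substitution <span class="math-container">$y=jx$</span>.

```python
import numpy as np

def trapezoid(fx, x):
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x)))

def bump(t):
    """Unnormalised C-infinity bump supported on (-1, 1)."""
    out = np.zeros_like(t)
    m = np.abs(t) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - t[m] ** 2))
    return out

def dbump(t):
    """Analytic derivative of the bump (chain rule on -1/(1 - t^2))."""
    out = np.zeros_like(t)
    m = np.abs(t) < 1.0
    u = t[m]
    out[m] = np.exp(-1.0 / (1.0 - u ** 2)) * (-2.0 * u / (1.0 - u ** 2) ** 2)
    return out

t = np.linspace(-1.0, 1.0, 400001)
Z = trapezoid(bump(t), t)          # normalisation so that ∫ phi = 1

def g(x):
    """g(x) = x^2 sin(1/x), g(0) = 0."""
    out = np.zeros_like(x)
    m = x != 0.0
    out[m] = x[m] ** 2 * np.sin(1.0 / x[m])
    return out

def pair_Tprime(j, grid=2000001):
    """<T', phi_j> = -∫ g(x) * j^2 * phi'(j x) dx, with phi_j(x) = j phi(j x)."""
    x = np.linspace(-1.0 / j, 1.0 / j, grid)   # support of phi_j
    return -trapezoid(g(x) * j ** 2 * dbump(j * x) / Z, x)

vals = [pair_Tprime(j) for j in (4, 40)]
# |vals| is O(1/j): the pairings approach g'(0) = 0
```

The fine grid is needed because <span class="math-container">$\sin(1/x)$</span> oscillates rapidly near <span class="math-container">$0$</span>; the <span class="math-container">$x^2$</span> damping keeps the unresolved contribution negligible.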