8
$\begingroup$

Let $\omega\in D'(\mathbb R^n)$ be a distribution and $p\in \mathbb R^n$. If there is an open set $U\subset \mathbb R^n$ containing $p$ such that $\omega|_U$ is given by a continuous function $f\in C(U)$, then for every $\phi\in C^\infty_c(\mathbb R^n)$ with $\int_{\mathbb R^n}\phi(x)d x=1$ we can define a Dirac sequence $\{\phi^p_j\}_{j\in \mathbb N}\subset D(\mathbb R^n)$ by $\phi^p_j(x):=j^n\phi(j(x-p))$ which fulfills $$ \omega(\phi^p_j)\to f(p)\quad \text{ as }j\to \infty. $$ This shows that we can recover the value $\omega(p)\equiv f(p)$ of the distribution $\omega$ at the point $p$ via a limit of such Dirac sequences.
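In the one-dimensional case this convergence is easy to verify numerically. The sketch below is my own construction (a normalized smooth bump for $\phi$ and trapezoidal quadrature, neither of which is forced by the setup); it pairs the distribution given by $f=\cos$ with the Dirac sequence and watches the pairings approach $f(p)$:

```python
import numpy as np

# Recover f(p) from the pairings <omega, phi_j^p> when omega is given by a
# continuous f (here f = cos, n = 1).  phi is a normalized smooth bump;
# integrals are approximated by trapezoidal sums on a fine grid.

def bump(u):
    out = np.zeros_like(u)
    m = np.abs(u) < 1
    out[m] = np.exp(-1.0 / (1.0 - u[m] ** 2))  # smooth, supported on (-1, 1)
    return out

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

u = np.linspace(-1, 1, 20001)
Z = trapz(bump(u), u)                    # normalization so that \int phi = 1

f, p = np.cos, 0.3
vals = []
for j in [1, 10, 100, 1000]:
    x = p + u / j                        # support of phi_j^p is [p - 1/j, p + 1/j]
    vals.append(trapz(f(x) * j * bump(j * (x - p)) / Z, x))
    print(j, vals[-1])
# the pairings approach f(p) = cos(0.3) ~ 0.9553 as j grows
```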

Now, suppose that for some $\omega\in D'(\mathbb R^n)$ and $p\in \mathbb R^n$ we just know that $ \lim_{j\to \infty}\omega(\phi^p_j) $ exists for every $\phi\in C^\infty_c(\mathbb R^n)$ with $\int_{\mathbb R^n}\phi(x)d x=1$ and is independent of $\phi$. In view of the above it then seems reasonable to define $\omega(p):=\lim_{j\to \infty}\omega(\phi^p_j)$ and to say that $\omega$ has a well-defined value at the point $p$.
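The independence requirement has real content: for the Heaviside step $H$, the limit $\lim_{j\to\infty} H(\phi^0_j)$ exists for every $\phi$ but depends on $\phi$, so the proposed definition correctly refuses to assign $H$ a value at its jump. A numerical sketch of this (my own construction, with bump test functions):

```python
import numpy as np

# For the Heaviside step H, substitution gives <H, phi_j^0> = \int_0^inf phi(u) du
# for EVERY j, so the limit exists for each phi but depends on phi: no
# phi-independent value can be assigned at the jump.

def bump(u, c=0.0):
    out = np.zeros_like(u)
    m = np.abs(u - c) < 1
    out[m] = np.exp(-1.0 / (1.0 - (u[m] - c) ** 2))  # bump centered at c
    return out

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

u = np.linspace(-2, 2, 40001)
limits = []
for c in (0.0, 0.5):                     # a centered and a right-shifted bump
    phi = bump(u, c) / trapz(bump(u, c), u)
    lim = trapz(np.where(u > 0, phi, 0.0), u)   # = lim_j <H, phi_j^0>
    limits.append(lim)
    print(c, lim)
# centered bump gives 1/2; the shifted bump gives a different number
```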

Q: Is this definition useful in any sense? I have the feeling that it might be fundamentally flawed. In that case, I'd find it interesting to know what's the greatest generality in which one can make sense of "the value of a distribution at a point".

Additional thoughts after 1st edit: Some "consistency checks" for the definition would in my opinion be the following:

  1. If the value of $\omega$ exists at every point in some open set $U\subset \mathbb R^n$ and the function $f$ defined on $U$ by these values is continuous, then $\omega|_U$ is given by $f$.

  2. If the value of $\omega$ exists at Lebesgue-almost every point in some open set $U\subset \mathbb R^n$ and the values define a function $f\in L^1_{\mathrm{loc}}(U)$, then $\omega|_U$ is given by $f$.

I believe that at least property 1 should be true and I'll check it once I find the time.

2nd edit: My question is related to this MO question which corresponds to the case $f\equiv 0$.

$\endgroup$
  • $\begingroup$ Doesn't this just mean that the distribution is regular at $p$? $\endgroup$ – user1504 Feb 21 at 16:28
  • $\begingroup$ I think normally one would say that a distribution $\omega$ is regular at $p$ if $p$ does not lie in the singular support of $\omega$. This is much more strict than the assumption in my question: The singular support is a closed set, so if $\omega$ is regular at $p$ in the usual sense, then it is regular (i.e., smooth) in a whole neighborhood of $p$. $\endgroup$ – B K Feb 21 at 18:02
  • $\begingroup$ Can you give an example of a distribution for which this trick assigns a finite value to a singular point? It's clear (using Gaussian test functions) that it must assign $\infty$ to the Dirac $\delta$'s singular point. But philosophically, distributions are linear combinations of derivatives of deltas, so my guess would be that it assigns finite values only to non-singular points. $\endgroup$ – user1504 Feb 21 at 18:35
  • $\begingroup$ A (boring) example is any distribution $\omega$ given by a continuous function which is not smooth at $p$. I guess you are right that by the classification of distributions there are probably no really fancy examples. $\endgroup$ – B K Feb 21 at 22:11
  • 1
    $\begingroup$ @user1504 Take $\xi$ to be a typical realisation of white noise on $\mathbb R$ and $\phi$ any smooth function. Then, $\xi\cdot \phi$ will have the 'value' $0$ (in the sense described by B K) at any point where $\phi$ vanishes, but is only regular there if $\phi$ vanishes identically on a whole interval. $\endgroup$ – Martin Hairer Feb 22 at 10:14
4
$\begingroup$

It's not a bad definition, and I think it is better to think of it as a particular case of the "restriction problem", i.e., trying to define the restriction $\omega|_{\Gamma}$ of $\omega$ to some subset $\Gamma\subset\mathbb{R}^n$. When one succeeds, the result is called a trace theorem. This usually requires some quantitative regularity hypothesis on $\omega$, e.g., being in a Sobolev space $H^s$ with $s$ above some threshold.

A particularly important case is when $\Gamma$ is an affine subspace, or say for simplicity a linear subspace like $\Gamma =\mathbb{R}^m\times\{0\}^{n-m}\subset\mathbb{R}^n$. A rather standard approach is to start with $\omega\in\mathcal{D}'(\mathbb{R}^n)$. The convolution $\omega\ast \phi_j^0$ is in the space of $C^{\infty}$ functions $\mathcal{E}(\mathbb{R}^n)\subset \mathcal{D}'(\mathbb{R}^n)$ and converges to $\omega$ in the topology of $\mathcal{D}'(\mathbb{R}^n)$ (the strong topology). The ordinary restriction $\omega\ast \phi_j^0|_{\Gamma}$ makes sense and you can ask if the limit $\lim_{j\rightarrow\infty}\omega\ast \phi_j^0|_{\Gamma}$ exists inside $\mathcal{D}'(\mathbb{R}^m)$.

Your particular case $p=0$ corresponds to mine with $m=0$.

Another problem of this kind is pointwise multiplication. If $\omega_1(x)$ and $\omega_2(x)$ are two distributions, then there is no problem defining $\omega_1(x_1)\omega_2(x_2)$ (tensor product), but the issue is how to restrict to the diagonal $\Gamma=\{x_1=x_2\}$.
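A toy computation (my own sketch, not from the answer) shows the obstruction concretely for $H\cdot\delta$: regularize each factor by mollification, multiply, and pair with a test function $\psi$. The limit is $c\,\psi(0)$ with $c=\int \Phi_1\,\varphi_2$ depending on the relative shift of the two mollifiers, so the diagonal restriction of $H(x_1)\delta(x_2)$ is not well-defined:

```python
import numpy as np

# (H * phi1_eps)(x) = Phi1(x/eps) and (delta * phi2_eps)(x) = phi2(x/eps)/eps,
# so pairing the product with psi and letting eps -> 0 gives c * psi(0) with
# c = \int Phi1(u) phi2(u) du, which depends on the choice of mollifiers.

def bump(u, c=0.0):
    out = np.zeros_like(u)
    m = np.abs(u - c) < 1
    out[m] = np.exp(-1.0 / (1.0 - (u[m] - c) ** 2))
    return out

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

u = np.linspace(-4, 4, 80001)
cs = []
for s in (0.0, 2.5):                              # shift of the delta-mollifier
    phi1 = bump(u) / trapz(bump(u), u)            # mollifier for H
    phi2 = bump(u, s) / trapz(bump(u, s), u)      # mollifier for delta
    # Phi1 = primitive of phi1, i.e. the smoothed Heaviside
    Phi1 = np.concatenate([[0.0], np.cumsum((phi1[1:] + phi1[:-1]) * np.diff(u) / 2)])
    c = trapz(Phi1 * phi2, u)                     # coefficient of psi(0) in the limit
    cs.append(c)
    print(s, c)
# identical mollifiers give c = 1/2; a disjointly shifted phi2 gives c = 1
```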

Finally, note that all of these problems become much more interesting for random distributions, because it's like magic: you can sometimes do the (deterministically) impossible.


Small addendum: Suppose that for some reason one has a trace theorem but only for large enough $m$ and one cannot do the $m=0$ or the point restriction case. Then one can still do the following "stabilization" trick: change $\omega$ to $\omega\otimes 1$ where one tensors with the constant function equal to one seen as a distribution in say $p$ new variables. If you can restrict it from $\mathbb{R}^{n+p}$ to a subspace of dimension $p$, then you will have your point evaluation after factoring out the $\otimes 1$. The last step of course needs your restriction construction to be invariant/covariant by translation along $\Gamma$.

$\endgroup$
  • $\begingroup$ I think it is nice to view the problem as a special case of a restriction problem. In fact I was not aware of this obvious interpretation. However, I feel that the restriction to a single point should be much more well-studied and have certain unique properties in comparison to more general restrictions (say to hypersurfaces). Perhaps this turns out not to be the case. Then I'd be happy to accept your answer as the best. $\endgroup$ – B K Feb 22 at 10:54
  • $\begingroup$ I added a few lines to explain that the two problems are much more related than seems at first sight. Also, pointing you in the direction of the restriction problem and supplying you with the search keywords "trace theorem" and "Sobolev" (or "Besov" also) will give you access to a vast literature relevant to your question. I don't think you will find as many articles if you search "evaluating a distribution at a point". $\endgroup$ – Abdelmalek Abdesselam Feb 22 at 19:32
4
$\begingroup$

As indicated above, the concept of the limit, resp. the value, of a distribution at a point was studied intensively over 50 years ago. Here is a very elementary and natural definition due to Sebastião e Silva (it is Definition 6.9 in his paper "On integrals and orders of growth of distributions"; I will not give a reference since it can be found online just by googling the title).

A distribution $s$ on an interval $I$ is said to be continuous at a point $c$ if there are a natural number $p$ and a continuous function $F$ on $I$ such that $s=D^pF$ (distributional derivative) and $\dfrac {F(x)}{(x-c)^p}$ converges in the usual sense as $x$ tends to $c$. We then write $s(c)$ for $p!$ times this limit and call it the value of the distribution at $c$. As an example, he shows that $\cos \frac 1 x$ has the value $0$ at $0$.
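The $\cos\frac1x$ example can be checked by hand or numerically. The sketch below is my own computation, taking $p=1$ and $F(x)=\int_0^x\cos(1/t)\,dt$ (the positive side; the negative side follows since $F$ is odd). Two integrations by parts give the exact identity $F(x)/x = -x\sin(1/x) + 2x^2\cos(1/x) - 6K(x)/x$ with $K(x)=\int_0^x t^2\cos(1/t)\,dt$, so $|6K(x)/x|\le 2x^2$:

```python
import numpy as np

# Upper bounds for |F(x)/x|, F(x) = \int_0^x cos(1/t) dt, using
#   F(x)/x = -x sin(1/x) + 2 x^2 cos(1/x) - 6 K(x)/x,  |6 K(x)/x| <= 2 x^2.
# The bounds shrink to 0, hence F(x)/x -> 0 and the value is 1! * 0 = 0.

ratios = []   # upper bounds for |F(x)/x|
for x in [0.1, 0.03, 0.01, 0.003, 0.001]:
    main = -x * np.sin(1.0 / x) + 2 * x**2 * np.cos(1.0 / x)
    ratios.append(abs(main) + 2 * x**2)   # bound the unknown K-term crudely
    print(x, ratios[-1])
```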

$\endgroup$
  • $\begingroup$ Thanks for the remark: I forgot to report the whole statement from the paper of Łojasiewicz, but I was not aware of the paper by Sebastião e Silva, so +1. $\endgroup$ – Daniele Tampieri Feb 22 at 17:06
2
$\begingroup$

This is not an answer, and maybe even marginally off-topic, but I'd like to point out the following example which might be useful to keep in mind when trying to define the value of a distribution at a point (and which is too long to fit in a comment):

Let $g\colon\mathbb{R}\to\mathbb{R}$ be $g(x) = x^2\sin(\frac{1}{x})$ (obviously extended by $g(0)=0$). This is a continuous, in fact even differentiable, function on $\mathbb{R}$, so we can unproblematically identify it with a distribution, call it $T$. Now since $g$ is differentiable, we probably want to identify its derivative $g'$, as a real function, with the derivative $T'$ of the corresponding distribution $T$, so we might want to conclude that the value at $0$ of the distribution $T'$ should be (well-defined and equal to) $g'(0) = 0$. But since $g'$ is not continuous at $0$, it is not easy to come up with a justification for why $T'$ takes that value at that point.

$\endgroup$
  • 1
    $\begingroup$ Ah, well, user131781's answer posted in the mean time offers nice way to look at this! $\endgroup$ – Gro-Tsen Feb 22 at 14:33
  • 1
    $\begingroup$ This shows that the method of user131781 is actually more powerful in this case than the definition I suggested, which fails to assign a value to $\cos(1/x)$ at $0$. $\endgroup$ – B K Feb 22 at 14:39
2
$\begingroup$

The definition of the value of a distribution at a point you describe in your question does not seem flawed to me since, at least from the point of view of independence of the $\delta$-sequence, it follows the path traced years ago by Stanisław Łojasiewicz in the paper [1], so I describe his approach to the problem below.

Łojasiewicz analyzes the problem for functions of one variable, i.e. $n=1$: by using the definition of change of variables in a distribution (see for example [2], §1.9 pp. 21-22) and considering the change of variable $y=x_0+\lambda x$, for $ x,x_0,\lambda \in\Bbb R$, i.e. $$ \begin{split} T(x_0+\lambda x)&\triangleq \langle T(x_0+\lambda x),\varphi(x)\rangle\\ &=\left\langle T(y),\frac{\varphi\big(\lambda^{-1} (y-x_0)\big)}{\lambda}\right\rangle \end{split} \quad \varphi\in\mathscr{D}(\Bbb R) $$ he defines the limit of a distribution at a point $x_0$ as ([1], §1 pp. 2-3) $$ \lim_{x\to x_0} T\triangleq \lim_{\lambda\to 0} T(x_0+\lambda x) \label{1}\tag{1} $$ and proves that

  • $\lim_{x\to x_0} T=\lim_{x\to x_0^+} T=\lim_{x\to x_0^-} T$
  • by using an earlier result of Zieleźny, if the limit \eqref{1} exists, it is necessarily a constant $C\in \Bbb C$, or more precisely a constant distribution $C$.
  • a necessary and sufficient condition for the limit \eqref{1} to exist is (see [1], §2, theorem 2.2, pp. 5-7) that $T=f^{(n)}$, where $f\in C^0(\Bbb R)$ and $$ \lim_{x\to x_0}\frac{f(x)}{(x-x_0)^n}=\frac{C}{n!}. $$

Then Łojasiewicz assumes \eqref{1} as the definition of the value of a distribution at a point: note that this definition does not rely on any particular test function (or sequence of such) $\varphi\in\mathscr{D}(\Bbb R)$, as stated above. Now a few observations:
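The definition \eqref{1} can be probed numerically. In the sketch below (my own construction, with a bump test function of my choosing), a distribution given by a continuous function such as $|x|+1$ has the value $1$ at $0$, while for $T=\delta$ the scaled pairing equals $\varphi(0)/\lambda$ and blows up, so $\delta$ has no value at $0$:

```python
import numpy as np

# For T given by a continuous f, <T(x0 + lambda x), phi(x)> tends to
# f(x0) * \int phi as lambda -> 0; for T = delta at 0 the pairing is
# phi(0)/lambda, which diverges, so delta has no value at its singular point.

def bump(x):
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(-1, 1, 40001)
phi = bump(x) / trapz(bump(x), x)        # normalized test function, \int phi = 1

f = lambda t: np.abs(t) + 1.0            # continuous but not differentiable at 0
x0 = 0.0

for lam in [1.0, 0.1, 0.01]:
    pairing_f = trapz(f(x0 + lam * x) * phi, x)    # -> f(0) = 1
    pairing_delta = phi[len(x) // 2] / lam         # <delta(lam x), phi> = phi(0)/lam
    print(lam, pairing_f, pairing_delta)
```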

  1. Łojasiewicz ([1], §1 p. 1) states that the case $n>1$ will be analyzed in a subsequent paper which, to my knowledge, has never been published. However (this is only my opinion), a generalization of \eqref{1} could perhaps be attempted by using the Stolz condition as described, for example, in the textbook of Griffith Bailey Price (1984), Multivariable Analysis, Springer-Verlag.
  2. Łojasiewicz gives another necessary and sufficient condition for the limit \eqref{1} to exist, in terms of Denjoy differentials ([1], §2, corollary to Theorem 2.2, p. 7).
  3. The term $\lambda^{-1}$, more or less intrinsically used in \eqref{1}, suggests the possible use of the Mellin transform: this suggestion was followed by Bogdan Ziemian in [3], §12 pp. 41-42. He defines a (generalized) spectral value of a function/distribution at a point and proves ([3], §12 p. 43) that it coincides with the Łojasiewicz point value \eqref{1} whenever the latter exists (by using the necessary and sufficient condition above); the construction of Ziemian, however, does not apply to all distributions.

[1] Stanisław Łojasiewicz (1957-1958), "Sur la valeur et la limite d'une distribution en un point" (French), Studia Mathematica, Vol. 16, Issue 1, pp. 1-36, MR0087905, Zbl 0086.09405.

[2] V. S. Vladimirov (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, Vol. 6, London–New York: Taylor & Francis, pp. XII+353, ISBN 0-415-27356-0, MR2012831, Zbl 1078.46029.

[3] Bogdan Ziemian (1988), "Taylor formula for distributions", Rozprawy Matematyczne 264, 56 pp., ISBN 83-01-07898-7, ISSN 0012-3862, MR0931848, Zbl 0685.46025.

$\endgroup$
  • 2
    $\begingroup$ I find this interesting, although it seems to me contrary to the point of my question: The limits you mention depend on the behavior of the distribution nearby $x_0$ but need not correspond to an actual value at $x_0$. For example, they exist for the Dirac delta at $0$ which clearly does not have a well-defined value there. Also I don't understand the statement that the limit exists iff the distribution is a derivative of a continuous function. Isn't this the case for most (or essentially all) distributions? E.g., all derivatives of the Dirac delta are derivatives of the absolute value. $\endgroup$ – B K Feb 22 at 11:04
  • $\begingroup$ Interesting answer and references, but as BK saw, there seems to be a problem with $T=\delta(x)$ and $x_0=0$. $\endgroup$ – Abdelmalek Abdesselam Feb 22 at 13:48
  • $\begingroup$ @BK, I apologize for being late in answering your comment and the one by Abdelmalek Abdesselam: I forgot the second part of the condition, which is precisely the one put forward by user131781. I corrected my answer. $\endgroup$ – Daniele Tampieri Feb 22 at 16:56
  • $\begingroup$ @AbdelmalekAbdesselam: apologies for my late answer to your comment: I have now corrected my answer. $\endgroup$ – Daniele Tampieri Feb 22 at 17:03
