Active questions tagged sp.spectral-theory - MathOverflowmost recent 30 from www.4124039.com2019-08-18T05:53:13Zhttp://www.4124039.com/feeds/tag?tagnames=sp.spectral-theory&sort=newesthttp://www.creativecommons.org/licenses/by-sa/3.0/rdfhttp://www.4124039.com/q/3385711A question involving an summation of eigenvalues of the Laplacian operator on $\mathbb{S}^2$Marcelo Nogueirahttp://www.4124039.com/users/1370682019-08-17T18:21:25Z2019-08-17T18:50:42Z
<p>Infinite series involving eigenvalues of the Laplace–Beltrami operator on Riemannian manifolds, as well as <span class="math-container">$L^p$</span>-estimates of eigenfunctions, arise in the study of the nonlinear Schrödinger equation (NLS) on compact manifolds. Denote by <span class="math-container">$\mu_{k} := k(k + 1)$</span> the eigenvalue of the operator
<span class="math-container">$- \Delta_{\mathbb{S}^2}$</span> associated to the eigenfunction <span class="math-container">$e_k \in C^{\infty}(\mathbb{S}^2)$</span>. Now, consider the sum
<span class="math-container">$$ \sum_{k = 0}^{\infty} \frac{1}{ \langle \mu_k - \alpha \rangle \langle \mu_k \rangle^{\varepsilon}}$$</span> where <span class="math-container">$\alpha > 0$</span> is an arbitrary positive constant, <span class="math-container">$\varepsilon > 0$</span> and <span class="math-container">$\langle x \rangle : = 1 + |x|$</span>. My question is the following:</p>
<p><span class="math-container">$$\langle \mu_k - \alpha \rangle^{-1} \langle \mu_k \rangle^{- \varepsilon} \in \ell^{1}_{k}(\mathbb{N}), $$</span> </p>
<p>with the <span class="math-container">$\ell^1$</span>-norm bounded independently of the choice of <span class="math-container">$ \alpha$</span>? </p>
<p>My failed attempt was to consider two cases. (Case 1) <span class="math-container">$\mu_k \geq 4 \alpha$</span>: here we have <span class="math-container">$|\mu_k - \alpha| \geq \frac{3}{4} \mu_k$</span>, and one can obtain the desired conclusion. (Case 2) <span class="math-container">$\mu_k \leq 4 \alpha$</span>: here the sum has finitely many terms, but I would like to prove that it is bounded by a constant which does not depend on <span class="math-container">$\alpha$</span>. Thanks in advance!</p>
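<p>A quick numerical probe (illustrative only, not a proof; the cutoff and the test values of <span class="math-container">$\alpha$</span> and <span class="math-container">$\varepsilon$</span> below are arbitrary choices of mine) suggests that the sum is indeed bounded uniformly in <span class="math-container">$\alpha$</span>:</p>

```python
import numpy as np

def partial_sum(alpha, eps, kmax=200_000):
    """Partial sum of 1/(<mu_k - alpha> <mu_k>^eps), mu_k = k(k+1), <x> = 1 + |x|."""
    k = np.arange(kmax + 1, dtype=float)
    mu = k * (k + 1.0)
    return float(np.sum(1.0 / ((1.0 + np.abs(mu - alpha)) * (1.0 + mu) ** eps)))

# probe several widely spaced values of alpha for a fixed eps
sums = {alpha: partial_sum(alpha, eps=0.5) for alpha in (0.5, 10.0, 1e3, 1e6)}
```

<p>The terms with <span class="math-container">$\mu_k$</span> near <span class="math-container">$\alpha$</span> contribute little because the gaps <span class="math-container">$\mu_{k+1}-\mu_k = 2(k+1)$</span> grow, which is consistent with Case 2 above.</p>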
http://www.4124039.com/q/3279187Weyl law for (non-semiclassical) Schrodinger operatorMaxim Bravermanhttp://www.4124039.com/users/743072019-04-12T18:42:04Z2019-08-17T10:21:29Z
<p>The Weyl law for a semiclassical Schrodinger operator
<span class="math-container">$$ A_h\ := \ -h^2\Delta+V(x) $$</span>
on a <span class="math-container">$d$</span>-dimensional complete Riemannian manifold <span class="math-container">$M$</span>
says that the number <span class="math-container">$N(A_h,1)$</span> of eigenvalues of <span class="math-container">$A_h$</span> which are smaller than 1 has asymptotic behavior
<span class="math-container">$$
N(A_h,1)\ \sim \
\frac1{(2\pi h)^d}\, \mathrm{Vol}\left\{(x,\xi)\in T^*M:\ |\xi|^2+V(x)\le 1\right\}, \quad h\to 0.\ \ \ (\ast)
$$</span>
I am interested in a non-semiclassical Schrodinger operator
<span class="math-container">$$ A\ := \ -\Delta+V(x).$$</span>
I believe that the number <span class="math-container">$N(A,\lambda)$</span> of eigenvalues of <span class="math-container">$A$</span> smaller than <span class="math-container">$\lambda$</span> has a similar asymptotic
<span class="math-container">$$ N(A,\lambda) \sim \ \frac1{(2\pi)^d}\, \mathrm{Vol}\left\{(x,\xi)\in T^*M:\ |\xi|^2+V(x)\le \lambda\right\}, \qquad \lambda\to \infty.
\quad(\ast\ast)
$$</span>
This is, of course, true on compact manifolds, by the classical Weyl law. It is also not difficult to verify (**) for the operator <span class="math-container">$-\Delta+|x|^n$</span> on <span class="math-container">$\mathbb{R}^d$</span> (where <span class="math-container">$A_h$</span> and <span class="math-container">$A$</span> can be related by rescaling <span class="math-container">$x$</span>). </p>
<p>So my question: is (**) true and, if it is true, where can I find it? </p>
<p>PS. Victor Ivrii in his review article <a href="https://arxiv.org/abs/1608.03963" rel="noreferrer">https://arxiv.org/abs/1608.03963</a> mentions (page 5) that, using the Birman–Schwinger principle, one can obtain a Weyl law for <span class="math-container">$N(A,\lambda)$</span> from (*). But I don't see how it can be done. </p>
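<p>For what it's worth, (**) can be checked by hand in the simplest rescalable case, the harmonic oscillator <span class="math-container">$-\frac{d^2}{dx^2}+x^2$</span> on <span class="math-container">$\mathbb{R}$</span> (this example is my own illustration, not from the references): the eigenvalues are <span class="math-container">$2n+1$</span>, while the phase-space volume of <span class="math-container">$\{\xi^2+x^2\le\lambda\}$</span> is <span class="math-container">$\pi\lambda$</span>, so the right-hand side of (**) is <span class="math-container">$\lambda/2$</span>:</p>

```python
def eigenvalue_count(lam):
    # eigenvalues of -d^2/dx^2 + x^2 on R are 2n + 1, n = 0, 1, 2, ...
    return 0 if lam < 1 else int((lam - 1) // 2) + 1

def weyl_prediction(lam):
    # Vol{(x, xi) : xi^2 + x^2 <= lam} / (2*pi) = (pi * lam) / (2*pi) = lam / 2
    return lam / 2.0

ratios = [eigenvalue_count(lam) / weyl_prediction(lam) for lam in (1e2, 1e3, 1e4)]
```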
http://www.4124039.com/q/33817013Computing spectra without solving eigenvalue problemsVictor Galitskihttp://www.4124039.com/users/99192019-08-12T03:25:39Z2019-08-13T17:55:36Z
<p>There is a rather remarkable conjecture formulated in this paper, "Computing spectra without solving eigenvalue problems," <a href="https://arxiv.org/pdf/1711.04888.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1711.04888.pdf</a> and in this talk by Svitlana Mayboroda at the International Congress of Mathematicians 2018 (towards the end): <a href="https://www.youtube.com/watch?v=FhPsWJL9eNQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=FhPsWJL9eNQ</a></p>
<p>Namely, she considers the following eigenvalue problem (otherwise known as the Schrödinger equation):
<span class="math-container">$$
[-\Delta + V(x)] \psi(x) = E\psi(x)
$$</span>
with <span class="math-container">$x\in \Omega \subset \mathbb{R}^d$</span> and <span class="math-container">$\psi({x})\Bigr|_{\partial \Omega}=0$</span>, and where <span class="math-container">$V(x)$</span> is a random potential (in some sense, as defined in the paper). That is, the potential has many valleys of random location and possibly random depth, but the exact form of the randomness appears unimportant. </p>
<p>The statement seems to be that if we solve instead the following much simpler problem
<span class="math-container">$$
[-\Delta + V(x)] u(x) = 1, \mbox{with } u(x)\Bigr|_{\partial \Omega}=0
$$</span>
then the <span class="math-container">$n$</span>-th consecutive minimum of the function <span class="math-container">$u^{-1}(x)$</span>, dubbed the localization landscape, will determine with great accuracy (there is no exact statement) the <span class="math-container">$n$</span>-th consecutive eigenvalue as follows
<span class="math-container">$$
E_n \approx (1 + d/4) \inf_x u^{-1}(x)|_n
$$</span></p>
<p>I wonder if there are experts here in ODEs, spectral theory, etc., who could comment on the status of this statement/conjecture and on this localization-landscape perspective in general.</p>
<p>The conjecture seems suspicious to me, because diagonalizing and inverting operators are in different computational complexity classes (the latter, which is all that is required for finding <span class="math-container">$u$</span>, is much simpler). But if it's actually true, it would have important implications for physics (I am a physicist). </p>
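<p>To make the statement concrete, here is a minimal 1D lattice sketch (my own construction; the lattice size, the disorder distribution, and the variable names are arbitrary choices). The one-sided bound <span class="math-container">$E_1 \max u \ge 1$</span> is rigorous by a maximum-principle argument; the factor <span class="math-container">$1+d/4$</span> is the heuristic from the paper:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 1
# discrete Dirichlet Laplacian plus a random potential on a 1D lattice
V = rng.uniform(0.0, 8.0, size=n)
A = np.diag(2.0 + V) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

u = np.linalg.solve(A, np.ones(n))      # landscape function: A u = 1
E1 = np.linalg.eigvalsh(A)[0]           # exact ground-state energy, for comparison

lower_bound = 1.0 / u.max()             # rigorous: E1 >= 1 / max(u)
prediction = (1.0 + d / 4.0) / u.max()  # the landscape heuristic for E_1
```

<p>Here only the global maximum of <span class="math-container">$u$</span> is used; the full conjecture applies the same recipe at each consecutive local minimum of <span class="math-container">$u^{-1}$</span>.</p>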
http://www.4124039.com/q/3351932What's the full assumption for Laplacian matrix $L=BB^T=\Delta-A$?Nick Donghttp://www.4124039.com/users/903432019-07-01T11:53:49Z2019-08-12T10:01:15Z
<p>Assume a graph with no self-loops, no multi-edges, and unweighted edges.</p>
<p><strong>directed</strong></p>
<p>For a directed graph, the adjacency matrix is a non-symmetric matrix: <span class="math-container">$A_{in}$</span> when considering indegree, or <span class="math-container">$A_{out}$</span> when considering outdegree. The degree matrix <span class="math-container">$\Delta$</span> is the diagonal matrix <span class="math-container">$\Delta=\Delta_{in}+\Delta_{out}$</span>, whose diagonal elements are the sums of indegree and outdegree. The oriented incidence matrix <span class="math-container">$B_{oriented}$</span> is <span class="math-container">$N\times M$</span>: <span class="math-container">$b_{im}=1$</span> if edge <span class="math-container">$m$</span> starts from <span class="math-container">$i$</span>, <span class="math-container">$b_{im}=-1$</span> if edge <span class="math-container">$m$</span> ends at <span class="math-container">$i$</span>, and <span class="math-container">$b_{im}=0$</span> otherwise. </p>
<p><span class="math-container">$\Delta-A_{in}\neq\Delta-A_{out}\neq B_{oriented}B_{oriented}^T$</span></p>
<p><strong>undirected</strong></p>
<p>For an undirected graph, the oriented incidence matrix <span class="math-container">$B_{oriented}$</span> (with one column for each of the two orientations of every edge, as in EDIT2 below) has dimension <span class="math-container">$N\times 2M$</span>, and <span class="math-container">$B_{oriented}B_{oriented}^T=2\Delta-2A$</span>. </p>
<p>Unoriented incidence matrix: <span class="math-container">$b_{im}=1$</span> if link <span class="math-container">$m$</span> is incident to <span class="math-container">$i$</span> (starts from or ends at <span class="math-container">$i$</span>), and <span class="math-container">$b_{im}=0$</span> otherwise. Then <span class="math-container">$B_{unoriented}B_{unoriented}^T=\Delta+A$</span>.</p>
<p><strong>Problem</strong></p>
<p>Under which assumptions does the Laplacian matrix satisfy <span class="math-container">$L=BB^T=\Delta-A$</span>? Many definitions I have seen do not state the assumptions clearly. Undirected or directed? Oriented or unoriented incidence matrix? <span class="math-container">$A_{in}$</span>, <span class="math-container">$A_{out}$</span>, or <span class="math-container">$A$</span>?</p>
<p>Any references would be greatly appreciated. Thank you.</p>
<p><strong>EDIT</strong></p>
<p>The reference that confused me is <code>2011, P. Van Mieghem, Graph Spectra for Complex Networks</code>, <strong>Chapter 2, Algebraic graph theory</strong>, p. 14: <strong>The relation between adjacency and incidence matrix is given by the admittance
matrix or Laplacian <span class="math-container">$Q=BB^T=\Delta-A$</span></strong></p>
<p><strong>EDIT2</strong></p>
<p><a href="https://i.stack.imgur.com/gdiZ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gdiZ3.png" alt="enter image description here"></a></p>
<h2>1. adjacency matrix</h2>
<p>Unweighted, no multiple edges: <span class="math-container">$A$</span> is <span class="math-container">$N\times N$</span>; no self-loops: <span class="math-container">$a_{ii}=0$</span></p>
<h3>1.1 directed</h3>
<p><span class="math-container">$A=\begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}$</span></p>
<h3>1.2 undirected</h3>
<p><span class="math-container">$A=\begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 1\\ 1 & 1 & 0 & 0 & 1 & 0 \end{pmatrix}$</span></p>
<h2>2. incidence matrix <span class="math-container">$B$</span></h2>
<p><span class="math-container">$N\times M$</span>, where <span class="math-container">$M$</span> is the number of edges, listed lexicographically. </p>
<h3>2.1 directed</h3>
<p><span class="math-container">$N\times M$</span>, <span class="math-container">$N=6$</span>, <span class="math-container">$M=9$</span>,</p>
<p><span class="math-container">$e_1=1\rightarrow 2$</span>, <span class="math-container">$e_2=1\rightarrow 3$</span>, <span class="math-container">$e_3=1\leftarrow 6$</span>, </p>
<p><span class="math-container">$e_4=2\rightarrow 3$</span>, <span class="math-container">$e_5=2\leftarrow 5$</span>,<span class="math-container">$e_6=2\rightarrow 6$</span>, </p>
<p><span class="math-container">$e_7=3\rightarrow 4$</span>, </p>
<p><span class="math-container">$e_8=4\leftarrow 5$</span>,</p>
<p><span class="math-container">$e_9=5\leftarrow 6$</span></p>
<p><span class="math-container">$$B_{oriented}=\begin{pmatrix}
1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\
-1 & 0 & 0 & 1 & -1 & 1 &0 & 0 & 0\\
0 & -1 & 0 & -1 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & -1\\
0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 1
\end{pmatrix}$$</span></p>
<h3>2.2 undirected</h3>
<p><strong><em>(oriented)</em></strong> <span class="math-container">$N\times 2M$</span></p>
<p><span class="math-container">$N=6$</span>, <span class="math-container">$2M=18$</span>, </p>
<p><span class="math-container">$e_1=1\rightarrow 2$</span>, <span class="math-container">$e_2=1\leftarrow 2$</span>, <span class="math-container">$e_3=1\rightarrow 3$</span>, </p>
<p><span class="math-container">$e_4=1\leftarrow 3$</span>, <span class="math-container">$e_5=1\rightarrow 6$</span>, <span class="math-container">$e_6=1\leftarrow 6$</span>, </p>
<p><span class="math-container">$e_7=2\rightarrow 3$</span>, <span class="math-container">$e_8=2\leftarrow 3$</span>, <span class="math-container">$e_9=2\rightarrow 5$</span>, </p>
<p><span class="math-container">$e_{10}=2\leftarrow 5$</span>, <span class="math-container">$e_{11}=2\rightarrow 6$</span>, <span class="math-container">$e_{12}=2\leftarrow 6$</span>, </p>
<p><span class="math-container">$e_{13}=3\rightarrow 4$</span>, <span class="math-container">$e_{14}=3\leftarrow 4$</span>, </p>
<p><span class="math-container">$e_{15}=4\rightarrow 5$</span>, <span class="math-container">$e_{16}=4\leftarrow 5$</span>, </p>
<p><span class="math-container">$e_{17}=5\rightarrow 6$</span>, <span class="math-container">$e_{18}=5\leftarrow 6$</span></p>
<p><span class="math-container">$$B_{oriented}=\begin{pmatrix}
1 & -1 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & -1 & 1 & 1 & -1 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & -1 & 1\\
\end{pmatrix}$$</span></p>
<p><span class="math-container">$$(oriented)~b_{im}=
\begin{cases}
1, & \text{if link $e_m=i\rightarrow j$} \\
-1, & \text{if link $e_m=i\leftarrow j$} \\
0, & \text{otherwise}
\end{cases}$$</span></p>
<p><strong><em>(unoriented)</em></strong> <span class="math-container">$N\times M$</span></p>
<p><span class="math-container">$e_1=1 - 2$</span>, <span class="math-container">$e_2=1 - 3$</span>, <span class="math-container">$e_3=1 - 6$</span>, </p>
<p><span class="math-container">$e_4=2 - 3$</span>, <span class="math-container">$e_5=2 - 5$</span>,<span class="math-container">$e_6=2 - 6$</span>, </p>
<p><span class="math-container">$e_7=3 - 4$</span>, </p>
<p><span class="math-container">$e_8=4 - 5$</span>,</p>
<p><span class="math-container">$e_9=5 - 6$</span></p>
<p><span class="math-container">$$B_{unoriented}=\begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 1 & 1 & 1 &0 & 0 & 0\\
0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1\\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{pmatrix}$$</span></p>
<p><span class="math-container">$$(unoriented)~b_{im}=
\begin{cases}
1, & \text{if link $e_m=i - j$ incident} \\
0, & \text{otherwise}
\end{cases}$$</span></p>
<h2>3. degree matrix</h2>
<p><span class="math-container">$\Delta_{ii} =\deg(i)$</span> (equal to <span class="math-container">$\sum_j A_{ij}$</span> for an undirected graph, and to indegree plus outdegree for a directed graph);
<span class="math-container">$\Delta_{ij}=0$</span>, <span class="math-container">$i\neq j$</span></p>
<h3>3.1 directed</h3>
<p><span class="math-container">$\begin{pmatrix} 2+1 & 0 & 0 & 0 & 0 & 0\\ 0 & 2+2 & 0 & 0 & 0 & 0\\ 0 & 0 & 1+2 & 0 & 0 & 0\\ 0 & 0 & 0 & 0+2 & 0 & 0\\ 0 & 0 & 0 & 0 & 2+1 & 0\\ 0 & 0 & 0 & 0 & 0 & 2+1 \end{pmatrix}$</span></p>
<h3>3.2 undirected</h3>
<p><span class="math-container">$\begin{pmatrix} 3 & 0 & 0 & 0 & 0 & 0\\ 0 & 4 & 0 & 0 & 0 & 0\\ 0 & 0 & 3 & 0 & 0 & 0\\ 0 & 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 0 & 0 & 3 \end{pmatrix}$</span></p>
<h2>4. Laplacian matrix</h2>
<h3>4.1 directed</h3>
<p><span class="math-container">$B_{oriented}B_{oriented}^T$</span> </p>
<p><span class="math-container">$\begin{pmatrix} 3 & -1 & -1 & 0 & 0 & -1\\ -1 & 4 & -1 & 0 & -1 & -1\\ -1 & -1 & 3 & -1 & 0 & 0\\ 0 & 0 & -1 & 2 & -1 & 0\\ 0 & -1 & 0 & -1 & 3 & -1\\ -1 & -1 & 0 & 0 & -1 & 3 \end{pmatrix}$</span></p>
<h3>4.2 undirected</h3>
<p><strong><em>(oriented)</em></strong></p>
<p><span class="math-container">$B_{oriented}B_{oriented}^T=2\Delta -2A$</span></p>
<p><span class="math-container">$\begin{pmatrix} 6 & -2 & -2 & 0 & 0 & -2\\ -2 & 8 & -2 & 0 & -2 & -2 \\ -2 & -2 & 6 & -2 & 0 & 0 \\ 0 & 0 & -2 & 4 & -2 & 0 \\ 0 & -2 & 0 & -2 & 6 & -2 \\ -2 & -2 & 0 & 0 & -2 & 6 \end{pmatrix}$</span></p>
<p><strong><em>(unoriented)</em></strong></p>
<p><span class="math-container">$B_{unoriented}B_{unoriented}^T=\Delta + A$</span></p>
<p><span class="math-container">$\begin{pmatrix} 3 & 1 & 1 & 0 & 0 & 1\\ 1 & 4 & 1 & 0 & 1 & 1\\ 1 & 1 & 3 & 1 & 0 & 0\\ 0 & 0 & 1 & 2 & 1 & 0\\ 0 & 1 & 0 & 1 & 3 & 1\\ 1 & 1 & 0 & 0 & 1 & 3 \end{pmatrix}$</span> </p>
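<p>For comparison, the convention under which <span class="math-container">$Q=BB^T=\Delta-A$</span> does hold takes the graph undirected and <span class="math-container">$B$</span> the <span class="math-container">$N\times M$</span> oriented incidence matrix with a single column per edge, the orientation of each edge chosen arbitrarily. A small numpy check on the graph above (the orientations below are my arbitrary choice):</p>

```python
import numpy as np

# undirected edges of the example graph (1-based vertex labels)
edges = [(1, 2), (1, 3), (1, 6), (2, 3), (2, 5), (2, 6), (3, 4), (4, 5), (5, 6)]
n, m = 6, len(edges)

A = np.zeros((n, n), dtype=int)   # symmetric adjacency matrix
B = np.zeros((n, m), dtype=int)   # oriented incidence: ONE column per edge
for col, (i, j) in enumerate(edges):
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1
    B[i - 1, col], B[j - 1, col] = 1, -1   # arbitrary orientation i -> j

Delta = np.diag(A.sum(axis=1))    # degree matrix
L = Delta - A                     # graph Laplacian
```

<p>Duplicating each edge in both orientations (the <span class="math-container">$N\times 2M$</span> matrix above) doubles every entry of the product, which is exactly the <span class="math-container">$2\Delta-2A$</span> observed.</p>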
http://www.4124039.com/q/3379320Asymptotic dispersion of a periodic Sturm–Liouville problemzryskyhttp://www.4124039.com/users/386452019-08-08T16:59:50Z2019-08-09T04:26:32Z
<p>From a physical problem of waves in periodic waveguides, I obtain the following Sturm–Liouville equation:
<span class="math-container">$$
\left(\frac{d^2}{dx^2}-k^2+\omega^2\ V(x)\right)\psi(x)=0
$$</span>
where <span class="math-container">$V(x+L)=V(x)>0$</span> is a positive periodic function, <span class="math-container">$k$</span> is a real parameter (physically, it is the wave vector in <span class="math-container">$y$</span> direction), <span class="math-container">$\omega$</span> is the eigenvalue, and the eigenfunction satisfies the periodic boundary condition <span class="math-container">$\psi(x+L)=\psi(x)$</span>, <span class="math-container">$\frac{d}{dx}\psi(x+L)=\frac{d}{dx}\psi(x)$</span>.</p>
<p>Consider the eigenvalues <span class="math-container">$\omega_n(k)$</span> as functions of <span class="math-container">$k$</span>. From numerical solutions for several different periodic functions <span class="math-container">$V(x)$</span>, I find that all <span class="math-container">$\omega_n(k)$</span> approach a linear asymptotic dispersion as <span class="math-container">$k\rightarrow \infty$</span>. It also seems that the asymptotic slope of every <span class="math-container">$\omega_n(k)$</span> (<span class="math-container">$n$</span> finite) is determined by the maximum of <span class="math-container">$V(x)$</span>, namely
<span class="math-container">$$\lim_{k\rightarrow\infty}\frac{d\omega_n}{dk}=\frac{1}{\sqrt{V_\mathrm{max}}}.$$</span></p>
<p>The following figures show two examples. </p>
<p>My question is: how can one prove or disprove this conjecture (one may assume that the periodic function <span class="math-container">$V(x)$</span> has whatever regularity properties are needed)? </p>
<p><a href="https://i.stack.imgur.com/WtXWl.png" rel="nofollow noreferrer">Two examples</a></p>
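<p>In case it helps others experiment, here is a finite-difference sketch of the computation (the potential <span class="math-container">$V(x)=2+\cos 2\pi x$</span>, with <span class="math-container">$V_\mathrm{max}=3$</span>, and all numerical parameters are my own choices). The one-sided bound <span class="math-container">$\omega_1(k)\ge k/\sqrt{V_\mathrm{max}}$</span> is rigorous from the Rayleigh quotient, since <span class="math-container">$\langle\psi,(-\frac{d^2}{dx^2}+k^2)\psi\rangle \ge k^2\|\psi\|^2$</span> and <span class="math-container">$\langle\psi,V\psi\rangle \le V_\mathrm{max}\|\psi\|^2$</span>; the conjecture is that the ratio <span class="math-container">$\omega_1(k)/k$</span> actually tends to <span class="math-container">$1/\sqrt{V_\mathrm{max}}$</span>:</p>

```python
import numpy as np

def omega_1(k, n=400, L=1.0):
    """Smallest eigenvalue omega_1(k) of psi'' - k^2 psi + omega^2 V psi = 0
    with periodic boundary conditions, via finite differences."""
    x = np.arange(n) * (L / n)
    V = 2.0 + np.cos(2.0 * np.pi * x / L)    # V_max = 3, V_min = 1
    h = L / n
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    K[0, -1] = K[-1, 0] = -1.0 / h**2        # periodic wrap-around
    M = K + k**2 * np.eye(n)
    # omega^2 solves M psi = omega^2 V psi; symmetrize with w = V^(-1/2)
    w = 1.0 / np.sqrt(V)
    S = (w[:, None] * M) * w[None, :]
    return float(np.sqrt(np.linalg.eigvalsh(S)[0]))

k = 200.0
slope_proxy = omega_1(k) / k    # conjectured limit: 1/sqrt(3)
```

<p>Already at moderate <span class="math-container">$k$</span> the ratio lands close to <span class="math-container">$1/\sqrt{3}$</span>, consistent with the plots.</p>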
http://www.4124039.com/q/3378502Functional calculus problem : how "similar" are the position operator and generator of dilations?Marc_Adrienhttp://www.4124039.com/users/1441242019-08-07T18:32:12Z2019-08-07T19:09:11Z
<p>The context is that of discrete Schrödinger operators. We work in the Hilbert space <span class="math-container">$\ell^2(\mathbb{Z})$</span>. <span class="math-container">$S$</span> and <span class="math-container">$S^*$</span> denote the shift operators on the lattice <span class="math-container">$\mathbb{Z}$</span> to the right and left, respectively. <span class="math-container">$N$</span> denotes the position operator.</p>
<p>So for <span class="math-container">$(u(n))_{n\in \mathbb{Z}} \in \ell^2(\mathbb{Z})$</span>, we have <span class="math-container">$(Su)(n) = u(n-1)$</span>, <span class="math-container">$(S^*u)(n) = u(n+1)$</span>, and <span class="math-container">$(Nu)(n) = n u(n)$</span>.</p>
<p>The discrete "equivalent" on <span class="math-container">$\ell^2(\mathbb{Z})$</span> of the "generator of dilations" <span class="math-container">$\mathrm{i}(x \cdot \frac{d}{dx} + \frac{d}{dx} \cdot x)$</span> on <span class="math-container">$L^2(\mathbb{R})$</span> is (the closure of) </p>
<p><span class="math-container">$$ A = \mathrm{i} \big(N(S^* -S) + (S^* -S)N \big) = \mathrm{i} \big(2N(S^* -S) + (S^* +S)\big).$$</span></p>
<p>The last equality is based on the commutator relations
<span class="math-container">$[S^* ,N] = S^*$</span> and <span class="math-container">$[S,N] = -S$</span>.</p>
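<p>These relations, and the equality above, can be confirmed entrywise on a truncated lattice (the truncation to <span class="math-container">$\{-M,\dots,M\}$</span> and the names below are my own; the identities hold exactly here because all matrices involved are supported on the sub- and superdiagonals):</p>

```python
import numpy as np

M = 20                          # truncate the lattice to {-M, ..., M}
dim = 2 * M + 1
S = np.eye(dim, k=-1)           # (S u)(n) = u(n-1)
Sstar = S.T                     # (S* u)(n) = u(n+1)
N = np.diag(np.arange(-M, M + 1, dtype=float))
D = Sstar - S

comm1 = Sstar @ N - N @ Sstar   # [S*, N], expected: S*
comm2 = S @ N - N @ S           # [S, N],  expected: -S
lhs = N @ D + D @ N             # first form of A (without the factor i)
rhs = 2 * N @ D + (Sstar + S)   # second form of A (without the factor i)
```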
<p>Of course, <span class="math-container">$N$</span> and <span class="math-container">$A$</span> are unbounded self-adjoint operators on <span class="math-container">$\ell^2(\mathbb{Z})$</span>. The spectrum of <span class="math-container">$N$</span> is equal to <span class="math-container">$\mathbb{Z}$</span> and I believe the spectrum of <span class="math-container">$A$</span> is <span class="math-container">$\mathbb{R}$</span>. (At least the spectrum of <span class="math-container">$\mathrm{i}(x \cdot \frac{d}{dx} + \frac{d}{dx} \cdot x)$</span> on <span class="math-container">$L^2(\mathbb{R})$</span> is equal to <span class="math-container">$\mathbb{R}$</span>, and this can be seen by using the Mellin transform.) </p>
<p>The domains of <span class="math-container">$N$</span> and <span class="math-container">$A$</span> are not quite the same, however; rather, <span class="math-container">$\mathrm{Domain}(N) \subsetneq \mathrm{Domain}(A)$</span>.</p>
<p>We use the notation <span class="math-container">$\langle t \rangle := \sqrt{t^2+1}$</span>. </p>
<p>Let <span class="math-container">$u \in \ell^2(\mathbb{Z})$</span> have compact support (i.e. finitely many non-zero terms). One can prove that for every <span class="math-container">$s > 0$</span>, there exists a constant <span class="math-container">$c > 0$</span> such that
<span class="math-container">\begin{equation}
\|\langle A \rangle ^{s} u\| \leqslant c \| \langle N \rangle ^{s} u \|.
\end{equation}</span></p>
<p>In particular this implies that <span class="math-container">$\langle A \rangle ^{s} \langle N \rangle ^{-s}$</span> is a bounded operator for every <span class="math-container">$s >0$</span>. The inequality can be shown directly for integer values of <span class="math-container">$s$</span> and then extended to other values of <span class="math-container">$s$</span> by interpolation. For a proof, see for example Lemma 5.1 from the article on <a href="https://arxiv.org/abs/1605.00879" rel="nofollow noreferrer">https://arxiv.org/abs/1605.00879</a> .</p>
<p>Because the operators <span class="math-container">$\langle A \rangle ^{s} \langle N \rangle ^{-s}$</span> are bounded, we can say that <span class="math-container">$A$</span> and <span class="math-container">$N$</span> "grow the same way at infinity". </p>
<p>My question is whether it is also true that <span class="math-container">$\ln(\langle A \rangle) \ln(\langle N \rangle)^{-1}$</span> is a bounded operator,
or, more generally, <span class="math-container">$\ln(\langle A \rangle) ^{s} \ln(\langle N \rangle)^{-s}$</span>?</p>
<p>I would really appreciate any ideas or references. Many thanks!</p>
<p>P.S.: If I'm not mistaken, however, for all <span class="math-container">$s >0$</span> the operator <span class="math-container">$\langle N \rangle ^{s} \langle A \rangle ^{-s}$</span> is not bounded.</p>
http://www.4124039.com/q/3369976Decay of eigenfunctions for LaplacianYannis Pimalishttp://www.4124039.com/users/1436252019-07-26T13:05:29Z2019-07-31T06:57:40Z
<p>Consider the discrete second derivative with Dirichlet boundary conditions on <span class="math-container">$\mathbb C^n$</span>.</p>
<p>Its eigendecomposition is fully known:
<a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors_of_the_second_derivative#Pure_Dirichlet_boundary_conditions_2" rel="nofollow noreferrer">see wikipedia</a></p>
<p>It seems that the largest eigenvalue <span class="math-container">$\lambda_1$</span> is one with a fast-decaying eigenfunction; by this I mean that at the first coordinate <span class="math-container">$\vert v_{1,1} \vert \le Cn^{-3/2}.$</span> The first index refers to the eigenfunction, the second to the coordinate.</p>
<p>A priori, I guess, there is no reason to expect this type of decay at the first coordinate.</p>
<p>Is there a way to prove this without(!) using that the eigenfunctions are explicitly known? That is, can one show this directly from the matrix?</p>
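<p>For reference, a quick numerical check of the claimed decay rate (the closed form gives <span class="math-container">$\vert v_{1,1}\vert = \sqrt{2/(n+1)}\,\sin(\pi/(n+1)) \approx \pi\sqrt{2}\, n^{-3/2}$</span>; the bracketing constants in the assertion are my own rough choices around <span class="math-container">$\pi\sqrt 2 \approx 4.44$</span>):</p>

```python
import numpy as np

n = 200
# discrete (negative) second derivative with Dirichlet boundary conditions
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
vals, vecs = np.linalg.eigh(T)    # eigenvalues in ascending order
v_top = vecs[:, -1]               # eigenvector of the largest eigenvalue

first = abs(v_top[0])             # |v_{1,1}| in the question's indexing
```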
http://www.4124039.com/q/3372733Tight bound on spectral gap of compact homogeneous manifold?hwlinhttp://www.4124039.com/users/1127522019-07-30T18:10:53Z2019-07-30T18:10:53Z
<p><a href="http://www.springerlink.com/index/110J5634T1210717.pdf" rel="nofollow noreferrer">This paper</a> by Peter Li proves a bound on the spectral gap of the Laplacian on a compact homogeneous manifold of diameter <span class="math-container">$d$</span>:</p>
<p><span class="math-container">$$ \lambda_1 \ge c/d^2, $$</span>
where <span class="math-container">$c=\pi^2/4$</span>. Can this bound be strengthened in a large number of dimensions, or is it essentially tight? In other words, if we restrict to manifolds of dimension <span class="math-container">$N$</span>, what is the largest <span class="math-container">$c_N$</span> for which this bound holds? </p>
<p>For example, a square torus in <span class="math-container">$N$</span> dimensions has a spectral gap <span class="math-container">$\lambda \sim 1 $</span> but a diameter <span class="math-container">$\sim \sqrt{N}$</span>, so this bound is far from saturation.
Similarly, for the bi-invariant metric on SO(n), the spectral gap satisfies <span class="math-container">$\lambda \to 1/2$</span> as <span class="math-container">$n \to \infty$</span>, whereas the diameter grows like <span class="math-container">$\sim \sqrt{n}$</span>.
Very naively, this suggests that <span class="math-container">$c_N \sim N$</span>.</p>
http://www.4124039.com/q/3372381Spectrum of a $1$-parameter family of symmetric linear operatorsRenato Moreirahttp://www.4124039.com/users/486182019-07-30T02:48:58Z2019-07-30T02:48:58Z
<p>I am working with certain submanifolds of symmetric spaces and, using a construction of Terng–Thorbergsson, we ended up with the following Hilbert space problem:</p>
<p>Let <span class="math-container">$H$</span> be a (real) Hilbert space and <span class="math-container">$T_t:H \rightarrow H$</span> a smooth <span class="math-container">$1$</span>-parameter family of symmetric linear operators (we can assume that those operators are defined on the whole space). Suppose that there is a finite-dimensional subspace <span class="math-container">$V$</span> of <span class="math-container">$H$</span> such that the spectra of both <span class="math-container">$T_t$</span> and <span class="math-container">$T_t|_{V^\perp}$</span> do not depend on <span class="math-container">$t$</span>, in the sense that the spectra of <span class="math-container">$T_t$</span> and <span class="math-container">$T_s$</span> are the same for all <span class="math-container">$t,s \in \mathbb{R}$</span> (and similarly for <span class="math-container">$T_t|_{V^\perp}$</span>). </p>
<p>Is it true that the trace of <span class="math-container">$T_t|_V$</span> also does not depend on <span class="math-container">$t$</span>?</p>
http://www.4124039.com/q/3368952Understanding a proof about limit of a sequence of open setsMainkithttp://www.4124039.com/users/1420482019-07-24T21:20:51Z2019-07-24T23:04:44Z
<p>We are reading a proof about the following limit
<span class="math-container">\begin{equation}\tag{1}
\lim_{n \to \infty} \sigma_1(T_n)= \sigma_1(T),
\end{equation}</span>
where <span class="math-container">$T:D(T) \subseteq H \to H$</span> and <span class="math-container">$T_n:D(T_n) \subseteq H \to H$</span> are linear operators on a Hilbert space <span class="math-container">$H$</span> and
<span class="math-container">$$\sigma_1(A)= \sigma(A) \cup \{ z \in \mathbb{C}: \|(z-A)^{-1} \|>1\}$$</span>
for a linear operator <span class="math-container">$A$</span>. In the proof it is said that it is enough to show the following:</p>
<p>(i) If <span class="math-container">$K \subseteq \sigma_1(T)$</span> is compact then there is <span class="math-container">$N \in \mathbb{N}$</span> such that <span class="math-container">$K \subseteq \sigma_1(T_n)$</span> for all <span class="math-container">$n \geq N$</span>.</p>
<p>(ii) If <span class="math-container">$K$</span> is compact and <span class="math-container">$K \cap \overline{\sigma_1(T)}=\emptyset$</span> then there is <span class="math-container">$N \in \mathbb{N}$</span> such that <span class="math-container">$K \cap \overline{\sigma_1(T_n)}=\emptyset$</span> for all <span class="math-container">$n \geq N$</span>.</p>
<p>But we do not see why (i) and (ii) imply (1).</p>
<p>The definition of convergence of sets which is used in (1) is the following:
Let <span class="math-container">$\{X_n\}$</span> be a sequence of subsets of <span class="math-container">$\mathbb{C}$</span>. Then <span class="math-container">$x \in \limsup X_n$</span> iff there exist a subsequence <span class="math-container">$\{ X_{n_k}\}$</span> and a sequence <span class="math-container">$\{ x_k \}$</span> such that <span class="math-container">$x_k \in X_{n_k}$</span> and <span class="math-container">$\lim_{k \to \infty} x_k=x$</span>. On the other hand, <span class="math-container">$x \in \liminf X_n$</span> iff there is a sequence <span class="math-container">$\{x_n \}$</span> with <span class="math-container">$x_n \in X_n$</span> such that <span class="math-container">$\lim_{n \to \infty} x_n=x$</span>. The limit <span class="math-container">$\lim_{n \to \infty} X_n$</span> exists if <span class="math-container">$\limsup X_n=\liminf X_n$</span>, and we define <span class="math-container">$\lim_{n \to \infty} X_n=\limsup X_n$</span>.</p>
<p><strong>Our attempt:</strong></p>
<p>We think that (i) implies that <span class="math-container">$\sigma_1(T) \subseteq \liminf \sigma_1(T_n)$</span> because if <span class="math-container">$z \in \sigma_1(T)$</span> then <span class="math-container">$\{z \}$</span> is compact.</p>
<p>From (ii) we can get that <span class="math-container">$\limsup \overline{\sigma_1(T_n)} \subseteq \overline{\sigma_1(T)}$</span>. Indeed if <span class="math-container">$z \notin \overline{\sigma_1(T)}$</span>, then there exists <span class="math-container">$\varepsilon>0$</span> such that <span class="math-container">$\overline{B(z;\varepsilon )} \cap \overline{\sigma_1(T)}= \emptyset$</span> and from (ii) we get <span class="math-container">$\overline{B(z;\varepsilon )} \cap \overline{\sigma_1(T_n)}= \emptyset$</span> for all large enough <span class="math-container">$n$</span>, so <span class="math-container">$z \notin \limsup \overline{\sigma_1(T_n)}$</span>.</p>
<p>But how can we show that <span class="math-container">$\limsup \sigma_1(T_n) \subseteq \sigma_1(T)$</span>?</p>
<p>Thank you for any help you can provide us.</p>
http://www.4124039.com/q/3367700Invertible Dirac operator for generic metricDLINhttp://www.4124039.com/users/952962019-07-23T02:25:32Z2019-07-23T02:25:32Z
<p>Let <span class="math-container">$(M,g)$</span> be an oriented closed spin Riemannian manifold.
We fix a spin structure of <span class="math-container">$M$</span>. Suppose that the Dirac operator <span class="math-container">$D^g$</span> associated with <span class="math-container">$g$</span> is invertible, i.e.
<span class="math-container">$$(D^g)^2\geq\mu>0.$$</span></p>
<p><strong>Q</strong> Does there exist a small neighborhood of <span class="math-container">$g$</span> in the space of Riemannian metrics on <span class="math-container">$M$</span> such that for any metric in it the associated Dirac operator is still invertible? That is, does there exist <span class="math-container">$N(g)\subset \mathcal Met(M)$</span> such that for all <span class="math-container">$h\in N(g)$</span> we have <span class="math-container">$(D^h)^2\geq\frac\mu2$</span>? </p>
<p>Any reference is welcome. </p>
http://www.4124039.com/q/3366633Spectral gaps for spin manifold Laplace spectrumFofi Konstantopoulouhttp://www.4124039.com/users/1431722019-07-21T15:37:34Z2019-07-21T17:24:42Z
<p>For a (compact) spin manifold, we know that the eigenvalues <span class="math-container">$\lambda_n$</span> of the Dirac operator are countable, with finite multiplicity, and satisfy
<span class="math-container">$$
|\lambda_n| \to \infty, ~~~ \text{ as } n \to \infty.
$$</span>
This can be concluded, for example, from the fact that they have compact resolvent, as established in Friedrich's book on Dirac operators in Chapter 4.2.</p>
<p>I am wondering about the gaps between the eigenvalues as we tend to infinity: do they become as large as we want, or is there at least a minimum distance between successive eigenvalues?</p>
<p>(Honestly, I care most about Hermitian manifolds that are spin, so if it is easier in this case, please let me know!)</p>
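For intuition, one can test the flat torus <span class="math-container">$\mathbb{R}^2/2\pi\mathbb{Z}^2$</span>, where the Dirac eigenvalues are <span class="math-container">$\pm|k+\delta|$</span> for <span class="math-container">$k\in\mathbb{Z}^2$</span>, with an offset <span class="math-container">$\delta$</span> depending on the spin structure (this case is computed in Friedrich's book). Taking <span class="math-container">$\delta=0$</span> as a model, the gaps between successive eigenvalue moduli shrink to zero, so no uniform minimum distance can be expected in general; a small numerical sketch (my own illustration, not part of the question):

```python
import numpy as np

def torus_dirac_gaps(R):
    """Gaps between successive distinct values |k|, k in Z^2, up to radius R
    (a model for the flat-torus Dirac spectrum |lambda_n| = |k|, offset 0)."""
    m = np.arange(0, R + 1)
    norms = np.sqrt((m[:, None] ** 2 + m[None, :] ** 2).ravel())
    vals = np.unique(np.round(norms[norms <= R], 10))
    return np.diff(vals)

gaps = torus_dirac_gaps(120)
# e.g. sqrt(100^2) = 100 and sqrt(100^2 + 1) differ by about 0.005,
# so successive eigenvalue moduli can be arbitrarily close
print(gaps.min())
```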
http://www.4124039.com/q/3236123Real part of eigenvalues and LaplacianYizhao Sunhttp://www.4124039.com/users/1360332019-02-20T00:36:55Z2019-07-20T05:01:05Z
<p>I am working on imaging and I am a bit puzzled by the behaviour of this matrix: </p>
<p><span class="math-container">$$A:=\left(
\begin{array}{cccccc}
1 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 1 & 0 & 0 & -1 \\
2 & -2 & 0 & 0 & 0 & 0 \\
-2 & 4 & -2 & 0 & 0 & 0 \\
0 & -2 & 2 & 0 & 0 & 0 \\
\end{array}
\right)$$</span></p>
<p>My matrix <span class="math-container">$A$</span> is a <span class="math-container">$2 \times 2$</span> block matrix (with <span class="math-container">$3 \times 3$</span> blocks): the upper-left block is <span class="math-container">$A_{11}:= \operatorname{diag}(1,0,1)$</span>, the upper-right block is <span class="math-container">$A_{12}= -id$</span>, the lower-left block <span class="math-container">$A_{21}$</span> is the graph Laplacian <span class="math-container">$-\Delta$</span>, and the lower-right block is zero.</p>
<p>It is known that the lowest eigenvalue of the graph Laplacian is zero.</p>
<p>Now my matrix has all eigenvalues in the right half of the complex plane (non-negative real part), and the one with smallest real part has real part zero.</p>
<p>However, if in the <span class="math-container">$A_{21}$</span> block I consider, instead of the graph Laplacian, the matrix <span class="math-container">$-\Delta+id$</span>, then this block is bounded away from zero and </p>
<p><span class="math-container">$$A:=\left(
\begin{array}{cccccc}
1 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 1 & 0 & 0 & -1 \\
2+1 & -2 & 0 & 0 & 0 & 0 \\
-2 & 4+1 & -2 & 0 & 0 & 0 \\
0 & -2 & 2+1 & 0 & 0 & 0 \\
\end{array}
\right)$$</span></p>
<p>has only eigenvalues with strictly positive real part. </p>
<p>I ask: Can anybody explain the relationship between the lower left corner of my matrix <span class="math-container">$A$</span> having spectrum bounded away from zero and all eigenvalues of <span class="math-container">$A$</span> being strictly contained in the right half plane?</p>
<p>How does <span class="math-container">$\lambda_{\text{min}}(A_{21})$</span> relate to <span class="math-container">$\operatorname{min}\Re(\sigma(A))$</span>?</p>
<p>EDIT: I was thinking that some Block matrix identities may be useful <a href="http://djalil.chafai.net/blog/2012/10/14/determinant-of-block-matrices/" rel="nofollow noreferrer">http://djalil.chafai.net/blog/2012/10/14/determinant-of-block-matrices/</a></p>
<p>but I do not quite get it together.</p>
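For what it's worth, the claimed behaviour of the two matrices is easy to confirm numerically (a quick check of the statements above, using the block structure; the code is my own):

```python
import numpy as np

D = np.diag([1.0, 0.0, 1.0])            # A11
L = np.array([[ 2., -2.,  0.],          # graph Laplacian block (A21)
              [-2.,  4., -2.],
              [ 0., -2.,  2.]])
Z = np.zeros((3, 3))

def block(upper_left, lower_left):
    return np.block([[upper_left, -np.eye(3)], [lower_left, Z]])

A       = block(D, L)                   # first matrix in the question
A_shift = block(D, L + np.eye(3))       # A21 replaced by -Delta + id

re_A     = np.linalg.eigvals(A).real
re_shift = np.linalg.eigvals(A_shift).real
print(re_A.min(), re_shift.min())       # ~0 versus strictly positive
```

(Note that an eigenvector for eigenvalue $0$ of $A$ can be written down directly from the kernel vector $(1,1,1)$ of the Laplacian block, which is why the minimum real part is exactly zero in the first case.)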
http://www.4124039.com/q/3363381Spectral bound for maximum clique $k(G)$ in a permutation graphkvphxgahttp://www.4124039.com/users/113632019-07-17T18:07:06Z2019-07-18T14:50:03Z
<p>Let <span class="math-container">$\pi \in S_n$</span> be an arbitrary permutation. By permutation graph, we refer to a simple graph with nodes <span class="math-container">$[n]$</span> and edges that connect pairs of nodes that appear in increasing order in <span class="math-container">$\pi$</span>. Formally, <span class="math-container">$G=(V=[n],E=\{\{i,j\}\colon i<j\;\&\; \pi_i<\pi_j\})$</span>. It is clear from the definition that an increasing subsequence in <span class="math-container">$\pi$</span> corresponds to a clique in <span class="math-container">$G$</span>. As a consequence, the maximum clique size <span class="math-container">$k(G)$</span> equals the length of the longest increasing subsequence (LIS) of the permutation, <span class="math-container">$LIS(\pi)$</span>. If <span class="math-container">$A$</span> denotes the adjacency matrix of <span class="math-container">$G$</span> (which is symmetric, and whose underlying relation is transitive), the question is: What can be said about the spectral properties of <span class="math-container">$A$</span> in relation to <span class="math-container">$LIS(\pi)$</span>? The general clique problem is known to be NP-hard, but there are some interesting spectral bounds, for instance:
<span class="math-container">$$k(G)\ge \frac{n}{n-\lambda_1(A)}$$</span>
derived using the theorem by Motzkin and Straus (<a href="https://www.cambridge.org/core/journals/canadian-journal-of-mathematics/article/maxima-for-graphs-and-a-new-proof-of-a-theorem-of-turan/AC3CC45896B053B75C856F25829CA95C" rel="nofollow noreferrer">link</a>).
But since we know that this specific problem has a dynamic programming solution, I am wondering if tighter bounds exist. Moreover, are there also upper bounds for <span class="math-container">$k(G)$</span>? </p>
<p>This might seem like approaching an easy problem via a much harder one, but for reasons not discussed here, the spectral properties of the permutation graph are of interest. </p>
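As a sanity check, the spectral lower bound can be compared against the exact value <span class="math-container">$LIS(\pi)$</span>, computed by the standard patience-sorting dynamic program (a quick experiment of mine, with hypothetical helper names):

```python
import bisect
import numpy as np

def lis_length(perm):
    """Length of the longest increasing subsequence (patience sorting)."""
    tails = []
    for x in perm:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def permutation_graph_bound(perm):
    """The bound n / (n - lambda_1(A)) for the permutation graph of perm."""
    n = len(perm)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if perm[i] < perm[j]:       # i < j and pi_i < pi_j: edge
                A[i, j] = A[j, i] = 1.0
    lam1 = np.linalg.eigvalsh(A).max()
    return n / (n - lam1)

rng = np.random.default_rng(0)
perm = list(rng.permutation(50))
print(permutation_graph_bound(perm), lis_length(perm))
```

In experiments like this the spectral bound is typically far below the true LIS for a random permutation, which is consistent with the suspicion in the question that tighter bounds should exist.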
http://www.4124039.com/q/3362960Different definitions of a relatively compact operatorJannik Pitthttp://www.4124039.com/users/1173932019-07-17T08:18:47Z2019-07-17T09:28:50Z
<p>(Cross-post from <a href="https://math.stackexchange.com/questions/3286716/different-definitions-of-a-relatively-compact-operator">Math Stackexchange</a>, where some work has been done in the comments)</p>
<p>Let <span class="math-container">$T,K$</span> be unbounded operators on a Hilbert space <span class="math-container">$H$</span>.
I've seen the following definition of a relatively compact operator:</p>
<blockquote>
<p>(i) The operator <span class="math-container">$K$</span> is called <em>relatively compact</em> with respect to <span class="math-container">$T$</span>, if for some <span class="math-container">$z$</span> in the resolvent set of <span class="math-container">$T$</span>, <span class="math-container">$KR_T(z)$</span> is compact, where <span class="math-container">$R_T(z):=(T-z)^{-1}.$</span></p>
</blockquote>
<p>I've also seen:</p>
<blockquote>
<p>(ii) The operator <span class="math-container">$K$</span> is called <em>relatively compact</em> with respect to <span class="math-container">$T$</span>, if for every sequence <span class="math-container">$(x_n)_{n \in \mathbb{N}}\subseteq H$</span> such that <span class="math-container">$(Tx_n)_{n \in \mathbb{N}}$</span> is bounded, <span class="math-container">$(Kx_n)_{n \in \mathbb{N}}$</span> contains a convergent subsequence.</p>
</blockquote>
<p>All of this is in the context of spectral theory and <span class="math-container">$T$</span> can be assumed to be self-adjoint.
Do definitions (i) and (ii) have something to do with each other, or are they distinct? <em>What is the intuition behind these definitions?</em> Definition (ii) looks like a generalisation of a compact operator, but definition (i) is just weird.</p>
http://www.4124039.com/q/3226592Spectrum of the Magnetic Stark Hamiltonians $H(\mu,\epsilon)$Kacdimahttp://www.4124039.com/users/1188482019-02-07T10:38:00Z2019-07-08T13:00:07Z
<p>I am looking for a document where I can find a proof of the description of the spectrum of the Magnetic Stark
Hamiltonians <span class="math-container">$H(\mu,\epsilon)=\big(D_x-\mu y\big)^2+D^2_y+\epsilon x+V(x,y)$</span> for <span class="math-container">$\epsilon\not=0$</span>, as cited in equation <span class="math-container">$(1.2)$</span> of the article below:</p>
<p><a href="http://www.hrpub.org/download/20140105/MS6-13401691.pdf" rel="nofollow noreferrer">http://www.hrpub.org/download/20140105/MS6-13401691.pdf</a></p>
<p>Thanks</p>
http://www.4124039.com/q/2030283Proof of eigenvalue stability inequality via Courant-Fischer min-max theoremTahahttp://www.4124039.com/users/344452015-04-15T19:24:45Z2019-06-30T23:45:01Z
<p>T. Tao, in <a href="https://terrytao.wordpress.com/2010/01/12/254a-notes-3a-eigenvalues-and-sums-of-hermitian-matrices/" rel="nofollow noreferrer">his notes on eigenvalue inequalities, uses the Courant-Fischer min-max theorem to prove the eigenvalue stability inequality</a>. Specifically, I am looking for a proof of Eq. (13), which he states as an immediate consequence of Eq. (6) and (10). But the problem is that the min-max function is not convex. I have read Stewart & Sun's book on <em>Matrix Perturbation Theory</em>, but it seems that they felt it is obvious too. </p>
<p>Can someone provide more details on how to derive Eq. (13)?</p>
http://www.4124039.com/q/33469322Rigorous justification for this formal solution to $f(x+1)+f(x)=g(x)$BigbearZzzhttp://www.4124039.com/users/801912019-06-24T14:23:22Z2019-06-28T13:06:48Z
<p>Let <span class="math-container">$g\in C(\Bbb R)$</span> be given, we want to find a solution <span class="math-container">$f\in C(\Bbb R)$</span> of the equation </p>
<blockquote>
<p><span class="math-container">$$
f(x+1) + f(x) = g(x).
$$</span></p>
</blockquote>
<p>We may rewrite the equation using the right-shift operator <span class="math-container">$(Tf)(x) = f(x+1)$</span> as
<span class="math-container">$$
(I+ T)f=g.
$$</span>
Formally, I can say that the solution of this equation is </p>
<blockquote>
<p><span class="math-container">$$
f= (I+ T)^{-1} g.
$$</span></p>
</blockquote>
<p>Of course, I am aware that there are infinitely many solutions to the equation since the kernel of <span class="math-container">$(I+T)$</span> consists of all <span class="math-container">$h\in C(\Bbb R)$</span> such that <span class="math-container">$h(x+1)=-h(x)$</span>, e.g. <span class="math-container">$\sin(\pi x)$</span>, but please bear with me for a moment here.</p>
<p>By the theory of operator algebra, <strong>if</strong> <span class="math-container">$f,g$</span> are from some nice Banach space <span class="math-container">$X$</span> <strong>and</strong> our linear operator <span class="math-container">$T:X\to X$</span> satisfies <span class="math-container">$\|T\|<1$</span>, then we have
<span class="math-container">$$
f = \left(\sum_{n=0}^\infty (-T)^n \right)g.
$$</span></p>
<p>However, it is not unreasonable to expect that we should have <span class="math-container">$\|T\|=1$</span> for a right-shift operator in most reasonable function spaces so let's try to solve the equation
<span class="math-container">$$
f= (I+\lambda T)^{-1} g.
$$</span>
for <span class="math-container">$|\lambda| <1$</span> first and then take <span class="math-container">$\lambda\to 1$</span>. Note that all the steps until now are purely formal, since <span class="math-container">$C(\Bbb R)$</span> is not a normed space.</p>
<hr>
<p>To illustrate what I meant, let's say we take <span class="math-container">$g(x) = (x+2)^2$</span>. We now try to implement the above method (for <span class="math-container">$|\lambda|<1$</span>) to get
<span class="math-container">$$\begin{align}
f(x) &= \left(I - \lambda T + \lambda^2 T^2 - \dots \right) g(x) \\
&= (x+2)^2 -\lambda (x+3)^2 + \lambda^2 (x+4)^2 + \dots \\
&= \left(1-\lambda+\lambda^2-\dots \right)x^2 + \left(2-3\lambda+4\lambda^2-\dots \right)2x + \left(2^2-3^2\lambda+4^2\lambda^2-\dots \right) \\
&= \frac{1}{1+\lambda} x^2 + 2 \frac{2+\lambda}{(1+\lambda)^2} x + \frac{4+3\lambda + \lambda^2}{(1+\lambda)^3}.
\end{align}$$</span>
We shall be brave here and substitute <span class="math-container">$\lambda=1$</span> even though the series doesn't converge there. This gives
<span class="math-container">$$
f(x) = \frac 12 x^2 + \frac 32 x + 1
$$</span>
but voilà, for some mysterious reasons unknown to me, this <span class="math-container">$f$</span> actually solves our original equation <span class="math-container">$f(x+1) + f(x) = (x+2)^2$</span> !</p>
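The computation above can at least be checked mechanically: the Abel-regularized partial sums at <span class="math-container">$\lambda$</span> slightly below <span class="math-container">$1$</span> approach the polynomial obtained by substituting <span class="math-container">$\lambda=1$</span>, and that polynomial really does solve the functional equation (a quick numerical sanity check; the function names are mine):

```python
def f_closed(x):
    """Candidate solution obtained by setting lambda = 1 in the Abel sum."""
    return 0.5 * x**2 + 1.5 * x + 1.0

def f_abel(x, lam, terms=50_000):
    """Partial sum of sum_n (-lam)^n g(x + n) with g(y) = (y + 2)^2."""
    return sum((-lam) ** n * (x + n + 2) ** 2 for n in range(terms))

x = 0.7
print(f_closed(x + 1) + f_closed(x) - (x + 2) ** 2)  # zero up to rounding
print(f_abel(x, 0.999) - f_closed(x))                # small for lambda near 1
```

The truncation is harmless here because the series is alternating and <span class="math-container">$\lambda^n n^2$</span> is negligible for <span class="math-container">$n$</span> near 50,000 when <span class="math-container">$\lambda \le 0.999$</span>.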
<hr>
<p>My question is simply:</p>
<blockquote>
<p>What are the hidden theories behind the miracle we observe here? How can we justify all these seemingly unjustifiable steps?</p>
</blockquote>
<p>I can't give you a reference for this method because I just conjured it up, thinking that it wouldn't work. To my greatest surprise, the answer actually makes sense. I am sure that a similar method is practiced somewhere, probably by physicists.</p>
<p><strong>Remark</strong>: I posted <a href="https://math.stackexchange.com/questions/3272123/how-to-justify-solving-fx1-fx-gx-using-this-spectral-like-method">an isomorphic question on MSE</a> earlier and one of the commenters mentioned that this could be related to the <em>holomorphic functional calculus</em> (HFC), specifically the part where I let <span class="math-container">$\lambda \to 1$</span>. I once learned HFC just for an exam and don't have much memory of it so I don't immediately see if we can make the above method fully rigorous using merely the standard HFC or not.</p>
http://www.4124039.com/q/3349201Regarding essential spectrum of the unilateral shift operatoruser534666http://www.4124039.com/users/1276742019-06-27T13:31:31Z2019-06-27T18:31:50Z
<p>This is in the context of Example 4.10 in Section 11 of the book <em>A Course in Functional Analysis</em> by J.B. Conway. Let <span class="math-container">$\sigma_{le}(S)$</span> and <span class="math-container">$\sigma_{re}(S)$</span> denote the left and right essential spectrum of the unilateral shift operator <span class="math-container">$S$</span>, respectively. Let <span class="math-container">$\partial{\mathbb{D}}$</span> be the boundary of the open unit disk in the complex plane.<a href="https://i.stack.imgur.com/4jMjV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jMjV.jpg" alt="enter image description here"></a> I can understand in the example why
<span class="math-container">$\partial{\mathbb{D}}\subseteq \sigma_{le}(S)\cap\sigma_{re}(S)$</span>, and I can prove that
<span class="math-container">$\sigma_{le}(S)\cap\sigma_{re}(S)\subseteq\partial{\mathbb{D}} $</span>. But I do not understand how <span class="math-container">$\partial{\mathbb{D}}=\sigma_{le}(S)=\sigma_{re}(S)$</span>?
Can anyone explain how?</p>
http://www.4124039.com/q/3348012Inclusion of the spectrum of two differential operators defined on $L^2[-a,a]$ and $L^2[0, \infty)$Mainkithttp://www.4124039.com/users/1420482019-06-25T19:34:51Z2019-06-25T19:34:51Z
<p>Let <span class="math-container">$T$</span> be the formal operator defined by <span class="math-container">$$Tu:= \sum_{j=0}^{2n} a_j\frac{d^ju}{dx^j}$$</span> where <span class="math-container">$a_j \in \mathbb{C}$</span>. Consider the differential operators <span class="math-container">$T_a: D(T_a)\subseteq L^2[-a,a] \to L^2[-a,a]$</span> and <span class="math-container">$T_\infty: D(T_\infty)\subseteq L^2[0,\infty) \to L^2[0,\infty)$</span> defined by
<span class="math-container">$$T_af:=Tf, \ T_\infty g:=Tg, \ f \in D(T_a), \ g \in D(T_\infty),$$</span>
where <span class="math-container">$$D(T_a):=\{ f \in L^2[-a,a] : Tf \in L^2[-a,a], f^{(j)}(-a)=f^{(j)}(a)=0 \mbox{ for } 0 \leq j \leq n-1\}$$</span>
and
<span class="math-container">$$D(T_\infty):=\{ f \in L^2[0,\infty) : Tf \in L^2[0,\infty) , f^{(j)}(0)=0 \mbox{ for } 0 \leq j \leq n-1\}.$$</span></p>
<p>Can we say that <span class="math-container">$\sigma(T_a) \subseteq \sigma(T_\infty)$</span>? I know that the inclusion is true if we take <span class="math-container">$Tu:=u''$</span> or <span class="math-container">$Tu:=-u''-2u'$</span>, for example.</p>
<p>Thanks in advance for any help you are able to provide.</p>
http://www.4124039.com/q/1434199Spectrum of Dirichlet Problem for Laplacian on a Parallelogramuser40600http://www.4124039.com/users/406002013-09-28T15:27:05Z2019-06-21T14:58:58Z
<p>Let $ M \subset \mathbb{R}^2 $ be a parallelogram constructed by putting together two equilateral triangles (so that all sides of the parallelogram have length 1, and the internal angles are 60 and 120 degrees). What is the spectrum of the Laplacian $ \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} $ with Dirichlet boundary conditions on $ M $?</p>
<p>The spectrum of the Laplacian on the equilateral triangle is known, so some of the eigenfunctions - those that vanish on the diagonal - are known. But what about the whole spectrum?</p>
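Numerically, the spectrum is easy to approximate: the affine map $x = u + v/2$, $y = \sqrt{3}\,v/2$ sends the unit square in $(u,v)$ to the rhombus, and transforms the Laplacian into $\frac{4}{3}\left(\partial_u^2 - \partial_u\partial_v + \partial_v^2\right)$, which can be discretized by finite differences with Dirichlet conditions on the square (a rough sketch of mine, with no claim of sharp accuracy):

```python
import numpy as np
from scipy.linalg import eigh

def rhombus_dirichlet_eigs(n, k=4):
    """Finite-difference approximation of the first k Dirichlet eigenvalues
    of -Delta on the 60-120 rhombus, via the affine map to the (u,v) square:
    -Delta = (4/3) * (-d_uu - d_vv + d_uv)."""
    h = 1.0 / (n + 1)
    N = n * n
    idx = lambda i, j: i * n + j            # interior grid point (i, j)
    M = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            M[p, p] += 4.0 / h**2           # -d_uu - d_vv, diagonal part
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                if 0 <= i + di < n and 0 <= j + dj < n:
                    M[p, idx(i + di, j + dj)] += -1.0 / h**2
            # +d_uv via the centered cross stencil (zero outside = Dirichlet)
            for di, dj, c in [(1, 1, 1.0), (-1, -1, 1.0),
                              (1, -1, -1.0), (-1, 1, -1.0)]:
                if 0 <= i + di < n and 0 <= j + dj < n:
                    M[p, idx(i + di, j + dj)] += c / (4 * h**2)
    vals = eigh((4.0 / 3.0) * M, eigvals_only=True)
    return vals[:k]

print(rhombus_dirichlet_eigs(24))
```

Refining the mesh and extrapolating gives a usable estimate of the low eigenvalues, which can then be compared with the known antisymmetric triangle modes mentioned above.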
http://www.4124039.com/q/3341991Sum of Square of the Eigenvalues of Wishart Matrixkawahttp://www.4124039.com/users/1271502019-06-17T16:31:33Z2019-06-17T19:53:45Z
<p>Let <span class="math-container">$A\in\mathbb{R}^{m\times d}$</span> be a matrix with iid standard normal entries, with <span class="math-container">$m\geqslant d$</span>, and define <span class="math-container">$S=A^T A$</span>. </p>
<p>I want to have a tight upper bound for <span class="math-container">$\sum_{k=1}^d \lambda_k^2$</span>, where <span class="math-container">$\lambda_1,\dots,\lambda_d$</span> are the eigenvalues of <span class="math-container">$S$</span>. </p>
<p><strong>What I tried:</strong></p>
<ul>
<li>We know that (see e.g. Corollary 5.35 in Vershynin's notes), for <span class="math-container">$A\in\mathbb{R}^{m\times d}$</span>, for any <span class="math-container">$t\geqslant 0$</span>, with probability at least <span class="math-container">$1-2\exp(-\Omega(t^2))$</span>, it holds:
<span class="math-container">$$
\sqrt{m}-\sqrt{d}-t \leqslant \sigma_{min}(A)\leqslant \sigma_{max}(A)\leqslant \sqrt{m}+\sqrt{d}+t.
$$</span>
Simply ignoring the <span class="math-container">$\sqrt{d},t$</span> terms (say I am in the regime <span class="math-container">$m\gg d,t$</span>), this yields <span class="math-container">$\lambda_i=\sigma_i(A)^2\lesssim m$</span>, hence <span class="math-container">$\lambda_i^2\lesssim m^2$</span>, and thus the sum above is upper bounded by <span class="math-container">$m^2d$</span>. </li>
<li>We also have the following:
<span class="math-container">$$
\sum_{k=1}^d (\lambda_k - m) = \sum_{i =1}^m \sum_{j=1}^d (A_{ij}^2-1),
$$</span>
which is sum of sub-exponential random variables, and thus, by a Bernstein-type bound, <span class="math-container">$\sum_{k=1}^d \lambda_k \leqslant md+\omega(\sqrt{md})$</span>, for some function <span class="math-container">$\omega(\sqrt{md})$</span> growing faster than <span class="math-container">$\sqrt{md}$</span>. </li>
<li>The sum above is simply the trace of <span class="math-container">$S^2=A^TAA^TA$</span>.</li>
</ul>
<p>I'm new to random matrix business, so any help is greatly appreciated.</p>
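One remark that may help calibrate the bounds: the quantity in question is exactly <span class="math-container">$\sum_k \lambda_k^2 = \operatorname{tr}(S^2)$</span>, and its expectation has a closed form; for iid standard normal entries, a standard Wishart moment computation gives <span class="math-container">$\mathbb{E}\operatorname{tr}(S^2) = md(m+d+1)$</span>, so the typical size is <span class="math-container">$\approx m^2 d$</span> for <span class="math-container">$m \gg d$</span>, matching the crude bound above. A quick Monte Carlo check of this moment (my own code, not from the question):

```python
import numpy as np

def trace_S2_mc(m, d, trials, seed=0):
    """Monte Carlo estimate of E[tr(S^2)] = E[sum_k lambda_k^2]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        A = rng.standard_normal((m, d))
        S = A.T @ A
        total += np.trace(S @ S)      # = sum of lambda_k^2
    return total / trials

m, d = 40, 8
est = trace_S2_mc(m, d, trials=400)
exact = m * d * (m + d + 1)           # E[tr(S^2)] for iid N(0,1) entries
print(est, exact)
```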
http://www.4124039.com/q/3341751Shift operator on a Banach spaceuser534666http://www.4124039.com/users/1276742019-06-17T10:19:56Z2019-06-17T10:19:56Z
<p>I have been reading the paper titled <a href="https://reader.elsevier.com/reader/sd/pii/S0022123696900324?token=93D6C4CCA8A2B6DC6C795926D55AC0A17FEC3A5E7FAC1AB84C26B9C9CC7682FB4B4710826CD6C36802605EB04E083EF8" rel="nofollow noreferrer">Dual Piecewise Analytic Bundle Shift Models of Linear Operators</a> by Dmitry Yakubovich.</p>
<p>In the second paragraph of the introduction it says "Let <span class="math-container">$T$</span> be a bounded linear operator on a reflexive Banach space <span class="math-container">$X$</span>. It will be assumed that <em><span class="math-container">$T$</span> behaves like the shift operator, that is, that the eigenvalues of <span class="math-container">$T^*$</span> fill in some connected components of the complement of the essential spectrum of <span class="math-container">$T$</span>, whereas the point spectrum of <span class="math-container">$T$</span> is empty.</em>"</p>
<p>Can anyone explain why the statement in italics would be true? Why an operator which behaves like a shift operator have such properties? Any reference to theory related to this would be helpful.</p>
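For concreteness, the prototype on <span class="math-container">$\ell^2$</span> is the unilateral shift <span class="math-container">$S$</span>: its point spectrum is empty, while its adjoint, the backward shift <span class="math-container">$S^*$</span>, has every <span class="math-container">$\lambda$</span> in the open unit disk as an eigenvalue, with eigenvector <span class="math-container">$(1,\lambda,\lambda^2,\dots)$</span>; the essential spectrum is the unit circle. A small finite-section illustration of the eigenvalue claim (my own sketch, not from the paper):

```python
import numpy as np

N = 400                       # truncation size
lam = 0.6 + 0.5j              # any |lam| < 1
v = lam ** np.arange(N)       # truncated eigenvector (1, lam, lam^2, ...)

# backward shift: (S* v)_k = v_{k+1}
Sstar_v = np.append(v[1:], 0.0)
residual = np.linalg.norm(Sstar_v - lam * v) / np.linalg.norm(v)
print(residual)               # decays like |lam|^N as N grows
```

So the eigenvalues of <span class="math-container">$S^*$</span> fill the open disk, which is exactly one connected component of the complement of the essential spectrum, as in the italicized statement.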
http://www.4124039.com/q/3332252Graph Fourier transform definitionHanna Gáborhttp://www.4124039.com/users/1414682019-06-04T09:31:11Z2019-06-05T09:02:21Z
<p>I have a question about the definition of the graph Fourier transform. Let me start with definition.</p>
<p>Let <span class="math-container">$A$</span> be the adjacency matrix of a graph <span class="math-container">$G$</span> with vertex set <span class="math-container">$V = \{1, 2, \dots, n\}$</span>. The Laplacian matrix of <span class="math-container">$G$</span> is defined as <span class="math-container">$L = D - A$</span>, where <span class="math-container">$D$</span> is a diagonal degree matrix with <span class="math-container">$d_{ii} = deg(i)$</span>. Let <span class="math-container">$\varphi_1, \dots, \varphi_n$</span> be an orthonormal eigenbasis of <span class="math-container">$L$</span> and <span class="math-container">$\lambda_1, \dots, \lambda_n$</span> be the corresponding eigenvalues. Let <span class="math-container">$f$</span> be a <span class="math-container">$V \rightarrow \mathbb{R}$</span> function. The graph Fourier transform is defined as
<span class="math-container">\begin{equation}
\hat{f}(\lambda_i) = \langle f, \varphi_i \rangle = \sum\limits_{k = 1}^n f(k) \varphi_i^*(k)
\end{equation}</span></p>
<p>My question is: what happens if <span class="math-container">$\lambda_i = \lambda_{i + 1}$</span>? It seems to me that this definition could give two different values for <span class="math-container">$\hat{f}(\lambda_i)$</span>.
Is it guaranteed that <span class="math-container">$\sum\limits_{k = 1}^n f(k) \varphi_i^*(k) = \sum\limits_{k = 1}^n f(k) \varphi_{i+1}^*(k)$</span>?</p>
<p>EDIT:
I have no problem with choosing an orthonormal eigenbasis even if there are eigenvalues with multiplicity bigger than <span class="math-container">$1$</span>. My problem arises only after the eigenvectors are chosen. Suppose that <span class="math-container">$\lambda_i = \lambda_{i + 1} = 3$</span>. Then I have two different formulas for <span class="math-container">$\hat{f}(3)$</span>:</p>
<p><span class="math-container">\begin{equation}
\hat{f}(3) = \sum\limits_{k = 1}^n f(k) \varphi_i^*(k)
\end{equation}</span>
and
<span class="math-container">\begin{equation}
\hat{f}(3) = \sum\limits_{k = 1}^n f(k) \varphi_{i+1}^*(k)
\end{equation}</span></p>
<p>I think there are two possibilities:</p>
<ol>
<li><p>The two quantities above are the same:
<span class="math-container">$\sum\limits_{k = 1}^n f(k) \varphi_i^*(k) = \sum\limits_{k = 1}^n f(k) \varphi_{i+1}^*(k)$</span>. I can't see why this would be true.</p></li>
<li><p>I misunderstand something about the definition.</p></li>
</ol>
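In fact possibility 1 fails in general: for a repeated eigenvalue the coefficient <span class="math-container">$\langle f, \varphi_i \rangle$</span> depends on which orthonormal basis of the eigenspace was chosen, so only the projection of <span class="math-container">$f$</span> onto the whole eigenspace is basis-independent. A concrete check on the 4-cycle, whose Laplacian has eigenvalue <span class="math-container">$2$</span> with multiplicity <span class="math-container">$2$</span> (my own example):

```python
import numpy as np

# Laplacian of the 4-cycle: eigenvalues 0, 2, 2, 4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# two orthonormal eigenvectors for the repeated eigenvalue 2
phi1 = np.array([1, 0, -1, 0]) / np.sqrt(2)
phi2 = np.array([0, 1, 0, -1]) / np.sqrt(2)

f = np.array([1.0, 2.0, 3.0, 5.0])
c1, c2 = f @ phi1, f @ phi2
print(c1, c2)   # two different candidate values for f-hat(2)
```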
http://www.4124039.com/q/2421367Why $M_1 \subset M_2 \not \Rightarrow N_{M_1} (\lambda) \leq N_{M_2} (\lambda)$ for eigenvalue problem? (EDIT)Sharpiehttp://www.4124039.com/users/915822016-06-14T01:19:42Z2019-06-03T09:35:41Z
<p>We know for the direct problem with Dirichlet boundary condition (for the Laplacian) that if two domains $M_1$ and $M_2$ are such that $M_1 \subset M_2$, then $\lambda(M_2) \leq \lambda(M_1)$, and hence $N_{M_1} (\lambda) \leq N_{M_2} (\lambda)$. Why does no similar result exist for the direct problem with Neumann boundary condition, i.e. why $M_1 \subset M_2 \not \Rightarrow N_{M_1} (\lambda) \leq N_{M_2} (\lambda)$? Could anyone give me a <strong>clever counterexample?</strong> I think this is related to the fact that $H^1(M_1) \not \subset H^1(M_2)$.</p>
<p><strong>Precision :</strong> $N(\lambda) \equiv \text{the number of eigenvalues less than } \lambda$.</p>
<p><strong>EDIT :</strong></p>
<p>The Neumann eigenvalues of the rectangle with sides $a$ and $b$ are $$\nu_{k,l}=\frac{(\pi k)^2}{a^2}+\frac{(\pi l)^2}{b^2},$$ with $k,l \in \mathbb{N}_0$. So assuming that $a>b$, the first two eigenvalues are $\nu_1=0$ and $\nu_2=\frac{\pi^2}{a^2}$. We pick $1 < a < \sqrt{2}$, and choose $b>0$ sufficiently small, so that the rectangle can be placed inside the unit square. For the unit square, the first $3$ Neumann eigenvalues are $\nu_1 ' = 0$, $\nu_2 ' = \pi^2$, and $\nu_3 ' = \pi^2$. Since $a>1$, we have $\nu_2 < \nu_2 '$, which could not happen if domain monotonicity were true.</p>
<p>Does this example work? If so, given that the eigenvalues of the rectangle have the same form as in the Dirichlet case, why is it a counterexample for the Neumann boundary condition but not for the Dirichlet one?</p>
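The numbers in the edit are easy to verify from the explicit eigenvalue formula (assuming the thin rectangle is placed along the diagonal of the unit square, which works since $a<\sqrt 2$ and $b$ is small; the code is my own):

```python
import math

def neumann_eigs_rect(a, b, K=4):
    """Three smallest Neumann eigenvalues (pi k / a)^2 + (pi l / b)^2, k,l >= 0."""
    vals = sorted((math.pi * k / a) ** 2 + (math.pi * l / b) ** 2
                  for k in range(K) for l in range(K))
    return vals[:3]

a, b = 1.3, 0.05                     # 1 < a < sqrt(2), thin rectangle
nu = neumann_eigs_rect(a, b)         # the rectangle inside the unit square
nu_sq = neumann_eigs_rect(1.0, 1.0)  # the unit square itself
print(nu[1], nu_sq[1])               # nu_2 < nu_2': monotonicity fails
```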
http://www.4124039.com/q/3328842Zero in the spectrum of an elliptic second order operatorMKOhttp://www.4124039.com/users/161832019-05-30T18:30:46Z2019-05-30T18:30:46Z
<p>This might be considered as a continuation of my previous question <a href="http://www.4124039.com/questions/332716/spectrum-of-a-linear-elliptic-operator">Spectrum of a linear elliptic operator</a>
but is independent of it. I have another question on V. Gribov's paper "Quantization of non-Abelian gauge theories", Nuclear Physics B 139 (1978): 1–19.
<p>Let <span class="math-container">$\frak{g}$</span> be a Lie algebra of a compact simple Lie group (e.g. <span class="math-container">$\frak{g}=su(2)$</span> is interesting enough).
Let <span class="math-container">$$A_\mu\colon \mathbb{R}^4\to \frak{g}, \mu=1,2,3,4.$$</span>
be smooth functions of fast decay such that
<span class="math-container">$$
\partial_\mu A_\mu=0 \,\,\,\,\, (1)
$$</span> (with summation convention over repeated indices).</p>
<p>Consider the following linear elliptic operator on maps <span class="math-container">$\alpha\colon \mathbb{R}^4\to \frak{g}$</span>:</p>
<p><span class="math-container">$$L_A(\alpha)=-\Delta\alpha +[A_\mu,\partial_\mu \alpha],$$</span>
where <span class="math-container">$\Delta$</span> is the usual Laplacian acting componentwise, <span class="math-container">$[\cdot,\cdot]$</span> is the Lie bracket. Due to (1) the operator <span class="math-container">$L_A$</span> is self adjoint.</p>
<p>V. Gribov makes the following claim without any explanation (see section 3 of his paper though he is using a different notation):</p>
<p><strong>There exist smooth <span class="math-container">$A_\mu$</span> with fast decay at infinity such that the discrete spectrum of <span class="math-container">$L_A$</span> contains 0.</strong></p>
<blockquote>
<p>I would like to have a proof of the above claim.</p>
</blockquote>
<p><strong>Remark.</strong> This claim is made on p. 5 of this paper <a href="https://reader.elsevier.com/reader/sd/pii/055032137890175X?token=E9E4528EF06235A698490920BA853B52363405F5D75E83E2B413C76CC74EE9A5977899AA8B8A09DEDF85780F6F703653" rel="nofollow noreferrer">https://reader.elsevier.com/reader/sd/pii/055032137890175X?token=E9E4528EF06235A698490920BA853B52363405F5D75E83E2B413C76CC74EE9A5977899AA8B8A09DEDF85780F6F703653</a></p>
http://www.4124039.com/q/3327162Spectrum of a linear elliptic operatorMKOhttp://www.4124039.com/users/161832019-05-28T21:44:11Z2019-05-28T22:02:08Z
<p>In the paper on quantum field theory by V. Gribov, "Quantization of non-Abelian gauge theories", Nuclear Physics B 139 (1978): 1–19, in Section 3 the author makes the following claim from PDE and operator theory, without any explanation, which I would like to understand.</p>
<p>Let <span class="math-container">$\frak{g}$</span> be a Lie algebra of a compact Lie group. (If you feel uncomfortable with general Lie algebras you may think of a special case <span class="math-container">$\frak{g}=\mathbb{R}^3$</span> with the operation of Lie bracket <span class="math-container">$[\cdot,\cdot]$</span> equal to the vector product <span class="math-container">$\times $</span>.)
For <span class="math-container">$\mu=1,\dots,4$</span> let
<span class="math-container">$$A_\mu\colon \mathbb{R}^4\to \frak{g}$$</span>
be fixed smooth functions with compact support and satisfying <span class="math-container">$\partial_\mu A_\mu=0$</span> (with the summation convention over repeated indices).</p>
<p>Consider the differential operator <span class="math-container">$L$</span> on <span class="math-container">$\frak{g}$</span>-valued functions
<span class="math-container">$$L\alpha =-\Delta \alpha +[A_\mu,\partial_\mu \alpha],$$</span>
where <span class="math-container">$\Delta$</span> is the ordinary Laplacian acting component-wise, <span class="math-container">$[\cdot,\cdot]$</span> is the Lie bracket. Clearly this is a symmetric operator.</p>
<blockquote>
<p>As far as I understand, Gribov claims that <span class="math-container">$L$</span> has no negative discrete spectrum provided <span class="math-container">$A_\mu$</span> are very small (in some sense). Why??</p>
</blockquote>
http://www.4124039.com/q/3323412Spectrum of a Hamiltonian which is a perturbation of LaplacianMKOhttp://www.4124039.com/users/161832019-05-23T23:14:56Z2019-05-24T16:28:33Z
<p>Let <span class="math-container">$\Delta =\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}+\frac{\partial^2}{\partial x_3^2}$</span> be the Laplacian on <span class="math-container">$\mathbb{R}^3$</span>.
Consider a self adjoint operator <span class="math-container">$H$</span> on complex valued functions on <span class="math-container">$\mathbb{R}^3$</span>
<span class="math-container">$$H\psi=\Delta\psi(x) +i\sum_{p=1}^3A_p(x)\frac{\partial \psi(x)}{\partial x_p} +B(x)\psi(x),$$</span>
where <span class="math-container">$A_i,B$</span> are smooth functions.</p>
<blockquote>
<p>I am looking for a precise result of the following approximate form:
(1) if <span class="math-container">$A_i$</span> and <span class="math-container">$B$</span> are 'small' then the discrete spectrum of <span class="math-container">$H$</span> is non-positive. (2) If <span class="math-container">$A_i,B$</span> are 'large' then the discrete spectrum of <span class="math-container">$H$</span> contains necessarily a positive element. </p>
</blockquote>
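A toy version of this dichotomy (with $A_p = 0$ and a radial $B \le 0$) already appears in the standard theory of Schrödinger operators: since $H = \Delta + B = -(-\Delta - B)$, a positive discrete eigenvalue of $H$ corresponds to a bound state of $-\Delta - B$, and in three dimensions a shallow well has no bound state while a deep one does. Writing $\psi = u(r)/r$ reduces the s-wave problem to $-u'' + V(r)u$ on $(0,\infty)$ with $u(0)=0$; a square well of depth $c$ and radius $1$ binds iff $c > \pi^2/4$. A finite-difference check of the two regimes (my own illustration, not a substitute for the precise result the question asks for):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def swave_min_eig(c, R=30.0, n=3000):
    """Smallest eigenvalue of -u'' + V u on (0, R), u(0)=u(R)=0,
    for the radial square well V(r) = -c for r < 1 (s-wave reduction)."""
    h = R / (n + 1)
    r = h * np.arange(1, n + 1)
    V = np.where(r < 1.0, -c, 0.0)
    main = 2.0 / h**2 + V                 # tridiagonal FD matrix, diagonal
    off = -np.ones(n - 1) / h**2          # off-diagonal
    w = eigh_tridiagonal(main, off, eigvals_only=True,
                         select='i', select_range=(0, 0))
    return w[0]

print(swave_min_eig(1.0))    # shallow well (c < pi^2/4): no negative eigenvalue
print(swave_min_eig(10.0))   # deep well: a strictly negative eigenvalue
```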
http://www.4124039.com/q/3322700Show convergence of a sequence of resolvent operators0xbadf00dhttp://www.4124039.com/users/918902019-05-23T06:15:12Z2019-05-23T06:15:12Z
<p>Let</p>
<ul>
<li><span class="math-container">$E$</span> be a locally compact separable metric space</li>
<li><span class="math-container">$(\mathcal D(A),A)$</span> be the generator of a strongly continuous contraction semigroup on <span class="math-container">$C_0(E)$</span></li>
<li><span class="math-container">$E_n$</span> be a metric space for <span class="math-container">$n\in\mathbb N$</span></li>
<li><span class="math-container">$(\mathcal D(A_n),A_n)$</span> be the generator of a strongly continuous contraction semigroup on<span class="math-container">$^1$</span> <span class="math-container">$B(E_n)$</span></li>
<li><span class="math-container">$\pi_n:E_n\to E$</span> be continuous and <span class="math-container">$$\iota_nf:=f\circ\pi_n\;\;\;\text{for }f\in C_0(E)$$</span> for <span class="math-container">$n\in\mathbb N$</span></li>
</ul>
<blockquote>
<p>Let <span class="math-container">$\lambda>0$</span> and <span class="math-container">$f\in C_0(E)$</span>. Assume<span class="math-container">$^2$</span> <span class="math-container">$$\left|\left(R_\lambda(A_n)\iota_nf\right)(x_n)-\left(R_\lambda(A)f\right)(x)\right|\xrightarrow{n\to\infty}0\tag1$$</span> for all <span class="math-container">$x_n\in E_n$</span>, <span class="math-container">$n\in\mathbb N$</span>, and <span class="math-container">$x\in E$</span> with <span class="math-container">$\pi_n(x_n)\xrightarrow{n\to\infty}x$</span>. Are we able to conclude <span class="math-container">$$\left\|R_\lambda(A_n)\iota_nf-\iota_nR_\lambda(A)f\right\|_\infty\xrightarrow{n\to\infty}0;\tag2$$</span> at least under suitable further assumptions (e.g. compactness of <span class="math-container">$E$</span>)?</p>
</blockquote>
<p>Note that the result holds if <span class="math-container">$E_n=E$</span> for all <span class="math-container">$n\in\mathbb N$</span>, <span class="math-container">$E$</span> is compact and <span class="math-container">$\iota_n$</span> is the identity for all <span class="math-container">$n\in\mathbb N$</span>: <a href="https://math.stackexchange.com/q/3139957/47771">https://math.stackexchange.com/q/3139957/47771</a>.</p>
<p>We may note that by contractivity, <span class="math-container">$(0,\infty)$</span> is contained in the resolvent sets of <span class="math-container">$(\mathcal D(A_n),A_n)$</span>, <span class="math-container">$n\in\mathbb N$</span>, and <span class="math-container">$(\mathcal D(A),A)$</span>. Moreover, <span class="math-container">$$\left\|R_\lambda(A_n)\right\|,\left\|R_\lambda(A)\right\|\le\frac1\lambda\;\;\;\text{for all }n\in\mathbb N.\tag3$$</span> This might be crucial.</p>
<hr>
<p><span class="math-container">$^1$</span> If <span class="math-container">$S$</span> is a set, let <span class="math-container">$B(S)$</span> denote the space of bounded functions from <span class="math-container">$S$</span> to <span class="math-container">$\mathbb R$</span> equipped with the supremum norm.</p>
<p><span class="math-container">$^2$</span> If <span class="math-container">$(\mathcal D(B),B)$</span> is a closed linear operator on a Banach space and <span class="math-container">$\lambda$</span> is a regular value of <span class="math-container">$(\mathcal D(B),B)$</span>, let <span class="math-container">$R_\lambda(B)$</span> denote the <a href="https://en.wikipedia.org/wiki/Resolvent_set" rel="nofollow noreferrer">resolvent operator</a> of <span class="math-container">$(\mathcal D(B),B)$</span>.</p>
http://www.4124039.com/q/331996 — Strong Differentiability of Spectral Projections — LR235 (http://www.4124039.com/users/140860) — 2019-05-20T09:48:41Z — 2019-05-20T09:48:41Z
<p>Let <span class="math-container">$H$</span> be a Hilbert space and <span class="math-container">$W$</span> be a dense subspace, equipped with a different norm that turns it into a Hilbert space. Let <span class="math-container">$(A(t))_{t\in[0,T]}$</span> be a family of operators in <span class="math-container">$B(W,H)$</span> (bounded operators from <span class="math-container">$W$</span> to <span class="math-container">$H$</span>) that are self-adjoint with discrete spectrum when regarded as unbounded operators in <span class="math-container">$H$</span> with domain <span class="math-container">$W$</span>. Assume that <span class="math-container">$A(t)$</span> is differentiable as a function of <span class="math-container">$t$</span> in the strong topology on <span class="math-container">$B(W,H)$</span> and that 0 is not in the spectrum of <span class="math-container">$A(t)$</span> for any <span class="math-container">$t$</span>. Does this imply that the positive spectral projection of <span class="math-container">$A(t)$</span>, i.e. <span class="math-container">$\chi_{[0,\infty)}(A(t))$</span>, is differentiable in <span class="math-container">$t$</span> with respect to the strong topology on <span class="math-container">$B(H,H)$</span>? Does anyone know a reference where a statement like this might be found?</p>
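<p>Not an answer, but one standard identity that may be a useful starting point here: under the stated assumptions (each <span class="math-container">$A(t)$</span> self-adjoint with <span class="math-container">$0\notin\sigma(A(t))$</span>), the positive spectral projection can be written through the sign function, which in turn admits a resolvent-type integral representation. Differentiability questions for the projection can then be transported to the (typically better-behaved) resolvents <span class="math-container">$(A(t)^2+s^2)^{-1}$</span>. A sketch:</p>

```latex
% For a self-adjoint operator A with 0 \notin \sigma(A), the positive
% spectral projection can be expressed via \operatorname{sgn}(A):
\chi_{[0,\infty)}(A) = \tfrac12\bigl(I + \operatorname{sgn}(A)\bigr),
\qquad
\operatorname{sgn}(A)
  = \frac2\pi \int_0^\infty A\,\bigl(A^2 + s^2\bigr)^{-1}\,\mathrm ds,
% where the integral converges in the strong operator topology
% (scalar check: \tfrac2\pi \int_0^\infty a/(a^2+s^2)\,\mathrm ds
%  = \operatorname{sgn}(a) for a \ne 0).
```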