Saturday, September 30, 2017

algebra precalculus - An incorrect method to sum the first $n$ squares which nevertheless works


Start with the identity


$\sum_{i=1}^n i^3 = \left( \sum_{i = 1}^n i \right)^2 = \left(\frac{n(n+1)}{2}\right)^2$.


Differentiate the left-most term with respect to $i$ to get


$\frac{d}{di} \sum_{i=1}^n i^3 = 3 \sum_{i = 1}^n i^2$.


Differentiate the right-most term with respect to $n$ to get



$\frac{d}{dn} \left(\frac{n(n+1)}{2}\right)^2 = \frac{1}{2}n(n+1)(2n+1)$.


Equate the derivatives, obtaining


$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$,


which is known to be correct.


Is there any neat reason why this method happens to get lucky and work for this case?


Answer



Let $f_k(n)=\sum_{i=1}^n i^k$. We all know that $f_k$ is actually a polynomial of degree $k+1$. Also $f_k$ can be characterised by the two conditions: $$f_k(x)-f_k(x-1)=x^k$$ and $$f_k(0)=0.$$ Differentiating the first condition gives $$f_k'(x)-f_k'(x-1)=k x^{k-1}.$$ Therefore the polynomial $(1/k)f_k'$ satisfies the first of the two conditions that $f_{k-1}$ does. But it may not satisfy the second. But then $(1/k)(f_k'(x)-f_k'(0))$ does. So $$f_{k-1}(x)=\frac{f_k'(x)-f_k'(0)}k.$$


The mysterious numbers $f_k'(0)$ are related to the Bernoulli numbers, and when $k\ge3$ is odd they obligingly vanish...
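
A quick symbolic spot-check of this relation, sketched in Python with sympy (the library choice is my own, not part of the original answer):

```python
# Symbolic check that f_{k-1}(x) = (f_k'(x) - f_k'(0)) / k for the power-sum polynomials.
import sympy as sp

x, i = sp.symbols('x i')

def f(k):
    # Closed-form (Faulhaber) polynomial for sum_{i=1}^x i^k.
    return sp.expand(sp.summation(i**k, (i, 1, x)))

for k in range(1, 6):
    fk_prime = sp.diff(f(k), x)
    rhs = sp.expand((fk_prime - fk_prime.subs(x, 0)) / k)
    assert sp.simplify(f(k - 1) - rhs) == 0
print("relation verified for k = 1, ..., 5")
```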


calculus - Possibility to simplify $\sum\limits_{k=-\infty}^\infty \frac{(-1)^k}{a+k} = \frac{\pi}{\sin \pi a}$



Is there any way to show that




$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}} = \frac{1}{a} + \sum\limits_{k = 1}^\infty {{{\left( { - 1} \right)}^k}\left( {\frac{1}{{a - k}} + \frac{1}{{a + k}}} \right)}=\frac{\pi }{{\sin \pi a}}} $$



where $0 < a = \dfrac{n+1}{m} < 1$.



The infinite series is equal to



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{at}}}}{{{e^t} + 1}}dt} $$



To get to the result, I split the integral at $t=0$ and use the convergent series on $(0,\infty)$ and $(-\infty,0)$ respectively:




$$\frac{1}{{1 + {e^t}}} = \sum\limits_{k = 0}^\infty {{{\left( { - 1} \right)}^k}{e^{ - \left( {k + 1} \right)t}}} $$



$$\frac{1}{{1 + {e^t}}} = \sum\limits_{k = 0}^\infty {{{\left( { - 1} \right)}^k}{e^{kt}}} $$



Since $0 < a < 1$



$$\eqalign{
& \mathop {\lim }\limits_{t \to 0} \frac{{{e^{\left( {k + a} \right)t}}}}{{k + a}} - \mathop {\lim }\limits_{t \to - \infty } \frac{{{e^{\left( {k + a} \right)t}}}}{{k + a}} = \frac{1}{{k + a}} \cr
& \mathop {\lim }\limits_{t \to \infty } \frac{{{e^{\left( {a - k - 1} \right)t}}}}{{k + a}} - \mathop {\lim }\limits_{t \to 0} \frac{{{e^{\left( {a - k - 1} \right)t}}}}{{k + a}} = - \frac{1}{{a - \left( {k + 1} \right)}} \cr} $$




A change in the indices will give the desired series.



Although I don't mind direct solutions from tables and other sources, I prefer an elaborated answer.






Here's the solution in terms of $\psi(x)$. By separating even and odd indices we can get



$$\eqalign{

& \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \sum\limits_{k = 0}^\infty {\frac{1}{{a + 2k}}} - \sum\limits_{k = 0}^\infty {\frac{1}{{a + 2k + 1}}} \cr
& \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} = \sum\limits_{k = 0}^\infty {\frac{1}{{a - 2k}}} - \sum\limits_{k = 0}^\infty {\frac{1}{{a - 2k - 1}}} \cr} $$



which gives



$$\sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right)$$



$$\sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} = \frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) + \frac{1}{a}$$



Then




$$\eqalign{
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} + \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} - \frac{1}{a} = \cr
& = \left\{ {\frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right)} \right\} - \left\{ {\frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right)} \right\} \cr} $$



But using the reflection formula one has



$$\eqalign{
& \frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right) = \frac{\pi }{2}\cot \frac{{\pi a}}{2} \cr
& \frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right) = \frac{\pi }{2}\cot \frac{{\pi \left( {a + 1} \right)}}{2} = - \frac{\pi }{2}\tan \frac{{\pi a}}{2} \cr} $$




So the series becomes



$$\eqalign{
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \frac{\pi }{2}\left\{ {\cot \frac{{\pi a}}{2} + \tan \frac{{\pi a}}{2}} \right\} \cr
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \pi \csc \pi a \cr} $$



The last being an application of a trigonometric identity.


Answer



EDIT: The classical demonstration of this is obtained by expanding in Fourier series the function $\cos(zx)$ with $x\in(-\pi,\pi)$.




Let's detail Smirnov's proof (in "Course of Higher Mathematics 2 VI.1 Fourier series") :



$\cos(zx)$ is an even function of $x$ so that the $\sin(kx)$ terms disappear and the Fourier expansion is given by :
$$\cos(zx)=\frac{a_0}2+\sum_{k=1}^{\infty} a_k\cdot \cos(kx),\ \text{with}\ \ a_k=\frac2{\pi} \int_0^{\pi} \cos(zx)\cos(kx) dx$$



Integration is easy and $a_0=\frac2{\pi}\int_0^{\pi} \cos(zx) dx= \frac{2\sin(\pi z)}{\pi z}$ while
$a_k= \frac2{\pi}\int_0^{\pi} \cos(zx) \cos(kx) dx=\frac1{\pi}\left[\frac{\sin((z+k)x)}{z+k}+\frac{\sin((z-k)x)}{z-k}\right]_0^{\pi}=(-1)^k\frac{2z\sin(\pi z)}{\pi(z^2-k^2)}$
so that for $-\pi \le x \le \pi$ :



$$
\cos(zx)=\frac{2z\sin(\pi z)}{\pi}\left[\frac1{2z^2}+\frac{\cos(1x)}{1^2-z^2}-\frac{\cos(2x)}{2^2-z^2}+\frac{\cos(3x)}{3^2-z^2}-\cdots\right]

$$



Setting $x=0$ returns your equality :
$$
\frac1{\sin(\pi z)}=\frac{2z}{\pi}\left[\frac1{2z^2}-\sum_{k=1}^{\infty}\frac{(-1)^k}{k^2-z^2}\right]
$$



while $x=\pi$ returns the $\mathrm{cotg}$ formula :



$$

\cot(\pi z)=\frac1{\pi}\left[\frac1{z}-\sum_{k=1}^{\infty}\frac{2z}{k^2-z^2}\right]
$$
(Euler used this one to find closed forms of $\zeta(2n)$)



The $\cot\ $ formula is linked to $\Psi$ via the Reflection formula :
$$\Psi(1-x)-\Psi(x)=\pi\cot(\pi x)$$



The $\sin$ formula is linked to $\Gamma$ via Euler's reflection formula :
$$\Gamma(1-x)\cdot\Gamma(x)=\frac{\pi}{\sin(\pi x)}$$
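
As a purely numerical sanity check of the identity (my own sketch; the symmetric pairing of the $+k$ and $-k$ terms, as in the question, is what makes the truncated sum converge quickly):

```python
# Numerical check: sum_{k=-N}^{N} (-1)^k / (a+k) ≈ pi / sin(pi*a) for 0 < a < 1.
import math

def alternating_sum(a, N=100000):
    total = 1.0 / a
    for k in range(1, N + 1):
        total += (-1) ** k * (1.0 / (a - k) + 1.0 / (a + k))
    return total

for a in (0.25, 0.5, 0.7):
    print(f"a={a}: series ≈ {alternating_sum(a):.10f}, "
          f"pi/sin(pi*a) = {math.pi / math.sin(math.pi * a):.10f}")
```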


calculus - Evaluate $\lim_{n \to \infty} \int_{1}^{2}\frac{\sin(nx)}{x}\,dx$



I have to compute $$\lim_{n \to \infty} \int_{1}^{2}\frac{\sin(nx)}{x}dx$$

I have tried to tackle it in different ways but I'm getting nowhere.
In particular, I used substitution to obtain
$$\lim_{n \to \infty} \int_{1}^{2}\frac{\sin(nx)}{x}dx = \lim_{n \to \infty} \int_{n}^{2n}\frac{\sin(u)}{u}du$$



But from here I'm not sure about what to do. I've found information about $\int_0^\infty\frac{\sin(nx)}{x}dx$, but I don't see if and how I could relate my integral with that one.



Any hints? Thanks


Answer



Integrate by parts, letting $u=\frac{1}{x}$ and $dv=\sin(nx)\,dx$. Then $du=-\frac{1}{x^2}\,dx$ and we can take $v=-\frac{\cos nx}{n}$.




Our integral is equal to
$$\left. -\frac{1}{x}\cdot \frac{\cos(nx)}{n}\right|_1^2 -\int_1^2 \frac{\cos nx}{nx^2}\,dx.$$
Both parts $\to 0$ as $n\to\infty$.
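
A numerical illustration of this decay (my own sketch, assuming scipy is available; the oscillatory `sin` weight is used so the quadrature stays accurate for large $n$):

```python
# The integral over [1, 2] of sin(n x)/x for growing n.
from scipy.integrate import quad

for n in (1, 10, 100, 1000):
    val, err = quad(lambda x: 1.0 / x, 1, 2, weight='sin', wvar=n)
    print(f"n = {n:5d}: integral ≈ {val: .6e}")
```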


elementary number theory - Show that $\gcd(2^m-1, 2^n-1) = 2^{\gcd(m,n)} -1$







I'm trying to figure this out:



Show that for all positive integers $m$ and $n$




$\gcd(2^m-1, 2^n-1) = 2^{\gcd(m,n)} -1$



I appreciate your help,
Thanks.



Note: $\gcd$ stands for the greatest common divisor.

limits - Calculate $\lim\limits_{x \to 0^+} e^{\frac{1}{x^2}}\sin x$

I want to investigate the value of $\lim\limits_{x \to 0^+}{e^{\frac{1}{x^2}}\sin x}$. Since the exponential tends to infinity really fast, while the sine tends to $0$ comparatively slowly, I believe the limit is infinity, but I cannot find a way to prove it. I tried rewriting the expression using the standard limit $\frac{\sin x}{x}$, as $\frac{\sin x}{x}\cdot xe^{\frac{1}{x^2}}$, but I still get an indeterminate form "$1 \cdot 0 \cdot \infty$".

calculus - How to find $\lim_{x\to\infty}{\frac{e^x}{x^a}}$?


$$\lim_{x\to\infty}{\frac{e^x}{x^a}}$$ For $a\in \Bbb R,$ find this limit.


I would say that for $a\ge 0$ the limit is equal to $\lim_{x\to\infty}{\frac{e^x}{a!\,x^0}=\infty}$ (from L'Hopital).


For $a<0$, the limit is of the form $\frac{\infty}{0}$, so the limit doesn't exist. Is this correct?


Answer



Because $e^x > x$ for all $x$, $$\lim_{x \to \infty}\frac{e^x}{x}=\lim_{x \to \infty}\frac{1}{2}\left(\frac{e^{x/2}}{x/2}\right)e^{x/2} = \infty.$$ since $e^{x/2}/(x/2) > 1,$ and $e^{x/2} \to \infty$ as $x \to \infty$.



Then, for $a > 0$, it follows that $$\lim_{x \to \infty}\frac{e^x}{x^a}=\lim_{x \to \infty}\frac{1}{a^a}\cdot \left( \frac{e^{x/a}}{x/a}\right)^a = \infty$$


since we just showed that what is in parentheses approaches $\infty$ as $x \to \infty$, so the whole limit has to go to $\infty.$ For $a \le 0$ the limit is immediately $\infty$, since then $x^{-a} \ge 1$ for $x \ge 1$ and $e^x \to \infty$.


substitution - Showing an inequality using Cauchy-Schwarz

I managed to solve the following inequality using AM-GM:
$$
\frac{a}{(a+1)(b+1)}+\frac{b}{(b+1)(c+1)}+\frac{c}{(c+1)(a+1)} \geq \frac{3}{4}

$$

provided that $a,b,c >0$ and $abc=1$.



However it was hinted to me that this could also be solved with Cauchy-Schwarz inequality but I have not been able to find a solution using it and I'm really out of ideas.

Friday, September 29, 2017

trigonometry - How to prove that $\csc x - 2\cos x \cot 2x = 2 \sin x$



My idea was...




\begin{align}
& \text{LHS} \\[10pt]
= {} & \frac 1 {\sin x} - \frac{2\cos x}{\tan2x} \\[10pt]
= {} & \frac{\tan2x-\sin2x}{\sin x\tan2x}
\end{align}



From here, I don't know how to continue. Please help! Thanks.



ps, please teach me how to use the "divide" symbol



Answer



Somewhere you'll need a double-angle formula:
$$
\tan(2x) = \frac{2\tan x}{1 - \tan^2 x}
$$
or
$$
\tan(2x) = \frac{\sin(2x)}{\cos(2x)} = \frac{2\sin x\cos x}{\cos^2 x - \sin^2 x}.
$$
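
A quick numerical spot-check of the identity at a few points (my own addition, not part of the answer):

```python
# Numerical spot-check of csc(x) - 2 cos(x) cot(2x) = 2 sin(x).
import math, random

random.seed(0)
for _ in range(5):
    x = random.uniform(0.1, 1.4)      # stay away from the zeros of sin(x) and sin(2x)
    lhs = 1 / math.sin(x) - 2 * math.cos(x) * (math.cos(2 * x) / math.sin(2 * x))
    rhs = 2 * math.sin(x)
    print(f"x = {x:.4f}:  lhs - rhs = {lhs - rhs:.2e}")
```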


derivatives - How should we interpret $\frac{d}{dt}$?

I've been using derivatives and integrals mechanically for years without really questioning the symbols. I recently watched some YouTube videos and came to understand that:$$\frac {dx}{dt}$$basically means, for some function $f(t)=x$, an infinitesimal change in $t$, or $dt$, results in an infinitesimal change in $x$, or $dx$. The ratio of those two numbers is the derivative, or the slope of the tangent line to $f(t)$ at $t$. So far, so good.



So could someone explain how to interpret this:$$\frac {d}{dt}$$I get that the bottom part is an infinitesimal change in $t$, but what is the top part? And how should I read an expression like $$\frac {d^2x}{dt^2}$$My main confusion is that the $d$ part seems to have an existence on its own, without the dimension.

Thursday, September 28, 2017

calculus - Are all limits solvable without L'Hôpital Rule or Series Expansion

Is it always possible to find the limit of a function without using L'Hôpital Rule or Series Expansion?


For example,


$$\lim_{x\to0}\frac{\tan x-x}{x^3}$$


$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$


$$\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}$$


$$\lim_{x\to0}\frac{e^x-x-1}{x^2}$$


$$\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}$$



$$\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}$$

algebra precalculus - Square roots of Complex Number.

Calculate, in the form $a+ib$, where $a,b\in \Bbb R$, the square roots of $16-30i$.






My attempt with $(a+ib)^2 =16-30i$ makes me get $a^2+b^2=16$ and $2ab=−30$. Is this correct?

integration - A direct proof for $\int_0^x \frac{- x \ln(1-u^2)}{u \sqrt{x^2-u^2}} \, \mathrm{d} u = \arcsin^2(x)$


I have been trying to evaluate $$ f(x) \equiv \int \limits_0^\infty - \ln\left(1 - \frac{x^2}{\cosh^2 (t)}\right) \, \mathrm{d} t $$ for $x \in [0,1]$ and similar integrals recently. I know that $$ \int \limits_0^\infty \frac{\mathrm{d} t}{\cosh^z (t)} = \frac{2^{z-2} \Gamma^2 (\frac{z}{2})}{\Gamma(z)} $$ holds for $\operatorname{Re} (z) > 0$, so by expanding the logarithm I found that $$ f(x) = \frac{1}{2} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} x^{2n} \, .$$ But the right-hand side is the power series of the arcsine squared, so $f(x) = \arcsin^2 (x)$.


On the other hand, the substitution $u = \frac{x}{\cosh(t)}$ in the original integral leads to the representation $$ f(x) = \int \limits_0^x \frac{- x \ln(1-u^2)}{u \sqrt{x^2-u^2}} \, \mathrm{d} u \, ,$$ for which Mathematica (or WolframAlpha if you're lucky) gives the correct result.


I would like to compute this integral without resorting to the above power series and thereby find an alternative proof for the expansion. I have tried to transform the integral into the usual form $$ \arcsin^2 (x) = \int \limits_0^x \frac{2 \arcsin(y)}{\sqrt{1-y^2}} \, \mathrm{d} u $$ and thought about using the relations $$ \arcsin(x) = \arctan\left(\frac{x}{\sqrt{1-x^2}}\right) = 2 \arctan\left(\frac{x}{1+\sqrt{1-x^2}}\right) \, , $$ but to no avail. Maybe the solution is trivial and I just cannot see it at the moment, maybe it is not. Anyway, I would be grateful for any ideas or hints.


Answer



I have finally managed to put all the pieces together, so here's a solution that does not use the power series:


Let $u = x v$ to obtain $$ f(x) = \int \limits_0^1 \frac{- \ln(1 - x^2 v^2)}{v \sqrt{1-v^2}} \, \mathrm{d} v \, . $$ Now we can differentiate under the integral sign (justified by the dominated convergence theorem) and use the substitution $v = \sqrt{1 - w^2}\, .$ Then the derivative is given by \begin{align} f'(x) &= 2 x \int \limits_0^1 \frac{v}{(1-x^2 v^2) \sqrt{1-v^2}} \, \mathrm{d} v = 2 x \int \limits_0^1 \frac{\mathrm{d} w }{1-x^2 + x^2 w^2} \\ &= \frac{2}{\sqrt{1-x^2}} \arctan \left(\frac{x}{\sqrt{1-x^2}}\right) = \frac{2 \arcsin (x)}{\sqrt{1-x^2}} \end{align} for $x \in (0,1)$. Since $f(0)=0 \, ,$ integration yields $$ f(x) = f(0) + \int \limits_0^x \frac{2 \arcsin (y)}{\sqrt{1-y^2}} \, \mathrm{d} y = \arcsin^2 (x)$$ for $x \in [0,1]$ as claimed.


Wednesday, September 27, 2017

linear algebra - Relation between rank and number of distinct eigenvalues





$3 \times 3$ matrix B has eigenvalues 0, 1 and 2. Find the rank of B.



I understand that $0$ being an eigenvalue implies that rank of B is less than 3.



The solution is here (right at the top). It says that rank of B is 2 because the number of distinct non zero eigenvalues is 2.




This thread says that the only information that the rank gives is the about the eigenvalue $0$ and its corresponding eigenvectors.



What am I missing?






Edit

What I am really looking for is an explicit answer to this:





"Is the rank of a matrix equal to the number of distinct non zero eigenvalues of that matrix?"
Yes/No/Yes for some special cases



Answer



Take for example the matrix $A= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\0 & 0 & 0 \end{bmatrix}$, its rank is obviously $2$, eigenvalues are distinct and are $0,1,2$.
We have a theorem which says that if the eigenvalues are distinct then their eigenvectors must be linearly independent, but the rank of the matrix is $n-1$ if one of these eigenvalues is zero.



Edit after question edit



To answer more generally we need Jordan forms.




Let $A$ be $n$-dimensional square matrix with $n-k$ non-zero eigenvalues
(I don't make here a distinction between all different values and repeated- if all are distinct then there are $n-k$ values , if some are with multiplicities then summarize their number with multiplicities to make full sum $n-k$) and $k$ zero eigenvalues.



Express $A$ through similarity with the Jordan normal form $A=PJP^{-1}$.
The matrix $J$ can be presented as composition $J= \begin{bmatrix} J_n & 0 \\0 & J_z \end{bmatrix}$ where $J_n$ is a square part of Jordan matrix with $n-k$ non-zero values (which are eigenvalues) on diagonal and $J_z$ is a square part of Jordan matrix with $k$ zero values on its diagonal.



It is clear that because on the diagonal of upper-triangular matrix $J_n$ are non-zero values and the determinant as the product of these values is non-zero so $J_n$ has full rank i.e. $n-k$.



The rank of $J_z$ depends on the detailed form of this matrix and it can be from $0$ (when $J_z=0$) to $k-1$ ( see examples in comments). It can't be $k$ because $J_z$ is singular.



Therefore the final rank of $J$ can be from $n-k$ to $n-1$.
Similarity preserves rank so the rank of $A$ can be also from $n-k$ to $n-1$.
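
A small numerical illustration of these rank possibilities (my own sketch, assuming numpy):

```python
# Rank vs. eigenvalues, following the example in the answer.
import numpy as np

A = np.diag([1.0, 2.0, 0.0])
print(np.linalg.matrix_rank(A), np.linalg.eigvals(A))    # rank 2; eigenvalues 1, 2, 0

# A Jordan block J_z with k = 2 zero eigenvalues can still have rank 1,
# illustrating that the zero eigenvalues alone do not determine the rank:
Jz = np.array([[0.0, 1.0],
               [0.0, 0.0]])
print(np.linalg.matrix_rank(Jz), np.linalg.eigvals(Jz))  # rank 1; eigenvalues 0, 0
```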



geometry - Length of diagonal compared to the limit of lengths of stair-shaped curves converging to it

[Image: a staircase curve along the diagonal of a square, refined repeatedly so that it appears to converge to the diagonal while its total length stays constant.]


I saw this post and I am stunned. I think this is fallacious, but I can't figure out where the fallacy is.


If you know the fallacy, please post an answer.

summation - Proof by induction that $\sum_{i=1}^n \frac{i}{(i+1)!}=1- \frac{1}{(n+1)!}$

Prove via induction that $\sum_{i=1}^n \frac{i}{(i+1)!}=1- \frac{1}{(n+1)!}$.
I'm having a very difficult time with this proof; I have done pages of work but I keep ending up with $\frac{1}{k+2}$. I'm not sure when to apply the induction hypothesis or how to get the result $1- \frac{1}{(n+2)!}$. Please help!
Thanks guys, you're the greatest!

geometry - Two paradoxes: $\pi = 2$ and $\sqrt 2 = 2$





Can anyone explain how to properly resolve two paradoxes in this YouTube video by James Tanton?


Answer



First: It is not a paradox: it is just wrong. The reasoning is wrong.


About $\pi = 2$ he says: "Well clearly we are approaching the diameter of the circle". That is a statement that he doesn't prove and which is false.


The same problem arises with the $\sqrt{2} = 2$ when he says: "Well clearly this geometric construction approaches the diagonal of the square". How does he know that?


All that this proves is that we have to be careful when we talk about finding limits from purely looking at pictures.


"Just because the sun sets in the west doesn't mean that it has to rise in the west as well.



Edit: There are plenty of examples of proofs that seem right, but turn out to be wrong when we go over them in more detail. Take for example the proof that for complex numbers $$ 1 = \sqrt{1} = \sqrt{(-1)\cdot(-1)} = \sqrt{-1}\sqrt{-1} = i\cdot i = -1$$ Here again, the argument is invalid because the rule $\sqrt{ab} = \sqrt{a}\sqrt{b}$ doesn't hold for complex numbers.


Tuesday, September 26, 2017

combinatorics - Hockey-Stick Theorem for Multinomial Coefficients



Pascal's triangle has this famous hockey stick identity.
$$ \binom{n+k+1}{k}=\sum_{j=0}^k \binom{n+j}{j}$$
I wonder what the form would be for multinomial coefficients?


Answer



$$\binom{a_1+a_2+\cdots+a_t}{a_1,a_2,\cdots,a_t}=\sum_{i=2}^t \sum_{j=1}^{i-1} \sum_{k=1}^{a_i} \binom{ a_1+a_2+\cdots+a_{i-1}+k }{a_1,a_2,\cdots,a_j-1,\cdots,a_{i-1},k }$$


Existence of complex anti-derivative

So I'm currently reading about complex anti-derivatives. I was given these two questions:



Problem 1:
Does $f(z)= z^n, \quad n\in\mathbb Z$, have an anti-derivative on $\mathbb C\setminus \{0\}$?




Solution:



Case 1.1: $n\neq -1$. We find $F(z)=\frac{1}{n+1}z^{n+1}$ to be an anti-derivative. (Since $f(z)$ is continuous on $\mathbb C\setminus \{0\}$ we can just integrate it. We also see $F'(z)=f(z)$, so we are good.)



Case 1.2: $n=-1$ With $\gamma(t)=e^{2\pi i t}, \quad t\in[0,1]$ we find $\int_\gamma \frac{1}{z} dz = 2\pi i$ so, apparently this does not have an anti-derivative because of Cauchy's integral theorem.



Problem 2:
Does $f(z)= z^n, \quad n\in\mathbb Z$, have an anti-derivative on $\mathbb C\setminus (-\infty, 0]$?



Case 2.1: $n\neq-1$ Basically the same argumentation as in Case 1.1.




Case 2.2: $n=-1$ Since $Log(z)$ is continuous on $\mathbb C\setminus (-\infty, 0]$ we find $F(z)=Log(z)$ and we verify with $F'(z)=f(z)$.



Questions:



Question 1: Apparently we use Cauchy's integral theorem to show that there isn't an anti-derivative in Case 1.2, but how exactly do we use it?



At the moment I just think of it like this: the anti-derivative of $\frac{1}{z}$ would be $Log(z)$, but $Log(z)$ is not continuous on $\mathbb C \setminus \{0\}$, so it wouldn't fulfill $F'(z)=f(z)$. But this actually just tells me that $Log(z)$ isn't the anti-derivative; it doesn't tell me that there isn't any at all.



Question 2: In Case 1.2 we get $\int_\gamma f(z)\,dz=2\pi i$, so what exactly does this tell us? Doesn't it mean we actually evaluated the integral? Is it possible to evaluate an integral without it having an anti-derivative?

definition - What is the square root of complex number i?


The square root of the number $-1$ is defined as $i$; then what is the square root of the complex number $i$? I would say it should be $j$, as logic suggests, but it's not defined in quaternion theory in that way. Am I wrong?


EDIT: my question is rather related to the nomenclature of the definition: while the square root of $-1$ is defined as $i$, why is $j$ not defined as the square root of $i$, and $k$ as the square root of $j$, and do those numbers have deeper meanings and uses, as in quaternion theory?


Answer




Unfortunately, this cannot be answered definitively. In fact, every non-zero complex number has two distinct square roots, because $-1\ne1,$ but $(-1)^2=1^2.$ When we are discussing real numbers with real square roots, we tend to choose the nonnegative value as "the" default square root, but there is no natural and convenient way to do this when we get outside the real numbers.


In particular, if $j^2=i,$ then putting $j=a+bi$ where $a,b\in\Bbb R,$ we have $$i=j^2=(a+bi)^2=a^2-b^2+2abi,$$ so we need $0=a^2-b^2=(a+b)(a-b)$ and $2ab=1.$ Since $0=(a+b)(a-b),$ then $a=\pm b.$ If we had $a=-b,$ then we would have $1=2ab=-2b^2,$ but this is impossible, since $b$ is real. Hence, we have $a=b,$ so $1=2ab=2b^2,$ whence we have $b=\pm\frac1{\sqrt{2}},$ and so the square roots of $i$ are $\pm\left(\frac1{\sqrt{2}}+\frac1{\sqrt{2}}i\right).$


I discuss in my answer here that $i$ is defined as one of two possible numbers in the complex plane whose square is $-1$ (it doesn't actually matter which, as far as the overall structure of the complex numbers is concerned). Once we've chosen our $i,$ though, we have fixed which "version" of the complex numbers we're talking about. We could then pick a canonical square root of $i$ (and call it $j$), but there's really no point. Once we've picked our number $i,$ we have an algebraically closed field, meaning (incredibly loosely) that we have all the numbers we could want already there, so we can't (or at least don't need to) add more, and there's no particular need to give any others of them special names.
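
A quick numerical confirmation of the computed square roots of $i$ (my own addition):

```python
# Numerical confirmation that the square roots of i are ±(1/sqrt(2) + i/sqrt(2)).
import cmath, math

r = cmath.sqrt(1j)
print(r)                                             # ≈ 0.7071 + 0.7071j
print(complex(1 / math.sqrt(2), 1 / math.sqrt(2)))   # the value derived above
print(r**2)                                          # ≈ 1j
```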


Monday, September 25, 2017

linear algebra - Determinant of a rank $1$ update of a scalar matrix, or characteristic polynomial of a rank $1$ matrix


This question aims to create an "abstract duplicate" of numerous questions that ask about determinants of specific matrices (I may have missed a few):


The general question of this type is



Let $A$ be a square matrix of rank$~1$, let $I$ be the identity matrix of the same size, and $\lambda$ a scalar. What is the determinant of $A+\lambda I$?



A clearly very closely related question is




What is the characteristic polynomial of a matrix $A$ of rank$~1$?



Answer



The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of any square matrix of size$~n$. So the answer to the second question is



The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.



The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore




The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n>1$ is $X(X-c)$, where $c=\tr(A)$. In particular a rank$~1$ square matrix $A$ of size $n>1$ is diagonalisable if and only if $\tr(A)\neq0$.



See also this question.


For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)



For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.



In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.
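
A numerical spot-check of both formulas (my own sketch, assuming numpy; the random rank-one matrix is an arbitrary choice):

```python
# det(A + λI) = λ^{n-1}(λ + tr A) for a random rank-one A, and the a/b matrix formula.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 5, 2.3
u, v = rng.normal(size=n), rng.normal(size=n)
A = np.outer(u, v)                                   # a rank-one matrix
print(np.linalg.det(A + lam * np.eye(n)),
      lam ** (n - 1) * (lam + np.trace(A)))

a, b = 4.0, 1.5
M = b * np.ones((n, n)) + (a - b) * np.eye(n)        # a on the diagonal, b elsewhere
print(np.linalg.det(M), (a - b) ** (n - 1) * (a + (n - 1) * b))
```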


linear algebra - Why does Friedberg say that the role of the determinant is less central than in former times?



I am taking a proof-based introductory course to Linear Algebra as an undergrad student of Mathematics and Computer Science. The author of my textbook (Friedberg's Linear Algebra, 4th Edition) says in the introduction to Chapter 4:




The determinant, which has played a prominent role in the theory of linear algebra, is a special scalar-valued function defined on the set of square matrices. Although it still has a place in the study of linear algebra and its applications, its role is less central than in former times.




He even sets up the chapter in such a way that you can skip going into detail and move on:





For the reader who prefers to treat determinants lightly, Section 4.4 contains the essential properties that are needed in later chapters.




Could anyone offer a didactic and simple explanation that refutes or asserts the author's statement?


Answer



Friedberg is not wrong, at least on a historical standpoint, as I am going to try to show it.



Determinants were discovered in the second half of the 18th century, and had a rather rapid spread. Their discoverer, Cramer, used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). Mathematicians of the next two generations discovered properties of determinants that now, with our vision, we mostly express in terms of matrices.




Cauchy has extended the use of determinants as explained in the very nice article by Hawkins referenced below :




  • around 1815, he discovered the multiplication rule (rows times columns) of two determinants. This is typical of a result that has been completely remodeled: nowadays, this rule is for the multiplication of matrices, and determinants' multiplication is restated as the homomorphism rule $\det(A \times B)= \det(A)\det(B)$.


  • around 1825, he discovered eigenvalues "associated with a symmetric determinant" and established the important result that these eigenvalues are real; this discovery has its roots in astronomy, in connection with Sturm, explaining the term "secular values" he attached to them (see, for example, this).




Matrices made a shy apparition in the mid-19th century (in England); "matrix" is a term coined by Sylvester, see here. I strongly advise taking a look at his elegant style in his Collected Papers.



Together with his friend Cayley, they can rightly be named the founding fathers of linear algebra, with determinants as permanent reference. Here is a major quote of Sylvester:




"I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent".



The characteristic polynomial is expressed as the famous $\det(A-\lambda I)$, the "resultant", invented by Sylvester (giving a characteristic condition for two polynomials to have a common root), is a determinant as well, etc.



Let us repeat it : for a mid-19th century mathematician, a square array of numbers has necessarily a value (its determinant): it cannot have any other meaning. If it is a rectangular array, the numbers attached to it are the determinants of submatrices that can be "extracted" from the array.



The identification of "Linear Algebra" as a full (and new) part of Mathematics is mainly due to the German School (from 1870 till the 1930's with Van der Waerden's book : "Moderne Algebra" that is still very readable). I don't cite the names, there are too many of them. An example among many others of the german dominance: the germenglish word "eigenvalue". The word "kernel" could have remained the german word "kern" that appears around 1900 (see this site).



The triumph of Linear Algebra is rather recent (mid-20th century), "triumph" meaning that Linear Algebra has now found a very central place. Determinants in all that? Maybe the biggest blade in this Swiss army knife, but no more; another invariant (this term would deserve a long paragraph by itself), the trace, would be another blade, not the smallest.




A special treatment should be made to the connection between geometry and determinants, which has paved the way, here in particular, to linear algebra. Some cornerstones:




  • the development of projective geometry, in its analytical form, in the 1850s. This development has led in particular to the modern study of conic sections described by a quadratic form, that can be written under an all-matricial expression $X^TMX=0$ where $M$ is a symmetrical $3 \times 3$ matrix
    Different examples of the emergence of new trends :



    a) the concept of rank: for example, a pair of straight lines is a conic section whose matrix has rank 1. The "rank" of a matrix used to be defined in an indirect way as the "dimension of the largest nonzero determinant that can be extracted from the matrix". Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... at the cost of a little more abstraction.



    b) the concept of linear transformations and duality arising from geometry: $X=(x,y,t)^T\rightarrow U=MX=(u,v,w)$ between points $(x,y)$ and lines with equations $ux+vy+w=0$. More precisely, the tangential description, i.e., the constraint on the coefficients $U^T=(u,v,w)$ of the tangent lines to the conical curve has been recognized as associated with $M^{-1}$ (under the condition $\det(M) \neq 0$!), due to relationship





$$X^TMX=X^TMM^{-1}MX=(MX)^T(M^{-1})(MX)=U^TM^{-1}U=0$$
$$=\begin{pmatrix}u&v&w\end{pmatrix}\begin{pmatrix}A & B & D \\ B & C & E \\ D & E & F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0$$



whereas, in XIXth century, it was classical to write the previous quadratic form as :



$$U^TM^{-1}U=\begin{vmatrix}a&b&d&u\\b&c&e&v\\d&e&f&w\\u&v&w&0\end{vmatrix}=0$$



as a "bordered determinant" directly with matrix $M$.




(see the excellent lecture notes (http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html)). It is to be said that the idea of linear transformations, especially orthogonal transformations arose even earlier in the framework of the theory of numbers (quadratic representations).



Remark: the way the latter equalities have been written use matrix algebra notations and rules that were unknown in the 19th century, with the notable exception of Grassmann's "Ausdehnungslehre", whose ideas were too ahead of his time (1844) to have any influence.



c) the concept of eigenvector/eigenvalue, initially motivated by the determination of "principal axes" of conics and quadrics.




  • the very idea of "geometric transformation" (more or less born with Klein circa 1870) which, when it is a linear transformation, is associated with an array of numbers. A matrix, of course, is much more that an array of numbers... But think for example to the persistence of the expression "table of direction cosines" (instead of "orthogonal matrix") as can be found for example still in the 2002 edition of Analytical Mechanics by A.I. Lorrie.


  • and quaternions, etc...





References:



I just discovered, 3 years later, a rather similar question with a very complete answer by Denis Serre, a French specialist in the domain of matrices:
https://mathoverflow.net/q/35988/88984



The article by Thomas Hawkins : "Cauchy and the spectral theory of matrices", Historia Mathematica 2, 1975, 1_29.



See also (http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf)




An important bibliography is to be found in (http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html).



See also a good paper by Nicholas Higham : (http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf)



For conic sections and projective geometry, see a) this excellent chapter of lectures of the University of Vienna (see the other chapters as well) : (https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf). See as well : (maths.gla.ac.uk/wws/cabripages/conics/conics0.html).



Don't miss the following very interesting paper about various kinds of useful determinants : https://arxiv.org/pdf/math/9902004.pdf



See also this




Interesting precisions on determinants in the answers here.



A fundamental book on "The Theory of Determinants" in 4 volumes has been written by Thomas Muir : http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF (years 1906, 1911, 1922, 1923) for the last volumes or, for all of them https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf. It is very interesting to take random pages and see how the determinant-mania has been important, especially in the second half of the XIXth century, with results of uneven quality. Matrices appear at some places with the double bar convention that lasted a very long time. Matrices are mentionned here and there, rarely to their advantage...


Complex analysis on real integral



Use complex analysis to compute the real integral




$$\int_{-\infty}^\infty \frac{dx}{(1+x^2)^3}$$





I think I want to consider this as the real part of



$$\int_{-\infty}^\infty \frac{dz}{(1+z^2)^3}$$



and then apply the residue theorem. However, I am not sure how that is the complex form, how the integral above is its real part, or how to apply the theorem.



Answer



Define a contour consisting of a semicircle of radius $R$ in the upper half plane, plus the segment of the real line from $-R$ to $R$.



Then let R get to be arbitrarily large.



There is one pole, of order $3$, at $z = i$ inside the contour.



Cauchy integral formula says:



$$f^{(n)}(a) = \frac {n!}{2\pi i}\oint \frac {f(z)}{(z-a)^{n+1}}\ dz$$




$$\oint \frac {\frac {1}{(z+i)^3}}{(z-i)^3}\ dz = \pi i \frac{d^2}{dz^2} \frac {1}{(z+i)^3} \text{ evaluated at } z=i.$$



Next you will need to show that the integral along the semicircular arc goes to $0$ as $R$ gets large.



$$z = R e^{it}, \quad dz = iR e^{it}\,dt\\
\displaystyle\int_0^{\pi} \frac {iRe^{it}}{(R^2 e^{2it} + 1)^3} \ dt\\
\left|\frac {iRe^{it}}{(R^2 e^{2it} + 1)^3}\right| \le \frac{R}{(R^2-1)^3} \quad \text{for } R>1\\
\left|\int_0^{\pi} \frac {iRe^{it}}{(R^2 e^{2it} + 1)^3} \ dt\right| \le \int_0^{\pi} \frac{R}{(R^2-1)^3}\ dt = \frac{\pi R}{(R^2-1)^3}\\
\lim_{R\to \infty}\frac{\pi R}{(R^2-1)^3} = 0$$
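
For reference, the residue computation above works out to $3\pi/8$; here is a numerical cross-check (my own sketch, assuming scipy):

```python
# The residue computation gives 3*pi/8; numerical cross-check.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 1.0 / (1.0 + x**2) ** 3, -np.inf, np.inf)
print(val, 3 * np.pi / 8)    # both ≈ 1.1780972451
```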



Proving Gamma function relation




Prompt.
Using integration by parts, show that the gamma function $$\Gamma (t) = \int_0^\infty x^{t-1} e^{-x} \, dx $$ satisfies the relation $t\Gamma (t) = \Gamma (t+1)$ for $t > 0$.



My solution. Let $ u = e^{-x}$, $du = - e^{-x}\,dx$, $v = \frac 1t x^t$ , $dv = x^{t-1} \,dx$. Then after applying integration by parts, we get $$\Gamma (t) = \frac 1t x^t e^{-x} + \frac 1t \int_0^\infty x^t e^{-x} \, dx$$ and subsequently $t\Gamma(t) = x^t e^ {-x} + \int_0^\infty x^t e^{-x} \, dx $ . Now, $\Gamma(t + 1) = \int_0^\infty x^t e^{-x} \, dx$.



We can rewrite $t\Gamma (t) = \Gamma'(t) + \Gamma (t + 1)$ .



Am I doing something wrong? Am I on the right track?


Answer



I'm sorry to say, but those are the wrong substitutions. By integration by parts, set$$u=x^t\qquad\qquad\mathrm du=tx^{t-1}\,\mathrm dx$$And therefore, we have$$\begin{align*}\Gamma(z+1) & =\int\limits_{0}^{\infty}e^{-t}t^z\,\mathrm dt\\ & =-t^ze^{-t}\,\biggr\rvert_{0}^{\infty}+\int\limits_{0}^{\infty}ze^{-t}t^{z-1}\,\mathrm dt\end{align*}$$The first term evaluates to zero. You can see this by taking the limit, and then using L'Hopital's rule. Therefore, we're left with$$\begin{align*}\Gamma(z+1) & =z\int\limits_{0}^{\infty}e^{-t}t^{z-1}\,\mathrm dt\\ & =z\Gamma(z)\end{align*}$$
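
A quick numerical check of the recursion (my own sketch, assuming scipy):

```python
# Numerical check of Gamma(t+1) = t * Gamma(t) at a few points.
from scipy.special import gamma

for t in (0.5, 1.7, 3.2):
    print(t, gamma(t + 1), t * gamma(t))
```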



Sunday, September 24, 2017

calculus - A closed form of $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)$



I am looking for a closed form of the following series




\begin{equation}
\mathcal{I}=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)

\end{equation}




I have no idea how to answer this question. Wolfram Alpha gives me the result:




$$\mathcal{I}\approx2.7415567780803776$$




Could anyone here please help me to obtain the closed form of the series, preferably (if possible) using elementary methods (high school methods)? Any help would be greatly appreciated. Thank you.



Answer



You can use the identity given by the Euler Beta function
$$\int_{0}^{1}x^{a-1} (1-x)^{b-1} \,dx=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$
to state:
$$S=\sum_{k=1}^{+\infty}\frac{(-1)^{k+1}}{k!}\Gamma(k/2)^2=\sum_{k=1}^{+\infty}\frac{(-1)^{k-1}}{k}\int_{0}^{1}\left(x(1-x)\right)^{k/2-1}\,dx $$
and by switching the series and the integral:
$$ S = \int_{0}^{1}\frac{\log(1+\sqrt{x(1-x)})}{x(1-x)}dx = 2\int_{0}^{1/2}\frac{\log(1+\sqrt{x(1-x)})}{x(1-x)}dx,$$
$$ S = 2\int_{0}^{1/2}\frac{\log(1+\sqrt{1/4-x^2})}{1/4-x^2}dx = 4\int_{0}^{1}\frac{\log(1+\frac{1}{2}\sqrt{1-x^2})}{1-x^2}dx,$$
$$ S = 4\int_{0}^{\pi/2}\frac{\log(1+\frac{1}{2}\sin\theta)}{\sin\theta}d\theta.$$
Now Mathematica gives me $\frac{5\pi^2}{18}$ as an explicit value for the last integral, but probably we are on the wrong path, and we only need to exploit the identity

$$\sum_{k=1}^{+\infty}\frac{1}{k^2\binom{2k}{k}}=\frac{\pi^2}{18}$$
that follows from the Euler acceleration technique applied to the $\zeta(2)$-series. The other "piece" (the $U$-piece in the Marty Cohen's answer) is simply given by the Taylor series of $\arcsin(z)^2$. More details to come.






As a matter of fact, both approaches lead to an answer.
The (Taylor) series approach, as Bhenni Benghorbal shows below, leads to the identity:
$$\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k!}\Gamma^2\left(\frac{k}{2}\right)x^k= 2 \arcsin \left( x/2 \right) \left(\pi - \arcsin \left( x/2\right) \right),\tag{1}$$
while the integral approach, as Achille Hui pointed out in the comments, leads to:
$$\begin{eqnarray*}\int_{0}^{\pi/2}\frac{\log(1+\frac{1}{2}\sin\theta)}{\sin\theta}\,d\theta&=&\int_{0}^{1}\log\left(1+\frac{t}{1+t^2}\right)\frac{dt}{t}\\&=&\int_{0}^{1}\frac{\log(1-t^3)-\log(1-t)-\log(1+t^2)}{t}\,dt\\&=&\int_{0}^{1}\frac{-\frac{2}{3}\log(1-t)-\frac{1}{2}\log(1+t)}{t}\,dt\\&=&\frac{1}{6}\sum_{k=1}^{+\infty}\frac{4+3(-1)^k}{k^2}=\frac{1}{6}\left(4-\frac{3}{2}\right)\zeta(2)=\frac{5\pi^2}{72}.\end{eqnarray*}\tag{2}$$




Thanks to both since now this answer may become a reference both for integral-log-ish problems like $(2)$ and for $\Gamma^2$-series like $(1)$.






Update 14-06-2016. I just discovered that this problem can also be solved by computing
$$ \int_{-1}^{1} x^n\, P_n(x)\,dx, $$
where $P_n$ is a Legendre polynomial, through Bonnet's recursion formula or Rodrigues' formula. Really interesting.
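
A numerical check that the series indeed equals $\frac{5\pi^2}{18}\approx 2.7415567780803776$, matching the value quoted in the question (my own sketch, assuming mpmath):

```python
# The series evaluates numerically to 5*pi^2/18.
from mpmath import mp, gamma, factorial, pi

mp.dps = 30
S = sum((-1) ** (k + 1) * gamma(mp.mpf(k) / 2) ** 2 / factorial(k) for k in range(1, 200))
print(S)
print(5 * pi ** 2 / 18)
```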


calculus - Limits without L'Hopitals Rule

Evaluate the limit without using L'hopital's rule



a)$$\lim_{x \to 0} \frac {(1+2x)^{1/3}-1}{x} $$




I got the answer as $l=\frac 23$... but I used L'hopitals rule for that... How can I do it another way?



b)$$\lim_{x \to 5^-} \frac {e^x}{(x-5)^3}$$



$l=-\infty$



c)$$\lim_{x \to \frac {\pi} 2} \frac{\sin x}{\cos^2x} - \tan^2 x$$



I don't know how to work with this at all




So basically I was able to find most of the limits through L'Hopitals Rule... BUT how do I find the limits without using his rule?

linear algebra - Looking for insightful explanation as to why right inverse equals left inverse for square invertible matrices



The simple proof goes:



Let B be the left inverse of A, C the right inverse.



C = (BA)C = B(AC) = B



This proof relies on associativity yet does not give any insight as to why this surprising fact about matrices is true.




AC means a bunch of linear combinations of columns of A.
CA means a bunch of linear combinations of rows of A. Completely different numbers get multiplied in each case.



The proof above is just a bunch of syntactical steps not having much to do with matrices directly, I cannot see how CA=AC=I.



Can anyone shed some light on this?


Answer



In fact, this isn't about matrices per se, but about inverses in general, and perhaps more specifically about inverses of functions. The same argument works for any function that has a left and a right inverse (and for elements of a monoid or ring, though these can also be interpreted as "functions" via an appropriate setting).




If you really want to try to understand the proof in terms of "meaning", then you should not think of matrices as a bunch of columns or a bunch of numbers, but rather as functions, i.e., as linear transformations.



Say $A$ is an $m\times n$ matrix; then $A$ is "really" a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$: the columns of $A$ are the images of the standard basis vectors of $\mathbb{R}^n$ under the transformation. If $B$ is a right inverse of $A$, then $B$ is $n\times m$, and $AB$ acts like the identity transformation on $\mathbb{R}^m$. In particular, $AB$ has to be onto, so the rank of $AB$ is $m$; since the rank of $AB$ is at most the rank of $A$, then the rank of $A$ has to be $m$; since the rank of $A$ is at most $\min(m,n)$, then $m\leq n$. This tells us that $A$ is onto (full rank), and that it has at least as many columns as it has rows.



If $C$ is a left inverse of $A$, then $C$ must be an $n\times m$ matrix, and $CA$ acts like the identity on $\mathbb{R}^n$. Because $CA$ is one-to-one, then $A$ has to be one-to-one. In particular, its nullspace is trivial. That means that it cannot have more columns than rows (that would require a nontrivial nullspace, by the Rank-Nullity Theorem); since it has at least as many columns as it has rows, $A$ has exactly the same number of columns as rows, so $m=n$.



Moreover, $A$ is now one-to-one and onto. So it is in fact bijective. So it is in fact invertible. Invertible matrices have unique inverses by definition, so $B$ and $C$ have to be equal: they have no choice in the matter. It isn't about the details of the "recipe", it's about the properties of functions: once you have a function that is one-to-one and onto, it has an inverse and the inverse is unique.







I honestly think that trying to puzzle out the details of the "recipe" is not insightful here: it is staring at the bark of a single tree instead of trying to appreciate the forest.



But if you must (and I really think you shouldn't), then you want to realize that $AC$ and $CA$ are talking in a different language: the columns of $A$ specify a basis, $\gamma$, and tell you how to express the elements of $\gamma$ in terms of the standard basis $\beta$; it provides a "translation" from $\gamma$ to $\beta$. That is, $A=[\mathrm{Id}]_{\gamma}^{\beta}$. The inverse, $C$, explains how to express the elements of the standard basis $\beta$ in terms of the vectors in $\gamma$, $C=[\mathrm{Id}]_{\beta}^{\gamma}$.
$AC$ talks in the language of the standard basis $\beta$, $CA$ talks in the language of $\gamma$. Then it becomes clear why "the same recipe" (not really) should work. It's not really the same recipe, because in $CA$ you "hear" vectors in $\gamma$, translate them into $\beta$, and then translate them back into $\gamma$. But in $AC$ you "hear" vectors in $\beta$, translate them into $\gamma$, and then back into $\beta$. The "translation recipes" are the same, whether you do $\beta\to\gamma$ first or you do it second (translating English to Russian is the same, whether you are translating something written originally in English into Russian, or something that was translated into English first).



$A$ establishes a bijection between the vectors expressed in terms of $\gamma$, and the vectors expressed in terms of $\beta$. $C$ is the bijection going "the other way". Both $AC$ and $CA$ are the identity, but they are the identity of slightly different structures: $AC$ is the identity of "$\mathbb{R}^n$-with-basis-$\beta$", and $CA$ is the identity of "$\mathbb{R}^n$-with-basis-$\gamma$." Only when you forget that $AC$ and $CA$ are being interpreted as matrices on vector-spaces-with-basis do you realize that in fact you have the same "formula" for the expressions (i.e., that both matrices are "the" identity matrix).



If you want to stare at the bark so intently that you must think of matrices in terms of "linear combinations of rows" or "linear combinations of columns", you are going to miss a lot of important properties for matrices. Matrices are really functions; multiplication of matrices is really composition of functions. They aren't a bunch of vectors or a bunch of numbers thrown into a box that you multiply based on some ad hoc rule. They are functions.



Compare how easy, and, yes, intuitive, it is to realize that matrix multiplication is associative because it is just "composition of functions", vs. figuring it out by expanding the double summations of an entry-by-entry expression of $(AB)C$ and $A(BC)$: you are going in the wrong direction for 'intuitive' understanding. Staring at those summations will not tell you why multiplication is associative, it will only tell you it is. Trying to puzzle out why a right inverse is also a left inverse in terms of "adding columns" and "adding rows" is not going to help either: you need to think of it in terms of functions.
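
A concrete numerical illustration of the uniqueness (my own sketch, assuming numpy; the random matrix is arbitrary):

```python
# For an invertible A, a right inverse and a left inverse computed independently coincide.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))

right = np.linalg.solve(A, np.eye(4))        # C with A @ C = I
left = np.linalg.solve(A.T, np.eye(4)).T     # B with B @ A = I (solve A^T B^T = I)
print(np.allclose(right, left))                                             # True
print(np.allclose(A @ right, np.eye(4)), np.allclose(left @ A, np.eye(4)))  # True True
```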



sequences and series - $1 + 1 + 1 + \cdots = -\frac{1}{2}$




The formal series



$$
\sum_{n=1}^\infty 1 = 1+1+1+\dots=-\frac{1}{2}
$$



comes from the analytical continuation of the Riemann zeta function $\zeta (s)$ at $s=0$ and it is used in String Theory. I am aware of formal proofs by Prof. Terry Tao and Wikipedia, but I did not fully understand them. Could someone provide an intuitive proof or comment on why this should be true?


Answer



Let me walk you through the Riemann zeta computation. Call $S$ your original sum. Let's regulate the sum as follows:

$$S_s \equiv \sum_{n \geq 1} \frac{1}{n^s}.$$
Fix $n \geq 1.$ Then $n^{-s} \rightarrow 1$ as $s \rightarrow 0,$ so if we can assign a meaning to $S_s$ as $s \rightarrow 0$, we can interpret $S$ as this limit.



Now, for $s > 1$ the above sum exists and it equals the Riemann zeta function, $\zeta(s).$ $\zeta$ has a pole at $s=1$, which is just the statement that the (non-regulated) sum $\sum 1/n$ diverges. But we can analytically continue $\zeta$ if we take care to avoid this pole. Then we can Taylor expand around $s=0$



$$\zeta(s) = -\frac{1}{2} - \frac{1}{2} \ln(2\pi) s + \ldots$$
which implies that



$$S = \lim_{s \rightarrow 0} S_s = -\frac{1}{2}.$$
(The equality sign is to be understood in the regulated sense.)




There are many other ways to regulate the sum. You can e.g. suppress the tail as $\sim \exp(-\epsilon n)$, but then you need to add a counterterm to absorb a pole as $\epsilon \rightarrow 0.$
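
For what it's worth, the regulated value can be checked numerically against the analytic continuation itself (my own sketch, assuming mpmath):

```python
# zeta(0) and the Taylor expansion used above.
from mpmath import mp, zeta, log, pi

mp.dps = 25
print(zeta(0))                                   # -0.5
s = 0.01
print(zeta(s), -0.5 - 0.5 * log(2 * pi) * s)     # actual value vs first-order expansion
```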


Saturday, September 23, 2017

sequences and series - Formula for the following sum?

I simply wonder whether there exists a formula for the sum $S_n = \sum_{k=1}^{n} k^k$. If it does, then what is it? If not, how do we know that?

Friday, September 22, 2017

linear algebra - characteristic polynomial, matrix rank, matrix similarity


I'm having a problem solving the following assignment, can someone please help me?


I'm given 2 $n \times n$ matrices, $n>1$.


A=$\begin{bmatrix}1 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1\end{bmatrix}$


B=$\begin{bmatrix}n & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}$


1) I need to find the characteristic polynomial of A using A's Rank.


2) I need to prove that the coefficient of $t^{n-1}$ in the characteristic polynomial of A is equal to $-\operatorname{tr}(A)$.


3) I need to prove that A and B are similar matrices and find P so that $B = P^{-1}AP$


*All of A's entries = 1.


Answer




$A$ is symmetric, so the algebraic multiplicity of an eigenvalue is equal to the geometric multiplicity.


It is not hard to see that, for any $x$, $Ax = c(1,1,\dots,1)^T$ for some constant $c$. Thus, its rank is $1$ (corresponding to eigenvalue $\lambda = ...?$) and the other $n-1$ eigenvalues are $0$. Such a matrix has characteristic polynomial


$$ (t - \lambda)(t - 0)^{n-1} = t^{n-1}(t - \lambda) $$


For question 2, it is easy to directly calculate the trace, and you should now have the characteristic polynomial, so just verify.


To find a similarity transform, you can find all the eigenvectors (meaning, find $n$ linearly independent eigenvectors) of $A$, or of $B$. One will be much easier than the other.


elementary number theory - Prove that if 2 divides n and 7 divides n, then 14 divides n




Okay so I have to prove this. I can write that if 2 divides n and 7 divides n, then there must be integers k and m such that
$2*k=n$
and
$7*m=n$



So $14*k*m=n^2$



But what to do after that?



If I say that then 14 divides $n^2$, I get a bit of a circular argument, but if I write that n divides $14*k*m$, then I don't know what to do next.




Any help/suggestions?


Answer



Following from what you have written, $$n = 2k=7m \implies k=\frac{7m}{2}.$$
Since $k$ is an integer and $\gcd(2,7)=1$, $m/2$ must be an integer; i.e., $m/2=r \implies m=2r$, where $r$ is an integer. Therefore,
$$n=7m=7\times 2r = 14 r.$$
Q.E.D.
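
A brute-force sanity check of the statement (my own addition; it uses only Python's standard library):

```python
# Check for n up to 10000, plus the lcm viewpoint (math.lcm needs Python 3.9+).
from math import lcm

assert all(n % 14 == 0 for n in range(1, 10001) if n % 2 == 0 and n % 7 == 0)
print("verified for n <= 10000; lcm(2, 7) =", lcm(2, 7))
```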


Thursday, September 21, 2017

exponentiation - What comes before the cycles in sequences of repeated modular multiplication?




I was working on this problem and found a solution that involved sequences generated by modular exponentiation with an incrementing exponent:



$$x_{b,m,i} = b^{i} \mod m$$



For example, in modular base 10:




  • $0^1 \equiv 0$, $0^2\equiv 0$, $0^3\equiv 0$, …

  • $1^1 \equiv 1$, $1^2\equiv 1$, $1^3\equiv 1$, …

  • $2^1 \equiv 2$, $2^2\equiv 4$, $2^3\equiv 8$, $2^4\equiv 6$, $2^5\equiv 2$, $2^6\equiv 4$, $2^7\equiv 8$, $2^8\equiv 6$, …

  • $3^1\equiv 3$, $3^2\equiv 9$, $3^3\equiv 7$, $3^4\equiv 1$, $3^5\equiv 3$, $3^6\equiv 9$, $3^7\equiv 7$, $3^8\equiv 1$, …

  • $4^1 \equiv 4$, $4^2\equiv 6$, $4^3\equiv 4$, $4^4\equiv 6$, …

  • $5^1 \equiv 5$, $5^2\equiv 5$, $5^3\equiv 5$, …

  • $6^1 \equiv 6$, $6^2\equiv 6$, $6^3\equiv 6$, …

  • $7^1\equiv 7$, $7^2\equiv 9$, $7^3\equiv 3$, $7^4\equiv 1$, $7^5\equiv 7$, $7^6\equiv 9$, $7^7\equiv 3$, $7^8\equiv 1$, …

  • $8^1 \equiv 8$, $8^2\equiv 4$, $8^3\equiv 2$, $8^4\equiv 6$, $8^5 \equiv 8$, $8^6\equiv 4$, $8^7\equiv 2$, $8^8\equiv 6$, …

  • $9^1 \equiv 9$, $9^2\equiv 1$, $9^3\equiv 9$, $9^4\equiv 1$, …




This property of forming cycles is quite nice, and can be proven trivially to happen in all sequences for every exponential base and every modular base (every element depends only on the previous one, and using the pigeon hole principle we can exhaust the congruence classes until an element is produced that appeared before).



But these cycles do not always start at the beginning of the sequences. Exceptional examples I found:




  • $x_{b,m,0} = b^0 \mod m \equiv 1$ for all sequences

  • $x_{2,4} = 1, 2, 0, 0, 0$…

  • $x_{3,9} = 1, 3, 0, 0, 0$…

  • $x_{2,8} = 1, 2, 4, 0, 0$…




Now my question: What's special about these bases? How long are those irregular beginnings? Do they ever precede a cycle of more than one element?



I did some research, and modular multiplicative inverses seem to be related, as none of my examples had one. I also found modular multiplicative groups (Wolfram article), which seem to describe the cycles, but I didn't completely understand them and didn't notice anything about the sequence beginnings.


Answer



Your examples of $x_{2,4},x_{3,9},x_{2,8}$ sequences are cases where $m$ is a power of $b$, and will therefore eventually produce zeros once you take a sufficiently large power of $b$, but not initially, while $b^i \lt m$.



You can go further than this: if $m$ and $b$ have the same distinct prime factors then the same sort of thing will happen, for example $12=2^2\times 3^1$ and $54=2^1\times 3^3$ so they both have the distinct prime factors $2$ and $3$, while $12^1\equiv 12 \pmod{54}$, $12^2\equiv 36 \pmod{54}$, and $12^i\equiv 0 \pmod{54}$ for $i \ge 3$



On the neater patterns you spotted, some are related to Fermat's little theorem $$a^p \equiv a \pmod{p}$$ for prime $p$ and $1\le a \lt p$ and to its generalisation, Euler's theorem $$a^{\varphi (n)} \equiv 1 \pmod{n}$$ for $a$ coprime to $n$ where $\varphi (n)$ is Euler's totient function
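
A small script for exploring the tail (pre-period) and cycle lengths directly (my own sketch; the function name is made up for illustration):

```python
# Tail (pre-period) and cycle length of the sequence b^i mod m, i = 0, 1, 2, ...
def tail_and_cycle(b, m):
    seen = {}
    x, i = 1 % m, 0            # start at b^0
    while x not in seen:
        seen[x] = i
        x, i = (x * b) % m, i + 1
    return seen[x], i - seen[x]

for b, m in [(2, 10), (3, 10), (2, 4), (3, 9), (2, 8), (12, 54)]:
    t, c = tail_and_cycle(b, m)
    print(f"b={b:2d}, m={m:2d}: tail length {t}, cycle length {c}")
```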



elementary number theory - Is it possible to do modulo of a fraction

I am trying to figure out how to take the modulo of a fraction.



For example: 1/2 mod 3.




When I type it in google calculator I get 1/2. Can anyone explain to me how to do the calculation?
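
One way to make sense of it: $1/2 \bmod 3$ asks for the multiplicative inverse of $2$ modulo $3$. A minimal illustration (my own addition; `pow` with a negative exponent needs Python 3.8+):

```python
# "1/2 mod 3" interpreted as the inverse of 2 modulo 3.
inv2 = pow(2, -1, 3)
print(inv2)                # 2, since 2 * 2 = 4 ≡ 1 (mod 3)
print((1 * inv2) % 3)      # so 1/2 ≡ 2 (mod 3)
```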

complex numbers - Finding modulus of $\sqrt{6} - \sqrt{6}\,i$

I found the real part $=\sqrt{6}$.


But I don't know how to find the imaginary part. I thought it was whatever part of the expression involves $i$, with the $i$ removed? Therefore the imaginary part would be $-\sqrt{6}$.



Meaning the modulus is equal to \begin{align} \sqrt{ (\sqrt{6})^2 + (-\sqrt{6})^2} = \sqrt{12}. \end{align} The answer was $2\sqrt{3}$.
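
A quick numerical check that $\sqrt{12} = 2\sqrt{3}$, so the two answers agree (my own addition):

```python
# |sqrt(6) - sqrt(6) i| = sqrt(12) = 2*sqrt(3): all three agree numerically.
import math

z = complex(math.sqrt(6), -math.sqrt(6))
print(abs(z), math.sqrt(12), 2 * math.sqrt(3))   # ≈ 3.4641016151...
```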

Infinite series $\sum_{n=2}^{\infty} \frac{1}{n \log (n)}$




Recently, I encountered a problem about infinite series.
So my question is how to know whether the infinite series $\sum _{n=2}^{\infty } \frac{1}{n \log (n)}$ is convergent?


Answer



To see whether $\sum_2^\infty 1/(n \log n)$ converges, we can use the integral test. This series converges if and only if this integral does:
$$
\int_2^\infty \frac{1}{x \log x} dx = \left[\log(\log x)\right]_2^\infty
$$
and in fact the integral diverges.




This is part of a family of examples worth remembering. Note that
$$
d/dx \log(\log(\log x)) = d/dx \log(\log x) \cdot \frac{1}{\log (\log x)} = \frac{1}{x \log x \log(\log x)}
$$
and $\log (\log (\log x)) \to \infty$ as $x \to \infty$, hence $\sum \frac{1}{n \log n \log (\log n)}$ diverges as well. Similarly, by induction we can put as many iterated $\log$s in the denominator as we want (i.e. $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ where the $i$th log is iterated $i$ times), and it will still diverge. However, as you should check, $\sum \frac{1}{n \log^2 n}$ converges, and in fact (again by induction) if you square any of the iterated logs in $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ the sum will converge.
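
A short numerical illustration of how slowly these partial sums grow, tracking $\log(\log N)$ (my own sketch):

```python
# Partial sums of 1/(n log n) grow like log(log N): unbounded, but extremely slowly.
import math

s = 0.0
checkpoints = {10**2, 10**4, 10**6}
for n in range(2, 10**6 + 1):
    s += 1.0 / (n * math.log(n))
    if n in checkpoints:
        print(f"N = {n:>7d}: partial sum ≈ {s:.4f}, log(log N) ≈ {math.log(math.log(n)):.4f}")
```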


Wednesday, September 20, 2017

Many other solutions of Cauchy's Functional Equation


On the Wikipedia page for Cauchy's Functional Equation, it is said that




On the other hand, if no further conditions are imposed on f, then (assuming the axiom of choice) there are infinitely many other functions that satisfy the equation. This was proved in 1905 by Georg Hamel using Hamel bases. Such functions are sometimes called Hamel functions.



Could anyone give a more explicit explanation of these many other solutions?


Besides the trivial solutions of the form $f(x)=C x$, where $C$ is a constant, and the solutions constructed above from a Hamel basis, are there any other solutions?


Answer



All solutions are "Hamel basis" solutions. Any solution of the functional equation is $\mathbb{Q}$-linear. Let $H$ be a Hamel basis. Given a solution $f$ of the functional equation, let $g(b)=f(b)$ for every $b\in H$, and extend by $\mathbb{Q}$-linearity. Then $f(x)=g(x)$ for all $x$.


There are lots of solutions because a Hamel basis has the cardinality $c$ of the continuum. A solution can assign arbitrary values to the elements of the basis, and be extended to $\mathbb{R}$ by $\mathbb{Q}$-linearity. So there are $c^c$ solutions. There are only $c$ linear solutions, so "most" solutions of the functional equation are non-linear.


linear algebra - Sum of 5 square roots equals another square root. What is the minimum possible value of the summed square root?





In the equation $\sqrt{a}+\sqrt{b}+\sqrt{c}+\sqrt{d}+\sqrt{e}=\sqrt{f}$, each variable is a distinct positive integer. What is the least possible value for $f?$




Purely by trial and error, I have come to the solution $a = 1$, $b = 4$, $c = 9$, $d = 16$, $e = 25$, whose square roots sum to $15$; thus $\sqrt{f} = 15$ and $f = 225$.






I suspect the final answer will turn out to be $f = 225 = 15^2$, but I have not yet managed to come up with a rigorous proof that no smaller value of $f$ is possible.







The main source of inspiration I've been trying to work from so far is to consider a simpler variation of this problem where you only have two square roots on the left hand side,
$$\sqrt{a}+\sqrt{b} = \sqrt{f}.$$
In this variation of the problem, we can square both sides to find that
$$a+b+2\sqrt{ab} = f.$$
Therefore,
$$2\sqrt{ab} = f-a-b,$$
consequently,

$$4ab = (f-a-b)^2.$$
Since $a,b,f$ are integers, the left hand side ($4ab$) is an integer divisible by $2$, so the product $(f-a-b)^2 = (f-a-b)\cdot (f-a-b)$ is divisible by $2$. A product of integers is divisible by a prime $p$ if and only if at least one of the factors is divisible by $p$. (This is a simple consequence of the Fundamental Theorem of Arithmetic, which is just the official name for the fact that prime factorizations of integers are unique.) Therefore, the integer factor $f-a-b$ itself must be divisible by $2$. In other words, we have deduced that $k = \frac{f-a-b}{2}$ must be an integer.



Thus we see that $$ab = k^2$$ is the square of an integer. Therefore, again as a consequence of the Fundamental Theorem of Arithmetic, it must be the case that there exist integers $\alpha, \beta,\gamma$ such that $a = \alpha^2 \gamma$ and $b = \beta^2 \gamma,$ so
\begin{align*}f &= a+b+2\sqrt{ab}\\
&= \alpha^2\gamma + \beta^2\gamma+2\sqrt{\alpha^2\beta^2\gamma^2}\\
&= (\alpha^2+\beta^2+2\alpha\beta)\gamma\\
&= (\alpha+\beta)^2\gamma\end{align*}

Thus, in this simpler version of the problem where there is only a sum of two square roots, clearly the smallest possible value of $f$ comes from taking $\gamma = 1, \alpha = 1, \beta = 2$ which gives the solution
$$\sqrt{1}+\sqrt{4} = \sqrt{9}.$$




I've been looking for ways to then bootstrap this argument (or something like it) up to something that can be applied to solving the original problem with a sum of five square roots. So far, I haven't found the right path forward.






I have also been looking for ways to exploit some routine square root inequalities. In particular, for any positive numbers $a_1, a_2,\dots,a_n$, it is the case that
$$\sqrt{a_1+a_2+\dots+a_n} < \sqrt{a_1}+\sqrt{a_2}+\dots+\sqrt{a_n} \le \sqrt{n} \sqrt{a_1+a_2+\dots+a_n}.$$
This also has not yet led to progress.



I am consequently stuck here with no ideas how to continue.




Your help is appreciated!



Also, can you help me with this problem too? ($N$'s base-5 and base-6 representations, treated as base-10 numbers, yield the sum $S$. For which $N$ are $S$'s rightmost two digits the same as $2N$'s?)



Thanks!



Max0815


Answer



You are asking to determine the minimum $f$ in




$$\sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d} + \sqrt{e} = \sqrt{f} \tag{1}\label{eq1}$$



where each variable is a distinct positive integer. Change this slightly to



$$\sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d} + \sqrt{e} - \sqrt{f} = 0 \tag{2}\label{eq2}$$



Each positive integer factors uniquely as a perfect square (possibly $1$) times a square-free integer (possibly $1$). The perfect-square factors can be taken out of the square roots. Also, you can combine any terms which have the same square-free part into one term with an integer multiplier. Thus, you get some $1 \le n \le 6$, with a result of



$$\sum_{i = 1}^n g_i \sqrt{h_i} = 0 \tag{3}\label{eq3}$$




for some integers $g_i$ and distinct square-free integers $h_i$.



In the MSE question The square roots of different primes are linearly independent over the field of rationals, the answer by Qiaochu Yuan gives a link to his blog post Square roots have no unexpected linear relationships, which says




IMO medalist Iurie Boreico has an article in an issue of the Harvard College Mathematics Review about his favorite problem:



Let $n_1, ... n_k$ be distinct squarefree integers. Show that if $a_1, ... a_k \in \mathbb{Z}$ are not all zero, then the sum $S = a_1 \sqrt{n_1} + ... + a_k \sqrt{n_k}$ is nonzero.




In other words, the problem is to show that there are no "unexpected" linear relationships between the square roots $\sqrt{1}, \sqrt{2}, \sqrt{3}, \sqrt{5}, \sqrt{6}, \ldots$.




If you grant this result, then in \eqref{eq3} every coefficient $g_i$ must be $0$; since $-\sqrt{f}$ is the only subtracted term, this forces all six radicals to share the same square-free part, and any square-free part larger than $1$ (i.e. non-integral square roots) only produces a larger value of $f$. Thus, the minimum value of $\sqrt{f}$ is $1+2+3+4+5 = 15$, resulting in $f = 225$, as you've already determined & conjectured. Unfortunately, the provided links don't work. Although Qiaochu gives a more sophisticated proof in that blog post, he also provides a working link in his MSE answer to Iurie's article. It is Harvard College Mathematics Review, Vol. 2, No. 1, Spring 2008, with the article starting at page 87.
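
Granting that result, every solution of \eqref{eq1} has the shape $a,\dots,e = h\,m_i^2$ for distinct positive integers $m_i$ and one common square-free $h$, so $f = h\,(m_1+\cdots+m_5)^2$. A small brute-force sketch over that parametrization (Python, hypothetical helper names; larger $h$ or larger $m_i$ only increase $f$, so a small search window suffices):

```python
from itertools import combinations

def is_squarefree(h):
    d = 2
    while d * d <= h:
        if h % (d * d) == 0:
            return False
        d += 1
    return True

best = None
for h in range(1, 11):
    if not is_squarefree(h):
        continue
    for ms in combinations(range(1, 11), 5):
        f = h * sum(ms) ** 2
        values = [h * m * m for m in ms] + [f]
        # all six integers a, b, c, d, e, f must be distinct
        if len(set(values)) == 6 and (best is None or f < best[0]):
            best = (f, h, ms)

print(best)   # (225, 1, (1, 2, 3, 4, 5))
```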


limits - Why can we resolve indeterminate forms?



I'm asking myself why indeterminate forms arise, and why limits that apparently give indeterminate forms can be resolved with some arithmetic tricks. Why is it that $$\begin{equation*}
\lim_{x \rightarrow +\infty}
\frac{x+1}{x-1}=\frac{+\infty}{+\infty}
\end{equation*} $$



and if I do a simple operation,




$$\begin{equation*}
\lim_{x \rightarrow +\infty}
\frac{x(1+\frac{1}{x})}{x(1-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(1-\frac{1}{x})}=1
\end{equation*} $$



I understand the logic of the process, but I can't understand why we get different results by "not" changing anything.


Answer



So you're looking at something of the form
$$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$

and if this limit exists, say the limit is $L$, then it doesn't matter how we rewrite $f(x)$. However, you may be able to write $f(x)$ in different ways, e.g. as the quotient of different functions:
$$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$
The limit of $f$ either exists or it doesn't, but whether the individual limits of the numerator and denominator exist can depend on which way you write $f$ as a quotient. More specifically, it's possible that
$$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as another quotient of functions but in such a way that the individual limits in the numerator and denominator now do exist, which allows the use of the rule in blue ("limit of a quotient, is the quotient of the limits; if these two limits exist"):
$$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.






When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:
$$\begin{align}

\lim_{x \rightarrow +\infty} f(x) & = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt]
& = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1-0} = 1 \\
\end{align}$$
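
As a sanity check on the rewritten form (a sketch using SymPy, which evaluates the limit of the whole quotient symbolically rather than plugging in $+\infty/+\infty$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

print(sp.limit((x + 1) / (x - 1), x, sp.oo))   # 1
# the individual limits of the rewritten numerator and denominator exist:
print(sp.limit(1 + 1/x, x, sp.oo))             # 1
print(sp.limit(1 - 1/x, x, sp.oo))             # 1
```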


algebra precalculus - Gaussian proof for the sum of squares?



There is a famous proof for the sum of the first $n$ positive integers, supposedly put forward by Gauss.




$$S=\sum\limits_{i=1}^{n}i=1+2+3+\cdots+(n-2)+(n-1)+n$$



$$2S=(1+n)+(2+(n-1))+\cdots+(n+1)$$



$$S=\frac{n(1+n)}{2}$$



I was looking for a similar proof for when $S=\sum\limits_{i=1}^{n}i^2$



I've tried the same approach of adding the summation to itself in reverse, and I've found this:




$$2S=(1^2+n^2)+(2^2+n^2+1^2-2n)+(3^2+n^2+2^2-4n)+\cdots+(n^2+n^2+(n-1)^2-2(n-1)n)$$



From which I noted I could extract the original sum;



$$2S-S=(1^2+n^2)+(2^2+n^2-2n)+(3^2+n^2-4n)+\cdots+(n^2+n^2-2(n-1)n)-n^2$$



Then if I collect all the $n$ terms;



$$2S-S=(n-1)\cdot n^2 +(1^2)+(2^2-2n)+(3^2-4n)+\cdots+(n^2-2(n-1)n)$$




But then I realised I still had the original sum in there, and taking that out meant I no longer had a sum term to extract.



Have I made a mistake here? How can I arrive at the answer of $\dfrac{n (n + 1) (2 n + 1)}{6}$ using a method similar to the one I expound on above, i.e., following Gauss's line of reasoning?


Answer



You can use something similar, though it requires work at the end.



If $S_n = 1^2 +2^2 + \cdots + n^2$ then
$$S_{2n}-2S_n = ((2n)^2 - 1^2) + ((2n-1)^2-2^2) +\cdots +((n+1)^2-n^2)$$



$$=(2n+1)(2n-1 + 2n-3 + \cdots +1) = (2n+1)n^2$$ using the Gaussian trick in the middle.




Similarly $$S_{2n+1}-2S_n = (2n+1)(n+1)^2$$



So for example to work out $S_9$, you start



$$S_0=0^2=0$$



$$S_1=1 + 2S_0 = 1$$



$$S_2=3+2S_1=5$$




$$S_4=20+2S_2=30$$



$$S_9 = 225+2S_4 = 285$$



but clearly there are easier ways.
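
The two recurrences are also easy to check by machine; a small Python sketch (hypothetical function name), verified against the closed form:

```python
def S(n):
    """Sum of the first n squares via S(2m) = (2m+1)m^2 + 2S(m) and
    S(2m+1) = (2m+1)(m+1)^2 + 2S(m)."""
    if n == 0:
        return 0
    m, odd = divmod(n, 2)
    if odd:
        return (2 * m + 1) * (m + 1) ** 2 + 2 * S(m)
    return (2 * m + 1) * m * m + 2 * S(m)

assert all(S(n) == n * (n + 1) * (2 * n + 1) // 6 for n in range(200))
print(S(9))   # 285, as in the worked example above
```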


radicals - How to prove: if $a,b in mathbb N$, then $a^{1/b}$ is an integer or an irrational number?



It is well known that $\sqrt{2}$ is irrational, and by modifying the proof (replacing 'even' with 'divisible by $3$'), one can prove that $\sqrt{3}$ is irrational as well. On the other hand, clearly $\sqrt{n^2} = n$ for any positive integer $n$. It seems that any positive integer has a square root that is either an integer or an irrational number.





  1. How do we prove that if $a \in \mathbb N$, then $\sqrt a$ is an integer or an irrational number?





I also notice that I can modify the proof that $\sqrt{2}$ is irrational to prove that $\sqrt[3]{2}, \sqrt[4]{2}, \cdots$ are all irrational. This suggests we can extend the previous result to other radicals.





  2. Can we extend 1? That is, can we show that for any $a, b \in \mathbb{N}$, $a^{1/b}$ is either an integer or irrational?



Answer




These (standard) results are discussed in detail in



http://math.uga.edu/~pete/4400irrationals.pdf



This is the second handout for a first course in number theory at the advanced undergraduate level. Three different proofs are discussed:



1) A generalization of the proof of irrationality of $\sqrt{2}$, using the decomposition of any positive integer into a perfect $k$th power times a $k$th power-free integer, followed by Euclid's Lemma. (For some reason, I don't give all the details of this proof. Maybe I should...)



2) A proof using the functions $\operatorname{ord}_p$, very much along the lines of the one Carl Mummert mentions in his answer.




3) A proof by establishing that the ring of integers is integrally closed. This is done directly from unique factorization, but afterwards I mention that it is a special case of the Rational Roots Theorem.



Let me also remark that every proof I have ever seen of this fact uses the Fundamental Theorem of Arithmetic (existence and uniqueness of prime factorizations) in some form. [Edit: I have now seen Robin Chapman's answer to the question, so this is no longer quite true.] However, if you want to prove any particular case of the result, you can use a brute force case-by-case analysis that avoids FTA.
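
As a computational aside (a sketch, not from the handout): granting the theorem, deciding whether $a^{1/b}$ is rational reduces to an exact integer $b$-th root test, e.g. by binary search; the names below are hypothetical.

```python
def integer_root(a, b):
    """Return r with r**b == a if a is a perfect b-th power, else None."""
    lo, hi = 0, max(1, a)
    while lo <= hi:
        mid = (lo + hi) // 2
        p = mid ** b
        if p == a:
            return mid
        if p < a:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

for a, b in [(8, 3), (2, 2), (81, 4), (10, 3)]:
    r = integer_root(a, b)
    # by the theorem, "not a perfect b-th power" already means "irrational"
    print(f"{a}^(1/{b}) =", r if r is not None else "irrational")
```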


Tuesday, September 19, 2017

calculus - How to prove $lim limits_{x to 0}frac {tan (x)}{x} = 1$?



I was going through a calculus textbook when I came across the following two identities:



$$\lim \limits_{x \to 0}\dfrac {\sin (x)}{x} = 1
\qquad\text{and}\qquad\lim \limits_{x \to 0}\dfrac {\tan (x)}{x} = 1.$$



The first one is really popular/famous and has many proofs. But I am more interested in the second one.



The fraction $\frac {\tan (x)}{x}$ becomes $\frac {0}{0}$ at $x=0$, so the easiest approach is L'Hôpital's Rule.




So $$\lim \limits_{x \to 0}\dfrac {\tan (x)}{x} = \lim \limits_{x \to 0}\dfrac{\frac{d}{dx}\tan(x)}{\frac{d}{dx}x} = \lim \limits_{x \to 0}\dfrac{\sec^2(x)}{1}
\\ \implies \lim \limits_{x \to 0}\dfrac {\tan (x)}{x} = \sec^2(0) = 1.$$



A geometrical proof of the same can be found here. I am looking for some non-geometrical proof for this identity.


Answer



$$\lim \limits_{x\rightarrow 0} \dfrac{\tan{x}}{x}=\lim \limits_{x\rightarrow 0} \dfrac{\sin{x}}{x}\lim \limits_{x\rightarrow 0} \frac{1}{\cos{x}}=1\times 1=1 $$



since the two limits exist.


real analysis - If a sequence of absolute values is bounded, does it then converge?



I'm stuck on an easy proof. I have a bounded sequence of partial sums $\sum\limits_{k=1}^{n}|x_{k}|$ and I need to prove that it converges. I don't see how this would work. I don't see how I could use Cauchy, and I also don't see why this sequence would have to have a limit.



EDIT: thanks to the tips, the solution was easy. Now another proof must be given, for the convergence of the sequence $\sum\limits_{k=1}^{n}x_{k}$, where I cannot use the monotonically-increasing argument. I was thinking about rearranging $S_{n}$ in a way that makes it monotonically increasing, but I don't know if that is allowed. Any suggestions?



Answer



Hint: The sequence is monotonically increasing.
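
For the edited question (convergence of $\sum_{k=1}^{n} x_k$ once $\sum_{k=1}^{n} |x_k|$ is known to converge), a sketch of one standard route, not part of the original hint: by the triangle inequality, for $m<n$
$$\left|\sum_{k=m+1}^{n} x_k\right| \le \sum_{k=m+1}^{n} |x_k|,$$
and the right-hand side can be made arbitrarily small because the partial sums of $|x_k|$ converge and are therefore Cauchy; hence the partial sums of $x_k$ are Cauchy as well, so they converge. No rearrangement is needed.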


real analysis - Approximation of $ int_{0}^{infty} frac{ln(t)}{sqrt{t}}e^{-nt} mathrm dt,nrightarrowinfty $

How can I find the first term of the series expansion of


$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt,n\rightarrow\infty ?$$


Or:



As $$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt} \mathrm dt = \frac{1}{\sqrt{n}}\int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt $$


What is $$ \lim_{n\rightarrow\infty} \int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t} \mathrm dt?$$
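
One possible route (a sketch, not from the original post): differentiating $\int_0^\infty t^{s-1}e^{-t}\,\mathrm dt=\Gamma(s)$ under the integral sign gives $\int_0^\infty t^{-1/2}e^{-t}\ln t\,\mathrm dt=\Gamma'(\tfrac12)=\Gamma(\tfrac12)\psi(\tfrac12)=-\sqrt{\pi}\,(\gamma+2\ln 2)$, so
$$\int_{0}^{\infty} \frac{\ln(\frac{t}{n})}{\sqrt{t}}e^{-t}\, \mathrm dt = \Gamma'(\tfrac12)-\Gamma(\tfrac12)\ln n = -\sqrt{\pi}\left(\ln n+\gamma+2\ln 2\right)\longrightarrow -\infty,$$
and therefore
$$\int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-nt}\, \mathrm dt \sim -\frac{\sqrt{\pi}\,\ln n}{\sqrt{n}}, \qquad n\to\infty.$$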

calculus - Meaning of differentials when treated separately from the Leibniz notation dy/dx

I've heard people say $dy/dx$ is not a fraction with $dy$ as the numerator and $dx$ as the denominator; that it is just notation representing the derivative of the function $y$ with respect to the variable $x$. The calculus book I am reading (Calculus: A Complete Course - Adams & Essex) says that even though $dy/dx$, defined as the limit of $\Delta y / \Delta x$, appears to be "meaningless" if we treat it as a fraction, it can still be "useful to be able to define the quantities $dy$ and $dx$ in such a way that their quotient is the derivative $dy/dx$". It then goes on to define the differential $dy$ as "a function of $x$ and $dx$", as follows: $$dy = \frac{dy}{dx}\,dx = f'(x)\,dx$$

What is the meaning of $dx$ here? It is now an independent variable, yet it seemingly is not one supplied by most functions I would work with.

In a later chapter, focused on using differentials to approximate change, the book gives the following: $$\Delta y = \frac{\Delta y}{\Delta x}\,\Delta x \approx \frac{dy}{dx}\,\Delta x = f'(x)\,\Delta x$$ This makes sense to me. The change in $y$, $\Delta y$, at/near a point can be approximated by multiplying the derivative at that point by some change in $x$, $\Delta x$, at/near the point. Here $\Delta y$ and $\Delta x$ are real numbers, so there is no leap in understanding that is necessary.

The problem with the definition of $dy$ is that the multiplication is not between a derivative and a real number, such as in the approximation of $\Delta y$, but a product of a derivative and an object that is not explicitly defined. Because I do not understand what $dx$ is, I can't use it to build an understanding of what $dy$ is. I also have no real understanding of what $dy$ is meant to be, so I cannot work backwards to attach some meaning to $dx$.

Is $dy$ meant to be some infinitesimal quantity? If so, how can we be justified in using it when most of the book is restricted to the real numbers? Are $dy$ and $dx$ still related to the limits of functions, or are they detached from that meaning?

Later in the chapter on using differentials to approximate change, the book says it is convenient to "denote the change in $x$ by $dx$ instead of $\Delta x$". We can just switch out $\Delta x$ for $dx$? Why is it convenient to do this? More importantly, how are we justified in doing this?


The question What exactly is a differential? and the blog post https://math.blogoverflow.com/2014/11/03/more-than-infinitesimal-what-is-dx/ both contain discussions that are beyond my understanding. My problem arose in an introductory textbook; I find it strange that we can just swap out different symbols when we go to great lengths to say they are different entities.


In What is the practical difference between a differential and a derivative? Arturo Magidin’s answer says that it is not “literally true” that $dy = \frac{dy}{dx}\,dx$ and that replacing $\Delta x$ with $dx$ is an “abuse of notation”. If that is the case, then the quotient of $dy$ and $dx$ would not be $\frac{dy}{dx}$, but $\frac{dy}{dx}\frac{dx}{\Delta x}$, right?

calculus - Is there a novel way to integrate this without using complex numbers?



I've been reading a post on Quora about lesser known techniques of integration and I'm just curious if there's also a novel way to integrate this type of integral without resorting to complex analysis.




$$ \int^\infty_0 \frac {\cos (ax)\,dx}{x^2}, a \geq 0 $$


Answer



The given integral is not converging, so I assume you wanted to study:



$$f(a)=\int_{0}^{+\infty}\frac{1-\cos(ax)}{x^2}\,dx$$
that is an even function, hence we can assume $a\geq 0$ WLOG. Integration by parts then gives:
$$ f(a) = a\int_{0}^{+\infty}\frac{\sin(ax)}{x}\,dx $$
and by replacing $x$ with $\frac{z}{a}$ we get:
$$ f(a)=a\int_{0}^{+\infty}\frac{\sin x}{x}\,dx = \frac{\pi}{2}\,a $$
leading to:





$$\forall r\in\mathbb{R},\qquad \int_{0}^{+\infty}\frac{1-\cos(rx)}{x^2}\,dx = \frac{\pi}{2}|r|.$$



sequences and series - How Can I Represent These Progressions in Sigma Notation?



I would like to represent the following finite progressions in sigma notation:





  1. Finding the $n^{\text{th}}$ term of a geometric progression: $a_n=a_1 r^{n-1}$, where $a_1$ is the first term and $r$ is the common ratio


  2. The sum of a geometric progression: $S_n=a_1\dfrac{1-r^n}{1-r}$


  3. Determining the $n^{\text{th}}$ term of an arithmetic progression: $a_n=a_1+(n-1)d$, where $d$ is the common difference

  4. And finally, the sum of an arithmetic progression: $S_n=\frac{n}{2}(2a_1+(n-1)d)$


Answer



You wish to express the sum of the first $n$ terms of a geometric progression and the sum of the first $n$ terms of an arithmetic progression in summation notation.




Sum of a geometric progression: If the initial term is $a_1$ and the common ratio is $r$, then the $k$th term of the geometric progression is $a_k = a_1r^{k - 1}$. Hence, the sum of the first $n$ terms of the geometric progression is
$$S_n = \sum_{k = 1}^{n} a_1r^{k - 1} = \begin{cases}
a_1 \dfrac{1 - r^n}{1 - r} & \text{if $r \neq 1$}\\
na_1 & \text{if $r = 1$}
\end{cases}
$$

Notice that the summation index must be a letter different from the one used for the upper limit. Otherwise, all $n$ terms in the sum would be equal to $a_1r^{n - 1}$.



Sum of an arithmetic progression: If the initial term is $a_1$ and the common difference is $d$, then the $k$th term of the arithmetic progression is $a_k = a_1 + (k - 1)d$. Hence, the sum of the first $n$ terms of the arithmetic progression is

$$S_n = \sum_{k = 1}^{n} [a_1 + (k - 1)d] = \frac{n}{2}[2a_1 + (n - 1)d] = \frac{n(a_1 + a_n)}{2}$$
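
Both closed forms are easy to sanity-check numerically against the term-by-term sums; a minimal Python sketch (hypothetical function names):

```python
def geometric_sum(a1, r, n):
    return n * a1 if r == 1 else a1 * (1 - r**n) / (1 - r)

def arithmetic_sum(a1, d, n):
    return n * (2 * a1 + (n - 1) * d) / 2

a1, r, d, n = 3, 2, 5, 10
assert geometric_sum(a1, r, n) == sum(a1 * r**(k - 1) for k in range(1, n + 1))
assert arithmetic_sum(a1, d, n) == sum(a1 + (k - 1) * d for k in range(1, n + 1))
print("closed forms agree with the term-by-term sums")
```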


Monday, September 18, 2017

modular arithmetic - Find $a$ inverse modulo 30, $1le a le 30$. For each $a$ you find, find the inverse of each $a$ that have inverse modulo 30

Find all $a$ with $1\le a \le 30$ that have an inverse modulo $30$. For each such $a$, find the inverse of $a$ modulo $30$.



$a \in \{1,7,11,13,17,19,23,29\}$



They got this list from the fact that the invertible residues are exactly those relatively prime to $30$. I was wondering if this rule also applies when $a \ge 30$?



Find the inverse of each of the integers in a that have an inverse modulo 30



Not really sure how to do this part
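
A minimal computational sketch (Python 3.8+, where `pow(a, -1, m)` returns a modular inverse; not part of the original question). Note that $a$ is invertible modulo $30$ exactly when $\gcd(a,30)=1$, and the same criterion works for integers $a \ge 30$ as well, since only $a \bmod 30$ matters.

```python
from math import gcd

m = 30
invertible = [a for a in range(1, m + 1) if gcd(a, m) == 1]
print(invertible)                          # [1, 7, 11, 13, 17, 19, 23, 29]

inverses = {a: pow(a, -1, m) for a in invertible}
print(inverses)   # e.g. 7 -> 13, since 7 * 13 = 91 = 3*30 + 1
```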

real analysis - Prove that $y_n=frac{x_1}{1^b}+frac{x_2}{2^b}+... +frac{x_n}{n^b} $ is convergent

Let $a\ge 0$ and $(x_n) _{n\ge 1}$ be a sequence of real numbers. Prove that if the sequence $\left(\frac{x_1+x_2+...+x_n}{n^a} \right)_{n\ge 1}$ is bounded, then the sequence $(y_n) _{n\ge 1}$, $y_n=\frac{x_1}{1^b}+\frac{x_2}{2^b}+... +\frac{x_n}{n^b} $ is convergent $\forall b>a$.
To me, $y_n$ is reminiscent of the $p$-harmonic series, but I don't know if this is actually true. Anyway, I think that we may use the Stolz–Cesàro lemma on $\frac{x_1+x_2+...+x_n}{n^a}$, but I don't know if this is of any use.
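
One possible approach (a sketch via summation by parts, not from the original post): write $A_n=x_1+\cdots+x_n$, so $|A_n|\le Cn^a$ by hypothesis, and set $A_0=0$. Then
$$y_n=\sum_{k=1}^{n}\frac{A_k-A_{k-1}}{k^b}=\frac{A_n}{n^b}+\sum_{k=1}^{n-1}A_k\left(\frac{1}{k^b}-\frac{1}{(k+1)^b}\right).$$
Since $0\le \frac{1}{k^b}-\frac{1}{(k+1)^b}\le \frac{b}{k^{b+1}}$, the $k$-th summand is at most $Cb/k^{\,b+1-a}$ with $b+1-a>1$, so the series converges absolutely, while $|A_n/n^b|\le C/n^{\,b-a}\to 0$; hence $(y_n)$ converges.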

elementary number theory - Prove that there do not exist distinct integers $a,b,c$ and polynomial $P$, with integer coefficients, such that $P(a)=b, P(b)=c, P(c)=a$.

Let $P(x)$ be a polynomial of degree $n$ with integer coefficients. Assume that there exist three distinct integers $a,b,c$ such that $$P(a)=b, P(b)=c, P(c)=a.$$



Since the integers $a,b,c$ are all different, there must be a least among them. Without loss of generality, we can assume that $a$ is the least, i.e. $a < b$ and $a < c$.


Note that, for any $x,y \in{\mathbb{R}}$:



$$P(x) -P(y)=a_n(x^n-y^n)+a_{n-1}(x^{n-1}-y^{n-1})+ \cdots + a_1(x-y).$$



Thus, using the identity



$$x^n-y^n=(x-y)(x^{n-1}+x^{n-2}y+\cdots+xy^{n-2}+y^{n-1}),$$



it follows that




$$P(x) -P(y)=(x-y)Q(x,y),$$



where $Q(x,y)$ is a polynomial in $x,y$ of degree $(n-1)$. Evaluating $P(a)$ and $P(b)$ this way gives us the following equation:



$$P(a)-P(b)=(a-b) Q(a,b)=b-c $$



By our construction, the polynomial $Q$ must also have integer coefficients and so $Q(a,b)=m_1$, for some integer $m_1$. Arguing similarly as above, we get the following three equations:



$$(a-b)m_1=b-c$$

$$(b-c)m_2=c-a$$
$$(a-c)m_3=b-a$$



Clearly $m_i\neq0$ for any $i$. Dividing the first of these equations by the second:



$$\frac{a-b}{c-a}m_1=\frac{b-c}{(b-c)m_2}$$
$$\frac{a-b}{c-a}m_1=\frac{1}{m_2} \qquad (1)$$



A few algebraic manipulations of the third equation give




$$\frac{1}{c-a}=\frac{m_3}{a-b},$$



which we can substitute into $(1)$ and get



$$m_1 m_3= \frac{1}{m_2}.$$



Since the $m_i$ are nonzero integers, $\frac{1}{m_2}$ must equal the integer $m_1m_3$, so $m_2=\pm1$; then $m_1 m_3 = \pm 1$ forces $$m_i=\pm1.$$



We need only consider the two cases for $m_3$. If $m_3=-1$ this implies $c-a=b-a \implies c=b$. If $m_3= 1$ this implies $a-c=b-a$, which is not possible, since $a$ is less than both $c$ and $b$. Thus, since both possible values for $m_3$ lead to a contradiction, there is no polynomial $P(x)$ and distinct integers $a,b,c$ which satisfy the condition $P(a)=b, P(b)=c, P(c)=a.$ $\quad \square$

analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...