Wednesday, August 31, 2016

real analysis - Limit: $\lim\limits_{n\rightarrow\infty}\left( n\bigl(1-\sqrt[n]{\ln(n)}\bigr) \right)$



I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$

which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$Is this correct? If not, what am I doing wrong?


Answer



Since it's so hard, let's solve it in one line:



$$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )=-\lim_{n\rightarrow\infty}\left(\frac{\sqrt[n]{\ln(n)}-1}{\displaystyle\frac{1}{n}\ln(\ln (n)) }\cdot \ln(\ln (n))\right)=-(1\cdot \infty)=-\infty.$$
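As a sanity check (not part of the original answer), a few numerical values confirm that $a_n = n\left(1-\sqrt[n]{\ln n}\right)$ tracks $-\ln(\ln n)$ and therefore drifts, slowly, to $-\infty$:

```python
import math

def a(n):
    # a_n = n * (1 - ln(n)^(1/n))
    return n * (1 - math.log(n) ** (1 / n))

# the answer predicts a_n ~ -ln(ln n), which drifts (slowly) to -infinity
for n in (10, 10**3, 10**6, 10**9):
    print(n, a(n), -math.log(math.log(n)))
```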



Chris.


number theory - Prove that for any $m > 0$, $\gcd(mb,mc) = m\gcd(b,c)$.

Property of GCD





For any $m > 0$ , $\gcd(mb,mc) = m\gcd(b,c)$.




Please prove this. I am learning the theory of numbers in detail, but I am not able to find the proof of this, and I could not find it on the internet either. So please help me with this problem. Please prove it without using the Euclidean algorithm, as it is derived from this.
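Before attempting a proof, the statement is easy to spot-check numerically (a sketch, using Python's standard `math.gcd`):

```python
import math
import random

random.seed(0)
# spot-check gcd(m*b, m*c) == m * gcd(b, c) on random positive integers
for _ in range(1000):
    m = random.randint(1, 10**6)
    b = random.randint(1, 10**6)
    c = random.randint(1, 10**6)
    assert math.gcd(m * b, m * c) == m * math.gcd(b, c)
print("property holds on all samples")
```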

Tuesday, August 30, 2016

calculus - Intuitive explanation for formula of maximum length of a pipe moving around a corner?

For one of my homework problems, we had to try and find the maximum possible length $L$ of a pipe (indicated in red) such that it can be moved around a corner with corridor widths $A$ and $B$ (assuming everything is 2d, not 3d):




[figure: diagram of the pipe (red) being carried around the corner]



My professor walked us through how to derive a formula for the maximum possible length of the pipe, ultimately arriving at the equation $L = (A^{2/3} + B^{2/3})^{3/2}$.



The issue I have is understanding intuitively why this formula works, and exactly what it's doing. I understand the steps taken to get to this point, but there's an odd symmetry to the end result. For example, is the fact that $\frac{2}{3}$ and its reciprocal $\frac{3}{2}$ are the only constants used just a coincidence, or indicative of some deeper relationship?



I also don't quite understand how the formula relates, geometrically, to the diagram. If I hadn't traced the steps myself, I would have never guessed that the formula was in any way related to the original problem.



If possible, can somebody give an intuitive explanation as to why this formula works, and how to interpret it geometrically?







Here's how he found the formula, if it's useful:



The formula is found by expressing the length of the longest segment that fits at a given angle $\theta$ (the angle formed between the pipe and the wall), then taking the derivative and solving $\frac{dL}{d\theta} = 0$ to locate the angle at which $L(\theta)$ attains its minimum; since the pipe must fit at every angle as it swings around the corner, this minimum over $\theta$ is the maximum possible pipe length:



$$
L = \min_{0 \leq \theta \leq \frac{\pi}{2}} \frac{A}{\cos{\theta}} + \frac{B}{\sin{\theta}} \\
0 = \frac{dL}{d\theta} = \frac{A\sin{\theta}}{\cos^2{\theta}} - \frac{B\cos{\theta}}{\sin^2{\theta}} \\

0 = \frac{A\sin^3{\theta} - B\cos^3{\theta}}{\sin^2{\theta}\cos^2{\theta}} \\
0 = A\sin^3{\theta} - B\cos^3{\theta} \\
\frac{B}{A} = \tan^3{\theta} \\
\theta = \arctan{\left( \frac{B}{A} \right)^{\frac{1}{3}}} \\
$$



At this point, we can substitute $\theta$ back into the original equation for $L$ by interpreting $A^{1/3}$ and $B^{1/3}$ as sides of a triangle with angle $\theta$ and hypotenuse $\sqrt{A^{2/3} + B^{2/3} }$:



$$
\cos{\theta} = \frac{A^{1/3}}{ \sqrt{A^{2/3} + B^{2/3} }} \\

\sin{\theta} = \frac{B^{1/3}}{ \sqrt{A^{2/3} + B^{2/3} }} \\
\therefore L = A^{2/3} \sqrt{A^{2/3} + B^{2/3} } + B^{2/3} \sqrt{A^{2/3} + B^{2/3} } \\
L = (A^{2/3} + B^{2/3}) \sqrt{A^{2/3} + B^{2/3} } \\
L = (A^{2/3} + B^{2/3})^{3/2} \\
$$



The equation for the formula for the maximum length of the pipe is therefore $L = (A^{2/3} + B^{2/3})^{3/2}$.
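As a sanity check (not from the original post), the closed form can be compared against a brute-force minimization of the constraint $L(\theta) = A/\cos\theta + B/\sin\theta$ over a fine grid of angles:

```python
import math

def longest_pipe(A, B):
    # closed form L = (A^(2/3) + B^(2/3))^(3/2)
    return (A ** (2.0 / 3.0) + B ** (2.0 / 3.0)) ** 1.5

def min_constraint(A, B, steps=200_000):
    # brute-force minimum of L(theta) = A/cos(theta) + B/sin(theta) on (0, pi/2)
    best = float("inf")
    for i in range(1, steps):
        t = (math.pi / 2) * i / steps
        best = min(best, A / math.cos(t) + B / math.sin(t))
    return best

print(longest_pipe(1, 1), min_constraint(1, 1))  # both ~2.828427
```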

Linear transformation over real field vs. complex field



Let $V = \mathbb{C}$ be the field of complex numbers regarded as a vector space over the field of real numbers (with the usual operations). Find a function T: V → V such that T is a linear transformation on the real vector space V , but such that T is not a linear transformation when V is regarded as a vector space over the field of complex numbers.






My thoughts:



If a transformation of $\mathbb{C}$ as a vector space over the reals is linear, but not linear when the scalars are complex, then isn't that just about any transformation? I.e., if $T(x)$ is linear over the real field, then $T(\alpha x) = \alpha T(x)$ with $\alpha \in \mathbb{R}$, since any complex number multiplied by a real number is still complex; but if $\alpha \in \mathbb{C}$, then two complex numbers could produce a real number, i.e. no longer linear?


Answer




Hint: Try complex conjugation.



In fact, because the complex plane has dimension $1$ as a vector space over $\mathbb C$, every $\mathbb C$-linear transformation of the complex plane is given by $z \mapsto wz$ for $w = a +bi \in \mathbb C$, and so its matrix as a $\mathbb R$-linear transformation with respect to the canonical basis is of the form
$$
\begin{pmatrix}
a & -b \\
b & \hphantom- a
\end{pmatrix}
$$
Of course, not all $\mathbb R$-linear transformations of the plane have this special form.
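The hint is easy to verify concretely (a small sketch, not part of the original answer): conjugation respects addition and real scaling, but not complex scaling.

```python
# complex conjugation as a map on C, viewed as a 2-dimensional real vector space
T = lambda z: z.conjugate()

z, w = 2 + 3j, -1 + 4j

# additive and homogeneous over real scalars...
assert T(z + w) == T(z) + T(w)
assert T(2.5 * z) == 2.5 * T(z)

# ...but not homogeneous over complex scalars:
assert T(1j * z) != 1j * T(z)
print("conjugation is R-linear but not C-linear")
```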



radicals - Prove that if $n$ is a positive integer then $\sqrt{n}+\sqrt{2}$ is irrational





Prove that if $n$ is a positive integer then $\sqrt{n}+ \sqrt{2}$ is irrational.




The sum of a rational and irrational number is always irrational, that much I know - thus, if $n$ is a perfect square, we are finished.
However, is it not possible that the sum of two irrational numbers be rational? If not, how would I prove this?



This is a homework question in my proofs course.


Answer



Suppose $\sqrt n + \sqrt 2 = \frac{p}{q}$ is rational. Multiply both sides by $\sqrt n - \sqrt 2$. Then $n - 2 = \frac{p}{q} ( \sqrt n - \sqrt 2 )$, so $\sqrt n - \sqrt 2$ is also rational. So we have two rational numbers whose difference (which must be rational) is $2 \sqrt 2$, meaning that $\sqrt 2$ is rational, which is a contradiction.


sequences and series - Evaluating N-th partial sums of polynomials











How can I find $\sum_{n=1}^N n^2-n$? Wolfram Alpha will tell you that it is $\frac{N}{3} (N-1)(N+1)$, and given the famous formulas for $\sum_{n=1}^N n^2$ and $\sum_{n=1}^N n$, you could piece together the first. But is there some sort of a general method here that might be of use in evaluating these kinds of partial sums?


Answer



I'm not sure what you mean by general but here are two ways to find $\sum_{n=1}^N p(n)$ for $p$ a polynomial.




  • If $p$ has degree $d$, find the value of the sum for $d+2$ values of $N$ and use Lagrange interpolation.


  • Write $p(n)$ in the binomial basis $n \choose k$ and use the sum-of-column (hockey-stick) identity in Pascal's triangle.
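The first bullet can be sketched in a few lines (an illustration, not part of the original answer); exact rational arithmetic keeps the interpolation honest:

```python
from fractions import Fraction

def partial_sum_poly(coeffs, N):
    # direct evaluation of sum_{n=1}^{N} p(n), with p(n) = sum_k coeffs[k] * n^k
    return sum(sum(c * n**k for k, c in enumerate(coeffs)) for n in range(1, N + 1))

def closed_form(coeffs, N):
    # the partial sum of a degree-d polynomial is a polynomial of degree d+1
    # in N, so d+2 samples determine it; recover it by Lagrange interpolation
    d = len(coeffs) - 1
    xs = list(range(d + 2))
    ys = [partial_sum_poly(coeffs, x) for x in xs]
    total = Fraction(0)
    for i, xi in enumerate(xs):
        term = Fraction(ys[i])
        for j, xj in enumerate(xs):
            if j != i:
                term *= Fraction(N - xj, xi - xj)
        total += term
    return total

# p(n) = n^2 - n, i.e. coefficients [0, -1, 1] (constant term first)
print(closed_form([0, -1, 1], 10))  # matches N(N-1)(N+1)/3 = 330
```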




sequences and series - How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$?


How can I evaluate $$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$? I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method.


In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$


Answer



No need to use Taylor series, this can be derived in a similar way to the formula for geometric series. Let's find a general formula for the following sum: $$S_{m}=\sum_{n=1}^{m}nr^{n}.$$


Notice that \begin{align*} S_{m}-rS_{m} & = -mr^{m+1}+\sum_{n=1}^{m}r^{n}\\ & = -mr^{m+1}+\frac{r-r^{m+1}}{1-r} \\ & =\frac{mr^{m+2}-(m+1)r^{m+1}+r}{1-r}. \end{align*}
Hence $$S_m = \frac{mr^{m+2}-(m+1)r^{m+1}+r}{(1-r)^2}.$$
This equality holds for any $r \neq 1$, but in your case we have $r=\frac{1}{3}$ and a factor of $\frac{2}{3}$ in front of the sum. That is \begin{align*} \sum_{n=1}^{\infty}\frac{2n}{3^{n+1}} & = \frac{2}{3}\lim_{m\rightarrow\infty}\frac{m\left(\frac{1}{3}\right)^{m+2}-(m+1)\left(\frac{1}{3}\right)^{m+1}+\left(\frac{1}{3}\right)}{\left(1-\left(\frac{1}{3}\right)\right)^{2}} \\ & =\frac{2}{3}\frac{\left(\frac{1}{3}\right)}{\left(\frac{2}{3}\right)^{2}} \\ & =\frac{1}{2}. \end{align*}
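A quick numerical check of the closed form and of the final value (not part of the original answer):

```python
import math

def S(m, r):
    # the closed form S_m = (m r^{m+2} - (m+1) r^{m+1} + r) / (1-r)^2
    return (m * r ** (m + 2) - (m + 1) * r ** (m + 1) + r) / (1 - r) ** 2

# check against a direct partial sum
m, r = 50, 1 / 3
direct = sum(n * r ** n for n in range(1, m + 1))
assert abs(S(m, r) - direct) < 1e-12

# the original series: (2/3) * sum_{n>=1} n (1/3)^n
total = (2 / 3) * sum(n * (1 / 3) ** n for n in range(1, 200))
print(total)  # ~0.5
```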



Added note:


We can define $$S_m^k(r) = \sum_{n=1}^m n^k r^n.$$ Then the sum considered above is $S_m^1(r)$, and the geometric series is $S_m^0(r)$. We can evaluate $S_m^2(r)$ by using a similar trick, considering $S_m^2(r) - rS_m^2(r)$. This will then equal a combination of $S_m^1(r)$ and $S_m^0(r)$, which we already have formulas for.


This means that given a $k$, we could work out a formula for $S_m^k(r)$, but can we find $S_m^k(r)$ in general for any $k$? It turns out we can, and the formula is similar to the formula for $\sum_{n=1}^m n^k$, and involves the Bernoulli numbers. In particular, the denominator is $(1-r)^{k+1}$.


Monday, August 29, 2016

elementary number theory - Calculate the last digit of $3^{347}$



I think I know how to solve it, but is that the best way? Is there a better way (using number theory)?

What i do is:
knowing that



1st power last digit: 3
2nd power last digit: 9
3rd power last digit: 7
4th power last digit: 1
5th power last digit: 3



$3^{347} = 3^{5\cdot69+2} = (3^5)^{69} \cdot3^2 \equiv 3^{69}\cdot 3^2 = 3^{71} \equiv 3^{15} \equiv 3^3 = 27 \pmod{10}$ (using $3^5 \equiv 3 \pmod{10}$ repeatedly to reduce the exponent), so the result is $7$.


Answer



How about
$$
3^2 \equiv -1\pmod {10}

$$
so
$$
3^{347} \equiv 3^{2\cdot 173+1}
\equiv 3 \cdot(-1)^{173}
\equiv -3
\equiv 7 \pmod {10}
$$
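A one-line check (not part of the original answer), using Python's three-argument `pow` for modular exponentiation:

```python
# Python's three-argument pow performs modular exponentiation
assert pow(3, 347, 10) == 7

# the last digits of 3^n cycle with period 4
cycle = [pow(3, n, 10) for n in range(1, 5)]
print(cycle)  # [3, 9, 7, 1]
```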


real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

Sunday, August 28, 2016

number theory - Induction on a double summation



Let $f_1, \ldots, f_k, \ldots$ be arbitrary real numbers. Prove that for each non-negative integer $k$:



$$\left (1-f_{1} \right )\cdots\left (1-f_{k} \right ) = \sum_{j=0}^{k}(-1)^{j} \sum_{i_{1}<\cdots<i_{j}} f_{i_{1}}\cdots f_{i_{j}}$$

Hint: Use induction on $k$. You may assume that empty products are $1$. In the case
that the $f_{i}$ are indicator functions of sets this is the “inclusion–exclusion principle”, “principle of cross classification” or “sieve principle”, depending on one’s point
of view.







So far I have been able to prove the base case where $k = 1$, and assumed the inductive hypothesis for $k = n$. I am unsure how to go about proving that this holds for the $k = n + 1$ step.



Any help would be really appreciated.


Answer



Divide the outer sum on the RHS for $k=n+1$ into two parts: summands with $f_{n+1}$ and summands without. Now try to establish a connection between these parts and the RHS for $k=n$.



Let's denote the RHS by $S(k)$. Then the sum of the terms in $S(n+1)$ without $f_{n+1}$ is $S(n)$, and the sum of the remaining terms equals

$$
\sum_{j=1}^{n+1} (-1)^j \sum_{i_1 < \ldots < i_{j-1} < i_j = n+1}
f_{i_1}\cdot \ldots \cdot f_{i_{j-1}}\cdot f_{n+1} =
\sum_{j=0}^{n} (-1)^{j+1} f_{n+1}\sum_{i_1 < \ldots < i_{j}}
f_{i_1}\cdot \ldots \cdot f_{i_{j}},
$$
it's exactly $-f_{n+1}\cdot S(n)$.
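The identity itself is easy to spot-check numerically before doing the induction (an illustration, not part of the original answer):

```python
import itertools
import math
import random

random.seed(1)

def lhs(fs):
    # the product (1 - f_1) ... (1 - f_k)
    return math.prod(1 - f for f in fs)

def rhs(fs):
    # sum over j of (-1)^j * sum over i_1 < ... < i_j of f_{i_1} ... f_{i_j}
    k = len(fs)
    return sum(
        (-1) ** j * math.prod(fs[i] for i in combo)
        for j in range(k + 1)
        for combo in itertools.combinations(range(k), j)
    )

fs = [random.uniform(-1, 1) for _ in range(6)]
assert abs(lhs(fs) - rhs(fs)) < 1e-9
print("identity verified for a random choice of f_1..f_6")
```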


Saturday, August 27, 2016

elementary number theory - Calculating $2^{9999}$ mod $100$ using Fermat's little Theorem



By a modified version of Fermat's little theorem (Euler's theorem) we obtain that $a^{\phi(n)} \equiv 1 \pmod n$ whenever $(a,n)=1$. But my professor accidentally gave us the question of calculating $2^{9999} \bmod 100$. So everyone in the class calculated it using the theorem, since they forgot to check that $(a,n)=1$. But to my surprise, everyone got $88$, which is actually the correct answer.



So I first saw what happens in the case of mod $10$. Here $2^4 \equiv 6$ mod $10$. But magically enough this number multiplies with powers of $2$ as if it were identity i.e.



$$2\cdot6 \equiv 2 \pmod {10}$$




$$4\cdot6 \equiv 4 \pmod {10}$$



$$6\cdot6 \equiv 6 \pmod {10}$$



$$8\cdot6 \equiv 8 \pmod {10}$$



After seeing this I checked what happens in the case of $100$. A similar thing happens there, except with $2^{40} \equiv 76\pmod{100}$. Now $76$ acts like $6$ did, with the exception that $76\cdot 2 \not\equiv 2$, but otherwise $76\cdot 4\equiv 4$ and $76\cdot 8\equiv 8 \pmod{100}$ and so on. Can anyone explain why this happens here, and whether it happens in other cases as well?


Answer



What's going on here is a direct result of the prime factorizations of $10 = 2\cdot 5$ and $100 = 2^2\cdot 5^2$. Try writing it like this:

\begin{align*}
2\cdot 6\equiv 2\cdot(5+1)&\equiv 2\cdot 5 + 2\cdot 1\equiv 0 + 2\pmod{10}\\
4\cdot 6&\equiv4\cdot 5 + 4\cdot 1\equiv 0 + 4\pmod{10}
\end{align*}
and so on. Since $6 = 5+1$, multiplying by an even number will "cancel out" the $5$, and you'll just be left with the $1$, acting like an identity. The same thing is going on with $76 = 75 + 1 = 3\cdot 25 + 1$. To cancel out the $3\cdot 25$, you need a number that's divisible by $4$ ($4\cdot 25 = 100$). If you test $76\cdot 6\pmod{100}$, you should find that the result is $56$, not $6$.



This sort of thing will happen whenever you're calculating mod $n = \prod_{i = 1}^m p_i^{e_i}$ (here $\prod_{i = 1}^m p_i^{e_i}$ is the prime factorization of $n$) and you look at multiplication by a number $k$ of the form $1 + (\textrm{anything})(\textrm{product of some of the }p_i\textrm{'s})$. Whenever you multiply $k$ by any number whose factorization includes the $p_i$'s not appearing in the "product of some of the $p_i$'s" (counted with multiplicity), you'll cancel off the second term of $k$ and be left essentially multiplying by $1$. As an easy example, look mod $6$. Multiplication by $3$ acts like the identity for $3$ and $0$ because $3 = 1 + 2$ and $6 = 2\cdot 3$.
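These claims are easy to confirm numerically (an aside, not part of the original answer):

```python
# 2^9999 mod 100 via built-in modular exponentiation
assert pow(2, 9999, 100) == 88

# 76 = 3*25 + 1 is idempotent mod 100 and acts like an identity on multiples of 4
assert pow(2, 40, 100) == 76
for k in (4, 8, 12, 96):
    assert (76 * k) % 100 == k

# ...but not on even numbers that are not multiples of 4
assert (76 * 2) % 100 == 52
assert (76 * 6) % 100 == 56
print("all checks pass")
```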


calculus - Evaluating $\int_0^\infty \frac{\cos(\pi x)}{e^{2\pi \sqrt{x}} - 1}\, \mathrm{d}x$



I am trying to show that$$\displaystyle \int_0^\infty \frac {\cos {\pi x}} {e^{2\pi \sqrt x} - 1} \mathrm d x = \dfrac {2 - \sqrt 2} {8}$$



I have verified this numerically on Mathematica.



I have tried substituting $u=2\pi\sqrt x$ then using the cosine Maclaurin series and then the $\zeta \left({s}\right) \Gamma \left({s}\right)$ integral formula but this doesn't work because interchanging the sum and the integral isn't valid, and results in a divergent series.




I am guessing it is easy with complex analysis, but I am looking for an elementary way if possible.


Answer



This integral is one of Ramanujan's in his Collected Papers where he also shows the connection with the sin case.



Define $$\int_{0}^{\infty}\frac{\cos(\frac{a\pi x}{b})}{e^{2\pi \sqrt{x}}-1}dx$$



where $a$ and $b$ are both odd. In this case, they are both $a=b=1$.



Then, $$\displaystyle \frac{1}{4}\sum_{k=1}^{b}(b-2k)\cos\left(\frac{k^{2}\pi a}{b}\right)-\frac{b}{4a}\sqrt{b/a}\sum_{k=1}^{a}(a-2k)\sin\left(\frac{\pi}{4}+\frac{k^{2}\pi b}{a}\right)$$




Letting $a=b=1$ results in your posted solution.
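As a numerical aside (not part of Ramanujan's paper or the original answer), the claimed value $(2-\sqrt 2)/8$ can be checked by integrating after the substitution $x = t^2$, which removes the $1/\sqrt{x}$-type behaviour of the integrand at the origin; the Simpson routine below is a plain composite rule:

```python
import math

def integrand(t):
    # after x = t^2 (dx = 2 t dt) the integrand is 2 t cos(pi t^2) / (e^{2 pi t} - 1),
    # which extends continuously to t = 0 with value 1/pi
    if t == 0.0:
        return 1.0 / math.pi
    return 2.0 * t * math.cos(math.pi * t * t) / math.expm1(2.0 * math.pi * t)

def simpson(f, a, b, n=20000):
    # plain composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

value = simpson(integrand, 0.0, 10.0)
print(value, (2 - math.sqrt(2)) / 8)  # both ~0.0732233
```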


Abstract algebra, Field extension

Suppose $F$ and $H$ are fields of size $q=p^{r}$ containing $GF(p)$ as a subfield, $\alpha$ is a primitive element of $F$, and $\beta$ is a primitive element of $H$. Let $m(x)$ be the minimal polynomial of $\alpha$. The non-zero elements of both fields satisfy the equation $x^{q-1}=1$, and $m(x)$ is a divisor of $x^{q-1}-1$. Hence there is an element of $H$, say $\beta^{t}$, which is a root of $m(x)$.
I want to show that there exists a field isomorphism $\phi:F\to H$ which carries zero to zero and $\alpha$ to $\beta^{t}$.
Can you help me prove it? I have tried $\phi(\alpha^{j})=\beta^{tj}$, but I could not show that $\phi$ preserves addition. (This argument is presented by Vera Pless in the book Introduction to the Theory of Error-Correcting Codes.)

complex analysis - Finding real and imaginary parts of $\frac{1-e^{2i\pi x}}{R\left(1-e^{\frac{2i\pi x}{R}}\right)}$



I have a function given by



$$\frac{1-e^{2i \pi x}}{R \left(1-e^{\frac {2i \pi x}{R}} \right)}$$



Using Euler's formula, I expand it into real and imaginary components:



$$\frac{1-\cos 2 \pi x-i\sin2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R}-i\sin\frac{2 \pi x}{R} \right)}$$




But for some reason, here I come unstuck. It seems obvious to me that the real part should be



$$\frac{1-\cos 2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R} \right)}$$



but this appears not to be the case. In fact, it's pretty obvious from the plots below (with $R=3$) that I'm wrong:



[plot comparing the two expressions for $R=3$]



And it's not even as simple as multiplying through by $\frac{1}{R}$, as this plot charting the variance after multiplying through shows:




[plot of the remaining discrepancy after multiplying through]



Could someone please explain what I'm doing wrong?



(I'd also appreciate help with the complex component.)


Answer



You are right in your use of Euler's formula to convert to rectangular coordinates, but you seemed to just divide the real part of the numerator by the real part of the denominator. To rectify the issue, you should multiply by the complex conjugate of the denominator on top and bottom (so that way the denominator becomes a real scalar that can be applied to both the imaginary and real parts of the numerator):



$$F = \frac{1-\cos 2 \pi x-i\sin2 \pi x}{R \left(1-\cos \frac{2 \pi x}{R}-i\sin\frac{2 \pi x}{R} \right)} \cdot \frac{1-\cos \frac{2 \pi x}{R}+i\sin\frac{2 \pi x}{R}}{1-\cos \frac{2 \pi x}{R}+i\sin\frac{2 \pi x}{R}}$$




Letting the following substitutions take place:



$\quad a = 1 - \cos2\pi x$



$\quad b = \sin 2\pi x$



$\quad c = 1 - \cos \frac{2\pi x}{R}$



$\quad d = \sin\frac{2\pi x}{R}$




Then we get:



$$F = \frac{a-bi}{R(c-di)} \cdot \frac{c+di}{c+di} = \frac{ac+bd + i(ad-bc)}{R(c^2 + d^2)} $$



Thus:



$$Re(F) = \frac{ac+bd}{R(c^2 + d^2)}$$



Appropriate trig identities and algebraic manipulation can massage the above into a nicer form if you desire.
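A quick cross-check of these formulas against direct complex evaluation (an illustration, not part of the original answer):

```python
import cmath
import math

def F(x, R):
    # the original expression, evaluated directly with complex arithmetic
    return (1 - cmath.exp(2j * math.pi * x)) / (R * (1 - cmath.exp(2j * math.pi * x / R)))

def parts(x, R):
    # the conjugate-multiplication formulas from the answer:
    # Re(F) = (ac + bd) / (R (c^2 + d^2)),  Im(F) = (ad - bc) / (R (c^2 + d^2))
    a = 1 - math.cos(2 * math.pi * x)
    b = math.sin(2 * math.pi * x)
    c = 1 - math.cos(2 * math.pi * x / R)
    d = math.sin(2 * math.pi * x / R)
    denom = R * (c * c + d * d)
    return (a * c + b * d) / denom, (a * d - b * c) / denom

for x in (0.3, 0.77, 1.4):
    re, im = parts(x, 3)
    assert abs(F(x, 3).real - re) < 1e-12
    assert abs(F(x, 3).imag - im) < 1e-12
print("formulas agree with direct evaluation")
```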



Friday, August 26, 2016

Confused with imaginary numbers

In 9th grade I had an argument with my teacher. I claimed that



${i}^{3}=i$



where $i=\sqrt{-1}$



But my teacher insisted (as is the accepted case) that:



${i}^{3}=-i$




My Solution:



${i}^3=(\sqrt{-1})^3$



${i}^3=\sqrt{(-1)^3}$



${i}^3=\sqrt{-1\times-1\times-1}$



${i}^3=\sqrt{-1}$




${i}^3=i$



Generally accepted solution:



${i}^3=(\sqrt{-1})^3$



${i}^3=\sqrt{-1}\times\sqrt{-1}\times\sqrt{-1}$



${i}^3=-\sqrt{-1}$




${i}^3=-i$



What is so wrong with my approach? Is it not logical?



I am using the positive square root. There seems to be something about the order in which the powers should be applied. There must be a logical reason, and I need help understanding it.

complex numbers - Why can my phone calculator do $e^{\pi\sqrt{-1}}$ but not $\sqrt{-1}$?


When I type in the identity $e^{\pi\sqrt{-1}}$ on my phone calculator (LG phone running Android), I get the correct result of $-1$


However, when I simply type $\sqrt{-1}$, it returns an error.


Why can the calculator do $e^{\pi\sqrt{-1}}$, but not do $\sqrt{-1}$ if $\sqrt{-1}$ is a direct part of $e^{\pi\sqrt{-1}}$?


Answer



$e^{\pi\sqrt{-1}}=\cos \pi + \sqrt{-1}\sin \pi=-1+0=-1$ which is a real number


BUT


$\sqrt{-1}=i$ is a complex number with a zero real part and a non-zero imaginary part.



Many calculators can carry out complex arithmetic internally, but displaying a result with a non-zero imaginary part is only possible on certain high-grade calculators.


Fractional/rational form of $0.999...$

Is it possible to express $0.999...$, a repeating decimal, as a fraction? Or as a ratio of two numbers?



Basically all (my) attempts at the problem cancel all the terms and return $1$. Is it even possible? Or has it been proven to be an exercise in futility?

sequences and series - Why does Wolfram Alpha claim that $\sum_{n=1}^{\infty}\sin\left(\frac{n}{\sqrt{n!}}\right)$ converges by a convergence test and at the same time diverges?



I'm confused: why does Wolfram Alpha claim that the sum $$\sum_{n=1}^{\infty}\sin \left(\frac{n}{\sqrt{n!}}\right) $$ is convergent by a test criterion, and at the same time divergent in the result shown in the picture below?



My guess is that this reflects the difficulty of evaluating that series, or that the convergence tests in Wolfram Alpha are not enough to show whether the series diverges or converges?




[screenshot of the Wolfram Alpha output]


Answer



The backend has an error causing it to believe that $\sum_{n=1}^k \sin\left( \frac{n}{\sqrt{n!}} \right)$ is
$$ -\frac{i e^{-\frac{i k}{\sqrt{\text{Sum$\grave{ }$SumqBaseDump$\grave{
}$u$\$$3851}!}}} \left(-1+e^{\frac{i k}{\sqrt{\text{Sum$\grave{
}$SumqBaseDump$\grave{ }$u$\$$3851}!}}}\right) \left(-1+e^{\frac{i
(k+1)}{\sqrt{\text{Sum$\grave{ }$SumqBaseDump$\grave{
}$u$\$$3851}!}}}\right)}{2 \left(-1+e^{\frac{i}{\sqrt{\text{Sum$\grave{
}$SumqBaseDump$\grave{ }$u$\$$3851}!}}}\right)} $$


where "$\text{Sum$\grave{ }$SumqBaseDump$\grave{ }$u$\$$3851}$" is an internal symbol that should never have appeared in any result returned by Sum[].



The series converges, to about $4.322187510593720884347337899899583088\dots$ because $\frac{n}{\sqrt{n!}}$ approaches $0$ exponentially rapidly and sine of a very small positive number is very slightly less than that number. So this series is bounded by a geometric series and the comparison test shows it converges.
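A direct partial-sum computation reproduces the quoted value (an aside, not part of the original answer); the terms decay so fast that sixty of them are plenty:

```python
import math

# partial sum of sin(n / sqrt(n!)); the terms decay super-exponentially
total = sum(math.sin(n / math.sqrt(math.factorial(n))) for n in range(1, 60))
print(total)  # ~4.3221875...
```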


math history - "L'Hôpital's rule" vs. "L'Hospital's rule"?




I know this is not strictly a mathematical question, and I considered putting it on Linguistics SE, but I decided that seeing as this is most probably a mathematical history question, it would be better placed here on math SE.



My question is:




Why is "L'Hôpital's rule" often referred to as "L'Hospital's rule" in English mathematical literature?




I am aware that the translation from French to English of "L'Hôpital" is "The Hospital", but I haven't seen other cases of French names that correspond to common nouns being translated into English, so why the special case here?




Again, sorry if this is completely the wrong place to put this question, and moderators feel free to migrate this question to a more appropriate SE board if one exists, but as I said, I believe this to be the most appropriate board.


Answer



There was a change in French orthography in the mid 18th century, where some mute s's were dropped and replaced by the circumflex accent. In the Marquis's own time (1661-1704), his name was spelled "l'Hospital".



EDIT: Apparently in at least one letter the Marquis spelled his name "Lhospital". The 1716 edition of "Analyse des infiniment petits" at http://archive.org/texts/flipbook/flippy.php?id=infinimentpetits1716lhos00uoft has "l'Hospital".
The 1768 edition of the book at http://archive.org/stream/analysedesinfini00lhos#page/n13/mode/2up has "l'Hôpital".


Thursday, August 25, 2016

general topology - 1-1 correspondence between [0,1] and [0,1)

I wonder how to build a 1-1 correspondence between $[0,1]$ and $[0,1)$. My professor offers an example in which $1$ in the first set corresponds to $1/2$ in the second set, and $1/2$ in the first set corresponds to $1/4$ in the second. I don't quite understand it. Does it mean every element in the first set corresponds to half its value in the second set? Wouldn't that leave some elements of the second set unmatched? Does it still count as a 1-1 correspondence? Does it connect to the Schröder–Bernstein theorem?
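The professor's examples ($1 \mapsto 1/2$, $1/2 \mapsto 1/4$) fit a standard construction: send each point $1/2^k$ (including $1 = 1/2^0$) to $1/2^{k+1}$, and fix every other point. Only the special sequence is shifted, so nothing in $[0,1)$ is left out. A small sketch with exact arithmetic:

```python
from fractions import Fraction

def f(x):
    # bijection [0,1] -> [0,1): send 1/2^k to 1/2^(k+1), fix everything else
    x = Fraction(x)
    if x.numerator == 1 and (x.denominator & (x.denominator - 1)) == 0:
        return x / 2
    return x

assert f(1) == Fraction(1, 2)
assert f(Fraction(1, 2)) == Fraction(1, 4)
assert f(Fraction(1, 3)) == Fraction(1, 3)  # generic points are fixed
assert f(0) == 0
print("spot checks pass")
```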

elementary number theory - Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$

For all $a, m, n \in \mathbb{Z}^+$,


$$\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$$
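Before proving it, the identity can be spot-checked numerically (a sketch, not part of the original question):

```python
import math

# spot-check gcd(a^n - 1, a^m - 1) == a^gcd(n, m) - 1
for a in (2, 3, 10):
    for n in (4, 6, 9):
        for m in (6, 10, 15):
            assert math.gcd(a**n - 1, a**m - 1) == a**math.gcd(n, m) - 1
print("identity holds on all samples")
```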

elementary number theory - Proof of infinitely many primes, clarification



Proof:
The proof is by contradiction.
Suppose there are only finitely many primes.
Let the complete list be $p_1,p_2,\dots,p_n$.
Let $N = p_1p_2 \dots p_n+1$.
According to the Fundamental Theorem of Arithmetic, $N$ must be divisible by some prime.
This must be one of the primes in our list. Say $p_k \mid N$.
But $p_k\mid p_1\dots p_n$, so $p_k\mid(N-p_1 \dots p_n) = 1$
Hence contradiction.




I don't see how this proof works. I understand that $N$ isn't necessarily prime, but I don't understand how it shows that some prime wasn't in our list. Couldn't a number be made up of various powers of the given primes?



Someone please explain.


Answer



Suppose that there are only $n$ primes. Let $p_1,p_2,p_3,\cdots,p_n$ be all the primes in the world. Let $N$ be the product of all these primes plus $1$, i.e. $N=p_1p_2 \cdots p_n+1$. We know that $N$ must be a product of primes. But the only primes in the world are $p_1,p_2,\cdots,p_n$. So one of these must be a factor of $N$. Since it doesn't matter which, let's say $p_1$ is a factor of $N$. Then we use
$$
N=p_1p_2\cdots p_n +1
$$
to find that

$$
N-p_1p_2\cdots p_n =1
$$
But $p_1$ divides the left side of the equality since $p_1$ is a factor of $N$ (from above) and it divides $p_1p_2\cdots p_n$ because it's part of the product.



But since it divides the left side it must divide the right side of the equality. But that means $p_1$ divides $1$. But that can't happen, because $p_1$ has to be at least as big as $2$ (the smallest prime there is) and no number except $1$ and $-1$ divides $1$. Therefore, we have a contradiction. This means our assumption that there are only $n$ primes is wrong.



What we have done is used our primes $p_1,p_2,\cdots,p_n$ to make a number which needs a new prime not among the list $p_1,p_2,\cdots,p_n$. So for any $n$ primes you have, you can always make a number forcing you to need at least one more prime which lets you make another such number and so forth. So of course there are infinitely many primes.


elementary number theory - Prove that if $p\mid ab$ where $a$ and $b$ are positive integers and $a\lt p$ then $p\le b$

I have found an old textbook, "Real Variables" by Claude W. Burrill and John R. Knudsen. In the first chapter this textbook uses 15 axioms to derive much of the well-known basic theory of the integers. I have been reading it and solving all the exercises, and so far so good, until exercise 1-27, which asks the following: "Prove that if $p$ is prime and divides $ab$ where $a$ and $b$ are positive and $a\lt p$, then $p\le b$." This would be very easy if we could assume Euclid's lemma, but it hasn't been proven at that point, and the very next exercise asks for its proof, so I believe there is a way to prove this without Euclid's lemma. But how? Is there even a way to prove it without Euclid's lemma? I also believe I'm not allowed to use Bézout's identity, because its proof is exercise 1-29.



I have been thinking about this problem since yesterday and i searched online for exercise solutions for this textbook but there was no results.



As another question:does the theorem above imply Euclid's lemma in a straightforward way?

exponential function - How to show limit containing exp and cos series inside?


I've got some limit to show.


$$\lim_{x \rightarrow \infty} \frac{\sum_{n=0}^{\infty} \frac{x^n}{n!}-1}{1-\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!}},$$ which is equivalent to $$\lim_{x \rightarrow \infty} \frac{\exp(x)-1}{1-\cos(x)}$$



I tried to split it into an "even" part and an "odd" part, I mean first calculating for $n=0,2,4,6,\dots$ and then for $n=1,3,5,7,\dots$, but it all got messy and didn't really lead me to any solution. This is the first task of this kind I have had to solve, so I don't know any good tricks/ideas I can use here...


I'd appreciate some help! thanks


EDIT: some people state that this limit doesn't exist. But just being curious: what if we make a series out of it? It seems like it would diverge, right? Why then does Wolfram Alpha give the answer $\infty$ for this limit?


Answer



Since $2\geq 1-\cos(x) \geq 0$ for all $x$, we have


$$\frac{e^x - 1}{1 - \cos(x)} \geq \frac{e^x - 1}{2}$$


This value becomes arbitrarily large as $x$ becomes large. This means the limit, if any, is $\infty$ (this is of course a generalized limit; no standard limit exists).


Note, however, that even this generalized limit does not actually exist, as the function is not defined at the points $x=2\pi k$ for $k\in\mathbb{N}$.


Wednesday, August 24, 2016

proof writing - Prove by induction that $3^{4n + 2} + 1$ is divisible by $10$


Prove by induction: $3^{(4n+2)} + 1$ is divisible by $10$.


My base step: $3^{(4n+2)} + 1$ with $n = 1$ gives me $3^6 + 1 = 730$, which is divisible by $10$. However, then I have to do the induction step, and I am kind of stuck because I do not have an equality to work with. How do I finish proving this by induction?


Many thanks.


Edit: I am thinking of creating a formula which involves $10n$? Would this be correct?


Answer



$f(n): 3^{4n+2}+1$


STEP-$1$:



$f(1): 3^{4+2}+1 = 730$, which is divisible by $10$. Hence $f(1)$ holds true.


STEP-$2$:


Now assume the claim holds for $n=k$; that is, $f(k) = 3^{4k+2}+1$ is divisible by $10$.


Now we just need to prove that the criteria is satisfied for $n=k+1$.


STEP-$3$:


$$f(k+1) = 3^{4(k+1)+2}+1$$ $$ = 3^{4k+2}\cdot 3^{4}+1$$ $$ = 3^{4k+2}\cdot(80+1)+1$$ $$ = (3^{4k+2}\cdot 80+3^{4k+2})+1$$ $$ = 3^{4k+2}\cdot 80+(3^{4k+2}+1)$$


The first term is clearly divisible by $10$, and the bracketed term $3^{4k+2}+1$ is divisible by $10$ by our assumption in step 2. So $f(k+1)$ holds true.


Hence, by induction, $3^{4n+2}+1$ is divisible by $10$.
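A quick numerical confirmation of the statement (an aside, not part of the original answer):

```python
# check 3^(4n+2) + 1 is divisible by 10 for a range of n
for n in range(1, 200):
    assert (3 ** (4 * n + 2) + 1) % 10 == 0
print("divisible by 10 for n = 1..199")
```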


probability - Binomial random variable expectation bound

Let $X,Y \sim \text{Bin}(n,0.5)$ for some positive $n$.



What is a lower bound for $\mathbb{E}(XY)$? When is it achieved?



My try:



I got confused by the following two results and couldn't proceed!




By Jensen's, $\mathbb{E}(XY) \geq \mathbb{E}(X)\mathbb{E}(Y)$. But we also know that $\text{cov}(X,Y)=\mathbb{E}(XY) - \mathbb{E}(X)\mathbb{E}(Y)$, and this quantity can be negative!



Please help me to proceed. Thanks in advance!

complex analysis - How to finish proof of ${1 \over 2}+ \sum_{k=1}^n \cos(k\varphi) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$




I'm trying to prove the identity $$ {1 \over 2}+ \sum_{k=1}^n \cos(k\varphi ) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$$



What I've done so far:



From the geometric series $\sum_{k=0}^{n-1}q^k = {1-q^n \over 1 - q}$ with $q = e^{i \varphi}$, taking the real part on both sides, I got



$$ \sum_{k=0}^{n-1}\cos (k\varphi ) = {\sin {\varphi \over 2} - \cos (n\varphi) + \cos (n-1)\varphi \over 2 \sin {\varphi \over 2}}$$



I checked all my calculations twice and found no mistake. Then I used trigonometric identities to get




$$ {1 \over 2} + \sum_{k=1}^{n-1}\cos (k\varphi ) + {\cos n\varphi \over 2 } = {\sin \varphi \sin (n \varphi ) \over 2 \sin {\varphi \over 2}}$$




How to finish this proof? Is there a way to rewrite



$\sin \varphi \sin (n \varphi )$ as



$\sin({n+1 \over 2}\varphi) - \sin {\varphi \over 2} \cos (n \varphi)
$?




Answer



There is a mistake in the real part.



$$
\frac{q^n - 1}{q - 1} = \frac{e^{in\phi} - 1}{e^{i\phi} - 1}
= \frac{e^{i(n-1/2)\phi} - e^{-i\phi/2}}{e^{i\phi/2} - e^{-i\phi/2}}
= \frac{- i e^{i(n-1/2)\phi} + i e^{-i\phi/2}} {2\sin{\phi/2}}
$$
the real part is

$$
\frac{\sin ((n-1/2)\phi) + \sin(\phi/2)} {2\sin{\phi/2}}
$$
yielding the right result.






However, there is a simpler solution:
$$
1 + 2\sum_{k=1}^n \cos k\phi

= 1+ \sum_{k=1}^n \left(e^{i k\phi} + e^{-i k\phi}\right) = \sum_{k=-n}^n e^{i k\phi}
$$
which simplifies to
$$
e^{-i n\phi} \frac{1 - e^{i (2n+1)\phi}}{1 - e^{i\phi}}
= \frac{e^{-i (n + 1/2)\phi} - e^{i (n + 1/2)\phi}}
{ e^{-i\phi/2} - e^{i\phi/2}} = \frac{\sin((n + 1/2)\phi)}{\sin(\phi/2)}
$$


calculus - Find the limit of $\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$ without L'Hospital's rule



I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$
and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$.
Any ideas?


Answer



Simply differentiate $f(x)=\ln(e^x +1)$ at the point $x=0$ and you'll get the answer. In fact, this limit is precisely the definition of the derivative of $f$ at $0$!


How to prove Cauchy-Schwarz Inequality in $R^3$?



I am having trouble proving this inequality in $R^3$. It makes sense in $R^2$ for the most part. Can anyone at least give me a starting point to try. I am lost on this thanks in advance.


Answer




You know that, for any $x,y$, we have that



$$(x-y)^2\geq 0$$



Thus



$$y^2+x^2\geq 2xy$$



Cauchy-Schwarz states that




$$x_1y_1+x_2y_2+x_3y_3\leq \sqrt{x_1^2+x_2^2+x_3^2}\sqrt{y_1^2+y_2^2+y_3^2}$$



Now, for each $i=1,2,3$, set



$$x=\frac{x_i}{\sqrt{x_1^2+x_2^2+x_3^2}}$$



$$y=\frac{y_i}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



We get




$$\frac{y_1^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_1^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_1}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_1}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



$$\frac{y_2^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_2^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_2}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_2}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



$$\frac{y_3^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_3^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_3}{\sqrt{x_1^2+x_2^2+x_3^2}}\frac{y_3}{\sqrt{y_1^2+y_2^2+y_3^2}}$$



Summing all these up, we get



$$\frac{y_1^2+y_2^2+y_3^2}{{y_1^2+y_2^2+y_3^2}}+\frac{x_1^2+x_2^2+x_3^2}{{x_1^2+x_2^2+x_3^2}}\geq 2\frac{x_1y_1+x_2y_2+x_3y_3}{\sqrt{y_1^2+y_2^2+y_3^2}\sqrt{x_1^2+x_2^2+x_3^2}}$$




$$\sqrt{y_1^2+y_2^2+y_3^2}\sqrt{x_1^2+x_2^2+x_3^2}\geq {x_1y_1+x_2y_2+x_3y_3}$$



This works for $\mathbb R^n$. We sum up through $i=1,\dots,n$ and set



$$y=\frac{y_i}{\sqrt{\sum y_i^2}}$$



$$x=\frac{x_i}{\sqrt{\sum x_i^2}}$$



Note this stems from the most fundamental inequality $x^2\geq 0$.
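A randomized numerical spot-check of the inequality in $\mathbb R^3$ (Python sketch):

```python
import math
import random

# Spot-check Cauchy-Schwarz in R^3: x.y <= |x| |y| for random vectors.
random.seed(0)
violations = 0
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(3)]
    y = [random.uniform(-10, 10) for _ in range(3)]
    dot = sum(a * b for a, b in zip(x, y))
    bound = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    if dot > bound + 1e-9:  # small tolerance for round-off
        violations += 1
print(violations)  # 0
```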


integration - Real Analysis Methodologies to show $gamma =2int_0^infty frac{cos(x^2)-cos(x)}{x},dx$



In THIS ANSWER, I used straightforward complex analysis to show that




$$\gamma =2\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx \tag 1$$





where $\gamma =-\int_0^\infty \log(x) e^{-x}\,dx$ is the Euler-Mascheroni Constant.



The key in the derivation of $(1)$ was to transform the cosine terms into real exponential ones.



To date, I have been unable to use strictly real analysis, without appealing to tabulated results of special functions (e.g., use of the $\text{Cin}(x)$ and $\text{Ci}(x)$ functions), to prove $(1)$.



I have tried introducing a parameter and using "Feynman's Trick" to augment the integral into something manageable. Or somewhat equivalently, rewriting the integral in $(1)$ as a double integral and proceeding by exploiting Fubini-Tonelli.




QUESTION: What are ways to prove $(1)$ without relying on complex analysis and without simply appealing to tabulated relationships of special functions. For example, stating that the $Ci(x)$ function is defined as $\text{Ci}(x)\equiv -\int_x^\infty\frac{\cos(t)}{t}\,dt=\gamma +\log(x) +\int_0^x \frac{\cos(t)-1}{t}\,dt$ is unsatisfactory unless one proves the latter equality.




Answer



It turns out that we have the following observation:




Observation. For a nice function $f : [0,\infty) \to \Bbb{C}$, we have



$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + c(f) + o(1) \qquad \text{as } \epsilon \to 0^+ \tag{1} $$



where the constant $c(f)$ is computed by




$$ c(f) = \lim_{R\to\infty}\left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0)\log R\right) - f(0)\gamma. \tag{2} $$




The reasoning is surprisingly simple: First, define $g(x) = (f(x) - f(0)\mathbf{1}_{(0,1)}(x))/x$ and notice that



$$ \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -f(0)\log\epsilon + \int_{\epsilon}^{\infty} g(x) \, dx. $$



Assuming that the LHS of $\text{(1)}$ exists for all $\epsilon > 0$ and that $f$ behaves nice near $x = 0$, this implies $\text{(1)}$. Next, notice that $c(f) = \mathcal{L}g(0)$ and that $-(\mathcal{L}g(s))' = \mathcal{L}f(s) - f(0) (1-e^{-s})/s$. Therefore




\begin{align*}
c(f)
&= \lim_{R\to\infty} \int_{0}^{R} (-\mathcal{L}g(s))' \, ds \\
&= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) (1 - e^{-R})\log R + f(0) \int_{0}^{R} e^{-s}\log s \, ds \right) \\
&= \lim_{R\to\infty} \left( \int_{0}^{R} \mathcal{L}f(s) \, ds - f(0) \log R \right) - f(0)\gamma.
\end{align*}






At this moment this is just a heuristic computation. For a broad class of functions for which the LHS of $\text{(1)}$ exists, however, this computation can be made rigorous. This is particularly true for our function $f(x) = \cos x$. Now plugging $\mathcal{L}f(s) = \frac{s}{s^2+1}$ shows that $c(f) = -\gamma$ and thus




$$ \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx = -\log\epsilon - \gamma + o(1). $$



Plugging this asymptotics, we have



$$
\int_{\epsilon}^{\infty} \frac{\cos(x^2) - \cos x}{x} \, dx
= \frac{1}{2}\int_{\epsilon^2}^{\infty} \frac{\cos x}{x} \, dx - \int_{\epsilon}^{\infty} \frac{\cos x}{x} \, dx
= \frac{1}{2}\gamma + o(1)
$$




and the identity follows by letting $\epsilon \to 0^+$.






Here, the constant $c(f)$ can be thought as a regularized value of the divergent integral $\int_{0}^{\infty} \frac{f(x)}{x} \, dx$. This has the following nice properties (whenever they exist)




  • $c$ is linear: $c(\alpha f(x) + \beta g(x)) = \alpha c(f) + \beta c(g)$.

  • $c(f(x^p)) = \frac{1}{p}c(f)$ for $p > 0$,


  • $c(f(px)) = c(f) - f(0)\log p$ for $p > 0$,



Together with some known values, we can easily compute other types of integrals. For instance, using the fact that $c(\cos x) = -\gamma$ and $c(e^{-x}) = -\gamma$, we have



\begin{align*}
\int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, dx
&= c\left\{ \cos (x^p) - \exp(-x^q) \right\} \\
&= \frac{1}{p}c(\cos x) - \frac{1}{q}c(e^{-x})
= \gamma\left( \frac{1}{q} - \frac{1}{p}\right)

\end{align*}



for $p, q > 0$.
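For a purely numerical sanity check of $(1)$ (not a proof): the asymptotics above give $\gamma=-\left[\int_0^1\frac{\cos x-1}{x}\,dx+\int_1^\infty\frac{\cos x}{x}\,dx\right]$, and the sketch below (Python/NumPy, with the tail handled by two integrations by parts) recovers $\gamma$ to several digits:

```python
import numpy as np

def simpson(f, a, b, n):
    # Composite Simpson rule with n (even) subintervals.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

def g(x):
    # (cos x - 1)/x, extended by its limit 0 at x = 0.
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = (np.cos(x[nz]) - 1) / x[nz]
    return out

A = simpson(g, 0.0, 1.0, 2000)                           # int_0^1 (cos x - 1)/x dx
T = 1000.0
B = simpson(lambda x: np.cos(x) / x, 1.0, T, 2_000_000)  # int_1^T cos x / x dx
tail = -np.sin(T) / T + np.cos(T) / T**2                 # ~ int_T^inf cos x / x dx
gamma_numeric = -(A + B + tail)
print(gamma_numeric)  # close to 0.57722 (Euler-Mascheroni)
```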


sequences and series - Show that $sum^limits{infty}_{n=1}frac{n-sqrt n}{n^{2}+5n}$ diverges


Show that $$\sum^{\infty}_{n=1}\frac{n-\sqrt n}{n^{2}+5n} \ \text{ diverges.}$$


I have tried Root test, Ratio Test, Cauchy condensation Test but all have failed. I think this has to be done by Comparison Test or Limit Comparison Test. But what is the suitable form it has to be reduced to?


Answer




Note that $$\dfrac{n-\sqrt{n}}{n^2+5n} \sim \dfrac1n$$ Conclude using limit comparison test.



EDIT Updated on the request of OP


$$\dfrac{n-\sqrt{n}}{n^2+5n} = \dfrac{1-1/\sqrt{n}}{n+5} = \dfrac1n \underbrace{\left(\dfrac{1-1/\sqrt{n}}{1+5/n}\right)}_{\to 1 \text{ as } n \to \infty}$$


linear algebra - Roots of a polynomial with real cofficients

Good evening;



Let $\alpha, \beta \in\mathbb{R}$, $n\in\mathbb{N}$. Please can you help me to prove that every polynomial of the form



$$ f(x)=x^{n+3}+\alpha x+\beta $$



admits at most $3$ real roots. Thank you for your help.

Tuesday, August 23, 2016

combinatorics - Proof for this binomial coefficient's equation



For $k, l \in \mathbb N$ $$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i=\binom{k+l+2}{k+1}-1$$ How can I prove this?



I thought some ideas with Pascal's triangle, counting paths on the grid and simple deformation of the formula.


It can be checked here (wolframalpha).


If the proof is difficult, please let me know the main idea.


Sorry for my poor English.


Thank you.


EDIT: I got the great and short proof using Hockey-stick identity by Anubhab Ghosal, but because of this form, I could also get the Robert Z's specialized answer. Then I don't think it is fully duplicate.



Answer



Your idea about a combinatorial proof which is related to counting paths in a grid is a good one!


The binomial $\binom{i+j}i$ counts the paths on the grid from $(0,0)$ to $(i,j)$ moving only right or up. So the double sum $$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i-1$$ counts the number of all such paths from $(0,0)$ to any vertex inside the rectangle $(0,k)\times (0,l)$ different from $(0,0)$.


Now consider the paths from $(0,0)$ to $(k+1,l+1)$ different from $$(0,0)\to (0,l+1)\to (k+1,l+1)\quad\text{and}\quad (0,0)\to (k+1,0)\to (k+1,l+1)$$ which are $$\binom{k+l+2}{k+1}-2.$$


Now any path of the first kind can be completed to a path of the second kind by changing direction, going to the boundary of the rectangle $(0,k+1)\times(0,l+1)$ and then moving to the corner $(k+1,l+1)$ along the side.


Is this a bijection between the first set of paths and the second one?
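Independently of the bijection, the identity itself is easy to confirm by brute force for small $k,l$ (Python sketch):

```python
from math import comb

# Brute-force check of sum_{i=0}^k sum_{j=0}^l C(i+j, i) = C(k+l+2, k+1) - 1
# for all small k, l.
ok = all(
    sum(comb(i + j, i) for i in range(k + 1) for j in range(l + 1))
    == comb(k + l + 2, k + 1) - 1
    for k in range(8) for l in range(8)
)
print(ok)  # True
```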


calculus - How big is $displaystyle int_{1}^x sqrt t left( sqrt{frac{log (t+1)}{log(t)}}+ sqrt{frac{log (t)}{log(t+1)}}-2,right) dt$?



During the analysis of an optimization algorithm I encountered the following sum



$\displaystyle \sum_{n=1}^N\sqrt n \left( \sqrt{\frac{\log (n+1)}{\log(n)}}+ \sqrt{\frac{\log (n)}{\log(n+1)}}-2\,\right)$



I am interested in the order of the above as $N \to \infty$.




For the moment let's assume the terms are decreasing as this allows us estimate the sum using the corresponding integral, which tends to be easier to work with since we can do substitutions and other tricks.



The integral seems to grow incredibly slowly. For example see
https://www.desmos.com/calculator/y5fs86fbzw where the integrand is orange and the integral is green:



[plot: integrand (orange) and its integral (green); see the Desmos link above]



The integral is about $0.7$ for $x=10$ and $0.712$ for $x$ equal one million.
It is very quickly outpaced by $\log(\log x)$ and if you click the link and scroll to the right you'll see it's eventually outpaced by $\log(\log (\log x))$.




I would wager the integral tends to infinity but is eventually outpaced by any $\log^n(x)$.



If you estimate $\log(x+1) \simeq \log(x)+1/x$ for large $x$ and $\sqrt {1+\epsilon} \simeq 1 + \frac{\epsilon}{2}$ for small $\epsilon$ you see the square roots are approximately $\displaystyle 1 \pm \frac{1}{2 x \log (x)} $ but with opposite signs. Hence they cancel the $2$ and you end up with $\sqrt x$ times their difference. That should be very small indeed.



Any ideas on how to proceed?


Answer



As Peter Foreman hinted in their comment: one can use power series to get more precise approximations than $\sqrt{1+\varepsilon} \approx 1+\varepsilon/2$, for example $\sqrt{1+\varepsilon} =1+\varepsilon/2 - \varepsilon^2/8 + O(\varepsilon^3)$.



In this way one can show that

\begin{align*}
\sqrt{\frac{\log(t+1)}{\log t}} &= 1+\frac{1}{2 t \log t}+\frac{-2 \log t-1}{8 t^2 \log^2t}+O\big(t^{-3}\big) \\
\sqrt{\frac{\log t}{\log(t+1)}} &= 1-\frac{1}{2 t \log t}+\frac{2 \log t+3}{8 t^2 \log^2t}+O\big(t^{-3}\big),
\end{align*}

which shows that your integral is
$$
\int_1^x \bigg( \frac1{4t^{3/2}(\log t)^2} + O\big(t^{-5/2}\big) \bigg)\,dt,
$$

and in particular does not tend to infinity but in fact is convergent.
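The expansions can be sanity-checked numerically: the ratio of the true integrand to $\frac{1}{4t^{3/2}\log^2 t}$ should approach $1$ as $t$ grows (Python sketch; for much larger $t$ the difference of square roots falls below double-precision resolution):

```python
import math

# Compare sqrt(t)*(sqrt(L1/L0) + sqrt(L0/L1) - 2), L0 = log t, L1 = log(t+1),
# against the leading-order approximation 1/(4 t^{3/2} log(t)^2).
def exact(t):
    r = math.log(t + 1) / math.log(t)
    return math.sqrt(t) * (math.sqrt(r) + 1 / math.sqrt(r) - 2)

def approx(t):
    return 1 / (4 * t ** 1.5 * math.log(t) ** 2)

ratios = [exact(t) / approx(t) for t in (10.0**3, 10.0**4, 10.0**5)]
print(ratios)  # each close to 1
```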


Different ways to solve nested radicals with cubic roots

I want to obtain the result of:
$$\sqrt[3]{\sqrt{5}+2}-\sqrt[3]{\sqrt{5}-2}$$
Which turns out to be $1$. Now, let's pretend we don't know what the result is. I solved it by setting
$$\sqrt[3]{\sqrt{5}+2}-\sqrt[3]{\sqrt{5}-2}=z$$
Then by cubing the equation, using $(u-v)^3=u^3-v^3-3uv(u-v)$ and $(\sqrt{5}+2)(\sqrt{5}-2)=1$:
$$4-3\left(\sqrt[3]{\sqrt{5}+2}-\sqrt[3]{\sqrt{5}-2} \right)=z^3$$
$$ z^3+3z-4=0$$

Now, just by an inexplicable mysticism, the equation can be restated as:
$$(z-1)(z^2+z+4)=0$$
Therefore, $z=1$, which is what I wanted to prove.



Are there another ways to solve this problem? I find this method quite impractical and not so elegant. I'm interested in ways to solve it that are MUCH simpler.



Thanks in advance!

abstract algebra - Suppose that for all integers $x$, $x|a$ and $x|b$ if and only if $x|c$. Then $c = gcd(a,b)$



The following question is from Pinter's Abstract Algebra:




Suppose that for all integers $x$, $x|a$ and $x|b$ if and only if $x|c$. Prove $c = \gcd(a,b)$.



The definition of greatest common divisor is the usual two conditions:
(1) $c$ is a common divisor of $a$ and $b$; and
(2) any common divisor of $a$ and $b$ must divide $c$.



Further, here $\gcd$ means the greatest common positive divisor.



I just can't seem to figure out how to approach the problem.
I've tried starting with the following idea: if we have $x|a \iff x|c$, then we have that either $a|c$ or $c|a$. Similarly, $b|c$ or $c|b$.
If I could then deduce that $c|a$ and $c|b$ then it would be possible to almost conclude it there, but there appears to be no way to get to this point.



Any help would be appreciated, thanks.


Answer




As John Hughes points out you need $c > 0$.



Suppose $x|a$ and $x|b$ implies $x|c$. Since $\gcd(a,b)$ divides both $a$ and $b$, it divides $c$ too. Thus $\gcd(a,b) | c$ and in particular $\gcd(a,b) \le c$. (Here you need $c > 0$).



Suppose $x|c$ implies $x|a$ and $x|b$. Since $c|c$, you conclude that $c|a$ and $c|b$. Thus $c$ is a common divisor of $a$ and $b$ it cannot exceed the greatest common divisor, so $c \le \gcd(a,b)$.



If both implications hold you get $c = \gcd(a,b)$.
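A brute-force sketch of this characterization for small numbers (Python; the search bounds below are arbitrary but large enough to cover every relevant divisor):

```python
from math import gcd

# For small positive a, b: c = gcd(a, b) is the unique positive integer with
#   {x : x | a and x | b} == {x : x | c}.
def divisors_match(a, b, c, bound=60):
    return all((a % x == 0 and b % x == 0) == (c % x == 0)
               for x in range(1, bound + 1))

ok = all(
    divisors_match(a, b, gcd(a, b)) and
    all(not divisors_match(a, b, c) for c in range(1, 40) if c != gcd(a, b))
    for a in range(1, 13) for b in range(1, 13)
)
print(ok)  # True
```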


linear algebra - $Ain M_n(mathbb{R})$ symmetric matrix , $A-lambda I$ is Positive-definite matrix- prove: $det Ageq a^n $




Let $a>1$ and $A\in M_n(\mathbb{R})$ be a symmetric matrix such that $A-\lambda I$ is a positive-definite matrix (all eigenvalues $> 0$) for every $\lambda < a$. Prove that $\det A\geq a^n$.

First, I'm not sure what it means that $A-\lambda I$ is positive definite for every $\lambda < a$: does it mean that for every such $\lambda$ all eigenvalues of $A-\lambda I$ are bigger than $0$, or not?



Then, If it's symmetric I can diagonalize it, I'm not sure what to do...



Thanks!


Answer



$A-\lambda I$ is positive definite for every $\lambda < a$ means that for every $\epsilon > 0$, $A-(a-\epsilon) I$ is positive definite. That means each eigenvalue of $A$ is larger than $a-\epsilon$, thus their product $\det A\ge (a-\epsilon)^n$ ... let $\epsilon$ go to zero, and you get what you want.
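A numerical illustration with randomly generated data (NumPy sketch; the specific $n$, $a$, and eigenvalues below are my own arbitrary choices, not from the question):

```python
import numpy as np

# Build a symmetric A whose eigenvalues all exceed a (so A - lambda*I is
# positive definite for every lambda < a) and check det A >= a^n.
rng = np.random.default_rng(0)
n, a = 5, 1.7
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal matrix
eigs = a + rng.uniform(0.01, 2.0, size=n)      # all eigenvalues > a
A = Q @ np.diag(eigs) @ Q.T                    # symmetric by construction
print(np.linalg.det(A), a**n)                  # det A exceeds a^n
```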


Monday, August 22, 2016

combinatorics - Fermat's Combinatorial Identity: How to prove combinatorially?

$$\binom{r}{r} + \binom{r+1}{r} + \binom{r+2}{r} + \dotsb + \binom{n}{r} = \binom{n+1}{r+1}$$



I don't have much experience with combinatorial proofs, so I'm grateful for all the hints.



(Presumptive) Source: Theoretical Exercise 1.11, P18, A First Course in Pr, 8th Ed, by S Ross

Sunday, August 21, 2016

number theory - N and M are positive integers having same digits but in different order, N + M = 10^10, then prove that N is divisible by 10.





"N and M are positive integers having same digits but in different order, $$N + M = 10^{10}$$ then prove that N is divisible by 10."




I have tried solving the question but to no avail. Please help.


Answer



Suppose that the last digits $p$ and $q$ of $M$ and $N$, respectively, are not zero. Since $M+N$ ends in $0$, it means that $p+q=10$.



Case 1: $p\ne q$




Let us first consider the case where $p\ne q$. I will use concrete values for $p,q$ but you can replace them with any other two. Suppose $p=6$, $q=4$.



The numbers $M,N$ are:



$$.........6 \\
.........4$$



Now, number 4 has to appear somewhere in the first number:




$$..4......6 \\
.........4$$



Under that number we must have digit 5 in the second number:



$$..4......6 \\
..5......4$$



Now you have 5 in the second number, it has to appear somewhere in the first:




$$..4.5....6 \\
..5......4$$



The matching number under it has to be 4:



$$..4.5....6 \\
..5.4....4$$



But now you have two 4s in the second number so you have to add one more to the first number and again add one 5 below it:




$$..4.5..4.6 \\
..5.4..5.4$$



But now you have the wrong count of 5s... and you are clearly in the infinite loop that you cannot exit.



Case 2: $p=q=5$



We have ten digit numbers both ending in 5:



$$.........5\\

.........5
$$



We have to fill 9 remaining positions in each number.



Missing digits come in pairs with their sum equal to 9 (digits cannot be equal, obviously). For example:



$$.3.......5\\
.6.......5
$$




But to keep the same digits in both numbers you have to put 6 into the first number and 3 into the last number:



$$.3..6....5\\
.6..3....5
$$



We have 7 positions left. Pick any two digits and you will have only five empty positions left:



$$.3..62.7.5\\

.6..37.2.5
$$



Eventually, you will end up with two incomplete numbers with all the same digits, just in a different order, and one empty position in each one. Whatever you choose to put there will break the "symmetry", your numbers won't have the same collection of digits.



Based on these two cases, we have a conclusion: the only way to construct the requested numbers is to start with $p=q=0$.
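The argument is independent of the number of digits, so it can be sanity-checked by brute force on a small analogue such as $N+M=10^4$ (Python sketch):

```python
# Small analogue of the claim: if N + M = 10**4 and N, M use the same
# multiset of digits, then N is divisible by 10.
matches = [n for n in range(1, 10**4)
           if sorted(str(n)) == sorted(str(10**4 - n))]
print(matches[:5], all(n % 10 == 0 for n in matches))
```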


Saturday, August 20, 2016

integration - Find $int_{0}^{infty }frac{cos x-cos x^2}{x}mathrm dx$



Recently, I met a integration below

\begin{align*}
\int_{0}^{\infty }\frac{\sin x-\sin x^2}{x}\mathrm{d}x&=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\int_{0}^{\infty }\frac{\sin x^{2}}{x}\mathrm{d}x\\
&=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x\\
&=\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x=\frac{\pi }{4}
\end{align*}
the same way seems doesn't work in
$$\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$$
but why? Then how to evaluate it? Thx!


Answer



Because $\displaystyle \int_{0}^{\infty }\frac{\cos x}{x}\, \mathrm{d}x$ does not converge, you can see here for a proof.




So we have to find another way to evaluate it.



I'll think about it and post a solution later.






Solution:
\begin{align*}
\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x&=\lim_{\alpha \rightarrow \infty }\int_{0}^{\alpha }\frac{\cos x-\cos x^2}{x}\,\mathrm{d}x=\lim_{\alpha \rightarrow \infty }-\int_{0}^{\alpha }\frac{1-\cos x+\cos x^2-1}{x}\,\mathrm{d}x\\

&=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\int_{0}^{\alpha }\frac{1-\cos x^2}{x}\,\mathrm{d}x \right )\\
&=\lim_{\alpha \rightarrow \infty }\left ( -\int_{0}^{\alpha }\frac{1-\cos x}{x}\,\mathrm{d}x+\frac{1}{2}\int_{0}^{\alpha^{2} }\frac{1-\cos x}{x}\,\mathrm{d}x \right )\\
&=\lim_{\alpha \rightarrow \infty }\left \{ \mathrm{Ci}\left ( \alpha \right )-\gamma -\ln\alpha +\frac{1}{2}\left [ \gamma +\ln\alpha ^{2}-\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ] \right \}\\
&=\lim_{\alpha \rightarrow \infty }\left [ -\frac{\gamma }{2}+\mathrm{Ci}\left ( \alpha \right )-\frac{1}{2}\mathrm{Ci}\left ( \alpha ^{2} \right ) \right ]\\
&=-\frac{\gamma }{2}
\end{align*}
where $\mathrm{Ci}\left ( \cdot \right )$ is Cosine Integral and we can easily find that $\mathrm{Ci}\left ( \alpha \right )$ goes to $0$ when $\alpha \rightarrow \infty $.


divisibility - Solving system of congruences using CRT

$$4x \equiv 5 \pmod 7$$ $$7x \equiv 4 \pmod {15}$$


I need to solve this system of congruences using the Chinese Remainder Theorem. It would be easy to use the CRT if it were not for the coefficients $4$ and $7$ in front of $x$. How can I do this? Just divide both congruences by $4$ and $7$ and use the CRT on something like:


$$x \equiv \frac54 \pmod 7$$ $$x \equiv \frac47 \pmod {15}$$


? It gives me $\frac{283}4 + 105k$ as the result.
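For what it's worth, one standard way (not necessarily the intended one) is to clear the coefficients with modular inverses rather than fractions; a Python sketch, where the inverses $2$ and $13$ are easy to verify by multiplication, and a brute-force pass over the combined modulus $105$ confirms the result:

```python
# Solve 4x = 5 (mod 7) and 7x = 4 (mod 15) by clearing coefficients with
# modular inverses, then brute-force the combined modulus 7*15 = 105.
inv4_mod7 = pow(4, -1, 7)     # 2, since 4*2 = 8 = 1 (mod 7)
inv7_mod15 = pow(7, -1, 15)   # 13, since 7*13 = 91 = 1 (mod 15)
r1 = (5 * inv4_mod7) % 7      # x = 3 (mod 7)
r2 = (4 * inv7_mod15) % 15    # x = 7 (mod 15)
sols = [x for x in range(105) if x % 7 == r1 and x % 15 == r2]
print(sols)  # [52]
```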

calculus - Evaluating $int _0^{pi }int _0^xsqrt{1-x^2}:dydx$



Evaluate:

$\int _0^{\pi }\int _0^x\sqrt{1-x^2}\:dydx$



I've gotten it done to:



$\int _0^{\pi }x\sqrt{1-x^2}dx$



Should I now change to polar coordinates because of the pi, or how should I proceed?


Answer



I think you made a typo, the upper limit of the integral looks more like $1$ instead of $\pi$ (otherwise the square root is not well defined), I will give my answer based on this correction.







Try to make the change of variable $x = \sin \theta$, the $dx = \cos\theta d\theta$, and the integral becomes
\begin{align}
& \int_0^1 x\sqrt{1 - x^2} dx \\
= & \int_0^{\pi/2} \sin\theta \cos \theta \cos\theta d\theta \\
= & -\int_0^{\pi/2} \cos^2(\theta) d(\cos\theta) \\
= & -\left.\frac{1}{3}\cos^3(\theta)\right|_0^{\pi/2} \\
= & \frac{1}{3}.
\end{align}
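A quick midpoint-rule check of the value $1/3$ (Python sketch):

```python
import math

# Midpoint-rule approximation of int_0^1 x*sqrt(1-x^2) dx, which should be 1/3.
n = 200_000
h = 1.0 / n
total = sum((i + 0.5) * h * math.sqrt(1 - ((i + 0.5) * h) ** 2) * h
            for i in range(n))
print(total)  # close to 0.33333
```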



algebra precalculus - Proving $1^3+ 2^3 + cdots + n^3 = left(frac{n(n+1)}{2}right)^2$ using induction



How can I prove that





$$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$




for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction.



Thanks


Answer



You are trying to prove something of the form, $$A=B.$$ Well, both $A$ and $B$ depend on $n$, so I should write, $$A(n)=B(n).$$ First step is to verify that $$A(1)=B(1).$$ Can you do that? OK, then you want to deduce $$A(n+1)=B(n+1)$$ from $A(n)=B(n)$, so write out $A(n+1)=B(n+1)$. Now you're trying to get there from $A(n)=B(n)$, so what do you have to do to $A(n)$ to turn it into $A(n+1)$, that is (in this case) what do you have to add to $A(n)$ to get $A(n+1)$? OK, well, you can add anything you like to one side of an equation, so long as you add the same thing to the other side of the equation. So now on the right side of the equation, you have $B(n)+{\rm something}$, and what you want to have on the right side is $B(n+1)$. Can you show that $B(n)+{\rm something}$ is $B(n+1)$?
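The whole scheme, the base case plus the step "$B(n)+{\rm something}=B(n+1)$" with ${\rm something}=(n+1)^3$, can be checked mechanically for many $n$ (Python sketch):

```python
# A(n) = 1^3 + ... + n^3, B(n) = (n*(n+1)/2)^2; the induction step is exactly
# B(n) + (n+1)^3 == B(n+1).
B = lambda n: (n * (n + 1) // 2) ** 2
ok_identity = all(sum(k**3 for k in range(1, n + 1)) == B(n)
                  for n in range(1, 200))
ok_step = all(B(n) + (n + 1) ** 3 == B(n + 1) for n in range(1, 200))
print(ok_identity, ok_step)  # True True
```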


Friday, August 19, 2016

calculus - Evaluate $int_{Gamma}xy^2dx+xydy$ on $Gamma={y=x^2}$



Evaluate $\int_{\Gamma}xy^2dx+xydy$ on $\Gamma=\{(x,y)\in\mathbb{R}^2:y=x^2,x\in[-1,1]\}$ with orientation clockwise using Green theorem



So $\Gamma$ is a parabola to use Green we have to close the curve, to do so we will add the line from $(1,1)$ to $(-1,1)$



Then




$\gamma_1(t)=(-t,1),t\in[-1,1]$



$\gamma_2(t)=(t,t^2),t\in[-1,1]$



$\int_{wanted}=\int_{\gamma_1(t)\cup \gamma_2(t)}-\int_{\gamma_1(t)}$



But we must have one parameterization of $2$ variables which is closed to use green?



maybe $\phi(r,\theta)=(\sin t\cos t,\sin ^2t-\sin t ),t\in [-\pi,-2\pi]$ is the closed curve?



Answer



By Green's theorem, for the positively (counterclockwise) oriented closed curve made of $\Gamma$ traversed from $(-1,1)$ to $(1,1)$ followed by the added segment from $(1,1)$ back to $(-1,1)$,

$$\oint M\,dx+N\,dy = \iint_D (N_x-M_y)\,dx\,dy, \qquad M = xy^2,\quad N = xy,$$

where $D=\{(x,y) : -1\le x\le 1,\ x^2\le y\le 1\}$ is the region between the parabola and the segment. Hence

$$\oint M\,dx+N\,dy = \int_{x=-1}^{x=1}\int_{y=x^2}^{y=1} (y-2xy)\,dy\,dx = \int_{-1}^{1}(1-2x)\,\frac{1-x^4}{2}\,dx = \frac{4}{5}.$$

On the segment we have $y=1$, $dy=0$, and $\int_{1}^{-1}x\cdot 1^2\,dx=0$, so the segment contributes nothing. Therefore $\int_{\Gamma}=\frac{4}{5}$ in the counterclockwise direction, i.e. $\int_{\Gamma}=-\frac{4}{5}$ with the clockwise orientation requested.


algebra precalculus - What is $sqrt{-4}sqrt{-9}$?




I assumed that since $a^c \cdot b^c = (ab)^{c}$, then something like $\sqrt{-4} \cdot \sqrt{-9}$ would be $\sqrt{-4 \cdot -9} = \sqrt{36} = \pm 6$ but according to Wolfram Alpha, it's $-6$?


Answer



The property $a^c \cdot b^c = (ab)^{c}$ that you mention holds for nonnegative real bases, or for integer exponents with nonzero bases. Since $\sqrt{-4} = (-4)^{1/2}$ has a negative base and a non-integer exponent, you cannot use this property here.




Instead, use imaginary numbers to evaluate your expression:



$$
\begin{align*}
\sqrt{-4} \cdot \sqrt{-9} &= (2i)(3i) \\
&= 6i^2 \\
&= \boxed{-6}
\end{align*}
$$
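A quick check with Python's `cmath`, whose `sqrt` returns the principal square root:

```python
import cmath

# Principal square roots: sqrt(-4) = 2i, sqrt(-9) = 3i, product = -6.
z = cmath.sqrt(-4) * cmath.sqrt(-9)
print(z)  # (-6+0j)
```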



Thursday, August 18, 2016

calculus - Prove an inequality for an exponential function.

I need to prove the following inequality:



$\displaystyle e^x > \frac{x^n}{n!}$, for any $x\geqslant0$



I understand I need to use taylor's expansion somehow but not sure how.




Thanks!

elementary set theory - Is $mathbb R^2$ equipotent to $mathbb R$?

I know that $\mathbb N^2$ is equipotent to $\mathbb N$ (By drawing zig-zag path to join all the points on xy-plane). Is this method available to prove $\mathbb R^2 $ equipotent to $\mathbb R$?

sequences and series - Compute $1 cdot frac {1}{2} + 2 cdot frac {1}{4} + 3 cdot frac {1}{8} + cdots + n cdot frac {1}{2^n} + cdots $





I have tried to compute the first few terms to try to find a pattern but I got



$$\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}$$



but I still don't see any obvious pattern(s). I also tried to look for a pattern in the question, but I cannot see any pattern (possibly because I'm overthinking it?) Please help me with this problem.


Answer




$$I=\frac{1}{2}+\frac{2}{4}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}+\cdots$$
$$2I=1+1+\frac{3}{4}+\frac{4}{8}+\frac{5}{16}+\frac{6}{32}+\cdots$$
$$2I-I=1+\left(1-\frac 12 \right)+\left(\frac 34 -\frac 24 \right)+\left(\frac 48 -\frac 38 \right)+\left(\frac {5}{16} -\frac {4}{16} \right)+\cdots$$
$$I=1+\frac 12+\frac 14+\frac 18+\cdots=2$$
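The telescoping argument can be cross-checked numerically (Python sketch):

```python
# Partial sums of sum_{k>=1} k / 2^k converge to 2; the tail after k = 59
# is negligible in double precision.
s = sum(k / 2**k for k in range(1, 60))
print(s)  # close to 2.0
```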


sequences and series - Showing $sum _{k=1} 1/k^2 = pi^2/6$








I was reading my PDE book, and the following series appears
$$\sum _{k=1}^{\infty} \dfrac{1}{k^2} = \dfrac{\pi^2}{6}$$
There we prove that this series equals $\frac{\pi^2}{6}$ by methods of Fourier analysis, but...



Do you know other proof, any more simple or beautiful?

elementary number theory - Understanding how to compute $5^{15}pmod 7$




Compute $5^{15} \pmod 7$.





Can someone help me understand how to solve this? I know there is a trick, but my professor did not completely explain it in class and I'm stuck.


Answer



You know $7$ is prime and $7$ does not divide $5$, so you can use Fermat's Little Theorem to get $5^6\equiv 1 \pmod 7$ $\Rightarrow$ $5^{15} = (5^6)^2 \cdot 5^3 \equiv 5^3 \pmod 7$.
Then you can do $5^3 = (25)(5) \equiv (4)(-2) \equiv -8 \equiv 6 \pmod 7$, hence $5^{15}$ modulo $7$ is $6$.
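You can confirm each step with Python's three-argument `pow`:

```python
# Fermat's little theorem: 5^6 = 1 (mod 7), so 5^15 = 5^3 = 6 (mod 7).
print(pow(5, 6, 7), pow(5, 3, 7), pow(5, 15, 7))  # 1 6 6
```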


Finitely generated algebraic field extensions are finite extensions?




Suppose I have $F = k[a_1, ..., a_n]$ is a field. We know that $F$ is a finitely generated algebra over $k.$ However, if $F$ is an algebraic extension over the field $k$ does that mean the extension is finite? I think so. Here is my reasoning:



If $F$ is a field then $F$ can be rewritten as $F = k(a_1, ..., a_n)$ (is this true?). Thus, since each $a_i$ is algebraic over $k,$ $k(a_1)$ is a finite extension over $k.$ So is $k(a_1, a_2)$ and so on for a finite number of steps. Is my reasoning correct?


Answer



Your arguments are correct.



But we should mention that there is even more. Even without knowing that $F$ is an algebraic extension, we get that the extension is finite. This is the famous Zariski's lemma leading to Hilbert's Nullstellensatz.


Wednesday, August 17, 2016

trigonometric series - Complex number trigonometry problem

Use $\cos (n\theta) = \frac{z^n +z^{-n}}{2}$, where $z=\cos\theta+i\sin\theta$, to express

$\cos \theta + \cos 3\theta + \cos5\theta + ... + \cos(2n-1)\theta$ as a geometric series in terms of z. Hence find this sum in terms of $\theta$.



I've tried everything in the world and still can't match that of the final answer. Could I pleas have a slight hint on the right path to follow.



Thanks

sequences and series - Different methods to compute $sumlimits_{k=1}^infty frac{1}{k^2}$ (Basel problem)



As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem)
$$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$
However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?


Answer




OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus
$$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$
Note that $1/\tan^2 x = 1/\sin^2 x - 1$.
Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
$$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
Denoting the sum on the right-hand side by $S_n$, we can write this as

$$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$
Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,
$$S_n = 4 S_{n-1} + 2.$$
Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)




We now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
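If you want to sanity-check the closed form for $S_n$ and the squeeze numerically, here is a small Python sketch (purely illustrative, not part of the proof):

```python
import math

# S_n = sum of 1/sin(x_k)^2 over gridpoints x_k = (pi/2)*(k/2^n), k=1..2^n-1;
# the closed form is 2*(4^n - 1)/3, and pi^2/4^(n+1) * S_n should be near pi^2/6.
def S(n):
    return sum(1 / math.sin(math.pi / 2 * k / 2**n) ** 2
               for k in range(1, 2**n))

n = 10
closed = 2 * (4**n - 1) / 3
rel_err = abs(S(n) - closed) / closed
squeeze = math.pi**2 / 4**(n + 1) * closed
print(rel_err, squeeze)  # rel_err tiny; squeeze close to pi^2/6 = 1.64493...
```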


summation - Find the sum of the following series to n terms $frac{1}{1cdot3}+frac{2^2}{3cdot5}+frac{3^2}{5cdot7}+dots$



Find the sum of the following series to n terms $$\frac{1}{1\cdot3}+\frac{2^2}{3\cdot5}+\frac{3^2}{5\cdot7}+\dots$$



My attempt:



$$T_{n}=\frac{n^2}{(2n-1)(2n+1)}$$



I am unable to represent to proceed further. Though I am sure that there will be some method of difference available to express the equation. Please explain the steps and comment on the technique to be used with such questions.




Thanks in advance !


Answer



Use partial fractions to get
$$
\begin{align}
\sum_{k=1}^n\frac{k^2}{(2k-1)(2k+1)}
&=\sum_{k=1}^n\frac18\left(2+\frac1{2k-1}-\frac1{2k+1}\right)\\
&=\frac n4+\frac18-\frac1{16n+8}\\[3pt]
&=\frac{(n+1)n}{4n+2}
\end{align}

$$
where we finished by summing a telescoping series.
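The closed form is easy to verify exactly with rational arithmetic (Python sketch):

```python
from fractions import Fraction

# Exact check of sum_{k=1}^n k^2/((2k-1)(2k+1)) = n(n+1)/(4n+2) for many n.
ok = all(
    sum(Fraction(k * k, (2 * k - 1) * (2 * k + 1)) for k in range(1, n + 1))
    == Fraction(n * (n + 1), 4 * n + 2)
    for n in range(1, 50)
)
print(ok)  # True
```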


matrices - Check the determinant of a matrix given a parameter





  • How do I calculate the determinant of the following matrix?

  • And for which values of m is the determinant non null?



\begin{bmatrix}
1 & 1 & 1 \\
2 & m & 3 \\
4 & m^2 & 9

\end{bmatrix}

I have tried the co-factoring method, and what I got was $m(m+1) + 6$. I'm trying to figure out what would be the right way to do this.


Answer




In general, you can calculate the determinant of any $3\times3$ matrix with the method described in the introductory section of the corresponding Wikipedia article.

After you have successfully calculated the determinant, $-m^2 + 5m - 6 = -(m-2)(m-3)$, determine the zeros of this quadratic in $m$: they are $m=2$ and $m=3$. Except for these two values of $m$ the determinant is non-zero.
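A small Python sketch of the cofactor expansion along the first row, checked against the factored form:

```python
# det [[1,1,1],[2,m,3],[4,m^2,9]] expanded along the first row,
# compared with -(m-2)(m-3) for integer m.
def det(m):
    return 1 * (m * 9 - 3 * m**2) - 1 * (2 * 9 - 3 * 4) + 1 * (2 * m**2 - 4 * m)

ok = all(det(m) == -(m - 2) * (m - 3) for m in range(-10, 11))
print(ok, det(2), det(3))  # True 0 0
```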


determinant - Is the decomposition of a matrix as product of elementary matrices unique?



We know that an invertible matrix $A$ can be written as a product of elementary matrices, say $E_1\cdots E_n$. This decomposition is, clearly, not unique. For example, the elementary matrix $M$ that exchanges row 1 and row 2 can be inserted in the product an even number of times like this:

$$E_1MMMME_2\cdots E_n$$



We also know that an elementary decomposition can be found by doing row operations on the matrix to find its inverse, and taking the inverses of those elementary matrices. Suppose we are using the most efficient method to find the inverse, by most efficient I mean the least number of steps:




  • Is the resulting decomposition unique? Put another way, is the most efficient method unique? (I suspect it is not.)


  • Is the resulting decomposition unique up to the ordering of the matrices? (I suspect it is in the case of $2\times 2$ matrices, and it isn't in matrices of higher dimensions)


  • Failing those two questions to be true, there must be some uniqueness because the determinant of the matrix is unique. What could it be?



Answer




Let $$A=\pmatrix{2&3\cr4&5\cr}$$ (for example – almost any example should do). You could divide the first row by 2; subtract 4 times the first row from the second; divide the second row by the appropriate number (to get a 1 in the lower right corner); subtract the appropriate multiple of the second row from the first.



Or, you could divide the second row by 5; subtract 3 times the second row from the first; divide the first row by the appropriate number; subtract the appropriate multiple of the first row from the second.



Either way, you get (efficiently) a factorization into four elementary matrices, but they are different factorizations, even if reordering the matrices is allowed.



EDIT: More simply, one could just note that $$\pmatrix{a&0\cr0&1\cr}\pmatrix{1&b\cr0&1\cr}=\pmatrix{1&ab\cr0&1\cr}\pmatrix{a&0\cr0&1\cr}$$
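A quick NumPy check that the two products in this identity agree:

```python
import numpy as np

# Both orderings of the elementary matrices give the same product [[a, ab],[0,1]].
a, b = 5.0, 3.0
left = np.diag([a, 1.0]) @ np.array([[1.0, b], [0.0, 1.0]])
right = np.array([[1.0, a * b], [0.0, 1.0]]) @ np.diag([a, 1.0])
print(left, right)
```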


Radius of convergence of the series $ sumlimits_{n=1}^infty (-1)^nfrac{1cdot3cdot5cdots(2n-1)}{3cdot6cdot9cdots(3n)}x^n$


How do I find the radius of convergence for this series:


$$ \sum_{n=1}^\infty (-1)^n\dfrac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n $$



Treating it as an alternating series, I got $$x< \dfrac{n+1}{2n+1}$$


And absolute convergence tests yield $$-\dfrac{1}{2}<x<\dfrac{1}{2}.$$

I feel like it's simpler than I expect but I just can't get it. How do I do this?


Answer in book: $\dfrac{3}{2}$


Answer



The ratio test allows us to determine the radius of convergence.


For $n \in \mathbb{N}^{\ast}$, let :


$$ a_{n} = (-1)^{n}\frac{1 \times 3 \times \ldots \times (2n-1)}{3 \times 6 \times \ldots \times (3n)}. $$


Then,


$$ \begin{align*} \frac{\vert a_{n+1} \vert}{\vert a_{n} \vert} &= {} \frac{1 \times 3 \times \ldots \times (2n-1) \times (2n+1)}{3 \times 6 \times \ldots \times (3n) \times (3n+3)} \times \frac{3 \times 6 \times \ldots \times (3n)}{1 \times 3 \times \ldots \times (2n-1)} \\[2mm] &= \frac{2n+1}{3n+3} \; \mathop{\longrightarrow} \limits_{n \to +\infty} \; \frac{2}{3}. \end{align*} $$



Since the ratio $\displaystyle \frac{\vert a_{n+1} \vert}{\vert a_{n} \vert}$ converges to $\displaystyle \frac{2}{3}$, we can conclude that $R = \displaystyle \frac{3}{2}$.
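A small numerical sketch (illustrative only, using exact fractions) confirming that the ratio tends to $\frac{2}{3}$ and hence $R=\frac{3}{2}$:

```python
from fractions import Fraction

# |a_{n+1} / a_n| = (2n+1)/(3n+3); check that it approaches 2/3, so R = 3/2.
def ratio(n):
    return Fraction(2 * n + 1, 3 * n + 3)

print(float(ratio(10)))     # about 0.636
print(float(ratio(10**6)))  # about 0.6667
```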


Tuesday, August 16, 2016

limits - Evaluate $lim_{xto0}frac{e^x-sum_{k=0}^{n}frac{x^k}{k!}}{x^{n+1}}$




Evaluate the following limit
$$\lim_{x\to0}\frac{e^x-\sum_{k=0}^{n}\frac{x^k}{k!}}{x^{n+1}}$$
where $n\in\mathbb{N}$.





I want to find this limit, if it is possible, without using L'Hopital's rule, derivatives and function series. I want to solve it algebraically. I tried to reduce this limit to some form where I can use other well-known limits at $x\to0$ such as $\frac{\sin x}{x}=1,\frac{e^x-1}{x}=1,\frac{\ln(1+x)}{x}=1,\frac{(1+x)^k-1}{x}=k,(1+x)^\frac{1}{x}=e$. Also, I can use "exponential beats polynomial" rule $x\ln x=0$.
So far I made many attempts. Because there is $e^x$ in the numerator, I tried to apply $\frac{e^x-1}{x}=1$ somewhere. I wrote limit as
$$\lim_{x\to0}\frac{\frac{e^x-1}{x}-\frac{\sum_{k=1}^{n}\frac{x^k}{k!}}{x}}{x^n}$$
But, I cannot apply known limit here because I still have $x^n\to0$ in the denominator.
After that, I tried to substitute $t=e^x$, but it didn't help me. Also, I tried to separate these expressions in the numerator to get something similar to known limits, but I could't.
My another attempt was induction. For $n=1$ limit becomes
$$\lim_{x\to0}\frac{e^x-x-1}{x^2}$$
But, I still couldn't reduce this fraction to some well-known form.
Is there any way to solve this limit without using derivatives, inequalities or geometrical approach? How we can apply well-known limits here?


Answer



By the Taylor Theorem,
$$ e^x=\sum_{k=0}^n\frac{x^k}{k!}+\frac{e^\theta}{(n+1)!}x^{n+1} $$
where $\theta$ is between $0$ and $x$. So

\begin{eqnarray}
\lim_{x\to0}\frac{e^x-\sum_{k=0}^n\frac{x^k}{k!}}{x^{n+1}}=\lim_{x\to0}\frac{e^\theta}{(n+1)!}=\frac{1}{(n+1)!}.
\end{eqnarray}
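A numerical sketch (illustrative only) supporting the value $\frac{1}{(n+1)!}$ for small $n$:

```python
import math

# Check that (e^x - sum_{k<=n} x^k/k!) / x^(n+1) approaches 1/(n+1)! as x -> 0.
def g(x, n):
    partial = sum(x**k / math.factorial(k) for k in range(n + 1))
    return (math.exp(x) - partial) / x**(n + 1)

for n in (1, 2, 3):
    print(n, g(0.01, n), 1 / math.factorial(n + 1))  # pairs agree to ~3 digits
```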


number theory division of power for the case $(n^r −1)$ divides $(n^m −1)$ if and only if $r$ divides $m$.

Let $n > 1$ and $m$ and $r$ be positive integers. Prove that $(n^r −1)$ divides $(n^m −1)$ if and only if $r$ divides $m$.
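A brute-force sanity check of the statement for small values (an illustrative sketch, not a proof; the function name is mine):

```python
# Check that (n^r - 1) divides (n^m - 1) exactly when r divides m.
def claim_holds(n, r, m):
    return ((n**m - 1) % (n**r - 1) == 0) == (m % r == 0)

assert all(claim_holds(n, r, m)
           for n in range(2, 6)
           for r in range(1, 11)
           for m in range(1, 11))
print("checked")
```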

calculus - How can I evaluate $int_{-infty}^{infty} e^{-x^2} dx$ without using polar coordinates




I know from probability class that the area under the bell curve $e^{-x^2}$ is $\sqrt{\pi}$. I would like to be able to verify this, so in other words, solve this integral:

$$\int_{-\infty}^{\infty} e^{-x^2} dx $$
The proof we saw in class used polar coordinates, but I don't like polar coordinates! Is there another way to evaluate it?



Thanks!


Answer



What you could use is the following formula:
$$V = \pi \int_a^b f(x)^2 dx $$
which gives the volume of the solid of revolution created by rotating $f$ around the $x$-axis between $x=a$ and $x=b$. It's easy to see why this is true.



We start of the usual way, by setting your integral equal to $I$, so




$$I^2 = I\cdot I = \int_{-\infty}^{\infty} e^{-x^2} dx \cdot \int_{-\infty}^{\infty} e^{-x^2} dx$$



and rename the variable in the second integral:
$$I^2=\int_{-\infty}^{\infty} e^{-x^2} dx\cdot \int_{-\infty}^{\infty} e^{-y^2} dy$$



The first integral is a constant, so we can move it inside the second integral:



$$I^2=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2} dx\cdot e^{-y^2} dy$$




And rewrite as



$$I^2=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)}\ dx\ dy$$



This means geometrically that we are looking for the volume underneath the graph of $e^{-(x^2+y^2)}$. Notice that this is a function of $x^2+y^2$, which means that it's constant on circles with center $(0,0)$. That means it's a solid of revolution around the $z$-axis!



3D plot



The curve you need to rotate to get this shape can be found by choosing $y=0$, so that we get the image of the function above the $x$-axis. This results in $z=e^{-x^2}$, our original function. Just as a visual refference, it looks like this:




curve



The formula to get the volume only works for solids of revolution around the $x$-axis, so we flip this curve on its side by inverting it (just switch the $z$ and $x$ and solve):



$$x=e^{-z^2} \iff z=\sqrt{-\ln(x)}$$



plot of inverse function



Notice that we lost half of the curve because of taking only the positive square root, but when rotating this all the way around the $x$-axis, the same shape is created.




All that is left to do now is use the formula to get the volume of the solid that is created by rotating this function around the $x$-axis.



$$I^2 = \pi \int_0^1 -\ln(x) dx$$



The square cancelled with the square root, and we integrate from 0 to 1 to get the whole solid. The integral has now become easy and evaluates to 1, so only the $\pi$ is left:
$$I^2=\pi \iff I=\sqrt{\pi}$$
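The result can be sanity-checked numerically (a sketch using a simple midpoint rule; the step size and the cutoff at $|x|=8$ are arbitrary choices, and the neglected tails are astronomically small):

```python
import math

# Midpoint-rule approximation of the integral of e^(-x^2) over the real line.
h, total = 1e-3, 0.0
x = -8.0 + h / 2
while x < 8.0:
    total += math.exp(-x * x) * h
    x += h
print(total, math.sqrt(math.pi))  # the two values agree closely
```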


integration - how to calculate the integral of $sin^2(x)/x^2$





$\int_{-\infty}^{\infty}\sin^2(x)/x^2\,dx=\pi$ according to Wolfram Alpha. That is such a beautiful result! But how do I calculate this integral by hand?


Thanks in advance.


Answer



The answer may be found by using complex analysis, specifically the residue theorem. A full derivation may be found here.


I know of an easy way to derive this result using real analysis alone.
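Whichever derivation one prefers, the value is easy to check numerically (a midpoint-rule sketch; the cutoff $R$ is an arbitrary choice, and the neglected tail is of order $1/(2R)$):

```python
import math

# Midpoint-rule approximation of the integral of sin(x)^2 / x^2 over (0, R);
# doubling it approximates the full-line integral, which should be pi.
R, h = 500.0, 1e-3
total, x = 0.0, h / 2
while x < R:
    total += (math.sin(x) / x) ** 2 * h
    x += h
print(2 * total, math.pi)
```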


soft question - Non-repeating decimal expansion of known number

The decimal expansion of any irrational number $x>0$ is non-repeating. This is well known. So, we have a way of obtaining irrational numbers, such has$$x=0.101\,101\,110\,111\,101\,111\,101\ldots$$(after the decimal point, we have one $1$, one $0$, two $1$'s, one $0$, three $1$'s, one $0$, and so on).



My (admittedly vague) question is this: how to obtain a known irrational number whose decimal expansion (or, for that matter, whose expansion on some base) is easily shown to be non-repeating? Of course, when I write that that the decimal expansion “is easily shown to be non-repeating” what I mean is that it is easy to describe its decimal expansion (and to see that it is non-repeating); otherwise, one could just say that, since the number is irrational, its decimal expansion must be non-repeating. And by “known number” I mean something like, say, $\sqrt[3]2$ or $\pi^e$.

Using Induction prove the given statement;

By using the Principle Of Mathematical Induction prove that: $1^3+2^3+3^3+\cdots+n^3=\left[\frac {n(n+1)}{2}\right]^2$.




My Approach:



Let $P(n): 1^3+2^3+3^3+\cdots+n^3=\left[\frac {n(n+1)}{2}\right]^2$.



Base case $(n=1)$
$$L.H.S=1$$
$$R.H.S=[\frac {1(1+1)}{2}]^2$$
$$=[\frac {1\times 2}{2}]^2$$
$$=1$$.




$i.e., L.H.S=R.H.S$. So, $P(1)$ is true.



Induction Hypothesis (let $n=k$):



Assume $P(k): 1^3+2^3+3^3+\cdots+k^3=\left[\frac {k(k+1)}{2}\right]^2$ is true.



Please help me to continue from here.
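One standard continuation of the inductive step (a sketch) adds $(k+1)^3$ to both sides of the hypothesis and factors:

```latex
% Inductive step: assuming P(k), show P(k+1).
1^3+2^3+\cdots+k^3+(k+1)^3
  = \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3
  = \frac{(k+1)^2}{4}\left[k^2+4(k+1)\right]
  = \frac{(k+1)^2(k+2)^2}{4}
  = \left[\frac{(k+1)(k+2)}{2}\right]^2
```

which is exactly $P(k+1)$, so the statement holds for all $n\ge 1$ by induction.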

abstract algebra - How many irreducible polynomials of degree $n$ exist over $mathbb{F}_p$?



I know that for every $n\in\mathbb{N}$, $n\ge 1$, there exists $p(x)\in\mathbb{F}_p[x]$ s.t. $\deg p(x)=n$ and $p(x)$ is irreducible over $\mathbb{F}_p$.




I am interested in counting how many such $p(x)$ there exist (that is, given $n\in\mathbb{N}$, $n\ge 1$, how many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$).




I don't have a counting strategy and I don't expect a closed formula, but maybe we can find something like "there exist $X$ irreducible polynomials of degree $n$ where $X$ is the number of...".




What are your thoughts ?


Answer



Theorem: Let $\mu(n)$ denote the Möbius function. The number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ is the necklace polynomial
$$M_n(q) = \frac{1}{n} \sum_{d | n} \mu(d) q^{n/d}.$$



(To get the number of irreducible polynomials just multiply by $q - 1$.)



Proof. Let $M_n(q)$ denote the number in question. Recall that $x^{q^n} - x$ is the product of all the monic irreducible polynomials of degree dividing $n$. By counting degrees, it follows that
$$q^n = \sum_{d | n} d M_d(q)$$




(since each polynomial of degree $d$ contributes $d$ to the total degree). By Möbius inversion, the result follows.



As it turns out, $M_n(q)$ has a combinatorial interpretation for all values of $q$: it counts the number of aperiodic necklaces of length $n$ on $q$ letters, where a necklace is a word considered up to cyclic permutation and an aperiodic necklace of length $n$ is a word which is not invariant under a cyclic permutation by $d$ for any $d < n$. More precisely, the cyclic group $\mathbb{Z}/n\mathbb{Z}$ acts by cyclic permutation on the set of functions $[n] \to [q]$, and $M_n(q)$ counts the number of orbits of size $n$ of this group action. This result also follows from Möbius inversion.



One might therefore ask for an explicit bijection between aperiodic necklaces of length $n$ on $q$ letters and monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ when $q$ is a prime power, or at least I did a few years ago and it turns out to be quite elegant.



Let me also mention that the above closed form immediately leads to the "function field prime number theorem." Let the absolute value of a polynomial of degree $d$ over $\mathbb{F}_q$ be $q^d$. (You can think of this as the size of the quotient $\mathbb{F}_q[x]/f(x)$, so in that sense it is analogous to the norm of an element of the ring of integers of a number field.) Then the above formula shows that the number of monic irreducible polynomials $\pi(n)$ of absolute value less than or equal to $n$ satisfies
$$\pi(n) \sim \frac{n}{\log_q n}.$$
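The count can also be checked computationally for small cases (an illustrative sketch; the helper names are mine, and the brute-force root test is only valid for degrees 2 and 3, where reducibility is equivalent to having a root in $\mathbb{F}_p$):

```python
# Count monic irreducible polynomials over F_p two ways for small cases:
# by the necklace formula and by brute force.

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # squared prime factor => mu = 0
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def necklace(n, q):
    # M_n(q) = (1/n) * sum over d | n of mu(d) q^(n/d); the sum is exactly
    # divisible by n, so integer division is safe.
    return sum(mobius(d) * q ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

def brute_count(n, p):
    # Enumerate monic polynomials x^n + c_{n-1} x^{n-1} + ... + c_0 by their
    # coefficient vectors, keeping those with no root in F_p (valid for n = 2, 3).
    count = 0
    for code in range(p ** n):
        coeffs = [(code // p ** i) % p for i in range(n)]  # c_0 .. c_{n-1}
        def value(x):
            return (sum(c * x ** i for i, c in enumerate(coeffs)) + x ** n) % p
        if all(value(x) != 0 for x in range(p)):
            count += 1
    return count

for p in (2, 3, 5):
    for n in (2, 3):
        assert necklace(n, p) == brute_count(n, p)
print(necklace(4, 2))  # 3 monic irreducible quartics over F_2
```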


Monday, August 15, 2016

sequences and series - Find the limit of $frac{(n+1)^sqrt{n+1}}{n^sqrt{n}}$.


Find $$\lim_{n\to \infty}\frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}$$


First I tried by taking $\ln y_n=\ln \frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}=\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln(n),$


which does not seem to take me anywhere. Then I tried to use the squeeze theorem, since $\frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}\geq 1$, but I need an upper bound now. I have been trying to come up with one for a while but am stuck. Can you help me please?


Answer



Notice that for $f(x)=\sqrt x \ln x$ you have $$f'(x)=\frac{\ln x}{2\sqrt x}+\frac1{\sqrt x}.$$ Now by the mean value theorem $$f(n+1)-f(n) = f'(\theta_n)$$ for some $\theta_n$ such that $n<\theta_n<n+1$.

If we notice that $\lim\limits_{x\to\infty} f'(x) = 0$ we get that $$\lim\limits_{n\to\infty} (\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln n) = \lim\limits_{n\to\infty} f'(\theta_n) = 0,$$ and hence the original limit is $e^0=1$.
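A quick numerical sketch (computing via logarithms to avoid overflow; the function name is mine) showing the ratio tending to $1$:

```python
import math

# d_n = sqrt(n+1) ln(n+1) - sqrt(n) ln(n); the original ratio equals exp(d_n).
def ratio(n):
    d = math.sqrt(n + 1) * math.log(n + 1) - math.sqrt(n) * math.log(n)
    return math.exp(d)

print(ratio(10**4), ratio(10**8))  # both slightly above 1, the latter closer
```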


complex analysis - Show that $intnolimits^{infty}_{0} x^{-1} sin x dx = fracpi2$

Show that $\int^{\infty}_{0} x^{-1} \sin x dx = \frac\pi2$ by integrating $z^{-1}e^{iz}$ around a closed contour $\Gamma$ consisting of two portions of the real axis, from -$R$ to -$\epsilon$ and from $\epsilon$ to $R$ (with $R > \epsilon > 0$) and two connecting semi-circular arcs in the upper half-plane, of respective radii $\epsilon$ and $R$. Then let $\epsilon \rightarrow 0$ and $R \rightarrow \infty$.



[Ref: R. Penrose, The Road to Reality: a complete guide to the laws of the universe (Vintage, 2005): Chap. 7, Prob. [7.5] (p. 129)]



Note: Marked as "Not to be taken lightly", (i.e. very hard!)



Update: correction: $z^{-1}e^{iz}$ (Ref: http://www.roadsolutions.ox.ac.uk/corrections.html)
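Before attempting the contour argument, the claimed value can be checked numerically (a rough midpoint-rule sketch; the truncation at $R$ leaves an oscillating error of size about $1/R$):

```python
import math

# Midpoint-rule approximation of the integral of sin(x)/x over (0, R).
R, h = 2000.0, 2e-3
total, x = 0.0, h / 2
while x < R:
    total += math.sin(x) / x * h
    x += h
print(total, math.pi / 2)
```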

Sunday, August 14, 2016

real analysis - Does $f_n(z)neq 0$ for all $ngeq 1$ imply $f(z)neq 0$?




Let $z\in\mathbb{C}$, and $\left(f_n(z)\right)_{n=1}^\infty$ be a sequence of non-zero functions of $z$, i.e. $f_n(z)\neq 0$ for all $n\geq 1$. Suppose that $f_n(z)\to f(z)$ uniformly with $f$ not identically zero, and $f$ and $f_n$ holomorphic. Does this imply that $f(z)\neq 0$ ?



I have been looking at Hurwitz' theorem from complex analysis, but I get the impression under Hurwitz' theorem the condition $f_n(z)\neq 0$ has to hold on whole sets of points, and does not work for single points.


Answer



You probably want to also add the hypotheses that $f_n$ and $f$ are holomorphic. But as you pointed out, Hurwitz's theroem doesn't work at a single point. It only tells you that if $f$ has a zero at $z_0$, then the $f_n$ must have zeros near $z_0$ for sufficiently large $n$. For example, if you define $f_n(z) = z + \frac{1}{n}$, then $f_n \to f(z) = z$ uniformly on $\mathbb{C}$. Now $f(0) = 0$, but $f_n(0) \neq 0$ for all $n \geq 1$. However, each $f_n$ has a zero at $z = -\frac{1}{n}$, thus if you take any $\epsilon > 0$, then $f_n$ has a zero inside the disk $\vert z \vert < \epsilon$ for sufficiently large $n$.
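A tiny numeric illustration of this example (the function name is just for illustration): $f_n(0)\neq 0$ for every $n$, yet the zero $-\frac1n$ of $f_n$ eventually lies inside any $\epsilon$-disk around $0$.

```python
# f_n(z) = z + 1/n has its (only) zero at z = -1/n, which is never 0
# but enters the disk |z| < eps as soon as n > 1/eps.
def zero_of_f(n):
    return -1.0 / n

eps = 1e-3
n = 2000  # any n > 1/eps works
assert zero_of_f(n) != 0 and abs(zero_of_f(n)) < eps
print(zero_of_f(n))
```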


calculus - $lim_{xrightarrow 0^+}frac{(1+cos x)}{(e^x-1)}= infty$ using l'Hopital

I need to show $$\lim_{x\rightarrow 0^+}\frac{1+\cos x}{e^x-1}=\infty$$



I know that, say, if you let
$f(x) = 1 + \cos x$
and
$g(x) = \dfrac{1}{e^x-1}$,
and then multiply the limits of $f(x)$ and $g(x)$, you get $\frac{2}{0}$. I can't figure out how to make it work for l'Hopital's rule however, i.e. how to rewrite it so that it is in the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$.




I also tried multiplying $h(x)$ by the conjugate of $f(x)$, but I don't think this is fruitful. Any hints appreciated.
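One observation that may help: the quotient is not an indeterminate form, since the numerator tends to $2$ while the denominator tends to $0^+$, so l'Hôpital's rule is not needed at all; a direct bound suffices. A quick numerical sketch of the blow-up:

```python
import math

# The quotient behaves roughly like 2/x as x -> 0+.
def q(x):
    return (1 + math.cos(x)) / (math.exp(x) - 1)

for x in (1e-2, 1e-4, 1e-6):
    print(x, q(x))
```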

calculus - Evaluate the following improper integral with bounds.



I need ideas to solve this improper integral. I know it is hard and it is a bonus for my analysis course, so I would really appreciate your help. Thanks!



$$\int_{1}^{\infty}\dfrac{x\sin(2x)}{x^2+3}dx$$



Hint: $$\begin{cases} \sin(\theta)\geq \dfrac{2\theta}{\pi},& 0\leq\theta\leq \dfrac{\pi}{2}\\\\\sin(\theta)\geq \dfrac{-2\theta}{\pi}+2,& \dfrac{\pi}{2}\leq\theta\leq{\pi}\end{cases}$$



In order to bound the integral




$$\int_{0}^{\pi} e^{-2R\sin(\theta)}d\theta$$



I don't know a nice and clean approach to attack this properly and obtain a closed answer, so...


Answer



Hint: Use
$$
\int_1^\infty \frac{x \sin(2x)}{x^2+3} \mathrm{d} x = \Im \int_1^\infty \frac{x \mathrm{e}^{2 i x} }{x^2+3} \mathrm{d} x = \frac{1}{2} \Im \int_1^\infty \mathrm{e}^{2 i x} \left( \frac{1}{x - i \sqrt{3}} + \frac{1}{x + i \sqrt{3}} \right) \mathrm{d} x
$$
Then check out the definition of the exponential integral.


algebra precalculus - What is the next step in the prove? (Mathematical Induction) $left(x^{n}+1right)



I have to prove this preposition by mathematical induction:



$$\left(x^n+1\right)<\left(x+1\right)^n \quad \forall n\geq 2 \quad \text{and}\quad x>0,\,\, n \in \mathbb{N}$$




I started the prove with $n=2$:



$\left(x^{2}+1\right)<\left(x+1\right)^{2}$



$x^{2}+1<x^{2}+2x+1$



We see that;



$x^{2}+1-x^{2}-1<2x$




$0<2x$



Then



$x>0$



And this one carries out for $n=2$



Now for $\quad n=k \quad$ (Hypothesis)




$\left(x^{k}+1\right)<\left(x+1\right)^{k}$



We have



$\displaystyle x^{k}<\left(x+1\right)^{k}-1\ldots \quad (1)$



Then, we must prove for $\quad n= k+1 \quad$ (Thesis):



$x^{k+1}+1<\left(x+1\right)^{k+1}$




We develop before expression as:



$x^{k+1}<\left(x+1\right)^{k+1}-1\ldots \quad (2)$



According to the steps of mathematical induction, the next step would be to use the hypothesis $(1)$ to prove the thesis $(2)$. It is here that I hesitate about whether what I am going to write next is correct:



First way:



We multiply hypothesis $(1)$ by $\left(x+1\right)$ and we have:




$x^{k}\left(x+1\right)<\left[\left(x+1\right)^{k}-1\right]\left(x+1\right)$



$x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)$



Dividing the last expression by $\left(x+1\right)$ we obtain expression $(1)$ again:



$\displaystyle \frac{x^{k}\left(x+1\right)}{x+1}<\frac{\left(x+1\right)^{k+1}-\left(x+1\right)}{x+1}$



$x^{k}<\left(x+1\right)^{k}-1$




Second way:



If we multiply $(1)$ by $x$ we have:



$x\cdot x^{k}<x\left[\left(x+1\right)^{k}-1\right]$



$x^{k+1}<x\left(x+1\right)^{k}-x$



And if we again divide the last expression by $x$, we arrive at the same result:




$\displaystyle \frac{x^{k+1}}{x}<\frac{x\left(x+1\right)^{k}-x}{x}$



$x^{k}<\left(x+1\right)^{k}-1$



I cannot find another way to complete this proof. Another approach would be Newton's binomial theorem, but the point of the exercise lies in the technique of mathematical induction. If someone can help me, I will be very grateful!
Thanks
-Víctor Hugo-


Answer



Suppose that $(1+x)^n>1+x^n$ for some $n\ge 2$. Then
$$

(1+x)^{n+1}=(1+x)^n(1+x)>(1+x^n)(1+x)=1+x+x^n+x^{n+1}>1+x^{n+1}
$$

since $x>0$ where in the first inequality we used the induction hypothesis.
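A quick spot-check of the inequality for a few values (illustrative only; the induction above, not computation, is the proof):

```python
# Check (x^n + 1) < (x + 1)^n for a sample of x > 0 and n >= 2.
def holds(x, n):
    return x**n + 1 < (x + 1)**n

assert all(holds(x, n)
           for x in (0.1, 0.5, 1.0, 2.0, 10.0)
           for n in range(2, 8))
print("ok")
```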


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...