Sunday, April 30, 2017

integration - Closed form for $\int_0^\infty \frac{x^n}{1 + x^m}\,dx$



I've been looking at



$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$




It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:



$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$



$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$



So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi x}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.







UPDATE:



The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$



with $a =\dfrac{n+1}{m}$, which converges only if




$$0 < a < 1$$



Using series I find the solution is




$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$




Can this be put in terms of the digamma function or something of the sort?



Answer



I would like to make a supplementary calculation on BR's answer.



Let us first assume that $0 < \mu < \nu$ so that the integral
$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$
converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}
\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx

& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\
& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\
& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),
\end{align*} $$
where the last equality follows from Euler's reflection formula.
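As a quick sanity check (my addition, not part of the original argument), here is a short numerical sketch, assuming SciPy is available, comparing the integral against $\frac{\pi}{\nu}\csc\left(\frac{\pi\mu}{\nu}\right)$ for the examples above (with $\mu = n+1$, $\nu = m$):

```python
# Numerical check of int_0^oo x^(mu-1)/(1+x^nu) dx = (pi/nu) csc(pi*mu/nu),
# valid for 0 < mu < nu; purely an illustrative sketch.
import numpy as np
from scipy.integrate import quad

def lhs(mu, nu):
    # SciPy handles the infinite upper limit; the tail decays like x^(mu-1-nu)
    val, _ = quad(lambda x: x**(mu - 1) / (1 + x**nu), 0, np.inf)
    return val

def rhs(mu, nu):
    return (np.pi / nu) / np.sin(np.pi * mu / nu)

for mu, nu in [(2, 3), (2, 4), (3, 5)]:   # the examples: (n, m) = (1,3), (1,4), (2,5)
    print((mu, nu), lhs(mu, nu), rhs(mu, nu))
```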







algebra precalculus - How does $\frac12\sqrt{n^2 + \frac{4m^3}{27}}$ become $\sqrt{\frac{n^2}4 + \frac{m^3}{27}}$?



$$\begin{align}
&\frac n2 \pm \frac12 \,\sqrt{n^2 + \frac{4m^3}{27}}\\[6pt]
=\; &\frac n2 \pm \phantom{\frac12}\sqrt{\frac{n^2}4 + \frac{m^3}{27}}
\end{align}$$




Can someone please explain to me how this radical multiplied by $1/2$ becomes this? You can see the $1/2$ disappeared, and a division by $4$ appeared under the square root.



Thanks.

elementary number theory - Solving simultaneous linear congruences.

I'm struggling when solving the simultaneous linear congruences $$x\equiv 3 \pmod{101^{1000}}$$ and $$x\equiv 3 \pmod{7^{200}}$$ where the moduli are very large. I haven't got an issue when solving more reasonably sized moduli.


Could I solve this by reducing them to $x\equiv 3 \pmod{101}$ and $x\equiv 3 \pmod7$? I did this and I got $x\equiv 3 \pmod{707}$ using the Chinese Remainder Theorem. Could I somehow use this result to get the answer $\bmod\ 7^{200}\cdot 101^{1000}$, or have I approached this problem completely wrong?



Thanks in advance.

summation - Sum of $\sum_{n= 1}^{\infty} \frac{(-1)^n \ln(n)}{n(n+1)}$



I am curious about this sum because (as Wolfram Alpha tells me) it evaluates to a surprisingly tidy numerical value:




$$\sum_{n= 1}^{\infty} \frac{(-1)^n\ln(n)}{n(n+1)} \approx 0.063254$$



I found this interesting because I did not expect this complicated sum to converge so nicely.



My question is twofold:



1) Is there a general technique that is used to solve sums that involve $\ln(n)$?



2) What hints can you give me to help solve this sum?


Answer




We can find integral representations of this series. The most direct way would be to represent the logarithm as an integral:



$$\ln n=(n-1) \int_0^1 \frac{dt}{1+(n-1)t}$$



Interchanging integration and summation, we can write the series as:



$$\sum_{n= 2}^{\infty} \frac{(-1)^n\ln n}{n(n+1)}=\int_0^1 dt \sum_{n= 2}^{\infty} \frac{(-1)^n (n-1)}{n(n+1)(t n-t+1)}$$



The inner sum can be found in hypergeometric form the following way. First we shift the index:




$$\sum_{n= 2}^{\infty} \frac{(-1)^n (n-1)}{n(n+1)(t n-t+1)}=\sum_{n= 0}^{\infty} \frac{(-1)^n (n+1)}{(n+2)(n+3)(t n+t+1)}$$



Now we find the $0$th term and the ratio of successive terms:



$$c_0=\frac{1}{6(1+t)}$$



$$\frac{c_{n+1}}{c_n}=\frac{(n+2)(n+2)\left(n+\frac{1}{t}+1 \right)}{(n+4)\left(n+\frac{1}{t}+2 \right)} \frac{(-1)}{n+1}$$



Which makes the series equal to:




$$\sum_{n= 2}^{\infty} \frac{(-1)^n (n-1)}{n(n+1)(t n-t+1)}=\frac{1}{6(1+t)}~ {_3 F_2} \left(2,2,\frac{1}{t}+1;4,\frac{1}{t}+2;-1 \right)$$



This gives us an integral representation:




$$\sum_{n= 2}^{\infty} \frac{(-1)^n\ln n}{n(n+1)}= \frac{1}{6} \int_0^1 {_3 F_2} \left(2,2,\frac{1}{t}+1;4,\frac{1}{t}+2;-1 \right)\frac{dt}{1+t} \tag{1}$$




We can use Euler's integral transform to reduce the order of the hypergeometric function and obtain a double integral in terms of Gauss hypergeometric function ${_2 F_1} (2,2;4;-x)$, which in this case has elementary form. Then we integrate w.r.t. $t$ and obtain another integral representation:





$$\sum_{n= 2}^{\infty} \frac{(-1)^n\ln n}{n(n+1)}=\int_0^1 \Gamma(0,-\ln x) \left((2+x) \ln (1+x)-2x \right) \frac{dx}{x^3} \tag{2}$$




where the incomplete gamma function appears.






Using Simply Beautiful Art's result, we can also write:





$$\sum_{n= 2}^{\infty} \frac{(-1)^n\ln n}{n(n+1)}=\gamma \ln 2-\frac{\ln^2 2}{2} -\frac{1}{3} \int_0^1 {_3 F_2} \left(2,3,\frac{1}{t}+1;4,\frac{1}{t}+2;-1 \right)\frac{dt}{1+t} \tag{3}$$




The integral in $x$ will be a little more complicated than $(2)$.
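As a numerical cross-check (my addition, not from the original answer), one can compare the direct sum with representation $(1)$ using mpmath; this sketch assumes mpmath's hyp3f2 copes with the large parameters that appear near $t=0$:

```python
# Compare direct summation of sum_{n>=2} (-1)^n ln(n)/(n(n+1)) with the
# integral representation (1); both should give ~0.0632549.
from mpmath import mp, nsum, quad, hyp3f2, log, inf

mp.dps = 15  # working precision in digits

direct = nsum(lambda n: (-1)**n * log(n) / (n * (n + 1)), [2, inf])
rep1 = quad(lambda t: hyp3f2(2, 2, 1/t + 1, 4, 1/t + 2, -1) / (1 + t), [0, 1]) / 6

print(direct)
print(rep1)
```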


Saturday, April 29, 2017

analytic number theory - Relation between $\zeta(s)$ for $\Re(s) < 1$ and the summation $\sum_{k=1}^\infty k^{-s}$



First thing I want to mention is that this is not a topic about why $1+2+3+... = -1/12$ but rather the connection between this summation and $\zeta$.




I perfectly understand that the definition using the summation $\sum_{k=1}^\infty k^{-s}$ of the zeta function is only valid for $\Re(s) > 1$ and that the function is then extended through analytic continuation to the whole complex plane.



However, some details bother me: why can we manipulate the sum and still obtain the correct final answer?
$$
S_1 = 1-1+1-1+1-1+... = 1-(1-1+1-1+1-...)= 1-S_1 \implies S_1 = \frac{1}{2} \\
S_2 = 1-2+3-4+5-... \implies S_2 - S_1 = 0-1+2-3+4-5... = -S_2 \implies S_2 = \frac{1}{4} \\
S = 1+2+3+4+5+... \implies S-S_2 = 4(1+2+3+4+...) = 4S \implies S = -\frac{1}{12} \\
S "=" \zeta(-1)
$$
Clearly these manipulations are not legal since we're dealing with infinite non-converging sums. But it works! Why?

Is there a real connection between the analytic continuation which yields the "true" value $\zeta(-1) = -1/12$ and these "forbidden manipulations"? Could we somehow consider these manipulations as a "continuation of non-converging sums"? If so, is there a well-defined framework with definite rules? It is clear that we must be careful when playing with non-converging sums if we don't want to break mathematics (for example, the Riemann rearrangement theorem).



And since it seems that these illegal operations can be used to compute some value of zeta in the extended domain $Re(s) < 1$, are there other examples of such derivations, for example $0 = \zeta(-2) "=" 1^2 + 2^2 + 3^2 + 4^2 + ...$ ?



Hopefully this is not an umpteenth vague question about zeta and $1+2+3+4\ldots$; I did some research about it but couldn't find any satisfying answer. Thanks!


Answer



They "work" because the manipulations you present work where the original definition was valid. Notice that:



$$\eta(s)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}$$




This function can be made to converge for all $s:$



$$\eta(s)=\lim_{r\to1^{-}}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}r^n$$



Whereupon we find that for $s=0$,



$$\begin{align}S_{1r}&=r-r^2+r^3-\dots\\&=r-r(r-r^2+r^3-\dots)\\&=r-rS_{1r}\\\implies S_{1r}&=\frac r{1+r}\end{align}$$



Then take $r\to1$ to get $S_1=1/2$. Notice the similarities and differences between this and your method.




In the same manner,



$$\begin{align}S_{2r}&=r-2r^2+3r^3-\dots\\&=(r-r^2+r^3-\dots)-r(r-2r^2+3r^3-\dots)\\&=S_{1r}-rS_{2r}\\\implies S_{2r}&=\frac r{(1+r)^2}\end{align}$$



Letting $r\to1$, we get $S_2=1/4$.



Notice that in each of the steps above, if we replace $r$ with $1$, we get the methods you present, despite that they don't really make sense in that way.







Also notice that, by manipulation of the original definitions,



$$\begin{align}\zeta(s)-\eta(s)&=2\left(\frac1{2^s}+\frac1{4^s}+\frac1{6^s}+\dots\right)\\&=2^{1-s}\left(\frac1{1^s}+\frac1{2^s}+\frac1{3^s}+\dots\right)\\&=2^{1-s}\zeta(s)\end{align}$$



$$\zeta(s)=\frac1{1-2^{1-s}}\eta(s)=\frac1{1-2^{1-s}}\lim_{r\to1^{-}}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}r^n$$



In this way, everything you have presented makes sense in the context of $\Re(s)>1$, so it holds for all $s$ by analytic continuation.



Notice that things like the Riemann rearrangement theorem do not come into play, due to absolute convergence for $\Re(s)>1$, where we derive these formulas. Also, $S_r$ converges absolutely for any $|r|<1$. Also notice that what determines where a certain set of parentheses is allowed comes from these convergent scenarios.




From all of the above, I note that instead of using algebra, derivatives may be used to give




$$\zeta(-s)=\frac1{1-2^{1+s}}\lim_{r\to1}\left[\underbrace{r\frac d{dr}r\frac d{dr}\dots r\frac d{dr}}_s\frac{-1}{1+r}\right]$$




For whole numbers $s>0$.







The rule that allows this to work is: work only with convergent sums that are analytic continuations of the original sums. Then everything becomes normal and there are not so many worries.
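To make the "work only with convergent sums" rule concrete, here is a small numerical sketch (my addition, not part of the answer) that Abel-sums $\eta(-1)$ and rescales by $1-2^{1-s}$, recovering $\zeta(-1)=-\frac1{12}$:

```python
# Abel-style regularization: eta(s) = lim_{r->1^-} sum (-1)^(n+1) n^(-s) r^n,
# then zeta(s) = eta(s)/(1 - 2^(1-s)).  At s = -1, 1 - 2^(1-s) = 1 - 4 = -3.
def eta_abel(s, r):
    total, n = 0.0, 1
    while r**n > 1e-18 and n < 10**6:   # truncate once r^n is negligible
        total += (-1)**(n + 1) * n**(-s) * r**n
        n += 1
    return total

for r in (0.9, 0.99, 0.999):
    eta = eta_abel(-1.0, r)       # S_2r = r/(1+r)^2 -> 1/4 as r -> 1^-
    print(r, eta, eta / -3.0)     # last column -> -1/12 = -0.0833...
```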


Find the sum of the first $75$ terms of the arithmetic sequence that starts $5, 8, 11, \ldots$



Find the sum of the first $75$ terms of the arithmetic sequence that starts $5, 8, 11, \ldots$




The answer is $8700$.



I found the formula for the $x$th term to be $3x+2$.
So the $1$st term is
$$3(1)+2=5$$
2nd term
$$3(2)+2=8$$
3rd term
$$3(3)+2=11$$

And so on to the 75th term
$$3(75)+2=227$$
I did not get the right answer. What did I do wrong? Please help!


Answer



$$u_1=5$$
$$u_2=8=5+3$$
$$u_3=11=5+2\cdot 3$$
$$u_{75}=5+74\cdot 3=227$$



$$S=5+8+11+...+224+227$$

$$S=227+224+\cdots+8+5$$

Adding the two expressions termwise,
$$2S=232+232+\cdots+232=232\cdot 75$$



$$S=116\cdot 75=8700$$
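A one-line check in Python (my addition) confirms the value:

```python
# The 75 terms are 3k + 2 for k = 1..75, i.e. 5, 8, ..., 227.
print(sum(3 * k + 2 for k in range(1, 76)))   # 8700
print(75 * (5 + 227) // 2)                    # n * (first + last) / 2 = 8700
```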


summation - Equality of the sums $\sum\limits_{v=0}^k \frac{k^v}{v!}$ and $\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$


How can one prove the equality $$\sum\limits_{v=0}^k \frac{k^v}{v!}=\sum\limits_{v=0}^k \frac{v^v (k-v)^{k-v}}{v!(k-v)!}$$ for $k\in\mathbb{N}_0$?


Induction and generating functions don't seem to be useful.


The generating function of the right sum is simply $f^2(x)$ with $\displaystyle f(x):=\sum\limits_{k=0}^\infty \frac{(xk)^k}{k!}$



but for the left sum I still don't know.


It is $\displaystyle f(x)=\frac{1}{1-\ln g(x)}$ with $\ln g(x)=xg(x)$ for $\displaystyle |x|<\frac{1}{e}$.


Answer



Recall the combinatorial class of labeled trees which is


$$\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}}\mathcal{T} = \mathcal{Z}\times \textsc{SET}(\mathcal{T})$$


which immediately produces the functional equation


$$T(z) = z \exp T(z) \quad\text{or}\quad z = T(z) \exp(-T(z)).$$


By Cayley's formula we have


$$T(z) = \sum_{q\ge 1} q^{q-1} \frac{z^q}{q!}.$$


This yields



$$T'(z) = \sum_{q\ge 1} q^{q-1} \frac{z^{q-1}}{(q-1)!} = \frac{1}{z} \sum_{q\ge 1} q^{q-1} \frac{z^{q}}{(q-1)!} = \frac{1}{z} \sum_{q\ge 1} q^{q} \frac{z^{q}}{q!}.$$


The functional equation yields


$$T'(z) = \exp T(z) + z \exp T(z) T'(z) = \frac{1}{z} T(z) + T(z) T'(z)$$


which in turn yields


$$T'(z) = \frac{1}{z} \frac{T(z)}{1-T(z)}$$


so that


$$\sum_{q\ge 1} q^{q} \frac{z^{q}}{q!} = \frac{T(z)}{1-T(z)}.$$


Now we are trying to show that


$$\sum_{v=0}^k \frac{v^v (k-v)^{k-v}}{v! (k-v)!} = \sum_{v=0}^k \frac{k^v}{v!}.$$


Multiply by $k!$ to get



$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v} = k! \sum_{v=0}^k \frac{k^v}{v!}.$$


Start by evaluating the LHS.


Observe that when we multiply two exponential generating functions of the sequences $\{a_n\}$ and $\{b_n\}$ we get that


$$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!} \sum_{n\ge 0} b_n \frac{z^n}{n!} = \sum_{n\ge 0} \sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\ = \sum_{n\ge 0} \sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!} = \sum_{n\ge 0} \left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$


i.e. the product of the two generating functions is the generating function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$


In the present case we have $$A(z) = B(z) = 1 + \frac{T(z)}{1-T(z)} = \frac{1}{1-T(z)} $$ by inspection.


We added the constant term to account for the fact that $v^v=1$ when $v=0$ in the convolution. We thus have


$$\sum_{v=0}^k {k\choose v} v^v (k-v)^{k-v} = k! [z^k] \frac{1}{(1-T(z))^2}.$$


To compute this introduce


$$\frac{k!}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{(1-T(z))^2} \; dz$$



Using the functional equation we put $z=w\exp(-w)$ so that $dz = (\exp(-w)-w\exp(-w)) \; dw$ and obtain


$$\frac{k!}{2\pi i} \int_{|w|=\gamma} \frac{\exp((k+1)w)}{w^{k+1}} \frac{1}{(1-w)^2} (\exp(-w)-w\exp(-w)) \; dw \\ = \frac{k!}{2\pi i} \int_{|w|=\gamma} \frac{\exp(kw)}{w^{k+1}} \frac{1}{1-w} \; dw$$


Extracting the coefficient we get


$$k! \sum_{v=0}^k [w^v] \exp(kw) [w^{k-v}] \frac{1}{1-w} = k! \sum_{v=0}^k \frac{k^v}{v!}$$


as claimed.


Remark. This all looks very familiar but I am unable to locate the duplicate among my papers at this time.
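A quick exact-arithmetic check of the identity for small $k$ (my addition; note that Python's convention $0^0=1$ matches the boundary terms of the convolution):

```python
from fractions import Fraction
from math import factorial

def lhs(k):
    return sum(Fraction(k**v, factorial(v)) for v in range(k + 1))

def rhs(k):
    return sum(Fraction(v**v * (k - v)**(k - v),
                        factorial(v) * factorial(k - v)) for v in range(k + 1))

for k in range(8):
    assert lhs(k) == rhs(k)
print("identity verified for k = 0..7")
```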


Friday, April 28, 2017

sequences and series - Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$

I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals:


$$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$


I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?

Thursday, April 27, 2017

elementary number theory - $\gcd(b^x - 1, b^y - 1, b^z - 1,\ldots) = b^{\gcd(x, y, z,\ldots)} -1$




Dear friends,


Given that $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1, how can we prove that $$ \gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\ldots)= b ^ {\gcd (x, y, z, \ldots)} - 1 $$ ?


Thank you!



Paulo Argolo

What is the limit of this sequence as it approaches infinity

We've got a question that asks us to find the limit of this sequence as $n$ approaches infinity. I'm unsure whether we should use L'Hôpital's rule, or, if not, what we should use instead. I can see we may be able to use L'Hôpital, as the expression has the form infinity/infinity.



We have that



$$a_n=\frac{n\cos(n\pi+\pi/3)+n(-1)^n}{n^2+1}$$




Then, how do we evaluate $\lim_{n\to\infty}a_n$?

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

Wednesday, April 26, 2017

calculus - Elegant way to make a bijection from the set of the complex numbers to the set of the real numbers





Make a bijection that shows $|\mathbb C| = |\mathbb R| $



First I thought of splitting the complex numbers into their real and imaginary parts and then defining a formula that maps those parts to the real numbers. But I don't get anywhere with that. By this I mean that it's not a well-enough-defined bijection.





Can someone give me a hint on how to do this?




Maybe I need to read more about complex numbers.


Answer



You can represent every complex number as $z=a+ib$, so let us denote this complex number as $(a,b) , ~ a,b \in \mathbb R$. Hence the cardinality of the complex numbers is equal to that of $\mathbb R^2$.



So finally, we need a bijection between $\mathbb R$ and $\mathbb R^2$.



This can be shown using the argument used here.





Note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.




Mapping the unit square to the unit interval




There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.




This problem can be fixed.



First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.



Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.
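For what it's worth, here is an illustrative sketch (my addition) of the chunking and interleaving on finite digit-string prefixes; real inputs are of course infinite:

```python
def chunks(digits):
    """Split a digit string into blocks of zero-or-more '0's ending in a nonzero digit."""
    out, cur = [], ""
    for d in digits:
        cur += d
        if d != "0":
            out.append(cur)
            cur = ""
    return out   # a trailing run of zeros never completes a chunk

def interleave(a, b):
    # alternate chunks of a and b, as in the answer
    return "".join(x for pair in zip(chunks(a), chunks(b)) for x in pair)

# the example from the answer: 0.004999... and 0.01003430901111...
print(interleave("004999", "010034309011"))   # -> 0040190039493
```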




discrete mathematics - Determine the number of 0 digits at the end of 100!

I got this question, and I'm totally lost as to how I solve it!
Any help is appreciated :)



When 100! is written out in full, it equals
100! = 9332621...000000.

Without using a calculator, determine the number of 0 digits at the end of this number



EDIT:
Just want to confirm this is okay --



I got 24 by splitting products into 2 cases 1) multiples of 10 and 2) multiples of 5
Case I
(1*3*4*6*7*8*9*10)(100,000,000,000)--> 12 zeroes



Similarly got 12 zeroes for Case 2.




So 24 in total? Is that correct?
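For what it's worth, a short Python check (my addition) confirms the count of $24$:

```python
# Count trailing zeros of 100! directly, and via Legendre's formula
# (the number of factors of 5; factors of 2 are plentiful).
from math import factorial

s = str(factorial(100))
print(len(s) - len(s.rstrip("0")))   # 24
print(100 // 5 + 100 // 25)          # 24
```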

Tuesday, April 25, 2017

calculus - Evaluate the limit of the sequence: $\lim_{n\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}$



Evaluate the limit of the sequence:



$$\lim_{n\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}$$







My try:



Stolz–Cesàro: the sequence has the indeterminate form $\frac{\infty}{\infty}$.



$$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$



For our sequence:



$\lim_{n\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}=\lim_{n\to\infty}\frac{\sqrt{n!}-\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})\cdot(1+\sqrt{n+1})-(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}=\lim_{n\to\infty}\frac{\sqrt{(n-1)!}\cdot(\sqrt{n}-1)}{\left((1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})\right)\cdot\sqrt{n+1}}$




Which got me nowhere.


Answer



Consider:
$$
(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})
$$



Keep only the root from each pair of parentheses (since $1+\sqrt{k}>\sqrt{k}$ for each $k$) and multiply; then:
$$
(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n}) > \sqrt{n!} \iff \\

\iff \frac{1}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})} < \frac{1}{\sqrt{n!}}
$$

Going back to original we have that:
$$
\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})} \le \frac{\sqrt{(n-1)!}}{\sqrt{n!}} = \frac{1}{\sqrt n}
$$



But the sequence is greater than $0$, and hence using the squeeze theorem we conclude that:
$$
0 \le \lim_{n\to\infty}x_n \le \lim_{n\to\infty}\frac{1}{\sqrt n} = 0

$$



Hence the limit is $0$.


sequences and series - The sides of a triangle are in Arithmetic progression



If the sides of a triangle are in Arithmetic progression and the greatest and smallest angles are $X$ and $Y$, then show that




$$4(1- \cos X)(1-\cos Y) = \cos X + \cos Y$$



I tried using sine rule but can't solve it.


Answer



Let $a-d,a,a+d$ (with $a>d$) be the three sides of the triangle, so $X$ corresponds to the side with length $a-d$ and $Y$ to the side with length $a+d$. (The identity to be proved is symmetric in $X$ and $Y$, so this labeling is harmless.) Using the cosine formula,
\begin{align*}
\cos X & = \frac{(a+d)^2+a^2-(a-d)^2}{2a(a+d)}=\frac{a+4d}{2(a+d)}\\
\cos Y & = \frac{(a-d)^2+a^2-(a+d)^2}{2a(a-d)}=\frac{a-4d}{2(a-d)}\\
\end{align*}
Then

$$\cos X +\cos Y=\frac{a^2-4d^2}{a^2-d^2}=4 \frac{(a-2d)}{2(a+d)}\frac{(a+2d)}{2(a-d)}=4(1-\cos X)(1-\cos Y).$$
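A quick numerical check (my addition) with the triangle $3,5,7$ (sides in AP with $a=5$, $d=2$):

```python
from math import acos, cos

def angle_opposite(s, t, u):
    # angle opposite side s in a triangle with sides s, t, u (law of cosines)
    return acos((t**2 + u**2 - s**2) / (2 * t * u))

X = angle_opposite(3, 5, 7)   # angle opposite the smallest side
Y = angle_opposite(7, 3, 5)   # angle opposite the greatest side

print(4 * (1 - cos(X)) * (1 - cos(Y)))   # 0.42857... = 3/7
print(cos(X) + cos(Y))                   # 0.42857... = 3/7
```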


analysis - Set Theoretic Definition of Complex Numbers: How to Distinguish $\mathbb{C}$ from $\mathbb{R}^2$?



I have spent some time looking for a rigorous, set-theoretic definition of the complex numbers. I have read the book Elements of Set Theory by Herbert Enderton (1977) which does an excellent job of constructing numbers from sets including the natural numbers, integers, and rational numbers, but stops at the real numbers.




So far, I have only found two comparable constructions of complex numbers




  • The set of all $2 \times 2$ matrices taking real-valued components

  • The set of all ordered pairs taking real-valued components



I favor the second construction, because I feel it has a stronger geometric interpretation owing to its similarities to Euclidean vector spaces. That is, define
\begin{equation*}
\mathbb{C}=\{(x,y):x,y \in \mathbb{R}\},

\end{equation*}
which also is exactly how the Euclidean plane, $\mathbb{R}^2$, is defined.



This leads me to my question. With $\mathbb{C}$ defined exactly the same as how one defines $\mathbb{R}^2$, how does one distinguish the elements of these two sets? For example, how does one distinguish the ordinary vector $(0,1) \in \mathbb{R}^2$ from what we define to be $i$, namely the number $i=(0,1) \in \mathbb{C}$, when they are set-theoretically identical? In set theory, these two very different "numbers" -- the vector $(0,1)$ and the number $i$ -- are exactly the same set!



Thanks for your thoughts!


Answer



Consider these two ordered sets:





  1. $(\{0,1\},<)$ where $<$ is the usual order, $0<1$.

  2. $(\{0,1\},\prec)$ where $\prec$ is the discrete order, $1\nprec 0$ and $0\nprec1$.



How do you distinguish between $0$ in the first and in the second? It's the same set, $\{0,1\}$! And indeed you cannot distinguish between them. If it's the same set, then it's the same set. Period.



But $\Bbb C$ and $\Bbb R^2$ have additional structure, they are not just sets. They have addition, multiplication, and so on defined on them. How do you distinguish between $0$ in the first order and in the second? You don't. How do you distinguish between the two ordered sets? They are different ordered sets, one is linear and the other is not.



If you define $\Bbb C$ as a field whose underlying set is $\Bbb R^2$, and you can do that, then you do not distinguish $(0,1)$ from $i$. They are defined to be the same object. But you distinguish $\Bbb C$ and $\Bbb R^2$ by the fact they have different multiplication defined on them, one is a field and the other is a ring with zero divisors.




Of course, you can define $\Bbb C$ as a quotient of $\Bbb R[x]$ instead, which gives a completely different underlying set.


Monday, April 24, 2017

functional equations - Does a function that satisfies the equality $f(a+b) = f(a)f(b)$ have to be exponential?



I understand the other way around, where if a function is exponential then it will satisfy the equality $f(a+b)=f(a)f(b)$. But is every function that satisfies that equality always exponential?



Answer



First see that $f(0)$ is either 0 or 1. If $f(0)=0$, then for all $x\in\mathbb R$, $f(x)=f(0)f(x)=0$. In this case $f(x)=0$ a constant function.


Let's assume $f(0)=1$. See that for positive integer $n$, we have $f(nx)=f(x)^n$ which means $f(n)=f(1)^n$. Also see that: $$ f(1)=f(n\frac 1n)=f(\frac 1n)^n\implies f(\frac 1n)=f(1)^{1/n}. $$ Therefore for all positive rational numbers: $$ f(\frac mn)=f(1)^{m/n}. $$ If the function is continuous, then $f(x)=f(1)^x$ for all positive $x$. For negative $x$ see that: $$ f(0)=f(x)f(-x)\implies f(x)=\frac{1}{f(-x)}. $$ So in general $f(x)=a^x$ for some $a>0$.



Without continuity, consider the relation $xRy$ if $x-y\in \mathbb Q$ (the quotient group $\mathbb R/\mathbb Q$). This is an equivalence relation and partitions $\mathbb R$ into cosets with representatives $z$. On each coset $z+\mathbb Q$ we have $f(z+q)=f(z)f(1)^q$, so along each coset the function behaves exponentially, scaled by $f(z)$.


elementary number theory - Show $a\Bbb Z+b\Bbb Z = \gcd(a,b)\Bbb Z$



I have the following problem:



Let $a, b \in\mathbb{Z}$. Show that $\,\{ ax + by\ :\ x, y \in \mathbb{Z}\} = \{ n \gcd(a,b)\ :\ n\in \mathbb{Z} \}$




I understand that Bezout's lemma says that $\gcd(a,b) = ax +by$ for some integers $x,y$, but I'm not really sure how you would go about proving the above; it doesn't really make sense to me. Any help is appreciated!


Answer



By Bezout, for some $\,j,k\in\Bbb Z\!:$ $\ n\gcd(a,b) = n(aj\!+\!bk)\,$ $\Rightarrow$ $\,\gcd(a,b)\,\Bbb Z\subseteq a\Bbb Z+b\Bbb Z.\, $ Reversely,



$\,\gcd(a,b)\mid a,b\,\Rightarrow\,\gcd(a,b)\mid ax\!+\!by,\,$ so $\,ax\!+\!by = n\gcd(a,b),\,$ so $\,a\,\Bbb Z+b\,\Bbb Z\subseteq \gcd(a,b)\Bbb Z$






Or we can induct. $ $ wlog $\,a,b> 0\,$ by $\,-a\Bbb Z = a\Bbb Z,\, (\pm a,\pm b) = (a,b),\,$ and it is true if $\,a\,$ or $\,b=0.$




Proof by induction on $\,\color{#90f}{{\rm size} := a+b}.\,$ True if $\,a = b\!:\ a\Bbb Z + a\Bbb Z = a\Bbb Z.\,$ Else $\,a\neq b.\,$ By symmetry, wlog $\,a>b.\,$ $\,a\Bbb Z+b\Bbb Z = \color{#0a0}{(a\!-\!b)\Bbb Z+b\Bbb Z} = (a\!-\!b,b)\Bbb Z = (a,b)\Bbb Z\,$ because the $\,\color{#0a0}{\rm green}\,$ instance has smaller $\,\color{#90f}{{\rm size}} = (a\!-\!b)+b = a < \color{#90f}{a+b},\,$ so $\rm\color{}{induction}\,$ applies. $\ $ QED
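An empirical illustration (my addition) for $a=12$, $b=28$: over a finite window, the set of integer combinations $ax+by$ coincides with the multiples of $\gcd(a,b)$:

```python
from math import gcd

a, b = 12, 28
g = gcd(a, b)   # 4

combos = {a * x + b * y for x in range(-50, 51) for y in range(-50, 51)}
window = set(range(-200, 201))

print(combos & window == {n for n in window if n % g == 0})   # True
```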


What's the short proof that for square matrices $AB = I$ implies $BA = I$?








I'm trying to remember the one line proof that for square matrices $AB = I$ implies $BA = I$.
I think it uses only elementary matrix properties and nothing else. Does anyone know the proof?



I remember it beginning with $BAB = B$ and the result following almost immediately.

arithmetic - Word problem with group dividing time evenly



I was asked this question in an interview and am unsure of the correct answer:



There are 5 people invigilating a test but only 4 of them are to be present at a time. The test is 3 hours. Each invigilator works the same amount of time. How long does each work?



I know it's between $\frac{3}{5}$ and $3$ hours.


Answer



If each works $h$ hours, then $5h = 4 \times 3$ (the left-hand side is the number of person-hours worked by considering each invigilator; the right-hand side is the number of person-hours worked by considering the exam hall at each instance of time). So $h = \frac{12}{5}$.



Sunday, April 23, 2017

trigonometry - Proving that $\frac{\sin \alpha + \sin \beta}{\cos \alpha + \cos \beta} =\tan \left ( \frac{\alpha+\beta}{2} \right )$

Using double angle identities a total of four times, one for each expression on the left-hand side, I arrived at this.




$$\frac{\sin \alpha + \sin \beta}{\cos \alpha + \cos \beta} = \frac{\sin \left ( \frac{\alpha}{2}\right ) \cos \left ( \frac{\alpha}{2}\right ) + \sin \left ( \frac{\beta}{2}\right ) \cos \left ( \frac{\beta}{2}\right )}{\cos^2 \left ( \frac{\alpha}{2} \right) - \sin ^2 \left ( \frac{\beta}{2} \right )}$$



But I know that if $\alpha$ and $\beta$ are angles in a triangle, then this expression should simplify to



$$\tan \left ( \frac{\alpha + \beta}{2} \right )$$



I can see that the denominator becomes $$\cos \left ( \frac{\alpha + \beta}{2} \right ) $$



But I cannot see how the numerator becomes




$$\sin \left ( \frac{\alpha + \beta}{2} \right )$$



What have I done wrong here?

modular arithmetic - Solving system of linear congruences

I have the following:



$$12x+28y=20$$



I'm trying to find solutions to the equation above defined by: $12x\equiv 20\pmod {28}$



The GCD is $d = \gcd(28,12)=4$, and since $4 \mid 20$, there exist $4$ solutions. (Please correct me if I'm wrong.)



Using the extended Euclidean algorithm, we find $x_0=-2$ and $y_0=1$. The general solution is given by $$x_0+t\left(\frac nd\right),$$ which in our case gives $-2+7t$. But how can we have a negative remainder? Surely $x\equiv-2 \pmod 7$ can't happen.

calculus - Why does Euler's formula have to be $e^{ix} = \cos(x) + i\sin(x)$




In part one of this YouTube video, the uploader explains the calculus proof of Euler's formula.



The Formula
$$e^{ix} = \cos(x) + i\sin(x)$$
Write $e^{ix} = f(x) + ig(x)$ with $f,g$ real, and differentiate
$$ie^{ix} = f'(x) + i g'(x)$$
Multiply original formula by $i$
$$ie^{ix} = if(x) - g(x)$$
Equate the differentiation and the multiplied version

$$f'(x) + ig'(x) = if(x) - g(x)$$
Equate real and imaginary (and cancel the i)
$$f'(x) = -g(x) \qquad g'(x) = f(x)$$



Then he goes on to explain $f(x) = \cos(x)$ and $g(x) = \sin(x)$. My question is why can't $f(x) = \sin(x)$ and $g(x) = -\cos(x)$? Can further proof be added to this proof to eliminate $f(x) = \sin(x)$ and $g(x) = -\cos(x)$?


Answer



Because we know the initial condition $e^{i0}=1$ holds, which forces $f(0)=1$ and $g(0)=0$; the pair $f(x)=\sin(x)$, $g(x)=-\cos(x)$ fails this, since $\sin(0)=0$. As with most differential equations, there's a family of solutions, and you need to use the initial condition to pick out the correct one.


linear algebra - Determinant of rank-one perturbation of a diagonal matrix



Let $A$ be a rank-one perturbation of a diagonal matrix, i. e. $A = D + s^T s$, where $D = \DeclareMathOperator{diag}{diag} \diag\{\lambda_1,\ldots,\lambda_n\}$, $s = [s_1,\ldots,s_n] \neq 0$. Is there a way to easily compute its determinant?




On the one hand, $s^Ts$ has rank one so that it has only one non-zero eigenvalue, which is equal to its trace $|s|^2 = s_1^2+\cdots+s_n^2$. On the other hand, if $D$ were a scalar operator (i.e. all $\lambda_i$'s were equal to some $\lambda$) then all eigenvalues of $A$ would be shifts of the eigenvalues of $s^T s$ by $\lambda$. Thus one eigenvalue would be equal to $\lambda+|s|^2$ and the others to $\lambda$. Hence in this case we would obtain $\det A = \lambda^{n-1} (\lambda+|s|^2)$. But is it possible to generalize these considerations to the case of diagonal non-scalar $D$?


Answer



As developed in the comments, for positive diagonal entries:



$$\det(D + s^Ts) = \prod\limits_{i=1}^n \lambda_i + \sum_{i=1}^n s_i^2 \prod\limits_{j\neq i} \lambda_j $$



Its general application can be deduced by extension from the positive cone of $\mathbb{R}^n$ by analytic continuation. Alternatively, we can advance a slightly modified argument for all nonzero diagonal entries. The determinant is a polynomial in the $\lambda_i$'s, so proving the formula for nonzero $\lambda_i$'s enables us to prove it for all $D$ by a brief continuity argument.



First assume all $\lambda_i \neq 0$, and define vector $v$ by $v_i = s_i/\lambda_i$. Similar to the OP's observations:




$$ \det(D+s^Ts) = \det(I+s^Tv)\det(D) = (1 + \sum\limits_{i=1}^n s_i^2/\lambda_i) \prod\limits_{i=1}^n \lambda_i $$



where $\det(I+s^Tv)$ is the product of $(1 + \mu_i)$ over all the eigenvalues $\mu_i$ of $s^Tv$. As the OP noted, at most one of these eigenvalues is nonzero, so the product equals $1$ plus the trace of $s^T v$, i.e. the potentially nonzero eigenvalue, and that trace is the sum of entries $s_i^2/\lambda_i$.



Distributing the product of the $\lambda_i$'s over that sum gives the result at top. If some of the $\lambda_i$'s are zero, the formula can be justified by taking a sequence of perturbed nonzero $\lambda_i$'s whose limit is the required $n$-tuple. By continuity of the polynomial the formula holds for all diagonal $D$.
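A numerical spot-check of the formula (my addition), using NumPy:

```python
# det(D + s^T s) = prod_i lambda_i + sum_i s_i^2 * prod_{j != i} lambda_j
import numpy as np

rng = np.random.default_rng(0)
lam = rng.normal(size=5)            # diagonal entries, signs unrestricted
s = rng.normal(size=5)

A = np.diag(lam) + np.outer(s, s)

prod_all = np.prod(lam)
correction = sum(s[i]**2 * np.prod(np.delete(lam, i)) for i in range(5))

print(np.linalg.det(A), prod_all + correction)   # the two values agree
```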


number theory - Explanation of Zeta function and why 1+2+3+4+... = -1/12











I found this article on Wikipedia which claims that $\sum\limits_{n=0}^\infty n=-1/12$. Can anyone give a simple and short summary on the Zeta function (never heard of it before) and why this odd result is true?


Answer



The answer is much more complicated than $\lim_{x \to 0} \frac{\sin(x)}{x}$.



The idea is that the series $\sum_{n=1}^\infty \frac{1}{n^z}$ is convergent when $\Re(z) >1$, and this works also for complex numbers.



The sum defines a nice function (analytic) and can be extended in a unique way to a nice function $\zeta$. This means that




$$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z} \,;\, Re(z) >1 \,.$$



Now, when $z=-1$, the right side is NOT convergent, still $\zeta(-1)=\frac{-1}{12}$. Since $\zeta$ is the ONLY way to extend $\sum_{n=1}^\infty \frac{1}{n^z}$ to $z=-1$, it means that in some sense



$$\sum_{n=1}^\infty \frac{1}{n^{-1}} =-\frac{1}{12}$$



and this is exactly what that means. Note that, in order for this to make sense, on the LHS we don't have convergence of the series; we have a much more subtle type of convergence: we actually ask that the function $\sum_{n=1}^\infty \frac{1}{n^z}$ is differentiable as a function of $z$ and let $z \to -1$...



In some sense, the phenomenon is close to the following:




$$\sum_{n=0}^\infty x^n =\frac{1}{1-x} \,;\, |x| <1 .$$



Now, the LHS is not convergent for $x=2$, but the RHS function makes sense at $x=2$. One could say that this means that in some sense $\sum_{n=0}^\infty 2^n =-1$.



Anyhow, because of the analyticity of the Riemann zeta function, the statement about $\zeta(-1)$ is actually much more subtle, and true on a more formal level, than this geometric statement...


Saturday, April 22, 2017

The Limit of $\frac{2^{\sqrt{\log(\log n)}}}{\log n}$



Wolfram tells me that the the limit is $0$ when $n$ goes to infinity.
Unfortunately, I have no idea how to prove it...



$$\lim_{n\to\infty}\frac {2^\sqrt { \log(\log n)}}{\log n}.$$




Any help would be appreciated,
thanks in advance.


Answer



Hints:




  1. The logarithm of this quantity is $\log 2\cdot\sqrt{\log(\log n)}-\log(\log n)$.


  2. When $n\to+\infty$, $\log(\log n)\longrightarrow$ $_________$.


  3. When $x\to+\infty$, $\log2\cdot\sqrt{x}-x\longrightarrow$ $_________$.


  4. Hence $\log2\cdot\sqrt{\log(\log n)}-\log(\log n)\longrightarrow$ $________$ when $n\to+\infty$.



  5. And finally $2^{\sqrt{\log(\log n)}}/\log n\longrightarrow$ $_________$ when $n\to+\infty$.



big list - What is the most unusual proof you know that $\sqrt{2}$ is irrational?



What is the most unusual proof you know that $\sqrt{2}$ is irrational?



Here is my favorite:





Theorem: $\sqrt{2}$ is irrational.



Proof:
$3^2-2\cdot 2^2 = 1$.




(That's it)



That is a corollary of this result:

Theorem: If $n$ is a positive integer and there are positive integers $x$ and $y$ such that $x^2-ny^2 = 1$, then $\sqrt{n}$ is irrational.

The proof is in two parts, each of which has a one line proof.

Part 1:

Lemma: If $x^2-ny^2 = 1$, then there are arbitrarily large integers $u$ and $v$ such that $u^2-nv^2 = 1$.

Proof of part 1: Apply the identity $(x^2+ny^2)^2-n(2xy)^2 =(x^2-ny^2)^2$ as many times as needed.

Part 2:

Lemma: If $x^2-ny^2 = 1$ and $\sqrt{n} = \frac{a}{b}$ then $x < b$.

Proof of part 2: $1 = x^2-ny^2 = x^2-\frac{a^2}{b^2}y^2 = \frac{x^2b^2-y^2a^2}{b^2}$, or $b^2 = x^2b^2-y^2a^2 = (xb-ya)(xb+ya) \ge xb+ya > xb$, so $x < b$.

These two parts are contradictory, so $\sqrt{n}$ must be irrational.

Two things to note about this proof.

First, this does not need Lagrange's theorem that for every non-square positive integer $n$ there are positive integers $x$ and $y$ such that $x^2-ny^2 = 1$.

Second, the key property of positive integers needed is that if $n > 0$ then $n \ge 1$.


Answer



Suppose that $\sqrt{2} = a/b$, with $a,b$ positive integers. Meaning $a = b\sqrt{2}$. Consider $$A = \{ m \in \Bbb Z \mid m > 0 \text{ and }m\sqrt{2} \in \Bbb Z \}.$$



Well, $A \neq \varnothing$, because $b \in A$. By the well-ordering principle, $A$ has a least element, $s$. And $s,s\sqrt{2} \in \Bbb Z_{>0}$. Then consider the integer: $$r= s\sqrt{2}-s.$$
We have $r =s(\sqrt{2}-1) < s$, and $r > 0$. But $r\sqrt{2} = 2s-s\sqrt{2}$ is again an integer. Hence $r \in A$ and $r < s$, contradiction.


real analysis - Computing a directional derivative.



Let $f$ be a function that satisfies: $$f: \mathbb{R}^2 \rightarrow \mathbb{R} , (x,y) \mapsto
\begin{cases}
0 & \text{for } (x,y)=(0,0) \\
\frac{x^3}{x^2+y^2} & \text{for } (x,y) \neq (0,0)

\end{cases} $$
I want to show that $f$ has a directional derivative at the point $(0,0)$ in every direction. So, let $v=(v_1,v_2) \in \mathbb{R}^2, (v_1,v_2) \neq (0,0)$. $f$ has a directional derivative at $(0,0)$ in direction $v$ if and only if the following limit exists:
$$\lim_{t->0}\frac{f(a+tv)-f(a)}{t}=\lim_{t->0}\frac{f(tv_1,tv_2)}{t}=\lim_{t->0}\frac{t^3v_1^3}{t(t^2v_1^2+t^2v_2^2)}=\frac{v_1^3}{v_1^2+v_2^2} $$.



So I have shown that the above limit exists and that it depends on the choice of the vector $v$. Hence the directional derivative exists at the point $(0,0)$ in every direction. The main question is:



Am I thinking correctly?


Answer



Yes, you are right. The directional derivative exists at $(0,0)$ in every direction $v=(v_1,v_2)$ and is $=\frac{v_1^3}{v_1^2+v_2^2}.$


linear algebra - Quick ways to _verify_ determinant, minimal polynomial, characteristic polynomial, eigenvalues, eigenvectors ...

What are easy and quick ways to verify determinant, minimal polynomial, characteristic polynomial, eigenvalues, eigenvectors after calculating them?



So if I calculated determinant, minimal polynomial, characteristic polynomial, eigenvalues, eigenvectors, what are ways to be sure that I didn't do a major mistake? I don't want to verify my solutions all the way through, I just want a quick way which gives me that it is highly likely that the calculated determinant is right etc.






Let $A$ be a matrix $A \in \operatorname{Mat}(n, \mathbb{C})$,




let $\det(A)$ be the determinant of matrix $A$,



let $v_1, v_2, ..., v_k$ be eigenvectors of matrix $A$,



let $\lambda_1, \lambda_2, ..., \lambda_n$ be eigenvalues of matrix $A$,



let $\chi_A(t) = t^n + a_{n-1}t^{n-1}+\cdots + a_0 = (t-\lambda_1)\cdots(t-\lambda_n)$ be the characteristic polynomial of matrix $A$,



let $\mu_A(t)$ be the minimal polynomial of matrix $A$.







Verifications suggested so far:



eigenvectors / eigenvalues




  • $\det(A) = \lambda_1^{m_1} \lambda_2^{m_2} \cdots \lambda_l^{m_l}$ where $m_i$ is the multiplicity of the corresponding eigenvalue

  • $a_0 = (-1)^n\lambda_1\cdots\lambda_n$

  • eigenvectors can be verified by multiplying with the matrix; the eigenvalues can be verified at the same time; i.e. $A v_i = \lambda_i v_i$




determinant




  • $\det(A) = \lambda_1^{m_1} \lambda_2^{m_2} \cdots \lambda_l^{m_l}$ where $m_i$ is the multiplicity of the corresponding eigenvalue



characteristic / minimal polynomial





  • $a_0 = (-1)^n\lambda_1\cdots\lambda_n$

  • $\mu_A(A) = 0$ and $\chi_A(A) = 0$

  • $\mu_A(t) \mid \chi_A(t)$
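Here is a small numerical sketch (my addition) of a few of these checks with NumPy, on an example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

w, V = np.linalg.eig(A)

# A v_i = lambda_i v_i verifies eigenpairs and eigenvalues at the same time
for lam_i, v_i in zip(w, V.T):
    assert np.allclose(A @ v_i, lam_i * v_i)

# det(A) equals the product of the eigenvalues (with multiplicity)
assert np.isclose(np.linalg.det(A), np.prod(w))

# chi_A(A) = 0 (Cayley-Hamilton); np.poly gives the characteristic coefficients
coeffs = np.poly(A)
chi_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
            for k, c in enumerate(coeffs))
assert np.allclose(chi_A, np.zeros_like(A))
print("all checks passed")
```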

discrete mathematics - Proof of divisibility by 2 and 3 if and only if divisible by 6

I can't find a way of proving that:




For an integer $a$: $a$ is divisible by $2$ and by $3$ if and only if $a$ is divisible by $6$.





I’m not sure where to go from here. Any help would be great!

algebra precalculus - What is the next step in the proof? (Mathematical Induction) $\left(x^{n}+1\right)<\left(x+1\right)^{n}$



I have to prove this preposition by mathematical induction:




$$\left(x^n+1\right)<\left(x+1\right)^n \quad \forall n\geq 2 \quad \text{and}\quad x>0,\,\, n \in \mathbb{N}$$



I started the proof with $n=2$:



$\left(x^{2}+1\right)<\left(x+1\right)^{2}$



$x^{2}+1<x^{2}+2x+1$



We see that




$x^{2}+1-x^{2}-1<2x$



$0<2x$



Then



$x>0$



And this one carries out for $n=2$




Now for $\quad n=k \quad$ (Hypothesis)



$\left(x^{k}+1\right)<\left(x+1\right)^{k}$



We have



$\displaystyle x^{k}<\left(x+1\right)^{k}-1\ldots \quad (1)$



Then, we must prove for $\quad n= k+1 \quad$ (Thesis):




$x^{k+1}+1<\left(x+1\right)^{k+1}$



We rewrite the previous expression as:



$x^{k+1}<\left(x+1\right)^{k+1}-1\ldots \quad (2)$



According to the steps of mathematical induction, the next step would be to use the hypothesis $(1)$ to prove the thesis $(2)$. It is here that I hesitate about whether what I am going to write next is correct:



First way:




We multiply hypothesis $(1)$ by $\left(x+1\right)$ and we have:



$x^{k}\left(x+1\right)<\left[\left(x+1\right)^{k}-1\right]\left(x+1\right)$



$x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)$



Dividing the last expression by $\left(x+1\right)$, we have again expression $(1)$:



$\displaystyle \frac{x^{k}\left(x+1\right)}{x+1}<\frac{\left(x+1\right)^{k+1}-\left(x+1\right)}{x+1}$




$x^{k}<\left(x+1\right)^{k}-1$



Second way:



If we multiply $(1)$ by $x$ we have:

$x\cdot x^{k}<x\left[\left(x+1\right)^{k}-1\right]$

$x^{k+1}<x\left(x+1\right)^{k}-x$

And if we again divide the last expression by $x$, we arrive at the same result:

$\displaystyle \frac{x^{k+1}}{x}<\left(x+1\right)^{k}-1$

$x^{k}<\left(x+1\right)^{k}-1$



I cannot find another way to advance the proof. Another way to solve the problem would be using Newton's binomial theorem, but the point of the exercise is the technique of mathematical induction. If someone can help me, I will be very grateful!
Thanks
-Víctor Hugo-



Answer



Suppose that $(1+x)^n>1+x^n$ for some $n\ge 2$. Then
$$
(1+x)^{n+1}=(1+x)^n(1+x)>(1+x^n)(1+x)=1+x+x^n+x^{n+1}>1+x^{n+1}
$$

since $x>0$; in the first inequality we used the induction hypothesis.


algebra precalculus - How can $\frac{4}{3} \times 3=4$ if $\frac{4}{3}$ is $1.3$?

OK, use your nearest calculator and type $\frac{4}{3}$, which gives $1.3333333333$; then multiply that by $3$, which gives $3.9999999999$. But then type $\frac{4}{3} \times 3$ and you get $4$. How? How can it be $4$ if $\frac{4}{3}$ is $1.3333333333$, and multiplying that by $3$ gives $3.9999999999$?
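What the calculator is doing can be illustrated in Python (my addition): the displayed $1.3333333333$ is a truncation of $\frac43$, and the error shows up when you multiply it back; exact rational arithmetic has no such loss:

```python
from fractions import Fraction

truncated = 1.3333333333      # a 10-digit truncation of 4/3
print(truncated * 3)          # 3.9999999999  (the truncation error surfaces)

exact = Fraction(4, 3)        # 4/3 kept exactly as a fraction
print(exact * 3)              # 4
```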

Friday, April 21, 2017

Integration of $\int_0^\infty\frac{1-\cos x}{x^2(x^2+1)}\,dx$ by means of complex analysis



Dear all: this time I have the integral $$\int_0^\infty\frac{1-\cos x}{x^2(x^2+1)}\,dx$$and we must try to solve it using complex integration, residues, Cauchy's Theorem and the whole lot. (BTW, does anyone have any idea whether this integral can be solved without complex functions?)



$\underline{\text{What I did}}$: Letting $\,\gamma\,$ be the integration path containing the segments $$\begin{align}(i)&\,\,\text{the real interval} \,[-R\,,\,-\epsilon]\\(ii)&\,\,\text{the "little" half circle} \,\{z\;|\;z=\epsilon e^{i\theta}\,,\,\theta\in [0,\pi]\}\\(iii)&\,\,\text{ the real interval}\,[\epsilon\,,\,R]\\(iv)&\,\,\text{ and the "big" half circle}\,\{z\;|\;z=R e^{i\theta}\,,\,\theta\in [0,\pi]\}\end{align}$$ we take the integral $$I:=\oint_\gamma\frac{1-e^{iz}}{z^2(z^2+1)}\,dz$$
As the only pole of this function within $\,\gamma\,$ is the simple one $\,z=i\,$ (for $\,\epsilon<1<R\,$), the residue theorem gives $$I=2\pi i\operatorname*{Res}_{z=i}\frac{1-e^{iz}}{z^2(z^2+1)}=2\pi i\,\frac{1-e^{-1}}{-2i}=\pi\left(\frac{1}{e}-1\right).$$

We now pass to evaluate the above integral on each segment of $\,\gamma\,$ described above:$$\text{on}\,(iv)\,\text{it is easy:}\,\left|\frac{1-e^{iz}}{z^2(z^2+1)}\right|\leq\frac{1+e^{-R\sin\theta}}{R^2(R^2-1)}\xrightarrow[R\to\infty]{} 0$$




On $\,(i)\,,\,(iii)\,$ together and letting $\,R\to \infty\,$ we get $\,\displaystyle{\int_{-\infty}^\infty\frac{1-\cos x}{x^2(x^2+1)}\,dx}\,$ , which isn't a problem as the integrand function is even.



So here comes the problem: on $\,(ii)\,$ we have:$$z=\epsilon e^{i\theta}\Longrightarrow dz=\epsilon ie^{i\theta}d\theta\,,\,0\leq\theta\leq \pi\,\,\text{but going from left to right, so}$$$$\oint_{z=\epsilon e^{i\theta}}\frac{1-e^{iz}}{z^2(z^2+1)}\,dz=\int_\pi^0\frac{1-e^{i\epsilon e^{i\theta}}}{\epsilon^2e^{2i\theta}\left(\epsilon^2e^{2i\theta}+1\right)}\,\epsilon ie^{i\theta}\,d\theta$$



Now, the only thing I could came up with to evaluate the above integral when $\,\epsilon\to 0\,$ is to get the limit into the integral, getting $$\lim_{\epsilon\to 0}\frac{1-e^{i\epsilon e^{i\theta}}}{\epsilon e^{i\theta}\left(\epsilon^2 e^{2i\theta}+1\right)}=-i\Longrightarrow \int_\pi^0\frac{1-e^{i\epsilon e^{i\theta}}}{\epsilon^2e^{2i\theta}\left(\epsilon^2e^{2i\theta}+1\right)}\,\epsilon ie^{i\theta}\,d\theta\xrightarrow [\epsilon\to 0]{} -\pi$$applying L'Hospital, so the final result is$$\pi\left(\frac{1}{e}-1\right)=I\xrightarrow [R\to\infty\,,\,\epsilon\to 0]{} \int_{-\infty}^\infty\frac{1-\cos x}{x^2(x^2+1)}\,dx-\pi$$from which we get the value of $\,\displaystyle{\frac{\pi}{2e}}\,$ for our integral, which is correct (at least according to Wolframalpha), yet...



How can I justify the introduction of the limit into the integral?? The only way that seems possible to me (if at all) is to substitute $$\epsilon\to\frac{1}{\delta}$$ to get an integral with upper limit equal to $\,\infty\,$ in $\,(ii)\,$ above, and then use the dominated convergence theorem (or perhaps the monotone one).



My question is twofold: Is the substitution just described what can put me out of my misery in this case? And: is it possible to justify the passage of the limit into the integral without making the substitution and, thus, without resorting to an integral with infinite upper limit?




Thank you to anyone investing his or her time just to read this question; of course, any ideas and corrections will be deeply appreciated.


Answer



$$\int_0^\infty\frac{1-\cos x}{x^2(x^2+1)}\,dx$$



Since



$$\frac{1}{{{x^2}({x^2} + 1)}} = \frac{1}{{{x^2}}} - \frac{1}{{{x^2} + 1}}$$



You have




$$\int_0^\infty {\frac{{1 - \cos x}}{{{x^2}}}} {\mkern 1mu} - \int_0^\infty {\frac{{1 - \cos x}}{{1 + {x^2}}}} dx$$



Now



$$\int_0^\infty {\frac{{1 - \cos x}}{{{x^2}}}} {\mkern 1mu} =- \left. {\frac{{1 - \cos x}}{x}} \right|_0^\infty + \int_0^\infty {\frac{{\sin x}}{x}} {\mkern 1mu} = \int_0^\infty {\frac{{\sin x}}{x}} = \frac{\pi }{2}$$



So maybe now it is easier to tackle $$\int_0^\infty {\frac{{1 - \cos x}}{{1 + {x^2}}}} dx$$ which gives



$$\int_0^\infty {\frac{{1 - \cos x}}{{1 + {x^2}}}} dx = \int_0^\infty {\frac{{dx}}{{1 + {x^2}}}} - \int_0^\infty {\frac{{\cos x}}{{1 + {x^2}}}dx} = \frac{\pi }{2} - \int_0^\infty {\frac{{\cos x}}{{1 + {x^2}}}dx} $$




and thus



$$\int_0^\infty {\frac{{1 - \cos x}}{{{x^2}({x^2} + 1)}}} {\mkern 1mu} dx = \int_0^\infty {\frac{{\cos x}}{{1 + {x^2}}}dx} $$



EDIT



Since the last solution is not very satisfactory, as it has been discussed, I'll supply this solution:



Define




$$F\left( \varphi \right) = \int\limits_0^{ + \infty } {\frac{{\cos \varphi x}}{{1 + {x^2}}}dx} $$



Clearly the integral is absolutely convergent.



Thus, use the Laplace Transform, to obtain:



$$L\left( s \right) = \int\limits_0^{ + \infty } {\frac{s}{{{x^2} + {s^2}}}\frac{1}{{1 + {x^2}}}dx} $$



We evaluate this integral:




$$\frac{s}{{{x^2} + {s^2}}}\frac{1}{{1 + {x^2}}} = \frac{s}{{{s^2} - 1}}\left( {\frac{1}{{1 + {x^2}}} - \frac{1}{{{s^2} + {x^2}}}} \right)$$



$$\eqalign{
& L\left( s \right) = \frac{s}{{{s^2} - 1}}\int\limits_0^{ + \infty } {\left( {\frac{1}{{1 + {x^2}}} - \frac{1}{{{s^2} + {x^2}}}} \right)dx} \cr
& L\left( s \right) = \frac{s}{{{s^2} - 1}}\left( {\frac{\pi }{2} - \int\limits_0^{ + \infty } {\frac{1}{{{s^2} + {x^2}}}dx} } \right) \cr
& L\left( s \right) = \frac{s}{{{s^2} - 1}}\left( {\frac{\pi }{2} - \frac{1}{s}\frac{\pi }{2}} \right) \cr
& L\left( s \right) = \frac{\pi }{2}\frac{s}{{{s^2} - 1}}\frac{{s - 1}}{s} = \frac{\pi }{2}\frac{1}{{1 + s}} \cr} $$



Taking the inverse transform, we arrive at




$$F\left( \varphi \right) = \frac{\pi }{2}{e^{ - \varphi }}$$



This is clearly for $\varphi >0$, thus the evenness of the function forces



$$F\left( \varphi \right) = \frac{\pi }{2}{e^{ - |\varphi| }}$$



and the result follows:



$$F\left( 1 \right) = \int\limits_0^\infty {\frac{{\cos x}}{{1 + {x^2}}}dx} = \frac{\pi }{{2e}}$$
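A numerical confirmation (my addition) that the original integral indeed equals $\frac{\pi}{2e}$, assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (1 - np.cos(x)) / (x**2 * (x**2 + 1)), 0, np.inf)
print(val, np.pi / (2 * np.e))   # both ~0.57786367
```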






sequences and series - Is there a mathematical proof that shows all multiples of $5$ either end with a $0$ or $5$?



I know that all multiples of $5$ end up with a $0$ or $5$ as the last digit. But there are an infinite amount of numbers. Is there a way to formally prove that this is true for all numbers using variables?


Answer



Let $n$ be a multiple of $5$, say $n=5m$ for some integer $m$. If $m$ is even, there is an integer $k$ such that $m=2k$, and then $n=10k$. If, on the other hand, $m$ is odd, there is an integer $k$ such that $m=2k+1$, and in that case $n=10k+5$. To complete the argument, we need only show that every multiple of $10$ ends in $0$.


Suppose that $n$ is a multiple of $10$, and suppose that when written in ordinary decimal notation, it is $d_rd_{r-1}\ldots d_0$, where the $d_k$ are the digits. Then


$$\begin{align*} n&=10^rd_r+10^{r-1}d_{r-1}+\ldots+10d_1+d_0\\ &=10\left(10^{r-1}d_r+10^{r-2}d_{r-1}+\ldots+10d_2+d_1\right)+d_0\;, \end{align*}$$


where the quantity in parentheses is an integer. Thus,


$$d_0=n-10\left(10^{r-1}d_r+10^{r-2}d_{r-1}+\ldots+10d_2+d_1\right)\;,\tag{1}$$


and if $n$ is a multiple of $10$, the righthand side of $(1)$ is a multiple of $10$. Thus, $d_0$ is a multiple of $10$. But $0\le d_0\le 9$, so $d_0=0$. This proves that every multiple of $10$ ends in $0$ and hence that every multiple of $5$ ends in $0$ or $5$.


calculus - show that $\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$



show that


$$\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$$


in different ways.


Thanks to all!


Answer



Let $$f(y) = \int_{0}^{\infty} \frac{\sin^3{yx}}{x^3} \mathrm{d}x$$ Then, $$f'(y) = 3\int_{0}^{\infty} \frac{\sin^2{yx}\cos{yx}}{x^2} \mathrm{d}x = \frac{3}{4}\int_{0}^{\infty} \frac{\cos{yx} - \cos{3yx}}{x^2} \mathrm{d}x$$ $$f''(y) = \frac{3}{4}\int_{0}^{\infty} \frac{-\sin{yx} + 3\sin{3yx}}{x} \mathrm{d}x$$ Therefore, $$f''(y) = \frac{9}{4} \int_{0}^{\infty} \frac{\sin{3yx}}{x} \mathrm{d}x - \frac{3}{4} \int_{0}^{\infty} \frac{\sin{yx}}{x} \mathrm{d}x$$


Now, it is quite easy to prove that $$\int_{0}^{\infty} \frac{\sin{ax}}{x} \mathrm{d}x = \frac{\pi}{2}\mathop{\mathrm{signum}}{a}$$


Therefore, $$f''(y) = \frac{9\pi}{8} \mathop{\mathrm{signum}}{y} - \frac{3\pi}{8} \mathop{\mathrm{signum}}{y} = \frac{3\pi}{4}\mathop{\mathrm{signum}}{y}$$ Then, $$f'(y) = \frac{3\pi}{4} |y| + C$$ Note that, $f'(0) = 0$, therefore, $C = 0$. $$f(y) = \frac{3\pi}{8} y^2 \mathop{\mathrm{signum}}{y} + D$$ Again, $f(0) = 0$, therefore, $D = 0$.


Hence, $$f(1) = \int_{0}^{\infty} \frac{\sin^3{x}}{x^3} = \frac{3\pi}{8}$$
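A one-line numerical check (my addition), assuming SciPy; the oscillatory tail may need a raised subdivision limit:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x)**3 / x**3, 0, np.inf, limit=200)
print(val, 3 * np.pi / 8)   # both ~1.1780972
```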


integration - Gaussian integral $\int_{-\infty}^\infty \exp(-(x+\mathrm iY)^2)\,\mathrm d x$ along $[-R,R]+\mathrm i[0,Y]$





Use integration along $\partial Q$ of $Q=[-R,R]+\mathrm i[0,Y]$ to show that for all $Y\geq 0$ it holds that



$$\int_{-\infty}^\infty \exp(-(x+\mathrm iY)^2)~\mathrm dx = \int_{-\infty}^\infty \exp(-x^2)~\mathrm dx.$$




Similar to one of my other questions referring to a rectangle, I was going to rewrite $\partial Q$ as four curves, but the integrands became really complicated. Using the bottom curve of the rectangle I got $\require{cancel}\cancel{\gamma(t)=(1-t)(-R)+tR=R(2t-1)}$ hence



$$\cancel{\int_\gamma\exp(-(x+\mathrm iY)^2)~\mathrm dx =\int_0^1 \exp(-(R(2t-1)+\mathrm iY)^2)\cdot 2R~\mathrm dt}$$




which seems like a very tough integral. I was thinking of a similiar (and more simple) curve $\hat{\gamma}$ using another parametrisation to get easier integrals.



What would be your suggested approach for this problem?






Results of some attempts after getting some help



After much help of πr8 I have the following results for all four curves thus far




$$
\begin{align*}
\gamma_1(t) &= (1-t)(-R) + tR = R(2t-1)\\
\gamma_1'(t) &= 2R\\
\int_{\gamma_1}\exp(-(x+\mathrm iY)^2)\,\mathrm dx
&= \int_0^1 \exp(-(\underbrace{R(2t-1)}_{u}+\mathrm iY)^2)\cdot 2R\,\mathrm dt\\
&= \int_{-R}^R \exp(-(u+\mathrm iY)^2)\,\mathrm du
\end{align*}
$$







$$
\begin{align*}
\gamma_2(t) &= (1-t)R + t(R+\mathrm iY) = R+t\mathrm iY\\
\gamma_2'(t) &= \mathrm iY\\
\int_{\gamma_2}\exp(-(x+\mathrm iY)^2)\,\mathrm dx
&= \int_0^1 \exp(-(\underbrace{R+t\mathrm iY}_{u}+\mathrm iY)^2)\cdot \mathrm iY\,\mathrm dt\\
&= \int_{R}^{R+\mathrm iY} \exp(-(u+\mathrm iY)^2)\,\mathrm du
\end{align*}

$$






$$
\begin{align*}
\gamma_3(t) &= (1-t)(R+\mathrm iY) + t(-R+\mathrm iY) = R(1-2t)+\mathrm iY\\
\gamma_3'(t) &= -2R\\
\int_{\gamma_3}\exp(-(x+\mathrm iY)^2)\,\mathrm dx
&= \int_0^1 \exp(-(\underbrace{R(1-2t)+\mathrm iY}_{u}+\mathrm iY)^2)\cdot (-2R)\,\mathrm dt\\

&= \int_{R+\mathrm iY}^{-R+\mathrm iY} \exp(-(u+\mathrm iY)^2)\,\mathrm du
\end{align*}
$$






So far I have been able to rewrite all the integrals so that their limits match the endpoints of the curves used to deduce them. For the last curve, however, this wasn't as straightforward as for the other ones (I was eager to have four identical integrands) and I had to use another substitution, yielding



$$
\begin{align*}

\gamma_4(t) &= (1-t)(-R+\mathrm iY) + t(-R) = -R-(t-1)\mathrm iY\\
\gamma_4'(t) &= -\mathrm iY\\
\int_{\gamma_4}\exp(-(x+\mathrm iY)^2)\,\mathrm dx
&= \int_0^1 \exp(-(\underbrace{-R-t\mathrm iY+\mathrm iY}_{u})^2)\cdot \mathrm iY\,\mathrm dt\\
&= \int_{-R+\mathrm iY}^{-R} \exp(-u^2)\,\mathrm du.
\end{align*}
$$







Is there anything salvageable in the above equations?


Answer



Since $z\mapsto \exp(-z^2)$ is an entire function, by Cauchy's theorem the integral $\int_{\partial Q} \exp(-z^2)\, dz$ is zero. On the other hand,



$$\int_{\partial Q} \exp(-z^2)\, dz = \left(\int_{-R}^R + \int_R^{R + iY} - \int_{-R + iY}^{R + iY} - \int_{-R}^{-R + iY}\right)\exp(-z^2)\, dz$$
Thus



$$\int_{-R + iY}^{R + iY} \exp(-z^2)\, dz = \int_{-R}^R \exp(-x^2)\, dx + \int_R^{R + iY} \exp(-z^2)\, dz - \int_{-R}^{-R + iY} \exp(-z^2)\, dz\tag{1}$$



Now, the integrals along the vertical edges of $Q$ are $O\!\left(\exp(-R^2)\right)$ as $R\to \infty$. Indeed, consider the vertical edge from $R$ to $R + iY$, parametrized by $z = R + it, 0 \le t \le Y$.




$$\left\lvert \int_R^{R + iY} \exp(-z^2)\, dz\right\rvert = \left\lvert\int_0^Y \exp\{-(R + it)^2\}i\, dt\right\rvert \le \int_0^Y \exp\{-(R^2 - t^2)\}\, dt = Ce^{-R^2}$$



where $C = \int_0^Y \exp(t^2)\, dt$. Similarly, $\left\lvert\int_{-R}^{-R + iY} \exp(-z^2)\, dz\right\rvert \le Ce^{-R^2}$. Hence, by $(1)$,



$$\int_{-R+iY}^{R + iY} \exp(-z^2)\, dz = \int_{-R}^R \exp(-x^2)\, dx + O\!\left(\exp(-R^2)\right) \tag{2}
$$



Parametrizing the horizontal edge $[-R + iY,R + iY]$ via $z = x + iY$, $-R\le x \le R$, we have $\int_{-R + iY}^{R + iY} \exp(-z^2)\, dz = \int_{-R}^R \exp(-(x + iY)^2)\, dx$. Therefore, $(2)$ becomes




$$\int_{-R}^R \exp(-(x + iY)^2)\, dx = \int_{-R}^R \exp(-x^2)\, dx + O\!\left(\exp(-R^2)\right)\tag{3}$$ Letting $R \to \infty$ in $(3)$, the result is established.
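A numerical sanity check (my addition): for several values of $Y$, the shifted integral matches $\int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt\pi$; a finite cutoff $R$ suffices since the error is $O(e^{-R^2})$:

```python
import numpy as np
from scipy.integrate import quad

def shifted_gaussian(Y, R=30.0):
    f = lambda x: np.exp(-(x + 1j * Y)**2)
    re, _ = quad(lambda x: f(x).real, -R, R)   # quad handles real parts only,
    im, _ = quad(lambda x: f(x).imag, -R, R)   # so integrate real/imag separately
    return re + 1j * im

for Y in (0.0, 1.0, 2.5):
    print(Y, shifted_gaussian(Y), np.sqrt(np.pi))   # ~1.7724539 each time
```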


Thursday, April 20, 2017

linear algebra - Easiest way to find characteristic polynomial for this 4x4 matrix



I have been given the matrix



$$
\begin{bmatrix}

1 & 3 & 0 & 3 \\
1 & 1 & 1 & 1 \\
0 & 4 & 2 & 8 \\
2 & 0 & 3 & 1 \\
\end{bmatrix}
$$



and told I must find the characteristic polynomial. I began by applying cofactor expansion along the top row of the matrix



$$

\begin{bmatrix}
1-\lambda & 3 & 0 & 3 \\
1 & 1-\lambda & 1 & 1 \\
0 & 4 & 2-\lambda & 8 \\
2 & 0 & 3 & 1-\lambda \\
\end{bmatrix}
$$
and attempting to multiply out my results to get the correct answer of $\lambda^4 -5\lambda^3 - 28\lambda^2 + 58\lambda - 8$. However, this takes several pages of work and I keep making calculation errors and ending up with the wrong answer.



My question is, is there an easier way to find the determinant of this specific matrix, or, once the determinant is found, to multiply out the result to find the polynomial?




The only methods I have been taught have been to either try to find or create a row with several 0's to make the cofactor expansion easier, or to get an upper or lower triangular matrix, however, those seem equally as messy here.


Answer



For convenience, I write $t$ for $\lambda$. Call your matrix expression $M-t I$. If you divide the third row by 2, swap columns 1 and 2, and multiply the third column by 2, you get
$$
\det(M-t I)=
-\det\left[\begin{array}{cc|cc}
3 & 1-t & 0 & 3\\
1-t & 1 & 2 & 1\\
\hline
2 & 0 & 2-t & 4\\
0 & 2 & 6 & 1-t
\end{array}\right]
=-\det\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right],
$$
where the leading minus sign is due to the swapping of columns (the factor $\frac12$ from the row and the factor $2$ from the column cancel each other).



Since $C=2I$ and $D$ commute, we get
\begin{align}
\det(M-t I)
&=-\det(AD-BC)
\\
&=-\det\left(
\pmatrix{3 & 1-t\\ 1-t & 1}
\pmatrix{2-t & 4\\ 6 & 1-t}
-2\pmatrix{0&3\\ 2&1}
\right)
\\
&=-\det\left(
\pmatrix{12-9t & t^2-2t+13\\ t^2-3t+8 & 5-5t}
-\pmatrix{0&6\\ 4&2}
\right)
\\
&=-\det\pmatrix{12-9t & t^2-2t+7\\ t^2-3t+4 & 3-5t}\\
&=t^4 - 5 t^3 - 28 t^2 + 58 t - 8.
\end{align}
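
As a sanity check (assuming SymPy is available; this is not part of the original answer), one can confirm the characteristic polynomial directly:

    # Verify the block-determinant computation against SymPy's charpoly.
    import sympy as sp

    t = sp.symbols('t')
    M = sp.Matrix([[1, 3, 0, 3],
                   [1, 1, 1, 1],
                   [0, 4, 2, 8],
                   [2, 0, 3, 1]])
    print(M.charpoly(t).as_expr())  # t**4 - 5*t**3 - 28*t**2 + 58*t - 8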


linear algebra - Eigenvalue Decomposition That Does Not Result in Original Matrix



Suppose I have an upper triangular $m \times m$ matrix with 1's on the main diagonal and 2's on the first superdiagonal and 0's elsewhere. It is fairly easy to see that this is a full rank matrix with eigenvalues of all 1's.



Now the eigenvalue decomposition is supposed to give me a diagonal matrix of eigenvalues and a matrix of eigenvectors such that



$$ A = Q \Lambda Q^{-1} $$



where the columns of Q are the eigenvectors. Then the inverse of $A$ is




$$ A^{-1} = Q \Lambda^{-1} Q^{-1} = Q \Lambda Q^{-1} = A $$



However, this is not true for the matrix $A$ that I described. So now my question is: why does the eigenvalue decomposition not work for this matrix? If I am not making a mistake somewhere, how then can I invert the matrix $A$ without resorting to something like Gaussian elimination?



I tried using Matlab's eig function and found that the matrix $Q$ returned does not satisfy $A = Q \Lambda Q^{-1}$; instead, $Q \Lambda Q^{-1}$ evaluates to the identity matrix. I try not to rely too much on numerical implementations that involve the inverse of a matrix, because I know inverting matrices numerically can be ill-conditioned and lead to severe numerical issues. However, I would expect an eigenvalue decomposition to yield reasonable results when the eigenvalues are not too small.


Answer



Not all matrices are diagonalizable. Yours in particular is not. Suppose it were, so that you can write $A=Q\Lambda Q^{-1}$ with $\Lambda$ diagonal. The eigenvalues of a triangular matrix are its diagonal elements, so we have $\Lambda=I$, but then $Q\Lambda Q^{-1} = QQ^{-1} = I \ne A$.



Why does this go wrong? In the above diagonalization the columns of $Q$ are linearly-independent eigenvectors of $A$. Unfortunately, for your matrix the only eigenvectors are multiples of the standard basis vector $(1,0,0,\dots,0)^T$. Except in the trivial case of $m=1$ there aren’t enough of them to form the diagonalizing matrix $Q$. What you will need to do instead if you want to proceed along these lines is find the Jordan normal form of the matrix.
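
A small illustration of this (assuming SymPy; not part of the original answer) for $m = 4$: the matrix has geometric multiplicity $1$ for the eigenvalue $1$, so diagonalization fails, but the Jordan form and the exact inverse are both available:

    # The bidiagonal matrix with 1's on the diagonal and 2's on the superdiagonal.
    import sympy as sp

    m = 4
    A = sp.Matrix(m, m, lambda i, j: 1 if i == j else (2 if j == i + 1 else 0))

    print(A.is_diagonalizable())   # False
    print(A.eigenvects())          # eigenvalue 1 with a single eigenvector

    P, J = A.jordan_form()         # A = P*J*P**-1 with J a single Jordan block
    print(sp.simplify(P * J * P.inv() - A))  # zero matrix

    print(A.inv())                 # exact inverse: entries (-2)^(j-i) above the diagonal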


algebra precalculus - How do you factor this using complete the square? $6+12y-36y^2$



I'm so embarrassed that I'm stuck on this simple algebra problem that is embedded in an integral, but I honestly don't understand how this is factored into $a^2-u^2$




Here are my exact steps:



$6+12y-36y^2$ can be rearranged this way: $6+(12y-36y^2)$ and I know I can factor out a -1 and have it in this form: $6-(-12y+36y^2)$



This is the part where I get really lost. According to everything I read, I take the $b$ term, which is $-12y$ and divide it by $2$ and then square that term. I get: $6-(36-6y+36y^2)$



The form it should look like, however, is $7-(6y-1)^2$



Can you please help me to understand what I'm doing wrong?


Answer




we know that $$ (ay - b)^2 = a^2y^2 - 2aby + b^2 \tag{1} $$



we have $$ 6 + 12y - 36y^2 = -(36y^2 - 12y - 6) $$



we need to factor $$ 36y^2 - 12y - 6 $$



from (1) let $ a^2 = 36 \Rightarrow a = \pm 6, 2ab = 12 \Rightarrow b = \pm 1 $



thus $$ 36y^2 - 12y - 6 = \left(36y^2 - 12y + 1\right) - 7 = (6y - 1)^2 - 7 $$ (equivalently $(-6y+1)^2 - 7$, since the two squares are equal), and therefore $$ 6 + 12y - 36y^2 = 7 - (6y-1)^2 $$
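
A one-line check of the final identity (assuming SymPy; added for verification only):

    # Expanding the completed square recovers the original quadratic.
    import sympy as sp

    y = sp.symbols('y')
    print(sp.expand(7 - (6*y - 1)**2))   # -36*y**2 + 12*y + 6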


linear algebra - Eigenvalue decomposition of $A = I - xx^T$




Let $A = I - xx^T$, where $x \in \mathbb{R}^n$ and $I$ is the identity matrix of $\mathbb{R}^n$




We know that $A$ is a real symmetric matrix, therefore there exists an eigenvalue decomposition of $A$ such that



$$A = Q^T\Lambda Q$$



Is it possible to find $Q$, $\Lambda$?



$I - xx^T = Q^TQ - xQ^TQx^T = Q^TQ - (Q^Tx)^T(x^TQ)^T...$


Answer



Consider
$$
Ax=(I-xx')x=(1-x'x)x
$$
so $x$ itself is an eigenvector with eigenvalue $1-x'x$. In fact, if $v$ is an eigenvector with some eigenvalue $\alpha$, we have
$$
\alpha v=Av=(I-xx')v=v-(x'v)x\implies(1-\alpha)v=(x'v)x.
$$
This means if $\alpha\neq 1$, then $v$ is proportional to $x$ so in fact $v$ is an eigenvector with eigenvalue $1-x'x$. If $\alpha=1$, then $x'v=0$. Conversely, if $v'x=0$, then $v$ is an eigenvector with eigenvalue $1$:
$$
Av=(I-xx')v=v-(x'v)x=v.
$$

Conclusion: $I-xx'$ has eigenvalues $1-x'x$ and $1$ where $1$ has multiplicity $n-1$. The eigenvectors for $1-x'x$ are parallel to $x$ and the eigenvectors of $1$ are any vector in the space orthogonal to the space spanned by $x$. So you can take $Q'=\Big(\frac{x}{|x|}\;r_1\;\cdots\;r_{n-1}\Big)$ where each $r_i$ is $n\times 1$ and $\{r_1,\ldots,r_{n-1}\}$ is some orthonormal basis of $[\text{span}(x)]^\perp$.
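
A quick numerical confirmation (assuming NumPy; not part of the original answer):

    # Eigenvalues of I - x x^T: 1 - x'x once, and 1 with multiplicity n-1.
    import numpy as np

    n = 5
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n, 1))
    A = np.eye(n) - x @ x.T

    print(np.sort(np.linalg.eigvalsh(A)))   # smallest entry is 1 - x'x, rest are 1
    print(1.0 - float(x.T @ x))             # matches the smallest eigenvalue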


Wednesday, April 19, 2017

probability - Expected value of a non-negative random variable

How do I prove that $\int_0^\infty \Pr(Y\geq y)\, dy = E[Y]$ if $Y$ is a non-negative random variable?
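
The identity can at least be illustrated numerically before attempting the proof (this sketch assumes NumPy and picks an Exponential(1) variable, for which $E[Y]=1$ and $\Pr(Y\ge y)=e^{-y}$):

    # Riemann-sum of the tail probability of an Exponential(1) variable.
    import numpy as np

    y = np.linspace(0.0, 50.0, 500001)
    dy = y[1] - y[0]
    tail = np.exp(-y)            # P(Y >= y) = exp(-y) for Exponential(1)
    print(np.sum(tail) * dy)     # ~ 1.0 = E[Y]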

fraction as index number?



given these inputs x = 4, S = [1 2 3 4 5 6 7 8 9 10], and n = 10



    search(x, S, n) {
        i = 1
        j = n
        while i < j {
            m = [(i+j)/2]
            if x > S[m] then i = m + 1
            else j = m
        }
        if x = S[i] then location = i
        else location = 0
    }

This algorithm is from my discrete math homework. I'm confused as to what S[m] would equal on the first iteration, because m would be $\frac{11}{2}$. If I use a fraction as the index, do I round down? Is there a general rule for this? Am I making any sense? Help pls



Answer



Yes, you should round down (or in other words, take only the integer part).



$$[11/2]=[5.5]=5$$



This is known as the floor function in mathematics (usually there is a floor(x) function in programming languages).
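
For concreteness, here is one way the pseudocode translates to Python (an illustration, not the assigned algorithm itself; Python's // operator is exactly this floor division, and the index shifts account for 0-based lists):

    def search(x, S, n):
        i, j = 1, n
        while i < j:
            m = (i + j) // 2       # floor of (i+j)/2, e.g. 11 // 2 == 5
            if x > S[m - 1]:       # S[m-1] is the m-th element of the list
                i = m + 1
            else:
                j = m
        return i if S[i - 1] == x else 0

    print(search(4, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 10))   # 4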


Tuesday, April 18, 2017

logic - Properties of modular arithmetic



I've recently learned about modular congruences, but I'm having trouble actually solving problems. I've tried a few references (http://www.math.cornell.edu/~putnam/modular.pdf gets pretty close), but I'm still a bit confused.



For example, let's say I have $$6a\equiv 10b \mod{14}$$



What else can I say is true? Can I just divide everything by a factor? For example, is $$3a \equiv 5b \mod{7}$$



By experimentation, I think the above is true.




So then, I can just divide everything by the same number. But in the example below (which I think is true?) I cancel the factor of $26$ without changing the modulus: $$26x \equiv 26y \pmod{5}$$ and therefore $$x \equiv y \pmod{5}$$


Answer



Yes, the two things are both true.



The first one is:



$$ad \equiv bd \pmod{nd}$$
if and only if
$$a \equiv b \pmod{n}$$




This happens because
$$dn |d(a-b) \Leftrightarrow n |a-b$$



The second is a completely different fact, which happens for a different reason:



If
$$ad \equiv bd \pmod{n} \ \text{ and } \ \gcd(d,n)=1 \ \text{ then } \\
a \equiv b \pmod{n}$$




This is because $n|d(a-b)$ and $\gcd(n,d)=1$ implies $n|a-b$.
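
Both facts are easy to confirm by brute force on the examples from the question (plain Python; added as a check):

    from math import gcd

    # Fact 1 with d=2, n=7: 6a = 10b (mod 14) iff 3a = 5b (mod 7).
    ok1 = all(((6*a - 10*b) % 14 == 0) == ((3*a - 5*b) % 7 == 0)
              for a in range(14) for b in range(14))

    # Fact 2 with d=26, n=5: 26x = 26y (mod 5) forces x = y (mod 5), since gcd(26,5)=1.
    ok2 = all((x - y) % 5 == 0
              for x in range(5) for y in range(5) if (26*x - 26*y) % 5 == 0)

    print(ok1, ok2, gcd(26, 5))   # True True 1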


Monday, April 17, 2017

soft question - definite and indefinite sums and integrals

It just occurred to me that I tend to think of integrals primarily as indefinite integrals and sums primarily as definite sums. That is, when I see a definite integral, my first approach at solving it is to find an antiderivative, and only if that doesn't seem promising I'll consider whether there might be a special way to solve it for these special limits; whereas when I see a sum it's usually not the first thing that occurs to me that the terms might be differences of some function. In other words, telescoping a sum seems to be just one particular way among many of evaluating it, whereas finding antiderivatives is the primary way of evaluating integrals. In fact I learned about telescoping sums much later than about antiderivatives, and I've only relatively recently learned to see these two phenomena as different versions of the same thing. Also it seems to me that empirically the fraction of cases in which this approach is useful is much higher for integrals than for sums.



So I'm wondering why that is. Do you see a systematic reason why this method is more productive for integrals? Or is it perhaps just a matter of education and an "objective" view wouldn't make a distinction between sums and integrals in this regard?




I'm aware that this is rather a soft question, but I'm hoping it might generate some insight without leading to open-ended discussions.

Sunday, April 16, 2017

elementary number theory - Chinese Remainder Theorem Transformation with not coprime moduli


Suppose that $x \equiv 3 \pmod 7, x \equiv 3 \pmod {10}$ and $x \equiv 23 \pmod{25}$. Explain why the Chinese Remainder Theorem does not apply to compute x. Transform the problem to an equivalent problem where the Chinese Remainder Theorem can be used and solve it.





The Chinese Remainder Theorem does not apply directly because it requires the moduli to be pairwise relatively prime, and $\gcd(10,25)=5$, so $10$ and $25$ are not. How do I transform the problem and solve it?
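
One possible transformation (sketched here with SymPy, which is an assumption, not part of the question): split the modulus $10$ into $2$ and $5$; the condition $x \equiv 3 \pmod 5$ is already implied by $x \equiv 23 \pmod{25}$, so an equivalent system with pairwise coprime moduli is $x\equiv 3 \pmod 7$, $x\equiv 1\pmod 2$, $x\equiv 23\pmod{25}$:

    from sympy.ntheory.modular import crt

    moduli = [7, 2, 25]
    residues = [3, 1, 23]          # x = 3 (mod 10) gives x = 1 (mod 2), x = 3 (mod 5)
    x, mod = crt(moduli, residues)
    print(x, mod)                  # 73 350

    # sanity check against the original congruences
    print(x % 7, x % 10, x % 25)   # 3 3 23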

real analysis - How to prove a set is a Dedekind Cut?


In my Real Analysis class we just started talking about Dedekind cuts and I'm very confused on how to prove this one homework assignment:


Given the set $-A=\{-(a+b):a\in\mathbb{Q}^+, b\in\mathbb{Q}\setminus A\}$, how do I prove this is a cut? Please help, I'm so lost...


Thanks!!


Answer



I assume that $A$ is supposed to be a Dedekind cut. Start by making a sketch:


    +++++++++++++++++++++++++++++++)----------------------------------  
A


The line as a whole represents $\Bbb Q$, and everything to the left of the parenthesis is in the cut $A$. Now look at the definition of $-A$:


$$-A=\{-(q+b):q\in\mathbb{Q}^+, b\in\mathbb{Q}\setminus A\}$$


It depends on two subsets of $\Bbb Q$: $\Bbb Q^+$ and $\Bbb Q\setminus A$. We should figure out where those are in the picture. $\Bbb Q\setminus A$ is easy:


    +++++++++++++++++++++++++++++++)(---------------------------------  
A Q\A

it’s everything to the right of $A$. And we know just what $\Bbb Q^+$ is: it’s the set of positive rationals. What happens when you add a positive rational $q$ to every member of $\Bbb Q\setminus A$? You shift the set $\Bbb Q\setminus A$ to the right by $q$ units:


    -------------------------------)(+++++++++++++++++++++++++++++++++
A Q\A


ooooooooooooooooooooooooooooooo)-----------(++++++++++++++++++++++
A q + Q\A

That last picture shows $\{q+b:b\in\Bbb Q\setminus A\}$ for a particular $q\in\Bbb Q^+$. Now what happens when you look at the negatives of these rational numbers, $\{-(q+b):b\in\Bbb Q\setminus A\}$? You simply flip the line $180$° around $0$ to get a picture more or less like this:


    ++++++++++++++++++++++)-----------(ooooooooooooooooooooooooooooooo

The plus signs mark the set $-(q+\Bbb Q\setminus A)$, and the o’s mark the set $\{-a:a\in A\}$. The gap in the middle has length $q$. $\{-a:a\in A\}$ is always in the same place, but the location of $-(q+\Bbb Q\setminus A)$ depends on $q$: when $q$ is large, it’s far to the left of $\{-a:a\in A\}$, and when $q$ is small, it’s very close to $\{-a:a\in A\}$.


The set $-A$ that you’re to prove is a Dedekind cut is just the union of all these sets $-(q+\Bbb Q\setminus A)$ as $q$ ranges over the positive rationals, so it’s the union of all possible sets like those marked with plus signs in the pictures below:


    +++++++++++++++++)----------------(ooooooooooooooooooooooooooooooo


++++++++++++++++++++++)-----------(ooooooooooooooooooooooooooooooo

+++++++++++++++++++++++++)--------(ooooooooooooooooooooooooooooooo

++++++++++++++++++++++++++++++)---(ooooooooooooooooooooooooooooooo

What do you think that union will look like in a sketch of this kind? Won’t it look something like this?


    +++++++++++++++++++++++++++++++++)(ooooooooooooooooooooooooooooooo


That looks an awful lot like a Dedekind cut. Now you just have to prove it. For instance, you have to show that there is some rational that is not in the set. From the picture it appears that any $-a$ with $a\in A$ should work, so you should try to prove that this is the case. (Remember, the $q$’s that are being added are all strictly positive.)


You need to show that if $p$ is rational and $p

Using induction more than once in a proof


Is it possible to use induction twice or more in a proof? For instance, say we wished to prove the following proposition by induction:


Proposition



Suppose $x>3$ and $y<2$. Then $x^2 -2y>5$



Scratch Work


Let $P(x,y)$ be the inequality $x^2 -2y>5$. Let's fix an integer $y$ that's less than 2, and prove $P(x,y)$ by induction on $x$: first the basis step, showing $P(4,y)$ for that fixed $y$, followed by the inductive step.



After proving by induction that $P(x,y)$ is true for all integers $x>3$ with $y$ fixed, we will once again use induction, this time to prove that $P(x,y)$ is true for all integers $y<2$ with a fixed integer $x>3$. We then proceed with the standard inductive proof, showing the basis step and the inductive step.


So, is it possible to use induction twice in a proof, kind of like this example? Moreover, is this particular proof actually getting somewhere?


Answer



It is certainly possible to use induction more than once in a proof. Perhaps one of the more interesting applications of this idea is Cauchy induction.


To perform Cauchy induction, one first proves a base case, $P(1)$, and then proves $P(n)$ implies $P(2n)$. This inductively implies $P(2^n)$. Finally, you use decreasing induction, $P(n)$ implies $P(n-1)$, to show $P(n)$ for every natural number $n$.
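
A classic illustration of this pattern (not from the answer above, but the standard example) is Cauchy's proof of the AM-GM inequality, where $P(n)$ is the statement
$$\frac{x_1+\cdots+x_n}{n} \ \ge\ \sqrt[n]{x_1\cdots x_n} \qquad (x_1,\dots,x_n>0).$$
The base case $P(2)$ follows from $(\sqrt{x_1}-\sqrt{x_2})^2\ge 0$; applying $P(2)$ to the averages of the two halves of a list of $2n$ numbers gives $P(n)\implies P(2n)$; and the decreasing step $P(n)\implies P(n-1)$ follows by applying $P(n)$ with $x_n$ chosen to be the average of $x_1,\dots,x_{n-1}$.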


trigonometry - Trigonometric Arithmetic Progression




If $a$, $b$, $c$ are in arithmetic progression, prove that
$$\cos A \cot\frac{A}{2}, \qquad \cos B \cot \frac{B}{2}, \qquad \cos C \cot\frac{C}{2}$$
are in arithmetic progression, too.



Here, $a$, $b$, $c$ represent the sides of a triangle and $A$, $B$, $C$ are the opposite angles of the triangle.


Answer



For better clarity, I'm adding another proof that $\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ are also in AP if $a,b,c$ are so.



We have $\displaystyle 0<\frac C2<\frac\pi2\implies\tan\frac C2>0$




So, $\displaystyle\cot\frac C2=\frac1{\tan\frac C2}=+\sqrt{\frac{1+\cos C}{1-\cos C}}$



Using Law of Cosines and on simplification, $\displaystyle\cot\frac C2=+\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}$ where $2s=a+b+c$



$\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ will be in AP



$\displaystyle\iff\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}+\sqrt{\frac{s(s-a)}{(s-b)(s-c)}}=2\sqrt{\frac{s(s-b)}{(s-c)(s-a)}}$



$\displaystyle\iff s-a+s-c=2(s-b)\iff a+c=2b$
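
A numerical spot-check of the cotangent identity (plain Python; added for illustration) with the sides $4,5,6$ in AP:

    # Recover the angles from the Law of Cosines and test
    # cot(A/2) + cot(C/2) = 2 cot(B/2).
    from math import acos, tan

    a, b, c = 4.0, 5.0, 6.0
    A = acos((b*b + c*c - a*a) / (2*b*c))
    B = acos((a*a + c*c - b*b) / (2*a*c))
    C = acos((a*a + b*b - c*c) / (2*a*b))

    cot = lambda t: 1.0 / tan(t)
    print(cot(A/2) + cot(C/2), 2*cot(B/2))   # the two values agree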


algebra precalculus - Solve $sqrt{3x}+sqrt{2x}=17$

This is what I did:
$$\sqrt{3x}+\sqrt{2x}=17$$
$$\implies\sqrt{3}\sqrt{x}+\sqrt{2}\sqrt{x}=17$$
$$\implies\sqrt{x}(\sqrt{3}+\sqrt{2})=17$$
$$\implies x(5+2\sqrt{6})=289$$
I don't know how to continue. And when I went to wolfram alpha, I got:
$$x=-289(2\sqrt{6}-5)$$
Could you show me the steps to get the final result?

Thank you.
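
For the record, the step the question is missing is rationalisation: since $(5+2\sqrt6)(5-2\sqrt6)=25-24=1$, dividing by $5+2\sqrt6$ is the same as multiplying by $5-2\sqrt6$, so $x = 289(5-2\sqrt6)$, matching Wolfram Alpha. A check with SymPy (an added verification, assuming SymPy is available):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    sol = sp.solve(sp.sqrt(3*x) + sp.sqrt(2*x) - 17, x)
    print(sol)                                            # one root, equal to 289*(5 - 2*sqrt(6))
    print(sp.simplify(sol[0] - 289*(5 - 2*sp.sqrt(6))))   # 0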

Saturday, April 15, 2017

real analysis - Limit of convergent monotone sequence

Looking for a nice proof for this proposition:



Let $\{ x_n \}$ be a convergent monotone sequence. Suppose there exists some $k$ such that $\lim_{n\to\infty} x_n = x_k$, show that $x_n = x_k$ for all $n \geq k$.




I have the intuition for why it's true but am having a tough time giving a rigorous proof.

calculus - The Integral of $\sin^4 x$ without using the reduction formula




So I've been trying to compute $$\int\sin^4(x)\,\mathrm{d}x$$ and everywhere they use the reduction formula, which we haven't learned yet, so I've been wondering if there's another way to do it? Thanks in advance.


Answer



Performing integration by parts,




$\begin{align} \int_0^x\sin^2 t\,dt&=\Big[-\cos t\sin t\Big]_0^x+\int_0^x\cos^2 t\,dt\\
&=-\cos x\sin x+\int_0^x(1-\sin^2 t)\,dt\\
&=-\cos x\sin x+\int_0^x 1\,dt-\int_0^x \sin^2 t\,dt\\
&=-\cos x\sin x+x-\int_0^x \sin^2 t\,dt\\
\end{align}$



Therefore,



$\displaystyle \int_0^x \sin^2 t\,dt=-\frac{1}{2}\cos x\sin x+\frac{1}{2}x$




$\begin{align} \int_0^x\sin^4 t\,dt&=\int_0^x(1-\cos^2 t)\sin^2 t\,dt
\\
&=\int_0^x\sin^2 t\,dt-\int_0^x \cos^2 t\sin^2 t\,dt\\
&=-\frac{1}{2}\cos x\sin x+\frac{1}{2}x-\int_0^x \cos^2 t\sin^2 t\,dt\\
\end{align}$



Since, for $t$ real,



$\sin(2t)=2\sin t\cos t$




then,



$\begin{align}
\int_0^x\sin^4 t\,dt&=-\frac{1}{2}\cos x\sin x+\frac{1}{2}x-\frac{1}{4}\int_0^x \sin^2(2t)\,dt\\
\end{align}$



In the latter integral perform the change of variable $y=2t$,



$\begin{align}
\int_0^x\sin^4 t\,dt&=-\frac{1}{2}\cos x\sin x+\frac{1}{2}x-\frac{1}{8}\int_0^{2x} \sin^2(y)\,dy\\
&=-\frac{1}{2}\cos x\sin x+\frac{1}{2}x-\frac{1}{8}\left(-\frac{1}{2}\cos (2x)\sin(2x)+\frac{1}{2}\times 2x\right)\\
&=-\frac{1}{4}\sin(2x)+\frac{1}{2}x+\frac{1}{32}\sin(4x)-\frac{1}{8}x\\
&=-\frac{1}{4}\sin(2x)+\frac{3}{8}x+\frac{1}{32}\sin(4x)\\
\end{align}$



Therefore,



$\displaystyle \boxed{\int \sin^4 x\,dx=\frac{3}{8}x+\frac{1}{32}\sin(4x)-\frac{1}{4}\sin(2x)+C}$



($C$ a real constant)
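
As a final check (assuming SymPy; not part of the original answer), differentiating the boxed antiderivative recovers the integrand:

    import sympy as sp

    x = sp.symbols('x')
    F = sp.Rational(3, 8)*x + sp.sin(4*x)/32 - sp.sin(2*x)/4
    print(sp.simplify(sp.diff(F, x) - sp.sin(x)**4))   # 0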



calculus - Is $frac{textrm{d}y}{textrm{d}x}$ not a ratio?


In the book Thomas's Calculus (11th edition) it is mentioned (Section 3.8 pg 225) that the derivative $dy/dx$ is not a ratio. Couldn't it be interpreted as a ratio, because according to the formula $dy = f'(x)dx$ we are able to plug in values for $dx$ and calculate a $dy$ (differential). Then if we rearrange we get $dy/dx$ which could be seen as a ratio.


I wonder if the author says this because $dx$ is an independent variable and $dy$ is a dependent variable; for $dy/dx$ to be a ratio, both variables would need to be independent... maybe?


Answer



Historically, when Leibniz conceived of the notation, $\frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".


However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $\epsilon\gt 0$, no matter how small, and given any positive real number $M\gt 0$, no matter how big, there exists a natural number $n$ such that $n\epsilon\gt M$. But an "infinitesimal" $\xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)


So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit: $$\lim_{h\to0 }\frac{f(x+h)-f(x)}{h}.$$ And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient.


However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule: $$\frac{dy}{dx} = \frac{dy}{du}\;\frac{du}{dx}$$ which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that $$\frac{dx}{dy} = \frac{1}{\quad\frac{dy}{dx}\quad},$$ which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it.


(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $\frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)



So, even though we write $\frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).


However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $\frac{dy}{dx}$ as if it were a fraction are justified because, in that setting, it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...