Saturday, October 31, 2015

Expectation of a function in Poisson Distribution



Find the expectation of the function $\phi(x) = xe^{-x}$ in a Poisson distribution.



My Attempt: If $\lambda$ is the mean of the Poisson distribution, then the expectation of $\phi$ is



$$\displaystyle E[\phi(X)]=\sum_{x \mathop \ge 0} \frac{\phi(x)\lambda^xe^{-\lambda}}{x!}$$




$$= \displaystyle \sum_{x \mathop \ge 0} \frac{ xe^{-x}\lambda^xe^{-\lambda}}{x!}$$



$$= \displaystyle \lambda e^{-\lambda} \sum_{x \mathop \ge 1} \frac {e^{-x}} {\left({x-1}\right)!} \lambda^{x-1}$$



Now what?



Without the $e^{-x}$, the remaining sum is just the Taylor expansion of $e^{\lambda}$, which cancels the $e^{-\lambda}$ factor.



But what do I do here?


Answer




Group together $e^{-x} \lambda^{x-1} = \frac{(\frac{\lambda}{e})^x}{\lambda}$, then do a bit of algebra to get the Taylor series for $e^s$ for some $s$.
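One way to carry the hint through (a sketch, not part of the original answer): substituting $j = x-1$,

$$\lambda e^{-\lambda} \sum_{x \mathop \ge 1} \frac {e^{-x}\,\lambda^{x-1}} {\left({x-1}\right)!} = \lambda e^{-\lambda}\sum_{j \mathop \ge 0}\frac{e^{-(j+1)}\,\lambda^{j}}{j!} = \lambda e^{-\lambda-1}\sum_{j \mathop \ge 0}\frac{(\lambda/e)^j}{j!} = \lambda e^{-\lambda-1}e^{\lambda/e} = \lambda\, e^{\lambda/e-\lambda-1}.$$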


modular arithmetic - Prove $n \equiv s(n) \pmod 3$ using the fact that $[10^n] = [1]$.





Prove $n \equiv s(n) \pmod 3$ using the fact that $[10^n] = [1]$. Let $n = (a_k \times 10^k) + (a_{k-1} \times 10^{k-1}) + \cdots +(a_1 \times 10^1)+ (a_0 \times 10^0)$ and $s(n)=(a_k + a_{k-1}+ \cdots +a_1+a_0)$.



When trying to solve this question, I combined the information given and found that $$n-s(n) = (a_k \times 10^k) + (a_{k-1} \times 10^{k-1}) + \cdots +(a_1 \times 10^1)+ (a_0 \times 10^0)-(a_k + a_{k-1}+ \cdots +a_1+a_0)$$
$$=(a_k \times 10^k-a_k ) + (a_{k-1} \times 10^{k-1}-a_{k-1}) + \cdots +(a_1 \times 10^1-a_1)+ (a_0 \times 10^0-a_0)$$
$$=a_k (10^k-1) + a_{k-1} (10^{k-1}-1) + \cdots +a_1 (10^1-1)+ a_0 (1-1)$$

$$=a_k (10^k-1) + a_{k-1} (10^{k-1}-1) + \cdots +a_1 (9)$$
I'm not sure where to go from here. Since $[10^n] = [1]$ means $10^n \equiv 1 \pmod 3$, there exists an integer, call it $p$, such that $10^n - 1 = 3p$, and I thought I could substitute that into the parentheses in the last "=" line above. I have a feeling it doesn't quite work, though, and that it would be incorrect.


Answer



Hint: Let $$z_n=a_n10^n+a_{n-1}10^{n-1}+...+a_1\cdot 10+a_0$$ with your hint we get
$$z_n\equiv a_n+a_{n-1}+...+a_1+a_0\mod 3$$ since $$10^i\equiv 1\mod 3$$
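Spelled out (a small completion of the hint): since $10^i\equiv 1\mod 3$ for every $i\ge 0$, each digit contributes $a_i10^i\equiv a_i\mod 3$, so
$$z_n-s(n)=\sum_{i=0}^{n}a_i(10^i-1)\equiv 0\mod 3,$$
which is exactly $n\equiv s(n)\pmod 3$.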


sequences and series - Where did the negative answer come from?




The question is to evaluate $\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots }}}}$
$$x=\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots }}}}$$
$$x^2=2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots }}}}$$
$$x^2=2+x$$
$$x^2-x-2=0$$
$$(x-2)(x+1)=0$$
$$x=2,-1$$



Because $x$ is positive, $x=2$ is the answer. But where did the $x=-1$ come from?



Answer



$x=\sqrt{2+\underbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}}_{\text{$x$}}}$, so we get $x=\sqrt{2+x}$.



Now there is only one solution. If we square both sides, we add the case $-x=\sqrt{2+x}$
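A quick numerical illustration (a hypothetical snippet, not part of the original answer): iterating $x\mapsto\sqrt{2+x}$ converges to $2$ from any starting value $\ge -2$, and $-1$ is never approached, since square roots are non-negative.

import math

x = 0.0  # any starting value >= -2 works
for _ in range(30):
    x = math.sqrt(2 + x)  # one more layer of the nested radical
print(x)  # ~2.0; the spurious root -1 never shows up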


sequences and series - Formula for $1^2+2^2+3^2+...+n^2$



In an example deriving the formula for $1^2+2^2+3^2+...+n^2$, they express $f(n)$ as:
$$f(n)=an^3+bn^2+cn+d$$ It is also known that $f(0)=0$, $f(1)=1$, $f(2)=5$ and $f(3)=14$.



These values are then inserted into the function; we get a system of equations, solve it for the coefficients $a,b,c,d$, and obtain $$f(n)=\frac{n}{6}(2n+1)(n+1)$$




Then it's proven with mathematical induction that it's true for any n.



And the question is: why do they take four coefficients at the beginning, and not $f(n)=an^2+bn+c$ or even more? How do they know that a cubic will be enough to get the correct formula?


Answer



There are several ways to see this:




  • As Rasmus pointed out in a comment, you can estimate the sum by an integral.

  • Imagine the numbers being added as cross sections of a quadratic pyramid. Its volume is cubic in its linear dimensions.


  • Apply the difference operator $\Delta g(n)=g(n+1)-g(n)$ to $f$ repeatedly. Then apply it to a polynomial and compare the results.



[Edit in response to the comment:]



An integral can be thought of as a limit of a sum. If you sum over $k^2$, you can look at this as adding up the areas of rectangles with width $1$ and height $k^2$, where each rectangle extends from $k-1$ to $k$ in the $x$ direction. (If that's not clear from the words, try drawing it.) Now if you connect the points $(k,k^2)$ by the continuous graph of the function $f(x)=x^2$, the area under that graph is an approximation of the area of the rectangles (and vice versa). So we have



$$1^2+\dotso+n^2\approx\int_0^nk^2\mathrm dk=\frac13n^3\;.$$
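A quick numeric check of both the closed form and the integral estimate (a hypothetical snippet):

n = 1000
exact = sum(k**2 for k in range(1, n + 1))  # 1^2 + 2^2 + ... + n^2
formula = n * (n + 1) * (2 * n + 1) // 6    # f(n) from the question
print(exact == formula)      # True
print(exact / (n**3 / 3))    # ~1.0015, so the n^3/3 estimate is on target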


Friday, October 30, 2015

radicals - Calculating the following limit: $\lim_{x \to 0} \frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$



I am trying to calculate this limit:
$$
\lim_{x \to 0} \frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}
$$



I've tried using conjugate of both denominator and numerator but I can't get the right result.


Answer



$$\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$$

$$=\frac{(1+\sqrt{x+1})\{x^2+1-(x+1)\}}{(\sqrt{x^2+1}+\sqrt{x+1})(1-(x+1))}$$
$$=\frac{(1+\sqrt{x+1})\{x(x-1)\}}{(\sqrt{x^2+1}+\sqrt{x+1})(-x)}$$
$$=\frac{(1+\sqrt{x+1})(1-x)}{(\sqrt{x^2+1}+\sqrt{x+1})}\text { if } x\ne0$$



As $x\to0,x\ne0$



So, $$\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}=\lim_{x\to0}\frac{(1+\sqrt{x+1})(1-x)}{(\sqrt{x^2+1}+\sqrt{x+1})}=\frac{(1+1)}{(1+1)}=1$$







Alternatively, as $\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$ is of the form $\frac00,$



Applying L'Hospital's Rule we get,
$$\lim_{x\to0}\frac{\sqrt{x^2+1}-\sqrt{x+1}}{1-\sqrt{x+1}}$$
$$=\lim_{x\to0}\frac{\frac x{\sqrt{x^2+1}}-\frac1{2\sqrt{x+1}}}{-\frac1{2\sqrt{x+1}}}$$
$$=\lim_{x\to0}\left(1-\frac{2x\sqrt{x+1}}{\sqrt{x^2+1}}\right)\text{ as }x+1\ne0$$
$$=1$$
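A numeric sanity check of the limit (a hypothetical snippet):

import math

def f(x):
    return (math.sqrt(x * x + 1) - math.sqrt(x + 1)) / (1 - math.sqrt(x + 1))

for x in (0.1, 0.01, 0.001):
    print(x, f(x))  # 0.897..., 0.990..., 0.999... -> the limit is 1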


calculus - Closed form of $\int_0^{\pi/2} \frac{\arctan^2 (\sin^2 \theta)}{\sin^2 \theta}\,d\theta$


I'm trying to evaluate the closed form of:



$$I =\int_0^{\pi/2} \frac{\arctan^2 (\sin^2 \theta)}{\sin^2 \theta}\,d\theta$$



So far I've tried introducing parameters,


$\displaystyle I(a,b) = \int_0^{\pi/2} \frac{\arctan (a\sin^2 \theta)\arctan (b\sin^2 \theta)}{\sin^2 \theta}\,d\theta$


but that doesn't lead to an integral I can manage. Expanding the series for $\arctan^2 x$ leads to the sum:


$$I = \frac{\pi}{2}\sum\limits_{n=0}^{\infty}\frac{(-1)^n}{(n+1)4^{2n+1}}\binom{4n+2}{2n+1}\left(\sum\limits_{k=0}^{n}\frac{1}{2k+1}\right)$$ and using $\displaystyle \int_0^1 x^{n-\frac{1}{2}}\log (1-x)\,dx = \frac{-2\log 2 + 2\sum\limits_{k=0}^{n}\dfrac{1}{2k+1}}{n+\frac{1}{2}}$ leads to an even uglier integral:


$\displaystyle \Im \int_{0}^{1} \frac{1}{1+\sqrt{1-i\sqrt{x}}}\frac{\log(1-x)}{x}\,dx$ among others.



I got the non-square version, which seems to have a nice closed form $\displaystyle \int_0^{\pi/2} \frac{\arctan (\sin^2 \theta)}{\sin^2 \theta}\,d\theta = \frac{\pi}{\sqrt{2}\sqrt{1+\sqrt{2}}}$ but the squared version seems difficult. Any help is appreciated.


(P.S. - I'm not familiar with Hypergeometric identities, so it would be very helpful if a proof or a reference to a proof was provided, should the need arise to use them.)


Answer



Let $I$ denote the integral. Then, using integration by parts we can write $$I=\int_{0}^{\pi\over 2}\frac{\left(\tan^{-1}(\sin^2 x) \right)^2}{\sin^2 x}dx = 4\int_0^{\frac{\pi}{2}}\frac{\cos^2 x\tan^{-1}(\sin^2 x)}{1+\sin^4 x}dx$$


The main idea of this evaluation is to use differentiation under the integral sign. Let us introduce the parameter $\alpha$: $$f(\alpha)=\int_0^{\frac{\pi}{2}}\frac{\cos^2 x\tan^{-1}(\alpha \sin^2 x)}{1+\sin^4 x}dx$$ Taking derivative inside the integral, \begin{align*} f'(\alpha) &= \int_0^{\pi\over 2}\frac{\cos^2 x}{1+\sin^4 x}\cdot\frac{\sin^2 x}{1+\alpha^2 \sin^4 x}dx \\ &= \frac{1}{1-\alpha^2}\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\sin^4 x}dx-\frac{\alpha^2}{1-\alpha^2}\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx \quad (\text{Partial Fractions}) \end{align*}


Let $g(\alpha)=\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx$. Then,


$$\frac{I}{4}=f(1)=\int_0^1\frac{g(1)-\alpha^2 g(\alpha)}{1-\alpha^2}d\alpha \tag{1}$$


Evaluation of $g(\alpha)$


\begin{align*} g(\alpha) &= \int_0^{\frac{\pi}{2}}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx \\ &= \int_0^\infty \frac{t^2}{\left(t^4(1+\alpha^2)+2t^2+1 \right)(t^2+1)}dt \quad (t=\tan x)\\ &= -\frac{1}{\alpha^2}\int_0^\infty\frac{1}{1+t^2}dt+\frac{1}{\alpha^2}\int_0^\infty\frac{1+(1+\alpha^2)t^2}{(1+\alpha^2)t^4+2t^2+1}dt \\ &= -\frac{\pi}{2\alpha^2}+\frac{\pi \sqrt{1+\sqrt{1+\alpha^2}}}{2\sqrt{2}\alpha^2}\tag{2} \end{align*} That last integral was evaluated using an application of the residue theorem. Substitute this into equation (1) to get $$I=\pi\sqrt{2}\int_0^1\frac{\sqrt{1+\sqrt{2}}-\sqrt{1+\sqrt{1+\alpha^2}}}{1- \alpha ^2 }d\alpha \tag{3}$$


Evaluation of integral (3)



Luckily, integral (3) has a nice elementary anti-derivative. \begin{align*} &\; \int \frac{\sqrt{1+\sqrt{2}}-\sqrt{1+\sqrt{1+\alpha^2}}}{1-\alpha^2}d\alpha \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha) -\int \frac{\sqrt{1+\sqrt{1+\alpha^2}}}{1-\alpha^2} d\alpha \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int \frac{t\sqrt{1+t}}{(2-t^2)\sqrt{t^2-1}}dt\quad (t=\sqrt{1+\alpha^2}) \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int \frac{t}{(2-t^2)\sqrt{t-1}}dt \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)- 2\int \frac{u^2+1}{2-(u^2+1)^2}du\quad (u=\sqrt{t-1}) \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)- 2\int \frac{u^2+1}{(\sqrt{2}-1-u^2)(\sqrt{2}+1+u^2)}du \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int\frac{du}{\sqrt{2}-1-u^2}+\int\frac{du}{\sqrt{2}+1+u^2} \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( u \sqrt{\sqrt{2}+1}\right)+\sqrt{\sqrt{2}-1}\tan^{-1}\left(u\sqrt{\sqrt{2}-1} \right) +C\\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( \sqrt{\sqrt{1+\alpha^2}-1} \sqrt{\sqrt{2}+1}\right)\\ &\quad +\sqrt{\sqrt{2} -1}\tan^{-1}\left(\sqrt{\sqrt{1+\alpha^2}-1}\sqrt{\sqrt{2}-1} \right) +C \end{align*}


Therefore, the integral is \begin{align*} I &= \pi \sqrt{2} \lim_{\alpha\to 1}\Bigg\{ \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( \sqrt{\sqrt{1+\alpha^2}-1} \sqrt{\sqrt{2}+1}\right)\\ &\quad +\sqrt{\sqrt{2} -1}\tan^{-1}\left(\sqrt{\sqrt{1+\alpha^2}-1}\sqrt{\sqrt{2}-1} \right) \Bigg\} \\ &= \pi\sqrt{2}\left\{\frac{\sqrt{\sqrt{2}+1}}{2}\log\left(\frac{\sqrt{2}+2}{4} \right) +\sqrt{\sqrt{2}-1}\tan^{-1}\left(\sqrt{2}-1 \right)\right\} \\ &= \pi\sqrt{2}\left\{\frac{\sqrt{\sqrt{2}+1}}{2}\log\left(\frac{\sqrt{2}+2}{4} \right) +\frac{\pi}{8}\sqrt{\sqrt{2}-1}\right\} \\ &= \color{maroon}{\pi \log\left( \frac{2+\sqrt{2}}{4}\right) \sqrt{\frac{\sqrt{2}+1}{2}}+\frac{\pi^2}{4}\sqrt{\frac{\sqrt{2}-1}{2}}}\approx 0.576335 \end{align*}
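The closed form can be checked numerically (a hypothetical snippet using scipy; the variable names are ad hoc):

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: np.arctan(np.sin(t)**2)**2 / np.sin(t)**2, 0, np.pi / 2)
closed = (np.pi * np.log((2 + np.sqrt(2)) / 4) * np.sqrt((np.sqrt(2) + 1) / 2)
          + np.pi**2 / 4 * np.sqrt((np.sqrt(2) - 1) / 2))
print(val, closed)  # both ~0.576335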


functions - Continuously differentiable vs Continuous derivative

I am wondering whether the following two characteristics of a function are equivalent or not; that is:



1- A function has derivative over an open interval




2- The derivative of a function exists and is continuous over an open interval ($C^1$ functions)



If not, please include in your answer any example that comes to your mind that works on a set of greater than measure zero. Thanks.

real analysis - Finding bijection from (0,1) → N



How exactly do I go about finding a bijection between (0,1) → N \ {0}



so $(0,1) → (1, \infty)$. I figured I could look at this as finding a function from $(0,1) → (0, \infty)$ and just adding 1.




I've seen examples where $f(x) = \frac{1}{x} -1$, so that $f(0) = \infty$ and $f(1) = 0$ (but these were on closed sets).



I couldn't find an example of a function such that $\lim_{x\to 1} f(x) = \infty$ or $\lim_{x\to 0} f(x) = \infty$, which is what it looks like I need here.



Can someone give me an example, or a way to find such a function?


Answer



Here are two bijections $f$ from $(0,1)$ to $(1,\infty)$:



1) Let $f(x)=\frac{1}{1-x}$;




2) Let $f(x)=1+\tan\left(\frac{\pi x}{2}\right)$.


Thursday, October 29, 2015

combinatorics - What is the highest power of $18$ contained in $\frac{50!}{25!(50-25)!}$?


What is the highest power of $18$ contained in $\frac{50!}{25!(50-25)!}$?


How will I be able to find the answer to such questions? Is there any special technique to find the answer to such problems? Thank you.


Answer




$18 = 2\cdot 3^2$. We can find the power of a small prime in a large factorial by successive division to find base divisibility, then divisibility by squares, etc. So the multiplicity of powers of $2$ in $50!$, $v_2(50!),$ is $$ v_2(50!) = \left\lfloor\frac{50}{2}\right\rfloor + \left\lfloor\frac{50}{4}\right\rfloor + \left\lfloor\frac{50}{8}\right\rfloor + \cdots = 25+12+6+3+1 = 47$$


and similarly $v_2(25!)=22$, $v_3(50!)=16+5+1 = 22$ and $v_3(25!)=8+2 =10$, so


$$v_2\left(\frac{50!}{25!25!}\right) = 47-2\cdot22=3 \\ v_3\left(\frac{50!}{25!25!}\right) = 22-2\cdot 10=2 $$


and only $2$ available powers of $3$ means that $v_{18}\left(\frac{50!}{25!25!}\right)=1$ - the highest power of $18$ dividing the given expression is $18^1=18$.
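The same computation in code (a sketch; the helper name v_p_factorial is made up here):

from math import comb

def v_p_factorial(n, p):
    # Legendre's formula: exponent of the prime p in n!
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

v2 = v_p_factorial(50, 2) - 2 * v_p_factorial(25, 2)  # 3
v3 = v_p_factorial(50, 3) - 2 * v_p_factorial(25, 3)  # 2
print(v2, v3, min(v2, v3 // 2))                 # 3 2 1, so 18^1 is the answer
print(comb(50, 25) % 18, comb(50, 25) % 18**2)  # 0 and nonzero, confirming it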


arithmetic - Multiplication of repeating decimal $0.3333\overline{3}$ by $3$




Let's start considering a simple fractions like $\dfrac {1}{2}$ and $\dfrac {1}{3}$.



If I choose to represent those fraction using decimal representation, I get, respectively, $0.5$ and $0.3333\overline{3}$ (a repeating decimal).




That is where my question begins.



If I multiply either $\dfrac {1}{2}$ or $0.5$ by $2$, I end up with $1$, as well as, if I multiply $\dfrac {1}{3}$ by $3$.



Nonetheless, if I decide to multiply $0.3333\overline{3}$ by $3$, I will not get $1$, but instead, $0.9999\overline{9}$



What am I missing here?



*Note that my question is different than the question Adding repeating decimals



Answer



Hint: compute the difference between $1$ and $0.9\bar9$. How much is that? What do you conclude?


discrete mathematics - Proof By Induction: Summation of Polynomial

Prove by induction (weak or strong) that:



$$\sum_{k=0}^{n-1}(k + 1)^2= \frac{n(n + 1)(2n+1)}{6}$$




My base case is:



$n = 1$, which is true.



In my Inductive Step, I assume that: $$S(n)=\frac{n(n + 1)(2n+1)}{6}$$ holds for an arbitrary value of $n$.



Proving it then holds for $n+1$:
$$ S(n+1)=\frac{(n+1)((n+1)+1)(2(n+1)+1)}{6}$$
$$ \phantom{S(n+1)}=\frac{(n+1)(n+2)(2n+2+1)}{6}$$
$$ \phantom{S(n+1)}=\frac{(n+1)(n+2)(2n+3)}{6}$$

$$ \phantom{S(n+1)}=\frac{2n^3+9n^2+13n+6}{6}$$



but I can't see how my definition of $S(n)$ can be substituted into this final expression.



[EDIT] This isn’t a duplicate because the original summation of $(k+1)^2$ is what I’m originally provided with. The apparent duplicate question also doesn’t have a correct proof by induction answer associated with it.

real analysis - Am I properly using induction (specifically the induction hypothesis)?





If $z_1, z_2, \ldots , z_n$ are complex numbers, then prove that $|z_1 + \cdots + z_n| \le |z_1| + \cdots + |z_n|.$




Hello, I've tried to prove this problem with induction, but I'm not sure if I'm using the induction hypothesis properly. From what I understand, I need to reduce the inductive step down to something that looks "similar" to the induction hypothesis, and use that as a proof.



What I did was first use the $n = 2$ case as a base case:



$|z + w| \le |z| + |w|$ (proof was covered in class, and we did not have to re-prove it).



Inductive Hypothesis:




(I) Assume that $|z_1 + \cdots + z_n| \le |z_1| + \cdots + |z_n|.$



Inductive Step:



$$|z_1 + \cdots + z_n + z_{n+1}| \le |z_1| + \cdots + |z_n| + |z_{n+1}|.$$



(II) Here, $z_1 + \cdots + z_n$ is really just some other complex number, $w$. So, we can rewrite the inequality as $|w + z_{n+1}| \le |z_1| + \cdots + |z_n| + |z_{n+1}|.$



(III) From our base case and inductive hypothesis, we know that $|z + w| \le |z| + |w|$ and $w = |z_1 + \cdots + z_n| \le |z_1| + \cdots + |z_n| = |w|.$ (I feel like I'm missing something here. Am I?)




From this, we can rewrite the inductive step as $|w + z_{n+1}| \le |w| + |z_{n+1}|.$ This expression is something that's been proven in the base case, and therefore we know the inequality holds. Therefore, the inductive step holds, and the proof is complete.



Have I properly used induction here? I feel a bit iffy on stating $|z_1| + \cdots + |z_n| = |w|$ in (III). Is this something I'd need to also prove separately? Also, I feel like I'm relying on the base case too much rather than the induction hypothesis. Is this something I'm allowed to do in induction? Thank you for your time and input!


Answer



You have a pretty good foundation here. Some clean-up...



For the inductive step,
$$
|(z_1 + \cdots + z_n) + z_{n+1}| \leq |z_1 + \cdots + z_n| + |z_{n+1}| \leq |z_1|+ \cdots + |z_n|+ |z_{n+1}|

$$
and you're done. The first inequality is your application of the basic fact for two terms, and the last is the usage of the inductive hypothesis.


linear algebra - Prove that a square matrix commutes with its inverse





The Question:



This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:




Let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$; that is, prove that a matrix commutes with its inverse, so the left-inverse is also the right-inverse.




My thoughts so far:




This is particularly annoying to me because it seems like it should be easy.



We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have
$$
\left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right)
$$
But is that enough?



It might help to talk about generalized inverses.


Answer




Your notation $A^{-1}$ is confusing because it makes you think of it as a two-sided inverse, but we only know it's a left-inverse.



Let's call $B$ the matrix so that $BA=I$. You want to prove $AB=I$.



First, you need to prove that there is a $C$ so that $AC=I$. To do that, you can use the determinant, but there must be another way. [EDIT] There are several methods here. The simplest (imo) is the one using the fact that the matrix has full rank.[/EDIT]



Then you have that $B=BI=B(AC)=(BA)C=IC=C$ so you get $B=C$ and therefore $AB=I$.
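A sketch of the full-rank step alluded to above (my wording, not the answerer's): $BA=I$ forces $A$ to be injective, since $Ax=0$ implies $x=B(Ax)=0$; an injective linear map of a finite-dimensional space to itself is also surjective, so each standard basis vector $e_i$ has a preimage $c_i$ with $Ac_i=e_i$, and the matrix $C$ with columns $c_1,\dots,c_n$ satisfies $AC=I$.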


discrete mathematics - Proving that $5^n - 1$ is divisible by $4$ by mathematical induction.

I have done it, but I am not sure that the inductive step is right. Can anybody please clarify this for me?


Basic steps as:



Taking $n=1$: $p(1)=5-1=4$.


Inductive hypothesis: Assume the statement is true for $n=k$, i.e. $5^k - 1$ is divisible by $4$.


Inductive step: We must show $p(k+1)$ is true when $p(k)$ is true. \begin{align*} 5^{k+1} -1 &= 5\cdot 5^{k} -1\\ &= 4\cdot 5^{k} + (5^k -1) \end{align*} Both terms are divisible by $4$ (the second by the inductive hypothesis), so $p(k+1)$ holds.

calculus - How come such different methods result in the same number, $e$?


I guess the proof of the identity $$ \sum_{n = 0}^{\infty} \frac{1}{n!} \equiv \lim_{x \to \infty} \left(1 + \frac{1}{x}\right)^x $$ explains the connection between such different calculations. How is it done?


Answer



This identity, in isolation, is certainly confusing. It becomes less so once you know that it is just a special case of a more general identity



$$\sum_{n \ge 0} \frac{x^n}{n!} = \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n.$$


Why does this hold? Well, let's call the function on the left $e^x$. Then by inspection, $\frac{d}{dx} e^x = e^x$. General theory tells you that the solution to $y' = y$ is unique provided you fix the value of $y(0)$ (and this is also physically intuitive), and here $e^0 = 1$ by inspection, so that tells you that $e^x$ is the unique function satisfying $e^0 = 1$ and $\frac{d}{dx} e^x = e^x$.



Chandru1's deleted answer reminded me that the proof of this is actually completely elementary. Suppose $f$ is a function which satisfies $f'(x) = f(x)$. Then


$$\frac{d}{dx} \left( e^{-x} f(x) \right) = e^{-x} f'(x) - e^{-x} f(x) = 0$$


hence $e^{-x} f(x) = C_f$ for some constant $C_f$. Substituting $f(x) = e^x$ gives $e^{-x} e^x = C$, and substituting $x = 0$ gives $e^{-x} e^x = 1$. Now it follows that $f(x) = C_f e^x$.



Why does the function on the RHS also satisfy this property? Let's call this function $\exp(x)$. Then


$$\exp(cx) = \lim_{n \to \infty} \left( 1 + \frac{cx}{n} \right)^n = \lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^{mc} = \exp(x)^c$$


where $m = \frac{n}{c}$. Setting $c = 1 + \frac{y}{x}$, this gives


$$\exp(x + y) = \exp(x)^{1 + \frac{y}{x}} = \exp(x) \exp(x)^{ \frac{y}{x} } = \exp(x) \exp(y)$$


from which it follows that



$$\frac{d}{dx} \exp(x) = \lim_{h \to 0} \frac{\exp(x) \exp(h) - \exp(x)}{h} = \exp(x) \lim_{h \to 0} \frac{\exp(h) - 1}{h}$$


so it follows that $\exp(x)$ satisfies $\exp(0) = 1$ and $\frac{d}{dx} \exp(x) = \exp(x)$, which means it must equal $e^x$ - at least, as soon as we know that


$$\lim_{h \to 0} \frac{\exp(h) - 1}{h} = 1.$$


But


$$\exp(h) = \lim_{n \to \infty} \left( 1 + \frac{h}{n} \right)^n = \lim_{n \to \infty} \left( 1 + {n \choose 1} \frac{h}{n} + {n \choose 2} \frac{h^2}{n^2} + ... \right) = \lim_{n \to \infty} \left( 1 + h + R(h, n) \right)$$


where $|R(h, n)| \le \sum_{k \ge 2} |h|^k = O(|h|^2)$ by the ratio test, from which the conclusion follows.



That last step, by the way, is an echo of the standard proof, where one expands


$$\left( 1 + \frac{x}{n} \right)^n = \sum_{k=0}^n {n \choose k} \frac{x^k}{n^k}$$


and uses the fact that $\frac{1}{n^k} {n \choose k} \to \frac{1}{k!}$ as $n \to \infty$. While this is fairly hands-on, in my opinion it obscures an underlying principle behind this problem, which is that $e^x$ is a very special function and all of the equivalent ways of defining it are based on one of its special properties.


One really nice way to interpret the above identity is that it is precisely what you get by applying Euler's method to the ODE $y' = y$ with initial conditions $y(0) = 1$.
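A small numeric illustration of the two definitions agreeing (a hypothetical snippet):

import math

n = 10**6
compound = (1 + 1 / n) ** n                             # the limit definition
series = sum(1 / math.factorial(k) for k in range(20))  # the series definition
print(compound, series, math.e)                         # all ~2.718281828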



Wednesday, October 28, 2015

analysis - Continuity of a function


For all $(x,y)\in\mathbb{R}^2$ a function $f$ satisfies $f (x+y)=f (x)\cdot f (y)$.



If $f$ is continuous at a point $a$, then show that $f$ is continuous on $\mathbb R$ and $f (x)= b^x$ for some constant $b$.





My approach



$$f (1)=f (1+0)=f (1)\cdot f (0)$$
Thus $f(0)=1$ and $$f(x+h)-f (x)= f(x)\,(f (h)-1)$$



I can prove the continuity of the function.
How can I prove that $f (x)=b^x$ for any $x\in\Bbb R$?

number theory - Explanation of Zeta function and why 1+2+3+4+... = -1/12











I found this article on Wikipedia which claims that $\sum\limits_{n=0}^\infty n=-1/12$. Can anyone give a simple and short summary on the Zeta function (never heard of it before) and why this odd result is true?



Answer



The answer is much more complicated than $\lim_{x \to 0} \frac{\sin(x)}{x}$.



The idea is that the series $\sum_{n=1}^\infty \frac{1}{n^z}$ is convergent when $\operatorname{Re}(z) >1$, and this works also for complex numbers $z$.



The sum is a nice function (analytic) and can be extended in a unique way to a nice function $\zeta$. This means that



$$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z} \,;\, Re(z) >1 \,.$$



Now, when $z=-1$, the right side is NOT convergent, still $\zeta(-1)=\frac{-1}{12}$. Since $\zeta$ is the ONLY way to extend $\sum_{n=1}^\infty \frac{1}{n^z}$ to $z=-1$, it means that in some sense




$$\sum_{n=1}^\infty \frac{1}{n^{-1}} =-\frac{1}{12}$$



and this is exactly what that means. Note that, in order for this to make sense, on the LHS we don't have convergence of the series; we have a much more subtle type of convergence: we actually ask that the function $\sum_{n=1}^\infty \frac{1}{n^z}$ be differentiable as a function of $z$, and let $z \to -1$...



In some sense, the phenomenon is close to the following:



$$\sum_{n=0}^\infty x^n =\frac{1}{1-x} \,;\, |x| <1 .$$



Now, the LHS is not convergent for $x=2$, but the RHS function makes sense at $x=2$. One could say that this means that in some sense $\sum_{n=0}^\infty 2^n =-1$.




Anyhow, because of the analyticity of the Riemann zeta function, the statement about $\zeta(-1)$ is actually much more subtle, and true on a more formal level than this geometric statement...
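The continuation can be probed numerically with mpmath (a hypothetical snippet):

from mpmath import mp, zeta

mp.dps = 15
print(zeta(2))   # 1.64493406684823 = pi^2/6; here the series converges
print(zeta(-1))  # -0.0833333333333333 = -1/12, via analytic continuation only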


Tuesday, October 27, 2015

real analysis - Prove the inverse of a strictly increasing function is differentiable.

So, I was given the following problem as part of a homework assignment.




Suppose $f'(x) > 0$ in $(a,b)$. Prove that $f$ is strictly increasing in $(a,b)$, and let $g$ be its inverse function. Prove that $g$ is differentiable, and that
$$g'(f(x)) = \frac{1}{f'(x)}$$





I have proven that $f$ is strictly increasing in $(a,b)$, and I could prove that $g'(f(x)) = 1/f'(x)$ if I could prove that $g$ is differentiable. The problem is that I am having trouble with a proof of that. Any advice?



Also, as a reference, this is exercise 5.2 from Baby Rudin.

integration - Integrating $\int_0^{\frac{\pi}{2}} \frac{x}{\cos(x)+\sin(x)} \mathrm dx$ using complex analysis

I am struggling with the integration of $$ \text{I} = \int_0^{\frac{\pi}{2}} \frac{x}{\cos(x)+\sin(x)} \mathrm dx$$ using complex analysis. I used the substitution $z=e^{ix} \rightarrow \mathrm dx = \frac{1}{iz}\mathrm dz$ and hence $\cos(x)+\sin(x)= \frac12\left((z+\frac{1}{z})-i (z-\frac{1}{z})\right)$, because we have $ \cos(x)=\frac{1}{2}(e^{ix}+ e^{-ix})$, $ \sin(x)=\frac{1}{2i}(e^{ix} -e^{-ix})$, and $x= -i \ln(z)$. However, I have not been able to reproduce the value obtained by real analysis, which gives $$\text{I}= \displaystyle\int_{0}^{\frac{\pi}{2}}\dfrac{x}{\sin\,x + \cos\,x}\,dx = 0.978959918 $$



Any idea how to calculate this integral using complex analysis and Cauchy integral theorem?

Integration involving substitution, integration by part and partial fraction



I am trying to design an integration question which involves three methods, namely substitution, integration by part and partial fraction.



However, I couldn't design such a question. The best I can do involves at most $2$ methods, but not $3$. For example,



$$\int x^8 e^{x^3} \ dx$$



The integral above involves only substitution $(u=x^3)$ and by part, but not partial fraction.





It would be good if someone can come out a question involving three methods.



Answer



An example could be
$$
\int\frac{x^2\ln x}{\left(1+x^3\right)^2}\:dx.
$$
By the change of variable $u=x^3$, $du=3x^2dx$, one gets
$$

\int\frac{x^2\ln x}{\left(1+x^3\right)^2}\:dx=\frac19\int\frac{\ln u}{\left(1+u\right)^2}\:du,
$$ then one may use an integration by parts
$$
\frac19\int\frac{\ln u}{\left(1+u\right)^2}\:du=-\frac{\ln u}{9\left(1+u\right)}+\frac19\int\frac1{u\left(1+u\right)}\:du
$$and conclude by partial fraction decomposition.
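Spelling out that last step (a small completion): $$\frac{1}{u\left(1+u\right)}=\frac1u-\frac1{1+u},$$ so $$\frac19\int\frac{du}{u\left(1+u\right)}=\frac19\ln\frac{u}{1+u}+C=\frac19\ln\frac{x^3}{1+x^3}+C.$$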


real analysis - A convergence result for functions in L^2

Let $(f_n)_{n\geq 1}$, $f$ and $g$ be functions in $L^2$ [0,1].
Suppose $f_n \rightarrow f$ pointwise almost everywhere.




If $|f_n(x)| \leq |x|^{-1/3}$ prove that :



$\lim_{n \to \infty} \int_0^1 f_n(x)g(x) = \int_0^1 f(x)g(x)$.



To me this looks very much like monotone convergence, but the existence of $g$ and the fact that the sequence may not be monotonic cause problems for me.



Any help would be greatly appreciated.

calculus - Find $\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}$.



Find $$\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}.$$




I don't know how to start. Hints are also appreciated.


Answer



let $$y= \frac{(n!)^{1/n}}{n}.$$ $$\implies y=\left (\frac{n(n-1)(n-2)\cdots3\cdot2\cdot1}{n\cdot n\cdot n\cdots n}\right)^\frac{1}{n} $$ Note that we can distribute the $n$'s in the denominator and give an $n$ to each factor, $$\implies \log y= \frac {1}{n}\left(\log\frac{1}{n}+\log\frac{2}{n}+...+\log\frac{n}{n}\right)$$ applying $\lim_{n\to\infty}$ on both sides, we find that the R.H.S. is of the form


$$ \lim_{n\to\infty} \frac{1}{n} \sum_{r=1}^{n}f \left(\frac{r}{n}\right)$$ which can be evaluated by integration $$=\int_0^1\log(x)dx$$ $$=\Big[x\log x-x\Big]_0^1$$ plugging in the limits (carefully here, since $x\log x\to0$ as $x\to0^+$) we get $-1$.


$$ \log y=-1,\implies y=\frac{1}{e}$$
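A numeric check of the answer (a hypothetical snippet; lgamma(n+1) is log(n!)):

import math

for n in (10, 100, 1000):
    log_y = math.lgamma(n + 1) / n - math.log(n)  # log of (n!)^(1/n) / n
    print(n, math.exp(log_y))  # tends to 1/e ~ 0.3679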


Monday, October 26, 2015

discrete mathematics - Proof of the Hockey-Stick Identity: $\sum\limits_{t=0}^n \binom tk = \binom{n+1}{k+1}$



After reading this question, the most popular answer use the identity
$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}.$$




What's the name of this identity? Is it a modified form of the Pascal's triangle identity?



How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?



Thanks for your help.






EDIT 01 : This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.




[figure: the hockey-stick pattern highlighted on Pascal's triangle]


Answer



This is purely algebraic. First of all, since $\dbinom{t}{k} =0$ when $k>t$ we can rewrite the identity in question as
$$\binom{n+1}{k+1} = \sum_{t=0}^{n} \binom{t}{k}=\sum_{t=k}^{n} \binom{t}{k}$$



Recall that (by Pascal's rule),
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$



Hence
$$\binom{t+1}{k+1} = \binom{t}{k} + \binom{t}{k+1} \implies \binom{t}{k} = \binom{t+1}{k+1} - \binom{t}{k+1}$$




Let's get this summed by $t$:
$$\sum_{t=k}^{n} \binom{t}{k} = \sum_{t=k}^{n} \binom{t+1}{k+1} - \sum_{t=k}^{n} \binom{t}{k+1}$$



Let's factor out the last member of the first sum and the first member of the second sum:
$$\sum _{t=k}^{n} \binom{t}{k}
=\left( \sum_{t=k}^{n-1} \binom{t+1}{k+1} + \binom{n+1}{k+1} \right)
-\left( \sum_{t=k+1}^{n} \binom{t}{k+1} + \binom{k}{k+1} \right)$$



Obviously $\dbinom{k}{k+1} = 0$, hence we get

$$\sum _{t=k}^{n} \binom{t}{k}
=\binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t=k+1}^{n} \binom{t}{k+1}$$



Let's introduce $t'=t-1$, then if $t=k+1 \dots n, t'=k \dots n-1$, hence
$$\sum_{t=k}^{n} \binom{t}{k}
= \binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t'=k}^{n-1} \binom{t'+1}{k+1}$$




The last two sums cancel each other and you get the desired formulation
$$\binom{n+1}{k+1}
= \sum_{t=k}^{n} \binom{t}{k}
= \sum_{t=0}^{n} \binom{t}{k}$$
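A quick check with small numbers (a hypothetical snippet):

from math import comb

n, k = 10, 3
print(sum(comb(t, k) for t in range(n + 1)))  # 330
print(comb(n + 1, k + 1))                     # 330: the two sides agree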


calculus - Compute: $int_{0}^{1}frac{x^4+1}{x^6+1} dx$



I'm trying to compute: $$\int_{0}^{1}\frac{x^4+1}{x^6+1}dx.$$



I tried to change $x^4$ into $t^2$ or $t$, but it didn't work for me.



Any suggestions?



Thanks!



Answer



Edited: Here is a much simpler version of the previous answer.



$$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{x^4-x^2+1}{x^6+1}dx+ \int_0^1 \frac{x^2}{x^6+1}dx$$



After simplifying the first fraction (note $x^6+1=(x^2+1)(x^4-x^2+1)$), and subbing $y=x^3$ in the second, we get:



$$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{1}{x^2+1}dx+ \frac{1}{3}\int_0^1 \frac{1}{y^2+1}dy = \frac{\pi}{4}+\frac{\pi}{12}=\frac{\pi}{3} \,.$$



P.S. Thanks to Zarrax for pointing out the silly mistakes I made...
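A numerical confirmation (a hypothetical snippet using scipy):

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (x**4 + 1) / (x**6 + 1), 0, 1)
print(val, np.pi / 3)  # both ~1.047197551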



Sunday, October 25, 2015

calculus - proving $\lim\limits_{x\to \infty }\frac{x^2}{x^2+\sin^2 x}=1$



I have to prove the following limit for homework $$\lim_{x\to \infty }\frac{x^2}{x^2+\sin^2 x}=1$$


The proof must be done by showing that for every $\epsilon > 0$ there exists an $M > 0$ such that for every $x > M$, $|f(x)-1| < \epsilon$.


I can't seem to figure this one out.


I would greatly appreciate anyone who tries to help me out :) Thanks


Answer



$$\left| \frac{x^2}{x^2+\sin^2 x}-1 \right| = \frac{\sin^2 x}{x^2+\sin^2 x} \leq \frac{1}{x^2}$$


Now making $\frac{1}{x^2} \leq \epsilon$ gives you the $M$....
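Concretely (one possible choice, filling in the last step): given $\epsilon>0$, take $M=1/\sqrt{\epsilon}$; then $x>M$ gives $\left|\frac{x^2}{x^2+\sin^2 x}-1\right|\le\frac{1}{x^2}<\epsilon$.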


real analysis - Prove the sequence $x_{n+1}=\frac{1}{4-x_n}, x_1=3$ converges.



I try to use induction to show that the sequence is monotonically decreasing. My induction hypothesis is $$P(n):x_{n+1}<x_n.$$ We have first few terms $x_1=3, x_2=1, x_3=1/3,\cdots$. It is clear that $P(1)$ and $P(2)$ hold. Now I try to prove $P(n+1)$ assuming $P(n)$ is true.
To do this I must show that $x_{n+2}-x_{n+1}<0$.




$$x_{n+2}-x_{n+1}=\frac{1}{4-x_{n+1}}-\frac{1}{4-x_n}=\frac{x_{n+1}-x_n}{(4-x_{n+1})(4-x_n)}$$
I get stuck here: I know that the numerator is less than $0$, but how do I show that the denominator is positive so that the entire expression is less than $0$?



I am trying to show that the sequence is monotonically decreasing and also bounded below, in order to show that it converges.



Hints first


Answer



Hint: use as your inductive hypothesis the following statement: $x_{n+1} < x_n < 4$.







How to come up with this hypothesis? I first asked myself whether you were asking for something that's even true: is the denominator positive? The answer is obviously yes by working out the first few values. Why is the denominator positive? Because it's the product of positive things. Can we show that we're always taking the product of positive things? In order to do that, we'd need…
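A numerical illustration of where the sequence is heading (a hypothetical snippet; the limit solves $x=1/(4-x)$, i.e. $x=2-\sqrt3$):

x = 3.0
for _ in range(15):
    x = 1 / (4 - x)
print(x, 2 - 3 ** 0.5)  # both ~0.2679491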


abstract algebra - The square roots of different primes are linearly independent over the field of rationals


I need to find a way of proving that the square roots of a finite set

of different primes are linearly independent over the field of
rationals.




I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. To
prove linear independence of the square roots of two primes is easy, but then my
problems arise. I would be very thankful for an answer to this question.

Is this function bijective, surjective and injective?

$\lfloor\cdot\rfloor: \mathbb Q \rightarrow\mathbb Z$ with $\lfloor x\rfloor :=$ floor of $x$.



I know a function is injective by using $f(x_1)=f(x_2) \Rightarrow x_1=x_2$ and a function is surjective if each element of the codomain, $y\in Y$, is the image of some element in the domain $x\in X$, and bijective if the function is both injective and surjective.


I don't know what floor of $x$ is.
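(For reference: $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$; for example $\lfloor 2.7\rfloor=2$ and $\lfloor -1.3\rfloor=-2$.)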

Proof of obtaining multiple of 10 by using arithmetic operations on any three numbers



While doing a few math puzzles I noted that you could take any three numbers (integers) and use basic arithmetic operations (Addition, Subtraction, Multiplication and Parentheses only)
to obtain a multiple of 10, using each number only once.



Eg.
Using 29, 73, 36: $73+36-29=80$
Using 2, 4, 7: $2*7-4=10$




Is there a proof for this theory in particular, or is there a more general theorem that applies to this? If not, is there a set of three numbers which disproves this theory?






I found a few clues while solving this:



If one of the numbers is a multiple of 5 (call it $x$) then there definitely exists a solution divisible by 10. This one is easily provable as you merely need an even number to multiply with. If any one of the remaining two numbers is even, you can use that to multiply with $x$ to get number divisible by 10. If both numbers are odd then you can add them to get an even number to multiply with $x$.



It would seem as if only the units digits have any bearing on whether the result is a multiple of 10 or not: you could take any of the examples, add or subtract any multiple of 10 from the numbers, and do the same operations to still get a multiple of ten. I feel this is also easily provable, but I can't think of a way to do it which isn't long-winded.

EDIT: It can be proven as follows. Let $a,b$ be two integers such that $10\mid a+b$.
Adding two multiples of $10$ (say $10m,10n$) to $a$ and $b$, we get $10m+a+10n+b$.
Since $10\mid a+b$, we can write $a+b=10k$, so the sum is $10m+10n+10k$,
which is divisible by ten.
Now let $a,b$ be two integers such that $10\mid ab$.
Adding $10m,10n$ to $a$ and $b$, we get $(10m+a)(10n+b)$
$=100mn+10an+10bm+ab=100mn+10an+10bm+10k$,
which is divisible by ten.
Therefore any combination of addition and multiplication with the three numbers shouldn't matter.
I realise I could replace $10$ with any number in this proof and it would still work, so we really only need to prove the claim for single-digit numbers.







I ran a program in python that checked for every combination of single digit no.s, but it didn't find any combination that disproves this.



I'm not sure how this question would be categorised, hence the lacklustre tags.



I'm fairly new to StackExchange. Please forgive me if I've worded this question poorly.


Answer



Let's collect some of the useful observations mentioned and implicit in the OP:





  1. Only the residue class of $a,b,c$ mod $10$ matters.

  2. If any of $a,b,c$ is divisible by $5$ then there is a solution.

  3. If any subset of $\{a,b,c\}$ has a solution, then so does $\{a,b,c\}$.



As a consequence of the above, we may assume WLOG that $\{a,b,c\}$ are distinct single digits from $1,2,3,4,6,7,8,9$.



With a little more effort, we can justify that $a$ can be replaced by $-a$, which will let us further restrict to the digits $1,2,3,4$. It's not too hard to be convinced of this by playing around with some examples, but let's give a proper proof by structural induction. More specifically, we will prove:





Proposition: Let $f(x_1,\ldots,x_n)$ be a fully parenthesized expression using each $x_i$ exactly once and only $+,-,*$ operators. Then at least one of $-f$ or $f$ can be written as a similar expression using $-x_1,x_2,\ldots,x_n$ exactly once.




This is trivial for $n=1$, and for $n=2$ we quickly verify each of the four possible expressions of two variables:
$$-(a+b) = (-a)-b,\\-(a-b) = (-a)+b,\\(b-a) = b+(-a),\\-(a*b) = (-a)*b.$$



We can now do a structural induction: suppose $n\ge 3$. By extracting the highest-level operator of $f(x_1,\ldots, x_n)$, we may write $f$ as $h(g_1(\cdots),g_2(\cdots))$, where each of $g_1,g_2,h$ are valid expressions, and the arguments of $g_1, g_2$ partition the set $\{x_1,\ldots,x_n\}$. I've deliberately obscured the arguments because the notation becomes unwieldy, as the idea is better illustrated by a simple example:





If $f(x_1,x_2,x_3,x_4,x_5) = ((x_1 * x_5) + x_2) - (x_3 - x_4)$, then we take $$h(a,b) = a-b,\quad g_1 = (x_1 * x_5) + x_2,\quad g_2 = x_3 - x_4.$$




Note that $h,g_1,g_2$ each take fewer than $n$ arguments, so the induction hypothesis applies to each of them.

The above proposition lets us freely replace $9$ by $-9$ (which is equivalent to $1$), etc., and so we can narrow down the set $\{a,b,c\}$ to a subset of $\{1,2,3,4\}$. Lucky for us, there's only four such subsets:



$$ (1 + 2 - 3) = 0,\\
(1 + 4)*2 = 10,\\
(1 + 3 - 4) = 0,\\
(2+3)*4 = 20.$$
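The exhaustive check the question mentions is easy to reproduce (a hypothetical re-implementation; reachable is an ad hoc helper that tries every parenthesization with $+,-,*$, using each number exactly once):

from itertools import product

def reachable(nums):
    # all values obtainable by repeatedly combining two entries with +, -, *
    if len(nums) == 1:
        return {nums[0]}
    out = set()
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i == j:
                continue
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            for v in (nums[i] + nums[j], nums[i] - nums[j], nums[i] * nums[j]):
                out |= reachable([v] + rest)
    return out

bad = [t for t in product(range(10), repeat=3)
       if not any(v % 10 == 0 for v in reachable(list(t)))]
print(bad)  # [] : every triple of digits reaches a multiple of 10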


Solving a differential equation with a complex number as a coefficient

I am trying to solve the following differential equation:
\begin{equation}
y'' - iy = 0
\end{equation}
By following the usual method of solving, I get my characteristic equation $\lambda^{2} - i = 0$, which then gives me the general solution of
\begin{equation}
y = c_1\cos(\sqrt{i}x) + c_2\sin(\sqrt{i}x)
\end{equation}
I want to know if there is any way I can remove the $i$ term from inside the brackets so as to get real solutions. I have tried using an expression for $e^{ix}$ but cannot cancel out the $i$ and the $\sqrt{i}$.



I also tried this with a similar ODE, as shown below,
\begin{equation}
y'' + iy = 0
\end{equation}
Where I got a characteristic equation of $\lambda^{2} + i = 0$, giving me a general solution of
\begin{equation}
y = c_1 e^{i\sqrt{i}x} + c_2 e^{-i\sqrt{i}x}
\end{equation}
However, I am still not sure how to simplify this so that, in cos/sin form, I have no complex term inside the brackets.



Boundary conditions do not affect the problem at this stage - I'm simply interested in removing the complex term from the brackets.



I hope this is clear enough - any help would be appreciated.
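For what it's worth, here is a sketch of the standard way to make the arguments real (my addition, using the principal root $\sqrt{i}=\frac{1+i}{\sqrt2}$): for $y''-iy=0$ the roots are $\lambda=\pm\frac{1+i}{\sqrt2}$, so
\begin{equation}
y = c_1 e^{x/\sqrt2}\left(\cos\tfrac{x}{\sqrt2}+i\sin\tfrac{x}{\sqrt2}\right)+c_2 e^{-x/\sqrt2}\left(\cos\tfrac{x}{\sqrt2}-i\sin\tfrac{x}{\sqrt2}\right),
\end{equation}
where the trigonometric arguments are now real and the $i$ survives only in the (generally complex) constants. The case $y''+iy=0$ works the same way with $\sqrt{-i}=\frac{1-i}{\sqrt2}$.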

permutations - How many numbers are there between 100 and 1000 in which all the digits are distinct?



How many numbers are there between 100 and 1000 in which all the digits are distinct?



My mathematics textbook says the answer is 648.



My analysis:-



The number at the hundred's place can be chosen in 9 ways(as zero is not possible)




The number at the ten's place can be chosen in 9 ways(the number at the hundred's place is no longer available but zero is now available).



The number at the one's place can be chosen in 8 ways(the number at the hundred's and ten's place is not available but zero is available).



So by the multiplication rule, the total number of ways would seem to be 9*9*8=648.



But the hundred's place can have $1$, the ten's place can have $0$, and the one's place can have $0$. So $100$ is a possibility, but the question asks for numbers between $100$ and $1000$. As a result, $100$ has to be removed. Therefore, the correct answer is $648-1=647$.



I have checked and rechecked my method and can't find anything wrong with it. Can someone please help verify my answer?



Answer



The answer $648$ is correct.



There are nine choices for the hundreds place. There are also nine choices for the tens place (it can be any digit except what you chose for the hundreds).



Finally, there are eight choices for the ones place, regardless of what the tens place is. If the tens place is a zero, then you can't choose either the hundreds place digit or zero for the ones place. If the tens digit is not zero, then you can pick zero for the ones place, just not the digits in the hundreds or tens place.
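The count is quickly confirmed by brute force (a hypothetical snippet); note that $100$ is excluded automatically, since its two zeros are not distinct:

count = sum(1 for n in range(100, 1000) if len(set(str(n))) == 3)
print(count)  # 648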


Saturday, October 24, 2015

sequences and series - Questions on two Formulas for $\zeta(s)$

This question is related to the following two formulas for $\zeta(s)$.




(1) $\quad\zeta(s)=\frac{1}{1-2^{1-s}}\sum\limits_{n=0}^\infty\frac{1}{2^{n+1}}\sum\limits_{k=0}^n\frac{(-1)^k\binom{n}{k}}{(k+1)^s},\quad s\ne 1\quad\text{(see ref(1) and formula (21) at ref(2))}$



(2) $\quad\zeta(s)=\frac{1}{s-1}\sum\limits_{n=0}^\infty\frac{1}{n+1}\sum\limits_{k=0}^n\frac{(-1)^k\binom{n}{k}}{(k+1)^{s-1}}\qquad\qquad\qquad\text{(see ref(1) and formula (22) at ref(2))}$



Formula (1) above is claimed to converge for $s\ne 1$ at ref(2), but note that $\frac{1}{1-2^{1-s}}$ exhibits a complex infinity at $s=1+i\frac{2\,\pi\,j}{\log(2)}$ where $j\in \mathbb{Z}$ which seems consistent with the convergence claim at ref(1).



Question (1): Is it true that formula (1) converges for $s\ne 1+i\frac{2\,\pi\,j}{\log(2)}$ where $j\in \mathbb{Z}$ versus $s\ne 1$? Or is there an argument about zeros and poles cancelling each other out when formula (1) for $\zeta(s)$ is evaluated at $s=1+i\frac{2\,\pi\,j}{\log(2)}$ where $j\in \mathbb{Z}$, similar to the argument for the convergence of the right side of the functional equation $\zeta(s)=2^s \pi^{s-1}\sin\left(\frac{\pi\,s}{2}\right)\,\Gamma(1-s)\,\zeta(1-s)$ at positive integer values of $s$ (e.g. see Using the functional equation of the Zeta function to compute positive integer values)?







Since originally posting question (1) above, I discovered the following Wikipedia article which I believe provides some insight.



Wikipedia Article: Landau's problem with $\zeta(s)=\frac{\eta(s)}{0}$ and solutions






Formula (2) above is claimed to be globally convergent, but seems to exhibit a significant divergence (see Figure (1) below).



Question (2): Is there an error in formula (2), or is there a conditional convergence requirement associated with formula (2) when the outer series is evaluated for a finite number of terms?




ref(1): Wikipedia Article: Riemann zeta function, Representations, Globally convergent series



ref(2): Sondow, Jonathan and Weisstein, Eric W. "Riemann Zeta Function." From MathWorld--A Wolfram Web Resource.



12/10/2018 Update:



I'm now wondering if formula (2) for $\zeta(s)$ is perhaps only valid for $s\in\mathbb{Z}$.



The following plot illustrates formula (2) for $\zeta(s)$ evaluated for the first $100$ terms.










Figure (1): Illustration of Formula (2) for $\zeta(s)$






The following discrete plot illustrates formula (2) for $\zeta(s)$ minus $\zeta(s)$ where formula (2) is evaluated for the first $100$ terms in blue and the first $1000$ terms in orange.










Figure (2): Discrete Plot of Formula (2) for $\zeta(s)$ minus $\zeta(s)$




trigonometry - Finite Series - reciprocals of sines



Find the sum of the finite series
$$\sum _{k=1}^{k=89} \frac{1}{\sin(k^{\circ})\sin((k+1)^{\circ})}$$
This problem was asked in a test in my school.
The answer seems to be $\dfrac{\cos1^{\circ}}{\sin^21^{\circ}}$ but I do not know how to get it. I have tried reducing it using sum-to-product formulae, and I found the actual value numerically and it agrees well. I haven't been successful in telescoping it.


Answer




HINT:



$$\frac{1}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\frac{\sin (k+1-k)^\circ}{\sin k^\circ\sin(k+1)^\circ}$$
$$=\frac1{\sin1^\circ}\cdot\frac{\cos k^\circ\sin(k+1)^\circ-\sin k^\circ\cos(k+1)^\circ}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\left(\cot k^\circ-\cot(k+1)^\circ\right)$$



Can you recognize Telescoping series / sum?
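Carried to its conclusion (a one-line completion): the sum telescopes to
$$\frac1{\sin1^\circ}\left(\cot1^\circ-\cot90^\circ\right)=\frac{\cot1^\circ}{\sin1^\circ}=\frac{\cos1^{\circ}}{\sin^21^{\circ}},$$
since $\cot90^\circ=0$.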


Friday, October 23, 2015

Sum of Infinite series $\frac{1.3}{2}+\frac{3.5}{2^2}+\frac{5.7}{2^3}+\frac{7.9}{2^4}+......$



Prove that the sum of the infinite series $\frac{1.3}{2}+\frac{3.5}{2^2}+\frac{5.7}{2^3}+\frac{7.9}{2^4}+......$ is 23.



My approach
I expanded the general term to get

$S=\sum_1^\infty\frac{4n^2}{2^n}-\sum_1^\infty\frac{1}{2^n}$.



For $\sum_1^\infty\frac{1}{2^n}$ the answer is 1 as it forms a geometric series, but I am not able to find the solution to $\sum_1^\infty\frac{4n^2}{2^n}$.


Answer



$$\frac{1.3}{2}+\frac{3.5}{2^2}+\frac{5.7}{2^3}+\frac{7.9}{2^4}+…=\sum_{n=1}^{\infty }\frac{(2n-1)(2n+1)}{2^n}=\sum_{n=1}^{\infty }\frac{(4n^2-1)}{2^n}$$
depending on the geometric series
$$\frac{1}{1-x}=\sum_{n=0}^{\infty }x^n$$
$$(\frac{1}{1-x})'=\sum_{n=1}^{\infty }nx^{n-1}$$
$$x(\frac{1}{1-x})'=\sum_{n=1}^{\infty }nx^{n}$$
$$(x(\frac{1}{1-x})')'=\sum_{n=1}^{\infty }n^2x^{n-1}$$

$$x(x(\frac{1}{1-x})')'=\sum_{n=1}^{\infty }n^2x^{n}$$
$$4x(x(\frac{1}{1-x})')'=\sum_{n=1}^{\infty }4n^2x^{n}$$
$$4x(x(\frac{1}{1-x})')'-\frac{1}{1-x}=\sum_{n=1}^{\infty }4n^2x^{n}-\sum_{n=0}^{\infty }x^n$$
$$4x(x(\frac{1}{1-x})')'-\frac{1}{1-x}+1=\sum_{n=1}^{\infty }4n^2x^{n}-\sum_{n=1}^{\infty }x^n$$
so
$$\sum_{n=1}^{\infty }x^n(4n^2-1)=\frac{4x^2+4x}{(1-x)^3}-\frac{x}{1-x}$$
now let $x=1/2$ to get $23$
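A numeric check (a hypothetical snippet):

total = sum((2 * n - 1) * (2 * n + 1) / 2**n for n in range(1, 60))
print(total)  # ~23.0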


linear algebra - The arithmetic-geometric mean for symmetric positive definite matrices

A while back, I wanted to see if the notion of the arithmetic-geometric mean could be extended to a pair of symmetric positive definite matrices. (I considered positive definite matrices only since the notion of the matrix square root is a bit intricate for other kinds of matrices.)



I expected that some complications would arise since, unlike scalar multiplication, matrix multiplication is noncommutative. Another complication would be that the product of two symmetric matrices need not be symmetric (though the positive definiteness is retained, so one can still speak of a principal matrix square root).




By analogy with the scalar AGM, I considered the iteration



$$\mathbf A_0=\mathbf A \; ; \; \mathbf B_0=\mathbf B$$ $$\mathbf A_{i+1}=\frac12(\mathbf A_i+\mathbf B_i) \; ; \; \mathbf B_{i+1}=\sqrt{\mathbf A_i \mathbf B_i}$$



I cranked up a short Mathematica routine:



matAGM[u_, v_] := First[FixedPoint[
{Apply[Plus, #]/2, MatrixPower[Apply[Dot, #], 1/2]} &, {u, v}]] /;
MatrixQ[u, InexactNumberQ] && MatrixQ[v, InexactNumberQ]



and decided to try it out on randomly generated SPD matrices.



(A numerical note: Mathematica uses the numerically stable Schur decomposition in computing matrix functions like the matrix square root.)



I found that for all of the randomly generated pairs of SPD matrices I tried, the process was convergent (though the rate of convergence is apparently not as fast as the scalar AGM). As expected, the order matters: matAGM[A, B] and matAGM[B, A] are usually not equal (and both results are unsymmetric) unless A and B commute (for the special case of diagonal A and B, the result is the diagonal matrix whose entries are the arithmetic-geometric means of the corresponding entries of the pair.)
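For readers without Mathematica, the same experiment can be sketched in Python (a rough equivalent, not the original routine; scipy.linalg.sqrtm computes the principal matrix square root):

import numpy as np
from scipy.linalg import sqrtm

def mat_agm(a, b, iters=60):
    # the "naive" matrix AGM: arithmetic mean and sqrt of the noncommutative product
    for _ in range(iters):
        a, b = (a + b) / 2, sqrtm(a @ b).real  # .real drops numerical dust
    return a

rng = np.random.default_rng(0)
m = rng.standard_normal((3, 3)); A = m @ m.T + 3 * np.eye(3)  # random SPD
m = rng.standard_normal((3, 3)); B = m @ m.T + 3 * np.eye(3)  # random SPD
print(np.max(np.abs(mat_agm(A, B) - mat_agm(B, A))))  # nonzero: order matters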



I now have three questions:





  1. How do I prove or disprove that this process converges for any pair of SPD matrices? If it is convergent, what is the rate of convergence?


  2. Is there any relationship between matAGM[A, B] and matAGM[B, A] if the two matrices A and B do not commute?


  3. Is there any relationship between this matrix arithmetic-geometric mean and the usual scalar arithmetic-geometric mean? Would, say, arithmetic-geometric means of the eigenvalues of the two matrices have anything to do with this?







(added 8/12/2011)




More digging around has me convinced that I should indeed be considering the formulation of the geometric mean by Pusz and Woronowicz:



$$\mathbf A\#\mathbf B=\mathbf A^{1/2}(\mathbf A^{-1/2}\mathbf B\mathbf A^{-1/2})^{1/2}\mathbf A^{1/2}$$



as more natural; the proof of convergence is then simplified, as shown in the article Willie linked to. However, I'm still wondering why the original "unnatural" formulation seems to be convergent (or else, I'd like to see a pair of SPD matrices that cause trouble for the unnatural iteration). I am also interested in how elliptic integrals might crop up in here, just as they did for the scalar version of the AGM.

Linear Algebra and Matrix Theory



Given the matrix $A$ listed below as a matrix over the field $\Bbb Z_5=\{0,1,2,3,4\}$, find the row reduced echelon form $B$ of $A$. List the elementary matrices used to reduce $A$ to $B$.
$$A=\pmatrix{1 &2 &0 &3 \\ 2 &4 &1 &1 \\ 2 & 4 &0 &1 \\}$$




I am able to get the matrix into reduced row echelon form; the problem is that the way I reduce matrix $A$ always makes it so my elementary matrices are not in the field. Please help me.



One way I tried was $$R_2 \leftarrow R_2-R_3$$ and $$R_3 \leftarrow 2R_1-R_3$$ which gets the matrix into reduced row echelon form but puts the elementary matrices outside of the field.


Answer



$$\begin{pmatrix}1&2&0&3\\2&4&1&1\\2&4&0&1\end{pmatrix}\stackrel{R_2+3R_1}{\stackrel{R_3+3R_1}\longrightarrow}\begin{pmatrix}1&2&0&3\\0&0&1&0\\0&0&0&0\end{pmatrix}$$



and we're done working over $\,\Bbb F_5:=\Bbb Z/5\Bbb Z\,$. Can you now list the elementary matrices used, even if the first (and only) operation used is subdivided into two steps?
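For instance (one way to spell out that list; my illustration): the operations $R_2+3R_1$ and $R_3+3R_1$ correspond to
$$E_1=\begin{pmatrix}1&0&0\\3&1&0\\0&0&1\end{pmatrix},\qquad E_2=\begin{pmatrix}1&0&0\\0&1&0\\3&0&1\end{pmatrix},$$
both with entries in $\Bbb F_5$, and $B=E_2E_1A$.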


Thursday, October 22, 2015

elementary set theory - Prove that $\mathbb{Q}$ and $\mathbb{Q} \cup \{\pi, e\}$ have the same cardinality




Prove that $\mathbb{Q}$ and $\mathbb{Q} \cup \{\pi, e\}$ have the same cardinality.





I know I must show that there exists a bijection between these two sets but I'm having a difficult time trying to come up with a function that relates them. Any suggestions? Thanks.


Answer



Hint: Let $f: \mathbb{Q} \to \mathbb{Q} \cup \{\pi, e\}$ be the function
$$
f(x) = \begin{cases}
\pi&\text{if } x = 0 \\
e &\text{if } x = 1 \\
x-2 &\text{if } x \in \{2, 3, 4, 5, 6, \ldots\} \\
x &\text{otherwise}.
\end{cases}
$$


calculus - Proving a property of piecewise continuous functions




How to prove the following problem:



Suppose $f \in PC(a,b)$, where $PC(a,b)$ means the set of piecewise continuous functions on the interval $[a,b]$ and $f(x) = \frac{1}{2}[f(x-) +f(x+)]$ for all $x \in (a,b)$. Show that if $f(x_0) \neq 0$ at some point $x_0 \in (a,b)$, then $f(x) \neq 0$ for all $x$ in some interval containing $x_0$. ($x_0$ may be an endpoint on the interval).


Answer



Hint: There is a closed interval $I$ containing $x_0$ on which $f$ is continous. Use continuity of $f$ in $I$ to prove that values near $x_0$ produce function values near $f(x_0)$, therefore away from $0$.


elementary set theory - Size of infinite sets equality #$A$=#($Acup B$) and counterexample for $Acap Bneemptyset$

Let $A$ be an infinite set and $B$ a countable set and let them be disjoint.



Then there exists an injection $j:\Bbb N\hookrightarrow{} A$.



We can call its image $C:=j(\Bbb N)\subset A$. Then we have that $\Bbb N\cong C$ ($C$ is equipotent $i.e.$ there is a bijection between the two sets).



Since $B$ is countable and $C$ is countable and infinite, $B\cup C$ is countable and infinite.



So $B\cup C\cong \Bbb N\cong C\implies\exists \varphi:B\cup C\xrightarrow{\text{bij}} C$




Now I have two questions:




  1. Why is the function defined as $$\psi:A\cup B\rightarrow A\ , x\rightarrow\begin{cases}\varphi(x)\ \text{if}\ x\in C\cup B\\ x\ \text{if}\ x\in A\setminus C\end{cases}$$ bijective?


  2. Is there an example of $A,B$ that are not disjoint and $A\cup B \ncong A$?


calculus - How to prove $\lim_{x\to\infty}\frac{x^{99}}{e^x}=0$?





How to prove $\lim_{x\to\infty}\frac{x^{99}}{e^x}=0$?




I wasn't sure how to do this because both the top and the bottom independently tend to infinity!


Answer



After successively applying L'Hopital's rule $99$ times, you get:




$$\lim_{x\to\infty} {x^{99} \over e^x} = \lim_{x\to\infty} {99! \over e^x} = 0$$



This is due to the fact that exponentials always grow faster than polynomials. $e^x$ will eventually overcome $x^{99}$.
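For a proof that avoids repeated L'Hopital (a standard bound, added here for completeness): for $x>0$ every term of the exponential series is positive, so $e^x>\frac{x^{100}}{100!}$, and therefore
$$0<\frac{x^{99}}{e^x}<\frac{100!}{x}\longrightarrow 0\quad\text{as } x\to\infty.$$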


calculus - trigonometric identity related to $\sum_{n=1}^{\infty}\frac{\sin(n)\sin(n^{2})}{\sqrt{n}}$


As homework I was given the following series to check for convergence:




$ \displaystyle \sum_{n=1}^{\infty}\dfrac{\sin(n)\sin(n^{2})}{\sqrt{n}} $



and the tip was "use the appropriate identity".


I'm trying to use Dirichlet's test and show that it's the product of a monotone null sequence and a series with bounded partial sums, but I can't figure out which trig identity is needed.


Can anyone point me towards the right direction?


Many thanks.


Answer



Hint: You can show that $$ \sum\limits_{n=1}^N\sin(n)\sin(n^2)=\frac{1}{2}(1-\cos(N^2+N)) $$ To do this use identity $$ \sin(\alpha)\sin(\beta)=\frac{1}{2}(\cos(\alpha-\beta)-\cos(\alpha+\beta)) $$
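Spelling the hint out: the identity gives $\sin(n)\sin(n^2)=\frac12\left(\cos(n^2-n)-\cos(n^2+n)\right)$, and since $(n+1)^2-(n+1)=n^2+n$, consecutive terms cancel, leaving the bounded partial sums $\frac{1}{2}(1-\cos(N^2+N))$; Dirichlet's test with the monotone null sequence $1/\sqrt n$ then yields convergence.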


Wednesday, October 21, 2015

probability - Expected time to roll all 1 through 6 on a die



What is the average number of times one would have to roll a fair 6-sided die to get all the numbers on the die? The order in which the numbers appear does not matter.




I had this question explained to me by a professor (not a math professor), but the explanation was not clear. We were given the answer $(1-(\frac56)^n)^6 = .5$, i.e. $n = 12.152$



Can someone please explain this to me, possibly with a link to a general topic?


Answer



The time until the first result appears is $1$. After that, the random time until a second (different) result appears is geometrically distributed with parameter of success $5/6$, hence with mean $6/5$ (recall that the mean of a geometrically distributed random variable is the inverse of its parameter). After that, the random time until a third (different) result appears is geometrically distributed with parameter of success $4/6$, hence with mean $6/4$. And so on, until the random time of appearance of the last and sixth result, which is geometrically distributed with parameter of success $1/6$, hence with mean $6/1$. This shows that the mean total time to get all six results is
$$\sum_{k=1}^6\frac6k=\frac{147}{10}=14.7.$$







Edit: This is called the coupon collector problem. For a fair $n$-sided die, the expected number of attempts needed to get all $n$ values is
$$n\sum_{k=1}^n\frac1k,$$ which, for large $n$, is approximately $n\log n$. This stands in contrast with the mean time needed to complete only some proportion $cn$ of the whole collection, for some fixed $c$ in $(0,1)$, which, for large $n$, is approximately $-\log(1-c)n$. One sees that most of the $n\log n$ steps needed to complete the full collection are actually spent completing the last one per cent, or the last one per thousand, or the last whatever percentage of the collection.
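The expectation and a simulation side by side (a hypothetical snippet):

import random

exact = 6 * sum(1 / k for k in range(1, 7))  # 14.7
trials, total = 10_000, 0
for _ in range(trials):
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(random.randint(1, 6))
        rolls += 1
    total += rolls
print(exact, total / trials)  # simulation hovers around 14.7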


Basic question on Number Theory and Divisibility

Prove or disprove that if $a\mid(sb + tc)$ for all integers $s,t$, then $a\mid b$ and $a\mid c$.



My question is "for all". I'm clearly misunderstanding something, because my intuition is to say, well this is obviously not true for every integer $s,t$ and I can provide a counter-example. However, what I just suggested to do should be done if it said "there exists $s,t$ that are elements of integers".



Can someone walk me through it?

Is a decimal with a predictable pattern a rational number?

I'm starting as a private Math tutor for a high school kid; in one of his Math Laboratories (that came with an answer sheet) I was stumped by an answer I encountered in the True or False section (I'm certain it should've been a False):




The number 4.212112111211112... is a rational number.




I've been searching through several threads and search results, but I haven't found anything that confirms or denies this.




My reasoning to answer 'False' is that, since the pattern is non-terminating and will never repeat, it must be an irrational number; granted, there is a predictable pattern ... but it is not repeating.



Am I wrong? I just want to make sure I give this kid the correct answer.

geometry - Maximum relative quotient of radii of circles surrounding other circle touching neighbors exactly once?

Assume we have one circle with radius $R$ centered at origo $(0,0)$.



How big can the circles of radius $r$ be that we put around it, each touching the central circle and each of its neighbors in one and only one point, if we want to fit $N$ such circles?






The only special case I know (own work) is $r=R$ when $N=6$.



If iterated, it gives the famous hexagonal lattice of $2$-dimensional space.




[image: the $N=6$, $r=R$ configuration]



Here is an example for $N=6$.
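For general $N\ge 3$ one can pin down $r$ with elementary trigonometry: the centers of two neighboring outer circles lie at distance $R+r$ from the origin and subtend an angle $2\pi/N$ there, and tangency of neighbors makes those centers $2r$ apart, so
$$\sin\frac{\pi}{N}=\frac{r}{R+r},\qquad\text{i.e.}\qquad r=\frac{R\sin(\pi/N)}{1-\sin(\pi/N)},$$
which indeed gives $r=R$ for $N=6$.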

algebra precalculus - How to find $x$ given $\log_{9}\left(\frac{1}{\sqrt3}\right) = x$ without a calculator?



I was asked to find $x$ when: $$\log_{9}\left(\frac{1}{\sqrt3}\right) =x$$
Step two may resemble:
$${3}^{2x}=\frac{1}{\sqrt3}$$



I was not allowed a calculator but was told that it was possible. I later put it into a calculator and found that $x=-0.25$; but how do you get that by hand?



Answer



You are asked to find $\displaystyle \log_{9}\big(\frac{1}{\sqrt3}\big)$, which means a number $x$ such that
$$9^x = \frac{1}{\sqrt3}$$



At this point, it is natural to express everything as powers of $3$. (I admit that for this to come naturally takes some skill and experience: for example, you need to be able to see that $9 = 3^2$ and that $\sqrt{3} = 3^{1/2}$.) So:



$$3^{2x} = 3^{-1/2}$$



At this point it is a simple equation:




$$2x = -\frac12,$$



or $$x = -\frac14.$$


calculus - How do I evaluate the following integral $\int_{-\infty}^{\infty} e^{-\sigma^2 x^2/2}\,\mathrm dx$?

How do I evaluate the following integral $$\int_{-\infty}^{\infty} \exp\left(-\frac{\sigma^2 x^2}{2}\right) \mathrm dx\;?$$


How is it even possible to find an antiderivative?



The integral is evaluated "silently" in a book leading to a theorem.


Using Wolfram Alpha (after trying to evaluate on my own) I get


[image: Wolfram Alpha's output]


and this is not what I want, since at my level we've never worked with such a function.


Hoping someone can clarify.
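For reference, the classical way around the missing elementary antiderivative is to square the integral and pass to polar coordinates. Writing $I=\int_{-\infty}^{\infty}e^{-\sigma^2x^2/2}\,dx$ (with $\sigma>0$),
$$I^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\sigma^2(x^2+y^2)/2}\,dx\,dy=\int_0^{2\pi}\!\int_0^{\infty}e^{-\sigma^2r^2/2}\,r\,dr\,d\theta=2\pi\cdot\frac{1}{\sigma^2},$$
so $I=\frac{\sqrt{2\pi}}{\sigma}$, and no elementary antiderivative of the integrand is ever needed.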

Tuesday, October 20, 2015

calculus - Does calculating the limit of a given real-valued (recursive) sequence already imply its convergence?



Assuming we are given a real-valued recursive sequence $(a_n)_{n \in \mathbb{N}}$ by its starting point $a_1$ and its recursive function $a_{n+1} = \varphi(a_n)$



EDIT: Since this question caused some confusion, or I failed to be clear, I'd like to add the recursive sequence:



$$a_1 = \sqrt{2},\ a_{n+1} = \sqrt{2a_n}$$



In order to prove the convergence of $(a_n)_{n}$, I would try to prove boundedness and monotonicity (if that's the case), and then I would try to compute its limit.




However, a few days ago I was asked whether it would not be sufficient to simply compute $$\lim_{n \to \infty} a_n$$



without even proving whether or not $(a_n)_n$ is convergent at all.



The suggestion was that if we could find an $a$ for which $$ \lim_{n \to \infty} a_n = a$$ holds,
we could immediately conclude that the given sequence was convergent in the first place.



I was quite sceptical, but I failed to provide a reasonable answer as to why that would not be sufficient. My first reaction was that such an $a$ might not be the limit but only an accumulation point. But I'm not sure whether that's the case or whether I'm completely mistaken.




Can you possibly help me with an answer to that question? Can I simply compute $\lim_{n \to \infty} a_n = a$ prior to proving the existence of such an $a$ at all?



This is all restricted to $\mathbb{R}$ but any general answer is highly appreciated as well.



Thank you very much for any help.


Answer



Take the sequence $(a_n)$ defined by $a_0=0$ and $a_n=2a_{n-1}+1$. If you assume that the limit $L=\lim_{n\to\infty}a_n$ exists, then taking the limit on both sides of the recursion gives $L=2L+1$, i.e. $L=-1$. But here $a_n=2^n-1$, which diverges to $+\infty$: the algebra only tells you what the limit must equal if it exists. Does this answer your question?


sequences and series - Asymptotics of partial exponential sum $\sum\limits_{k=0}^{an} \frac{n^k}{k!}$



The question Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$ has been asked many times here, and there are different approaches for answering it.




Motivated by this other question, I'm interested in how to attack the modified problem where the number of terms grows as a fraction of $n$. I.e. the behaviour of



$$S(n;a) = \sum\limits_{k=0}^{an} \frac{n^k}{k!}$$



for large $n$, with fixed $a \in (0,1)$ (the upper limit of the summation is understood to be taken as the nearest integer).



Or more specifically,



$$ \lim_{n\to \infty} \left(\sum\limits_{k=0}^{an} \frac{n^k}{k!} \right)^{1/n}$$



Answer



Ok, here's my (not very rigorous) try. Other approaches or refinements are welcomed.



Let's change notation $m = a n$, $x = n = mb$ with $b = 1/a$.



Then $$e^x = \sum_{k=0}^m \frac{x^k}{k!} + R_m(x) \tag1$$



with the remainder:



$$
\begin{align}
R_m(x) &= \int_0^x \frac{(x-t)^m}{m!} e^t \,dt\\
&=\frac{e^x}{m!}\int_0^x y^m e^{-y} \,dy \qquad (y = x-t)\\
&=\frac{e^x}{m!} \left(m!- \int_x^\infty y^m e^{-y} \,dy\right) \tag2
\end{align}
$$



Because $b>1$, we can approximate the complement of the gamma integral by abusing Laplace's method. Namely, for a differentiable positive decreasing function (more generally, a function that has its global maximum at the start of the interval of integration) and for $m\to \infty$ we approximate



$$ \int_c^\infty e^{m h(x)}dx\approx \int_c^\infty e^{m [h(c) + h'(c)(x-c)]}dx=\frac{e^{m \, h(c)}}{m \,|h'(c)|} \tag{3}$$




Then we can write
$$
\int_x^\infty y^m e^{-y} \,dy =\int_x^\infty e^{m (\log(y)-y/m)}\,dy \approx x^m e^{-x} \frac{b}{b-1} \tag 4
$$



Actually we are abusing the method here because our $h(\cdot)$ also depends on $m$ - this would need some justification. Passing over this and putting it all together:



$$\begin{align}
\sum_{k=0}^m \frac{x^k}{k!} &= e^x - R_m(x) = \frac{e^x}{m!}\int_x^\infty y^m e^{-y}\,dy\\
& \approx \frac{x^m}{m!} \frac{b}{b-1} \tag{5}\\
&= \frac{n^{an}}{(an)!} \frac{1}{1-a} \\
& \approx \left(\frac{e}{a}\right)^{an} \frac{1} {(1-a) \, \sqrt{ 2 \pi a} \, \sqrt{n}} \tag{6}
\end{align}
$$
(the last step is Stirling's approximation for $(an)!$)



Finally



$$\lim_{n\to \infty} \left(\sum\limits_{k=0}^{an} \frac{n^k}{k!} \right)^{1/n}= \left(\frac{e}{a}\right)^a \tag 7$$







Added: As @Maxim rightly comments, if we are interested in correcting the rounding (when $m$ in $(5)$ is not an integer we round down to the nearest integer), we should multiply $(6)$ by the correction factor $a^{\{an\}}$, where $\{\cdot\}$ denotes the fractional part. Of course, this correction is asymptotically negligible ($O(1)$) and does not change the limit $(7)$.
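As a numerical sanity check of $(7)$, here is a minimal Octave sketch (the choices $a=1/2$ and $n=400$ are arbitrary; convergence in $n$ is slow because of the algebraic prefactors in $(6)$):

a = 0.5; n = 400;
k = 0:floor(a*n);
S = sum(exp(k*log(n) - gammaln(k + 1)));   % n^k/k!, computed via logs to avoid overflow
[S^(1/n), (e/a)^a]                          % the two values should be close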


sequences and series - Blocks of Pyramid Pattern Expression




There is a pattern below, and I am trying to find an algebraic expression for it.



Each layer (from the top).




Diagram.



[image: diagram of the layered pyramid]



So the first layer has 1, second has 4, third has 9, and the fourth has 16.



That's how the sequence is increasing.



What I'm looking for is,




When the second layer is added with the first layer,



Third layer is added with the second and first,



Fourth is added with third,second and first.



So something like this.



[image]




[image]



I am trying to find the algebraic expression for this pattern.



Any ideas??



Thank you


Answer



There is a well-known formula for the sum of the first $n$ squares, but I don't want to spoil your investigation, so I will give you some hints.




First, compute some more terms of the sequence. Three or four more should do.



Multiply all the terms by six, and factor the results. Notice that all of them are multiples of $n$ and $n+1$; a worked instance follows below.
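For instance, the first five totals are $1,5,14,30,55$; multiplying by six gives
$$6,\;30,\;84,\;180,\;330\;=\;1\cdot2\cdot3,\;\;2\cdot3\cdot5,\;\;3\cdot4\cdot7,\;\;4\cdot5\cdot9,\;\;5\cdot6\cdot11,$$
and the pattern of the third factor should now suggest itself.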


elementary number theory - Not understanding Simple Modulus Congruency



Hi this is my first time posting on here... so please bear with me :P




I was just wondering how I can solve something like this:



$$25x \equiv 3 \pmod{109}.$$



If someone can give a break down on how to do it would be appreciated (I'm a slow learner...)!



Here is proof that I've attempted:




1. Using the definition of congruence, we can rewrite $$25x \equiv 3 \pmod{109}$$ as $25x = 3 + 109y$ (for some integer $y$), which rearranges to $25x - 109y = 3$.



2. We use the extended Euclidean algorithm (I'm not sure about this part; I keep messing things up), so this is where I'm stuck.




Thanks!


Answer



The extended Euclidean algorithm is used to find $x$ and $y$ such that $ax + by = \gcd(a,b)$.



In our case $a = 109$ and $b = 25$.



So we start as follows.




Find remainder and quotient when we divide $109$ by $25$ and write the remainder on the left hand side.



So we get



$$9 = 109 - 25\cdot 4.$$



Now we get two new numbers $25$ and $9$. Write the remainder on the left hand side again.



$$7 = 25 - 9\cdot 2.$$




So we have two new numbers, 9 and 7.



In the extended algorithm, we substitute the formula for $9$ from the first step:



$$7 = 25 - (109 - 25\cdot 4)\cdot 2 = 25\cdot 9 - 109\cdot 2.$$



Now



$$2 = 9 - 7\cdot 1 = (109-25\cdot 4) - (25\cdot 9 - 109\cdot 2) = 109\cdot 3 - 25\cdot 13.$$



Now write



$$1 = 7 - 3\cdot 2 = (25\cdot 9 - 109\cdot 2) - 3\cdot(109\cdot 3 - 25\cdot 13) = 25\cdot 48 - 109\cdot 11.$$



Thus $25x - 109y = 1$ for $x = 48$ and $y = 11$.



So $25x - 109y = 3$ for $x = 48\cdot 3 = 144$ and $y = 11\cdot 3 = 33$.



Therefore $144\cdot 25 \equiv 3 \pmod{109}$.




If you need a number $ \le 109,$



$144 = 109 + 35$.



So we have $(109+35)\cdot 25 \equiv 3 \pmod{109}$,



which implies $35\cdot 25 \equiv 3 \pmod{109}$.



Thus $x = 35$ is a solution to your equation, which we found using the extended Euclidean algorithm.
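If you want to double-check the arithmetic, here is a minimal Octave sketch of the same computation (the variable names are my own):

a = 109; b = 25;
[r0, r1] = deal(a, b);
[t0, t1] = deal(0, 1);            % running coefficients of b
while r1 ~= 0
  q = floor(r0 / r1);
  [r0, r1] = deal(r1, r0 - q*r1); % Euclidean division step
  [t0, t1] = deal(t1, t0 - q*t1); % update the Bezout coefficient
end
% now r0 = gcd(109, 25) = 1 and t0 = 48 is the inverse of 25 mod 109
x = mod(3 * t0, 109)              % prints x = 35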




Hope that helps.


trigonometry - $\sum \cos$ when angles are in arithmetic progression




Prove $$\cos(\alpha) + \cos(\alpha + \beta) + \cos(\alpha + 2\beta) + \dots + \cos[\alpha + (n-1)\beta] = \frac{\cos(\alpha + \frac{n-1}{2}\beta) \cdot \sin\frac{n\beta}{2}}{\sin\frac{\beta}{2}} $$
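A sketch of the standard approach: the left-hand side is the real part of a geometric sum,
$$\sum_{k=0}^{n-1}e^{i(\alpha+k\beta)}=e^{i\alpha}\,\frac{e^{in\beta}-1}{e^{i\beta}-1}=e^{i\left(\alpha+\frac{n-1}{2}\beta\right)}\,\frac{\sin\frac{n\beta}{2}}{\sin\frac{\beta}{2}},$$
where the last equality comes from factoring $e^{in\beta/2}$ out of the numerator and $e^{i\beta/2}$ out of the denominator. Taking real parts yields the claimed identity.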

Monday, October 19, 2015

real analysis - Bounding the Series $\sum_{k=1}^\infty k^a \frac{1}{k(k+1)}$



Let $0 < a < 1$. I'm trying to figure out whether the following series converges:



$$\sum_{k=1}^\infty k^a \frac{1}{k(k+1)}.$$




Now it's clear that if $a$ were greater than or equal to $1$ then this series would diverge since



$$\sum_{k=1}^\infty k^1 \frac{1}{k(k+1)} = \sum_{k=1}^\infty \frac{1}{(k+1)} = \sum_{k=2}^\infty \frac{1}{k} = \infty.$$



So this makes it a bit hard to think of a bound for the series in question. Any advice?


Answer



$$0\leq \frac{k^a}{k(k+1)}\leq \frac 1{k^{2-a}},$$
and $2-a>1$, so the series converges by comparison with a convergent $p$-series.


Sunday, October 18, 2015

matrices - Is there a way to extract the diagonal from a matrix with simple matrix operations



I have a square matrix $A$. Is there a way I can apply operations like addition, subtraction, matrix multiplication, matrix inverse and transpose to get the diagonal of the matrix? For example, having:
$$\begin{pmatrix}1&2\\3&4\end{pmatrix}$$
I would like to get $(1,4)$.




P.S. based on the conversation with mvw, here is a better description:



I am on board an alien spaceship and the onboard computer allows only matrix operations, but access to the individual matrix elements is blocked. I can only use addition, subtraction, matrix multiplication, matrix inverse and transpose; no access to an individual row, column, or element. I can only create matrices of dimension $1\times n$, $n\times 1$, or $n\times 2n$ that have all zeros or all ones. Is there a way for me to get the diagonal as a vector?


Answer



Note: This solution does not work for the updated question.
$$
D = \text{diag}(a_{11}, \ldots, a_{nn}) = \sum_{i=1}^n P_{(i)} A P_{(i)}
$$
where $P_{(i)}$ is the projection on the $i$-th coordinate:

$$
(P_{(i)})_{jk} = \delta_{ij} \delta_{jk} \quad (i,j,k \in \{1,\ldots,n\})
$$
and $\delta$ is the Kronecker delta ($1$ for same index values, otherwise $0$).



Transforming the diagonal matrix $D$ into a row vector can be done by
$$
d = u^T D
$$
where each of the $n$ components of $u$ is $1$.

$$
u = (1,1,\ldots,1)^T
$$
Combining both gives
$$
d = \sum_i u^T P_{(i)} A P_{(i)} = \sum_i e_i^T A P_{(i)}
$$
where $e_i$ is the $i$-th canonical base vector.



Example:




octave> A, P1, P2, u
A =
1 2
3 4

P1 =
1 0
0 0


P2 =
0 0
0 1

u =
1
1

octave> u'*(P1*A*P1+P2*A*P2)
ans =

1 4

calculus - Solving inequality involving square root and division by logarithm: $\sqrt n<\frac{n}{\log(n)}-2$




I would like to solve the inequality $\sqrt n<\frac{n}{\log(n)}-2$; for some reason I had never done this before. This is clearly the same as $\frac{n}{\log(n)}-\frac{n}{\sqrt{n}}>2$, which is the same as $\frac{\sqrt n\,(\sqrt n-\log(n))}{\log(n)}>2$. But here I get stuck. Thanks in advance.


Answer



It doesn’t look like it has a closed-form solution, other than in terms of the two solutions of $\frac{\sqrt n\,(\sqrt n-\log(n))}{\log(n)}{\color{blue}=}2$.



Solving numerically with Mathematica, the inequality holds for $n$ between $1$ and the smaller of the two solutions, and again for $n>18.138863\ldots$.



[image: plot of $\frac{\sqrt n\,(\sqrt n-\log(n))}{\log(n)}$ against the constant level $2$]



The calculations here show that Mathematica finds no closed form for the two intersection points and denotes them simply as the points where $\frac{\sqrt n\,(\sqrt n-\log(n))}{\log(n)}=2$.




I don’t know what numerical technique Mathematica used, but the function is well behaved near its two solutions, so no fancy method beyond guessing and checking is needed. For the larger solution, for example, you could try $n=18$ based on the graph, see that the curve lies below $y=2$, then try $n=18.5$, see that it lies above, then $n=18.2$, and so on.



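A bisection search makes that guess-and-check systematic; here is a minimal Octave sketch for the larger solution (assuming the natural logarithm, as in Mathematica's Log; the bracket $[10,30]$ is an arbitrary choice with a sign change):

f = @(n) n ./ log(n) - 2 - sqrt(n);  % positive exactly where the inequality holds
lo = 10; hi = 30;                    % f(10) < 0 < f(30)
for i = 1:60
  mid = (lo + hi) / 2;
  if f(mid) > 0
    hi = mid;                        % root is below mid
  else
    lo = mid;                        % root is above mid
  end
end
mid                                  % approximately 18.1388...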


pi - This proof is wrong. How? (See image in link below)




Came across this on the internet. I have some ideas regarding it, but I wanted to see more lines of reasoning.
Proof of Pi


Answer



Fundamentally, you have jumped in without a definition of the length of an arc.



The circumference isn't approximated by the sum of lengths of the lines drawn as shown in the image, but by the sum of the hypotenuses of the right-angled triangles formed around the edge of the circle.
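In symbols: over one small step the staircase contributes $\Delta x+\Delta y$ to the length, while the inscribed chord contributes $\sqrt{\Delta x^2+\Delta y^2}\le\Delta x+\Delta y$, with strict inequality whenever both increments are nonzero. That gap does not vanish in the limit, which is why the staircase perimeter stays at $4$ while the circumference of the circle of diameter $1$ is $\pi$.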


Saturday, October 17, 2015

real analysis - Graphical explanation of the difference between $C^1$ and $C^2$ function?


We are all aware of the intuitive (graphical) explanation of the concepts of continuous and differentiable functions. Whenever these two concepts are formally defined, the following elementary explanations are given:



A continuous function is a function whose graph has no "holes" or "jumps", and a differentiable function is a function whose graph has no "corners".



This is a non-continuous function:

[image]


This is a non-differentiable continuous function:

[image]


And this is a differentiable continuous function:



[image]


Is there a "graphical" or intuitive explanation of the difference between a $C^1$ function and a differentiable function with discontinuous derivatives? What about a function that is $C^1$ but not $C^2$ because it does not have second derivatives? Or what about a function that is $C^1$ and has second derivatives but they are not continuous? What about the difference between a $C^1$ function and a $C^{\infty}$ function?


Answer



In general the differences are very subtle, and I don't know of any good way to visualize them. For instance, here is an example of a function that is $C^2$ but not three times differentiable at $x=0$: the function $y=|x|^3$. [image] (It does not have a third derivative at $x=0$ because the second derivative is $6|x|$, which has a corner at $0$.) Can you tell visually that this function does not have a third derivative at $x=0$? I can't, although this could just be a poor example.


Hopefully you're familiar with the famous example of the Weierstrass function, which is $C^0$ but nowhere differentiable: [image] It's visually clear, I think, that this function is not differentiable anywhere: the surface is too rough.


But it gets harder when you look at the next level. By integrating the Weierstrass function, we obtain a function that is $C^1$ but nowhere twice differentiable: [image] It looks smooth enough to be differentiable, but I wouldn't know how to tell if the second derivative exists just by looking at the picture.


In summary, I think that our visual intuitions about graphs are too imprecise to fully capture the mathematical notion of smoothness.


calculus - Is it possible to solve this limit without l'Hôpital / Taylor / derivatives: $\lim\limits_{x \to 0} \frac{x-\sin(x)}{x^3} = \frac{1}{6}$?

It's simple to prove with l'Hôpital's rule that



$$ \lim_{x \to 0} \frac{x-\sin(x)}{x^3} = \frac{1}{6}$$



Is it possible to solve this limit without l'Hôpital or Taylor (without derivatives)?
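One classical elementary route, granted that the limit $L$ exists (which needs a separate argument, e.g. a squeeze): substitute $x\mapsto 3x$ and use $\sin 3x=3\sin x-4\sin^3x$:
$$L=\lim_{x\to0}\frac{3x-\sin 3x}{(3x)^3}=\lim_{x\to0}\left(\frac{3(x-\sin x)}{27x^3}+\frac{4\sin^3x}{27x^3}\right)=\frac{L}{9}+\frac{4}{27},$$
so $\frac{8}{9}L=\frac{4}{27}$ and $L=\frac16$.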

linear algebra - Eigenvalue decomposition of $A = I - xx^T$




Let $A = I - xx^T$, where $x \in \mathbb{R}^n$ and $I$ is the identity matrix of $\mathbb{R}^n$



We know that $A$ is a real symmetric matrix, therefore there exists an eigenvalue decomposition of $A$ such that




$$A = Q^T\Lambda Q$$



Is it possible to find $Q$, $\Lambda$?



$I - xx^T = Q^TQ - xQ^TQx^T = Q^TQ - (Q^Tx)^T(x^TQ)^T...$


Answer



Consider
$$
Ax=(I-xx')x=(1-x'x)x
$$

so $x$ itself is an eigenvector with eigenvalue $1-x'x$. In fact, if $v$ is an eigenvector with some eigenvalue $\alpha$, we have
$$
\alpha v=Av=(I-xx')v=v-(x'v)x\implies(1-\alpha)v=(x'v)x.
$$
This means if $\alpha\neq 1$, then $v$ is proportional to $x$ so in fact $v$ is an eigenvector with eigenvalue $1-x'x$. If $\alpha=1$, then $x'v=0$. Conversely, if $v'x=0$, then $v$ is an eigenvector with eigenvalue $1$:
$$
Av=(I-xx')v=v-(x'v)x=v.
$$
Conclusion: $I-xx'$ has eigenvalues $1-x'x$ and $1$ where $1$ has multiplicity $n-1$. The eigenvectors for $1-x'x$ are parallel to $x$ and the eigenvectors of $1$ are any vector in the space orthogonal to the space spanned by $x$. So you can take $Q'=\Big(\frac{x}{|x|}\;r_1\;\cdots\;r_{n-1}\Big)$ where each $r_i$ is $n\times 1$ and $\{r_1,\ldots,r_{n-1}\}$ is some orthonormal basis of $[\text{span}(x)]^\perp$.
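A quick numerical illustration in Octave (the vector $x$ here is an arbitrary choice):

x = [1; 2; 3];
A = eye(3) - x*x';
eig(A)   % returns -13, 1, 1: the eigenvalue 1 - x'*x = -13 once and 1 twice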


real analysis - Is there a way to show the sum of any different square root of prime numbers is irrational?

Is there a way to show that the sum of the square roots of any distinct prime numbers is irrational? For example, $$\sqrt2+\sqrt3+\sqrt5 +\sqrt7+\sqrt{11}+\sqrt{13}+\sqrt{17}+\sqrt{19}$$ should be an irrational number.



One approach I tried is to let the sum be a root of an even polynomial $f(x)$ with integer coefficients, and to prove by induction that this can be maintained when another term $\sqrt{p_{k+1}}$ is added: the new polynomial can be written as $$f(x+\sqrt{p_{k+1}})\,f(x-\sqrt{p_{k+1}}),$$



where $$f(x\pm\sqrt{p_{k+1}})=P(x)\pm Q(x)\sqrt{p_{k+1}},$$



with $P(x)$ an even polynomial and $Q(x)$ an odd polynomial.




The new polynomial can be written as $$P^{2}(x)- Q^{2}(x)p_{k+1}.$$



If it had a rational solution $a$, we would need $$P(a)=Q(a)=0.$$



My calculation stopped here, since I can't derive any contradiction from this. Can anyone continue this proof, or suggest a better way to solve it? Thanks!
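For reference, one standard route that avoids the polynomial bookkeeping: for distinct primes $p_1,\dots,p_k$ the extension $\mathbb{Q}(\sqrt{p_1},\dots,\sqrt{p_k})/\mathbb{Q}$ is Galois of degree $2^k$, with Galois group acting by independent sign changes $\sqrt{p_i}\mapsto\pm\sqrt{p_i}$. A rational number is fixed by every automorphism, but any nontrivial sign change strictly decreases $\sqrt{p_1}+\cdots+\sqrt{p_k}$, so the sum cannot be rational.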

sequences and series - How can one simplify $\frac 36 + \frac {3\cdot 5}{6\cdot9} + \frac{3\cdot5\cdot7}{6\cdot9\cdot12}$ up to $\infty$?



How can one simplify $\cfrac 36 + \cfrac {3\cdot 5}{6\cdot9} + \cfrac{3\cdot5\cdot7}{6\cdot9\cdot12}$ up to $\infty$?



I am new to the topic so can the community please guide me on the approach one needs to take while attempting such questions?




Things I am aware of:




  1. Permutations and combinations


  2. Factorials and some basic properties that revolve around it.


  3. Some basic results of AP, GP, HP


  4. Basics of summation.




Thanks for reading.



Answer



Let $$S=\cfrac 36 + \cfrac {3\cdot 5}{6\cdot9} + \cfrac{3\cdot5\cdot7}{6\cdot9\cdot12}+\cdots$$



Then $$\frac{S}{3} =\frac{1\cdot 3}{3\cdot 6}+\frac{1\cdot 3\cdot 5}{3\cdot 6 \cdot 9}+\frac{1\cdot 3\cdot 5\cdot 7}{3\cdot 6 \cdot 9\cdot 12}+\cdots$$



So $$1+\frac{1}{3}+\frac{S}{3} =1+\frac{1}{3}+\frac{1\cdot 3}{3\cdot 6}+\frac{1\cdot 3\cdot 5}{3\cdot 6 \cdot 9}+\frac{1\cdot 3\cdot 5\cdot 7}{3\cdot 6 \cdot 9\cdot 12}+\cdots$$



Now compare with the series on the right-hand side.



Using the binomial expansion $$(1-x)^{-n} = 1+nx+\frac{n(n+1)}{2}x^2+\frac{n(n+1)(n+2)}{6}x^3+\cdots$$




So we get $$nx=\frac{1}{3}$$ and $$\frac{nx(nx+x)}{2}=\frac{1}{3}\cdot \frac{3}{6}$$



We get $$\frac{1}{3}\left(\frac{1}{3}+x\right)=\frac{1}{3}\Rightarrow x=\frac{2}{3}$$



So we get $$n=\frac{1}{2}$$



So our series sum is $$(1-x)^{-n} = \left(1-\frac{2}{3}\right)^{-\frac{1}{2}} = \sqrt{3}$$



So $$\frac{4}{3}+\frac{S}{3}=\sqrt{3}$$




So $$S=3\sqrt{3}-4.$$
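A quick numerical check of this value, as a minimal Octave sketch ($200$ terms is plenty, since the term ratio tends to $2/3$):

S = 0; term = 1;
for k = 1:200
  term = term * (2*k + 1) / (3*(k + 1));  % k-th factor: (2k+1)/(3k+3)
  S = S + term;
end
[S, 3*sqrt(3) - 4]   % both approximately 1.19615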


real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

Friday, October 16, 2015

Complex numbers and geometric series




a) Use the formula for the sum of a geometric series to show that



$$\sum _{k=1}^n\:\left(z+z^2+\cdots+z^k\right)=\frac{nz}{1-z}-\frac{z^2}{\left(1-z\right)^2}\left(1-z^n\right),\:z\ne 1$$



I thought the formula for geometric series is $$\frac{a\left(1-r^n\right)}{1-r}=\frac{z\left(1-z^n\right)}{1-z}$$



How do I approach this?



b) Let $$z=\cos\left(\theta \right)+i\sin\left(\theta \right),\text{ where }0<\theta <2\pi.$$




By considering the imaginary part of the left-hand side of the identity in part (a), deduce that



$$\sum _{k=1}^n (\sin(\theta)+\sin(2\theta)+\cdots+\sin(k\theta ))=\frac{(n+1)\sin(\theta ) -\sin((n+1)\theta) }{4\sin^2\left(\frac{\theta }{2}\right)}$$



assuming



$$\frac{z}{1-z}=\frac{i}{2\sin\left(\frac{\theta }{2}\right)} \left(\cos\left( \frac{\theta }{2} \right) +i\sin\left(\frac{\theta }{2}\right)\right)$$


Answer



Summing the inner geometric series first just might prove useful:
$$\sum_{k=1}^n \sum_{j=1}^k x^j = \sum_{k=1}^n \frac{x - x^{k+1}}{1 - x}
= \frac{nx}{1-x} - \frac{1}{1-x}\sum_{k=1}^n x^{k+1}$$
Now resolve the remaining (geometric) sum.


integration - find $I=\int_0^{\infty} \log(x+1/x)\,\frac{dx}{1+x^2}$


Using $$\int_0^{\pi/2} \log\sin x\,\mathrm dx= -\dfrac{\pi}{2} \log 2,$$ how can one find


$$I=\int_0^{\infty} \log{(x+1/x)}\,\frac{dx}{1+x^2}. $$


Putting $x=\tan z$,


$I=\int_0^{\pi/2} (\log 2-\log(\sin(2z)))\,dz=\frac{\pi}{2}\log 2-\frac12\int_0^{\pi} \log(\sin(u))\,du$, using the substitution $u=2z$.


What should I do next?


Answer



Split the second integral:



$$\int_0^{\pi}\log(\sin(u)) du = \int_0^{\pi/2}\log(\sin(u)) du + \int_{\pi/2}^{\pi}\log(\sin(u)) du$$


And use a change of variable $u=\pi-x$ for the second part.
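Carrying this out: the substitution $u=\pi-x$ turns the second piece into the first, so
$$\int_0^{\pi}\log(\sin u)\,du=2\int_0^{\pi/2}\log(\sin u)\,du=-\pi\log 2,$$
and therefore
$$I=\frac{\pi}{2}\log 2-\frac12\left(-\pi\log 2\right)=\pi\log 2.$$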


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...