Saturday, April 30, 2016

let's play a (continuous) probability game!



Here's the description of the game.




We have a target number $x$ in mind and start by picking a number $c_1$ uniformly at random from $(0, 1)$. Then $c_2$ is picked uniformly at random from $(0, c_1)$, and in general, $c_i$ is picked uniformly at random from $(0, c_{i -1})$. The game stops when we pick a $c_i$ such that $c_i < x$.



What is the expected number of times we'll have to pick a number $c_i$?
Alternatively, what is the expected length of the sequence of $c_i$'s?



I am guessing that the solution will be in terms of our target $x$. How would one go about approaching this problem? I thought about conditioning in terms of $c_1$, but I couldn't work it out.



Edit: we have $x \in (0, 1)$.



Edit 2: I am not hell-bent on using a conditional expectation approach. But if there exists some ridiculously simple solution, I am guessing it would involve it.



Answer



Given $x$, let $f(x)$ be the expected length of the game. We'll show that $f(x) = 1 - \ln x$.



The probability that we finish the game on the first step is $x$; the game continues with probability $1 - x$. We therefore have
$$f(x) = 1 + (1 - x)\cdot(\textrm{#future moves}).$$



Now, suppose we have to continue, i.e., $c_1 \in (x, 1)$. Rather than choosing the next number $c_2$ from $(0, c_1)$, we can rescale so that $c_2$ is effectively chosen from $(0, 1)$, restarting the game with target $x/c_1$ instead of $x$.



To account for the infinitely many ways to choose $c_1 \in (x, 1)$, we average over its conditional (uniform) distribution on $(x,1)$:
$$\textrm{#future moves} = \frac1{1-x}\int_x^1f\left(\frac xu\right)\ du$$




In other words,
$$f(x) = 1 + \int_x^1f\left(\frac xu\right)\ du.\tag{1}$$



The rest (I'm skipping a few steps here...) is a matter of manipulating this to eventually bring it into the form of a differential equation: substituting $t = x/u$ turns $(1)$ into $f(x) = 1 + x\int_x^1 f(t)\,t^{-2}\,dt$, and differentiating twice gives
$$xf''(x) = -f'(x),$$
whose general solution is $f(x) = a + b\ln x$. From there, using $(1)$ we find $a=1, b=-1$, solving the problem.



Given the simplicity of the final expression there may be a much more direct solution, but at the moment I don't see one.
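A quick Monte Carlo sanity check (my own addition, not part of the original answer; the target values below are arbitrary) agrees with $f(x) = 1 - \ln x$:

    import math
    import random

    def game_length(x):
        # Play one game: draw from shrinking intervals until a draw falls below x.
        upper, picks = 1.0, 0
        while True:
            c = random.uniform(0.0, upper)
            picks += 1
            if c < x:
                return picks
            upper = c

    for x in (0.1, 0.3, 0.7):
        trials = 200_000
        avg = sum(game_length(x) for _ in range(trials)) / trials
        print(f"x={x}: simulated {avg:.3f} vs 1 - ln x = {1 - math.log(x):.3f}")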


functional equations - Solution(s) to $f(x + y) = f(x) + f(y)$ (and miscellaneous questions...)



My lecturer was talking today (in the context of probability, more specifically Kolmogorov's axioms) about the additive property of functions, namely that:



$$f(x+y) = f(x) + f(y)$$



I've been trying to find what functions satisfy this. Intuition says that, for functions over $\mathbb{R}$, the only functions should be of the form $f(x) = ax$ for some real a. Unfortunately I've only shown this is true when the domain of the function is the rational multiples of a given real number.




My question is if it is possible to extend this result (that $f(x) = ax$ given additivity) to the real numbers, possibly without assuming the continuity of $f$. It seems to me that additivity introduces so many constraints on a function that nothing but the trivial case would be able to sneak through. The following is a summary of my thoughts to date, though they're obviously long and not 'compulsory reading'. :)



When x is rational - Preliminary Investigation



It is not hard to see that:



$$f(x + x + x) = 3f(x)$$



and more generally, for $a\in \mathbb{N}$,
$$f(ax) = af(x)$$

It is not too hard to prove (well, it took half a bus trip ... ) that this also applies first for $a\in \mathbb{Z}$ and then for $a\in \mathbb{Q}$ (for the latter you just need to consider $a=m/n$ and then note the following):



$$f\Big(\frac{m}{n}x\Big)=mf\Big(\frac{x}{n}\Big)=\frac{m}{n}\cdot nf\Big(\frac{x}{n}\Big)=\frac{m}{n}\cdot f\Big(n\frac{x}{n}\Big)=\frac{m}{n}\cdot f(x)$$



The reason this little equation is cool is that we can set $x = 1$ and get:



$$f(a)=a\cdot f(1)$$



which is equivalent to what was expected intuitively, namely (after changing $a$ to $y$ and $f(1)$ to $a$)




$$f(y) = a\cdot y$$



as long as y is rational



$y$ is a rational multiple of a real number



But we can do a bit better than that. If we substitute in $x = \sqrt{2}$ or any other real number in $f(ax) = af(x)$ (which we know for rational $a$), you can conduct the exact same argument above and show that, for instance



$$f(y) = \Big(\frac{f(\sqrt{2})}{\sqrt{2}}\Big)\cdot y=a\cdot y$$




Whenever $y = \frac{m}{n}\sqrt{2}$ i.e. whenever $y$ is a rational multiple of $\sqrt{2}$. Note however, that the value of the coefficient $a$ (i.e. the slope of the line) is apparently completely unrelated to the value taken in the case where $y$ is purely rational.



What I'm actually asking



We still haven't shown that $$f(x) = ax$$ for all $x \in \mathbb{R}$, as the slope of the line may change depending on what real number we are taking rational multiples of. As far as I've shown now, we might have $f(x) = x$ when $x$ is rational, $f(x) = 3x$ when $x$ is a rational multiple of $\sqrt{2}$, etc.



I still feel that $f(x) = ax$ for all $x \in \mathbb{R}$. One reason for thinking this comes from noting that $$f(2) = f(2-\sqrt{2})+f(\sqrt{2})$$



$2$, $2-\sqrt{2}$ and $\sqrt{2}$ are not rational multiples of each other, however the equation above gives a restraint on the slopes of the lines formed by their rational multiples (which we'll call $a_1, a_2$ and $a_3$ for the slopes on the rational multiples of $2, 2-\sqrt{2}$ and $\sqrt{2}$ respectively). We have $2a_1 = (2-\sqrt{2}) a_2 + \sqrt{2} a_3$




There are so many constraints here - all the rational multiples have the same coefficient, and whenever 2 (or more) numbers which aren't rational multiples of each other are added together, we get another constraint on their coefficients. The trivial solution is just that $$f(x) = ax$$



over $x \in \mathbb{R}$, and I really struggle to see how any other solution could possibly squeeze through all these constraints.



Is there an additive function on $\mathbb{R}$ not of the form $f(x) = ax$?


Answer



Yes. See the Wikipedia article on Cauchy's functional equation: using a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$ (which requires the axiom of choice), one can construct additive functions that are not of the form $f(x)=ax$. Any such function is necessarily non-measurable and unbounded on every interval.


discrete mathematics - Determine the number of 0 digits at the end of 100!

I got this question, and I'm totally lost as to how I solve it! Any help is appreciated :)



When 100! is written out in full, it equals 100! = 9332621...000000. Without using a calculator, determine the number of 0 digits at the end of this number


EDIT: Just want to confirm this is okay --


I got 24 by splitting products into 2 cases 1) multiples of 10 and 2) multiples of 5 Case I (1*3*4*6*7*8*9*10)(100,000,000,000)--> 12 zeroes


Similarly got 12 zeroes for Case 2.


So 24 in total? Is that correct?
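For what it's worth, the count of 24 can be confirmed both by Legendre's formula and by brute force; here is a small Python sketch (my addition, not part of the original question):

    import math

    def trailing_zeros(n):
        # Legendre's formula: zeros of n! = floor(n/5) + floor(n/25) + ...
        count, power = 0, 5
        while power <= n:
            count += n // power
            power *= 5
        return count

    print(trailing_zeros(100))            # 24
    s = str(math.factorial(100))
    print(len(s) - len(s.rstrip("0")))    # 24, by brute force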

calculus - Proving $\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$ without induction




I was looking at: $$\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$$



It's pretty easy proving the above using induction, but I was wondering what is the actual way of getting this equation?


Answer



$$n^{3}-(n-1)^{3}=3n^{2}+3n+1$$
$$(n-1)^{3}-(n-2)^{3}=3(n-1)^{2}+3(n-1)+1$$
$$\vdots$$

$$2^{3}-1^{3}=3(1)^{2}+3(1)+1$$



Now use telescopic cancellation.



Here are some "proof without words"(I find them more elegant):



Sum of squares



Sum of Squares(2)




Finally, a more generalized form: $$1^{k}+2^{k}+\cdots+n^{k}=\sum\limits_{i=1}^{k}S(k,i)\binom{n+1}{i+1}i!$$
where $S(k,i)$ represents the Stirling number of the second kind.


algebra precalculus - Simplifying as an exact value (Simplest Radical Form)


Hey, I have a problem: solve for exact value (simplest radical form) $-3\sqrt{27}$ , the result is $-9 \sqrt3$ . I'm in 8th grade studying for a Math placement test to take trigonometry as a freshman next year. This doesn't seem to be covered in my textbook. Can anyone explain to me what's going on here? Thanks!


Answer



Presumably you are familiar with the rule that (for positive $a$ and $b$) $$\sqrt{a\times b}=\sqrt{a}\,\,\times\sqrt{b}$$ Can you see how to break up 27 into $a\times b$ for the right $a$ and $b$?


limits - Is L'Hôpital for $\lim\limits_{x\to0}\frac{\sin(x)}{x}$ circular?

I was considering using L'Hopital for $\displaystyle\lim\limits_{x\to0}\frac{\sin(x)}{x}$, but I was told that this is circular, because we use this limit to show $\displaystyle\frac{\mathrm d}{\mathrm dx}\sin(x) = \cos(x)$.



Do we have to use this limit to find the derivative of $\sin(x)$, or is there a legitimate counter-argument here?

matrices - Determinant matrix $3 \times 2$

I need to verify the linear dependence or independence of a $3 \times 2$ complex matrix; how do I compute the determinant? I would use the row reduced echelon form, but I have no idea how to do that with complex numbers: can I divide by a complex number when row reducing, or must my row-reduction operations be limited to real numbers?

calculus - Integral of $4x(1+x)^{-5}$ without partial fractions



The question is asking me to do $\int_{0}^{\infty}{4x\over(1+x)^5}dx$




My question is, is there any way to do this without partial fractions. If there is a formula for equations of this type, I will gladly memorize it, I just don't think I'll have time to partial fraction expand a question like this on the P exam.


Answer



Let $u=1+x,$ then $du=dx$ and so your integral is now
$$\int_{u=1}^{\infty}4(u-1)u^{-5}du=4\int_{1}^{\infty}(u^{-4}-u^{-5})du=\frac{1}{3}$$


Friday, April 29, 2016

elementary number theory - Prove something is divisible by a prime



Let $p$ be a prime. Prove that $\sum_{k=j}^{p-1} \frac{k!}{ j! (k-j)! }$ is divisible by $p$ $\forall$ $j \in \{0, ..., p-2\}$.



Where this problem comes from:





  • I am trying to prove that $1+x+\dots+x^{p-1}$ is irreducible over $\mathbb{Q}[x]$ where $p\in\mathbb{P}$.

  • I concluded that it would be the same to prove that $1+(x+1)+\dots+(x+1)^{p-1}$ is irreducible over $\mathbb{Q}[x]$.

  • I wrote out the latter polynomial as $\sum_{j=0}^{p-1}\sum_{k=j}^{p-1}\frac{k!}{j! (k-j)!}x^j$ and set out to employ the Eisenstein criterion.



What I've tried so far:




  • Induction on $j$. The $j=0$ case is easy as the coefficient is just $p$.

  • Assuming it is true for some $j\in\{0,\dots,p-3\}$, I wrote out $\sum_{k=j+1}^{p-1}\frac{k!}{(j+1)!(k-j-1)!}$ and tried to express it as things that are obviously divisible by $p$ and the $j$ term to no avail.



Answer



In this answer, two proofs are given of
$$
\sum_{j=m}^{n}\binom{j}{m}=\binom{n+1}{m+1}
$$
Therefore,
$$
\sum_{k=j}^{p-1}\binom{k}{j}=\binom{p}{j+1}
$$

Now, if $j\lt p-1$, $\binom{p}{j+1}=\frac{p(p-1)\cdots(p-j)}{(j+1)!}$ has a factor of $p$ in the numerator and no factor of $p$ in the denominator. Therefore,
$$
\left.p\,\middle|\,\sum_{k=j}^{p-1}\binom{k}{j}\right.
$$
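A quick numerical check of the hockey-stick identity and the divisibility claim (my own addition, using Python's math.comb):

    from math import comb

    p = 13  # any prime
    for j in range(p - 1):                            # j = 0, ..., p-2
        s = sum(comb(k, j) for k in range(j, p))      # sum_{k=j}^{p-1} C(k, j)
        assert s == comb(p, j + 1)                    # hockey-stick identity
        assert s % p == 0                             # divisible by p
    print("checks pass for p =", p)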


algebra precalculus - Highest power of a prime $p$ dividing $N!$



How does one find the highest power of a prime $p$ that divides $N!$ and other related products?



Related question: How many zeros are there at the end of $N!$?






This is being done to reduce abstract duplicates. See

Coping with *abstract* duplicate questions. and List of Generalizations of Common Questions for more details.


Answer



Largest power of a prime dividing $N!$



In general, the highest power of a prime $p$ dividing $N!$ is given by



$$s_p(N!) = \left \lfloor \frac{N}{p} \right \rfloor + \left \lfloor \frac{N}{p^2} \right \rfloor + \left \lfloor \frac{N}{p^3} \right \rfloor + \cdots$$



The first term appears since you want to count the number of integers that are at most $N$ and are multiples of $p$, and each of these contributes one factor $p$ to $N!$. But then when you have multiples of $p^2$ you are not contributing just one $p$ but two of these primes $p$ to the product. So you now count the number of multiples of $p^2$ that are at most $N$ and add them. This is captured by the second term $\displaystyle \left \lfloor \frac{N}{p^2} \right \rfloor$. Repeat this to account for higher powers of $p$ not exceeding $N$.




Number of zeros at the end of $N!$



The number of zeros at the end of $N!$ is given by $$\left \lfloor \frac{N}{5} \right \rfloor + \left \lfloor \frac{N}{5^2} \right \rfloor + \left \lfloor \frac{N}{5^3} \right \rfloor + \cdots$$ where $\left \lfloor \frac{x}{y} \right \rfloor$ is the greatest integer $\leq \frac{x}{y}$.



To make it clear, write $N!$ as a product of primes $N! = 2^{\alpha_2} 3^{\alpha_3} 5^{\alpha_5} 7^{\alpha_7} 11^{\alpha_{11}} \ldots$ where $\alpha_i \in \mathbb{N}$.



Note that $\alpha_5 < \alpha_2$ whenever $N \geq 2$. (Why?)



The number of zeros at the end of $N!$ is the highest power of $10$ dividing $N!$




If $10^{\alpha}$ divides $N!$ and since $10 = 2 \times 5$, $2^{\alpha} | N!$ and $5^{\alpha} | N!$. Further since $\alpha_5 < \alpha_2$, the highest power of $10$ dividing $N!$ is the highest power of $5$ dividing $N!$ which is $\alpha_5$.



Note that there will be 



 1. A jump of $1$ zero going from $(N-1)!$ to $N!$ if $5 \mathrel\| N$



 2. A jump of $2$ zeroes going from $(N-1)!$ to $N!$ if $5^2 \mathrel\| N$



 3. A jump of $3$ zeroes going from $(N-1)!$ to $N!$ if $5^3 \mathrel\| N$ and in general




 4. A jump of $k$ zeroes going from $(N-1)!$ to $N!$ if $5^k \mathrel\| N$



where $a \mathrel\| b$ means $a$ divides $b$ and $\gcd\left(a,\dfrac{b}{a} \right)$ = 1.



Largest power of a prime dividing other related products



In general, if we want to find the highest power of a prime $p$ dividing numbers like $\displaystyle 1 \times 3 \times 5 \times \cdots \times (2N-1)$, $\displaystyle P(N,r)$, $\displaystyle \binom{N}{r}$, the key is to write them in terms of factorials.



For instance, $$\displaystyle 1 \times 3 \times 5 \times \cdots \times (2N-1) = \frac{(2N)!}{2^N N!}.$$ Hence, the largest power of a prime, $p>2$, dividing $\displaystyle 1 \times 3 \times 5 \times \cdots \times (2N-1)$ is given by $s_p((2N)!) - s_p(N!)$, where $s_p(N!)$ is defined above. If $p = 2$, then the answer is $s_p((2N)!) - s_p(N!) - N$.




Similarly, $$\displaystyle P(N,r) = \frac{N!}{(N-r)!}.$$ Hence, the largest power of a prime, dividing $\displaystyle P(N,r)$ is given by $s_p(N!) - s_p((N-r)!)$, where $s_p(N!)$ is defined above.



Similarly, $$\displaystyle C(N,r) = \binom{N}{r} = \frac{N!}{r!(N-r)!}.$$ Hence, the largest power of a prime, dividing $\displaystyle C(N,r)$ is given by $$s_p(N!) - s_p(r!) - s_p((N-r)!)$$ where $s_p(N!)$ is defined above.
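As an illustration of the formulas above, here is a small sketch of $s_p(N!)$ in Python and one use of it (my own addition, not part of the original answer; the numbers are arbitrary examples):

    from math import comb

    def s_p(n, p):
        # Exponent of the prime p in n!, by Legendre's formula.
        total, power = 0, p
        while power <= n:
            total += n // power
            power *= p
        return total

    # Largest power of 3 dividing C(100, 40), via s_p(N!) - s_p(r!) - s_p((N-r)!)
    e = s_p(100, 3) - s_p(40, 3) - s_p(60, 3)
    print(e)
    print(comb(100, 40) % 3**e == 0, comb(100, 40) % 3**(e + 1) != 0)   # True True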


Thursday, April 28, 2016

elementary set theory - Order type in finite sets



This is a general question regarding power (cardinal number) and type (ordinal number) of a set. These definitions are taken from Kolmogorov and Fomin(1970). I can see why given power there are (uncountably) many sets with different types with this power in the infinite case. For instance, $\aleph_0$ corresponds to the usual order $\omega$ of $\mathbb{N}$ which is
$$
1,2,3,\dots,
$$
but another order type can be written as
$$
1,3,5,\dots,2,4,6\dots.

$$
However, it is claimed that in the finite case there is a unique type for given power. I do not understand this because we can follow the same arguments. For instance take a set with power $2n$, usual order type is
$$
1,2,\dots,2n
$$
in which $2n$ is the maximal element. However, another order type
$$
1,3,5,\dots,2n-1,2n,\dots 2
$$
in which $2$ is the maximal element. I do not see why this argument should not hold. Thanks for any help



Answer



Order type does not care about the naming of the elements; i.e., the orders



$$1,2,\dots,2n \>\>\>\>\text{ and }\>\>\>\>1,3,5,\dots,2n-1,2n,\dots 2$$
are the same. Just create the natural bijection (mapping least element to least, second least to second least, etc.) between the sets, and it is straightforward to check that it is an order-preserving bijection.



Your question should really be why the order types of
$$
1,2,3,\dots
\>\>\>\>\text{ and }\>\>\>\>

1,3,5,\dots,2,4,6\dots
$$
are different.



Answer: The easy way to see this is to observe that in $1,2,3,\dots$ each element has only finitely many elements below it. However, in $1,3,5,\dots,2,4,6\dots$ the element $2$ has infinitely many elements below it. Thus any order-preserving bijection from $1,2,3,\dots$ to $1,3,5,\dots,2,4,6\dots$ has an impossible task: it cannot map any element of the domain to the element $2$ in the co-domain.



Especially if we map least element to least, second least to second least etc. we will not create a function which is onto.



Formal proof that there is no order preserving bijection: Assume $f$ is an order preserving bijection from $1,2,3,\dots$ to $1,3,5,\dots,2,4,6\dots$. Now assume that $f(n)=2$. As $1,3,5,\ldots,2n+1$ are all less than $2$ in the second order type, we see that $f^{-1}(1),f^{-1}(3),\ldots,f^{-1}(2n+1)$ are $n+1$ distinct positive integers which are less than $n$. However, there are only $n-1$ positive integers less than $n$, which is a contradiction.



real analysis - Showing that $\frac{\sqrt[n]{n!}}{n} \rightarrow \frac{1}{e}$


Show:$$\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}= \frac{1}{e}$$


So I can expand the numerator by geometric mean. Letting $C_{n}=\left(\ln(a_{1})+...+\ln(a_{n})\right)/n$. Let the numerator be called $a_{n}$ and the denominator be $b_{n}$ Is there a way to use this statement so that I could force the original sequence into the form of $1/\left(1+\frac{1}{n}\right)^n$


Answer



I would like to use the following lemma:



If $\lim_{n\to\infty}a_n=a$ and $a_n>0$ for all $n$, then we have $$ \lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}=a \tag{1} $$




Let $a_n=(1+\frac{1}{n})^n$, then $a_n>0$ for all $n$ and $\lim_{n\to\infty}a_n=e$. Applying ($*$) we have $$ \begin{align} e&=\lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}\\ &=\lim_{n\to\infty}\sqrt[n]{\left(\frac{2}{1}\right)^1\left(\frac{3}{2}\right)^2\cdots\left(\frac{n+1}{n}\right)^n}\\ &=\lim_{n\to\infty}\sqrt[n]{\frac{(n+1)^n}{n!}}\\&= \lim_{n\to\infty}\frac{n+1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}+\lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}} \end{align}\tag{2} $$ where we use (1) in the last equality to show that $ \lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=0. $


It follows from (2) that $$ \lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=\frac{1}{e}. $$


elementary number theory - How do I compute $a^b\,\bmod c$ by hand?



How do I efficiently compute $a^b\,\bmod c$:




  • When $b$ is huge, for instance $5^{844325}\,\bmod 21$?


  • When $b$ is less than $c$ but it would still be a lot of work to multiply $a$ by itself $b$ times, for instance $5^{69}\,\bmod 101$?

  • When $(a,c) \neq 1$, for instance $6^{103}\,\bmod 14$?



Are there any other tricks for evaluating exponents in modular arithmetic?






This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.




and here: List of Generalizations of Common Questions


Answer



Wikipage on modular arithmetic is not bad.




  • When $b$ is huge, and $a$ and $c$ are coprime, Euler's theorem applies:
    $$
    a^b \equiv a^{b \, \bmod \, \phi(c)} \, \bmod c
    $$
    For the example at hand, $\phi(21) = \phi(3) \times \phi(7) = 2 \times 6 = 12$. $ 844325 \bmod 12 = 5$, so $5^5 = 5 \times 25^2 \equiv 5 \times 4^2 = 80 \equiv 17 \mod 21$.



  • When $a$ and $c$ are coprime, but $0 \lt b \lt \phi(c)$, use repeated squaring: $$
    \begin{eqnarray}
    5^4 \equiv 5 \times 5^3 \equiv 5 \times 24 \equiv 19 &\pmod{101}\\
    19^4 \equiv (19^2)^2 \equiv 58^2 \equiv (-43)^2 \equiv 1849 \equiv 31 &\pmod{101} \\
    31^4 \equiv (31^2)^2 \equiv (961)^2 \equiv 52^2 \equiv 2704 \equiv 78 &\pmod{101} \\
    5^{69} \equiv 5 \times 5^4 \times ((5^4)^4)^4 \equiv 5 \times 19 \times 78 \equiv 5 \times 19 \times (-23)\\
    \equiv 19 \times (-14) \equiv -266 \equiv 37 & \pmod{101}
    \end{eqnarray}
    $$



  • When $a$ and $c$ are not coprime, let $g = \gcd(a,c)$. Let $a = g \times d$ and $c = g \times f$, then, assuming $b > 1$:
    $$
    a^b \bmod c = g^b \times d^b \bmod (g \times f) = ( g \times (g^{b-1} d^b \bmod f) ) \bmod c
    $$
    In the example given, $\gcd(6,14) = 2$. So we need $2^{102} \times 3^{103} \mod 7$; using Euler's theorem, with $\phi(7) = 6$, and $102 \equiv 0 \mod 6$, $2^{102} \times 3^{103} \equiv 3 \mod 7$, so $6^{103} \equiv (2 \times 3) \equiv 6 \mod 14 $.
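All three worked examples can be cross-checked with Python's built-in three-argument pow, which performs modular exponentiation by repeated squaring (my addition, not part of the original answer):

    print(pow(5, 844325, 21))   # 17
    print(pow(5, 69, 101))      # 37
    print(pow(6, 103, 14))      # 6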



sequences and series - Infinite Sum of Falling Factorial and Power



According to Mathematica,



$$\sum_{k=0}^\infty \frac{(G+k)_{G-1}}{2^k}=2(G-1)!(2^{G}-1)$$




where



$$(G+k)_{G-1}=\frac{(G+k)!}{(G+k-G+1)!}=\frac{(G+k)!}{(k+1)!}$$



is the falling factorial. I would like to compute this analytically, but nothing I've been doing works. A proof by induction led me to a more complex summation, and I can't split it or simplify the falling factorial. Is there any possible way to evaluate this without resorting to Mathematica? Any help and/or references would be greatly appreciated.


Answer



We have $$S=\sum_{k\geq0}\frac{\left(G+k\right)!}{\left(k+1\right)!}\left(\frac{1}{2}\right)^{k}=\left(G-1\right)!\sum_{k\geq0}\dbinom{G+k}{G-1}\left(\frac{1}{2}\right)^{k}
$$ and since holds $$\dbinom{G+k}{G-1}=\sum_{m=0}^{G-1}\dbinom{k+m}{m}
$$ we have, exchanging the sum with the series $$S=\left(G-1\right)!\sum_{m=0}^{G-1}\sum_{k\geq0}\dbinom{k+m}{m}\left(\frac{1}{2}\right)^{k}
$$ now note that $$\frac{\left(k+m\right)!}{m!}=\left(k+m\right)\left(k+m-1\right)\cdots\left(k+m-\left(k-1\right)\right)=\left(-1\right)^{k}\left(-\left(m+1\right)\right)_{k}$$ where $\left(x\right)_{k}$ is the Pochhammer' symbol, so by the generalized binomial theorem we have $$\sum_{k\geq0}\dbinom{k+m}{m}\left(\frac{1}{2}\right)^{k}=\sum_{k\geq0}\dbinom{-\left(m+1\right)}{k}\left(-\frac{1}{2}\right)^{k}$$ $$=\frac{1}{\left(1-\frac{1}{2}\right)^{m+1}}=2^{m+1}

$$ and finally $$\sum_{m=0}^{G-1}2^{m+1}=2\left(2^{G}-1\right)$$ so




$$\sum_{k\geq0}\frac{\left(G+k\right)!}{\left(k+1\right)!}\left(\frac{1}{2}\right)^{k}=2\left(G-1\right)!\left(2^{G}-1\right).$$



linear algebra - How can we prove that if we can change a matrix into an identity matrix by doing some elementary row operations, then the matrix is invertible?



I started linear algebra, and I encountered the part that using Gauss-Jordan method to compute invertible matrix. Then, how can we prove that a matrix is invertible if we can change that matrix into an identity matrix by doing some elementary row operations?


Answer



Every row operation corresponds to left-multiplication by a matrix. A sequence of row operations therefore corresponds to a single multiplication by the product of those matrices (taken right-to-left). If a square matrix $A$ can be multiplied by any other square matrix to produce an identity matrix, then $A$ is invertible (and the other matrix is its inverse).


Wednesday, April 27, 2016

real analysis - Is this a necessary and sufficient condition for the derivative to exist at $C$?

Suppose we want to prove that the derivative of a function across an interval exists at $C$, but the derivative at $C$ cannot be found. We know the function must be continuous. Can we take the limit of derivative from the negative and positive direction of $C$ and show that if they are equal, the derivative at $C$ exists and is equal to the limit obtained? Is this a necessary and sufficient condition?



EDIT:



Sufficiency - If a function is a derivative along some interval, it does not have a removable singularity at $C$.



Necessity - There is no interval of a derivative of some function in which a jump or essential discontinuity occurs.




There are two cases in which the condition is met if this is a necessary and sufficient condition. One is where the derivative is continuous, the other is where there is a removable discontinuity in the derivative. Is the latter possible?

algebra precalculus - Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$

Prove the following statement $S(n)$ for $n\ge1$:



$$\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$$



To prove the basis, I substitute $1$ for $n$ in $S(n)$:




$$\sum_{i=1}^11^3=1=\frac{1^2(2)^2}{4}$$



Great. For the inductive step, I assume $S(n)$ to be true and prove $S(n+1)$:



$$\sum_{i=1}^{n+1}i^3=\frac{(n+1)^2(n+2)^2}{4}$$



Considering the sum on the left side:



$$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^ni^3+(n+1)^3$$




I make use of $S(n)$ by substituting its right side for $\sum_{i=1}^ni^3$:



$$\sum_{i=1}^{n+1}i^3=\frac{n^2(n+1)^2}{4}+(n+1)^3$$



This is where I get a little lost. I think I expand the equation to be



$$=\frac{(n^4+2n^3+n^2)}{4}+(n+1)^3$$



but I'm not totally confident about that. Can anyone provide some guidance?

real analysis - An alternative way to find the sum of this series?

$\displaystyle \frac{4}{20}$+$\displaystyle \frac{4.7}{20.30}$+$\displaystyle \frac{4.7.10}{20.30.40}$+...



Now I have tried to solve this in a usual way, first find the nth term $t_n$.



$t_n$= $\displaystyle \frac{1}{10}$($\displaystyle \frac{1+3}{2}$) + $\displaystyle \frac{1}{10^2}$($\displaystyle \frac{1+3}{2}$)($\displaystyle \frac{1+6}{3}$) + ...+ $\displaystyle \frac{1}{10^n}$($\displaystyle \frac{1+3}{2}$)($\displaystyle \frac{1+6}{3}$)...($\displaystyle \frac{1+3n}{n+1}$)



=$\displaystyle \frac{1}{10^n}\prod$(1+$\displaystyle \frac{2r}{r+1}$) , $r=1,2,..,n$




=$\displaystyle \prod$($\displaystyle \frac{3}{10}-\displaystyle \frac{1}{5(r+1)}$)
thus, $t_n=$ (x-$\displaystyle \frac{a}{2}$)(x-$\displaystyle \frac{a}{3}$)...(x-$\displaystyle \frac{a}{n+1}$), x=$\displaystyle \frac{3}{10}$, a=$\displaystyle \frac{1}{5}$



Now to calculate $S_n$, I have to find the product $t_n$, and then take sum over it. But this seems to be a very tedious job. Is there any elegant method(may be using the expansions of any analytic functions) to do this?

elementary set theory - Bijection between [a,b) and [a,b] intervals.

I need help with finding a bijection between these two intervals: [a,b) and [a,b]. It is told that a,b are in R (real numbers). I know how to construct a bijection if a,b are integers, but i have no idea how to do it if they are real. Any ideas?


Edit: If anyone is still watching, can you tell me if I'm doing this right?


So the bijection will be f(x) = 1) a if x = a and 2) b/p-1 if x != a and x = b/p , where p is in R - {0}.


By taking the p from R - {0} and not from N, I think this could work for real numbers.


Because a < b, every number from a to b can be written as b/p. So the function moves every number upwards if I express myself that way, thus reaching the b.

definite integrals - Finding the value of a sum using Riemann sum theorem





Question: Find the value of $\sum_{i=1}^{n}(\frac{1}{n-i})^{c}$ for large $n$.




\begin{align}
\sum_{i=1}^{n}(\frac{1}{n-i})^{c}
& = \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = \frac{n}{n} \times \sum_{i=1}^{n}(\frac{1}{n})^{c}(\frac{1}{1-\frac{i}{n}})^{c}
\\ & = n(\frac{1}{n})^{c} \sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c} \qquad(1)
\end{align}




Let $f(x) = (\frac{1}{1-x})^{c}$, by using Riemann-sum theorem, we have
\begin{align}
\lim_{n\rightarrow \infty}\sum_{i=1}^{n}\frac{1}{n}(\frac{1}{1-\frac{i}{n}})^{c}
& = \int_{0}^{1} (\frac{1}{1-x})^{c} = A \qquad(2)
\end{align}
By using $(1)$ and $(2)$, for sufficently large $n$, we have
$$\bbox[5px,border:2px solid #C0A000]{\sum_{i=1}^{n}(\frac{1}{n-i})^{c} = A\times n(\frac{1}{n})^{c}}$$





The presented proof has a problem, $f(x)$ is not defined in the closed interval $[0,1]$. How can I solve this problem?







Definition (Riemann-sum theorem) Let $f(x)$ be a function defined on a closed interval $[a, b]$. Then, we have
$$\lim_{n\rightarrow \infty}\sum_{i=1}^{n}f\Big(a +(\frac{b - a}{n})i\Big)\frac{1}{n}=\int_{a}^{b}f(x)dx$$


Answer



\begin{align}\label{eq:7777}
& \frac{2}{\sqrt{n-i} + \sqrt{n-i+1}} \leq \frac{1}{\sqrt{n-i}} \leq \frac{2}{\sqrt{n-i} + \sqrt{n-i-1}} \nonumber\\

& \qquad \Rightarrow 2(\sqrt{n-i+1} - \sqrt{n-i}) \leq \frac{1}{\sqrt{n-i}} \leq 2(\sqrt{n-i} - \sqrt{n-i-1}) \nonumber\\
& \qquad \Rightarrow 2 \sum_{i=1}^{n-1}(\sqrt{n-i+1} - \sqrt{n-i}) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sum_{i=1}^{n-1}(\sqrt{n-i} - \sqrt{n-i-1}) \nonumber\\
& \qquad \Rightarrow 2 (\sqrt{n}-1) \leq \sum_{i=1}^{n-1} \frac{1}{\sqrt{n-i}} \leq 2 \sqrt{n-1}
\end{align}


Tuesday, April 26, 2016

complex numbers - Real and imaginary part

I need the real and imaginary part of $\log \sin (x+iy)$. I expand $\sin(x+iy)=\sin x \cosh y+i \cos x \sinh y$, but I don't know how to take its logarithm.

calculus - Evaluate $\int_0^1\ln(1-x)\ln x\ln(1+x)\, \mathrm{dx}$



What would you recommend me for the integral below?
$$\int_0^1\ln(1-x)\ln x\ln(1+x) \mathrm{dx}$$
For instance, for the version without the last logarithm would work to use Taylor
series, but in this case things are a bit more complicated and it doesn't seem to work.
Thanks in advance for your suggestions!


Answer




I get (and verified by Mathematica)
$$\int_0^1 \log(1-x) \log x \log(1+x) \, dx = -6 + 4 \log 2 - \log^2 2 + \frac{5}{2} \zeta(2) - 3\zeta(2) \log 2 + \frac{21}{8} \zeta(3).$$





Transforming to a double sum.

As in Marvis's answer, take
\begin{align*}
\log(1+x)\log(1-x) & = \left(\sum_{k=1}^{\infty} \dfrac{(-x)^k}k \right)\left(\sum_{k=1}^{\infty} \dfrac{x^k}k \right)\\
& = \sum_{k=1}^{\infty}\sum_{m=1}^{\infty} \dfrac{(-1)^k x^{k+m}}{km}.

\end{align*}
Using the integral $$\int_0^1 x^r \log x \, dx = - \frac{1}{(1+r)^2},$$
we have
$$\int_0^1 \log(1-x) \log x \log(1+x) \, dx = \sum_{k=1}^{\infty} \sum_{m=1}^{\infty} \frac{(-1)^{k+1}}{km(k+m+1)^2}.$$



Evaluating the inner sum.

Partial fractions decomposition on the summand in $m$ yields
\begin{align*}
\sum_{m=1}^{\infty} \frac{1}{m(k+m+1)^2} &= \sum_{m=1}^{\infty} \left(\frac{1}{(k+1)^2 m} - \frac{1}{(k+1)^2 (m+k+1)} - \frac{1}{(k+1) (m+k+1)^2}\right) \\

&= \sum_{m=1}^{k+1} \frac{1}{(k+1)^2 m} - \sum_{m=k+2}^{\infty} \frac{1}{(k+1)m^2} \\
&= \frac{H_{k+1}}{(k+1)^2} - \frac{\zeta(2)}{k+1} + \frac{H^{(2)}_{k+1}}{k+1},
\end{align*}
where $H^{(r)}_n = \sum_{i=1}^n i^{-r}$, the $n$th $r$-harmonic number.



Now we're left with evaluating
$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_{k+1}}{k(k+1)^2} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1} \zeta(2)}{k(k+1)} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_{k+1}}{k(k+1)}. \tag{1}$$
We take the three sums in Eq. (1) in turn.






The first sum in Eq. (1).

For the first sum, applying partial fractions decomposition yields
\begin{align*}
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_{k+1}}{k(k+1)^2} &= \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_{k+1}}{k} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_{k+1}}{(k+1)} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_{k+1}}{(k+1)^2} \\
&= \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k(k+1)} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k} - 1 + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k^2} - 1 \\
&= -2 + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k(k+1)} + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k^2} \\
&= -3 + 2 \log 2 + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k^2},
\end{align*}
where in the last step we used the evaluation for the second sum we're about to do.






The second sum in Eq. (1).

For the second sum in Equation (1), applying partial fractions decomposition yields
\begin{align*}
\zeta(2) \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k(k+1)} &= \zeta(2) \left(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} + \sum_{k=1}^{\infty}\frac{(-1)^{k+2}}{k+1}\right) \\
&= \zeta(2) \left(2\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} -1\right) \\
&= 2 \zeta(2) \log 2 - \zeta(2).
\end{align*}






The third sum in Eq. (1).

For the third sum in Equation (1), applying partial fractions decomposition yields
\begin{align*}
\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_{k+1}}{k(k+1)} &= \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_{k+1}}{k} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_{k+1}}{(k+1)} \\
&= \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k(k+1)^2} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k} - 1 \\
&= -1 + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k+1} - \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{(k+1)^2} \\
&= -1 + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k} + 2 \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} - 1 + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^2} - 1\\

&= -3 + 2 \log 2 + \frac{1}{2} \zeta(2) + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k} ,\\
\end{align*}
where in the last step we use the identity, for $p > 1$, $$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^p} = (1 - 2^{1-p}) \zeta(p)$$





Combining the results.

Putting all this together, we have that the integral evaluates to
$$ -6 + 4 \log 2 + \frac{3}{2} \zeta(2) - 2 \zeta(2) \log 2 + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k} + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H_k}{k^2} + 2\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H^{(2)}_k}{k}.$$






Evaluating the alternating Euler sums.

The three remaining sums all have similar forms. Letting $$A(p,q) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H^{(p)}_k}{k^q},$$ we have $A(1,1)$, $A(1,2)$, and $A(2,1)$ left to evaluate. Sums of the form $A(p,q)$ are known as alternating Euler sums. Euler sums and their close variants are an ongoing research area. These three, though, seem like the three simplest alternating Euler sums, and one would think that there would be relatively simple ways to evaluate them. However, despite a decent amount of work today I could not find simple proofs for any of them, either on my own or in the literature.



At any rate, we have $A(1,1) = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2$, $A(1,2) = \frac{5}{8} \zeta(3)$, and $A(2,1) = \zeta(3) - \frac{1}{2}\zeta(2) \log 2$.



(Here's a link to my question where I ask for a proof for the $A(1,1)$, $A(1,2)$, and $A(2,1)$ evaluations. There are multiple nice answers there. In particular, Robjohn's three answers show how to do the evaluations by manipulating the sums, the same approach I was trying here.)






Finally.

Therefore,
$$\int_0^1 \log(1-x) \log x \log(1+x) \, dx = -6 + 4 \log 2 - \log^2 2 + \frac{5}{2} \zeta(2) - 3\zeta(2) \log 2 + \frac{21}{8} \zeta(3).$$
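A numerical cross-check of this closed form (my own addition; it assumes the mpmath package is available):

    from mpmath import mp, mpf, quad, log, zeta

    mp.dps = 30  # working precision

    integral = quad(lambda x: log(1 - x) * log(x) * log(1 + x), [0, 1])
    closed_form = (-6 + 4*log(2) - log(2)**2 + mpf(5)/2*zeta(2)
                   - 3*zeta(2)*log(2) + mpf(21)/8*zeta(3))
    print(integral)       # the two printed values should agree
    print(closed_form)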


calculus - Why does combining $\int_{-\infty}^{\infty} e^{-x^2}\, dx\int_{-\infty}^{\infty} e^{-y^2}\, dy$ into a $dx\,dy$ integral work?



I was looking here: How to compute the integral $\int_{-\infty}^\infty e^{-x^2}\,dx$? to find out how to evaluate $\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx$, and didn't understand why
$$\left(\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx\right)\left(\int\limits_{-\infty}^{\infty} e^{-y^2} dy\right)=\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} e^{-x^2} e^{-y^2} dx\thinspace dy$$
I know you can use Fubini's Theorem from this: Why does $\left(\int_{-\infty}^{\infty}e^{-t^2} dt \right)^2= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy$?, but I'm still confused about how exactly you can just multiply two integrals together like that.



A detailed answer would be very nice.




Thanks!


Answer



$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$



Because we treat $y$ constant in the first, inner, integral we can pull it out.



$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$



Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,




$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$



The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,



$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$



Provided everything we write down exists.


algebra precalculus - How do we get from $\ln A=\ln P+rn$ to $A=Pe^{rn}$ and similar logarithmic equations?



I've been self-studying from the amazing "Engineering Mathematics" by Stroud and Booth, and am currently learning about algebra, particularly logarithms.




There is a question which I don't understand who they've solved. Namely, I'm supposed to express the following equations without logs:



$$\ln A = \ln P + rn$$



The solution they provide is:



$$A = Pe^{rn}$$



But I absolutely have no idea how they got to these solutions. (I managed to "decipher" some of the similar ones piece by piece by studying the rules of logarithms).


Answer




The basic idea behind all basic algebraic manipulations is that you are trying to isolate some variable or expression from the rest of the equation (that is, you are trying to "solve" for $A$ in this equation by putting it on one side of the equality by itself).



For this particular example (and indeed, most questions involving logarithms), you will have to know that the logarithm is "invertible"; just like multiplying and dividing by the same non-zero number changes nothing, taking a logarithm and then an exponential of a positive number changes nothing.



So, when we see $\ln(A)=\ln(P)+rn$, we can "undo" the logarithm by taking an exponential. However, what we do to one side must also be done to the other, so we are left with the following after recalling our basic rules of exponentiation:
$$
A=e^{\ln(A)}=e^{\ln(P)+rn}=e^{\ln(P)}\cdot e^{rn}=Pe^{rn}
$$
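A tiny numerical sanity check of this manipulation (my own example values, not from the original answer):

    import math

    P, r, n = 500.0, 0.03, 12                               # arbitrary example values
    A = P * math.exp(r * n)                                 # A = P e^{rn}
    print(math.isclose(math.log(A), math.log(P) + r * n))   # True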


elementary number theory - Prove that $9\mid (4^n+15n-1)$ for all $n\in\mathbb N$



First of all I would like to thank you for all the help you've given me so far.



Once again, I'm having some issues with a typical exam problem about divisibility. The problem says that:





Prove that $\forall n \in \mathbb{N}, \ 9\mid4^n + 15n -1$




I've tried using induction, but that didn’t work. I've tried saying that:




$4^n + 15n-1 \equiv 0 \pmod{9}$. Therefore, I want to prove that $4^{n+1} + 15(n+1) -1 \equiv 0 \pmod{9}$.



I've prooved for $n=1$, it's $18\equiv 0 \pmod{9}$, which is OK.





But for the inductive step, I get:




$4\cdot4^n + 15n+15-1 \equiv 0 \pmod{9}$




And from there, I don't know where to replace my inductive hypothesis, and therefore, that's why I think induction is not the correct tool to use here. I guess I might use some tools of congruence or divisibility, but I'm not sure which ones.




I do realize that for all $n\in \mathbb{N}$ with $3 \mid n$ we have $4^n \equiv 1 \pmod{9}$ and $15n \equiv 0 \pmod{9}$. So in the case where 3 divides $n$, I have proved that $4^n + 15n-1 \equiv 0 \pmod{9}$. But I don't know what to do with the other natural numbers, those not divisible by 3, that is, all $n \in \mathbb{N}$ with $n \equiv 1 \pmod{3}$ or $n \equiv 2 \pmod{3}$.



Any ideas? Thanks in advance!


Answer



By the Inductive Hypothesis, $4^n + 15n -1 \equiv 0$ so $4^n \equiv 1-15n$ and thus
$$
4^{n+1}+15(n+1)-1 = 4 \cdot 4^n + 15n + 14 \equiv 4 \cdot (1-15n) + 15n + 14 = 18 -45n \equiv 0
$$
since both $18$ and $45$ are divisible by 9.
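A quick brute-force check of the statement for the first couple of thousand values of $n$ (my addition, not part of the original answer):

    assert all((4**n + 15*n - 1) % 9 == 0 for n in range(1, 2001))
    print("9 divides 4^n + 15n - 1 for n = 1, ..., 2000")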


Monday, April 25, 2016

elementary number theory - How to find last two digits of $2^{2016}$



What should the 'efficient' way of finding the last two digits of $2^{2016}$ be? The way I found them was by multiplying the powers of $2$ because $2016=1024+512+256+128+64+32$. I heard that one way would be with the Chinese Remainder Lemma but I don't really know how I should start?


Answer



Essentially we need $2^{2016}\pmod{100}$



As $(2^{2016},100)=4$




let us find $2^{2016-2}\pmod{100/4}$



Now as $2^{10}\equiv-1\pmod{25}$



$2^{2014}=2^{201\cdot10+4}=(2^{10})^{201}\cdot2^4\equiv(-1)^{201}\cdot2^4\equiv9\pmod{25}$



$$\implies2^2\cdot2^{2014}\equiv2^2\cdot9\pmod{2^2\cdot25}$$


limits - Result of $\lim_{n\to\infty} \frac{x^{100n}}{n!}$



The task is to $\lim_{n\to\infty} \frac{{x}^{100n}}{n!}$. n is an integer. I've tried to use Stolz theorem, but that doesn't seem to give any result.



Thank you for your help.



Answer



By ratio test



$$\frac{{x}^{100(n+1)}}{(n+1)!}\frac{n!}{{x}^{100n}}=\frac{x^{100}}{n+1}\to 0$$



then



$$\lim_{n\to\infty} \frac{{x}^{100n}}{n!}=0$$


How do I divide across equivalence in modular arithmetic



I understand how to solve for $k$ for something like $2k \equiv 4 \bmod 16$. This would become $k \equiv 2 \bmod 8$. But how would I solve for $k$ for something like $3k \equiv 1 \bmod 16$?



Answer



If you have $2k\equiv 4 \pmod {16}$, then for some integer $m$,



$$2k=16m+4$$
which means $k=8m+2$. Now if you have $3k\equiv 1\pmod{16}$, we can multiply by $5$ to get
$$15k\equiv -k\equiv 5 \pmod {16}$$
and thus



$$k\equiv -5\equiv 11\pmod{16}$$
Or $k=16m+11$.




In general, $a$ has a multiplicative inverse mod $p$ iff $(a,p)=1$.
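In recent Python versions (3.8+) the modular inverse is also available from the built-in pow, which can be used to check the computation (my addition, not part of the original answer):

    k = pow(3, -1, 16)     # modular inverse of 3 mod 16
    print(k)               # 11
    print((3 * k) % 16)    # 1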


elementary number theory - Does there exist an $a$ such that $a^n+1$ is divisible by $n^3$ for infinitely many $n$?

It is well known that there are infinitely many positive integers $n$ such that $2^n+1$ is divisible by $n$.



Also it is well known that there exist infinitely many positive integers $n$ such that $4^n+1$ is divisible by $n^2$.




But I still cannot find any positive integer $a$ for which there exist infinitely many positive integers $n$ such that $a^n+1$ (or $a^n-1$) is divisible by $n^3$.



How can I find such and $a$ or prove that it doesn't exist?

Sunday, April 24, 2016

limits - Finding $\lim_{n \to \infty} \frac{\sqrt{n!}}{2^n}$


I'm looking for a way to find this limit:


$\lim_{n \to \infty} \frac{\sqrt{n!}}{2^n}$


I think I have found that it diverges, by plugging numbers into the formula and "sandwich" the result. However I can't find way to prove it.


I know that $n! \approx \sqrt{2 \pi n}(\frac{n}{e})^n $ when $n \to \infty$. (Stirling rule I think)


However, I don't know how I could possibly use it. I mean, I tried using the rule of De l'Hôpital after using that rule, but I didn't go any further.


Answer




$$ \frac{\sqrt{n!}}{2^n}\sim\frac{(2\pi\,n)^{1/4}\Bigl(\dfrac{n}{e}\Bigr)^{n/2}}{2^n}=(2\pi\,n)^{1/4}\Bigl(\frac{\sqrt n}{2\sqrt e}\Bigr)^{n}. $$ Since $\dfrac{\sqrt n}{2\sqrt e}$ and $n^{1/4}$ converge to $\infty$, so does $\dfrac{\sqrt{n!}}{2^n}$.


continuity - Find a discontinuous function defined on $mathbb{Q}$ embedded in $mathbb{R}$



I am reading Chapter 1 Example 11 of 'Counterexamples in Analysis' by Gelbaum and Olmstead. This section illustrates counterexamples of functions defined on $\mathbb{Q}$ embedded in $\mathbb{R}$ of statements that are usually true for functions defined on a real domain. Almost all examples have the assumption that the function (defined on a rational domain) is continuous, for example, the book gives a counterexample of:





A function continuous and bounded on a closed interval but not
uniformly continuous.




My questions are, what is an example of a discontinuous real function defined on $\mathbb{Q}$, that is: $f:\mathbb{Q}\rightarrow\mathbb{R}$? Are all functions defined on $\mathbb{Q}$ discontinuous (similar to how functions defined on the set of natural numbers are always continuous)?


Answer



1). All functions defined on $\mathbb{N}$ are continuous (not discontinuous).



2). An example of a function $f: \mathbb{Q} \to \mathbb{R}$ that is discontinuous is $f = \chi_{\{0\}}$, i.e. $f(x) = 1$ iff $x = 0$ ($x \in \mathbb{Q}$). One can see that this is discontinuous by noting that $f(\frac{1}{n}) = 0$ for each $n \ge 1$, while $f(\lim_n \frac{1}{n}) = f(0) = 1$.


abstract algebra - Constructing a multiplication table for a finite field


Let $f(x)=x^3+x+1\in\mathbb{Z}_2[x]$ and let $F=\mathbb{Z}_2(\alpha)$, where $\alpha$ is a root of $f(x)$. Show that $F$ is a field and construct a multiplication table for $F$.





Can you please help me approach this problem? I've tried searching around, but I don't really know what I'm looking for!



Thanks.
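Not part of the original post, but here is one way such a table can be tabulated mechanically: represent an element $c_0 + c_1\alpha + c_2\alpha^2$ of $F=\mathbb{Z}_2(\alpha)$ as the bit pattern $c_2c_1c_0$, multiply as polynomials over $\mathbb{Z}_2$, and reduce using $\alpha^3 = \alpha + 1$ (a sketch in Python):

    # Elements of F as integers 0..7: bit i is the coefficient of alpha^i.
    def mul(a, b):
        # Multiply in Z_2[alpha] / (alpha^3 + alpha + 1).
        result = 0
        for i in range(3):
            if (b >> i) & 1:
                result ^= a << i        # add a * alpha^i (coefficients mod 2)
        if result & 0b10000:            # alpha^4 -> alpha^2 + alpha
            result ^= 0b10110
        if result & 0b01000:            # alpha^3 -> alpha + 1
            result ^= 0b01011
        return result

    for a in range(8):
        print([mul(a, b) for b in range(8)])   # the 8x8 multiplication table

Every nonzero row of the table contains a 1, reflecting the fact that every nonzero element has a multiplicative inverse.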

Nowhere continuous real valued function which has an antiderivative

My question:



Is there a function $f: \mathbb{R} \rightarrow \mathbb{R}$ that nowhere continuous on its domain, but has an antiderivative?




If there is no such a function, is it true to conclude that: to have an antiderivative, $f$ is necessary to be continuous at least at one point on its domain?



Any comments/ inputs are highly appreciated. Thanks in advance.

Saturday, April 23, 2016

calculus - Evaluate the definite integral $\int^{\infty }_{0}\frac{x \,dx}{e^{x} -1}$ using contour integration


My friend and I have been trying weeks to evaluate the integral



$$\int^{\infty }_{0}\frac{x \,dx}{e^{x} -1} .$$



We have together tried 23 contours, and all have failed.


We already know how to solve this with infinite sums (i.e., using the zeta function and the Basel problem), but we can't figure out how to solve it using contour integration methods.


We already know the answer is $\frac{\pi^{2}}{6}$.


Answer




I would have guessed this was a duplicate, but I wasn't able to find another instance of this question during a cursory search.


Hint The denominator has period $2 \pi i$, which suggests using the following contour $\Gamma_{\epsilon, R}$, $0 < \epsilon < \pi$, $\epsilon < R$, (for which an illustration was already drawn for an answer to the similar question linked by Zacky in the comments):


enter image description here


The key trick here, which we apply with the benefit of hindsight, is to evaluate instead the similar integral $$\int_{\Gamma_{\epsilon, R}} \frac{z^2 \,dz}{e^z - 1} .$$ The interior of $\Gamma_{\epsilon, R}$ contains no poles, so this integral vanishes. Thus, parameterizing the constituent arcs of the contour gives \begin{multline} 0 = \underbrace{\int_\epsilon^R \frac{x^2 \,dx}{e^x - 1}}_{A} + \underbrace{\int_0^{2 \pi} \frac{(R + i y)^2 \cdot i \,dy}{e^{R + i y} - 1}}_{B} + \underbrace{\int_R^\epsilon \frac{(x + 2 \pi i)^2 \,dx}{e^x - 1}}_{C} \\ + \underbrace{\int_0^{-\pi / 2} \frac{(2 \pi i + \epsilon e^{i\theta})^2 \cdot i \epsilon e^{i \theta} d \theta}{e^{\epsilon e^{i \theta}} - 1}}_{D} + \underbrace{\int_{2 \pi - \epsilon}^\epsilon \frac{(i y)^2 \cdot i \,dy}{e^{i y} - 1}}_{E} + \underbrace{\int_{\pi / 2}^0 \frac{(\epsilon e^{i\theta})^2 \cdot i \epsilon e^{i \theta} d \theta}{e^{\epsilon e^{i \theta}} - 1}}_{F} . \qquad (\ast) \end{multline}



A standard bounding argument shows that $B \to 0$ as $R \to \infty$. Computing the first terms of the Taylor series gives that the integrand of $D$ is $-4 \pi^2 i + O(\epsilon)$, so $D = 2 \pi^3 i + O(\epsilon)$, and similarly $F = O(\epsilon)$ (in fact, the integrand is analytic at $0$, which implies this without any more computation). Now, expanding the integrand of $C$ gives $$-\int_\epsilon^R \frac{x^2 \,dx}{e^x - 1} = -\int_\epsilon^R \frac{x^2 \,dx}{e^x - 1} - 4 \pi i \int_\epsilon^R \frac{x \,dx}{e^x - 1} + 4 \pi^2 \int_\epsilon^R \frac{\,dx}{e^x - 1} .$$ The first term on the r.h.s. cancels $A$, and after taking appropriate limits the second term will be constant multiple of the integral $\color{#df0000}{\int_0^\infty \frac{x \,dx}{e^x - 1}}$ of interest. The third term diverges as $\epsilon \searrow 0$, and it turns out that the diverging part of this term in $\epsilon$ is canceled by the diverging part of $E$, but we can avoid dealing with this issue directly by passing to the imaginary part of $(\ast)$. Computing gives $\operatorname{Im} E = -\frac{1}{2} \int_\epsilon^{2 \pi - \epsilon} y^2 \,dy = -\frac{4}{3} \pi^3 + O(\epsilon)$, so taking the limits $\epsilon \searrow 0, R \to \infty$ of the imaginary part of $(\ast)$ leaves $$0 = -4 \pi \color{#df0000}{\int_0^\infty \frac{x\,dx}{e^x - 1}} + 2 \pi^3 - \frac{4}{3} \pi^3 ,$$ and rearranging gives the desired result, $$\color{#df0000}{\boxed{\int_0^\infty \frac{x \,dx}{e^x - 1} = \frac{\pi^2}{6}}} .$$
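As a numerical sanity check of the boxed result (my own addition, assuming the mpmath package is available):

    from mpmath import mp, quad, exp, pi, inf

    mp.dps = 30
    print(quad(lambda x: x / (exp(x) - 1), [0, inf]))   # should match pi^2/6
    print(pi**2 / 6)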



real analysis - Prove that $\lim_{n \to \infty} \frac{n}{(n!)^{\frac{1}{n}}} = e$





I have solved the problem in just 2 lines using a theorem which asserts that



"Let ${u_n}$ be a real sequence such that $u_n > 0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty} \frac{u_{n+1}}{u_n} = \ell$ (finite of infinite). Then $\lim_{n \to \infty} (u_n)^{1 \over n} =\ell$ "



To prove the aforesaid limit, I fix $u_n={n^n \over n!}$. Then $u_n>0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty}\frac{u_{n+1}}{u_n}= \lim_{n \to \infty}(1+{1 \over n})^n=e$.




Then it follows from the above theorem that $\lim_{n \to \infty} (u_n)^{1 \over n} =e$ i.e. $ \lim_{n \to \infty} \frac{n}{(n!)^\frac{1}{n}} = e $. (Proved)



But I am trying to prove it without using the theorem. I am trying to get a generic proof.



Can anyone provide a proper wayout for this?



Thanks for your help in advance.


Answer



EDIT: As pointed out in the comments, even though the final inequality is correct, it is insufficient since $(n+1)^{1/n} \to 1$ as $n \to \infty$. The lower bound can be obtained as shown in @adfriedman's answer.




Here's my take on it:
Whenever $n \geq 3$, we have
$$ n^n \geq (n+1)!, $$
and thus
$$ n^n \geq (n+1)n! \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \geq (n+1)^{\frac{1}{n}}. $$
On the other hand, the Taylor expansion of $e^n$ gives
$$ \frac{n^n}{n!} \leq \sum_{k=0}^{\infty} \frac{n^k}{k!} = e^n \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \leq e. $$
So,
$$ (n+1)^{\frac{1}{n}} \leq \frac{n}{n!^{\frac{1}{n}}} \leq e. $$

Apply the Squeeze Theorem.


number theory - What is $\operatorname{gcd}(a,2^a-1)$?

Intuitively, I feel it's $1$, for example $(2, 3), (3,7)$ etc. But then I cannot go to prove it. Using the formula $ax+by=c$ does not make sense because of the power.
Is it possible by induction? If I assume $a=1$, then $\operatorname{gcd}(1, 2^1-1)=1$ Assuming, it to be true for $k$, then



$$\operatorname{gcd}(k,2^k-1)=1 = kx+(2^k-1)y=1$$



I'm stuck here. Is it even possible with this method?

number theory - How to find integer linear combination

The question said:


Use the Euclidean Algorithm to find gcd $(1207,569)$ and write $(1207,569)$ as an integer linear combination of $1207$ and $569$


I proceeded as follows:



$$ 1207 = 569(2) +69$$


$$569 = 69(8) +17$$


$$69 = 17(4) +1$$


$$17 = 1(17) + 0$$


Thus the gcd = 1


The part I am having problems with is how to calculate and write it as a linear combination. Can someone help?
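Not from the original post, but the back-substitution is exactly the extended Euclidean algorithm; here is a short sketch in Python (for this example it recovers $1 = 33\cdot 1207 - 70\cdot 569$):

    def extended_gcd(a, b):
        # Return (g, x, y) with a*x + b*y = g = gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, x, y = extended_gcd(1207, 569)
    print(g, x, y)            # 1 33 -70
    print(1207*x + 569*y)     # 1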

linear algebra - process of finding eigenvalues and eigenvectors

I need to find eigenvalues/eigenvectors of different kinds of $n \times n$ matrices. For example, how would I determine these for the matrices listed below? What is the typical process? Should I always go by the route of finding eigenvalues by finding roots of characteristic polynomial and then getting eigenvectors by solving $(\mathbf{A} - \lambda \mathbf{I})\mathbf{x} = 0$?



$\begin{bmatrix}
2&0&0\\ 1&2&0\\
0& 1 & 2
\end{bmatrix}
$

$\begin{bmatrix}
4 &1 &1 &1 \\

1&4 &1 &1 \\
1&1 &4 &1 \\
1& 1& 1& 4
\end{bmatrix}$

These are just examples. Typically I want to find eigenvectors of $n \times n$ matrices. If you can show me the process of finding solution of one of these matrices, that would be helpful.
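Hand computation goes through the characteristic polynomial and then solving $(\mathbf{A}-\lambda\mathbf{I})\mathbf{x}=0$ for each root; for checking the arithmetic numerically, here is a small sketch with NumPy (my own addition, not part of the original post):

    import numpy as np

    A = np.array([[4, 1, 1, 1],
                  [1, 4, 1, 1],
                  [1, 1, 4, 1],
                  [1, 1, 1, 4]], dtype=float)

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)    # 7 (once) and 3 (three times) for this matrix
    print(eigenvectors)   # columns are the corresponding eigenvectors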

Friday, April 22, 2016

real analysis - Show a function for which $f(x + y) = f(x) + f(y)$ is continuous at zero if and only if it is continuous on $\mathbb R$




Suppose that $f: \mathbb R \to\mathbb R$ satisfies $f(x + y) = f(x) + f(y)$ for each real $x,y$.



Prove $f$ is continuous at $0$ if and only if $f$ is continuous on $\mathbb R$.




Proof: suppose $f$ is continuous at zero. Then let $R$ be an open interval containing zero. Then $f$ is continuous at zero if and only if $f(x) \to f(0)$ as $x \to 0$. Then $|f(x) - f(0)| < \epsilon$.




Can anyone please help me? I don't really know how to continue. Thank you.


Answer



First let $x=y=0$, we have $f(0)=2f(0)$ which means $f(0)=0$.
For any $a\in R$ and a given $\epsilon>0$, because f is continuous at 0, there exist $\delta>0$ s.t. $|x|<\delta$ implies $|f(x)-f(0)|=|f(x)|<\epsilon$. Now consider the same $\delta$, if $|x-a|<\delta$, we have $$|f(x)-f(a)|=|f(x-a)|<\epsilon$$, therefore f is continuous at $a$.


complex numbers - What is the meaning of Euler's identity?




I know that Euler's identity states that $e^{ix} = \cos x + i\sin x$.



But $e$ is a real number. What does it even mean to raise a real number to an imaginary power? I mean, multiplying it by itself $\sqrt{-1}$ times? What does that mean?


Answer



If $z$ and $w$ are complex numbers, you define $z^w = e^{w \log z}$. The problem is that $\log w$ assumes several values, so you can say that $z^w$ is a set. So if you fix a principal value for ${\rm Log}\,z$, you have a principal power $e^{w\,{\rm Log}\,z}$. For each branch you'll have a different power.



More exactly, the argument of a complex number is the set: $$\arg z = \{ \theta \in \Bbb R \mid z = |z|(\cos \theta + i \sin \theta) \}.$$We call ${\rm Arg}\,z$ the only $\theta \in \arg z$ such that $-\pi < \theta \leq \pi$. Also, if $z \neq 0$, we have: $$\log z = \{ \ln |z| + i \theta \mid \theta \in \arg z \}.$$

Call ${\rm Log}\,z = \ln |z| + i \,{\rm Arg}\,z$. Then you could say that $z^w = \{ e^{w \ell} \mid \ell \in \log z \}$.



To make sense of $e^{\rm something}$, we use the definition of the exponential with series.


calculus - Sum of infinite series $1+\frac22+\frac3{2^2}+\frac4{2^3}+\cdots$





How do I find the sum of $\displaystyle 1+{2\over2} + {3\over2^2} + {4\over2^3} +\cdots$



I know the sum is $\sum_{n=0}^\infty \frac{n+1}{2^n}$ and the ratio of consecutive terms is $\frac{n+2}{2(n+1)}$, but I don't know how to continue from here.


Answer



After establishing convergence, you could do the following:
$$S = 1 + \frac22 + \frac3{2^2}+\frac4{2^3}+\dots$$
$$\implies \frac12S = \frac12 + \frac2{2^2} + \frac3{2^3}+\frac4{2^4}+\dots$$
$$\implies S - \frac12S = 1+\frac12 + \frac1{2^2} + \frac1{2^3}+\dots$$
which is probably something you can recognise easily...
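Numerically, the partial sums indeed approach $4$ (a quick check of my own, not from the original answer):

    partial = sum((n + 1) / 2**n for n in range(60))
    print(partial)   # 4.0 to double precision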


Thursday, April 21, 2016

calculus - Are all limits solvable without L'Hôpital Rule or Series Expansion

Is it always possible to find the limit of a function without using L'Hôpital Rule or Series Expansion?



For example,



$$\lim_{x\to0}\frac{\tan x-x}{x^3}$$




$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$



$$\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}$$



$$\lim_{x\to0}\frac{e^x-x-1}{x^2}$$



$$\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}$$



$$\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}$$

recurrence relations - Markov-Chain with general state space - recurrent sets



I have an irreducible Markov Chain $(z_n )_{n\in \mathbb N } $ with state space $X$ and with transition-probability-kernel $K$, so $K(x,\cdot)$ is a probability measure (on the $\sigma$-Algebra $\mathcal{B}(X)$).
$\tau_B$ is the first entry time of the chain into the set $B$, ie $\tau_B :=\mathop{min}\{i\in \mathbb N | i\geq 1, z_i\in B\}$.



A set $B\in\mathcal{B}(X)$ was defined as recurrent, if



$\mathbb P_x\{\tau_B<\infty\}=1\qquad\forall x\in X$,




with $\mathbb P_x$ being the law of the chain starting in $x$.
Now lets assume there is a recurrent set $B$ and a set $C$ so that for some $\alpha >0$



$K(x,B)\leq\alpha K(x,C)\qquad \forall x\in X$.



I am quite sure, that $C$ has to be recurrent itself, because the chain would visit $C$ once for every $\frac{1}{\alpha}$-times it visits $B$. But how do I formally prove it?


Answer



$\alpha \geq 1$ means the chance to hit $C$ at any time is greater than the chance to hit $B$, so the chance to hit it in finite time will be greater or equal and we are done. In the case $\alpha <1$ let's say the stopping time of the $k$th-entry to $B$ is $\tau_B (k)$.




With the (strong) Markov property we get



$\mathbb{P}_x \{\tau_B (k)<\infty\}=1\quad\forall k\in \mathbb{N}_1$ which we shall call statement (I) and



$\mathbb{P}_x \{\tau_B (k+1)-\tau_B (k)<\infty\}=1\quad\forall k\in \mathbb{N}_1$ which we call statement (II).



Now because of $K(x,B)\leq αK(x,C)\quad \forall x\in X$,
we have



$\mathbb{P}_x \{\tau_C > \tau_B (k) \}\leq \prod_{i=1}^{k} \Big(1-\alpha\sum_{j=1}^{\infty} \mathbb{P}_x \{ z_{\tau_B (i)+j} \in B;\ z_{\tau_B (i)+m} \not\in B\ \forall m \lt j \}\Big)=(1-\alpha)^{k}$


as the sum is always equal to 1 (because of (II) ). One can understand it like this: each time $B$ was hit (for a total of $k$ times) there also was a chance to hit $C$, but $C$ was dodged.



Taking $k\rightarrow \infty$ together with (I) gives $\mathbb{P}_x \{\tau_C =\infty\}=0\quad\forall x\in X$.


Wednesday, April 20, 2016

integration - Why continuity of $X$ needed for $\int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X(g^{-1}(y))$?



Let $X$ be a random variable and $Y=g(X)$



Define
$$\tag{1}
\chi = \{x: f_X(x)>0\}\quad \text{and}\quad \mathcal{Y} = \{y:y=g(x) \text{ for some } x \in \chi\}
$$




Define $g^{-1}(y) = \{x\in \chi:g(x) = y\}$



Define: A random variable $X$ is continuous if $F_X(x)$ is a continuous function of $x$.




My question is: how come, in the theorem below, the statement in (b) requires X to be a continuous random variable but the statement in (a) does not




The relevant theorem is (Theorem 2.1.3 in Casella and Berger 2nd Edition)





Let $X$ have cdf $F_X(x)$, let $Y=g(X)$, and let $\chi$ and $\mathcal{Y}$ be defined as in (1)




  • (a) If $g$ is an increasing function on $\chi$, $F_Y(y) = F_X(g^{-1}(y))$ for $y\in \mathcal{Y}$


  • (b) If $g$ is a decreasing function on $\chi$ and $X$ is a continuous random variable, $F_Y(y) = 1-F_X(g^{-1}(y))$ for $y\in\mathcal{Y}$









Another way of stating what I am asking is that, prior to stating this theorem, Casella and Berger state




if $g(x)$ is an increasing function, then using the fact that $F_Y(y) = \int_{x\in\chi : g(x)\leq y} f_X(x)dx$, we can write
$$
F_Y(y) = \int_{x\in\chi : g(x)\leq y} f_X(x) \, dx = \int_{-\infty}^{g^{-1}(y)} f_X(x) \, dx = F_X(g^{-1}(y))
$$





If $g(x)$ is decreasing, then we have




$$
F_Y(y) = \int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X(g^{-1}(y))
$$
"The continuity of $X$ is used to obtain the second equality









My question (restated) is: How come, when $g(x)$ is an increasing function we do not need to use continuity of $X$, but we do for the case when $g(x)$ is decreasing?




  • (A side question; I will accept an answer so long as it answers the above question): this is continuity of the random variable, but the integral uses the PDF. What is the relation between continuity of $X$ and its pdf? (Specifically, I think there may be some strangeness if $F_X$, the CDF of $X$, is continuous but not differentiable.)




What came to my mind was the Fundamental Theorem of Calculus, but there is a version of it that doesn't require continuity of $f$, I think? Plus, here we have that $X$ is continuous, if that matters -- I'm not sure.



Answer



$$
\int_{g^{-1}(y)}^\infty f_X(x) \, dx = 1-F_X\left(g^{-1}(y)\right) \text{ ?}
$$
We have:
$$
1-F_X(g^{-1}(y)) = 1 - \Pr(X\le g^{-1}(y)) = \Pr\left(X>g^{-1}(y)\right)
$$
We may consider continuity of $F$ at $g^{-1}(y)$ or continuity of $F$ at points greater than $g^{-1}(y).$ Nothing about continuity at points less than $g^{-1}(y)$ can matter here.




In the first place $$\Pr(a< X < b) = \int_a^b f_X(x)\,dx\tag 1$$ only if $X$ has a density function $f_X,$ and that in itself requires continuity of $F_X$ (and in fact requires something more than just continuity). If $\Pr(X = c)>0,$ where $c$ is some number between $a$ and $b,$ then line $(1)$ above is not true of any function in the role of $f.$



However, statement $(b)$ of the theorem does not mention integration of any density function. The statement is in effect $\Pr(Y\le y) = 1- \Pr(X\le g^{-1}(y))$ if $F_X$ is continuous.



Cumulative distribution functions are non-decreasing. The only kind of discontinuity that a non-decreasing function can have is a jump. A jump in $F_X$ at $g^{-1}(y)$ would mean $\Pr(X = g^{-1}(y))>0.$ If that happens then
\begin{align}
\Pr(Y\le y) = {} & \Pr(Y=y) + \Pr(Y<y) \\[10pt]
= {} & \Pr(X=g^{-1}(y)) + \Pr(X>g^{-1}(y)) \\[10pt]
= {} & \Pr(X=g^{-1}(y)) + \int_{g^{-1}(y)}^\infty f_X(x)\,dx.
\end{align}

If the first term in the last line is positive rather than zero, then equality between the second term in the last line and $\Pr(Y\le y)$ is not true.



But now suppose it had said $\Pr(Y\ge y).$ Then we would have
$$
\Pr(Y\ge y) = \Pr(X\le g^{-1}(y)) = F_X(g^{-1}(y)).
$$
The difference results from the difference between $\text{“}<\text{''}$ and $\text{“} \le \text{''}$ in the definition of the c.d.f., which says $F_X(x) = \Pr(X\le x)$ and not $F_X(x) = \Pr(X<x)$.

As for the relationship between continuity and density functions, that is more involved. The Cantor distribution is a standard example, defined like this: A random variable $X$ will be in the interval $[0,1/3]$ or $[2/3,1]$ according to the result of a coin toss; then it will be in the upper or lower third of the chosen interval according to a second coin toss; then in the upper or lower third of that according to a third coin toss, and so on.




The c.d.f. of this distribution is continuous because there is no individual point between $0$ and $1$ that gets assigned positive probability.



But notice that there is probability $1$ assigned to a union of two intervals of total length $2/3,$ then probability $1$ assigned to a union of intervals that take up $2/3$ of that union of intervals, thus $4/9$ of $[0,1],$ then there is probability $1$ assigned to a set taking up $2/3$ of that space, thus $(2/3)^3 = 8/27,$ and so on. Thus there is probability $1$ that the random variable lies within a certain set whose measure is $\le (2/3)^n,$ no matter how big an integer $n$ is. The measure of that set must therefore be $0.$ If you integrate any function over a set whose measure is $0,$ you get $0.$ Hence there can be no function $f$ such that for every measurable set $A\subseteq[0,1]$ we have
$$
\Pr(X\in A) = \int_A f(x)\,dx,
$$
i.e. there can be no density function.



Thus the Cantor distribution has no point masses and also no probabilities that can be found by integrating a density function.




Thus existence of a density function is a stronger condition than mere continuity of the c.d.f.
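To make the construction concrete, here is a minimal simulation sketch (my own addition, plain Python, using a finite number of coin tosses as an approximation): it draws samples by repeatedly choosing the lower or upper third of the current interval, then checks that essentially all samples land inside the stage-$10$ construction set, whose total length is only $(2/3)^{10}$.

```python
import random

def cantor_sample(depth=40):
    """Approximate draw from the Cantor distribution: `depth` coin tosses,
    each choosing the lower or upper third of the current interval."""
    x, scale = 0.0, 1.0
    for _ in range(depth):
        scale /= 3.0
        if random.random() < 0.5:   # upper third chosen
            x += 2.0 * scale
    return x

def in_stage(x, k):
    """True if x survives the first k steps of the middle-thirds removal."""
    for _ in range(k):
        x *= 3.0
        if 1.0 < x < 2.0:           # fell into a removed middle third
            return False
        if x >= 2.0:                # map the upper third back to [0, 1]
            x -= 2.0
    return True

samples = [cantor_sample() for _ in range(100_000)]
inside = sum(in_stage(s, 10) for s in samples) / len(samples)
print("fraction of samples in the stage-10 set:", inside)   # ~1.0
print("total length of the stage-10 set:", (2 / 3) ** 10)   # ~0.017
```

No individual value ever carries positive probability in this construction, yet all the probability sits on a set of measure zero, which is exactly why no density can exist.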


calculus - Integrating $\int_0^1 \sqrt{\frac{\log(1/t)}{t}} \,\mathrm{d}t = \sqrt{2\pi}$



I'd like to evaluate the integral




$$\int_0^1 \sqrt{\frac{\log(1/t)}{t}} \,\mathrm{d}t.$$



I know that the value is $\sqrt{2\pi}$ but I'm not sure how to get there.



I've tried a substitution of $u = \log(1/t)$, which transforms the integral into



$$\int_0^\infty \sqrt{u e^{-u}} \,\mathrm{d}u.$$



This seems easier to deal with. But where do I go from here? I'm not sure.



Answer



The function $\Gamma(x)$ is defined as



$$\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \,\mathrm{d}t.$$



The general integral below on the left can be expressed in terms of the gamma function with the substitution $u = bt$:



$$\int_0^\infty t^{x-1} e^{-bt} \,\mathrm{d}t = \int_0^\infty \left( \frac{u}{b} \right)^{x-1} \frac{e^{-u}}{b} \,\mathrm{d}u = b^{-x} \Gamma(x).$$



This is the form of the integral in the question, with $x = 3/2$ and $b = 1/2$. Plugging in these values gives $(1/2)^{-3/2}\,\Gamma(3/2) = 2\sqrt{2}\cdot\tfrac{\sqrt{\pi}}{2} = \sqrt{2\pi}$, the desired result.
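A quick numerical sanity check (my addition, a minimal sketch assuming NumPy/SciPy): both forms of the integral and the gamma-function expression should all agree with $\sqrt{2\pi}\approx 2.5066$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Original integral on (0, 1); the integrand is singular but integrable at t = 0.
val1, _ = quad(lambda t: np.sqrt(np.log(1.0 / t) / t), 0.0, 1.0, limit=200)

# After the substitution u = log(1/t).
val2, _ = quad(lambda u: np.sqrt(u * np.exp(-u)), 0.0, np.inf)

# Gamma-function form b^(-x) * Gamma(x) with x = 3/2 and b = 1/2.
val3 = 0.5 ** (-1.5) * gamma(1.5)

print(val1, val2, val3, np.sqrt(2 * np.pi))   # all ~2.506628
```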



calculus - Find a function $f$ such that $f'(x)\geq 0$ but $f'$ is not continuous



I just started reading continuity, differentiability etc. So I was thinking of an example of following type:




Let $f: [a,b]\rightarrow \mathbb{R}$, where $f$ is a monotonically increasing continuous function, differentiable on $(a,b)$, or simply $f'(x)\geq 0$ for $x\in (a,b)$.



Can we find such a function for which $f'(x)$ is not continuous?





I could not find any; whatever function I take, $f'(x)$ turns out to be continuous. Does such a function even exist?


Answer



For every $x\in (-1,1)$ define
$$ f(x) = 100x+ x^2\sin\frac{1}{x} \qquad\text{with $f(0)=0$,}$$
which is differentiable and
$$ f'(x) = 100+ 2x\sin\frac{1}{x}-\cos\frac{1}{x} \qquad\text{with $f'(0)=100$.}$$
$f'$ is not continuous at $x=0$, and you may choose the interval $[a,b]$ around $x=0$ as it suits you. Indeed




$$ f'(x)=100+2x\sin\frac{1}{x}-\cos\frac{1}{x} \ge 100+ 2x\sin\frac{1}{x}-1$$



Since $|\sin a| \le |a|$ and $-1\le-\cos a\le 1$ then $|2x\sin\frac{1}{x}|\le 2$ i.e $$2x\sin\frac{1}{x}\ge -2 $$



therefore



$$ f'(x)=100+2x\sin\frac{1}{x}-\cos\frac{1}{x} \ge 100-2-1= 97>0$$ for every $x\in (-1,1)\setminus\{0\}$ (and $f'(0)=100>0$), so $f$ is increasing and $f'$ is not continuous at $x=0$.
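A small numerical check (my addition, a sketch with NumPy) that $f'$ stays above $97$ on $(0,1)$ while oscillating without a limit at $0$:

```python
import numpy as np

def fprime(x):
    """f'(x) = 100 + 2x sin(1/x) - cos(1/x) for x != 0; f'(0) = 100 separately."""
    return 100.0 + 2.0 * x * np.sin(1.0 / x) - np.cos(1.0 / x)

x = np.linspace(1e-7, 0.999, 1_000_000)       # sample (0, 1); the negative side is analogous
print(fprime(x).min())                         # never drops below 97
print(fprime(np.array([1e-3, 1e-4, 1e-5])))    # keeps oscillating near 0, so no limit at 0
```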


calculus - Double integral with two variables in the exponential: $\int_0^\pi \int_0^\infty e^{-ibx\cos\theta - x/a} x^2 \sin\theta ~dx~d\theta$.



I'm having trouble with the substitutions in the following integral:





$$\int_0^\pi \int_0^\infty e^{-ibx\cos\theta - {x/a}} x^2 \sin\theta ~dx~d\theta $$




My attempt:



Let $$u = \cos\theta$$ then $$ du = -\sin\theta~d\theta$$



Then we have



$$-1 \int_0^\pi \int_0^\infty e^{-ibxu - {x/a}} x^2 ~du~dx $$




How do I separate the u out of the exponential to integrate separately with respect to $u$ and then $x$? Am I doing the wrong substitution?


Answer



\begin{align}
\int_0^\pi \int_0^\infty e^{-ibx\cos\theta - \frac1ax} x^2 \sin\theta ~dx~d\theta
&= \int_{-1}^1\int_0^\infty e^{-ibxu - \frac{x}{a}} x^2 ~dx ~du \\
&= \int_0^\infty\left(\int_{-1}^1 e^{-ibxu - \frac{x}{a}} x^2 ~du\right) ~dx \\
&= \int_0^\infty x^2e^{-\frac{x}{a}}\left(\dfrac{e^{ibx}}{ibx} - \dfrac{e^{-ibx}}{ibx}\right) ~dx \\
&= \dfrac{2}{b}\int_0^\infty xe^{-\frac1ax}\sin bx \,dx \\
&= \dfrac{2}{b}{\cal L}(x\sin bx)\Big|_{s=\frac1a} \\
&= \dfrac{2}{b}\cdot\dfrac{2\cdot\frac1a\cdot b}{(\frac{1}{a^2}+b^2)^2} \\
&= \dfrac{4a^3}{(1+a^2b^2)^2}
\end{align}
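A numerical spot check of the closed form (my addition; the values of $a$ and $b$ are arbitrary, and SciPy is assumed): the imaginary part should vanish and the real part should match $4a^3/(1+a^2b^2)^2$.

```python
import numpy as np
from scipy.integrate import dblquad

a, b = 0.7, 1.3

def integrand(x, theta, part):
    val = np.exp(-1j * b * x * np.cos(theta) - x / a) * x**2 * np.sin(theta)
    return val.real if part == "re" else val.imag

# dblquad's first argument is func(inner, outer): here x is inner, theta is outer.
re_part, _ = dblquad(lambda x, th: integrand(x, th, "re"), 0, np.pi, 0, np.inf)
im_part, _ = dblquad(lambda x, th: integrand(x, th, "im"), 0, np.pi, 0, np.inf)

print(re_part, im_part)                       # imaginary part ~ 0
print(4 * a**3 / (1 + a**2 * b**2) ** 2)      # closed form from the answer, ~0.4105
```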


Tuesday, April 19, 2016

sequences and series - Finding formula for partial sum of polynomial terms?


For example, we know that the following is true (and can be easily derived):


$\sum\limits_{x=1}^{n}x = \frac{1}{2}n(n+1)$


But, what if we wanted to find the sum of a series like this:


$\sum\limits_{x=1}^{n}x(2x+1)$



Wolfram|Alpha tells me that the answer is $\frac{1}{6}n(n+1)(4n+5)$, but I'm at a loss as to how it came up with this answer. Is there a simple method for finding the general formula for a partial sum of the form $\sum\limits_{x=1}^{n}y(x)$ where $y(x)$ is a polynomial with rational roots?


Answer



Let $p(x)=\sum\limits_{i=0}^{n} a_{i}x^{i}$


Say you want to find $\sum p(x)$.


There is a general formula for $\sum x^{k}$.


$$1^{k}+2^{k}+\cdots+n^{k}=\sum\limits_{i=1}^{k}S(k,i)\binom{n+1}{i+1}i!\\=\frac{n^{k+1}}{k+1}+\frac{1}{2}n^{k}+B_{1}\frac{k}{2!}n^{k-1}-\cdots$$


where $S(k,i)$ is the Stirling number of the second kind and $B_{r}$ denotes the Bernoulli numbers.


So using that you can find the sum of any polynomial.


The given sum: $$\sum x(2x+1)=2\sum x^{2}+\sum x=2\binom{n+1}{2}+4\binom{n+1}{3}+\binom{n+1}{2}=\frac{1}{6}n(n+1)(4n+5)$$
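A quick symbolic check of the closed form (my addition, assuming SymPy is available):

```python
import sympy as sp

n, x = sp.symbols('n x', integer=True, positive=True)

direct_sum = sp.summation(x * (2 * x + 1), (x, 1, n))
closed_form = sp.Rational(1, 6) * n * (n + 1) * (4 * n + 5)

print(sp.simplify(direct_sum - closed_form))   # 0, so the formula checks out
```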


real analysis - Show that the following definitions all give norms on $S_F$

$S_F$ is the space of all real sequences $\mathbf a=(a_n)_{n=1}^{\infty}$ such that each sequence $\mathbf a\in S_F$ is eventually zero.



Show that the following definitions all give norms on $S_F$,



$$\|\mathbf a\|_{\infty}=\max_{n\geq1}\lvert a_n\rvert$$ $$\|\mathbf a\|_1=\sum_{n=1}^{\infty}\lvert a_n\rvert$$ $$\|\mathbf a\|_2=\Bigg(\sum_{n=1}^{\infty}\lvert a_n\rvert^2\Bigg)^{1/2}$$



I'm a bit unsure what I should do here. Do I just prove that all 3 functions are norms based on the properties required for a function to be a norm and the extra information that the sequences are all eventually zero?




PS. Does the fact that the sequences are eventually zero imply that they tend to zero? I would have thought this is the case but I'm not sure if I can assume it.

elementary number theory - How to prove $8^{2^n} - 5^{2^n}$ is divisible by $13$ where $n\in\mathbb{N}\setminus\{0\}$ with induction?



So the question is as above is shown:





How to prove $8^{2^n} - 5^{2^n}$ is divisible by $13$?




$n=1: 8^2 - 5^2 = 39$ is a multiple of $13$.



I.H.: Suppose $8^{2^m} - 5^{2^m}$ is divisible by $13$ for some fixed $m \in \mathbb{N}\setminus\{0\}$.



I got stuck on this $n= m + 1$ case.




$8^{2^{m+1}} - 5^{2^{m+1}}= 8^{2^m\cdot 2}- 5^{2^m\cdot 2}$



Can somebody please help me?


Answer



$\bmod 13\!:\,\ 8\equiv -5\,\Rightarrow\, 8^{\large 2}\!\equiv 5^{\large 2}\,\Rightarrow\, 8^{\large 2N}\!\equiv 5^{\large 2N}\,$ by the Congruence Power Rule.



Remark $ $ Replying to comments, below I show how the inductive proof of the linked Power Rule can be expressed without knowledge of congruences, using analogous divisibility rules.



$\begin{align}{\bf Divisibility\ Product\ Rule}\ \ \ \
&m\mid \ a\ -\ b\qquad {\rm i.e.}\quad \ \ a\,\equiv\, b\\
&m\mid \ \ A\: -\: B\qquad\qquad \ A\,\equiv\, B\\
\Rightarrow\ \ &m\mid aA - bB\quad \Rightarrow\quad aA\equiv bB\!\pmod{\!m}\\[.2em]
{\bf Proof}\,\ \ m\mid (\color{#0a0}{a\!-\!b})A + b(\color{#0a0}{A\!-\!B}) &\,=\, aA-bB,\ \ \text{since $\,m\,$ divides the $\rm\color{#0a0}{green}$ terms by hypothesis.}\end{align}$



$\begin{align}{\bf Divisibility\ Power\ Rule}\qquad
&m\mid a\ -\ b\qquad {\rm i.e.}\qquad a\equiv b\\
\Rightarrow\ \ & m\mid a^n-b^n\quad\ \Rightarrow\quad\,\ \ a^n\!\equiv b^n\pmod{\!m}
\end{align}$




Proof $\ $ The base case $\,n=0\,$ is $\,m\mid 1-1\,$ so true, and the inductive step follows by applying the Product Rule, namely $\ m\mid a- b,\,a^n- b^n \Rightarrow\, m\mid a^{n+1}-b^{n+1}.\,$



Your exercise follows from the specialization $\,a = 8^2,\ b = 5^2,\ m = 13\, $ in the above Power Rule. More explicitly, note that we have $\,\color{#c00}{13}\mid 8^2-5^2 = (\color{#c00}{8\!+\!5})(8\!-\!5),\,$ so powering that yields



$\begin{align}{\bf Divisibility\ Power\ Rule}\qquad
&13\mid 8^2 - 5^2\qquad {\rm i.e.}\qquad 8^2\equiv\, 5^2\\
\Rightarrow\ \ & 13\mid 8^{\large 2n}\!-\!5^{\large2n}\quad\ \Rightarrow\quad\,\ \ 8^{\large 2n}\!\!\equiv 5^{\large 2n}\!\!\!\pmod{\!13}
\end{align}$



Thus it holds true for all even exponents $2n$, which includes your exponents $2^n,\ n\ge 1.$
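A tiny numerical confirmation (my addition), using Python's built-in modular exponentiation:

```python
# Check that 13 divides 8^(2^n) - 5^(2^n) for the first few n, working mod 13 throughout.
for n in range(1, 12):
    e = 2 ** n
    assert (pow(8, e, 13) - pow(5, e, 13)) % 13 == 0
print("13 | 8^(2^n) - 5^(2^n) for n = 1, ..., 11")
```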



Monday, April 18, 2016

linear algebra - Eigenvalues of a nxn matrix without calculations




I have a question about the following matrix:



$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
\end{bmatrix}
$$



Find the eigenvalues without calculations and explain your answer. Now, I was thinking about this problem. And I thought, yeah, OK, if you try the vector $(1,1,1)$, you can find $6$ as one eigenvalue (and I know you have $0$ with double multiplicity too). But then you are doing a sort of guessing/calculation work.



I see that the columns are linearly dependent. So I know the dimension of the column space and of the null space.




Thank you in advance.



EDIT: follow up question:



Ok, so you find that the dimension of the null space is $2$, so there are $2$ eigenvectors for the eigenvalue $0$. Now my question is: can the dimension of the eigenspace be bigger than the number of eigenvalues? I guess not. I know it can be smaller.


Answer



Notice that the rank is $1$, and hence $0$ is an eigenvalue of multiplicity $2$.
Since the trace equals the sum of the eigenvalues, the last eigenvalue is $6$.



It is also rather easy to find all eigenvectors without a lot of work. For $6$ the vector is $(1,1,1)$. For $0$ you can take basis $(2,-1,0),(3,0,-1)$.
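Checking the "no calculation" answer numerically (my addition, using NumPy):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]], dtype=float)

print(np.round(np.linalg.eigvals(A), 10))   # 6 and a double eigenvalue 0
print(A @ np.array([1.0, 1.0, 1.0]))        # = 6 * (1, 1, 1): eigenvector for 6
print(A @ np.array([2.0, -1.0, 0.0]))       # = 0: a null-space vector, eigenvalue 0
```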



probability - Expected value of sum squared is sum of expected value squared?



Consider the following expression:



$ X \sim BIN(1, p) $



$ Var(\bar{X})=Var(\frac{\sum_i{X_i}}{n}) = \frac{1}{n^2} Var(\sum_i X_i) = \frac{1}{n^2} \left( \sum_i E(X_i^2) - ( \sum_i E(X_i) )^2 \right) $



$ = \frac{1}{n^2} \left( \sum_i (pq+p^2) - (np)^2 \right) = \frac{1}{n^2} (npq+np^2 -n^2p^2) = \frac{np(1-p)}{n} $




Now I know that the C.R.L.B. is equal to this: $ Var(T) \geq \frac{p(1-p)}{n} $



This means that I must get the same expression to show that $ \bar{X} $ is the UMVUE.



Clearly my expression has an $n$ which shouldn't be there. I think my mistake is in this part:



$ \ \ E(\sum_i X_i)^2 = \left( \sum_i E(X_i) \right)^2 = (np)^2 = n^2p^2 $



So I think it should be:




$ \ \ E(\sum_i X_i)^2 = \sum_i \left( E(X_i)^2 \right) = np^2 $



Because that does yield the correct answer.



So is this indeed the mistake or did I make another mistake?
And I would also greatly appreciate it if someone could explain why this is wrong because I don't get the reasoning behind it.



Thanks in advance.


Answer



$E(\sum_i X_i)^2 $ is ambiguous: it could be $\left(E(\sum_i X_i)\right)^2$ or $E\left((\sum_i X_i)^2\right)$




For a binomial distribution (the sum of $n$ independent Bernoulli random variables), the former is $n^2p^2$ (the square of the mean), while the latter is $np(1-p) + n^2p^2$ (the variance plus the square of the mean).



In your expression, you could have



$ Var(\bar{X})=Var(\frac{\sum_i{X_i}}{n}) = \frac{1}{n^2} Var(\sum_i X_i) = \frac{1}{n^2} \sum_i Var( X_i) = \frac{np(1-p)}{n^2}= \frac{p(1-p)}{n}$



or



$ Var(\bar{X})=Var(\frac{\sum_i{X_i}}{n}) = \frac{1}{n^2} Var(\sum_i X_i) = \frac{1}{n^2} \left( E\left((\sum_i X_i)^2\right) - \left(E(\sum_i X_i)\right)^2 \right) $
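A short simulation (my addition; the parameters are arbitrary) confirming that $Var(\bar X)$ comes out as $p(1-p)/n$ and not $p(1-p)$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 50, 200_000

# Each row is a sample of n Bernoulli(p) variables; take the mean of each row.
xbar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

print(xbar.var())           # ~0.0042
print(p * (1 - p) / n)      # 0.0042
```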



How to convert a series to an equation?










I don't know the technical language for what I'm asking, so the title might be a little misleading, but hopefully I can convey my purpose to you just as well without.



Essentially I'm thinking of this: the series $4^n + 4^{n-1} + \cdots + 4^{n-n}$.




I suppose this is the summation of the series $4^n$ from $n$ to 0.



But is there any way to express this as a pure equation, not as a summation of a series?



If so, how do you figure out how to convert it?


Answer



In general, for $x\neq 1$ it is true that
$$\sum_{k=0}^nx^k=1+x+\cdots+x^n=\frac{x^{n+1}-1}{x-1}.$$
So, in your case in particular, we have that

$$\sum_{k=0}^n4^{n-k}=4^n+\cdots+4+1=1+4+\cdots+4^n=\sum_{k=0}^n4^k=\frac{4^{n+1}-1}{3}.$$
Alternatively, one could pull out a factor of $4^n$ from all terms, and compute
$$\sum_{k=0}^n4^{n-k}=4^n\sum_{k=0}^n(\tfrac{1}{4})^k=4^n\cdot\frac{(\frac{1}{4})^{n+1}-1}{(\frac{1}{4})-1}=4^n\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{\frac{3}{4}}=4^{n+1}\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{3}=\frac{4^{n+1}-1}{3}.$$
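A quick check of the closed form for small $n$ (my addition):

```python
# Verify sum_{k=0}^{n} 4^k == (4^(n+1) - 1) / 3 for n = 0, ..., 9 (exact integer arithmetic).
for n in range(10):
    assert sum(4 ** k for k in range(n + 1)) == (4 ** (n + 1) - 1) // 3
print("the closed form matches the direct sum for n = 0..9")
```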


calculus - Derivative of a single point (not a point in a function, but a point).




I know that the derivative of a line is its slope, and that a constant is a line with slope zero, so the derivative is zero.



But how does it work if instead of a constant, or a line, you had a single point? Is it just simply undefined? I tried to google for something about this but it always just returns info on "how to find the derivative at a point in a function". Which is far more useful but doesn't answer my curious question.



Let's assume you just have an X,Y graph and are given the point (5,10). The definition of a derivative includes a function so I'm guessing it's just not defined but wanted some more expert input.


Answer



You can't find an answer to your curious question because the question makes no sense. Derivatives were invented to study change. When your position changes over time the derivative is your velocity. When you climb a mountain and your height changes as you move horizontally the derivative is the slope. That's essentially the same slope you see in a graph when the value of $y$ changes as $x$ changes. There is no way to think about how an isolated point changes. On a particular mountain trail at a particular point there's a slope, but a point floating in the air has no slope.


Sunday, April 17, 2016

linear algebra - Constant term in the characteristic polynomial of A



a. Show that if $A$ is any $ n \times n$ matrix, the constant term in the characteristic polynomial of $A$ is $\operatorname{det}(A)$.



b. Deduce that $A$ is invertible if and only if $\lambda = 0$ is not an eigenvalue of $A$.



I know that the characteristic polynomial $P(λ) = \operatorname{det}( A - \lambda I_n )$ and $\lambda$ is an eigenvalue if and only if $P(λ) = \operatorname{det}( A - \lambda I_n )=0$ but I'm not sure where to go from there.


Answer




Hint:



From the definition we have:
$$
P(\lambda)=
\det \begin{bmatrix}
a_{11}-\lambda & a_{12} & \cdots &a_{1n}\\
a_{21} & a_{22}-\lambda & \cdots &a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots &a_{nn}-\lambda\\
\end{bmatrix}=
(-1)^n(\lambda^n+c_1\lambda^{n-1}+c_2\lambda^{n-2}+\cdots +c_n)
$$



take $\lambda=0$
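A concrete check of the hint (my addition, with SymPy; the matrix is an arbitrary example): evaluating $P(\lambda)=\det(A-\lambda I)$ at $\lambda=0$ returns exactly $\det(A)$.

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1, 0],
               [3, 5, 7],
               [1, 0, 4]])

P = (A - lam * sp.eye(3)).det()     # characteristic polynomial det(A - lambda*I)
print(sp.expand(P))
print(P.subs(lam, 0), A.det())      # constant term P(0) equals det(A) = 35
```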


trigonometry - Something strange regarding Euler's Identity



By Euler's identity,
$$e^{i\theta}=\cos\theta+i\sin\theta$$
And so if we let
$$\phi=e^{i\theta}=\cos\theta+i\sin\theta$$
Then we have
$$\phi=e^{i\theta}$$

$$\theta=\frac{\ln\phi}{i}$$
$$\theta=-i\ln\phi$$
and
$$\phi=\cos\theta+i\sin\theta$$
and, using an inversion formula that I explained in my question here,
$$\theta=\arcsin\frac{\phi}{\sqrt{1^2+i^2}}-\arcsin\frac{1}{\sqrt{1^2+i^2}}$$
$$\theta=\arcsin\frac{\phi}{\sqrt{1-1}}-\arcsin\frac{1}{\sqrt{1-1}}$$
$$\theta=\arcsin\frac{\phi}{0}-\arcsin\frac{1}{0}$$
and so it seems that
$$-i\ln\phi=\arcsin\frac{\phi}{0}-\arcsin\frac{1}{0}$$

Is there any meaning in this whatsoever? Why does this happen when I try to manipulate Euler's formula?


Answer



Your problem resides in taking the $\ln$ of a complex exponential. You assert that $e^{i \theta} = \phi$, but notice that:



$$e^{i \tau} = e^{2 i \tau} = e^{3 i \tau} = ... = 1$$



(I use $\tau = 2 \pi$). This shows that the complex exponential is not a one-to-one function, and so you can't just take its inverse with $\ln$. Thus your assertion that:



$$ \theta = \frac{\ln \phi}{i}$$




May not be true.



[EDIT:] In regards to why you obtain a division by zero inside your $\arcsin$, the issue lies in the derivation of your formula for inverting $a \cos x + b \sin x$.



In your question that you referenced, you multiply and divide your equation by $\sqrt{a^2 + b^2}$. In this case, since $a^2 + b^2 = 0$, you're dividing by $0$, which isn't allowed. You may have to find a different way to get an inverse while avoiding dividing by zero.
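A short numeric illustration (my addition) of why the complex exponential has no global inverse: many arguments give the same value, and the logarithm only returns the principal one.

```python
import cmath

tau = 2 * cmath.pi
print([cmath.exp(1j * k * tau) for k in range(4)])   # all (numerically) equal to 1

theta = 5.0                          # lies outside the principal range (-pi, pi]
phi = cmath.exp(1j * theta)
recovered = (cmath.log(phi) / 1j).real
print(theta, recovered)              # 5.0 vs 5.0 - 2*pi ~ -1.2832
```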


Saturday, April 16, 2016

calculus - How come $1^{\infty}$ = undefined, while $2^{\infty} = \infty$ and $0^{\infty} = 0$?

$1^\infty$ = undefined


$2^\infty = \infty$


$0^\infty = 0$



Why is $1^\infty$ undefined? People were trying to explain to me that infinity isn't part of the real numbers, yet $2^\infty$ and $0^\infty$ somehow ARE defined?


In my opinion $1^\infty = 1$. I mean, isn't it easy to prove, since $1\times 1\times 1 \times \cdots=1$ no matter how many times you do it?

geometry - The staircase paradox, or why $\pi\ne4$




What is wrong with this proof?





Is $\pi=4?$


Answer



This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$, then take shorter and shorter stair-steps and the length is $2$ but your path approaches the diagonal. So $\sqrt{2}=2$.



In both cases, you are approaching the area but not the path length. You can make this more rigorous by breaking into increments and following the proof of the Riemann sum. The difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.




Edit: making the square more explicit. Imagine dividing the diagonal into $n$ segments and a stairstep approximation. Each triangle is $(\frac{1}{n},\frac{1}{n},\frac{\sqrt{2}}{n})$. So the area between the stairsteps and the diagonal is $n \frac{1}{2n^2}$ which converges to $0$. The path length is $n \frac{2}{n}$, which converges even more nicely to $2$.
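The same comparison in numbers (my addition): as the number of stair-steps grows, the area between the staircase and the diagonal vanishes, but the staircase length never moves from $2$.

```python
import math

for n in (1, 10, 100, 1000, 10000):
    area_gap = n * (1.0 / (2 * n * n))   # n small triangles, each of area 1/(2n^2)
    path_len = n * (2.0 / n)             # n steps, each contributing length 2/n
    print(n, area_gap, path_len, math.sqrt(2))   # area -> 0, length stays 2 != sqrt(2)
```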


inequality - Short and intuitive proof that $\left(\frac{n}{k}\right)^k \leq \binom{n}{k}$


The simple inequality that $\left(\frac{n}{k}\right)^k \leq \binom{n}{k}$ has a number of different proofs. But is there a particularly intuitive, short and elegant proof, for example one that uses the natural interpretation of binomial coefficients? I would ideally like a proof which is also accessible to students with very limited prerequisite knowledge.



Here is the best proof that I have seen which is less intuitive than I was hoping for.


First we prove that $$\frac{n-i}{k-i} \geq \frac{n}{k}$$ for $0\le i < k\le n$. Cross-multiplying, this is equivalent to $k(n-i) \geq n(k-i)$, i.e. $i(n-k)\geq 0$, so $(n-i)/(k-i) \geq n/k.$


Now we multiply these inequalities over $i\in\{0,1,\ldots,k-1\}$ to obtain $$\frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots 1} \geq \frac{n^k}{k^k},$$ or equivalently $\binom{n}{k}\geq (n/k)^k$.


Answer



$n^k$ is the number of ways of picking $k$ balls from $n$ balls with repetition allowed. One can generate all the possible ways by first deciding which $k$ out of $n$ balls to draw and draw from the $k$ selected balls instead. There are $\binom{n}{k}$ ways to choose the $k$ balls and $k^k$ ways to pick from the selected $k$ balls with repetition allowed. This gives us $$n^k \le\binom{n}{k} k^k \quad\iff\quad \left(\frac{n}{k}\right)^k \le \binom{n}{k}$$
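A brute-force check of the inequality for small parameters (my addition):

```python
from math import comb

# Verify (n/k)^k <= C(n, k) for all 1 <= k <= n <= 30 (tiny slack for floating point).
for n in range(1, 31):
    for k in range(1, n + 1):
        assert (n / k) ** k <= comb(n, k) + 1e-9
print("the inequality holds for all 1 <= k <= n <= 30")
```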


Infinite series and logarithm


Is it true that:



$$\log_e 2 = \frac12 + \frac {1}{1\cdot2\cdot3} + \frac {1}{3\cdot4\cdot5}+ \frac{1}{5\cdot6\cdot7}+ \ldots$$



It was one of my homework problems. Thanks!



Answer



This calculation may look a bit roundabout, but it accurately reflects my thought processes (otherwise known as directed tinkering!) in solving the problem.


Start with the Maclaurin series $$\ln(1+x)=\sum_{n\ge 1}(-1)^{n+1}\frac{x^n}n\;,$$ which is valid for $-1<x\le 1$. Setting $x=1$ gives

$$\begin{align*} \ln 2&=\sum_{n\ge 1}\frac{(-1)^{n+1}}n\\ &=1-\frac12+\frac13-\frac14\pm\ldots\\ &=\left(1-\frac12\right)+\left(\frac13-\frac14\right)+\left(\frac15-\frac16\right)+\ldots\\ &=\sum_{n\ge 1}\left(\frac1{2n-1}-\frac1{2n}\right)\\ &=\sum_{n\ge 1}\frac1{2n(2n-1)}\;. \end{align*}$$


The series in the problem is


$$\begin{align*} \frac12+\frac1{1\cdot2\cdot3}+\frac1{3\cdot4\cdot5}+\frac1{5\cdot6\cdot7}+\ldots&=\frac12+\sum_{n\ge 1}\frac1{(2n-1)(2n)(2n+1)}\\ &=\frac12+\sum_{n\ge 1}\left(\frac1{2n(2n-1)}\cdot\frac1{2n+1}\right)\\ &=\frac12+\sum_{n\ge 1}\frac1{2n(2n-1)}\left(1-\frac{2n}{2n+1}\right)\\ &=\frac12+\sum_{n\ge 1}\frac1{2n(2n-1)}-\sum_{n\ge 1}\left(\frac1{2n(2n-1)}\cdot\frac{2n}{2n+1}\right)\\ &=\frac12+\sum_{n\ge 1}\frac1{2n(2n-1)}-\sum_{n\ge 1}\frac1{(2n-1)(2n+1)}\\ &=\frac12+\sum_{n\ge 1}\frac1{2n(2n-1)}-\frac12\sum_{n\ge 1}\left(\frac1{2n-1}-\frac1{2n+1}\right)\;. \end{align*}$$


Now notice that $$\sum_{n\ge 1}\left(\frac1{2n-1}-\frac1{2n+1}\right)$$ telescopes, so it can be evaluated easily; do that, and you’ll have the desired result. (You also have to justify the various manipulations that rearrange the terms of the series in the last long calculation, but that’s not a problem: everything in that calculation is absolutely convergent.)
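A numeric check of the rearranged series (my addition): the partial sums of $\frac12+\sum_{n\ge1}\frac1{(2n-1)2n(2n+1)}$ settle on $\ln 2$.

```python
import math

total = 0.5
for n in range(1, 1001):
    total += 1.0 / ((2 * n - 1) * (2 * n) * (2 * n + 1))

print(total, math.log(2))   # both ~0.6931
```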


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...