Thursday, March 31, 2016

sequences and series - How do I evaluate this: $\sum_{k=1}^{+\infty}\frac{(-1)^k}{\sqrt{k+1}+\sqrt{k}}$?

I have tried to evaluate the below sum using some standard alternating-series techniques, but I did not succeed, so my question here is:





Question: How do i evaluate this sum




$\sum_{k=1}^{+\infty}\frac{{(-1)^k}}{\sqrt{k+1}+\sqrt{k}}$ ?


Note: Wolfram Alpha shows its value here.

calculus - If all directional derivatives of $f$ at a point $p$ exist, is $f$ differentiable?



I am a little bit confused by the various theorems concerning the differentiability of a multivariable function.
Let $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$ have all directional derivatives at the point $p$. Does this directly imply that $f$ is differentiable at $p$? I know that the converse is true: if $f$ were differentiable, all of its directional derivatives would exist.


Answer




No, this is not true. Take, for instance,$$\begin{array}{rccc}f\colon&\mathbb{R}^2&\longrightarrow&\mathbb{R}\\&(x,y)&\mapsto&\begin{cases}\frac{x^2y}{x^4+y^2}&\text{ if }(x,y)\neq(0,0)\\0&\text{ otherwise.}\end{cases}\end{array}$$You can check that, at $(0,0)$, every directional derivative exists (in the direction $(u,v)$ it equals $u^2/v$ when $v\neq 0$, and $0$ when $v=0$). However, $f$ is not differentiable there; it is not even continuous at the origin, since $f(x,x^2)=\frac12$ for every $x\neq 0$ while $f(0,0)=0$.


Radius of convergence of the series $\sum\limits_{n=1}^\infty (-1)^n\frac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n$



How do I find the radius of convergence for this series:




$$ \sum_{n=1}^\infty (-1)^n\dfrac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n $$



Treating it as an alternating series, I got
$$x< \dfrac{n+1}{2n+1}$$



And absolute convergence tests yield
$$-\dfrac{1}{2}

I feel like it's simpler than I expect but I just can't get it. How do I do this?




Answer in book: $\dfrac{3}{2}$


Answer



The ratio test allows us to determine the radius of convergence.



For $n \in \mathbb{N}^{\ast}$, let :



$$ a_{n} = (-1)^{n}\frac{1 \times 3 \times \ldots \times (2n-1)}{3 \times 6 \times \ldots \times (3n)}. $$



Then,




$$ \begin{align*}
\frac{\vert a_{n+1} \vert}{\vert a_{n} \vert} &= {} \frac{1 \times 3 \times \ldots \times (2n-1) \times (2n+1)}{3 \times 6 \times \ldots \times (3n) \times (3n+3)} \times \frac{3 \times 6 \times \ldots \times (3n)}{1 \times 3 \times \ldots \times (2n-1)} \\[2mm]
&= \frac{2n+1}{3n+3} \; \mathop{\longrightarrow} \limits_{n \to +\infty} \; \frac{2}{3}.
\end{align*}
$$



Since the ratio $\displaystyle \frac{\vert a_{n+1} \vert}{\vert a_{n} \vert}$ converges to $\displaystyle \frac{2}{3}$, the series converges absolutely for $\vert x \vert < \frac{3}{2}$ and diverges for $\vert x \vert > \frac{3}{2}$; that is, $R = \displaystyle \frac{3}{2}$.


Wednesday, March 30, 2016

real analysis - How to Prove something is differentiable




I know how differentiability is defined via the limit of the difference quotient $(f(x) - f(x_0)) / (x - x_0)$ as $x$ goes to $x_0$, but I was wondering if there are other useful theorems / lemmas I could use to show a function is differentiable.



Note: I am aware of the technique that if I can express my function in terms of a sum/product/quotient of functions that I know are differentiable, then I can just use the product rule, etc. to find the derivatives on top of showing that the function is differentiable.



But are there other lemmas or theorems that are also helpful? (For example, an equivalent definition of continuity is that preimages of open sets are open)


Answer



There are several theorems that you did not mention:




  • if $f$ and $g$ are differentiable, then $g\circ f$ is differentiable too (and $(g\circ f)'=(g'\circ f)\times f'$);


  • if $f$ is differentiable and invertible and $f'$ is never $0$, then $f^{-1}$ is differentiable too (and $(f^{-1})'=\frac1{f'\circ f^{-1}}$);

  • if $(f_n)_{n\in\mathbb N}$ is a sequence of differentiable functions which converges pointwise to a function $f$ and if the sequence $(f_n')_{n\in\mathbb N}$ converges uniformly to a function $g$, then $f$ is differentiable (and $f'=g$);

  • if $f$ is continuous and $F(x)=\int_a^xf(t)\,\mathrm dt$, then $F$ is differentiable (and $F'=f$).


calculus - Mean Value Theorem




Good day! I'm aware of the basic concept of the mean value theorem, but its application in proofs confuses me. This is how it goes:



By mean Value theorem:
$$2 - t^{n-1}(1+t) = (1 - t)\left[\theta^{n-1} + (n - 1)\,\theta^{n-2}(1 + \theta)\right]$$
$$ < (2n - 1)(1 + t),$$
where $t < \theta < 1$.



How does this happen? I have no idea... I think this is a complex form of the mean value theorem. Any idea would be a great help, thanks!


Answer



It looks like you are applying the mean value theorem to the function $$ f(x) = x^{n-1}(1+x)=x^{n-1}+x^n, \quad f'(x) = (n-1)x^{n-2}+nx^{n-1} $$ on the closed interval $[t, 1]$.




We get $$f(1) - f(t)=(1-t)\,f'(\theta) \text{ for some } \theta \in (t, 1).$$



That is, $$ 2- t^{n-1}(1+t) = (1-t)\left((n-1)\theta^{n-2}+n\theta^{n-1}\right).$$


elementary number theory - Modular Multiplicative Inverse / Euclidean Algorithm




There's a method of obfuscating programs which is around that turns code like:



my_int32 = my_value


into



my_int32 = my_value * 1448272385



and



return my_int32


into



return my_int32 * -1322181119



Now, after I have looked around a bit I think that this is related to the Modular multiplicative inverse and the Extended euclidean algorithm is used to find the pairing integer of an existing value.



Now, what I don't understand is how multiplying an integer value twice, with overflow, results in the original integer. E.g. 1234 * 1448272385 * -1322181119 results in 1234. What is the logic behind this? What properties of these numbers cause this? I'm really terrible at maths, so my apologies if this is either obvious or my question is illogical.


Answer



The numbers are inverses of each other $\pmod {2^{32}}$. A further obfuscation is the use of signed integers. Here you have
$$1448272385\times(2^{32}-1322181119) \equiv 1\pmod {2^{32}}$$
But note that this trick only works for odd numbers, not for even ones, because even numbers have no inverse $\pmod {2^{32}}.$



The reason why you see no explicit $\pmod {2^{32}}$ is, that this is the way many programming languages work with 32-bit integers.
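To see this concretely, here is a minimal C sketch (my illustration, not from the original post); unsigned 32-bit multiplication in C wraps around, which is exactly reduction modulo $2^{32}$:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* the two constants from the question; -1322181119 reinterpreted as
       unsigned is 2^32 - 1322181119 = 2972786177 */
    uint32_t a = 1448272385u;
    uint32_t b = (uint32_t)-1322181119;

    printf("a*b mod 2^32      = %u\n", a * b);          /* prints 1 */
    printf("1234*a*b mod 2^32 = %u\n", 1234u * a * b);  /* prints 1234 */
    return 0;
}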


calculus - Implicit derivatives and logarithmic derivatives: $\left(x^{\sqrt{x}}\right)'=?$



How would I find the derivative with respect to $x$ of $$y = x^{\sqrt{x}}\,?$$ I can find the correct answer using the method of logarithmic differentiation that my book mysteriously suggests for some reason, BUT how can I find the derivative without using that method?



Furthermore, how would I know when I should use logarithmic differentiation?
And why can I not get the correct answer using the chain rule alone?



Answer



First of all, logarithmic differentiation is not as mysterious as it might seem; rather, it makes the calculations visibly simpler (in fact they are also simpler to type in $\mathrm\LaTeX$). The idea is that a complicated expression of the type $\{f(x)\}^{g(x)}$ can be handled easily by first taking its logarithm.



Thus if $$y = f(x) = x^{\sqrt{x}}\tag{1}$$ then we have $$\log y = \sqrt{x}\log x\tag{2}$$ The next step is to differentiate the above equation with respect to $x$. On the LHS we don't see the variable $x$ written explicitly, but that is not a problem because the LHS is actually $\log y = \log f(x)$. So differentiating it via the chain rule we get $$\frac{d}{dx}\,\log y = \frac{d}{dx}\,\log f(x) = \frac{1}{f(x)}\cdot f'(x) = \frac{f'(x)}{f(x)}\tag{3}$$ Differentiating the RHS of equation $(2)$ via the product rule we get its derivative as $$\frac{\sqrt{x}}{x} + \frac{\log x}{2\sqrt{x}} = \frac{2 + \log x}{2\sqrt{x}}$$ It now follows from $(3)$ that $$\frac{f'(x)}{f(x)} = \frac{2 + \log x}{2\sqrt{x}}$$ and hence the final answer is $$f'(x) = f(x)\cdot\frac{2 + \log x}{2\sqrt{x}} = \frac{x^{\sqrt{x}}(2 + \log x)}{2\sqrt{x}}$$
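For what it's worth (this is my addition, answering the "without that method" part of the question): the same derivative follows from the chain and product rules alone, once the power is rewritten as an exponential,
$$x^{\sqrt{x}} = e^{\sqrt{x}\log x}\implies f'(x) = e^{\sqrt{x}\log x}\,\frac{d}{dx}\left(\sqrt{x}\log x\right) = x^{\sqrt{x}}\cdot\frac{2+\log x}{2\sqrt{x}},$$
which matches the result above; logarithmic differentiation is just this computation organized in two steps.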


algebra precalculus - Simplify expression $(x\sqrt{y}- y\sqrt{x})/(x\sqrt{y} + y\sqrt{x})$


I'm stuck at the expression: $\displaystyle \frac{x\sqrt{y} -y\sqrt{x}}{x\sqrt{y} + y\sqrt{x}}$.
I need to simplify the expression (by making the denominators rational) and this is what I did:


$$(x\sqrt{y} - y\sqrt{x}) \times (x\sqrt{y} - y\sqrt{x}) = (\sqrt{y} - \sqrt{x})^2$$ Divided by
$$(x\sqrt{y} + y\sqrt{x}) \times (x\sqrt{y} - y\sqrt{x} ) = (x\sqrt{y})^2$$


So I'm left with $\displaystyle \frac{(\sqrt{y} - \sqrt{x})^2}{(x\sqrt{y})^2}$.


This answer is incorrect. Can anyone help me understand what I did wrong? If there is a different approach to solve this it will also be much appreciated. Please explain in steps.


Answer



I am assuming your ambiguous notation begins with the task of simplifying:


$$\frac{x\sqrt y - y\sqrt x}{x\sqrt y + y\sqrt x}.$$



Assuming I'm correct, then we can rationalize the denominator (get rid of the factors with square roots), as follows:


Multiply the numerator and denominator by $(x\sqrt{y}-y\sqrt{x})$ to get a difference of squares. Recall that $$(a+b)(a-b) = a^2 - b^2.$$ If you carry out this multiplication, you'll have $$\dfrac{(x\sqrt{y}-y\sqrt{x})^2}{x^2y-xy^2}= \dfrac{x^2y - 2xy\sqrt{xy} + xy^2}{x^2y-xy^2}\; =\; \frac{xy(x-2\sqrt{xy} + y)}{xy(x-y)}\;= \; \frac{x-2\sqrt{xy} + y}{x-y}$$


Looking at your strategy, you had the right idea of multiplying the numerator and denominator by $x\sqrt y - y\sqrt x$, but you miscalculated.


Tuesday, March 29, 2016

complex numbers - Using De Moivre's formula for finding $\sqrt{i}$



In class today, we learned about complex numbers, and the teacher described a simple procedure for finding the square roots of $i$, i.e. solving $z^2=i$. He had us set $z=a+bi$ and square from there, which did work, but it took some work to get through.




Afterwards, he said that there was an easier way, and that was to use De Moivre's formula to find the square roots.



So...




Question: How would you use De Moivre's formula to find the square root of any nonzero complex number?








I do understand that the formula goes something like this:$$(\cos x+i\sin x)^y=\cos(xy)+i\sin(xy)\tag{1}$$
But I'm not too sure how to apply that to $$z^2=i\tag2$$ where $z$ is another complex number.


Answer



Using the $n$-th roots formula (which is actually an equivalent version of De Moivre's formula),
$$
\sqrt{i}=i^{\frac{1}{2}}=(\cos\frac{\pi}{2}+i\sin\frac{\pi}{2})^{\frac{1}{2}}
=\cos(\frac{\pi}{4}+\kappa\pi)+i\sin(\frac{\pi}{4}+\kappa\pi)
$$
for any $\kappa\in\mathbb{Z}$. The above, for $\kappa$ even, gives

$$
\frac{\sqrt{2}}{2}+i\frac{\sqrt{2}}{2}
$$
while for $\kappa$ odd, it gives
$$
-\frac{\sqrt{2}}{2}-i\frac{\sqrt{2}}{2}
$$



For the general case, the formula for the $n$-th roots of the complex number $z=\rho (\cos \phi + i \sin \phi)$, is given by
$$

[\rho (\cos \phi + i \sin \phi)]^{1/n} = \rho^{1/n}\left( \cos \frac{\phi + 2 \pi k}{n} + i \sin \frac{\phi + 2 \pi k}{n} \right), \quad k = 0, 1, \dots, n-1,
$$
where $\rho$ is the modulus. For the square roots, just set $n=2$.


Monday, March 28, 2016

linear algebra - Characteristic polynomial problem




$$\begin{pmatrix}
1 & \cdots & n \\
n+1 & \cdots & 2n \\
\vdots & \ddots & \vdots \\
n^2-n+1 & \cdots & n^2
\end{pmatrix} .$$



I am trying to find the characteristic polynomial of this $n\times n$ matrix but have failed. At first, by elementary row operations (subtract the $k$-th row from the $(k+1)$-th row), the matrix can be transformed into:
$$\begin{pmatrix}
1 & \cdots & n \\
n & \cdots & n \\
\vdots & \ddots & \vdots \\
n & \cdots & n
\end{pmatrix}.$$



And applying the same operations again gives:
$$\begin{pmatrix}
1 & \cdots & n \\
n & \cdots & n \\
0 & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & 0
\end{pmatrix}.$$



Now it is enough to calculate the characteristic polynomial of this matrix but it seems cumbersome. Can anyone help me?
Hint says: First prove that $A$ has rank 2 and that $\operatorname{span}(\{(1,1,\cdots,1),(1,2,\cdots,n)\})$ is $L_A$-invariant.


Answer



The row operations you've done are enough to deduce that your matrix has rank $2$. Note that your row-reduced matrix will not have the same eigenvalues (even though it has the same rank), so it's not enough to simply find the characteristic polynomial of that last matrix.



In terms of eigenvalues, you now know that since the matrix has rank 2, its null space has dimension $n-2$, which is to say that $0$ is an eigenvalue with multiplicity $n-2$. It suffices now to find the remaining two eigenvalues.




The hint about the invariant subspace tells you that two eigenvectors will be within that $L_A$ invariant subspace, and presumably those eigenvectors correspond to the non-zero eigenvalues we're looking for.



The best way for you to use this information is to find the matrix of the restriction of $L_A$ to the invariant subspace described, then find the eigenvalues of that $2\times 2$ matrix. With those two eigenvalues, you'll now have all $n$ roots of the characteristic polynomial.






In particular, let $v_1 = (1,1,\dots,1)$, and $v_2 = (1,2,\dots,n)$, so that $\mathcal B = \{v_1,v_2\}$ is a basis for the invariant subspace discussed. We then see that the $i$th column of $A$ is given by $(i-n)v_1 + nv_2$. With that, we have
$$
Av_1 = \sum_{i=1}^n ((i-n)v_1 + nv_2) = -\frac{n(n-1)}{2} v_1 + n^2 v_2\\
Av_2 = \sum_{i=1}^n i((i-n)v_1 + nv_2) = -\frac{(n-1)n(n+1)}{6} v_1 + \frac{n^2(n+1)}2 v_2
$$
Thus, we have
$$
[L_A]_{\mathcal B} = n\pmatrix{-(n-1)/2 & -(n-1)(n+1)/6\\ n & n(n+1)/2}
$$
We compute the characteristic polynomial of this matrix to be
$$
\lambda^2 - \frac n2 \left(n^2 + 1\right)\lambda - \frac{n^3}{12}(n^2 - 1)
$$

It follows that the characteristic polynomial of the original matrix should be
$$
\lambda^n - \frac n2 \left(n^2 + 1\right)\lambda^{n-1} - \frac{n^3}{12}(n^2 - 1)\lambda^{n-2}
$$
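As a quick sanity check (my addition): for $n=2$ the matrix is $\pmatrix{1 & 2\\3 & 4}$, whose characteristic polynomial is $\lambda^2-5\lambda-2$; the formula gives $\lambda^2 - \frac22(2^2+1)\lambda - \frac{2^3}{12}(2^2-1) = \lambda^2-5\lambda-2$ as well.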


Sunday, March 27, 2016

calculus - How to evaluate $\lim_{n\to\infty} n!/n^{k}$


I do not know how to go about finding the limit of this sequence. I know it diverges to infinity yet I can't find terms to use the squeeze lemma effectively.


$a_n = \frac{n!}{n^{1000}}$


$\lim_{n\rightarrow\infty} \frac{n!}{n^{1000}}$


I know that when $n>1000$, the numerator will "run out" of denominators and the expression will grow, yet I don't know how to formally prove this divergence.



In addition to this, since it is a sequence, no series laws can be applied.


Anyone have an idea on how to approach this type of problem?


Answer



I will first assume that $k$ is a positive integer. Observe that $$ \frac{n!}{n^k}=\frac{n}{n}\frac{n-1}{n}\cdots\frac{n-k+1}{n}(n-k)!\geq \Big(1-\frac{k-1}{n}\Big)^k(n-k)!\geq 2^{-k}(n-k)! $$ for all sufficiently large $n$. And $$ \lim_{n\to\infty}2^{-k}(n-k)!=\infty $$ for $k$ fixed, so the original sequence diverges as well.


If $k$ is a positive real number but not an integer, let $j=\lceil k\rceil$; then $$\frac{n!}{n^k}\geq \frac{n!}{n^j}$$ so we can use the above argument.


real analysis - The n-th root of a prime number is irrational

If $p$ is a prime number, how can I prove by contradiction that the equation $x^{n}=p$ admits no solutions in $\mathbb {Q}$, where $n\ge2$?

real analysis - Existence of maximum value of a discontinuous function

If $f(x)$ takes a finite real value for all $x$ on the closed interval $[a,b]$, must there be a real number $M$ such that $M\geq f(x)$ for all $x$ on this interval? It seems that if not, there must be a point $c\in[a,b]$ such that $\lim_{x\to c}f(x)=+\infty$, and so $f(x)$ must be undefined at some point on this interval, but I don't know how to make this rigorous.



Edit: I see that $f(0)=0$, $f(x)=1/x$ on $(0,1]$ is a counterexample. I also see that I have been imprecise with terminology. Let me modify the question: Is there always a sub-interval $[a',b']$ with $a

Saturday, March 26, 2016

algebra precalculus - Multiplying by Inverse of a Fraction question



Hey so I'm doing an exercise and I got a bit confused by something.



I've learned early on that you can multiply by the inverse of a fraction and get the same result as if you divided by it, since you do the exact opposite.




Then why doesn't this hold true for:



$$\frac{10z^{1/3}}{2z^{2}} = 10z^{1/3} \cdot 2z^{-2}$$



Please explain in simple words.. not too math savvy ^^


Answer



The reason is that you forgot to use the inverse of $2$ as well. So it should be
$$10z^{1/3}\cdot 2^{-1}z^{-2}$$
That's because in the original expression, you are dividing by $2$ and also by $z^2$.




In other words, $(2z^2)^{-1}$ can be written as $2^{-1}z^{-2}$, but not as $2z^{-2}$.


Friday, March 25, 2016

induction - Prove that $a^{2n} -b^{2n}$ is divisible by $a+b$





Prove that for all $n\in\Bbb{N}$, $a^{2n}-b^{2n}$ is divisible by $a+b$ using induction.








I know that if $a$ is divisible by $b$, then $a=kb$, where $k\in\Bbb{Z}$. Here we have that $a^{2n}-b^{2n}=(a+b)k$, with $k\in\Bbb{Z}$.



For the base case I set $n=1$, so $a^2-b^2=(a+b)(a-b)=(a+b)k$, where $k=a-b\in\Bbb{Z}$.



Now the inductive step (where I have doubts): $$a^{2n}-b^{2n}=(a+b)k\implies a^{2(n+1)}-b^{2(n+1)}=(a+b)m,\;k,m\in\Bbb{Z}.$$ We start from $a^{2(n+1)}-b^{2(n+1)}$. Then $$a^{2n+2}-b^{2n+2}=(a+b)\color{red}{(a^{2n+1}-a^{2n}b+a^{2n-1}b^2-\dots-a^2b^{2n-1}+ab^{2n}-b^{2n+1})},$$ so $a^{2(n+1)}-b^{2(n+1)}=(a+b)m$, where $m=a^{2n+1}-a^{2n}b+a^{2n-1}b^2-\dots-a^2b^{2n-1}+ab^{2n}-b^{2n+1}\in\Bbb{Z}.\qquad\square$







I have two questions:




  1. Is the math in $\color{red}{\text{red}}$ a correct decomposition of $a^{2(n+1)}-b^{2(n+1)}$?

  2. We have not used the inductive hypothesis. Could we use it?


Answer



The decomposition in red is correct. You did not use the inductive hypothesis because you did not need it: the factorization alone proves the claim without induction. But you could have used it in the following way:




Since $$\begin{align}
a^{2n+2}-b^{2n+2}=& a^{2n+2}-a^2b^{2n}+a^2b^{2n}-b^{2n+2} \\
=& a^2 (a^{2n}-b^{2n})+b^{2n}(a^2-b^2) \\
=& a^2 (a+b)k+b^{2n}(a-b)(a+b) \\
=&(a+b)\cdots
\end{align}$$

So the assertion is true for $n+1$ as well.


algebra precalculus - Writing the complex number $z = 1 - \sin{\alpha} + i\cos{\alpha}$ in trigonometric form

Now I can't finish this problem:



Express the complex number $z = 1 - \sin{\alpha} + i\cos{\alpha}$ in trigonometric form, where $0 < \alpha < \frac{\pi}{2}$.



So the goal is to determine both $r$ and $\theta$ for the expression: $z = r(\cos{\theta} + i\sin{\theta})$



I've done this so far:





  1. First of all I obtained $r = \sqrt{(1-\sin{\alpha})^2 + \cos^2{\alpha}} = \sqrt{1 - 2 \sin{\alpha} + \sin^2{\alpha} + \cos^2{\alpha}} = \sqrt{2(1 - \sin{\alpha})}$ (possible thanks to the condition on $\alpha$).


  2. Now I tried to get $\theta = \arctan{\left(\frac{\cos{\alpha}}{1-\sin{\alpha}}\right)}$




And here is where I get stuck... how do I determine $\theta$ from such an expression?



I already know $0 < 1-\sin{\alpha} < 1$ and $0 < \cos{\alpha} < 1$ under the given conditions.



Any help will be appreciated. Thank you :)




P.S. I think (according to my search results here) there are no questions about this problem. I hope you won't mind if it is a duplicate.

Thursday, March 24, 2016

finite fields - Discrete logarithm - strange polynomials

If $p$ is a prime number and $\omega$ is a fixed primitve root for $\mathbb{Z}/p\mathbb{Z}$, then we can define the discrete logarithm of $x \in (\mathbb{Z}/p\mathbb{Z})^{\times}$ as the unique number $\log_{\omega} x$ between $1$ and $p-1$ such that $\omega^{\log_{\omega} x} = x$.



$\log_{\omega} x$ will be identified with its remainder class in $\mathbb{Z}/p\mathbb{Z}$.




I was interested in finding the interpolating polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$ of minimal degree for this $\log_{\omega}$ function (for fun?) and looked at a few examples where $\omega = 3$ can be taken as a fixed primitive root. Some examples:



modulo $5$, $\log_3(x) = 4x^3 + 3x^2 + 2x$.



modulo $7$, $\log_3(x) = 5x^5 + 2x^4 + 4x^3 + 6x^2 + 3x$.



modulo $17$, $\log_3(x) = $
$$10x^{15} + 16x^{14} + 3x^{13} + 11x^{12} +14x^{11} + 12x^{10} + 13x^9 + 9x^8 + 5x^7 + 6x^6 + 4x^5 + 7x^4 + 15x^3 + 2x^2 + 8x.$$



I notice that the coefficients in each case are exactly the numbers $2$ to $p-1$, up to permutation. Is there something behind this? It seems highly improbable that it's a coincidence.
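In case someone wants to experiment numerically, here is a small C sketch (mine, not from the original post) that tabulates $\log_3$ modulo $5$ by brute force and compares it with the first interpolating polynomial quoted above:

#include <stdio.h>

int main(void)
{
    const int p = 5, w = 3;   /* prime modulus and primitive root */
    int log_w[5] = {0};

    /* tabulate: w^j mod p maps back to j, for j = 1..p-1 */
    for (int j = 1, x = 1; j <= p - 1; j++) {
        x = x * w % p;
        log_w[x] = j;
    }
    /* compare with the interpolating polynomial 4x^3 + 3x^2 + 2x mod 5 */
    for (int x = 1; x < p; x++) {
        int poly = (4*x*x*x + 3*x*x + 2*x) % p;
        printf("x=%d  log_3(x)=%d  poly(x)=%d\n", x, log_w[x], poly);
    }
    return 0;
}

The two columns agree, with the logarithm taking values in $1,\dots,p-1$ as defined in the question.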

limits - How to prove that $\lim\limits_{n \to \infty} \frac{k^n}{n!} = 0$





It recently came to my mind to ask how one proves that the factorial grows faster than the exponential, or that the linear grows faster than the logarithmic, etc.



I thought about writing:
$$
a(n) = \frac{k^n}{n!} = \frac{ k \times k \times \dots \times k}{1\times 2\times\dots\times n} = \frac k1 \times \frac k2 \times \dots \times \frac kn = \frac k1 \times \frac k2 \times \dots \times \frac kk \times \frac k{k+1} \times \dots \times \frac kn
$$
It's obvious that after $k/k$, every factor is smaller than $1$, and as $n$ increases, $k/n$ gets closer to $0$; that is, $\lim_{n \to \infty} (k/n) = 0$ for any constant $k$.



But I think this is not a rigorous proof, so any hint is welcome.
Thank you for your consideration.



Answer



If you know that this limit exists, you have
$$
\lim_{n \to \infty} \frac{k^n}{n!} = \lim_{n \to \infty} \frac{k^{n+1}}{(n+1)!} = \lim_{n \to \infty} \frac k{n+1} \frac{k^n}{n!} = \left(\lim_{n \to \infty} \frac k{n+1} \right) \left( \lim_{n \to \infty} \frac {k^n}{n!} \right) = 0.
$$
Can you think of a short way to show the limit exists? (You need existence to justify my factoring of the limits at the end. If you don't have that then there's no reason for equality to hold.)


Sum of n consecutive numbers











Is there a shortcut method to working out the sum of n consecutive positive integers?



Firstly, starting at $1 ... 1 + 2 + 3 + 4 + 5 = 15.$



Secondly, starting at any other positive integer ...($10$ e.g.): $10 + 11 + 12 + 13 = 46$.


Answer



Take the average of the first number and the last number, and multiply by the number of numbers.
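In symbols (a restatement of the answer above, not part of the original): $$\sum_{k=a}^{b}k=\frac{a+b}{2}\,(b-a+1),$$ which reproduces both examples: $\frac{1+5}{2}\cdot5=15$ and $\frac{10+13}{2}\cdot4=46$.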


linear algebra - How to find eigenvectors/eigenvalues of a matrix where each diagonal entry is scalar $d$ and all other entries are $1$





How would you find the eigenvalues/eigenvectors of an $n\times n$ matrix where each diagonal entry is a scalar $d$ and all other entries are $1$? I am looking for a decomposition but cannot find anything for this.
For example:



$\begin{pmatrix}2&1&1&1\\1&2&1&1\\1&1&2&1\\1&1&1&2\end{pmatrix}$


Answer



The matrix is $(d-1)I + J$ where $I$ is the identity matrix and $J$ is the all-ones matrix, so once you have the eigenvectors and eigenvalues of $J$ the eigenvectors of $(d-1)I + J$ are the same and the eigenvalues are each $d-1$ greater. (Convince yourself that this works.)




But $J$ has rank $1$, so it has eigenvalue $0$ with multiplicity $n-1$. The last eigenvalue is $n$, and it's quite easy to write down all the eigenvectors.
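Concretely, for the $4\times4$ example above (spelling out the answer): $d=2$ and $n=4$, so the matrix is $I+J$ and the eigenvalues are $d-1=1$ with multiplicity $3$ and $d-1+n=5$ with multiplicity $1$; the eigenvector for $5$ is $(1,1,1,1)^T$, and the eigenspace for $1$ consists of all vectors whose entries sum to $0$.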


linear algebra - Assuming $AB=I$ prove $BA=I$





Most introductory linear algebra texts define the inverse of a square matrix $A$ as such:


Inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$.


That definition, in my opinion, is problematic. A few books (in my sample less than 20%) give a different definition:


Inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. Then they go on to prove that $BA=I$.


Do you know of a proof other than defining inverse through determinants or through using rref?



Is there a general setting in algebra under which $ab=e$ leads to $ba=e$ where $e$ is the identity?


Answer



Multiply both sides of $AB-I=0$ on the left by $B$ to get $$ (BA-I)B=0\tag{1} $$ Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that $$ \sum_{j=1}^n a_jBe_j=0\tag{2} $$ then, multiplying $(2)$ on the left by $A$ gives $$ \sum_{j=1}^n a_je_j=0\tag{3} $$ which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$.


Multiplying $(1)$ on the right by $e_j$ yields $$ (BA-I)Be_j=0\tag{4} $$ for each basis vector $Be_j$. Therefore, $BA=I$.


Failure in an Infinite Dimension


Let $A$ and $B$ be operators on infinite sequences. $B$ shifts the sequence right by one, filling in the first element with $0$. $A$ shifts the sequence left, dropping the first element.


$AB=I$, but $BA$ sets the first element to $0$.


Arguments that assume $A^{-1}$ or $B^{-1}$ exists, and make no reference to the finite dimensionality of the vector space, usually fail against this counterexample.


real analysis - A continuous bijection $f:\mathbb{R}\to \mathbb{R}$ is a homeomorphism?




A continuous bijection $f:\mathbb{R}\to \mathbb{R}$ is a homeomorphism (with the usual metric structure).




I have always heard that this fact is true, but no one has shown me a proof, and I can't prove it myself. I tried using the definition of continuity, but I could not conclude anything from it. I also tried using the fact that a bijective $f$ has an inverse $f^{-1}$ and that $f\circ f^{-1}$ and $f^{-1}\circ f$ are the (continuous) identity, but I couldn't get anywhere.


Answer



I think that I have the answer using Michael Greinecker's hint. I'll first prove the hint, because I had not seen this theorem before.





Hint: A continuous injective function from the reals to the reals must be strictly increasing or strictly decreasing.




Proof: Let $I=[a,b]$ be a closed interval with $af(y)$ by injectivity can't be the equality. If $f(a)

If image is all, I'll prove that $f^{-1}=g$ is continuous. Is well known that if $f$ is strictly increasing then $g$ so is. I'll prove continuity in some $a\in\mathbb{R}$. Let $\epsilon>0$ then $g(a)-\epsilon/2$ and $g(a)+\epsilon/2$ have preimages let $g(b)=g(a)-\epsilon/2$ and $g(c)=g(a)+\epsilon/2$ then $b

Wednesday, March 23, 2016

exponentiation - Why is $0^0$ also known as indeterminate?



I've seen on Maths Is Fun that $0^0$ is also known as indeterminate. Seriously, when I wanted to see the value of $0^0$, it just told me it's indeterminate (in the exponent calculator it was even misspelled as "indeterminant"). Let's cut to the chase now. I've seen this: $$5^0=1,\quad 4^0=1,\quad 3^0=1,\quad 2^0=1,\quad 1^0=1,\quad 0^0=1$$ and this: $$0^5=0,\quad 0^4=0,\quad 0^3=0,\quad 0^2=0,\quad 0^1=0,\quad 0^0=0.$$ Right here, it seems like $0^0$ can be equal to either $0$ or $1$. This must be why $0^0$ is indeterminate. Do you agree with me? I can't wait to hear your answers!



Answer



Well, any nonzero number raised to the power of zero equals $1$ because the base gets divided by itself: for example, $3^0 = 3^{1-1} = 3^1/3^1 = 3/3 = 1$. By the same reasoning, $0^0$ "equals" $0/0$, which could be any number; that is why it's indeterminate. Also, $0/0$ is undefined for the same reason.


real analysis - Showing that a sequence is monotone and thus convergent



Here's the original question:





Let $(a_n)$ be bounded. Assume that $a_{n+1} \ge a_{n} - 2^{-n}$. Show
that $(a_n)$ is convergent.




Okay, I know that if I can show that the sequence is monotone, then (being bounded) it is convergent. But I am not sure how to show that it is monotone.



I know that
$$a_n \le a_{n+1} + \frac{1}{2^n} < a_{n+1} + \frac{1}{n}$$



It looks to me as if it is monotonically increasing, but I'm not quite sure how to prove my claim. Any hints would be appreciated.



Answer



For all $n$, let
$$b_n = a_n - 2^{1-n}.$$
Note that
$$b_{n+1} \ge b_n \iff a_{n+1} - 2^{-n} \ge a_n - 2^{1-n} \iff a_{n+1} \ge a_n - 2^{-n},$$
which is true. Note also that $b_n$ is the sum of the bounded sequence $a_n$ and the convergent sequence $-2^{1 - n}$, and hence is bounded as well. Thus, $b_n$ converges by the monotone convergence theorem, and hence, by the algebra of limits, so does $a_n$.


elementary set theory - Cardinality and infinite sets: naturals, integers, rationals, bijections


I have a lot of questions.



  • Do infinite sets have the same cardinality when there is a bijection between them?





  • Are $\mathbb{N}$ and $\mathbb{Z}$ infinite sets? I assume they are, but why? Why does there not exist a bijection between them?




  • Why do $\mathbb{N}, \mathbb{Z}, \mathbb{Q}$ have the same cardinality? I assume there exists a bijection between them, but how can I find this function?



Answer



First of all, I suggest you find a good article/book on Cantor's work on multiple infinities. Reading that will enlighten you and answer probably most of the questions you have concerning this subject. Try these lecture notes.


But to give a brief answer to your current questions:




  1. By definition, two sets have the same cardinality if there exists a bijection between them. This definition was introduced by Cantor because for infinite sets one cannot just count the elements. Not all infinite sets have the same cardinality. For example, the natural numbers $\mathbb{N}$ and the real numbers $\mathbb{R}$ do not have the same cardinality. This was proven with the diagonal argument.




  2. $\mathbb{N}$ and $\mathbb{Z}$ are both infinite sets. I suggest you check out their definitions on wikipedia (the natural numbers, the integers). They also, like Ali said in his post, have the same cardinality. The bijection that was given by him is probably the easiest one to grasp.




  3. As for $\mathbb{Q}$, the rational numbers, this is also an infinite set that has the same cardinality as $\mathbb{N}$ (and thus also $\mathbb{Z}$). I suggest you check out the bijection that is given in the lectures notes that I linked above, that one is probably the clearest one.



Tuesday, March 22, 2016

complex numbers - 1 = -1 Clearest way to explain why this proof is wrong.

Say you are a high school student or a young undergrad. You are being taught about complex numbers and you are told that $i = \sqrt{-1}$.



You go home and you write this:
\begin{equation}
\begin{aligned}
1 & = 1 \\
1 & = \sqrt{1} \\

1 & = \sqrt{(-1)(-1)} \\
1 & = \sqrt{-1}\sqrt{-1} \\
1 & = i \times i \\
1 & = i^2 \\
1 & = -1
\end{aligned}
\end{equation}



You are dismayed. The infamous imaginary numbers are inconsistent after all!







The best answer would be the clearest explanation of why the "proof" is faulty, keeping in mind the level of the student. An explanation is extra clear if it presents insights to the student, as well as being correct and concise.

calculus - Prove that $\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$




There was a question on multiple integrals which our professor gave us on our assignment.




QUESTION: Changing order of integration, show that $$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy=\int_0^\infty \frac{\sin nx}{x}dx$$
and hence prove that $$\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$$








MY ATTEMPT: I was successful in proving the first part.



Firstly, I can state that the function $e^{-xy}\sin nx$ is continuous over the region $\mathbf{R}=\{(x,y): 0<x<\infty,\ 0<y<\infty\}$.

$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$
$$=\int_0^\infty \sin nx \left\{\int_0^\infty e^{-xy}\,dy\right\} \,dx$$
$$=\int_0^\infty \sin nx \left[\frac{e^{-xy}}{-x}\right]_0^\infty \,dx$$
$$ =\int_0^\infty \frac{\sin nx}{x}dx$$




However, the second part of the question yielded a different answer.



$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$
$$=\int_0^\infty \left\{\int_0^\infty e^{-xy} \sin nx \,dx\right\} \,dy$$
$$=\int_0^\infty \frac{ndy}{\sqrt{n^2+y^2}}$$



which diverges, and so does not give the desired result.



Where did I go wrong? Can anyone help?


Answer




You should have obtained $$\int_{x=0}^\infty e^{-yx} \sin nx \, dx = \frac{n}{n^2 + y^2}.$$ There are a number of ways to show this, such as integration by parts. If you would like a full computation, it can be provided upon request.






Let $$I = \int e^{-xy} \sin nx \, dx.$$ Then with the choice $$u = \sin nx, \quad du = n \cos nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we obtain $$I = -\frac{1}{y} e^{-xy} \sin nx + \frac{n}{y} \int e^{-xy} \cos nx \, dx.$$ Repeating the process a second time with the choice $$u = \cos nx \, \quad du = -n \sin nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we find $$I = -\frac{1}{y}e^{-xy} \sin nx - \frac{n}{y^2} e^{-xy} \cos nx - \frac{n^2}{y^2} \int e^{-xy} \sin nx \, dx.$$ Consequently $$\left(1 + \frac{n^2}{y^2}\right) I = -\frac{e^{-xy}}{y^2} \left(y \sin nx + n \cos nx\right),$$ hence $$I = -\frac{e^{-xy}}{n^2 + y^2} (y \sin nx + n \cos nx) + C.$$ Evaluating the definite integral, for $y, n > 0$, we observe $$\lim_{x \to \infty} I(x) = 0, \quad I(0) = -\frac{n}{n^2 + y^2},$$ and the result follows.


Monday, March 21, 2016

calculate the limit of this sequence $\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$












I am trying to calculate the limit of $a_n:=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$, with $a_0:=1$ and $a_{n+1}:=\sqrt{1+a_n}$. I am badly stuck, not knowing how to find the limit of this sequence or where to start the proof. I did some calculations but still cannot figure out the formal way of finding the limit. What I tried is:
$$(1+(1+(1+\cdots)^\frac{1}{2})^\frac{1}{2})^\frac{1}{2}$$ but I am totally stuck here.


Answer



We (inductively) show the following properties for the sequence given by $a_{n+1} = \sqrt{1 + a_n}$, $a_0 =1$:




  1. $a_n \ge 0$ for all $n\in \Bbb N$

  2. $(a_n)$ is monotonically increasing

  3. $(a_n)$ is bounded above by $2$




Then by the Monotone Convergence Theorem the sequence converges, hence the limit of the sequence exists. Let $\lim a_{n} = a$; then $\lim a_{n+1} = a$ as well. Using the Algebraic Limit Theorem, we get



$$
\lim a_{n+1} = \sqrt{1 + \lim a_n} \implies a = \sqrt {1 + a}
$$



Solving the above equation gives the limit. Also, note that by the Order Limit Theorem, $a_n \ge 0 \implies \lim a_n \ge 0$.
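Explicitly (completing the step the answer leaves to the reader): $a=\sqrt{1+a}$ gives $a^2-a-1=0$, whose roots are $\frac{1\pm\sqrt5}{2}$; since $a\ge 0$, the limit is the golden ratio $$\lim a_n=\frac{1+\sqrt5}{2}\approx 1.618.$$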


Discrete logarithm tables for the fields $\Bbb{F}_8$ and $\Bbb{F}_{16}$.



The smallest non-trivial finite field of characteristic two is
$$
\Bbb{F}_4=\{0,1,\beta,\beta+1=\beta^2\},
$$
where $\beta$ and $\beta+1$ are primitive cubic roots of unity, and zeros of the
polynomial $x^2+x+1$. Here the multiplication table is given once we know
how to write the non-zero elements as powers of $\beta$. Extend the idea to fields of eight and sixteen elements.




Those fields can be constructed as
$$
\Bbb{F}_8=\Bbb{F}_2[\alpha], \quad\text{and}\quad \Bbb{F}_{16}=\Bbb{F}_2[\gamma],
$$
where $\alpha$ has minimal polynomial $x^3+x+1$, and $\gamma$ has minimal polynomial $x^4+x+1$, both irreducible in $\Bbb{F}_2[x]$.



Task:




Calculate the tables for base $\alpha$ discrete logarithm of $\Bbb{F}_8$ and base $\gamma$ discrete logarithm of $\Bbb{F}_{16}$.




Answer



A (base-$g$) discrete logarithm of a finite field $\Bbb{F}_q$, is a function
$$
\log_g:\Bbb{F}_q^*\to\Bbb{Z}_{q-1}
$$
defined via the equivalence $g^j=x\Leftrightarrow \log_g(x)=j$. For this to be well-defined it is imperative that $g$ is a primitive element, i.e. a generator of $\Bbb{F}_q^*$, and that the codomain of $\log_g$ is the residue class ring of integers modulo $q-1$, as $g^{q-1}=g^0=1$.



It immediately follows that the discrete logarithm satisfies the familiar rules

$$
\begin{aligned}
\log_g(x\cdot y)&=\log_g(x)+\log_g(y),\\
\log_g(x^n)&=n\cdot\log_g(x)
\end{aligned}
$$
for all elements $x,y\in \Bbb{F}_q^*$ and all integers $n$. The arithmetic
on the r.h.s. is that of the ring $\Bbb{Z}_{q-1}$.







It is known that when $q=8$, a zero $\alpha$ of $x^3+x+1$ generates $\Bbb{F}_8^*$. This is proven by the following calculation, where we repeatedly
use the fact that we are working in characteristic two, and that we have the
relation $\alpha^3=\alpha+1$.
$$
\eqalign{
\alpha^0&=&&=1,\\
\alpha^1&=&&=\alpha,\\
\alpha^2&=&&=\alpha^2,\\
\alpha^3&=&&=1+\alpha,\\

\alpha^4&=&\alpha\cdot\alpha^3=\alpha(1+\alpha)&=\alpha+\alpha^2,\\
\alpha^5&=&\alpha\cdot\alpha^4=\alpha(\alpha+\alpha^2)=\alpha^2+\alpha^3=\alpha^2+(1+\alpha)&=1+\alpha+\alpha^2,\\
\alpha^6&=&\alpha\cdot\alpha^5=\alpha(1+\alpha+\alpha^2)=\alpha+\alpha^2+\alpha^3=
\alpha+\alpha^2+(1+\alpha)&=1+\alpha^2,\\
\alpha^7&=&\alpha\cdot\alpha^6=\alpha(1+\alpha^2)=\alpha+\alpha^3=\alpha+(1+\alpha)&=1.
}$$



We see from the end results in the last column that all seven non-zero polynomials of degree at most two in $\alpha$ appear. This is yet another confirmation of the fact that $\alpha$ is a primitive element.




The discrete logarithm is used to replace the cumbersome multiplication (and raising
to an integer power) of the field with more familiar integer arithmetic. Exactly like the old-timers used logarithm tables to replace the error-prone multiplication with the easier addition.



For example
$$
(1+\alpha)(1+\alpha+\alpha^2)=\alpha^3\cdot\alpha^5=\alpha^8=\alpha^7\cdot\alpha=\alpha.
$$
Note that both the base-$\alpha$ discrete logarithms and its inverse mapping are needed. I generate such a table as a part of the program initialization, whenever I carry out extensive computer-aided calculations involving a finite field. The above table gives the discrete logarithm when read from right to left, and the inverse mapping (that we actually produced above) when read from left to right.
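By way of illustration, here is a minimal C sketch of such an initialization (my own; it assumes the representation where bit $j$ stores the coefficient of $\alpha^j$), generating the $\Bbb{F}_8$ exponent table exactly as computed above:

#include <stdio.h>

int main(void)
{
    /* elements of F_8 = F_2[alpha]/(alpha^3 + alpha + 1) as 3-bit masks */
    unsigned x = 1;                 /* alpha^0 = 1 */
    for (int j = 0; j < 7; j++) {
        printf("alpha^%d = 0x%X\n", j, x);
        x <<= 1;                    /* multiply by alpha */
        if (x & 0x8)                /* a degree-3 term appeared, so */
            x ^= 0xB;               /* reduce by alpha^3 = alpha + 1 (mask 0b1011) */
    }
    return 0;
}

Reading the printed table from right to left gives the discrete logarithm. Replacing the mask 0xB by 0x13 (for $x^4+x+1$), the overflow test by x & 0x10, and the loop bound by 15 produces the $\Bbb{F}_{16}$ table in the same way.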







Similarly with $q=16$ we use $\gamma$, a zero of $x^4+x+1$. This time the table looks like
$$
\begin{aligned}
\gamma^0&=&1\\
\gamma^1&=&\gamma\\
\gamma^2&=&\gamma^2\\
\gamma^3&=&\gamma^3\\
\gamma^4&=&\gamma+1\\
\gamma^5&=\gamma(\gamma+1)=&\gamma^2+\gamma\\

\gamma^6&=\gamma(\gamma^2+\gamma)=&\gamma^3+\gamma^2\\
\gamma^7&=\gamma^4+\gamma^3=&\gamma^3+\gamma+1\\
\gamma^8&=(\gamma^4)^2=&\gamma^2+1\\
\gamma^9&=\gamma(\gamma^2+1)=&\gamma^3+\gamma\\
\gamma^{10}&=\gamma^4+\gamma^2=&\gamma^2+\gamma+1\\
\gamma^{11}&=&\gamma^3+\gamma^2+\gamma\\
\gamma^{12}&=\gamma^4+\gamma^3+\gamma^2=&\gamma^3+\gamma^2+\gamma+1\\
\gamma^{13}&=\gamma^4+\gamma^3+\gamma^2+\gamma=&\gamma^3+\gamma^2+1\\
\gamma^{14}&=\gamma^4+\gamma^3+\gamma=&\gamma^3+1\\
(\gamma^{15}&=\gamma^4+\gamma=&1)

\end{aligned}
$$



Thus for example
$$
(\gamma^3+1)(\gamma^2+1)=\gamma^{14}\cdot\gamma^8=\gamma^{22}=\gamma^7=\gamma^3+\gamma+1.
$$







As another example of the use of this table I want to discuss the problem of factorizing $x^4+x+1$ over $\Bbb{F}_4$. To that end we need to first identify a copy of $\Bbb{F}_4$ as a subfield of $\Bbb{F}_{16}$. We just saw that $\gamma$ is of order fifteen. Therefore $\gamma^5=\gamma^2+\gamma$ and $\gamma^{10}=\gamma^2+\gamma+1$ are third roots of unity. It is then trivial to check that we have a homomorphism of fields $\sigma:\Bbb{F}_4\to\Bbb{F}_{16}$ given by $\sigma(\beta)=\gamma^5$. Note that composing this (from either end) by the Frobenius automorphism gives an alternative embedding $\beta\mapsto \gamma^{10}$.



Basic Galois theory tells us that
$$
x^4+x+1=(x-\gamma)(x-\gamma^2)(x-\gamma^4)(x-\gamma^8)
$$
as we get the other roots by repeatedly applying the Frobenius automorphism $F:x\mapsto x^2$. Here we see that the factor
$$
(x-\gamma)(x-\gamma^4)=x^2+x(\gamma+\gamma^4)+\gamma^5=x^2+x+\gamma^5
$$

is stable under the automorphism $F^2$, and thus (as we also see directly!) has its
coefficients in the subfield $\sigma(\Bbb{F}_4)$. The same holds for the remaining factor
$$
(x-\gamma^2)(x-\gamma^8)=x^2+x(\gamma^2+\gamma^8)+\gamma^{10}=x^2+x+\gamma^{10}.
$$
Pulling back the effect of $\sigma$ we get the desired factorization
$$
x^4+x+1=(x^2+x+\beta)(x^2+x+\beta+1)
$$
in $\Bbb{F}_4[x]$.







Here is a local version of similar tables for $\Bbb{F}_{256}$


How do we find a fraction whose decimal expansion has a given repeating pattern?



We know $\frac{1}{81}$ gives us $0.\overline{0123456790}$




How do we create a recurrent decimal with the property of repeating:



$0.\overline{0123456789}$



a) Is there a method to construct such a number?



b) Is there a solution?



c) Is the solution in $\mathbb{Q}$?




According to this Wikipedia page: http://en.wikipedia.org/wiki/Decimal
one could get this number by applying this series. Suppose:



$M=123456789$, $x=10^{10}$, then $0.\overline{0123456789}= \frac{M}{x}\cdot\sum_k {(10^{-9})}^k =\frac{M}{x}\cdot\frac{1}{1-10^{-9}} =\frac{M}{9999999990}$



Unless my calculator is crazy, this is giving me $0.012345679\ldots$, not the expected number, although the example from Wikipedia works fine with $0.\overline{123}$.



Some help I got from mathoverflow site was that the equation is: $\frac{M}{1-10^{-10}}$. Well, that does not work either.




So, just to get rid of the gnome calculator rounding problem, I ran a simple program written in C with very large precision (long double) and got this result:



#include <stdio.h>
int main(void)
{
    long double b;
    b = 123456789.0L / 9999999990.0L;   /* L suffix keeps the division in long double */
    printf("%.40Lf\n", b);
}



Result: $0.0123456789123456787266031042804570461158$



Maybe it is still a matter of rounding problem, but I doubt that...



Please someone?



Thanks!



Beco




Edited:



Thanks for the answers. After understanding the problem I realize that long double is not sufficient. (float is 7 digits / 32 bits, double is 15 digits / 64 bits and long double is 19 digits / 80 bits, although the compiler aligns the memory to 128 bits.)



Using the wrong program above I should get $0.0\overline{123456789}$ instead of $0.\overline{0123456789}$. Using the denominator as $9999999999$ I must get the correct answer. So I tried to teach my computer how to divide:



#include <stdio.h>
int main(void)
{
    int i = 0;
    long long n, d, q, r;        /* 64-bit: d does not fit in 32 bits */
    n = 123456789;
    d = 9999999999LL;
    printf("0,");
    n *= 10;
    while (i < 100)              /* emit at most 100 digits */
    {
        if (n < d)
        {
            n *= 10;
            printf("0");
            i++;
            continue;
        }
        q = n / d;               /* next digit of the expansion */
        r = n % d;
        printf("%lld", q);
        if (!r)
            break;
        n = n - q * d;
        n *= 10;
        i++;
    }
    printf("\n");
}

Answer



Suppose you want to have a number $x$ whose decimal expansion is
$0.a_1a_2\cdots a_ka_1a_2\cdots a_k\cdots$. That is it has a period of length $k$, with digits $a_1$, $a_2,\ldots,a_k$.




Let $n = a_1a_2\cdots a_k$ be the integer given by the digits of the period. Then
$$\begin{align*}
\frac{n}{10^{k}} &= 0.a_1a_2\cdots a_k\\
\frac{n}{10^{2k}} &= 0.\underbrace{0\cdots0}_{k\text{ zeros}}a_1a_2\cdots a_k\\
\frac{n}{10^{3k}} &= 0.\underbrace{0\cdots0}_{2k\text{ zeros}}a_1a_2\cdots a_k\\
&\vdots
\end{align*}$$
So the number you want is
$$\sum_{r=1}^{\infty}\frac{n}{10^{rk}} = n\sum_{r=1}^{\infty}\frac{1}{(10^k)^r} = n\left(\frac{\quad\frac{1}{10^k}\quad}{1 - \frac{1}{10^k}}\right) = n\left(\frac{10^k}{10^k(10^k - 1)}\right) = \frac{n}{10^k-1}.$$

Since $10^k$ is a $1$ followed by $k$ zeros, then $10^k-1$ is $k$ 9s. So the fraction with the decimal expansion
$$0.a_1a_2\cdots a_ka_1a_2\cdots a_k\cdots$$
is none other than
$$\frac{a_1a_2\cdots a_k}{99\cdots 9}.$$



Thus, $0.575757\cdots$ is given by $\frac{57}{99}$. $0.837168371683716\cdots$ is given by $\frac{83716}{99999}$, etc.



If you have some decimals before the repetition begins, e.g., $x=2.385858585\cdots$, then first multiply by a suitable power of $10$, in this case $10x = 23.858585\cdots = 23 + 0.858585\cdots$, so $10x = 23 + \frac{85}{99}$, hence $ x= \frac{23}{10}+\frac{85}{990}$, and simple fraction addition gives you the fraction you want.
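For instance, finishing that example (my addition): $x=\frac{23}{10}+\frac{85}{990}=\frac{23\cdot99+85}{990}=\frac{2362}{990}=\frac{1181}{495}$, and indeed $\frac{1181}{495}=2.3858585\cdots$.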



And, yes, there is always a solution and it is always a rational.



modular arithmetic - Show $a^k \equiv b^k \pmod m$



I need to show that if $a,b,k$ and $m$ are integers with $k \ge 1$, $m \ge 2$, and $a \equiv b \pmod m$, then $a^k \equiv b^k \pmod m$.



But I have no idea how to show this; I have never been this confused. So is there anyone who could help? Just a little; I honestly feel overwhelmed (sounds stupid, I know, sorry).



*What do I need to do with the $m \ge 2$?


Answer



I assume you know (all equivalences are $\text{mod } m$)





  1. $a\equiv b \iff a-b\equiv 0$

  2. $c\equiv 0 \implies cd\equiv 0$



Then



$\begin{align}&a\equiv b\\
\iff &a-b\equiv 0\\
\implies &(a-b)(a^{k-1} + a^{k-2}b + \cdots + ab^{k-2}+b^{k-1})\equiv 0\\
\iff &a^k-b^k\equiv 0
\end{align}$



The last deduction above is technically hand-waved but can be made formal with a summation or with induction.
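For completeness, here is the induction version hinted at (my sketch, not part of the original answer): the case $k=1$ is the hypothesis itself, and if $a^{k-1}\equiv b^{k-1}$, then $$a^k-b^k=a\,(a^{k-1}-b^{k-1})+(a-b)\,b^{k-1}\equiv 0\pmod m.$$ As for $m\ge 2$: it only rules out the trivial modulus, since every two integers are congruent $\bmod\ 1$.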


How can I find a non-prime number whose square root is irrational?



I already know that when $n$ is prime, $\sqrt n$ is irrational (this is true in every case), but I know that this isn't only true for primes: $\sqrt 8$ is irrational, but $8$ is not a prime number.



So how could I find numbers like these, whose square root is an irrational number, yet which are not prime?


Answer



Hint: if $n \in \mathbb{N}$ is not a perfect square, then $\sqrt{n}$ is irrational.



[ EDIT ]    Examples of such non-prime $n$ whose square root is irrational:



  • any non-prime integer whose prime factorization includes a prime at an odd power;


  • $m!\;$ for any $\;m \gt 2$;


  • $m^2 - 1\;$ for any $\;m \gt 2$.
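A minimal sketch of why the hint holds (my addition): if $\sqrt{n}=p/q$ with $\gcd(p,q)=1$, then $nq^2=p^2$; any prime dividing $q$ would divide $p^2$, hence $p$, contradicting $\gcd(p,q)=1$, so $q=1$ and $n=p^2$ is a perfect square. Contrapositively, if $n$ is not a perfect square, then $\sqrt n$ is irrational.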



discrete mathematics - Prove that he can bring back this amount of coins



There are $2^{n+1}$ coins ($n$ is a natural number). Each coin has a non-negative integer value. The coins are not necessarily distinct. Prove that it is possible to bring back exactly $2^n$ coins such that their total value is divisible by $2^n$.



My thoughts:
So you can only bring back half of the coins; I think we need to prove this somehow by induction or the pigeonhole principle?




With induction on $n$.
Base case: $n=0$, so there are $2$ coins total and we can only bring back $1$ coin. Any natural number is divisible by $2^0=1$, so the base case holds.



IH: Assume claim holds true for $n=k$.



IStep: Prove the claim holds for $n=k+1$. So there are $2\cdot{2^{k+1}}$ coins, which we can split up as $2^{k+1}+2^{k+1}$. Consider either group of $2^{k+1}$ coins: by the IH, we can bring back $2^{k}$ coins from it that fit the claim.


Answer



You have already handled the base case of $n = 0$. Next, assume it's true for $n = k$ for some integer $k \ge 0$, i.e., among any $2^{k+1}$ coins, there are $2^{k}$ coins which sum to a multiple of $2^k$.




With $n = k + 1$, consider the $2^{k+2}$ coins. From the assumption for $n = k$, since $2^{k+2} \gt 2^{k+1}$, there are $2^{k}$ coins which sum to a multiple of $2^{k}$, say $a\left(2^{k}\right)$. Remove those coins, leaving $3\left(2^{k}\right)$. As this is still $\gt 2^{k+1}$, there are another $2^{k}$ coins which sum to a multiple of $2^{k}$, say $b\left(2^{k}\right)$. Once again, remove those coins, leaving $2^{k+1}$ coins remaining. For one more time, there are $2^k$ coins among these which sum to a multiple of $2^k$, say $c\left(2^{k}\right)$. Remove these set of coins again.



There are now $3$ sets of $2^{k}$ coins, with sums of $a\left(2^{k}\right)$, $b\left(2^{k}\right)$ and $c\left(2^{k}\right)$. Now, among $a$, $b$ and $c$, since there are only $2$ parity values (i.e., even or odd) but $3$ values, by the Pigeonhole principle, there are at least $2$ which have the same parity, i.e., they are both even or both odd. WLOG, say these are $a$ and $b$, so $a + b$ is even, meaning $a\left(2^{k}\right) + b\left(2^{k}\right) = (a + b)2^{k}$ has a factor of $2^{k+1}$. As this comes from $2^{k} + 2^{k} = 2^{k+1}$ coins, this means the question is true for $n = k + 1$ as well, finishing the induction procedure.



In summary, this proves that among any $2^{n+1}$ coins, for an integer $n \ge 0$, there are $2^{n}$ which sum to a multiple of $2^{n}$. Note this doesn't use, or need, that the coin values are non-negative, but only that they are integral.



Also, there's a more general question, with an answer, at Show that in any set of $2n$ integers, there is a subset of $n$ integers whose sum is divisible by $n$.. The answer's comment has a link to the original paper of Erdős, Ginzburg and Ziv. In this paper, the latter part shows how to prove the more restrictive requirement of there being among $2n - 1$ integers a subset of $n$ integers with a sum divisible by $n$ is true for $n = u$ and $n = v$, then it's also true for $n = uv$. Note I use a variation of this idea in my proof above.


Sunday, March 20, 2016

real analysis - Proving $\lim\limits_{x \to 0} \left(1 + \sum\limits_{n = 1}^\infty (-1)^n \left(\frac{\sin nx}{nx}\right)^2\right) = \frac 12$




Could you show me $$\mathop {\lim }\limits_{x \to 0} \left( {1 + \sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}{{\left( {\frac{{\sin nx}}{{nx}}} \right)}^2}} } \right) = \frac{1}{2}.\tag{1}$$





These days, I want to write an article about the sums of divergent series. In Hardy's book Divergent Series, he shows a method proposed by Riemann using
$$\lim_{x \to 0} \left( 1 + \sum_{n = 1}^\infty (-1)^n \left( \frac{\sin nx}{nx} \right)^2 \right),$$ from which we can obtain $$1-1+1-1+\cdots=\frac12.$$
I hope someone can tell me how to prove (1). Thanks!


Answer



We begin with the function $f(x)$ as represented by the series



$$f(x)=\sum_{n=1}^\infty (-1)^n\frac{\sin^2 nx}{n^2x^2} \tag 1$$




Using the identity



$$\sin^2nx=\frac{1-\cos 2nx}{2}$$



in $(1)$ yields



$$\begin{align}
f(x)&=\frac12 \sum_{n=1}^\infty (-1)^n\frac{1-\cos 2nx}{n^2x^2} \\\\
&=\frac12 \sum_{n=1}^\infty \frac{(-1)^n}{n^2x^2}-\frac12\text{Re}\left(\sum_{n=1}^\infty (-1)^n\frac{e^{i2nx}}{n^2x^2}\right)\\\\
&=-\frac{\pi^2}{24x^2}-\frac{1}{2x^2}\text{Re}\left(\text{Li}_2\left(-e^{i2x}\right)\right) \tag 2

\end{align}$$



We expand $\text{Li}_2\left(-e^{i2x}\right)$ in a series around $x=0$ to find



$$\text{Li}_2\left(-e^{i2x}\right)=-\frac{\pi^2}{12}-i2\log (2)x+ x^2+O(x^3) \tag 3$$



Then, upon substituting $(3)$ into $(2)$ we have



$$f(x)=-\frac12 +O(x)$$




Finally, we have



$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\left(1+\sum_{n=1}^\infty (-1)^n\frac{\sin^2 nx}{n^2x^2} \right)=\frac12}$$



as was to be shown!






NOTE:




If we inappropriately interchange the limit with the series and use $\lim_{x\to 0}\frac{\sin^2 nx}{n^2x^2}=1$, we obtain the absurd conclusion that



$$1-1+1-1+1\cdots =\frac12$$


calculus - How can I show that $\sum\limits_{n=2}^\infty\frac{1}{n\ln n}$ is divergent without using the integral test?

How can I show that $\sum \limits_{n=2}^\infty\frac{1}{n\ln n}$ is divergent without using the integral test?



I tried using the comparison test but I could not come up with an inequality that helps me show the divergence of a series. I also tried using the limit comparison test but I was not successful.



Please do not give me solutions; just a hint so that I can figure it out myself.

elementary set theory - Bijection between $[0,1]$ and $[0,1]times[0,1]$

I know that $|\mathbb R|=|\mathbb R\times\mathbb R|$, and that $|[0,1]|=|\mathbb R|$, which suggests that $|[0,1]|=|[0,1] \times [0,1]|$ but I would like to know a bijection between the interval and square.

Saturday, March 19, 2016

integration - Residue Integral: $\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1}\, \mathrm{d}x$


Inspired by some of the greats on this site, I've been trying to improve my residue theorem skills. I've come across the integral



$$\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1} \mathrm{d}x,$$


where $n$ is a positive integer that is at least $2$, and I'd like to evaluate it with the residue theorem. Through non-complex methods, I know that the integral is $0$ for all $n \geq 2$. But I know that it can be done with the residue theorem.


The trouble comes in choosing a contour. We're probably going to do some pie-slice contour, perhaps small enough to avoid any of the $2n$th roots of unity, and it's clear that the outer-circle will vanish. But I'm having trouble evaluating the integral on the contour, or getting cancellation.


Can you help? (Also, do you have a book reference for collections of calculations of integrals with the residue theorem that might have similar examples?)


Answer



We want to prove that the integral is $0$ for $n>1$; it is the same thing as $$\int_0^{\infty} \frac{\mathrm{d}x}{x^n+1} = 2\int_0^{\infty} \frac{x-1}{x^{2n}-1} \ \mathrm{d}x.$$ The left-hand integral is widely known to be $\frac{\pi}{n} \csc \frac{\pi}{n}$; we want to calculate the right-hand integral. Let $f(x)=\frac{x-1}{x^{2n}-1}$, and consider the contour $C=C_1\cup C_2\cup C_3$ where $$C_1=[0,r],\ C_2=\left\{z \in \mathbb{C} \,\middle|\, |z|=r,\ \arg(z) \in \left[0,\frac{\pi}{2n}\right]\right\},\ \ C_3 =e^{\frac{\pi i}{2n}} C_1. $$ Here's what the contour looks like: the boundary of a pie slice of radius $r$, consisting of the segment $C_1$ on the positive real axis, the arc $C_2$, and the segment $C_3$ along the ray $\arg z = \frac{\pi}{2n}$.




Notice that $\int_C f(z) \ \mathrm{d}z=0$ (the integral is taken counter clockwise always) since $f$ is holomorphic inside $C$. and $$\left|\int_{C_2} f(x)\ \mathrm{d}x \right| =\mathcal{O}(r^{-1}) \to 0.$$ And \begin{align*} \int_{C_3}f(z) \ \mathrm{d}z &= e^{\frac{\pi i}{2n}}\int_0^r f\left(x e^{\frac{\pi i }{2n}}\right) \ \mathrm{d}x \\ &=e^{\frac{\pi i}{2n}}\int_0^r \frac{e^{\frac{\pi i}{2n}}x -1}{x^{2n}+1} \ \mathrm{d}x \\ &= e^{\frac{\pi i}{n}}\int_0^r \frac{x }{x^{2n}+1} \ \mathrm{d}x-e^{\frac{\pi i}{2n}}\int_0^r \frac{1}{x^{2n}+1} \ \mathrm{d}x. \end{align*} Note that $\int_{0}^{\infty} \frac{x}{x^{2n}+1} \ \mathrm{d}x = \frac{\pi }{2n} \csc \frac{\pi}{n}$, then by taking $r\to \infty$ we get $$\int_0^{\infty} f(x) \ \mathrm{d}x =-e^{\frac{\pi i}{n}}\cdot \frac{\pi }{2n} \csc \frac{\pi}{n} + e^{\frac{\pi i}{2n}} \frac{\pi }{2n} \csc \frac{\pi}{2n} = \frac{\pi}{2n} \csc \frac{\pi}{n}.$$ Which is what we were looking for.


complex analysis - How to calculate the polynomial coefficients of a fraction of polynomials?

I have a polynomial fraction which results in a polynomial

$$\frac{f(x)}{g(x)}=q(x)$$
with $f$, $g$, and $q$ being polynomials. I have formulas for the coefficients of $f(x)$ and $g(x)$, dependent on the degrees of $f$ and $g$.



Now I am searching for a way to express the coefficients of $q(x)$ as algebraic expressions in the coefficients of $f$ and $g$.



One way I have found so far is the "subresultant PRS algorithm", which allows one to calculate the coefficients of $q(x)$ as appropriate determinants of matrices built from the coefficients of $f$ and $g$.



But these determinants seem not to be calculable by hand, outside a computer-algebra system.



Are there other methods (e.g. algebraic calculus, complex analysis) to tackle such a general problem?
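One concrete remark: when $g$ divides $f$ exactly, the coefficients of $q$ are produced by the long-division recurrence, so each coefficient of $q$ is an algebraic expression in the coefficients of $f$ and $g$ (divided by a power of the leading coefficient of $g$). Here is a minimal C sketch of that recurrence (my illustration, with a hypothetical example):

#include <stdio.h>

/* polynomial long division f = q*g + r over the doubles;
   coefficients are stored highest degree first, g[0] != 0 is assumed */
void poly_div(double f[], int deg_f, const double g[], int deg_g, double q[])
{
    for (int k = 0; k <= deg_f - deg_g; k++) {
        q[k] = f[k] / g[0];              /* next coefficient of q */
        for (int j = 0; j <= deg_g; j++)
            f[k + j] -= q[k] * g[j];     /* cancel the leading term */
    }                                    /* what is left in f is r */
}

int main(void)
{
    double f[] = {1, 0, -1};             /* x^2 - 1 */
    double g[] = {1, -1};                /* x - 1   */
    double q[2];
    poly_div(f, 2, g, 1, q);
    printf("q(x) = %gx + %g\n", q[0], q[1]);  /* prints q(x) = 1x + 1 */
    return 0;
}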

calculus - Is it possible for a continuous function to have a nowhere-continuous derivative?

This is motivated by a question I saw elsewhere that asks whether there is a real-valued function on an interval that contains no monotone subintervals.


Edit: Note that I am asking for a function whose derivative exists but is not continuous anywhere.

sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)




As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem)
$$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$
However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?


Answer



OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus

$$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$
Note that $1/\tan^2 x = 1/\sin^2 x - 1$.
Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
$$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
Denoting the sum on the right-hand side by $S_n$, we can write this as
$$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$

Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,
$$S_n = 4 S_{n-1} + 2.$$
Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)



We now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
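
As a numerical sanity check, here is a short Python sketch (my own addition, not part of the original answer) that verifies the closed form for $S_n$ and shows the two bounds pinching $\pi^2/6$:

```python
import math

def S(n):
    """Sum 1/sin^2(x_k) over the inner gridpoints x_k = (pi/2) * (k/2^n)."""
    return sum(1 / math.sin(math.pi / 2 * k / 2**n) ** 2 for k in range(1, 2**n))

for n in range(1, 7):
    closed = 2 * (4**n - 1) / 3                           # claimed closed form for S_n
    lower = (S(n) - (2**n - 1)) * math.pi**2 / 4**(n + 1)
    upper = S(n) * math.pi**2 / 4**(n + 1)
    print(f"n={n}: S_n={S(n):.4f} closed={closed:.4f} bounds=({lower:.6f}, {upper:.6f})")

print("pi^2/6 =", math.pi**2 / 6)  # both bounds approach 1.644934...
```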


Friday, March 18, 2016

linear algebra - Eigenvalues of a Matrix Using Diagonal Entries


I just started learning about complex eigenvalues and eigenvectors, and one example in the book I am using gives the matrix $A = \begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}$. The book then says that the eigenvalues are the roots of the characteristic equation $\lambda^2 +1=0$. But from an earlier section I learned that the eigenvalues of a triangular matrix are the entries on the main diagonal. $A$ becomes triangular when I use the row-interchange operation: $\begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$. The diagonal entries are $1$ and $-1$, but according to the book, the eigenvalues are $i$ and $-i$.


When given a matrix $A$, can I not use row operations to get it into a row equivalent matrix which is in triangular form and list the diagonal entries as eigenvalues?


Answer



Consider the matrix product $$ Av=\begin{pmatrix}-&a_1&-\\-&a_2&-\\&...&\\-&a_n&-\end{pmatrix}v=\begin{pmatrix}a_1 v\\a_2 v\\\vdots\\a_n v\end{pmatrix}=\lambda v $$ compared to $$ \begin{pmatrix}0&1&&\cdots&\\1&0&&\cdots&\\&&1&\\\vdots&\ddots&&\ddots\\&&&&1\end{pmatrix}Av=\begin{pmatrix}-&a_2&-\\-&a_1&-\\&...&\\-&a_n&-\end{pmatrix}v=\begin{pmatrix}a_2 v\\a_1 v\\\vdots\\a_n v\end{pmatrix}\neq q v \text{ for any scalar } q, $$


so you cannot reuse any eigenvectors.



So what about the eigenvalues? We have $$ \det(A-\lambda I)=0 $$ and if $B$ equals $A$ with two rows interchanged we have $$ \det(B-\lambda I)=0, $$ but the rows of $\lambda I$ have NOT been interchanged, so the $\lambda$'s have essentially been attached to different positions in different row vectors. I know none of this is very deep or general in nature, but it explains what is going on in your specific example.
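
To see this concretely, here is a small NumPy check (my own addition, assuming NumPy is available) that the row swap changes the spectrum:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = A[[1, 0], :]  # A with its two rows interchanged

print(np.linalg.eigvals(A))  # [0.+1.j 0.-1.j]  -> eigenvalues i and -i
print(np.linalg.eigvals(B))  # [ 1. -1.]        -> a different spectrum
```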



summation - What is the sum of the series $x^{n-1} + 2x^{n-2} + 3x^{n-3} + 4x^{n-4} + \cdots + nx^{n-n}$?

$x^{n-1} + 2x^{n-2} + 3x^{n-3} + 4x^{n-4} + \cdots + nx^{n-n}$




I'm not sure if this can even be summed. Any help is appreciated.

elementary number theory - Which is the fastest way to solve these two problems?



I have two problems which are based on the sequence $A007376$.





  1. Natural numbers starting with $1$ are written one after another like $123456789101112131415\cdots$; how can we find the $10^4$th digit from the left?

  2. A hundred-digit number is formed by writing the first $x$ natural numbers one after another
    as $123456789101112131415\cdots$; how can we find the remainder when this number is divided by $8$?



The OEIS doesn't provide any formula that could be turned into an under-a-minute solution. As this is a quantitative-aptitude problem, I was wondering: what is the fastest way to approach it?


Answer



There are $9$ one-digit numbers, giving the first $9$ digits.




Then there are $90$ two-digit numbers, giving the next $180$ digits; total, $189$ digits, so far.



There are $900$ three-digit numbers, giving $2700$ digits, total $2889$.



To get to $ 10,000$, you need another $7111$, which is $7108/4=1777$ four-digit numbers, and the first $3$ digits of the $1778$th four-digit number. You should be able to figure out what those are.
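
A brute-force cross-check of this counting in Python (my own sketch, not part of the original answer):

```python
def nth_digit(n):
    """n-th digit (1-indexed) of the sequence 123456789101112..."""
    length, count, start = 1, 9, 1
    while n > length * count:       # skip the block of all `length`-digit numbers
        n -= length * count
        length += 1
        count *= 10
        start *= 10
    number = start + (n - 1) // length
    return str(number)[(n - 1) % length]

print(nth_digit(10_000))  # '7': the third digit of 2777, the 1778th four-digit number
```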



For the hundred digit number, same process, then remember that the remainder on division by $8$ depends only on the last $3$ digits.
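
And for the hundred-digit number, a quick check (again my own sketch) that the last three digits settle the remainder, since $1000$ is divisible by $8$:

```python
s = ""
n = 1
while len(s) < 100:
    s += str(n)
    n += 1
s = s[:100]                       # the hundred-digit number 1234...545

print(s[-3:], int(s[-3:]) % 8)    # 545 1  -> the remainder is 1
print(int(s) % 8)                 # 1, agreeing with the last-three-digits shortcut
```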


algebra precalculus - Why the equation $3\cdot0=0$ needs to be proven



In Algebra by Gelfand, page 21 (for anyone owning the book),
he tries to prove that $3\cdot(-5) + 15 = 0$.
Here's his proof: $3\cdot(-5) + 15 = 3\cdot(-5) + 3\cdot5 = 3\cdot(-5+5) = 3\cdot0 = 0$. After that he says:





The careful reader will ask why $3\cdot0 = 0$.




Why does this equation need to be proven?
I asked somewhere and was told that $a\cdot0=0$ is an axiom, which maybe Gelfand didn't assume was true during his proof.
But why does it need to be an axiom when it's provable?
In the second step of his proof he converted $15$ to $3\cdot5$, so multiplication was defined, and then
$a\cdot 0 = \underbrace{0 + 0 + \cdots + 0}_{a\text{ times}} = 0$.
I'm aware multiplication is defined as repeated addition only for integers,
but $3$ is an integer, so this definition works in my example.



In case my question wasn't clear it can be summed up as:
Why does he take $3\cdot5=15$ for granted but think $3\cdot0=0$ needs an explanation?


Answer



Gelfand doesn't really take $3 \cdot 5 = 15$ for granted; in the ordinary course of events, this would need just as much proof as $3 \cdot 0$.



But the specific value $15$ isn't important here; we're really trying to prove that if $3 \cdot 5 = 15$, then $3 \cdot (-5) = -15$. That is, we want to know that making one of the factors negative makes the result negative. If you think of this proof as a proof that $3 \cdot (-5) = -(3 \cdot 5)$, then there's no missing step.




The entire proof could be turned into a general proof that $x \cdot (-y) = -(x\cdot y)$ with no changes; I suspect that the authors felt that this would be more intimidating than using concrete numbers.



If we really cared about the specific value of $3 \cdot 5$, we would need proof of it. But to prove that $3 \cdot 5 = 15$, we need to ask: how are $3$, $5$, and $15$ defined to begin with? Probably as $1+1+1$, $1+1+1+1+1$, and $\underbrace{1+1+\dots+1}_{\text{15 times}}$, respectively, in which case we need the distributive law to prove that $3 \cdot 5 = 15$. Usually, we don't bother, because usually we don't prove every single bit of our claims directly from the axioms of arithmetic.



Finally, we don't usually make $x \cdot 0 = 0$ an axiom. For integers, if we define multiplication as repeated addition, we could prove it as you suggest. But more generally, we can derive it from the property that $x + 0 = x$ (which is usually taken as a definition of what $0$ is) and the other laws of multiplication and addition given in this part of the textbook.
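
For instance, here is the standard one-line derivation alluded to above: since $0 + 0 = 0$, distributivity gives
$$x\cdot 0 = x\cdot(0+0) = x\cdot 0 + x\cdot 0,$$
and adding $-(x\cdot 0)$ to both sides yields $x\cdot 0 = 0$.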


real analysis - A discontinuous function such that $f(x + y) = f(x) + f(y)$




Is it possible to construct a function $f \colon \mathbb{R} \to \mathbb{R}$ such that $$f(x + y) = f(x) + f(y)$$ and $f$ is not continuous?


Answer




"Fix a basis for $\mathbb R$ as a $\mathbb Q$-vector space" (which exists under the axiom of choice, but under weaker axioms I have no idea what happens). The condition $f(x+y) = f(x) + f(y)$ is equivalent to $f$ being $\mathbb Q$-linear, so you're asking if there exists a non-trivial discontinuous map of $\mathbb Q$-vector spaces between $\mathbb R$ and itself. If you map the basis elements to other basis elements in a discontinuous way, you will obtain such a map.



Added: A quick way to see that "a discontinuous way of doing it" exists is that the set of $\mathbb Q$-linear maps that are also $\mathbb R$-linear has the cardinality of the reals, whereas the set of all $\mathbb Q$-linear maps (in other words, the number of maps between the basis elements) has cardinality $|\mathbb R|^{|\mathbb R|}$. To understand why, note that the basis for $\mathbb R$ as a $\mathbb Q$-vector space has cardinality $\le |\mathbb R|$; but if it were countable, then $\mathbb R$ would be countable, because every real would be a linear combination of elements of a countable set with countably many possible coefficients. Since the basis for $\mathbb R$ as a $\mathbb Q$-vector space is uncountable, the set of all maps from one basis to another is also uncountable. Therefore there must be at least one map between the two bases which generates a discontinuous linear map.



Hope that helps,


calculus - Proof that the following limit equals $0$


I have the following limit:


$$\lim_{n\to\infty} \left[\frac{\cos(n)+t\sin(n)}{e^{tn}}\right]$$


I wish to show that the limit equals $0$ rigorously. I have a good sense that it equals zero by taking the Taylor series expansion of the top and comparing it with that of the bottom. (The magnitudes of the terms are the same, but the numerator has alternating signs, whereas the denominator is strictly positive. I should say: assume that $t>0$.)


I'm not sure, however, whether that approach can be made rigorous. Beyond that, I am at a loss for a theorem or definition with which to prove it.


Answer



Note that $-1-t\leq\cos(n)+t\sin(n)\leq 1+t$



So we can use the squeeze theorem:


$$0=\lim_{n\to\infty} \frac{-1-t}{e^{tn}}\leq \lim_{n\to\infty} \frac{\cos(n)+t\sin(n)}{e^{tn}}\leq \lim_{n\to\infty} \frac{1+t}{e^{tn}}=0$$


Thursday, March 17, 2016

elementary number theory - How to approach this modulo proof?

I'm really stuck on how I might begin approaching the following proof; I've tried a few things, which I've listed at the bottom, together with my own inklings of what I might try, but I'm thoroughly stumped. Here's the question I'm trying to answer:



$$ \forall\, m \in \mathbb{Z}:\quad m^2 \bmod 7 \in \{0,1,2,4\}$$




I've tried breaking this into cases, where $m$ is either odd or even, and then trying to find the remainder for $m$ alone, using the fact that if
$$ a \equiv b \mod{d} $$ and $$c \equiv e \mod{d}$$



then $$ ac \equiv be \mod{d}$$



and then using this to square the results. I've also tried going back to the definition of modulo, but I can't solve the floor-function expression I get:



$$ m = 2k - 7\left\lfloor\frac{2k}{7}\right\rfloor$$



Can anyone help me out here? Really struggling to figure out how to prove this :S
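
For what it's worth, here is a one-line brute-force check of the claim (my own addition). Since $(m+7)^2 \equiv m^2 \pmod 7$, it suffices to examine $m = 0,\dots,6$:

```python
print(sorted({m * m % 7 for m in range(7)}))  # [0, 1, 2, 4]
```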

wolfram alpha doesn't plot implicit function with range



Recently I read an article in a magazine describing how an inverse graphing calculator on this page adds domains and ranges to implicit functions. Unfortunately the page is down and I can't see it for myself, so I tried to plot one of the formulas in wolfram alpha,




(y-x)^2 + (y^2 - 6*y + 8 + sqrt(y^4 - 12*y^3 + 52*y^2 - 96*y + 64))^2=0


but it doesn't plot it; it just shows an empty graph. It does, however, show the correct solution:



for 2 <= x <= 4, y=x


I also tried plotting it in GeoGebra and in some online tools that can handle implicit functions, but they also just give empty graphs.

Is it too complicated? Is the formula correct? And if so, why doesn't it plot the equation?



Thanks in advance.


Answer



Most numerical implicit-function plotters actually depend on recognizing changes of sign. Basically, if you want to plot $f(x,y) = 0$, you start by sampling a bunch of points $f(x_i,y_j)$, and you know that if the function has opposite signs at two neighbouring points, there should be a piece of the curve somewhere between them. But if you have a function such that $f(x,y) \ge 0$ for all $x,y$, then unless you are lucky enough to hit exactly on a point where $f(x,y) = 0$, you will only see positive values, and you will never detect the presence of a curve. That's the case here, since your function is a sum of two squares.
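
Here is a minimal sketch of that sign-change idea in Python (my own illustration, not the actual algorithm of any particular plotter): sample the function on a grid and look for negative values.

```python
import numpy as np

def f(x, y):
    # the poster's function; note y^4 - 12y^3 + 52y^2 - 96y + 64 = ((y-2)(y-4))^2 >= 0,
    # so the square root is real (np.maximum guards against rounding below zero)
    inner = np.maximum(0.0, y**4 - 12*y**3 + 52*y**2 - 96*y + 64)
    return (y - x)**2 + (y**2 - 6*y + 8 + np.sqrt(inner))**2

xs = np.linspace(0, 6, 400)
X, Y = np.meshgrid(xs, xs)
Z = f(X, Y)

print("minimum sampled value:", Z.min())       # ~0 on the segment y = x, 2 <= x <= 4
print("any negative samples?", (Z < 0).any())  # False -> no sign changes to detect
```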


recreational mathematics - Find number of digits of N in base b



Given an integer $N$, we can find that it has
$ \lfloor\log_bN\rfloor+1$
digits in base $b$.

But what if $N$ is not in base $10$, but is in base $a$?





Can we calculate the number of digits that $N$, written in base $a$, has in base $b$, without converting $N_a$ to $N_{10}$ (its decimal representation)?







What I have tried:



If we have a case like, for example, $1234567_8$ and $b=2$, then we can solve $2^x=8 \to x=3$ and know that each base-$8$ digit takes three digits to write in base $2$ (including leading zeroes). But the most significant digit should not be padded with leading zeroes, so we evaluate it by itself: here it is $1$, which takes only one digit in base $2$. Thus, our number written in octal will have $3\times 6 +1=19$ digits in binary. But what if $x$ is not an integer?



I've thus tried to round the values to the nearest integer and put this method into a formula. In general, we follow this approach:






Since we can't evaluate $N_a$ (convert it to base $10$), we can instead count how many digits it has.
Also, I need to "cheat" a bit by evaluating only the leading (most significant) digit of the number $N_a$.



If $n$ is the number of digits of $N_a$ and $d$ is its leading digit, then $N_b$ has this many digits: $$(n-1)[\log_ba]+\lfloor\log_bd\rfloor+1$$



($[x]$ rounds the number to the nearest integer, and $\lfloor x\rfloor$ is the floor function.)





How can we check or prove whether this expression is always exactly correct? I've tried only a handful of examples and am not sure how to fully check this. (It is correct if $x$ is an integer; the rounding is the only thing that bothers me.)



Is there a better way to calculate or write a solution to this problem?



Answer



When the number $N$ is a variable with an arbitrary value, the number of base-$b$ digits is indeed $\lfloor \log_b N\rfloor+1$.



If $N$ is a concrete number written in base $a$ with $n$ digits, its value lies in $[a^{n-1},a^n-1]$ and the number of base-$b$ digits is in



$$\big[\lfloor \log_b a^{n-1}\rfloor+1,\lfloor \log_b(a^n-1)\rfloor+1\big]$$ which is most of the time




$$\big[\lfloor (n-1)\log_ba\rfloor+1,\lfloor n\log_ba\rfloor+1\big].$$
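
To see how well the rounded formula does, here is a small Python experiment (my own addition, with randomly chosen bases); as the bracket above suggests, it can be off when $\log_b a$ is far from an integer:

```python
import math, random

def to_base(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1]

random.seed(0)
for _ in range(8):
    a, b = random.randint(2, 16), random.randint(2, 16)
    N = random.randint(10**3, 10**9)
    exact = len(to_base(N, b))
    base_a = to_base(N, a)
    n, d = len(base_a), base_a[0]          # length in base a and leading digit
    approx = (n - 1) * round(math.log(a, b)) + math.floor(math.log(d, b)) + 1
    print(a, b, exact, approx, "OK" if exact == approx else "mismatch")
```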


permutations - How many factors of $10$ in $100!$











How many factors of 10 are there in $100!$ (IIT Question)?



Is it $26$, $25$, $24$, or some other value?



Please explain how you worked it out.


Answer



So first, we want to find how many factors of $5$ there are in $100!$. There are $20$ numbers divisible by $5$ from $1$ to $100$, so we start off our count at $20$. Then, we count how many numbers are divisble by $5^2$. There are four: $25, 50, 75, 100$, and so we add four to our count to get $24$ factors of $5$. (Note that we don't add eight fives - if we did so, we would be counting the first factors of five twice!)




Since $5^3>100$, we don't have to worry about third powers of five. There are at least $100/2=50$ factors of $2$ in $100!$, but we're only going to use $24$ of them to get our $24$ factors of $10$, so we don't have to calculate the exact number of factors of $2$ in $100!$.



So the basic method: to find how many factors of $a$ there are in $b!$, first decompose $a$ into its primes $p_n$, and then find out how many factors of each prime $p_n$ there are in $b!$, by using the method I described of checking for divisibility by $p_n$, then $p_n^2$, etc. Then, from this pool of factors, figure out how many copies of $a$ you can assemble. In our example, to make $10^n$ we could take a maximum of $24$ fives and $24$ twos. If we instead wanted to find how many factors of $40$ ($=2^3\cdot 5$) there are in $100!$, we would need to know exactly how many factors of $2$ there are in $100!$, and then take $24\cdot 3$ twos if there are enough, or fewer if there aren't.
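
The prime-counting step described here is Legendre's formula, $\sum_{i\ge1}\lfloor b/p^i\rfloor$; a short sketch of it (my own addition):

```python
def vp_factorial(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    count, power = 0, p
    while power <= n:
        count += n // power
        power *= p
    return count

fives = vp_factorial(100, 5)   # 20 + 4 = 24
twos = vp_factorial(100, 2)    # 50 + 25 + 12 + 6 + 3 + 1 = 97
print(min(fives, twos))        # 24 factors of 10 in 100!
print(min(twos // 3, fives))   # 24 factors of 40 = 2^3 * 5 as well
```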



See also: youtube Factors of Factorials Part 1


complex analysis - How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) + i\sin(\varphi)$?




Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?


Answer



Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the MacLaurin series for sine and cosine, which are known to converge for all real $x$ in a first-year calculus context, and the MacLaurin series for $e^z$, trusting that it converges for pure-imaginary $z$ since this result requires complex analysis.



The MacLaurin series:
\begin{align}
\sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots
\\\\
\cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots
\\\\
e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots
\end{align}



Substitute $z=ix$ in the last series:
\begin{align}
e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots
\\\\
&=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots
\\\\
&=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right)
\\\\
&=\cos x+i\sin x
\end{align}
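
One can also sanity-check the series manipulation numerically by summing a truncated MacLaurin series for $e^{ix}$ (a quick sketch, my own addition):

```python
import math

def exp_series(z, terms=30):
    """Partial sum of the MacLaurin series sum z^n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= z / (n + 1)
    return total

x = 1.2345
lhs = exp_series(1j * x)
rhs = math.cos(x) + 1j * math.sin(x)
print(abs(lhs - rhs))  # ~1e-16: the truncated series matches cos x + i sin x
```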


real analysis - Function Satisfying $f(x)=f(2x+1)$



If $f: \mathbb{R} \to \mathbb{R}$ is a continuous function and satisfies $f(x)=f(2x+1)$, then it's not too hard to show that $f$ is constant.



My question is: suppose $f$ is continuous and satisfies $f(x)=f(2x+1)$; can the domain of $f$ be restricted so that $f$ need not be constant? If yes, give an example of such a function.


Answer



Let $f$ have value $1$ on $[0,\infty)$ and value $0$ on $(-\infty,-1]$. This function is not constant (although it is locally constant), and it satisfies $f(x)=f(2x+1)$ whenever $x$ is in its domain: if $x\ge 0$ then $2x+1\ge 1$, and if $x\le -1$ then $2x+1\le -1$, so each piece of the domain is mapped into itself and $f(2x+1)=f(x)$.



Wednesday, March 16, 2016

Cauchy functional equation three variables

If I have a function from $\mathbb{R}^3$ to $\mathbb{R}$ satisfying



$f(x_1,x_2,x_3)+f(y_1,y_2,y_3) = f(x_1+y_1,x_2+y_2,x_3+y_3),$


is it necessarily linear?


$f(z_1,z_2,z_3) = \lambda _1 z_1+\lambda _2 z_2+\lambda _3 z_3$


I wasn't sure if this is a direct consequence of results on Cauchy's functional equation or not.

calculus - Function as a "constant of integration"



I'm reading the book Differential Equations with Applications and Historical Notes, 3rd edition, specifically section 8 about exact equations. The author is trying to prove that the equation
\begin{equation}
M(x,y)dx + N(x,y)dy = 0
\end{equation}
is an exact differential equation if and only if $\partial M/\partial y = \partial N/\partial x$.
At some point we integrate equation

\begin{equation}
\frac{\partial f(x,y)}{\partial x} = M(x,y)
\end{equation}

to get
\begin{equation}
f(x, y) = \int M(x,y)dx + g(y)
\end{equation}

The author states that the function $g(y)$ appears as a "constant of integration" because if we take the derivative of both sides with respect to $x$, then $g(y)$ disappears, since it doesn't depend on $x$.
That's the part I have trouble with: $y$ is a dependent variable and $x$ is the independent variable, so wouldn't the derivative of $g(y)$ with respect to $x$ be
\begin{equation}
\frac{d\,g(y)}{dy} \frac{dy}{dx}
\end{equation}
and not $0$?


Answer



This is a common poorly written part in differential equations textbooks, because they don't want to spend time discussing differential forms.



At this point we forget that $y$ depends on $x$. Of course then the equation $M(x,y)dx+N(x,y)dy=0$ looks weird, and indeed it's wrong. What is meant there is that if we have a dependence of $x$ and $y$, a curve on $x$-$y$ plane, denoted $\gamma$, then the pullback of $M(x,y)dx+N(x,y)dy$ on $\gamma$ is $0$. For example, if we can parametrize $\gamma$ by $x$ (i.e. we can write $y$ as a function of $x$), then this condition says $\frac{dy}{dx} = -\frac{M(x,y)}{N(x,y)}$. That's why we want to find such $\gamma$.



The exactness condition means that $df=M(x,y)dx+N(x,y)dy$. Then the level sets of $f$, $\{(x,y)\mid f(x,y)=c\}$, give us such $\gamma$'s. Note that exactness follows from closedness on simply connected domains.



So, one can separate this problem into two stages: one where $x$ and $y$ are independent, and one where we look for the required dependence.




Alternatively, instead of using differential forms, one can think of $(M,N)$ as a vector field on the $x$-$y$ plane that is perpendicular to the $\gamma$'s, the level sets of $f$; this vector field is precisely the gradient of $f$.
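
A concrete worked example may help (my own illustration, not part of the original answer). Take $M = 2xy$ and $N = x^2 + y$, so that $\partial M/\partial y = 2x = \partial N/\partial x$. Treating $x$ and $y$ as independent,
$$f(x,y) = \int 2xy\,dx + g(y) = x^2 y + g(y),$$
and demanding $\partial f/\partial y = N$ gives $x^2 + g'(y) = x^2 + y$, i.e. $g(y) = y^2/2$. The solution curves $\gamma$ are then the level sets $x^2 y + y^2/2 = c$.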


divisibility - Prove that $4^{2n+1}+3^{n+2}$, $\forall n\in\mathbb{N}$, is a multiple of $13$


How to prove that $\forall n\in\mathbb{N},\exists k\in\mathbb{Z}:4^{2n+1}+3^{n+2}=13\cdot k$



I've tried to do it by induction. For $n=0$ it's trivial.


Now for the general case, I decided to drop the idea of working with $13\cdot k$ and instead try to prove it via congruences. So I'd need to prove that $4^{2n+1}+3^{n+2}\equiv0\pmod{13} \longrightarrow 4^{2n+3}+3^{n+3}\equiv0\pmod{13}$, that is, $4^{2n+1}+3^{n+2}\equiv4^{2n+3}+3^{n+3}\pmod{13}$.


But I have no clue how to do this. Any help?


Answer



$$4^{2n+3}+3^{n+3}= 16 \times 4^{2n+1} + 3 \times 3^{n+2} =13 \times 4^{2n+1} +3 \times(4^{2n+1}+3^{n+2})$$
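
A quick brute-force confirmation of the statement for small $n$ (my own addition):

```python
print(all((4**(2*n + 1) + 3**(n + 2)) % 13 == 0 for n in range(100)))  # True
```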


Tuesday, March 15, 2016

probability - Die roll, value of 4 must follow value of 1 to win

I came across a "die roll" probability question that has me stumped.


Two fair dice are being rolled by players A and B, who alternate, with A rolling first (i.e. A then B then A then B... so long as the game hasn't been won). In order to win the game, a player must roll a 4 immediately following the previous player's 1. What's the probability that A wins the game?


The normal form of this question -- "the first player to roll a 4" -- is simple enough, but I'm having a hard time understanding how the conditional aspect changes the calculation. Any thoughts?
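
No derivation is given here, but a Monte Carlo sketch (entirely my own addition, and only an estimate) can suggest the value under one reading of the rules: a single shared sequence of rolls, where a player wins by rolling a 4 immediately after the opponent's 1.

```python
import random

def a_wins():
    prev, a_turn = None, True
    while True:
        roll = random.randint(1, 6)
        if roll == 4 and prev == 1:
            return a_turn        # the player who just rolled wins
        prev, a_turn = roll, not a_turn

random.seed(0)
trials = 200_000
print(sum(a_wins() for _ in range(trials)) / trials)  # ~0.493
```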

limits - How to find $\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}$?





How to find $\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}$?
I tried taking logarithms to bring the expression into sum form and then tried L'Hôpital's rule, but it's not working. Please help!



This is what Wolfram Alpha shows, but it doesn't provide the steps!




BTW if someone can tell me a method without using integration, I'd love to know!


Answer



Note



\begin{align}\frac{(n!)^{1/n}}{n} &= \left[\left(1 - \frac{0}{n}\right)\left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots \left(1 - \frac{n-1}{n}\right)\right]^{1/n}\\
&= \exp\left\{\frac{1}{n}\sum_{k = 0}^{n-1} \log\left(1 - \frac{k}{n}\right)\right\}
\end{align}



and the last expression converges to




$$\exp\left\{\int_0^1\log(1 - x)\, dx\right\} = \exp(-1) = \frac{1}{e}.$$



Alternative: If you want to avoid integration, consider the fact that if $\{a_n\}$ is a sequence of positive real numbers such that $\lim\limits_{n\to \infty} \frac{a_{n+1}}{a_n} = L$, then $\lim\limits_{n\to \infty} a_n^{1/n} = L$.



Now $\frac{(n!)^{1/n}}{n} = a_n^{1/n}$, where $a_n = \frac{n!}{n^n}$. So



$$\frac{a_{n+1}}{a_n} = \frac{(n+1)!}{(n+1)^{n+1}}\cdot \frac{n^n}{n!} = \frac{n+1}{n+1}\cdot\frac{n^n}{(n+1)^n} = \left(\frac{n}{n+1}\right)^n = \left(\frac{1}{1 + \frac{1}{n}}\right)^n = \frac{1}{\left(1 + \frac{1}{n}\right)^n}.$$



Since $\lim\limits_{n\to \infty} (1 + \frac{1}{n})^n = e$, then $$\lim_{n\to \infty} \frac{a_{n+1}}{a_n} = \frac{1}{e}.$$




Therefore $$\lim_{n\to \infty} \frac{(n!)^{1/n}}{n} = \frac{1}{e}.$$
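
A quick numerical check (my own addition); `lgamma` avoids overflowing the factorial:

```python
import math

for n in (10, 100, 1_000, 10_000):
    # (n!)^(1/n) / n = exp(log(n!)/n) / n, with log(n!) = lgamma(n + 1)
    print(n, math.exp(math.lgamma(n + 1) / n) / n)

print("1/e =", 1 / math.e)  # the values decrease toward 0.367879...
```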


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...