Monday, August 31, 2015

real analysis - How do you prove that if $\lim\limits_{n \to \infty}a_n=1$, then $\lim\limits_{n \to \infty}\frac{1}{1+a_n}=\frac{1}{2}$?


More precisely:


Prove using only the $\epsilon$-$N$ definition of convergence that if $\lim\limits_{n \to \infty}a_n=1$ and $a_n>-1$ for all $n\in \mathbb{N}$, then $\lim\limits_{n \to \infty}\frac{1}{1+a_n}=\frac{1}{2}$ .


Here's what I have so far:


  1. Let $\{a_n\}$ be a sequence and suppose $\lim\limits_{n \to \infty}a_n=1$ and $a_n>-1$ for all $n\in \mathbb{N}$.

  2. Then for all $\epsilon>0$, there exists $N\in \mathbb{N}$ such that for all $n\ge N$, $|a_n-1|<\epsilon$ by the $\epsilon$-$N$ definition of convergence.

  3. Then $-\epsilon<a_n-1<\epsilon$

  4. Then $-\epsilon<1+a_n-2<\epsilon$


  5. Then $\frac{1}{-\epsilon}<\frac{1}{1+a_n}-\frac{1}{2}<\frac{1}{\epsilon}$

  6. Then $|\frac{1}{1+a_n}-\frac{1}{2}|<\frac{1}{\epsilon}$

  7. Let $\epsilon'=\frac{1}{\epsilon}$

  8. Then for all $\epsilon'>0$, there exists $N\in \mathbb{N}$ such that for all $n\ge N$, $|\frac{1}{1+a_n}-\frac{1}{2}|<\epsilon'$

  9. Therefore, $\lim\limits_{n \to \infty}\frac{1}{1+a_n}=\frac{1}{2}$ by the $\epsilon$-$N$ definition of convergence.

Is this a valid proof? In particular, I am not sure about step 5. Intuition tells me that it is correct; but I am not 100% sure about the algebra.


Answer



You should start with $\frac{1}{1+a_n}$ rather than $a_n$. Here is a standard answer.


For any $\epsilon>0$, we want to find an $N$ such that for all $n>N$, $$\left|\frac{1}{1+a_n}-\frac12\right|=\frac{|2-(1+a_n)|}{2(1+a_n)}=\frac{|1-a_n|}{2(1+a_n)}<\epsilon.$$ Since $\lim a_n=1$, there exists an $N_1$ such that $a_n>0$ for any $n>N_1$. Choose $N_2$ such that $|1-a_n|<2\epsilon$ for any $n>N_2$. Let $N=\max{(N_1,N_2)}$, then for any $n>N$, we have $$\frac{|1-a_n|}{2(1+a_n)}<\frac{2\epsilon}{2}=\epsilon.$$ From the definition we finally prove the desired result.


algebra precalculus - 8^n - 3^n is divisible by 5

My computations:
Base case: $8 - 3 = 5$ is divisible by $5$.
Inductive hypothesis:
$8^k - 3^k = 5M$
$8^k = 5M + 3^k$
$3^k = 8^k - 5M$



My final answer is $5(11M + 11^k)$
Please tell me if I have any mistake regarding my computations.
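For what it's worth, the inductive step can be completed as $8^{k+1}-3^{k+1}=8\cdot 8^k-3\cdot 3^k=8(5M+3^k)-3\cdot 3^k=5(8M+3^k)$, which suggests the posted final answer $5(11M+11^k)$ contains a slip. A short check (an editor's sketch, not from the original post; variable names are mine):

```python
# Editor's sketch: check divisibility by 5 and the inductive identity
#   8^(k+1) - 3^(k+1) = 8*(8^k - 3^k) + 5*3^k,
# which, with 8^k - 3^k = 5M, gives 5*(8M + 3^k).
divisible_all = all((8**n - 3**n) % 5 == 0 for n in range(1, 200))

identity_all = True
for k in range(1, 50):
    M = (8**k - 3**k) // 5
    lhs = 8**(k + 1) - 3**(k + 1)
    identity_all &= (lhs == 8 * (8**k - 3**k) + 5 * 3**k == 5 * (8 * M + 3**k))
```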

sequences and series - Does $\sum\limits_{k=1}^n 1 / k ^ 2$ converge when $n\rightarrow\infty$?



I can prove this sum has a constant upper bound like this:



$$\sum_{k=1}^n \frac1{k ^ 2} \lt 1 + \sum_{k=2}^n \frac 1 {k (k - 1)} = 2 - \frac 1 n \lt 2$$



And computer calculation shows that sum seems to converge to 1.6449. But I still want to know:





  • Does this sum converge?

  • Is there a name of this sum (or the series $1 / k ^2 $)?


Answer



A sequence that is increasing and bounded must converge. That's one of the fundamental properties of the real line. So once you've observed that your sequence of partial sums is bounded, since it obviously increases, it must converge. Of course it is a very famous series, and it converges to a number which quite miraculously has a "closed form" formula: it is $\pi^2/6$.
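As a quick illustration (an editor's sketch, not part of the original answer), the increasing partial sums stay below the bound $2$ derived in the question and creep toward $\pi^2/6 \approx 1.6449$:

```python
import math

# Editor's sketch: increasing partial sums, bounded above by 2,
# converging to pi^2/6 (the tail of the series is about 1/n).
def partial_sum(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

s_small, s_big = partial_sum(100), partial_sum(1_000_000)
gap = abs(s_big - math.pi**2 / 6)
```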



EDIT: for many proofs of this famous formula, see this MO question.


number theory - Prove that $2^{1/n}$ is irrational

Proof by contradiction, Assume $2^{1/n}$ is rational so:



$$2^{1/n} = \frac ab $$

where a,b have no common factors.



$$2 = \frac{a^n}{b^n}$$



$2$ divides LHS, therefore $2$ divides RHS
so $2$ divides $a^n$ or $2$ divides $b^n$ which implies $2$ divides $a$ or $2$ divides $b$.



Stuck on what to do next.

Sunday, August 30, 2015

trigonometry - What is the exact value of sin 1 + sin 3 + sin 5 ... sin 177 + sin 179 (in degrees)?

My attempt:



First I converted this expression to sum notation:



$\sin(1^\circ) + \sin(3^\circ) + \sin(5^\circ) + ... + \sin(175^\circ) + \sin(177^\circ) + \sin(179^\circ)$ = $\sum_{n=1}^{90}\sin(2n-1)^\circ$



Next, I attempted to use Euler's formula for the sum, since I needed this huge expression to be simplified in exponential form:



$\sum_{n=1}^{90}\sin(2n-1)^\circ$ = $\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$




$\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$ = $\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$



$\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$ = $\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$



Next, I used the sum of the finite geometric series formula on this expression:



$\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$ = $\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$



$\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$ = $\operatorname{Im}(\dfrac{2e^i}{1-e^{2i}})$




Now I'm stuck here.
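For reference, a hedged continuation of the attempt (an editor's sketch, not the asker's): the known identity $\sum_{k=1}^{N}\sin\big((2k-1)\theta\big)=\frac{\sin^2(N\theta)}{\sin\theta}$ with $\theta=1^\circ$, $N=90$ predicts the exact value $\frac{\sin^2(90^\circ)}{\sin 1^\circ}=\frac{1}{\sin 1^\circ}$, which a quick computation confirms numerically:

```python
import math

# Editor's sketch: compare the degree sum with 1/sin(1°), the value predicted by
#   sum_{k=1}^{N} sin((2k-1)θ) = sin²(Nθ)/sin(θ)  with θ = 1°, N = 90.
deg = math.pi / 180
total = sum(math.sin((2 * n - 1) * deg) for n in range(1, 91))
exact = 1 / math.sin(deg)
error = abs(total - exact)
```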

real analysis - Finding $\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}$


I have to find $\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}$.


What I've got:


I express $n!$ as the product of the sequence $b_n=n$.


I know (from a previous problem) that $$\frac{1}{\sum_{i=1}^n\frac{1}{n}} \le \frac{\sqrt[n]{n!}}{n} \le \frac{\sum_{i=1}^n i}{n^2}=\frac{1}{2}$$



From this I know that the limit of the sequence is between $0$ and $\frac{1}{2}$ since $\sum_{i=1}^n\frac{1}{n}$ is divergent.


Second attempt:


I looked up the answer and I saw that the limit is $\frac{1}{e}$, so I tried expressing $\frac{\sqrt[n]{n!}}{n}$ as $$\sqrt[n]{\frac{(1-\frac{1}{n})(1-\frac{2}{n})...(1-1+\frac{1}{n})}{n^{n-1}}}$$ since I know that $(1-\frac{1}{n})^n$ converges to $\frac{1}{e}$, but this also didn't get me anywhere.


Thanks in advance!


Answer



Hint:


Use the following famous Stirling formula: $$ \lim_{x\to +\infty} \frac{\Gamma(x+1)}{\left(\frac{x}{e}\right)^x \sqrt{2x} }=\sqrt{\pi}, $$ where $\Gamma$ is the Gamma function of Euler and $n! =\Gamma(n+1)$ for $n\in \mathbb{N}$.
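A numerical sanity check of the resulting limit $1/e$ (an editor's sketch; `lgamma` avoids computing $n!$ directly, since $(n!)^{1/n}/n = \exp(\ln\Gamma(n+1)/n - \ln n)$):

```python
import math

# Editor's sketch: (n!)^(1/n) / n = exp(lgamma(n+1)/n - log n) should tend to 1/e.
def a(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

val = a(100_000)
error = abs(val - 1 / math.e)
```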


modular arithmetic - Insight into $\pmod {2^n \pm 1}$

When I was in elementary school I was taught that in order to find if a number is divisible by 3, if the number is big, we could add up all the separate digits that
made it up. If the result is divisible by 3, the original number is divisible by 3 as well. This was the basis for a nice insight.



First, what is the explanation for this? Let's say we have the number 4728.




$$
\begin{aligned}
4728 \pmod 3 &= (4720 + 8) \pmod 3 \\
&= (4720 \pmod 3 + 8 \pmod 3) \pmod 3 \\
&= [(472 \times 10) \pmod 3 + 8 \pmod 3] \pmod 3 \\
&= \big[[(472 \pmod 3)(10 \pmod 3)] \pmod 3 + 8 \pmod 3\big] \pmod 3 \\
&= [(472 \pmod 3) \pmod 3 + 8 \pmod 3] \pmod 3 \\
&= (472 \pmod 3 + 8 \pmod 3) \pmod 3 \\
&= [(472 + 8) \pmod 3] \pmod 3
\end{aligned}
$$



... and then it becomes a recursive operation... till we get to



$$(4 + 7 + 2 + 8) \pmod 3 = 21 \pmod 3 = 0$$



$$4728 \pmod 3 = 21 \pmod 3 = 0$$




Of course, this trick is possible because $10 \pmod 3 = 1$... and this is when the insight begins.
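In code, the digit-sum rule looks like this (a minimal sketch of mine, not from the post):

```python
# Editor's sketch: a number is congruent (mod 3) to its decimal digit sum,
# because 10 ≡ 1 (mod 3).
def digit_sum(n):
    return sum(int(d) for d in str(n))

example = digit_sum(4728)  # 4 + 7 + 2 + 8 = 21, and 21 is divisible by 3
rule_holds = all(n % 3 == digit_sum(n) % 3 for n in range(1, 10_000))
```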



Other Modulos



What would happen if we consider a number that is not 3? For starters, let's use 7:



$$
\begin{aligned}
4728 \pmod 7 &= (4720 + 8) \pmod 7 \\
&= (4720 \pmod 7 + 8 \pmod 7) \pmod 7 \\
&= ((472 \times 10) \pmod 7 + 8 \pmod 7) \pmod 7 \\
&= ([(472 \pmod 7)(10 \pmod 7)] \pmod 7 + 8 \pmod 7) \pmod 7 \\
&= ([(472 \pmod 7) \times 3] \pmod 7 + 8 \pmod 7) \pmod 7 .
\end{aligned}
$$




In this case we can't simply add up all the separate digits, but we can see how the expression has already changed to involve $\pmod 7$ of the number of tens in the original number, multiplied by 3 (because $10 \pmod 7 = 3$). If we continue going down this path we will end up with this:



$$
4728 \pmod 7 = [([( 4 \times 3 ) + 7] \times 3 + 2) \times 3 + 8] \pmod 7 = 185 \pmod 7 = 3.
$$



If we tried to explain this, consider that for each 10 units that you are working with, you will compensate with 3 units to calculate modulo, because $10 - 7 = 3$.



So, take number 26. You take the 6 units, and then compensate for whatever is needed for the 20 units. Each 10 units will have to be compensated with 3 units added up to the 6 units. So,




$$
(6 + 2 \times 3) \pmod 7 = 12 \pmod 7 = 5
$$



Changing Bases



Let's consider $4729 \pmod 7$, but now, instead of using the decimal system, let's switch to the octal system. 4729 becomes 011171. And since $8 \pmod 7 = 1$, that will make it much simpler.



$$
\begin{aligned}
011171 \pmod 7 &= (011170 \pmod 7 + 1 \pmod 7) \pmod 7 \\
&= ((01117 \times 010) \pmod 7 + 1 \pmod 7) \pmod 7 \\
&= ([(01117 \pmod 7)(010 \pmod 7)] \pmod 7 + 1 \pmod 7) \pmod 7 \\
&= ([(01117 \pmod 7) \times 1] \pmod 7 + 1 \pmod 7) \pmod 7 \\
&= (01117 \pmod 7 + 1 \pmod 7) \pmod 7
\end{aligned}
$$



... and at this point we are again met with our recursive definition where we will develop $01117 \pmod 7$ using the same technique and end up with:



$$
(1 + 1 + 1 + 7 + 1) \pmod 7 = 11 \pmod 7 = 4.
$$



This all happens because, like when we were doing $\pmod 3$ and $10 \pmod 3 = 1$, when calculating $\pmod 7$, $8 \pmod 7 = 1$.




So, by changing the base of the numeric system for the analysis we were able to make an operation that involved multiplications by 3 and additions into a simple addition operation.



Subtracting also works



Consider $4729 \pmod {11}$



We could use base 16 for our analysis (which would be very comfortable when writing a program on a computer) and will use a "compensation factor" of 5 (because $16 - 11 = 5$).



4729 becomes 0x1279.




$$
4729 \pmod {11} = [((1 \times 5 + 2) \times 5 + 7) \times 5 + 9] \pmod {11} = 219 \pmod {11} = 10
$$



Great... now let's do the same analysis but instead of using 16 as the base numeric system, let's go back to decimal.



You remember how I explained that for every 10 units we compensate by adding some units? When using $\pmod 3$, we were adding one unit per every 10; when doing $\pmod 7$, we were adding 3 units per every 10. If we are doing $\pmod {11}$ (which is bigger than 10), instead of adding units we compensate by subtracting them: for every 10 units we subtract 1 unit (because $10 - 11 = -1$).



Take $26 \pmod {11}$, we take 6 units and then compensate by subtracting 1 unit per every 10 units. So compensating for 20 should be -2




$$
26 \pmod {11} = (6 + -2) \pmod {11} = 4 \pmod {11} = 4.
$$



Let's go back to 4729:



$$
4729 \pmod {11} = [((4 \times (-1) + 7) \times (-1) + 2) \times (-1) + 9] \pmod {11} = (-4 + 7 - 2 + 9) \pmod {11} = 10 \pmod {11} = 10
$$




Right on target



Trying with $\pmod {15}$? The base is 16 and you have an addition-only operation. Trying with $\pmod {17}$ and using base 16, it becomes an addition-subtraction operation.



Might also consider bigger numbers



If you are doing $\pmod {15}$, you could take 16 as the base numeric system and use 1 as the compensation factor, but you could also use 256 as the numeric system (as in, go with 8 bits at a time) and use 1 as the compensation factor too, because $256 \pmod {15} = 1$.



$$
4729 \pmod {15} = 0x1279 \pmod {15} = (0x12 + 0x79) \pmod {15} = (18 + 121) \pmod {15} = 139 \pmod {15} = 4.
$$



So, when working with computers, $\pmod {2^n \pm 1}$ should be a piece of cake... at least simpler than making a huge division and finding the remainder.



$\pmod {2^n \pm 1}$



For the sake of simplicity to explain the concept, as the details have already been covered, let's consider calculating $668371941 \pmod {255}$ and $668371941 \pmod {257}$. Using 256 as our base system:



$$
668371941 \pmod {255} = (0x27 + 0xd6 + 0x8b + 0xe5) \pmod {255} = (39 + 214 + 139 + 229) \pmod {255} = 621 \pmod {255} = 111
$$



Now $668371941 \pmod {257}$



$$
668371941 \pmod {257} = (-0x27 + 0xd6 - 0x8b + 0xe5) \pmod {257} = (-39 + 214 - 139 + 229) \pmod {257} = 265 \pmod {257} = 8
$$
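The whole byte-splitting procedure is easy to express in code. Below is a sketch of mine (the post itself contains no code): reduce a number mod $255 = 2^8-1$ by adding its base-256 digits, and mod $257 = 2^8+1$ by alternating their signs, since $256 \equiv 1 \pmod{255}$ and $256 \equiv -1 \pmod{257}$.

```python
def mod_255(n):
    # 256 ≡ 1 (mod 255): add the base-256 digits, then reduce once.
    total = 0
    while n:
        total += n & 0xFF
        n >>= 8
    return total % 255

def mod_257(n):
    # 256 ≡ -1 (mod 257): alternate the signs of the base-256 digits,
    # starting with + for the least significant byte.
    total, sign = 0, 1
    while n:
        total += sign * (n & 0xFF)
        sign = -sign
        n >>= 8
    return total % 257

x = 668371941  # the example from the post (0x27D68BE5)
```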



So... has anybody noticed this? If not, is it worth an article? Because I have no connection to academia and I certainly lack the mathematical skills to write an article proving it (other than what I posted here).

integration - A closed-form expression for $\int_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x$



I have been trying to evaluate the following family of integrals:





$$ f:(0,\infty)^2 \rightarrow \mathbb{R} \, , \, f(\alpha,\beta) = \int \limits_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x \, . $$




The changes of variables $\frac{1}{x} \rightarrow x$, $x^\alpha \rightarrow x$ and $x^\beta \rightarrow x$ yield the symmetry properties
$$ \tag{1}
f(\alpha,\beta) = f(\beta,\alpha) = \frac{1}{\alpha} f\left(1,\frac{\beta}{\alpha}\right) = \frac{1}{\alpha} f\left(\frac{\beta}{\alpha},1\right) = \frac{1}{\beta} f\left(\frac{\alpha}{\beta},1\right) = \frac{1}{\beta} f\left(1,\frac{\alpha}{\beta}\right) $$
for $\alpha,\beta > 0$ .



Using this result one readily computes $f(1,1) = 2 \zeta (3)$ . Then $(1)$ implies that
$$ f(\alpha,\alpha) = \frac{2}{\alpha} \zeta (3) $$

holds for $\alpha > 0$ . Every other case can be reduced to finding $f(1,\gamma)$ for $\gamma > 1$ using $(1)$.



An approach based on xpaul's answer to this question employs Tonelli's theorem to write
$$ \tag{2}
f(1, \gamma) = \int \limits_0^\infty \int \limits_0^1 \int \limits_0^1 \frac{\mathrm{d}u \, \mathrm{d}v \, \mathrm{d}x}{(1+ux)(v+x^\gamma)} = \int \limits_0^1 \int \limits_0^1 \int \limits_0^\infty \frac{\mathrm{d}x \, \mathrm{d}u \, \mathrm{d}v}{(1+ux)(v+x^\gamma)} \, .$$
The special case $f(1,2) = \pi \mathrm{C} - \frac{3}{8} \zeta (3)$ is then derived via partial fraction decomposition ($\mathrm{C}$ is Catalan's constant). This technique should work at least for $\gamma \in \mathbb{N}$ (it also provides an alternative way to find $f(1,1)$), but I would imagine that the calculations become increasingly complicated for larger $\gamma$ .



Mathematica manages to evaluate $f(1,\gamma)$ in terms of $\mathrm{C}$, $\zeta(3)$ and an acceptably nice finite sum of values of the trigamma function $\psi_1$ for some small, rational values of $\gamma > 1$ (before resorting to expressions involving the Meijer G-function for larger arguments). This gives me some hope for a general formula, though I have not yet been able to recognise a pattern.



Therefore my question is:





How can we compute $f(1,\gamma)$ for general (or at least integer/rational) values of $\gamma > 1$ ?




Update 1:



Symbolic and numerical evaluations with Mathematica strongly suggest that
$$ f(1, n) = \frac{1}{n (2 \pi)^{n-1}} \mathrm{G}_{n+3, n+3}^{n+3,n+1} \left(\begin{matrix} 0, 0, \frac{1}{n}, \dots, \frac{n-1}{n}, 1 , 1 \\ 0,0,0,0,\frac{1}{n}, \dots, \frac{n-1}{n} \end{matrix} \middle| \, 1 \right) $$
holds for $n \in \mathbb{N}$ . These values of the Meijer G-function admit an evaluation in terms of $\zeta(3)$ and $\psi_1 \left(\frac{1}{n}\right), \dots, \psi_1 \left(\frac{n-1}{n}\right) $ at least for small (but likely all) $n \in \mathbb{N}$ .




Interesting side note: The limit
$$ \lim_{\gamma \rightarrow \infty} f(1,\gamma+1) - f(1,\gamma) = \frac{3}{4} \zeta(3) $$
follows from the definition.



Update 2:



Assume that $m, n \in \mathbb{N} $ are relatively prime (i.e. $\gcd(m,n) = 1$). Then the expression for $f(m,n)$ given in Sangchul Lee's answer can be reduced to
\begin{align}
f(m,n) &= \frac{2}{m^2 n^2} \operatorname{Li}_3 ((-1)^{m+n}) \\

&\phantom{=} - \frac{\pi}{4 m^2 n} \sum \limits_{j=1}^{m-1} (-1)^j \csc\left(j \frac{n}{m} \pi \right) \left[\psi_1 \left(\frac{j}{2m}\right) + (-1)^{m+n} \psi_1 \left(\frac{m + j}{2m}\right) \right] \\
&\phantom{=} - \frac{\pi}{4 n^2 m} \sum \limits_{k=1}^{n-1} (-1)^k \csc\left(k \frac{m}{n} \pi \right) \left[\psi_1 \left(\frac{k}{2n}\right) + (-1)^{n+m} \psi_1 \left(\frac{n + k}{2n}\right) \right] \\
&\equiv F(m,n) \, .
\end{align}
Further simplifications depend on the parity of $m$ and $n$.



This result can be used to obtain a solution for arbitrary rational arguments: For $\frac{n_1}{d_1} , \frac{n_2}{d_2} \in \mathbb{Q}^+$ equation $(1)$ yields
\begin{align}
f\left(\frac{n_1}{d_1},\frac{n_2}{d_2}\right) &= \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1}{n_1 d_2}\right) = \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1 / \gcd(n_1 d_2,n_2 d_1)}{n_1 d_2 / \gcd(n_1 d_2,n_2 d_1)}\right) \\
&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} f\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \\

&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} F\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \, .
\end{align}



Therefore I consider the problem solved in the case of rational arguments. Irrational arguments can be approximated by fractions, but if anyone can come up with a general solution: you are most welcome to share it. ;)


Answer



Only a comment. We have



$$ \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta/x)}{x} \, dx = 2\operatorname{Li}_3(\alpha\beta) - \operatorname{Li}_2(\alpha\beta)\log(\alpha\beta) $$



which is valid initially for $\alpha, \beta > 0$ and extends to a larger domain by the principle of analytic continuation. Then for integers $m, n \geq 1$ we obtain




\begin{align*}
f(m, n)
&=\int_{0}^{\infty} \frac{\log(1+x^m)\log(1+x^{-n})}{x}\,dx \\
&\hspace{6em} = \sum_{j=0}^{m-1}\sum_{k=0}^{n-1} \left[ 2\operatorname{Li}_3\left(e^{i(\alpha_j+\beta_k)}\right) - i(\alpha_j+\beta_k)\operatorname{Li}_2\left(e^{i(\alpha_j+\beta_k)}\right) \right],
\end{align*}



where $\alpha_j = \frac{2j-m+1}{m}\pi$ and $\beta_k = \frac{2k-n+1}{n}\pi$. (Although we cannot always split complex logarithms, this happens to work in the above situation.) By the multiplication formula, this simplifies to





\begin{align*}
f(m, n)
&= \frac{2\gcd(m,n)^3}{m^2n^2}\operatorname{Li}_3\left((-1)^{(m+n)/\gcd(m,n)}\right) \\
&\hspace{2em} - \frac{i}{n} \sum_{j=0}^{m-1} \alpha_j \operatorname{Li}_2\left((-1)^{n-1}e^{in\alpha_j}\right) \\
&\hspace{2em} - \frac{i}{m} \sum_{k=0}^{n-1} \beta_k \operatorname{Li}_2\left((-1)^{m-1}e^{im\beta_k}\right).
\end{align*}




Here, $\gcd(m,n)$ is the greatest common divisor of $m$ and $n$.







The following code tests the above formula.
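One possible such test (a sketch of mine using only the Python standard library; rather than the full $f(m,n)$ expression, it checks the two-parameter identity quoted at the start of this answer, at $\alpha=1$, $\beta=\tfrac12$, after the substitution $x=e^u$, which makes the integrand smooth and fast-decaying):

```python
import math

# Editor's sketch: numerically test
#   ∫_0^∞ log(1+αx) log(1+β/x) / x dx = 2 Li_3(αβ) − Li_2(αβ) log(αβ)
# at α = 1, β = 1/2.  Substituting x = e^u turns the integral into
#   ∫_{-∞}^{∞} log(1 + α e^u) log(1 + β e^{-u}) du.

def integral(alpha, beta, lo=-40.0, hi=40.0, n=8000):
    # Composite Simpson's rule on the substituted integrand.
    f = lambda u: math.log(1 + alpha * math.exp(u)) * math.log(1 + beta * math.exp(-u))
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

def polylog(s, z, terms=200):
    # Li_s(z) = sum_{n>=1} z^n / n^s; the series converges fast for |z| <= 1/2.
    return sum(z**n / n**s for n in range(1, terms))

ab = 1.0 * 0.5
numeric = integral(1.0, 0.5)
closed = 2 * polylog(3, ab) - polylog(2, ab) * math.log(ab)
error = abs(numeric - closed)
```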





Saturday, August 29, 2015

How to evaluate a partially alternating series.



$\sum_{n=0}^{\infty} \frac{\cos{(n\pi/3)}}{n!}$ = $\sqrt{e} \cos(\sqrt{3}/2)$




Wolfram alpha gave me this answer for the sum. First I'd like to know if I can use the alternating series test to prove that this sum is convergent (absolutely or conditionally). Then I'd like to know how wolfram came up with this answer for the sum.


Answer



The easiest way to get the sum is using Euler's formula $e^{i\theta} = \cos\theta+i\sin\theta$. What you have is the real part of $\sum_{n=0}^\infty \frac{1}{n!} \bigl( e^{i \pi / 3} \bigr)^n$, which is $\exp(e^{i\pi/3})$. Then, $e^{i\pi/3}=\frac{1+i\sqrt{3}}{2}$, so $\exp(e^{i\pi/3})=e^{1/2}\, e^{i \sqrt{3}/2}$, and taking the real part gives the cosine.



What you need to prove, though, doesn't require any of that. Remember that the cosine is bounded between $-1$ and $+1$, and that $n!$ grows very fast. In fact, $\sum_{n=0}^\infty \frac{1}{n!} = e$ is certainly convergent, without any need for alternating properties. The alternating series test is not the best way to go.
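A quick numerical confirmation (an editor's sketch) of both the closed form and the factorial-fast convergence:

```python
import math

# Editor's sketch: the partial sums of cos(n*pi/3)/n! converge factorially
# fast to sqrt(e)*cos(sqrt(3)/2).
total = sum(math.cos(n * math.pi / 3) / math.factorial(n) for n in range(40))
closed = math.sqrt(math.e) * math.cos(math.sqrt(3) / 2)
error = abs(total - closed)
```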


algebra precalculus - Summation Formula for Series

I have a series of the form :
\begin{equation}
\frac{1}{M-1} + \frac{q}{M-2} + \frac{q^2}{M-3} + \frac{q^3}{M-4} + \frac{q^4}{M-5}+\dots = \sum_{i=1} ^{M-1} \frac{q^{i-1}}{M-i}
\end{equation}
I want to solve this series to find a general formula that provides its sum. I am not able to figure out the best and easiest way to proceed. I would be glad if anybody could point me in the right direction for solving such a series.

Friday, August 28, 2015

elementary number theory - How can I prove that $gcd(a,b)=1implies gcd(a^2,b^2)=1$ without using prime decomposition?


How can I prove that if $\gcd(a,b)=1$, then $\gcd(a^2,b^2)=1$, without using prime decomposition? I should only use definition of gcd, division algorithm, Euclidean algorithm and corollaries to those. Thanks for your help!


Answer



The golden rule for all problems about greatest common divisors, I claim (especially if you're specifically trying to avoid using prime decomposition), is the fact that $\gcd(a,b)$ is equal to the smallest positive integer $g$ that can be written in the form $g=ax+by$ with $x,y$ integers.


In particular, $\gcd(a,b)=1$ if and only if there exist integers $x$ and $y$ such that $ax+by=1$. (This is not true with 1 replaced by a larger integer! It's true because 1 is the smallest positive integer there is.)


That's my general hint; my specific hint is to cube both sides of the equation $ax+by=1$.


algebra precalculus - Find the coefficient of $x^{15}y^4$ in the expansion of $(2x – 3y)^{19}$

Not even sure I understand this question or know where to start. I understand the basics of finding coefficients, but I don't get how a coefficient is calculated "in the expansion of" another expression. I'm looking for some sort of explanation of how to solve this. I understand binomial coefficients, but I'm not entirely sure how they relate to this type of question.
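By the binomial theorem, the term containing $x^{15}y^4$ in $(2x-3y)^{19}$ is $\binom{19}{4}(2x)^{15}(-3y)^{4}$, so the coefficient is $\binom{19}{4}\,2^{15}\,(-3)^4$. A sketch of mine that also cross-checks this by brute-force expansion:

```python
import math

# Editor's sketch: coefficient of x^15 y^4 in (2x - 3y)^19 via the binomial theorem.
coeff = math.comb(19, 4) * 2**15 * (-3) ** 4

# Cross-check by expanding (2x - 3y)^19 as a dict {power_of_y: coefficient};
# the power of x in each term is 19 minus the power of y.
poly = {0: 1}  # start with the constant polynomial 1
for _ in range(19):
    nxt = {}
    for k, c in poly.items():
        nxt[k] = nxt.get(k, 0) + 2 * c          # multiply the term by 2x
        nxt[k + 1] = nxt.get(k + 1, 0) - 3 * c  # multiply the term by -3y
    poly = nxt
# poly[4] is now the coefficient of x^15 y^4
```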

Thursday, August 27, 2015

abstract algebra - Let $K$ be a field extension of $F$ and let $a in K$. Show that $[F(a):F(a^3)] leq 3$



Let $K$ be a field extension of $F$ and let $a \in K$. Show that $[F(a):F(a^3)] \leq 3$. Find examples to illustrate that $[F(a):F(a^3)]$ can be $1,2$ or $3$.



Attempt: $F \subset F(a^3) \subseteq F(a)$


The minimal polynomial for $a^3$ over $F$ is $ x-a^3=0$


I, unfortunately, don't have much idea than this on this problem. Could you please tell me how to move ahead?




Let $K$ be an extension of $F$. Suppose that $E_1$ and $E_2$ are contained in $K$ and are extensions of $F$. If $[E_1:F]$ and $[E_2:F]$ are both prime, show that $E_1 = E_2$ or $E_1 \bigcap E_2 = F $



Attempt: $[K:F] = [K:E_1][E_1:F] = [K:E_2][E_2:F]$


Since, $[E_1:F]$ and $[E_2:F]$ are both prime $\implies [E_2:F]$ divides $[K:E_1]$ and $[E_1:F]$ divides $[K:E_2]$


How do i move ahead?


Thank you for your help.


Answer



Consider $x^3-a^3$, where here $a^3$ is the number which generates $F(a^3)/F$. Then $a$ is a root of this polynomial, and so we will let $m_a(x)$ denote the minimal polynomial for $a$ over $F(a^3)$. We know




$$\begin{cases}m_a(x)|(x^3-a^3)\\ \deg m_a(x)\le \deg (x^3-a^3)=3\end{cases}.$$



But since


$$F(a)\cong F(a^3)[x]/(m_a(x))$$


we know



$$[F(a):F(a^3)]=\text{deg }_{F(a^3)} F(a)=\text{deg } m_a(x)\le 3$$



For the second question we proceed similarly.



Let $F\subseteq E'\subseteq E_1$, then by the tower law $[E':F]\big|[E_1:F]$ since $[E_1:F]$ is prime, we have $[E':F]\in \{1,[E_1:F]\}$, so either $E'=F$ or $E'=E_1$, and similarly for any subfield of $E_2$.



But then



$$F\subseteq E_1\cap E_2\subseteq E_1\implies E_1\cap E_2= F\text{ or } E_1$$



If it's $F$, we're done, if not then by using the same condition on $E_2$ we see that $E_1\cap E_2=E_2$ as well (since it's not $F$). Hence $E_1=E_2$.



Note: Why don't we see $K$ in this computation? What is it there for? The answer is that $E_1\cap E_2$ doesn't make sense unless they are both subsets of a common set, $K$ is there because of a set theory restriction, but isn't really essential to the proof other than that, unless we're being very pedantic.


arithmetic progression, finding the nth term.

The sum of the 1st n terms, of an AP is $S_n=n^2-3n$. Write down the 4th term and find an expression for the $n$th term.




Will the 4th term be $t_4= a+3d$?

calculus - Evaluating $\lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\, dx= \frac{\pi}{2}$





Using the identity $$\lim_{a\to\infty} \int_0^a e^{-xt}\, dt = \frac{1}{x}, x\gt 0,$$ can I get a hint to show that $$\lim_{b\to\infty} \int_0^b \frac{\sin x}{x} \,dx= \frac{\pi}{2}.$$


Answer



Hint: $$\begin{align} \lim_{b\to \infty}\int_{0}^{b}\frac{\sin x}{x}\,dx &= \lim_{a,b\to \infty}\int_{0}^{b}\int_{0}^{a}e^{-xt}\,dt\,\sin x \,dx\\& = \lim_{a,b\to \infty}\int_{0}^{a}dt\int_{0}^{b}e^{-xt}\frac{e^{ix}-e^{-ix}}{2i}\, dx \\&=\lim_{a,b\to \infty}\int_{0}^{a}dt\int_{0}^{b}\frac{e^{-(t-i)x}-e^{-(i+t)x}}{2i}\, dx.\end{align}$$
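To see numerically where the hint is headed (a sketch of mine; `Si` is my own Simpson-rule helper approximating the partial integral, not a library call):

```python
import math

# Editor's sketch: approximate Si(b) = ∫_0^b sin(x)/x dx with composite Simpson
# and watch it approach π/2 (the error oscillates with size O(1/b)).
def sinc(x):
    return math.sin(x) / x if x else 1.0

def Si(b):
    n = max(10_000, int(40 * b))
    n += n % 2  # Simpson's rule needs an even number of subintervals
    h = b / n
    s = sinc(0.0) + sinc(b)
    s += 4 * sum(sinc(i * h) for i in range(1, n, 2))
    s += 2 * sum(sinc(i * h) for i in range(2, n, 2))
    return s * h / 3

err_small = abs(Si(10) - math.pi / 2)
err_big = abs(Si(10_000) - math.pi / 2)
```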


sequences and series - Uniqueness of real numbers represented as products of integer powers of primes




Let $p_k$ represent the $(k+1)$th prime number. My hypothesis is that all positive real numbers may be represented as some infinite product
$$\prod_{k=0}^\infty p_k^{e_k}$$
(where $e_k \in \mathbb{Z}$); and moreover that this product is unique. Intuitively I am certain this is true, but I cannot imagine how I could go about proving it.



EDIT: some rational examples for clarity:



$1 = 2^0\cdot3^0\cdot5^0\cdot7^0\cdot11^0\cdot\dots$



$\frac{22}{7} = 2^1\cdot3^0\cdot5^0\cdot7^{-1}\cdot11^1\cdot\dots$




$100 = 2^2\cdot3^0\cdot5^2\cdot7^0\cdot11^0\cdot\dots$


Answer



An infinite product $\prod x_i$ of real (and also complex) numbers can converge to a nonzero value only if the sequence $(x_i)$ converges to $1$. In your case you would need $p_i^{e_i} \to 1$, which is only possible if $(e_i)$ is eventually zero.


sequences and series - Weird limit $\lim\limits_{n\to\infty}\frac{1}{e^n}\sum\limits_{k=0}^n\frac{n^k}{k!}$




$$\lim \limits_{n\mathop\to\infty}\frac{1}{e^n}\sum \limits_{k\mathop=0}^n\frac{n^k}{k!} $$




I thought this limit was obviously $1$ at first but approximations on Mathematica tells me it's $1/2$. Why is this?


Answer



In this answer, it is shown that
$$
\begin{align}
e^{-n}\sum_{k=0}^n\frac{n^k}{k!}
&=\frac{1}{n!}\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t\\
&=\frac12+\frac{2/3}{\sqrt{2\pi n}}+O(n^{-1})
\end{align}
$$
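A direct numerical check of the $1/2$ limit (an editor's sketch; computing each term in log-space via `lgamma` keeps everything from overflowing):

```python
import math

# Editor's sketch: compute e^{-n} * sum_{k=0}^{n} n^k/k! term-by-term in
# log-space, exp(k*log n - n - log k!), and compare with 1/2.
def s(n):
    return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

val = s(2000)
# The asymptotic above predicts 1/2 + (2/3)/sqrt(2*pi*2000) ≈ 0.506.
```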



algebra precalculus - On Assumptions In Induction

I'm currently learning mathematical induction from this site: https://www.mathsisfun.com/algebra/mathematical-induction.html




What I'm confused about is how it presents mathematical induction. It says that there are 3 steps to induction:




  • Show it true for $n=1$.

  • Assume it is true for $n=k$

  • Prove it is true for $n=k+1$ (we can use the $n=k$ case as a fact.)



There are many things I am confused about here, about all $3$ steps.




Starting from the first step, why is it necessary to prove it true for $n=1$? I don't get why this step is needed. Second, why choose $1$ of all numbers; can't a number like $2$ be chosen?



Moving on to the second step, why is it legitimate to assume it true for all $n=k$? Is this assumption proved true by the third step, if so, how?



On the final step, first, how can we prove it true for $n=k+1$? Because this proof fundamentally assumes that it is true for $n=k$, but there is no way to verify this. Second, what happens if the set we're doing induction on has only a limited amount of numbers, let's say 100 numbers? If we go up to the 100th number, then how can $n=k+1$ still be true? There is no 101st term for it to be true of; there are only 100 numbers!



Please explain this as simply as possible; I'm still a beginner. I will not be able to understand complicated proof notation.



DETAILS: This question is different from the question above since the question above uses $\lim_{x\to\infty}$ notation and other pieces of calculus knowledge. However, I have not learnt calculus, and nothing about this question suggest prior calculus knowledge. An answer where induction is explained without calculus would benefit me greatly.

calculus - Limit of $(\cos{(xe^x)} - \ln(1-x) -x)^{\frac{1}{x^3}}$



So I had the task to evaluate this limit




$$ \lim_{x \to 0} (\cos{(xe^x)} - \ln(1-x) -x)^{\frac{1}{x^3}}$$



I tried transforming it to:



$$ e^{\lim_{x \to 0} \frac{ \ln{(\cos{xe^x} - \ln(1-x) -x)}}{x^3}}$$



So I could use L'hospital's rule, but this would just be impossible to evaluate without a mistake. Also, I just noticed this expression is not of form $\frac{0}{0}$.



Any solution is good ( I would like to avoid Taylor series but if that's the only way then that's okay).




I had this task on a test today and I failed to do it.


Answer



First notice that $$\cos (xe^x) = \sum_{n=0}^{\infty}(-1)^n \frac{(xe^x)^{2n}}{(2n)!}$$



and $$\ln (1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$$



Thus $$\cos (xe^x) - \ln (1 - x) = \sum_{n=0}^{\infty}(-1)^n \frac{(xe^x)^{2n}}{(2n)!} +\sum_{n=1}^{\infty}\frac{x^n}{n} = 1 + x - \frac{2x^3}{3} + O(x^4) $$



Therefore we have




$$\ln (1 - \frac{2x^3}{3}) = - \frac{2x^3}{3} - \frac{2x^6}{9} - O(x^9)$$



Finally



$$\begin{align}\lim_{x \to 0} \frac{\ln (\cos xe^x - \ln (1 - x) - x)}{x^3} &= \lim_{x \to 0} -\frac{2}{3} - \frac{2x^3}{9} - O(x^6) =\color{red}{ -\frac{2}{3}}\end{align}$$



Now you may find your limit.
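One can sanity-check the resulting value $e^{-2/3}\approx 0.5134$ numerically (an editor's sketch; for small $x$ the base behaves like $1-\tfrac{2x^3}{3}$, so the power tends to $e^{-2/3}$):

```python
import math

# Editor's sketch: evaluate the original expression near x = 0 and compare
# with e^{-2/3}, the value implied by the -2/3 limit above.
def g(x):
    base = math.cos(x * math.exp(x)) - math.log(1 - x) - x
    return base ** (1 / x**3)

val = g(0.001)
error = abs(val - math.exp(-2 / 3))
```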


Wednesday, August 26, 2015

The Formula to this Sequence Series

What is the formula for the following sequence?



$$\frac12 + \frac12 \cdot \frac34 + \frac12 \cdot \frac34 \cdot \frac56 + ... + \frac12 \cdot \frac34 \cdot \frac56 ... \frac{2n - 1}{2n}$$



This is a question from my Calculus class about sequences.
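With $a_n = \frac12\cdot\frac34\cdots\frac{2n-1}{2n}$, a pattern worth testing (my own observation, not from the post; it is provable by induction since $2n\,a_n=(2n-1)\,a_{n-1}$) is that the partial sums satisfy $S_n = (2n+1)a_n - 1$:

```python
from fractions import Fraction

# Editor's sketch: with a_n = (1/2)(3/4)...((2n-1)/(2n)), test the closed form
# S_n = (2n+1) * a_n - 1 for the partial sums, in exact rational arithmetic.
a = Fraction(1)
S = Fraction(0)
ok = True
for n in range(1, 60):
    a *= Fraction(2 * n - 1, 2 * n)
    S += a
    ok = ok and (S == (2 * n + 1) * a - 1)
```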

induction - On a generalization for $\sum_{d|n}rad(d)\phi(\frac{n}{d})$ and related questions



Let $\phi(m)$ be Euler's totient function and $rad(m)$ the multiplicative function defined by $rad(1)=1$ and, for integers $m>1$, by $rad(m)=\prod_{p\mid m}p$, the product of the distinct primes dividing $m$ (it is clearly multiplicative, since the definition is of the form $\prod_{p\mid m}(\cdots)$ and empty products are defined to be $1$).



Denoting $r_1(n)=rad(n)$, and $$R_1(n)=\sum_{d|n}rad(d)\phi(\frac{n}{d}),$$
I claim that it is possible to prove that this Dirichlet product of multiplicative functions (thus itself multiplicative) is computed as $$\frac{n}{rad(n)}\prod_{p\mid n}(2p-1).$$




Question 1. Can you prove or refute that
$$R_k:=\sum_{d|n}r_k(d)\phi(\frac{n}{d})=\frac{n}{rad(n)}r_{k+1}(n)$$

for $$r_{k+1}(n)=\prod_{p\mid n}((k+1)p-k),$$ with $k\geq 1$? Thanks in advance.




I ask this question because I've obtained the first examples and I don't know if I have mistakes. I know that the proof should be by induction. Since the computations are tedious, I would like to see a full proof; if you are sure of your computations, a summary answer is fine. The following is meant to make this a better post; otherwise I would have to write a new one.



I know the theorem about Dirichlet product versus Dirichlet series that provide us to write



$$\sum_{n=1}^\infty\frac{\frac{n}{rad(n)}r_2(n)}{n^2}=\left(\sum_{n=1}^\infty\frac{rad(n)}{n^s}\right)\left(\sum_{n=1}^\infty\frac{\phi(n)}{n^s}\right)=\sum_{n=1}^\infty\frac{\sum_{d\mid n}rad(d)\phi(n/d)}{n^s},$$
for $\Re s=\sigma>2$ (I've read the notes in Apostol's book about this, and it follows [1]). By a copy and paste from [2] we can write
$$\frac{\zeta(s)^2}{\zeta(2s)} \leq R(s),$$
where $R(s)$ is the Dirichlet series for $rad(n)$, and I believe that the previous inequality holds for $\sigma>2$.




Question 2. Can you state and prove the corresponding convergence statement for the Dirichlet series of $r_k(n)$? That is, assuming Question 1 is true, I am looking to compute these Dirichlet series for $r_k(n)$ in terms of values of the zeta function, or inequalities involving such values. Thanks in advance.




I ask Question 2 to encourage myself to read and understand the previous references [1] and [2] well.



[1] Ethan's answer, this site Arithmetical Functions Sum, $\sum_{d|n}\sigma(d)\phi(\frac{n}{d})$ and $\sum_{d|n}\tau(d)\phi(\frac{n}{d})$




[2] LinusL's question, this site, Average order of $\mathrm{rad}(n)$


Answer



About your first question: you just have to observe that $R_k(n)$ is multiplicative, being a Dirichlet product of multiplicative functions, so you can compute it for a prime power and then multiply,



$$ R_k(n) = \prod_{p^j\vert\vert n} R_k(p^j) $$
For computing $R_k(p^j)$ we treat separately the divisor 1 for the rest of divisors of $p^j$, ($p,p^2,\dots,p^j$) obtaining:
$$ R_k(p^j) = p^j-p^{j-1} + \sum_{i=1}^j (kp-(k-1))\phi(p^{j-i}) = \\
p^j-p^{j-1} + (kp-(k-1)) \left( (p^{j-1}-p^{j-2})+ \dots + (p-1) + 1\right) = \\
(k+1)p^j-kp^{j-1} = \frac{p^{j}}{rad(p^{j})}((k+1)p-k)$$




And you are done.
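The identity in Question 1 can also be checked by brute force. Here is a sketch of mine (all function names are my own): compute $R_k(n)=\sum_{d\mid n} r_k(d)\phi(n/d)$ directly, with $r_k(m)=\prod_{p\mid m}(kp-(k-1))$, and compare it with $\frac{n}{rad(n)}\prod_{p\mid n}((k+1)p-k)$:

```python
import math

# Editor's sketch: brute-force check of
#   sum_{d|n} r_k(d) * phi(n/d) == (n / rad(n)) * r_{k+1}(n),
# where r_k(m) = prod over primes p | m of (k*p - (k-1)).
def prime_factors(n):
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def rad(n):
    return math.prod(prime_factors(n)) if n > 1 else 1

def phi(n):
    return sum(1 for m in range(1, n + 1) if math.gcd(m, n) == 1)

def r(k, n):
    return math.prod(k * p - (k - 1) for p in prime_factors(n)) if n > 1 else 1

ok = all(
    sum(r(k, d) * phi(n // d) for d in range(1, n + 1) if n % d == 0)
    == (n // rad(n)) * r(k + 1, n)
    for k in range(1, 4)
    for n in range(1, 200)
)
```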



I'm not entirely sure what you are asking in the second question, if I understand you are interested in the convergence of the Dirichlet series
$$ \sum_n \frac{r_k(n)}{n^s} $$
suppose it converges for some $s=\sigma+it$; then it is easy to show that it converges for any $s$ with real part $>\sigma$. Now, under that hypothesis, it will also have an expression as an Euler product:
$$ \sum_n \frac{r_k(n)}{n^s} = \prod_p \left( 1 + (kp-(k-1))(p^{-s} + p^{-2s}+\dots) \right)= \\
\prod_p \left( \frac{1+p^{-s}(kp-k)}{1-p^{-s}} \right)=\zeta(s)A(s)$$
Where $A(s)$ has the Euler product
$$ A(s) = \prod_p (1+p^{-s}(kp-k)) $$
if $s$ is real then all the factors are positive so you can bound it above by

$$ A(s) < \prod_p (1+p^{-s+1})^k = \left(\sum_n \frac{\vert \mu(n) \vert}{n^{s-1}}\right)^k $$
this implies that the original series converges for $\sigma >2$. To see that it diverges for $\sigma < 2$ is easy for $k > 1$, since we have $kp-k \ge p$, and so again for $s$ real
$$ A(s) > \prod_p(1+ p^{-s+1})=\sum_n \frac{\vert \mu(n) \vert}{n^{s-1}} $$
and the right-hand series diverges for $s=2$. It still remains to prove divergence for $k=1$; I can't see any simple proof right now, but a limiting argument should work.



I hope this is what you were looking for.


calculus - Prove $lim_{x rightarrow 0} frac {sin(x)}{x} = 1$ with the epsilon-delta definition of limit.



It is well known that



$$\lim_{x \rightarrow 0} \frac {\sin(x)}{x} = 1$$



I know several proofs of this: the geometric proof shows that $\cos(\theta)\leq\frac {\sin(\theta)}{\theta}\leq1$ and using the Squeeze Theorem I conclude that $\lim_{x \rightarrow 0} \frac {\sin(x)}{x} = 1$, other proof uses the Maclaurin series of $\sin(x)$. My question is: is there a demonstration of this limit using the epsilon-delta definition of limit?


Answer



Here is a more direct answer for this: Since $\cos\theta<\frac{\sin\theta}{\theta}<1$, one can get
$$\bigg|\frac{\sin\theta}{\theta}-1\bigg|<1-\cos\theta.$$

But $1-\cos\theta=2\sin^2\frac{\theta}{2}\le\frac{\theta^2}{2}$ and hence
$$\bigg|\frac{\sin\theta}{\theta}-1\bigg|\le\frac{\theta^2}{2}.$$
Now it is easy to use $\varepsilon-\delta$ definition to get the answer.
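The bound $\left|\frac{\sin\theta}{\theta}-1\right|\le\frac{\theta^2}{2}$ is easy to spot-check numerically; a quick sketch of mine, not part of the answer:

```python
import math

# check |sin(t)/t - 1| <= t^2 / 2 on a grid of small positive t
for i in range(1, 2001):
    t = i / 1000.0                     # t ranges over (0, 2]
    assert abs(math.sin(t) / t - 1) <= t * t / 2
```

So for a given $\varepsilon$ one may simply take $\delta=\sqrt{2\varepsilon}$.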


linear algebra - $2times2$ matrices are not big enough



Olga Tausky-Todd had once said that




"If an assertion about matrices is false, there is usually a 2x2 matrix that reveals this."




There are, however, assertions about matrices that are true for $2\times2$ matrices but not for the larger ones. I came across one nice little example yesterday. Actually, every student who has studied first-year linear algebra should know that there are even assertions that are true for $3\times3$ matrices, but false for larger ones --- the rule of Sarrus is one obvious example; a question I answered last year provides another.




So, here is my question. What is your favourite assertion that is true for small matrices but not for larger ones? Here, $1\times1$ matrices are ignored because they form special cases too easily (otherwise, Tausky-Todd would have not made the above comment). The assertions are preferrably simple enough to understand, but their disproofs for larger matrices can be advanced or difficult.


Answer



Any two rotation matrices commute.
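A quick numerical illustration (my own, using `numpy`): $2\times2$ rotations commute, while a pair of $3\times3$ rotations about different axes generally does not.

```python
import numpy as np

def rot2(a):
    # 2x2 rotation by angle a
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def rot3x(a):
    # 3x3 rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot3z(a):
    # 3x3 rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

A, B = rot2(0.7), rot2(1.9)
assert np.allclose(A @ B, B @ A)          # 2x2 rotations commute

X, Z = rot3x(0.7), rot3z(1.9)
assert not np.allclose(X @ Z, Z @ X)      # 3x3 rotations generally do not
```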


Tuesday, August 25, 2015

Non-differentiability implies non-Lipschitz continuity

In a simple one-dimensional framework, it is known that the differentiability of a function (with bounded derivative) on an interval implies its Lipschitz continuity on that interval. However, non-differentiability does not imply non-Lipschitz continuity, as shown by the function $f:x\to |x|$. Still, there are functions that are not differentiable at a point, and this argument is used to say that the same function is not Lipschitz continuous at that point, as, for example, $f:x\to \sqrt{x}$ at $x=0$ (we say that the slope of the function at that point is "vertical"). So my question is: is there a theorem telling us that for some functions (to be characterized), their non-differentiability implies their non-Lipschitz continuity? Or is this result obvious from the definition of Lipschitz continuity?

calculus - A sine integral $int_0^{infty} left(frac{sin x }{x }right)^n,mathrm{d}x$


The following question comes from the post Some integral with sine: $$\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$$ but now I'd be curious to know how to deal with it by methods of complex analysis.
Some suggestions, hints? Thanks!!!


Sis.


Answer



Here's another approach.



We have $$\begin{eqnarray*} \int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n &=& \lim_{\epsilon\to 0^+} \frac{1}{2} \int_{-\infty}^\infty dx\, \left(\frac{\sin x}{x-i\epsilon}\right)^n \\ &=& \lim_{\epsilon\to 0^+} \frac{1}{2} \int_{-\infty}^\infty dx\, \frac{1}{(x-i\epsilon)^n} \left(\frac{e^{i x}-e^{-i x}}{2i}\right)^n \\ &=& \lim_{\epsilon\to 0^+} \frac{1}{2} \frac{1}{(2i)^n} \int_{-\infty}^\infty dx\, \frac{1}{(x-i\epsilon)^n} \sum_{k=0}^n (-1)^k {n \choose k} e^{i x(n-2k)} \\ &=& \lim_{\epsilon\to 0^+} \frac{1}{2} \frac{1}{(2i)^n} \sum_{k=0}^n (-1)^k {n \choose k} \int_{-\infty}^\infty dx\, \frac{e^{i x(n-2k)}}{(x-i\epsilon)^n}. \end{eqnarray*}$$ If $n-2k \ge 0$ we close the contour in the upper half-plane and pick up the residue at $x=i\epsilon$. Otherwise we close the contour in the lower half-plane and pick up no residues. The upper limit of the sum is thus $\lfloor n/2\rfloor$. Therefore, using the Cauchy differentiation formula, we find $$\begin{eqnarray*} \int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n &=& \frac{1}{2} \frac{1}{(2i)^n} \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} \frac{2\pi i}{(n-1)!} \left.\frac{d^{n-1}}{d x^{n-1}} e^{i x(n-2k)}\right|_{x=0} \\ &=& \frac{1}{2} \frac{1}{(2i)^n} \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} \frac{2\pi i}{(n-1)!} (i(n-2k))^{n-1} \\ &=& \frac{\pi}{2^n (n-1)!} \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} (n-2k)^{n-1}. \end{eqnarray*}$$ The sum can be written in terms of the hypergeometric function but the result is not particularly enlightening.
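As a sanity check on the final formula, here is a rough numerical comparison of mine (the truncation point $T$ and the Simpson step count are arbitrary choices; the tail beyond $T$ is $O(T^{1-n})$):

```python
import math

def sinc_power_integral(n):
    # closed form derived above:
    # pi / (2^n (n-1)!) * sum_{k=0}^{floor(n/2)} (-1)^k C(n,k) (n-2k)^(n-1)
    s = sum((-1)**k * math.comb(n, k) * (n - 2 * k)**(n - 1)
            for k in range(n // 2 + 1))
    return math.pi * s / (2**n * math.factorial(n - 1))

def numeric(n, T=300.0, steps=300000):
    # composite Simpson rule on [0, T]
    h = T / steps
    f = lambda x: (math.sin(x) / x)**n if x > 0 else 1.0
    total = f(0) + f(T)
    total += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    total += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
    return total * h / 3

for n in (2, 3, 4):
    assert abs(numeric(n) - sinc_power_integral(n)) < 1e-2
```

For $n=2$ the closed form gives $\pi/2$ and for $n=3$ it gives $3\pi/8$, matching the known values.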


field theory - If $k_1, ldots, k_n$ are non-square, pairwise coprime, then $sqrt {k_n} not in mathbf{Q}(sqrt {k_1}, ldots, sqrt {k_{n-1}})$

Seems intuitive. Like the fact that $\sqrt 3 \not \in \mathbf{Q}(\sqrt 2)$. But how to approach actually proving it? The proof of this fact doesn't seem to generalize well.

complex numbers - prove the following equation about inverse of tan in logarithmic for

$$\arctan(z)=\frac1{2i}\log\left(\frac{1+iz}{1-iz}\right)$$



I have tried, but my answer doesn't match the equation. The componendo-dividendo property might be useful. For comparison, $$\arcsin(z)=\frac1i\log\left(iz+\sqrt{1-z^2}\right)$$

fake proofs - Why is $i^3$ (the complex number "$i$") equal to $-i$ instead of $i$?


$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i $$


Please take a look at the equation above. What is wrong in my reasoning that it yields $i^3 = i$ rather than $-i$?


Answer




We cannot say that $\sqrt{a}\sqrt{b}=\sqrt{ab}$ for negative $a$ and $b$. If this were true, then $1=\sqrt{1}=\sqrt{\left(-1\right)\cdot\left(-1\right)} = \sqrt{-1}\sqrt{-1}=i\cdot i=-1$. Since this is false, we have to say that $\sqrt{a}\sqrt{b}\neq\sqrt{ab}$ in general when we extend it to accept negative numbers.
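Python's `cmath`, which uses the principal branch of the square root, illustrates the failure directly (a small sketch of mine):

```python
import cmath

# principal square roots: sqrt(-1) = i
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i = -1
rhs = cmath.sqrt((-1) * (-1))           # sqrt(1) = 1

assert abs(lhs - (-1)) < 1e-12
assert abs(rhs - 1) < 1e-12             # so sqrt(a)sqrt(b) != sqrt(ab) here
```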


Monday, August 24, 2015

discrete mathematics - Proving $(0,1)$ and $[0,1]$ have the same cardinality

Prove $(0,1)$ and $[0,1]$ have the same cardinality.


I've seen questions similar to this but I'm still having trouble. I know that for $2$ sets to have the same cardinality there must exist a bijection function from one set to the other. I think I can create a bijection function from $(0,1)$ to $[0,1]$, but I'm not sure how to do the opposite. I'm having trouble creating a function that maps $[0,1]$ to $(0,1)$. The best I can think of would be something like $x \over 2$.


Help would be great.
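For reference, one standard explicit bijection $f\colon[0,1]\to(0,1)$ (a textbook construction, not taken from the question) shifts the countable sequence $0,1,\tfrac12,\tfrac13,\dots$ into $(0,1)$ and fixes everything else:

```latex
f(x) =
\begin{cases}
\tfrac{1}{2} & \text{if } x = 0,\\[2pt]
\tfrac{1}{3} & \text{if } x = 1,\\[2pt]
\tfrac{1}{n+2} & \text{if } x = \tfrac{1}{n} \text{ for some integer } n \ge 2,\\[2pt]
x & \text{otherwise.}
\end{cases}
```

Its inverse shifts the sequence back, so no Schröder-Bernstein argument is needed.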

sequences and series - Value of $sumlimits_n x^n$


Why does the following hold:


\begin{equation*} \displaystyle \sum\limits_{n=0}^{\infty} 0.7^n=\frac{1}{1-0.7} = 10/3 ? \end{equation*}



Can we generalize the above to



$\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ?



Are there some values of $x$ for which the above formula is invalid?


What about if we take only a finite number of terms? Is there a simpler formula?



$\displaystyle \sum_{n=0}^{N} x^n$



Is there a name for such a sequence?




This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.


and here: List of abstract duplicates.


Answer



By definition, a "series" (an "infinite sum") $$\sum_{n=k}^{\infty} a_n$$ is defined to be a limit, namely $$\sum_{n=k}^{\infty} a_n= \lim_{N\to\infty} \sum_{n=k}^N a_n.$$ That is, the "infinite sum" is the limit of the "partial sums", if this limit exists. If the limit exists, equal to some number $S$, we say the series "converges" to the limit, and we write $$\sum_{n=k}^{\infty} a_n = S.$$ If the limit does not exist, we say the series diverges and is not equal to any number.


So writing that $$\sum_{n=0}^{\infty} 0.7^n = \frac{1}{1-0.7}$$ means that we are asserting that $$\lim_{N\to\infty} \sum_{n=0}^N0.7^n = \frac{1}{1-0.7}.$$


So what your question is really asking is: why is this limit equal to $\frac{1}{1-0.7}$? (Or rather, that is the only way to make sense of the question).


In order to figure out the limit, it is useful (but not strictly necessary) to have a formula for the partial sums, $$s_N = \sum_{n=0}^N 0.7^n.$$ This is where the formulas others have given come in. If you take the $N$th partial sum and multiply by $0.7$, you get $$\begin{array}{rcrcrcrcrcrcl} s_N &= 1 &+& (0.7) &+& (0.7)^2 &+& \cdots &+& (0.7)^N\\ (0.7)s_N &= &&(0.7) &+& (0.7)^2 &+&\cdots &+&(0.7)^N &+& (0.7)^{N+1} \end{array}$$ so that $$(1-0.7)s_N = s_N - (0.7)s_N = 1 - (0.7)^{N+1}.$$ Solving for $s_N$ gives $$s_N = \frac{1 - (0.7)^{N+1}}{1-0.7}.$$ What is the limit as $N\to\infty$? The only part of the expression that depends on $N$ is $(0.7)^{N+1}$. Since $|0.7|\lt 1$, then $\lim\limits_{N\to\infty}(0.7)^{N+1} = 0$. So, $$\lim_{N\to\infty}s_N = \lim_{N\to\infty}\left(\frac{1-(0.7)^{N+1}}{1-0.7}\right) = \frac{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}(0.7)^{N+1}}{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}0.7} = \frac{1 - 0}{1-0.7} = \frac{1}{1-0.7}.$$ Since the limit exists, then we write $$\sum_{n=0}^{\infty}(0.7)^n = \frac{1}{1-0.7}.$$


More generally, a sum of the form $$a + ar + ar^2 + ar^3 + \cdots + ar^k$$ with $a$ and $r$ constant is said to be a "geometric series" with initial term $a$ and common ratio $r$. If $a=0$, then the sum is equal to $0$. If $r=1$, then the sum is equal to $(k+1)a$. If $r\neq 1$, then we can proceed as above. Letting $$S = a +ar + \cdots + ar^k$$ we have that $$S - rS = (a+ar+\cdots+ar^k) - (ar+ar^2+\cdots+a^{k+1}) = a - ar^{k+1}$$ so that $$(1-r)S = a(1 - r^{k+1}).$$ Dividing through by $1-r$ (which is not zero since $r\neq 1$), we get $$S = \frac{a(1-r^{k+1})}{1-r}.$$


A series of the form $$ \sum_{n=0}^{\infty}ar^{n} $$ with $a$ and $r$ constants is called an infinite geometric series. If $r=1$, then $$ \lim_{N\to\infty}\sum_{n=0}^{N}ar^{n} = \lim_{N\to\infty}\sum_{n=0}^{N}a = \lim_{N\to\infty}(N+1)a = \infty, $$ so the series diverges. If $r\neq 1$, then using the formula above we have: $$ \sum_{n=0}^{\infty}ar^n = \lim_{N\to\infty}\sum_{n=0}^{N}ar^{N} = \lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}. $$ The limit exists if and only if $\lim\limits_{N\to\infty}r^{N+1}$ exists. Since $$ \lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll} 0 &\mbox{if $|r|\lt 1$;}\\ 1 & \mbox{if $r=1$;}\\ \text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$} \end{array}\right. $$ it follows that: $$ \begin{align*} \sum_{n=0}^{\infty}ar^{n} &=\left\{\begin{array}{ll} 0 &\mbox{if $a=0$;}\\ \text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\ \lim\limits_{N\to\infty}\frac{a(1-r^{N+1})}{1-r} &\mbox{if $r\neq 1$;}\end{array}\right.\\ &= \left\{\begin{array}{ll} \text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\ \text{diverges}&\mbox{if $a\neq 0$, and $r=-1$ or $|r|\gt 1$;}\\ \frac{a(1-0)}{1-r}&\mbox{if $|r|\lt 1$;} \end{array}\right.\\ &=\left\{\begin{array}{ll} \text{diverges}&\mbox{if $a\neq 0$ and $|r|\geq 1$;}\\ \frac{a}{1-r}&\mbox{if $|r|\lt 1$.} \end{array}\right. \end{align*} $$


Your particular example has $a=1$ and $r=0.7$.
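The partial-sum formula and the limit are easy to check numerically; a small sketch of mine for $a=1$, $r=0.7$:

```python
# s_N = sum_{n=0}^N r^n versus the closed form (1 - r^(N+1)) / (1 - r)
r = 0.7
s = 0.0
for N in range(200):
    s += r**N
    closed = (1 - r**(N + 1)) / (1 - r)
    assert abs(s - closed) < 1e-9

# after many terms the partial sums are essentially at the limit 1/(1-r) = 10/3
assert abs(s - 1 / (1 - r)) < 1e-9
assert abs(s - 10 / 3) < 1e-9
```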




Since this recently came up (09/29/2011), let's provide a formal proof that $$ \lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll} 0 &\mbox{if $|r|\lt 1$;}\\ 1 & \mbox{if $r=1$;}\\ \text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$} \end{array}\right. $$


If $r\gt 1$, then write $r=1+k$, with $k\gt0$. By the binomial theorem, $r^n = (1+k)^n \gt 1+nk$, so it suffices to show that for every real number $M$ there exists $n\in\mathbb{N}$ such that $nk\gt M$. This is equivalent to asking for a natural number $n$ such that $n\gt \frac{M}{k}$, and this holds by the Archimedean property; hence if $r\gt 1$, then $\lim\limits_{n\to\infty}r^n$ does not exist. From this it follows that if $r\lt -1$ then the limit also does not exist: given any $M$, there exists $n$ such that $r^{2n}\gt M$ and $r^{2n+1}\lt M$, so $\lim\limits_{n\to\infty}r^n$ does not exist if $r\lt -1$.


If $r=-1$, then for every real number $L$ either $|L-1|\gt \frac{1}{2}$ or $|L+1|\gt \frac{1}{2}$. Thus, for every $L$ and for every $M$ there exists $n\gt M$ such that $|L-r^n|\gt \frac{1}{2}$, proving the limit cannot equal $L$; thus, the limit does not exist. If $r=1$, then $r^n=1$ for all $n$, so for every $\epsilon\gt 0$ we can take $N=1$, and for all $n\geq N$ we have $|r^n-1|\lt\epsilon$, hence $\lim\limits_{n\to\infty}1^n = 1$. Similarly, if $r=0$, then $\lim\limits_{n\to\infty}r^n = 0$ by taking $N=1$ for any $\epsilon\gt 0$.


Next, assume that $0\lt r\lt 1$. Then the sequence $\{r^n\}_{n=1}^{\infty}$ is strictly decreasing and bounded below by $0$: we have $0\lt r \lt 1$, so multiplying by $r\gt 0$ we get $0\lt r^2 \lt r$. Assuming $0\lt r^{k+1}\lt r^k$, multiplying through by $r$ we get $0\lt r^{k+2}\lt r^{k+1}$, so by induction we have that $0\lt r^{n+1}\lt r^n$ for every $n$.


Since the sequence is bounded below, let $\rho\geq 0$ be the infimum of $\{r^n\}_{n=1}^{\infty}$. Then $\lim\limits_{n\to\infty}r^n =\rho$: indeed, let $\epsilon\gt 0$. By the definition of infimum, there exists $N$ such that $\rho\leq r^N\lt \rho+\epsilon$; hence for all $n\geq N$, $$|\rho-r^n| = r^n-\rho \leq r^N-\rho \lt\epsilon.$$ Hence $\lim\limits_{n\to\infty}r^n = \rho$.


In particular, $\lim\limits_{n\to\infty}r^{2n} = \rho$, since $\{r^{2n}\}_{n=1}^{\infty}$ is a subsequence of the converging sequence $\{r^n\}_{n=1}^{\infty}$. On the other hand, I claim that $\lim\limits_{n\to\infty}r^{2n} = \rho^2$: indeed, let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$, $r^n - \rho\lt\epsilon$. Moreover, we can assume that $\epsilon$ is small enough so that $\rho+\epsilon\lt 1$. Then $$|r^{2n}-\rho^2| = |r^n-\rho||r^n+\rho| = (r^n-\rho)(r^n+\rho)\lt (r^n-\rho)(\rho+\epsilon) \lt r^n-\rho\lt\epsilon.$$ Thus, $\lim\limits_{n\to\infty}r^{2n} = \rho^2$. Since a sequence can have only one limit, and the sequence of $r^{2n}$ converges to both $\rho$ and $\rho^2$, then $\rho=\rho^2$. Hence $\rho=0$ or $\rho=1$. But $\rho=\mathrm{inf}\{r^n\mid n\in\mathbb{N}\} \leq r \lt 1$. Hence $\rho=0$.


Thus, if $0\lt r\lt 1$, then $\lim\limits_{n\to\infty}r^n = 0$.


Finally, if $-1\lt r\lt 0$, then $0\lt |r|\lt 1$. Let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$ we have $|r^n| = ||r|^n|\lt\epsilon$, since $\lim\limits_{n\to\infty}|r|^n = 0$. Thus, for all $\epsilon\gt 0$ there exists $N$ such that for all $n\geq N$, $| r^n-0|\lt\epsilon$. This proves that $\lim\limits_{n\to\infty}r^n = 0$, as desired.


In summary, $$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll} 0 &\mbox{if $|r|\lt 1$;}\\ 1 & \mbox{if $r=1$;}\\ \text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$} \end{array}\right.$$



The argument suggested by Srivatsan Narayanan in the comments to deal with the case $0\lt|r|\lt 1$ is less clumsy than mine above: there exists $a\gt 0$ such that $|r|=\frac{1}{1+a}$. Then we can use the binomial theorem as above to get that $$|r^n| = |r|^n = \frac{1}{(1+a)^n} \leq \frac{1}{1+na} \lt \frac{1}{na}.$$ By the Archimedean Property, for every $\epsilon\gt 0$ there exists $N\in\mathbb{N}$ such that $Na\gt \frac{1}{\epsilon}$, and hence for all $n\geq N$, $\frac{1}{na}\leq \frac{1}{Na} \lt\epsilon$. This proves that $\lim\limits_{n\to\infty}|r|^n = 0$ when $0\lt|r|\lt 1$, without having to invoke the infimum property explicitly.



calculus - Is there an integral for $frac{1}{zeta(3)} $?

There are many integral representations for $\zeta(3)$



Some lesser known are for instance :



$$\int_0^1\frac{x(1-x)}{\sin\pi x}\text{d}x= 7\frac{\zeta(3)}{\pi^3} $$



$$\int_0^1 \frac{\operatorname{li}(x)^3 \space (x-1)}{x^3} \text{d}x = \frac{\zeta(3)}{4} $$



$$\int_0^\pi x(\pi - x) \csc(x) \space \text{d}x = 7 \space \zeta(3) $$




$$ \int_0^{\infty} \frac{\tanh^2(x)}{x^2} \text{d}x = \frac{ 14 \space \zeta(3)}{\pi^2} $$



$$\int_0^{\frac{\pi}{2}} x \log\tan x \;\text{d}x=\frac{7}{8}\zeta(3)$$
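These identities are easy to spot-check numerically. Here is a check of the $\csc$ integral, a sketch of mine using a composite Simpson rule; note that $x(\pi-x)\csc x$ extends continuously to $[0,\pi]$ with value $\pi$ at both endpoints:

```python
import math

zeta3 = sum(1 / k**3 for k in range(1, 100000))     # zeta(3) = 1.2020569...

def f(x):
    # x(pi - x)/sin(x), extended continuously: the limit at 0 and pi is pi
    return x * (math.pi - x) / math.sin(x) if 0 < x < math.pi else math.pi

# composite Simpson rule on [0, pi]
steps = 20000
h = math.pi / steps
val = f(0) + f(math.pi)
val += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
val += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
val *= h / 3

assert abs(val - 7 * zeta3) < 1e-4
```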



$\zeta(2)$ also has many integral representations, as does $\frac{1}{\zeta(2)}$, although this is probably because $\frac{1}{\pi}$ and $\frac{1}{\pi^2}$ have many.
Well, I suspect that because I know no simple integral expression for $\frac{1}{\zeta(3)}$.



My question is: is there some interesting integral $^*$ whose result is simply $\frac{1}{\zeta(3)}$?




Note



$^*$ Interesting integral means that things like



$$\int\limits_0^{+\infty} e^{- \zeta(3) \space x}\ \text{d}x = \frac{1}{\zeta(3)} $$



are not a good answer to my question.

calculus - Evaluating $limlimits_{ntoinfty} e^{-n} sumlimits_{k=0}^{n} frac{n^k}{k!}$


I'm supposed to calculate:


$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$


By using W|A, i may guess that the limit is $\frac{1}{2}$ that is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer




Edited. I justified the application of the dominated convergence theorem.


By a simple calculation,


$$ \begin{align*} e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!} &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\ (1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\ &= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\ (2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\ &= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\ (3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du. \end{align*}$$


We remark that


  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.

Then, in view of Stirling's formula, it suffices to show that


$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$


The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$


and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then


$$ \log g_n (u) = n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u = -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$


From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,


$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
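The limit is easy to probe numerically. A sketch of mine, computing each term in log space via `math.lgamma` to avoid overflow:

```python
import math

def poisson_partial(n):
    # e^{-n} sum_{k=0}^{n} n^k / k!, each term as exp(k log n - n - log k!)
    return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

# the values drift down toward 1/2 as n grows
for n in (10, 100, 1000, 10000):
    print(n, poisson_partial(n))

assert abs(poisson_partial(10000) - 0.5) < 0.01
```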


calculus - Proving that $n over 2^n$ converges to $0$


I'm completely clueless on this one. I can easily calculate the limit using L'Hopital's rule, but proving that the sequence converges to $0$ is far more tricky.


$$a_n= {n \over 2^n}$$


Any help?


Answer



Using the fact that $2^n>n^2$ for $n > 4$, we have: $$0 \leq a_n < \frac{n}{n^2}=\frac{1}{n}.$$



Hence, $\displaystyle \lim_{n \to \infty}a_n=0.$
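The inequality $2^n > n^2$ for $n > 4$, and the resulting squeeze, are trivial to spot-check:

```python
# check 2^n > n^2 for n > 4, hence 0 <= n/2^n < 1/n
for n in range(5, 100):
    assert 2**n > n**2
    assert 0 <= n / 2**n < 1 / n
```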


Sunday, August 23, 2015

trigonometry - How to solve $3 - 2 cos theta - 4 sin theta - cos 2theta + sin 2theta = 0$



I have got a bunch of trig equations to solve for tomorrow, and got stuck on this one.



Solve for $\theta$:



$$3 - 2 \cos \theta - 4 \sin \theta - \cos 2\theta + \sin 2\theta = 0$$




I tried using the addition formula, product-to-sum formula, double angle formula and just brute force by expanding all terms on this, but couldn't get it.



I am not supposed to use inverse functions or a calculator to solve this.



Tried using Wolfram|Alpha's step by step function on this, but it couldn't explain things.


Answer



Let $x = \sin(\theta), y = \cos(\theta)$



$$3 - 2 y - 4x - 2y^2+1 + 2xy = 0$$




Simplify, divide by $2$ and replace $y^2$ with $1-x^2$.



$$1 - y - 2x+x^2+ xy = 0$$



Factor



$$(x-1)(x+y-1) = 0$$



Now just solve $\sin(\theta) = 1$ and $\sin(\theta) + \cos(\theta) = 1$.
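Tracing the steps back, the original expression equals $2(\sin\theta-1)(\sin\theta+\cos\theta-1)$ (the factor $2$ reappears because the equation was divided by $2$ along the way). A quick numerical spot-check of mine:

```python
import math

# verify 3 - 2cos t - 4sin t - cos 2t + sin 2t == 2(sin t - 1)(sin t + cos t - 1)
for i in range(1000):
    t = -10 + 0.02 * i
    lhs = 3 - 2 * math.cos(t) - 4 * math.sin(t) - math.cos(2 * t) + math.sin(2 * t)
    x, y = math.sin(t), math.cos(t)
    assert abs(lhs - 2 * (x - 1) * (x + y - 1)) < 1e-12
```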


real analysis - Characterizing discontinuous derivatives


Apparently the set of discontinuity of derivatives is weird in its own sense. Following are the examples that I know so far:



$1.$ $$g(x)=\left\{ \begin{array}{ll} x^2 \sin(\frac{1}{x}) & x \in (0,1] \\ 0 & x=0 \end{array}\right.$$ $g'$ is discontinuous at $x=0$.


$2. $ The Volterra function defined on the ternary Cantor set is differentiable everywhere but the derivative is discontinuous on the whole of Cantor set ,that is on a nowhere dense set of measure zero.


$3.$ The Volterra function defined on the Fat-Cantor set is differentiable everywhere but the derivative is discontinuous on the whole of Fat-Cantor set ,that is on a set of positive measure, but not full measure.


$4.$ I am yet to find a derivative which is discontinuous on a set of full measure.


Some good discussion about this can be found here and here.


Questions:


1.What are some examples of functions whose derivative is discontinuous on a dense set of zero measure , say on the rationals?


2.What are some examples of functions whose derivative is discontinuous on a dense set of positive measure , say on the irrationals?


Update: One can find a function which is differentiable everywhere but whose derivative is discontinuous on a dense set of zero measure here.


Answer




There is no everywhere differentiable function $f$ on $[0,1]$ such that $f'$ is discontinuous at each irrational there. That's because $f',$ being the everywhere pointwise limit of continuous functions, is continuous on a dense $G_\delta$ subset of $[0,1].$ This is a result of Baire. Thus $f'$ can't be continuous only on a subset of the rationals, a set of the first category.


But there is a differentiable function whose derivative is discontinuous on a set of full measure.


Proof: For every Cantor set $K\subset [0,1]$ there is a "Volterra function" $f$ relative to $K,$ which for the purpose at hand means a differentiable function $f$ on $[0,1]$ such that i)$|f|\le 1$ on $[0,1],$ ii) $|f'|\le 1$ on $[0,1],$ iii) $f'$ is continuous on $[0,1]\setminus K,$ iv) $f'$ is discontinuous at each point of $K.$


Now we can choose disjoint Cantor sets $K_n \subset [0,1]$ such that $\sum_n m(K_n) = 1.$ For each $n$ we choose a Volterra function $f_n$ as above. Then define


$$F=\sum_{n=1}^{\infty} \frac{f_n}{2^n}.$$


$F$ is well defined by this series, and is differentiable on $[0,1].$ That's because each summand above is differentiable there, and the sum of derivatives converges uniformly on $[0,1].$ So we have


$$F'(x) = \sum_{n=1}^{\infty} \frac{f_n'(x)}{2^n}\,\, \text { for each } x\in [0,1].$$


Let $x_0\in \cup K_n.$ Then $x_0$ is in some $K_{n_0}.$ We can write


$$F'(x) = \frac{f_{n_0}'(x)}{2^{n_0}} + \sum_{n\ne n_0}\frac{f_n'(x)}{2^n}.$$


Now the sum on the right is continuous at $x_0,$ being the uniform limit of functions continuous at $x_0.$ But $f_{n_0}'/2^{n_0}$ is not continuous at $x_0.$ This shows $F'$ is not continuous at $x_0.$ Since $x_0$ was an aribtrary point in $\cup K_n,$ $F'$ is discontinuous on a set of full measure as desired.



real analysis - Convergence in measure and bounded $L^p$ norm implies convergence in $L^p$




Let $1 < p < \infty$ and suppose that

1. $f_n \rightarrow f$ in measure, and
2. $\sup_{n \in \mathbb{N}} \|f_n\|_p < +\infty$, where $\|f_n\|_p = (\int |f_n|^p \,\mathrm{d}\mu)^{\frac{1}{p}}$ is the $L^p$ norm.

Then $\int f_n \,\mathrm{d}\mu \rightarrow \int f \,\mathrm{d}\mu$.





I know that convergence in measure implies that there exists a subsequence converging a.e., but how does this relate to convergence in $L^p(\mu)$?



Update:
Thanks to @carmichael561, the conclusion of the question as originally stated was incorrect. It should be the convergence of the integrals.


Answer



The two conditions stated are not enough to imply convergence in $L^p$. Indeed, if $X=[0,1]$ and $\mu$ is Lebesgue measure, then the sequence of functions
$$ f_n=n1_{[0,\frac{1}{n^2}]}$$
converges to zero in measure and is bounded in $L^2$ because
$$ \int_X|f_n|^2\;d\mu=n^2\cdot\frac{1}{n^2}=1 $$

for all $n$. But $f_n\not\to 0$ in $L^2$.





contour integration - Example of Improper integral in complex analysis




I'm doing this example of Cauchy principal value



$$ \int_0^\infty \frac{dx}{x^3+1}=\frac{2\pi}{3\sqrt{3}} $$



After some steps I got



$$ \int_{[0,R]+C_R} \frac{dz}{z^3+1}=2\pi i\,B_1 \quad \text{where } B_1= \operatorname{Res}_{z=z_0}\frac{1}{z^3+1} $$




also I got that $\displaystyle \bigg|\int_{C_R} \frac{dz}{z^3+1}\bigg|\to 0 \text{ as } R \to \infty$



There is a problem in finding the residue at $z_0=\displaystyle \frac{1+\sqrt{3}i}{2}$



Here I am considering the following contour:



[figure: the wedge contour of angle $2\pi/3$, consisting of the segment $[0,R]$, the arc $C_R$, and the angled segment back to the origin]



Please help me. Thanks in advance.


Answer




The contour is good. Two things though:



1) You have to consider the integral along the angled line of the wedge contour. The angle of the contour was chosen to preserve the integrand. 2) Write $z=e^{i 2 \pi/3} x$ and get that the contour integral is



$$\left(1-e^{i 2 \pi/3}\right) \int_0^{\infty} \frac{dx}{x^3+1} = i 2 \pi \frac{1}{3 e^{i 2 \pi/3}}$$



The term on the right is the residue at the pole $z=e^{i\pi/3}$ times $i 2\pi$. I used the fact that, if $f(z)=a(z)/b(z)$, then the residue of a simple pole $z_k$ of $f$ is $a(z_k)/b'(z_k)$.



Note that $e^{i 2 \pi/3}-e^{i 4 \pi/3}=i \sqrt{3}$. The result follows.
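The stated value checks out numerically; a sketch of mine using a composite Simpson rule on $[0,A]$ plus the tail estimate $\int_A^\infty \frac{dx}{x^3+1}\approx\frac{1}{2A^2}$:

```python
import math

def f(x):
    return 1 / (x**3 + 1)

# composite Simpson rule on [0, A]; the tail beyond A is about 1/(2 A^2)
A, steps = 1000.0, 200000
h = A / steps
val = f(0) + f(A)
val += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
val += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
val = val * h / 3 + 1 / (2 * A**2)        # add the tail estimate

assert abs(val - 2 * math.pi / (3 * math.sqrt(3))) < 1e-4
```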


Saturday, August 22, 2015

sequences and series - How do I evaluate this sum :$sum_{n=1}^{infty}frac{{(-1)}^{n²}}{{(ipi)}^{n}}$?

I'm interested to know how to evaluate this sum: $$\sum_{n=1}^{\infty}\frac{{(-1)}^{n^2}}{{(i\pi)}^{n}}$$ I have tried to evaluate it using two partial sums, over odd integers $n$ and even integers $n$, but I can't, since it's an alternating series. I would also like to know whether it's a well-known series, and whether its value is real or complex.



Note: Wolfram Alpha showed that it is a convergent series by the root test.



Thank you for any help
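Since $n^2$ and $n$ always have the same parity, $(-1)^{n^2}=(-1)^n$, so this is a geometric series with complex ratio $q=-\frac{1}{i\pi}$, $|q|=\frac1\pi<1$, summing to $\frac{q}{1-q}=-\frac{1}{1+i\pi}$, a complex number. A quick numerical check of this computation of mine (not from the post):

```python
import cmath

q = -1 / (1j * cmath.pi)                 # common ratio, |q| = 1/pi < 1
partial = sum(q**n for n in range(1, 200))
closed = -1 / (1 + 1j * cmath.pi)        # = q / (1 - q)

assert abs(q) < 1
assert abs(partial - closed) < 1e-12
```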

analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...