Saturday, June 30, 2018

functional equations - Solutions of $f(x+y^{n})=f(x)+[f(y)]^{n}$.



Consider the functional equation $f(x+y^{n})=f(x)+[f(y)]^{n}$ where $f:\mathbb R \to \mathbb R$ and $n>1$ is a given integer. This equation was discussed yesterday and it was shown that $f$ is necessarily additive. Assuming continuity, it was concluded that $f(x)\equiv cx$ for some $c$. [Necessarily $c$ satisfies $c^{n}=c$.] If $n$ is even, then the given functional equation gives $f(x+y^{n}) \geq f(x)$, which easily leads to the conclusion that $f$ is an increasing function. It follows that $f$ is Borel measurable; since any Borel measurable additive function is of the type $f(x)\equiv cx$, the assumption that $f$ is continuous is not necessary.

My question is: what can be said for $n$ odd? Can one use some trick to prove that $f$ is necessarily Borel measurable? Or is there a counterexample? Discontinuous additive functions are constructed using a Hamel basis, but I am unable to use this method to construct a counterexample. I would appreciate receiving any ideas about this question.



Answer



Here's a generalization of i707107's argument that is actually a bit simpler, as long as I didn't make any mistakes:



You have



$$f(x+y)=f(x)+f(y)$$



and



\begin{align}

\sum_{i=0}^n \binom{n}{i}f(x^iy^{n-i})
&=f((x+y)^n)\\
&=f(x+y)^n\\
&=(f(x)+f(y))^n\\
&=\sum_{i=0}^n \binom{n}{i}f(x)^if(y)^{n-i}.
\end{align}



Taking $y$ rational, we have $f(x^iy^{n-i})=y^{n-i}f(x^i)$ and $f(y)=yf(1)$, so



$$\sum_{i=0}^n\binom{n}{i}y^{n-i}\left[f(x^i)-f(1)^{n-i}f(x)^i\right]=0$$




As this is a polynomial in $y$ of degree $n$ that is $0$ for all rationals, it is identically $0$, so



$$f(x^i)=f(1)^{n-i}f(x)^i$$



for all $0\leq i\leq n$. Originally, we had $f(1)=f(1)^n$, so $f(1)\in\{-1,0,1\}$. If $f(1)=0$, we have $f(x^i)=0$, so $\boxed{f(x)\equiv 0}$. Otherwise, we have



$$f(x^2)=f(1)^{n-2}f(x)^2=f(1)f(x)^2$$



$$f(x+y^2)=f(x)+f(y^2)=f(x)+f(1)f(y)^2.$$




If $f(1)=1$, this means $f$ is increasing, and if $f(1)=-1$ this means $f$ is decreasing. Either way, the graph of $f$ is not everywhere dense in the plane, so $f(x)=cx$ for some $c$ and all $x$. Since $f(1)=\pm 1$, the only other solutions are $\boxed{f(x)=x}$ and $\boxed{f(x)=-x}$.


functions - Explicit bijection between $[0,1)$ and $(0,1)$





Proving that $[0,1)$ and $(0,1)$ have the same cardinality (without assuming any previous knowledge) can be done easily using the Cantor-Bernstein theorem.



However I'm wondering if someone can build an explicit bijection between these sets.



It's easy to build a bijection between $(0,1)$ and $\mathbb R$, so a bijection from $[0,1)$ to $\mathbb R$ will also fit the bill.


Answer




Let us partition $(0,1)$ into a countable number of disjoint subsets of the form $[\frac{1}{n+1},\frac{1}{n})$ for $n=1,2,3,\ldots$.



These half-open intervals may then be positioned in reverse order to form a half-open interval of equal length. Whether this construction is sufficiently explicit is open to question, but it does allow the relocation of any $x\in (0,1)$ to $[0,1)$ to be computed in short order.



A more succinct construction is to define $f:[0,1) \to (0,1)$ by $f(0) = 1/2$, $f(1/n) = 1/(n+1)$ for integer $n \ge 2$, and $f(x) = x$ otherwise.
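A quick sanity check of this succinct map on a sample of rational points (the checking code is my own, not part of the answer):

```python
from fractions import Fraction

def f(x):
    # f(0) = 1/2, f(1/n) = 1/(n+1) for integer n >= 2, f(x) = x otherwise
    if x == 0:
        return Fraction(1, 2)
    if x.numerator == 1 and x.denominator >= 2:
        return Fraction(1, x.denominator + 1)
    return x

# sample the "shifted" orbit {0, 1/2, 1/3, ...} plus a generic point
pts = [Fraction(0)] + [Fraction(1, n) for n in range(2, 50)] + [Fraction(3, 7)]
imgs = [f(p) for p in pts]
assert len(set(imgs)) == len(imgs)    # injective on the sample
assert all(0 < y < 1 for y in imgs)   # every image lies in (0, 1)
```

Note how $0$ maps to $1/2$ and each $1/n$ shifts down the chain, so the "extra" endpoint is absorbed without any value in $(0,1)$ being missed.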


Friday, June 29, 2018

number theory - Divisibility by 37 proof



$\overline {abc}$ is divisible by $37$. Prove that $\overline {bca}$ and $\overline {cab}$ are also divisible by $37$.



$$\overline {abc} = 100a + 10b + c$$
$$\overline {bca} = 100b + 10c + a$$
$$\overline {cab} = 100c + 10a + b$$
When you add them:
$$\overline {abc} + \overline {bca} + \overline {cab} = 111a + 111b + 111c$$

$$\overline {abc} + \overline {bca} + \overline {cab} = 111(a + b + c)$$
Since $111$ is divisible by $37$, the whole sum is divisible by $37$, but how can I prove that $\overline {abc}$, $\overline {bca}$, $\overline {cab}$ separately are divisible by $37$?



Any tips or hints appreciated.


Answer



$$\overline{bca}=10\cdot\overline{abc}-1000a+a = 10\cdot\overline{abc}-27\cdot 37\cdot a$$
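The identity above (using $999 = 27\cdot 37$) can be brute-force checked over all three-digit multiples of $37$ (my own verification, not part of the answer):

```python
def rotate(n):
    # digit rotation: abc -> bca
    a, b, c = n // 100, (n // 10) % 10, n % 10
    return 100 * b + 10 * c + a

for abc in range(100, 1000):
    # the identity used in the answer: bca = 10*abc - 999a, and 999 = 27 * 37
    assert rotate(abc) == 10 * abc - 999 * (abc // 100)
    if abc % 37 == 0:
        assert rotate(abc) % 37 == 0           # bca is divisible by 37
        assert rotate(rotate(abc)) % 37 == 0   # cab is divisible by 37
```

Applying the same identity twice covers $\overline{cab}$ as well.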


Thursday, June 28, 2018

Are there any motivation of dual space and annihilator PURELY in linear algebra?

Is there any motivation for the dual space and the annihilator PURELY in linear algebra? I can prove some related theorems about them, but I totally can't see what I am doing. It feels like I, and the textbook writers too, am just playing a game with symbols: the dual space is defined without motivation, the annihilator is defined without reasons, and $T^t=g\mapsto g\circ T$ is defined without any necessity.



I have seen some people say we need these because they are useful in functional analysis or category theory. However, I think there should be some persuasive reason why we should define them, and what the motivation is for defining them, purely within the field of linear algebra. Don't tell me "when you grow older, you will understand." If the reasons are all in the advanced courses, then why do the authors define and use these notions in Linear Algebra?



Definitions in my books:




linear functional: a linear function from a vector space $V$ to its field of scalars $F$



dual space on $V$: all linear functionals from $V$ to $F$

Two questions on finding trailing digits in (large) numbers and one on divisibility

Without using a calculator, how can we solve the following?




  1. How do we find the number of zeros at the end of $600!$?


  2. What are the last three digits of $171^{172}$?

  3. What is the sum of all positive numbers less than or equal to $61$ that are divisible by both $3$ and $5$?
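All three parts can be cross-checked by direct computation (the code and its approach, Legendre's count of factors of $5$ for the first part, are my own):

```python
import math

# 1) trailing zeros of 600!: count the factors of 5 (Legendre's formula),
#    since factors of 2 are plentiful
zeros = sum(600 // 5**k for k in range(1, 5))  # 120 + 24 + 4 + 0
s = str(math.factorial(600))
assert s.endswith('0' * zeros) and not s.endswith('0' * (zeros + 1))

# 2) last three digits of 171^172 via modular exponentiation
last3 = pow(171, 172, 1000)

# 3) positive numbers <= 61 divisible by 3 and by 5, i.e. by 15
total = sum(k for k in range(1, 62) if k % 15 == 0)
assert total == 15 + 30 + 45 + 60
```

The point of the exercise, of course, is to reach the same answers by hand using these very ideas (counting factors of $5$, repeated squaring mod $1000$, and summing the multiples of $15$).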

Wednesday, June 27, 2018

elementary number theory - Prove that if $d$ divides $n$, then $2^d -1$ divides $2^n -1$

Prove that if $d$ divides $n$, then $2^d -1$ divides $2^n -1$.



Use the identity $x^k -1 = (x-1)*(x^{k-1} + x^{k-2} + \cdots + x +1)$
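Writing $n = dk$ and taking $x=2^{d}$ in the identity gives $2^n-1=(2^d-1)\bigl(2^{d(k-1)}+\cdots+2^d+1\bigr)$; a quick numerical confirmation of this (my own sketch):

```python
# if d | n, write n = d*k; with x = 2^d the identity factors 2^n - 1
for n in range(1, 100):
    for d in range(1, n + 1):
        if n % d == 0:
            k = n // d
            geometric = sum(2**(d * i) for i in range(k))
            assert 2**n - 1 == (2**d - 1) * geometric
            assert (2**n - 1) % (2**d - 1) == 0
```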

calculus - How can the following be calculated?



How can the following series be calculated?



$$S=1+(1+2)+(1+2+3)+(1+2+3+4)+\cdots+(1+2+3+4+\cdots+2011)$$


Answer



Note that $1$ occurs $2011$ times; $2$ occurs $2010$ times; $3$ occurs $2009$ times, and so on, until $2011$ occurs only once. Hence we can rewrite the sum as

$$(2012-1)(1)+(2012-2)(2)+(2012-3)(3)+\cdots+(2012-2011)(2011).$$
Split and regroup terms:
$$2012(1+2+3+\cdots+2011)-(1^2+2^2+3^2+\cdots+2011^2).$$
Now using the two formulas for triangular numbers and square pyramidal numbers, compute
$$2012\frac{2011(2011+1)}{2}-\frac{2011(2011+1)(2\cdot2011+1)}{6}=1357477286.$$






This also evaluates the general sum:
$$1+(1+2)+\cdots+(1+2+\cdots+n)$$

$$=(n+1-1)(1)+(n+1-2)(2)+\cdots+(n+1-n)(n)$$
$$=(n+1)(1+2+\cdots+n)-(1^2+2^2+\cdots+n^2)$$
$$=(n+1)\frac{n(n+1)}{2}-\frac{n(n+1)(2n+1)}{6}=n(n+1)\left[\frac{n+1}{2}-\frac{2n+1}{6}\right]$$
$$=\frac{n(n+1)(n+2)}{6}.$$
One could also use the triangle number formula on each term for a more direct route:
$$\frac{1(1+1)}{2}+\frac{2(2+1)}{2}+\frac{3(3+1)}{2}+\cdots+\frac{n(n+1)}{2}$$
$$=\frac{1}{2}\left[(1+2^2+\cdots+n^2)+(1+2+\cdots+n)\right]$$
$$=\frac{1}{2}\left[\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right]=\frac{n(n+1)}{4}\left[\frac{2n+1}{3}+1\right]$$
$$=\frac{n(n+1)(n+2)}{6}.$$
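Both the specific value and the general closed form can be confirmed by brute force (my own check):

```python
# direct evaluation of S = 1 + (1+2) + ... + (1+2+...+2011)
n = 2011
S = sum(k * (k + 1) // 2 for k in range(1, n + 1))
assert S == n * (n + 1) * (n + 2) // 6   # the general closed form derived above
assert S == 1357477286                    # the value computed in the answer

# the closed form holds for every n, not just 2011
for m in range(1, 100):
    assert sum(k * (k + 1) // 2 for k in range(1, m + 1)) == m * (m + 1) * (m + 2) // 6
```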


calculus - Gaussian type integral $int_{-infty}^{infty} frac{mathrm{e}^{-a^2 x^2}}{1 + x^2} mathrm{d}x$



When working a proof, I reached an expression similar to this:



$$\int_{-\infty}^{\infty} \frac{\mathrm{e}^{-a^2 x^2}}{1 + x^2} \mathrm{d}x$$




I've tried the following:



1. I tried squaring and combining and converting to polar coordinates, like one would solve a standard Gaussian. However, this yielded something which seems no more amenable to a solution:



$$\int_{\theta=0}^{\theta=2\pi} \int_{0}^{\infty} \frac{r \mathrm{e}^{-a^2 r^2}}{(1 + r^2 \sin^2(\theta))(1 + r^2 \cos^2(\theta))} \mathrm{d}r \mathrm{d}\theta$$



2. I tried a trig substitution, $x = \tan u$, and I have no idea what to do from there.



$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \mathrm{e}^{-a^2 \tan^2(u)} \mathrm{d}u$$




3. I looked into doing $u^2 = 1 + x^2$, but this gives an ugly $\mathrm{d}x$ that I don't know how to handle, and moreover, I think I'm breaking my limits of integration (because Mathematica no longer solves it):



$$u^2 = 1 + x^2$$



$$2 u \mathrm{d}u = 2 x \mathrm{d}x$$



$$\mathrm{d}x = \frac{u}{\sqrt{u^2 - 1}}\,\mathrm{d}u$$
$$\mathrm{e}^{a^2} \int_{-\infty}^{\infty} \frac{\mathrm{e}^{-a^2 u^2}}{u \sqrt{u^2 - 1}} \mathrm{d}u$$



4. I looked into some form of differentiation under the integral sign, but that didn't seem to yield anything promising. (I tried parameterizing $x^2$ to $x^b$ in both places, and in either place, and nothing canceled cleanly.)




I have a solution from Mathematica, it's:



$$\pi e^{a^2} \text{erfc}(a)$$



But I'd like to know how to arrive at this. I'm sure it's something simple I'm missing.


Answer



Let $F$ be the function
$$F(a)=\int_{-\infty}^{\infty}\frac{e^{-a^{2}x^{2}}}{1+x^2}dx$$
We take the derivative w.r.t $a$

$$F^{\prime}(a)=\frac{d}{da}\left(\int_{-\infty}^{\infty}\frac{e^{-a^{2}x^{2}}}{1+x^2}dx\right)=\int_{-\infty}^{\infty}\frac{d}{da}\left(\frac{e^{-a^{2}x^{2}}}{1+x^2}\right)dx
=\int_{-\infty}^{\infty}\frac{-2ax^{2}e^{-a^{2}x^{2}}}{1+x^2}dx$$
$$=\int_{-\infty}^{\infty}\frac{-2a\big((x^{2}+1)-1\big)e^{-a^{2}x^{2}}}{1+x^2}dx
=-2a\int_{-\infty}^{\infty}e^{-a^{2}x^{2}}dx+2aF(a)
=-2a\sqrt{\frac{\pi}{a^2}}+2aF(a)$$
Then
$$F^{\prime}(a)=2a\left(F(a)-\sqrt{\pi}\,\frac{1}{\vert{a}\vert}\right)
=2aF(a)-2\sqrt{\pi}\mathrm{sign}(a).$$
Then you have a differential equation:
$$ F^{\prime}(a)-2a\,F(a)=-2\sqrt{\pi}\mathrm{sign}(a) $$

with initial condition $F(0)=\pi$.
This first-order ODE has integrating factor:
$$\mu(a)=\displaystyle{e^{\displaystyle{\int{-2ada}}}}=e^{-a^2}$$
Then
$$
\left(e^{-a^2}F(a)\right)^{\prime}=-2\sqrt{\pi}\mathrm{sign}(a) e^{-a^2} $$
this implies
$$ e^{-a^2}F(a)=-2\sqrt{\pi}\int{\mathrm{sign}(a) e^{-a^2}}da+C $$
Finally,
$$F(a)=e^{a^2}\left(C-2\sqrt{\pi}\mathrm{sign}(a)\int{e^{-a^2}da}\right)$$
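With the constant fixed by $F(0)=\pi$, this gives the stated result $F(a)=\pi e^{a^2}\operatorname{erfc}(a)$ for $a>0$. As a numerical cross-check (my own sketch, using a crude trapezoidal quadrature):

```python
import math

def F_numeric(a, half_width=12.0, steps=240_000):
    # trapezoidal estimate of the integral over [-W, W]; the Gaussian
    # factor makes the tail beyond |x| = 12 negligible for these a
    h = 2 * half_width / steps
    total = 0.0
    for i in range(steps + 1):
        x = -half_width + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-(a * x) ** 2) / (1 + x * x)
    return total * h

for a in (0.5, 1.0, 2.0):
    closed_form = math.pi * math.exp(a * a) * math.erfc(a)
    assert abs(F_numeric(a) - closed_form) < 1e-6
```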



calculus - Evaluating the sum $sum_{n=2}^{infty}ln[1-1/n^2]$




For convenience the sum is again
$$
\sum_{n=2}^{\infty}\ln[1-1/n^2]=\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}
$$
I first tried solving using a definite integral, since this seems to make telescoping easier to see,
$$
\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}=\sum_{n=2}^{\infty}\ln(n^2-1)-\ln(n^2)=
\sum_{n=2}^{\infty}\int_{n^2}^{n^2-1}\frac{dx}{x}
$$
But writing out the first few terms doesn't make any cancellation obvious because of the $n^2$ terms.




I also tried futzing around with log rules and got things down to
$$
\sum_{n=2}^{\infty}\ln\frac{(n^2-1)}{(n^2)}=
\sum_{n=2}^{\infty}\ln((n-1)(n+1))-\ln(n^2)=
\sum_{n=2}^{\infty}\ln(n-1)+\ln(n+1)-\ln(n^2)
$$
The first few terms of which are
$$
\sum_{n=2}^{4}\ln\frac{(n^2-1)}{(n^2)}=[\ln1+\ln3-2\ln 2]+[\ln2+\ln4-2\ln 3]+[\ln3+\ln5-2\ln 4]+...\\

=\ln 2+\ln 5-2\ln 4
$$
Which leads to the guess that
$$
\sum_{n=2}^{N}\ln\frac{(n^2-1)}{(n^2)}=\ln (N-1)+\ln(N+2)-2\ln(N)\\
=\ln(\frac{(N-1)(N+2)}{N^2})\rightarrow 0
$$
Which means I'm wrong. Should I soldier on looking for a pattern through more computation, or is there a more expedient/elegant way to evaluate the sum?


Answer



You were on the right track in writing $$\log\left(1-\frac{1}{n^2}\right)=\log(n+1)-\log(n)+\log(n-1)-\log(n)$$




Proceeding, we can write



$$\begin{align}
\sum_{n=2}^N \log\left(1-\frac{1}{n^2}\right)&=\sum_{n=2}^N \left(\log(n+1)-\log(n)\right)+\sum_{n=2}^N \left(\log(n-1)-\log(n)\right)\\\\
&=\log(N+1)-\log(2)-\log(N)
\end{align}$$



Therefore, we find that




$$\sum_{n=2}^\infty \log\left(1-\frac{1}{n^2}\right)=-\log(2)$$
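Numerically, the partial sums do settle at $-\log 2\approx -0.6931$, and they match the closed form $\log(N+1)-\log 2-\log N$ derived above (the check is my own):

```python
import math

N = 100_000
partial = sum(math.log(1 - 1 / n**2) for n in range(2, N + 1))

# the telescoped closed form for the partial sum
assert abs(partial - (math.log(N + 1) - math.log(2) - math.log(N))) < 1e-9

# and the limit is -log 2
assert abs(partial - (-math.log(2))) < 1e-4
```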


real analysis - Infinity norm related to $L_p(X)$ norm on finite measure space $X$.

Let $(X, \cal M, \mu)$ be a measure space with $\mu(X)<\infty$. Why
\begin{align}
\lim_{p\to\infty}\|f\|_p=\|f\|_\infty
\end{align}
for all $f\in L_\infty$?

Tuesday, June 26, 2018

summation - Formula for sum of first $n$ odd integers

I'm self-studying Spivak's Calculus and I'm currently going through the pages and problems on induction. This is my first encounter with induction and I would like for someone more experienced than me to give me a hint and direction.
The first problem is as follows:



Find a formula for $$\sum_{i=1}^n(2i-1)=1+3+5+...+(2n-1)$$
And the related following problem:



Find a formula for $$\sum_{i=1}^n(2i-1)^2=1^2+3^2+5^2+...+(2n-1)^2$$



The given hints are: "What do these expressions have to do with $1+2+3+...+2n$ and $1^2+2^2+3^2+...+(2n)^2$?"




I recognize that the above sums are the sum of the first $n$ odd integers and the sum of the squares of the first $n$ odd integers, respectively. My question is, in problems like these does one just do a bunch of trial and error, as I have done for quite a while now, or is there a more clever way to go about it?
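Following the hint (subtract the even terms from the full sums up to $2n$), the closed forms one is led to, which are my own claims rather than part of the question, are $\sum_{i=1}^n(2i-1)=n^2$ and $\sum_{i=1}^n(2i-1)^2=\frac{n(2n-1)(2n+1)}{3}$. A quick computational check:

```python
# sum of first n odds = (sum up to 2n) - 2*(sum up to n) = n^2;
# the analogous subtraction for squares gives n(2n-1)(2n+1)/3
for n in range(1, 300):
    odds = [2 * i - 1 for i in range(1, n + 1)]
    assert sum(odds) == n * n
    assert sum(o * o for o in odds) == n * (2 * n - 1) * (2 * n + 1) // 3
```

The induction proofs Spivak asks for then verify exactly these formulas.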

elementary number theory - Arithmetic of integers, divisibility

Show that, for integers $a$ and $b$, if $3\mid a^{2}+b^{2}$ then $3\mid a$ and $3\mid b$.



My immediate attempt was to assume that $3$ divides the sum and to conclude that it must divide each square separately, but that step needs justification.

Monday, June 25, 2018

trigonometry - Find $sin(A)$ and $cos(A)$ given $cos^4(A) - sin^4(A) = frac{1}{2}$ and $A$ is located in the second quadrant.




Question: Find $\sin(A)$ and $\cos(A)$, given $\cos^4(A)-\sin^4(A)=\frac{1}{2}$ and $A$ is located in the second quadrant.




Using the fundamental trigonometric identity, I was able to find that:



• $\cos^2(A) - \sin^2(A) = \frac{1}{2}$




and



• $$ \cos(A) \cdot \sin(A) = -\frac{1}{4} $$



However, I am unsure about how to find $\sin(A)$ and $\cos(A)$ individually after this.



Edit: I solved the problem through using the Fundamental Trignometric Identity with the difference of $\cos^2(A)$ and $\sin^2(A)$.


Answer



Hint




$$\left( \cos(A)+ \sin(A) \right)^2 = 1+2 \sin(A) \cos(A)=\frac{1}{2} \\
\left( \cos(A)- \sin(A) \right)^2 = 1-2 \sin(A) \cos(A)=\frac{3}{2} $$



Take the square roots, and pay attention to the quadrant and to the fact that $\cos^4(A) >\sin^4(A)$ to decide whether the terms are positive or negative.



Alternate simpler solution
$$2 \cos^2(A)= \left( \cos^2(A)+\sin^2(A)\right)+\left( \cos^2(A)-\sin^2(A)\right)=1+\frac{1}{2} \\
2 \sin^2(A)= \left( \cos^2(A)+\sin^2(A)\right)-\left( \cos^2(A)-\sin^2(A)\right)=1-\frac{1}{2} \\$$


Possible fake proof of $1= -1$







Well, I remembered this after taking Algebra II a year ago. Is it possible that this is a valid proof that $1 = -1$?


$$ 1 = \sqrt{1} = \sqrt{-1\cdot-1} = \sqrt{-1} \cdot \sqrt{-1} = i \cdot i = i^2 = -1 $$


$$ \therefore 1 = -1 $$


So is this actually fully valid? Or can it be disproved?


Answer



I think the problem is in the step from $\sqrt{ -1 \cdot -1 }$ to $\sqrt{-1} \cdot \sqrt{-1}$: the rule $\sqrt{ab}=\sqrt{a}\,\sqrt{b}$ is valid only for $a, b \ge 0$.


Sunday, June 24, 2018

Why can't erf be expressed in terms of elementary functions?



I have seen this claim on Wikipedia and other places. Which branch of mathematics does this result come from?


Answer




Differential Galois theory.


limits - What is the real answer for this geometric series?




I recently read a book about data structures; in that book I found this equation under the topic "the average number of probes for an unsuccessful search":




[image of the book's formula; not preserved]




but when I was trying to prove this I found a different expression. Here is how I tried:




[images of the derivation steps; not preserved]



So my proof led me to this equation



[image of the resulting equation; not preserved]



So what did I do wrong? I can't find any error above.




Or Am I right?


Answer



Your answer is right. $\lambda = 0$ should make both the right-hand side and the left-hand side zero, which is clearly not the case in the formula given in the text.



Another way to obtain the same answer is as follows. We have
\begin{align}
\sum_{k=0}^{\infty} \lambda^k = \dfrac1{1-\lambda} \,\,\, \forall \left \vert \lambda \right \vert < 1
\end{align}
Differentiating the above we obtain
\begin{align}

\sum_{k=0}^{\infty} k\lambda^{k-1} = \dfrac1{(1-\lambda)^2} \,\,\, \forall \left \vert \lambda \right \vert < 1
\end{align}
Multiplying by $\lambda$, we obtain
\begin{align}
\sum_{k=0}^{\infty} k\lambda^{k} = \dfrac{\lambda}{(1-\lambda)^2} \,\,\, \forall \left \vert \lambda \right \vert < 1
\end{align}
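A numeric check of the final formula, including the $\lambda=0$ sanity case mentioned above (my own verification):

```python
lam = 0.3
# partial sum of k * lam^k; the tail beyond k = 500 is astronomically small
series = sum(k * lam**k for k in range(0, 500))
assert abs(series - lam / (1 - lam) ** 2) < 1e-12

# at lambda = 0 both sides vanish, as noted in the answer
assert 0 / (1 - 0) ** 2 == 0
```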


calculus - Absolute convergence of the series $sin(a_n)$ implies...?

Can someone please give me a counterexample for the following statement?


If $\sum_1^\infty \sin(a_n)$ converges absolutely, then $\sum_1^\infty a_n$ also converges absolutely.


I tried working with $a_n = \frac{ (-1)^n } {n} $ and $a_n = \frac{(-1)^n \ln n}{n}$, but without any success...


Got any idea?


Thanks a lot !

Calculate this limit without L'Hospital's rule or series expansion

Calculating the limit as $x$ approaches $0$ using L'Hospital's rule or series expansion is straightforward, but how can we evaluate the limit without either of those techniques?



How to calculate the following limit as $x$ approaches $0$:




$\dfrac{\ln(x+1)+1-e^x}{x^2}$
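No answer was recorded for this one. As a plain numerical sanity check (mine, not part of the question), the quotient appears to approach $-1$ as $x\to 0$:

```python
import math

def q(x):
    return (math.log(x + 1) + 1 - math.exp(x)) / x**2

# smaller x gives values ever closer to -1
for x in (0.01, 0.001):
    assert abs(q(x) + 1) < 0.01
```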

general topology - Understanding Fixed Points on $[0,1)$ and $(0,1)$


Munkres asks the following question:



Let $f:[0,1] \rightarrow [0,1]$ be continuous. Show that there exists a point $x$ with $f(x) = x$ (called a fixed point). What if the domain and range are $[0,1)$ or $(0,1)$?



The argument for the first part is to define $g(x) = f(x)-x$, argue that $g$ is continuous, and use the intermediate value theorem. A counterexample for the second part is $f(x) = 1/2 + x/2$.


My question: I'm not understanding what fails in the second part of the question. The intermediate value theorem still holds because $[0,1)$ and $(0,1)$ still carry the order topology, so I should be able to produce a fixed point. So why does a counterexample exist?


EDIT: The Theorem Reads as Follows:




Let $f:X \rightarrow Y$ be a continuous map, where $X$ is a connected space and $Y$ is an ordered set in the order topology. If $a$ and $b$ are two points in $X$ and $r$ is a point in $Y$ lying in between $f(a)$ and $f(b)$ then there exists a point $c \in X$ such that $f(c) = r$.



Answer



Tsemo's answer is correct, but intuitively, it follows because when your interval is missing an endpoint, you can 'hide' the fixed point in the 'hole' formed by the missing endpoint. For example, $f(x) = x^2$, viewed as a map from $(0,1)$ to $(0,1)$, is fixed point free, though of course this is just because its fixed points are $0$ and $1$, which are excluded from the domain.


Saturday, June 23, 2018

calculus - Evaluate $lim_{xrightarrow 0} frac{sin x}{x + tan x} $ without L'Hopital




I need help finding the the following limit:



$$\lim_{x\rightarrow 0} \frac{\sin x}{x + \tan x} $$



I tried to simplify to:



$$ \lim_{x\rightarrow 0} \frac{\sin x \cos x}{x\cos x+\sin x} $$



but I don't know where to go from there. I think, at some point, you have to use the fact that $\lim_{x\rightarrow 0} \frac{\sin x}{x} = 1$. Any help would be appreciated.




Thanks!


Answer



$$
\frac{\sin x}{x + \tan x} = \frac{1}{\frac{x}{\sin x}+\frac{\tan x}{\sin x}} \to 1/2
$$
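The value $1/2$ checks out numerically (my own sanity check, not part of the answer):

```python
import math

# sin x / (x + tan x) should approach 1/2 as x -> 0
for x in (0.1, 0.01, 0.001):
    val = math.sin(x) / (x + math.tan(x))
    assert abs(val - 0.5) < 0.01
```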


calculus - Prove that $int_0^inftyfrac1{t^x+1}dt = fracpi x csc fracpi x$

I'm stuck on this identity:


$$ \int_0^\infty\frac1{t^x+1}dt = \frac\pi x \csc \frac\pi x $$


Could someone show me a proof for this?


What I've tried:


I've thrown a bunch of substitutions and integration by parts at this, but they haven't led me to the answer. I did, however, find these identites: $$ \int_0^\infty\frac1{t^x+1}dt = \int_0^\infty\frac{t^{x-2}}{t^x+1}dt = \int_0^1\left( \frac{1-t}t \right)^{1/x}dt $$ But none of these seem to lead anywhere helpful.


I also tried introducing another variable to turn it into something similar to the Laplace Transform, but I'm not very familiar with methods like that, so they've led nowhere.

elementary number theory - Solving linear congruences $12x equiv 1pmod {77}$



I am having some repeated trouble getting the correct answer on linear congruences. Consider the following



$$12x \equiv 1 \pmod {77} $$




$12$ and $77$ are relatively prime so this congruence has a solution. We search for a linear combination of $12$ and $77$ using the extended Euclidean algorithm.



$$77=12(6)+5\\
12=5(2)+2\\
5=2(2)+1$$



We now solve for the remainders



$$1=5-2(2)\\

2=12-5(2)\\
5=77-12(6)$$



Back substituting we find



$1=77(5)+12(-32)$



The solution to this congruence is $45$, and $-32 \equiv 45 \pmod{77}$. What am I failing to do properly to get the first positive solution directly?


Answer



Your answer is completely correct. More generally, the solution is $45+77k: k\in \mathbb{Z}$ as stated above, because

$$ 45+77k\equiv 45\mod 77.$$
If you're concerned about getting the negative answer first, it is simple to just add $77$ as you did to find the first positive value.
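The back-substitution can be automated with the extended Euclidean algorithm; a small sketch (the function name is mine):

```python
def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(12, 77)
assert g == 1 and 12 * s + 77 * t == 1
assert s == -32           # the coefficient found by hand above
assert s % 77 == 45       # reduced to the least positive residue
assert (12 * 45) % 77 == 1
```

Reducing the coefficient modulo $77$ at the end is exactly the "add $77$" step from the answer.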


real analysis - Is it true that $0.999999999dots=1$?



I'm told by smart people that $$0.999999999\dots=1$$ and I believe them, but is there a proof that explains why this is?


Answer



What does it mean when you refer to $.99999\ldots$? Symbols don't mean anything in particular until you've defined what you mean by them.


In this case the definition is that you are taking the limit of $.9$, $.99$, $.999$, $.9999$, etc. What does it mean to say that limit is $1$? Well, it means that no matter how small a number $x$ you pick, I can show you a point in that sequence such that all further numbers in the sequence are within distance $x$ of $1$. But certainly whatever number you choose your number is bigger than $10^{-k}$ for some $k$. So I can just pick my point to be the $k$th spot in the sequence.


A more intuitive way of explaining the above argument is that the reason $.99999\ldots = 1$ is that their difference is zero. So let's subtract $1.0000\ldots -.99999\ldots = .00000\ldots = 0$. That is,


$1.0 -.9 = .1$


$1.00-.99 = .01$


$1.000-.999=.001$,


$\ldots$


$1.000\ldots -.99999\ldots = .000\ldots = 0$
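The subtraction pattern above can be made exact with rational arithmetic: $1$ minus a string of $n$ nines is $10^{-n}$, which shrinks below any positive bound (an illustration of my own):

```python
from fractions import Fraction

for n in range(1, 30):
    nines = Fraction(10**n - 1, 10**n)   # 0.99...9 with n nines
    assert 1 - nines == Fraction(1, 10**n)

# the difference drops below any fixed positive threshold eventually
assert 1 - Fraction(10**50 - 1, 10**50) < Fraction(1, 10**49)
```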



Friday, June 22, 2018

linear algebra - Finding characteristic matrix of a triangle matrix



I came to this stage when I was reading a Linear algebra text :




Suppose $M$ is a block triangular matrix, say $M= \begin{pmatrix} A_1 & B \\ 0 & A_2\end{pmatrix}$ where $A_1$ and $A_2$ are square matrices. Then the characteristic matrix of $M$,
$$\begin{pmatrix} tI-A_1 & -B \\ 0 & tI-A_2\end{pmatrix}$$is also a block triangular matrix with diagonal blocks $tI-A_1$ and $tI-A_2$. Thus by Theorem 7.12,
$$|tI-M|=\left|\begin{matrix} tI-A_1 & -B \\ 0 & tI-A_2\end{matrix}\right|=|tI-A_1||tI-A_2|.$$That is the characteristic polynomial of $M$ is the product of the characteristic polynomials of the diagonal blocks $A_1$ and $A_2$.





$tI - M$ gives a matrix with components :




  • $tI - A_1$ (makes sense)

  • $tI - A_2$ (even that I got)



But why is the top right component $-B$? Why not $tI-B$? What am I missing?



Answer



Note that $$ M = \begin{pmatrix} A_1 & B \\ 0 & A_2 \end{pmatrix} $$
writing the identity with corresponding blocks: Suppose $A_1$ is an $n_1 \times n_1$ matrix and $A_2$ is an $n_2 \times n_2$ matrix. Then in $tI - M$ the identity is an $(n_1+n_2)\times (n_1 + n_2)$ matrix. We have
$$ I_{n_1+n_2} = \begin{pmatrix} I_{n_1} & 0 \\ 0 & I_{n_2} \end{pmatrix} $$
as the identity has nonzero entries only on the diagonal, and the off-diagonal blocks contain no diagonal entries. Hence
$$ tI - M = t\begin{pmatrix} I_{n_1} & 0 \\ 0 & I_{n_2} \end{pmatrix}
- \begin{pmatrix} A_1 & B \\ 0 & A_2 \end{pmatrix} = \begin{pmatrix} tI - A_1 & -B \\ 0 & tI - A_2 \end{pmatrix}. $$
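The determinant factorization can be checked on a concrete block triangular matrix with a small Laplace-expansion determinant (the numbers and helper names below are my own example):

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def char_small(block, t):
    # det(tI - block)
    n = len(block)
    return det([[(t if i == j else 0) - block[i][j] for j in range(n)]
                for i in range(n)])

def char_block(a1, a2, b, t):
    # det(tI - M) for M = [[A1, B], [0, A2]]
    n = len(a1)
    M = [ra + rb for ra, rb in zip(a1, b)] + [[0] * n + ra for ra in a2]
    return det([[(t if i == j else 0) - M[i][j] for j in range(2 * n)]
                for i in range(2 * n)])

A1, A2, B = [[1, 2], [3, 4]], [[9, 10], [11, 12]], [[5, 6], [7, 8]]
for t in (Fraction(0), Fraction(1), Fraction(7, 2)):
    assert char_block(A1, A2, B, t) == char_small(A1, t) * char_small(A2, t)
```

Since two polynomials agreeing at enough points are equal, spot-checking several values of $t$ is good evidence for the polynomial identity.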


limits - Prove that $lim limits_{n to infty} frac{x^n}{n!} = 0$, $x in Bbb R$.

Why is





$$\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}$$




Can we generalize it to any exponent $x \in \Bbb R$? This is to say, is




$$\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}$$









calculus - How to prove that $sum_{1}^{infty} frac{1}{n^3} le 1.5$

I have this series:
$$\sum_{1}^{\infty} \frac{1}{n^3}$$



and I need to prove that: $\sum_{1}^{\infty} \frac{1}{n^3} \le 1.5$



So basically I know that this series converges by the integral test, but I don't know how to prove the above statement.




Some help?
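The same comparison that powers the integral test gives the bound directly: for $n\ge 2$, $\frac{1}{n^3}\le\int_{n-1}^{n}\frac{dx}{x^3}$, so the sum is at most $1+\int_1^\infty \frac{dx}{x^3}=1+\frac12=1.5$. A numeric illustration (mine, stated as a hedge rather than the intended proof):

```python
# the partial sums increase and stay below 1 + 1/2 = 1.5
s = 0.0
for n in range(1, 200_001):
    s += 1 / n**3
    assert s < 1.5
```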

probability - Intuitive explanations for Gaussian distribution function and mahalanobis distance




I was wondering If anyone could give intuitive explanations for the multivariate Gaussian distribution function and mahalanobis distance? My professor didn't explain these in probability class, they were only defined...



Where did the formula come from? Why is the Gaussian function the way it is? Is there a way to intuitively explain mahalanobis distance?



Thank you for any support!!


Answer



For the Mahalanobis distance, have a look here and here


Thursday, June 21, 2018

real analysis - Is this:$sum_{n=1}^{infty}{(-1)}^{frac{n(n-1)}{2}}frac{1}{n}$ a convergent series?



Is there someone who can show me how to evaluate this sum: $$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$$



Note: Wolfram Alpha shows this result, but at the same time the ratio test is inconclusive here, so it is not a convincing way to judge whether the series converges.



Thank you for any help


Answer




The parity of $\frac{n(n-1)}{2}$ is 4-periodic. Thus the sequence $(-1)^{\frac{n(n-1)}{2}}$ equals:
$$ 1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1 , \cdots$$
The original series' partial sum truncated at $N$ equals
$$ \sum_{k=0}^{K} \left( \frac{1}{4k+1} - \frac{1}{4k+2} - \frac{1}{4k+3} + \frac{1}{4k+4}\right) + \sum_{i=1}^{N - 4K - 4}\frac{(-1)^{\frac{i(i-1)}{2}}}{4K + 4 + i}$$
where $K = \lfloor \frac{N}{4}\rfloor - 1$.



Then, by a discussion of the partial sums, we can conclude that the series is convergent.
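A numeric look at the sign pattern and the partial sums (my own check) supports this:

```python
def sign(n):
    return -1 if (n * (n - 1) // 2) % 2 else 1

# the 4-periodic pattern +, -, -, +, +, -, -, +, ...
assert [sign(n) for n in range(1, 9)] == [1, -1, -1, 1, 1, -1, -1, 1]

def partial(N):
    return sum(sign(n) / n for n in range(1, N + 1))

# partial sums stabilize, consistent with convergence
assert abs(partial(100_000) - partial(200_000)) < 1e-3
```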


probability - Coupon Collector's Problem with X amount of coupons already collected.




I am having an issue with understanding how to calculate a specific case of the Coupon Collector's Problem. Say I have a set of 198 coupons. I learned how to find the expected number of draws to see all 198 coupons, using the following equation:



$$n \sum_{k=1}^n\frac1k$$



It turns out that for $n = 198$, the expected number of draws is approximately 1162. Let's assume, however, that I already have some of the coupons, say 50. How should I go about solving the same problem, given that I've already collected $X$ of them?


Answer



Based on the corresponding thread on Wikipedia. The expected time to draw all $n$ coupons equals:




$$E[T] = E[t_1] + E[t_2] + \ldots + E[t_n]$$



with $t_i$ the time needed to collect the $i^{th}$ coupon once $i-1$ coupons have been drawn. Once $i-1$ coupons have been drawn, there are $n-i+1$ unseen coupons left. The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:



$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$



In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:



$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$
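Putting the two formulas into code (the function name is mine):

```python
def expected_draws(n, already=0):
    # expected number of further draws to complete a set of n coupons,
    # given `already` distinct coupons already in hand
    return n * sum(1 / k for k in range(1, n - already + 1))

assert round(expected_draws(198)) == 1162        # the figure quoted in the question
assert expected_draws(198, 50) < expected_draws(198)
```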



Wednesday, June 20, 2018

calculus - Solving $lim_{xto 0} frac{e^x-1-x}{x^2}$ without L'Hôpital's Rule.

I want to solve this limit without the use of L'Hôpital's rule:



$$\lim_{x\to 0} \frac{e^x-1-x}{x^2}.$$
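For reference, the value the limit should come out to can be sanity-checked numerically; small $x$ gives values near $1/2$ (a check of my own, not part of the question):

```python
import math

def q(x):
    return (math.exp(x) - 1 - x) / x**2

for x in (0.01, 0.001):
    assert abs(q(x) - 0.5) < 0.01
```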

Differentiability implies continuous derivative?

We know differentiability implies continuity, and in the case of two independent variables, continuity of both partial derivatives $f_x$ and $f_y$ is a sufficient condition for the function $f(x,y)$ to be differentiable.



However, in the case of one independent variable, is it possible for a function $f(x)$ to be differentiable throughout an interval $R$ while its derivative $f'(x)$ is not continuous?

Tuesday, June 19, 2018

calculus - Evaluate the $lim_{x to -infty} (x + sqrt{x^2 + 2x})$


Evaluate : $$\lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x})$$


I've tried some basic algebraic manipulation to get it into a form where I can apply L'Hopital's Rule, but it's still going to be in indeterminate form.



This is what I've done so far


\begin{align} \lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x}) &= \lim_{x \to \ -\infty} (x + \sqrt{x^2 + 2x})\left(\frac{x-\sqrt{x^2 + 2x}}{x-\sqrt{x^2 + 2x}}\right)\\ \\ &= \lim_{x \to \ -\infty} \left(\frac{x^2 - (x^2 + 2x)}{x-\sqrt{x^2 + 2x}}\right)\\ \\ &= \lim_{x \to \ -\infty} \left(\frac{-2x}{x-\sqrt{x^2 + 2x}}\right)\\ \\ \end{align}


And that's as far as I've gotten. I've tried applying L'Hopitals Rule, but it still results in an indeterminate form.


Plugging it into WolframAlpha shows that the correct answer is $-1$


Any suggestions on what to do next?


Answer



$$\lim _{ x\to -\infty } \left( \frac { -2x }{ x-\sqrt { x^{ 2 }+2x } } \right) =\lim _{ x\rightarrow -\infty }{ \left( \frac { -2x }{ x-\sqrt { { x }^{ 2 }\left( 1+\frac { 2 }{ x } \right) } } \right) =\lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x-\left| x \right| \sqrt { 1+\frac { 2 }{ x } } } = } } \\ =\lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x+x\sqrt { 1+\frac { 2 }{ x } } } = } \lim _{ x\rightarrow -\infty }{ \frac { -2x }{ x\left( 1+\sqrt { 1+\frac { 2 }{ x } } \right) } = } -1$$
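A quick numeric confirmation of the $-1$ (my own check):

```python
import math

def g(x):
    return x + math.sqrt(x * x + 2 * x)

# large negative x should give values near -1
for x in (-1e3, -1e6):
    assert abs(g(x) + 1) < 1e-2
```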


complex numbers - Proof of Euler's formula that doesn't use differentiation?

So I saw a 'proof' of the sine and cosine angle addition formulae,
i.e. $\sin(x+y)=\sin x\cos y+\cos x \sin y$, using Euler's formula, $e^{ix}=\cos x+i\sin x$.
By multiplying by $e^{iy}$, you can get the desired result.




However, this 'proof' appears to be circular reasoning, as all proofs I have seen of Euler's formula involve finding the derivative of the sine and cosine functions. But to find the derivative of sine and cosine from first principles requires the use of the sine and cosine angle addition formulae.



So is there any proof of Euler's formula that doesn't involve finding the derivative of sine or cosine?
I know you can prove the trigonometric formulas geometrically, but it is more laborious to do.

calculus - Why doesn't using the approximation $sin xapprox x$ near $0$ work for computing this limit?



The limit is
$$\lim_{x\to0}\left(\frac{1}{\sin^2x}-\frac{1}{x^2}\right)$$

which I'm aware can be rearranged to obtain the indeterminate $\dfrac{0}{0}$, but in an attempt to avoid L'Hopital's rule (just for fun) I tried using the fact that $\sin x\approx x$ near $x=0$. However, the actual limit is $\dfrac{1}{3}$, not $0$.



In this similar limit, the approximation reasoning works out.


Answer



If we take one more term in the Taylor expansion:



\begin{align}
\sin x&\approx x-\frac{x^3}6+\cdots\\
\sin^2 x&\approx x^2-2x\frac{x^3}6+\cdots\\
\frac 1{\sin^2 x}&\approx\frac 1{x^2-x^4/3}\\

&=\frac 1{x^2}\cdot\frac 1{1-x^2/3}\\
\lim_{x\to 0}\left[\frac 1{\sin^2 x}-\frac 1{x^2}\right]&=\lim_{x\to 0}\left[\frac 1{x^2}\left(\frac 1{1-x^2/3}-1\right)\right]\\
&=\lim_{x\to 0}\left[\frac 1{x^2}\cdot\frac{1-1+x^2/3}{1-x^2/3}\right]\\
&=\lim_{x\to 0}\frac 1{3-x^2}\\
&=\frac 1 3
\end{align}






To see where the first-order expansion went wrong, it's necessary to keep track of where the error term goes:




\begin{align}
\sin x&= x+\text{O}(x^3)\\
\sin^2 x&=x^2+2x\text{O}(x^3)+\text{O}(x^3)^2\\
&=x^2+\text{O}(x^4)+\text{O}(x^6)\\
&=x^2+\text{O}(x^4)\\
\frac 1{\sin^2 x}&=\frac 1{x^2+\text{O}(x^4)}\\
&=\frac 1{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
\frac 1{\sin^2 x}-\frac 1{x^2}&=\frac 1{x^2}\left[\frac 1{1+\text{O}(x^2)}-1\right]\\
&=\frac 1{x^2}\cdot\frac{1-1+\text{O}(x^2)}{1+\text{O}(x^2)}\\

&=\frac{\text{O}(x^2)}{x^2}\cdot\frac 1{1+\text{O}(x^2)}\\
&=\text{O}(1)
\end{align}



Thus the $\sin x\approx x$ approximation is not accurate enough to estimate even the constant term of the expression in the limit. (Note that it does allow us to say that there are no $\text{O}(x^{-1})$ or bigger terms, so the limit probably won't diverge.)
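Numerically the limit $1/3$ is visible, while the naive $\sin x\approx x$ replacement would have predicted $0$ (my own check):

```python
import math

def h(x):
    return 1 / math.sin(x) ** 2 - 1 / x**2

# values approach 1/3, not 0
for x in (0.01, 0.001):
    assert abs(h(x) - 1 / 3) < 1e-4
```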


real analysis - Does $\int_0^{\infty}\frac{\sin x}{x}dx$ have an improper Riemann integral or a Lebesgue integral?




In this wikipedia article for improper integrals,
$$
\int_0^{\infty}\frac{\sin x}{x}dx
$$
is given as an example for the integrals that have an improper Riemann integral but do not have a (proper) Lebesgue integral. Here are my questions:




  • Why does this one have an improper Riemann integral? (I don't see why $\int_0^a\frac{\sin x}{x}dx$ and $\int_a^{\infty}\frac{\sin x}{x}dx$ converge.)

  • Why doesn't this integral have a Lebesgue integral? Is it because that $\frac{\sin x}{x}$ is unbounded on $(0,\infty)$ and Lebesgue integral doesn't deal with unbounded functions?



Answer



$\displaystyle \int_0^a\frac{\sin x}xdx$ converges since we can extend the function $x\mapsto \frac{\sin x}x$ by continuity at $0$ (we give the value $1$ at $0$). To see that the second integral converges, integrate by parts $\displaystyle\int_a^A\frac{\sin x}x dx$. Indeed, we get
$$\int_a^A\frac{\sin x}xdx =\left[-\frac{\cos x}x\right]_a^A+\int_a^A-\frac{\cos x}{x^2}dx = \frac{\cos a}a-\frac{\cos A}A-\int_a^A\frac{\cos x}{x^2}dx,$$
and $\displaystyle\lim_{A\to +\infty}\frac{\cos A}A=0$, and the fact that $\displaystyle\int_a^{+\infty}\frac{dx}{x^2}$ is convergent gives us the convergence of $\displaystyle\int_a^{+\infty}\frac{\sin x}xdx$.
But $f(x):=\frac{\sin x}x$ does not have a Lebesgue integral, since the integral $\displaystyle\int_0^{\infty}\left|\frac{\sin x}x\right| dx$ is not convergent. (This is not a consequence of $f$ being unbounded: first, $f$ is in fact bounded, and more generally the unbounded function $g(x)=\frac 1{\sqrt x}$ for $0<x\leq 1$ and $g(x)=0$ for $x>1$ is Lebesgue integrable.) To see that the integral is not convergent, note that for $N\in\mathbb N$
\begin{align*}
\int_{\pi}^{(N+1)\pi}\left|\frac{\sin x}x\right|dx&=\sum_{k=1}^N\int_{k\pi}^{(k+1)\pi}\left|\frac{\sin x}x\right|dx\\
&=\sum_{k=1}^N\int_0^{\pi}\frac{|\sin(t+k\pi)|}{t+k\pi}dt\\
&=\sum_{k=1}^N\int_0^{\pi}\frac{|\sin t|}{t+k\pi}dt\\
&\geq \sum_{k=1}^N\frac 1{(k+1)\pi}\int_0^{\pi}\sin t\,dt\\
&=\frac 2{\pi}\sum_{k=1}^N\frac 1{k+1},
\end{align*}

and we can conclude since the harmonic series is not convergent.
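Both claims can be checked numerically. Here is a small Python sketch; `simpson` is an ad-hoc composite Simpson rule written for this check, not anything from the answer:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

g = lambda x: math.sin(x) / x

# the improper Riemann integral settles near pi/2 ...
tail = simpson(g, 1e-9, 200 * math.pi, m=200000)

# ... while the integral of |sin x / x| keeps growing, bounded below by
# the harmonic-type sum from the answer
abs_int = simpson(lambda x: abs(g(x)), math.pi, 101 * math.pi, m=100000)
lower_bound = (2 / math.pi) * sum(1 / (k + 1) for k in range(1, 101))
print(tail, abs_int, lower_bound)
```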


trigonometry - Induction proof of the identity $\cos x+\cos(2x)+\cdots+\cos(nx) = \frac{\sin(\frac{nx}{2})\cos\frac{(n+1)x}{2}}{\sin(\frac{x}{2})}$





Prove that:$$\cos x+\cos(2x)+\cdots+\cos (nx)=\frac{\sin(\frac{nx}{2})\cos\frac{(n+1)x}{2}}{\sin(\frac{x}{2})}.\ (1)$$




My attempt:$$\sin\left(\frac{x}{2}\right)\sum_{k=1}^{n}\cos{(kx)}$$$$=\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos{(kx)} $$ (Applying $\sin x\cos y =\frac{1}{2}\left[\sin(x+y)+\sin(x-y)\right]$)
$$=\sum_{k=1}^{n}\frac{1}{2}\sin\left(\frac{x}{2}-kx\right)+\frac{1}{2}\sin\left(\frac{x}{2}+kx\right).$$Multiplying $(1)$ by $\sin\left(\frac{x}{2}\right)$, gives: $$\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos(kx)=\sin\left(\frac{nx}{2}\right)\cos{\frac{(n+1)x}{2}}$$ $$\Leftrightarrow\sum_{k=1}^{n}\left[\sin\left(\frac{x}{2}-kx\right)+\sin\left(\frac{x}{2}+kx\right)\right]=\sin\left(-\frac{x}{2}\right)+\sin\left(\frac{x}{2}+nx\right).$$Then, by induction on $n$ and using $(\sin x\sin y)$ and $(\sin x +\sin y)$ formulas, I end up here: $$\sum_{k=1}^{n+1}\sin\left(\frac{x}{2}\right)\cos(kx)=\left[2\sin\left(\frac{nx}{2}\right)\cos\frac{(n+1)x}{2}\right]+\left[2\sin\left(\frac{x}{2}\right)\cos(n+1)x\right].$$ Any help would be appreciated, thanks!


Answer



Here is the induction step: it comes down to proving
\begin{gather*}\frac{\sin \dfrac{nx}{2} \cos\dfrac{(n+1)x}{2}}{\sin\dfrac{x}{2}}+\cos(n+1)x=\frac{\sin\dfrac{(n+1)x}{2}\cos\dfrac{(n+2)x}{2}}{\sin(\dfrac{x}{2})}\\

\text{or}\qquad\sin\dfrac{nx}{2}\cos\dfrac{(n+1)x}{2}+\sin\dfrac{x}{2}\cos(n+1)x=\sin\dfrac{(n+1)x}{2} \cos\dfrac{(n+2)x}{2}
\end{gather*}
Now use the linearisation formulae:
\begin{cases}\displaystyle
\sin\frac{nx}{2}\cos\frac{(n+1)x}{2}=\tfrac12\biggl(\sin\frac{(2n+1)x}{2}-\sin\frac x2\biggr),\\[1ex]
\sin\dfrac{x}{2}\cos(n+1)x=\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\dfrac{(2n+1)x}{2}\biggr),
\end{cases}
whence the l.h.s. is
$$\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\frac x2\biggr)=\sin\frac{(n+1)x}2\,\cos\frac{(n+2)x}2$$
by the factorisation formula: $\;\sin p-\sin q=2\sin\dfrac{p-q}2\cos\dfrac{p+q}2$.
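Before (or after) the induction, the identity is easy to spot-check numerically; a small Python sketch:

```python
import math

def lhs(x, n):
    # cos x + cos 2x + ... + cos nx
    return sum(math.cos(k * x) for k in range(1, n + 1))

def rhs(x, n):
    # closed form sin(nx/2) cos((n+1)x/2) / sin(x/2)
    return math.sin(n * x / 2) * math.cos((n + 1) * x / 2) / math.sin(x / 2)

worst = max(abs(lhs(x, n) - rhs(x, n))
            for n in (1, 2, 5, 12)
            for x in (0.3, 1.1, 2.7))
print("max deviation:", worst)
```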



combinatorics - Combinatorial Proof vs. Algebraic Proof



I'm learning combinatorics and need a little help differentiating between a combinatorial proof and an algebraic proof. Here is an example I came across:




Prove the following two formulas by combinatorial means and algebraic manipulation:



(i) For all $k$, $n \in \mathbb{N}$ with $k \leq n$.



${n \choose 2} + {n+1 \choose 2} = n^{2}$.



(ii) For all $k$, $n \in \mathbb{N}$ with $k \leq n$.



$k{n \choose k} = n{n-1 \choose k-1}$.



Answer



Algebraic manipulations consist of calculation using explicit formulas. In your two examples, we have:
$$
\binom{n}{2} + \binom{n+1}{2} = \frac{n(n-1) + (n+1)n}{2} = n^2,
$$

and
$$
k\binom{n}{k} = k\frac{n!}{k!(n-k)!} = \frac{n!}{(k-1)!(n-k)!} = n\frac{(n-1)!}{(k-1)!(n-k)!} = n\binom{n-1}{k-1}.
$$




Combinatorial proofs represent both sides of the equation as counting certain objects, and then exhibit a bijection between the two sides. In your two examples, we have:




  1. The left-hand side counts the size of two sets of ordered pairs: pairs of integers in $[n]$ (type A), and pairs of integers in $[n+1]$ (type B). The right-hand side counts the size of the set $[n]^2$. We exhibit a bijection between them in the following way:




    • A pair $i<j$ of type A is mapped to $(i,j)$.

    • A pair $i<j$ of type B with $j \neq n+1$ is mapped to $(j,i)$.

    • A pair $i<n+1$ of type B is mapped to $(i,i)$.





We can check that this is a bijection by giving the inverse mapping:




  • A pair $(i,j)$ with $i < j$ is mapped to the pair $i<j$ of type A.

  • A pair $(i,j)$ with $i > j$ is mapped to the pair $j<i$ of type B.

  • A pair $(i,i)$ is mapped to the pair $i<n+1$ of type B.




The second example is of a somewhat different kind.




  1. The left-hand side counts the number of $k$-element subsets of $[n]$ with a distinguished element. The right-hand side offers another way of counting the same set: first choose the distinguished element, and then choose the remaining $k-1$ elements out of the remaining $n-1$ elements. This is an example of double counting.



Another kind of proof which sometimes appears is probabilistic proofs, which are really a variant of combinatorial proofs. As an example,
$$ \sum_{k=0}^n \binom{n}{k} p^k (1-p)^{n-k} = 1 $$
expresses the fact that if we toss $n$ many $p$-biased coins, then $k$ of them will turn up heads for exactly one value of $k$. (While this works only for $0 \leq p \leq 1$, the general result follows since two degree $n$ polynomials are equal whenever they agree on more than $n$ points.)
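Both identities from the question are also easy to spot-check with Python's built-in `math.comb` (a numerical sanity check, of course, not a proof of either kind):

```python
from math import comb

# identity (i): C(n,2) + C(n+1,2) = n^2
for n in range(1, 50):
    assert comb(n, 2) + comb(n + 1, 2) == n * n

# identity (ii): k*C(n,k) = n*C(n-1,k-1)
for n in range(1, 30):
    for k in range(1, n + 1):
        assert k * comb(n, k) == n * comb(n - 1, k - 1)

print("both identities hold on the tested range")
```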


Monday, June 18, 2018

number theory - Computing Hermite Normal Form using Extended Euclidean Algorithm (modulo $D$)

I am attempting to compute the Hermite Normal Form of a matrix $A$, again, only this time using the Extended Euclidean Algorithm modulo the determinant of the matrix. This was a method contributed by Hafner and McCurley [HM91], a variation of which is provided below (ref. [Coh93]).



$
b \leftarrow \det(A)\\
\text{for } i \leftarrow n \text{ to } 1 \text{ do}\\
\quad \text{compute } (d, u, v) \text{ such that } ua_{i,i} + vb = d = \gcd(a_{i,i},b)\\
\quad H_{i} \leftarrow (uA_{i} \mod |b|)\\

\quad \text{if } d = |b| \text{ then}\\
\quad\quad h_{i,i} \leftarrow d\\
\quad \text{end if}\\
\quad b \leftarrow b / d\\
\text{end for}\\
$



$
\text{for } i \leftarrow (n - 1) \text{ to } 1 \text{ do}\\
\quad\text{for } j \leftarrow (i + 1) \text{ to } n \text{ do}\\

\quad\quad q \leftarrow \lfloor h_{i,j} / h_{i,i} \rfloor\\
\quad\quad H_{j} \leftarrow H_{j} - q \times H_{i}\\
\quad\text{end for}\\
\text{end for}
$
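For reference, the two loops above can be transcribed literally into Python (a sketch using 0-based indexing; `hnf_mod_det` and `xgcd` are names I've chosen, and the determinant is passed in rather than computed). It reproduces the hand computation on the $2 \times 2$ example below, but it is only a transcription and does not resolve the $4 \times 4$ discrepancy discussed afterwards:

```python
def xgcd(a, b):
    # extended Euclid: returns (d, u, v) with u*a + v*b = d = gcd(a, b) >= 0
    old_r, r, old_u, u, old_v, v = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
        old_v, v = v, old_v - q * v
    if old_r < 0:
        old_r, old_u, old_v = -old_r, -old_u, -old_v
    return old_r, old_u, old_v

def hnf_mod_det(A, det):
    # literal transcription of the pseudocode; A[r][c] is row r, column c,
    # and "column i" means the vector [A[r][i] for all rows r]
    n = len(A)
    H = [[0] * n for _ in range(n)]
    b = det
    for i in range(n - 1, -1, -1):          # for i <- n down to 1
        d, u, v = xgcd(A[i][i], b)
        for r in range(n):
            H[r][i] = (u * A[r][i]) % abs(b)
        if d == abs(b):
            H[i][i] = d
        b //= d
    for i in range(n - 2, -1, -1):          # reduce entries right of the diagonal
        for j in range(i + 1, n):
            q = H[i][j] // H[i][i]
            for r in range(n):
                H[r][j] -= q * H[r][i]
    return H

print(hnf_mod_det([[4, 2], [3, 1]], -2))
```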



Trying this for the following simple $2 \times 2$ square matrix,



$
\left(

\begin{array}{rr}
4 & 2 \\
3 & 1 \\
\end{array}
\right)
$



the output that I expect is,



$

\left(
\begin{array}{rr}
2 & 0 \\
0 & 1 \\
\end{array}
\right).
$



On the first iteration of the algorithm, we have $b = \det(A) = -2$, and




$
\begin{array}{rcl}
d, u, v & = & \text{XGCD}(a_{i,i}, b) \\
& = & \text{XGCD}(1,-2) \\
& = & 1, 1, 0 \\
\end{array}
$



The last column of the HNF matrix, $H$, is therefore calculated thus,




$
\begin{array}{rcl}
H_{i} & = & uA_{i} \mod |b| \\
& = &
1 \times
\left(
\begin{array}{c}
2 \\
1 \\
\end{array}

\right) \mod 2 \\
& = &
\left(
\begin{array}{c}
0 \\
1 \\
\end{array}
\right) \\
\end{array}
$




On the second iteration of the algorithm, we have $b = b / d = -2 / 1 = -2$, and



$
\begin{array}{rcl}
d, u, v & = & \text{XGCD}(a_{i,i}, b) \\
& = & \text{XGCD}(4,-2) \\
& = & 2, 0, -1 \\
\end{array}
$




The first column of the HNF matrix, $H$, is therefore calculated thus,



$
\begin{array}{rcl}
H_{i} & = & uA_{i} \mod |b| \\
& = &
0 \times
\left(
\begin{array}{c}

4 \\
3 \\
\end{array}
\right) \mod 2 \\
& = &
\left(
\begin{array}{c}
0 \\
0 \\
\end{array}

\right) \\
\end{array}
$



But, because $d = |b|$, $h_{1,1}$ is assigned the value of $2$, therefore producing the correct output,



$
\begin{array}{rcl}
H_{1} & = &
\left(

\begin{array}{c}
2 \\
0 \\
\end{array}
\right) \\
\end{array}
$



When I apply the algorithm to larger dimensions, though, I do not get the result that I expect. For example, providing the following matrices as input,




$
A =
\left(
\begin{array}{rrrr}
10 & 0 & 1 & 0 \\
-2 & -2 & 2 & 1 \\
-2 & -1 & 3 & -1 \\
4 & -6 & -2 & 0 \\
\end{array}
\right), \text{ } B =

\left(
\begin{array}{rrrr}
-8 & 5 & -3 & 1 \\
0 & 0 & -3 & -4 \\
7 & -3 & -5 & 2 \\
-6 & -6 & 2 & 1 \\
\end{array}
\right)
$




results in the following matrices as output, respectively,



$
\text{HNF}(A) =
\left(
\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 396 \\

\end{array}
\right), \text{ } \text{HNF}(B) =
\left(
\begin{array}{rrrr}
1 & 0 & 2299 & 1 \\
0 & 2873 & 2299 & 2869 \\
0 & 0 & 1 & 2 \\
0 & 0 & 2298 & 1 \\
\end{array}
\right)

$



The expected results are



$
\text{HNF}(A) =
\left(
\begin{array}{rrrr}
33 & 12 & 12 & 11 \\
0 & 6 & 5 & 1 \\

0 & 0 & 1 & 0 \\
0 & 0 & 0 & 2 \\
\end{array}
\right), \text{ } \text{HNF}(B) =
\left(
\begin{array}{rrrr}
2873 & 1663 & 286 & 335 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\

\end{array}
\right)
$



respectively.



I am certain it is an implementation issue; however, the question I would have to ask is whether a coefficient with the value of zero on the diagonal will make any difference to the algorithm, especially since no condition exists to cater for it, unlike in the previous (related) question I asked concerning the computation of Hermite Normal Forms using the Extended Euclidean Algorithm.



Furthermore, how can the lower triangular portion of the matrix be reduced if there is no step like $A_{j} \leftarrow (a_{i,k}/d)A_{j} - (a_{i,j}/d)A_{k}$ to do this?




Any help/advice/feedback would be appreciated.



References:



[Coh93] H. Cohen. A Course in Computational Algebraic Number Theory, Vol. 138 of Graduate Texts in Mathematics. Springer, Heidelberg, 1993.



[HM91] J. L. Hafner and K. S. McCurley. Asymptotically Fast Triangularization of Matrices over Rings. SIAM Journal on Computing, 20:1068-1083, December 1991.

Finding the limit for fractions

Find the limit: $$\lim_{x \to -{6}} \frac{(1/6)+(1/x)}{6+x}$$

divisibility - Show that the number $n$ is divisible by $7$

How can I prove that $n = 8709120$ is divisible by $7$?
I have tried a couple of methods, but I can't show it. Can somebody please help me?

real analysis - Differentiable functions such that the derivative is nowhere continuous.

Is there a function $f:\mathbb R\to\mathbb R$ which is differentiable but such that the derivative is nowhere continuous?

calculus - Can this definite integral involving series be solved without a calculator?



I got this question today but I can't see if there is any good way to solve it by hand.




Evaluate the definite integral




$$\int_2^{12}\frac{\sqrt{x+\sqrt{x+\sqrt{x+...}}}}{\sqrt{x\sqrt{x\sqrt{x}...}}}\,\mathrm{d}x$$



where the series in the numerator and denominator continue infinitely.




If you let $y=\sqrt{x+\sqrt{x+\sqrt{x+...}}}=\sqrt{x+y}$, solving for $y$ we get $y=\frac{1\pm\sqrt{1+4x}}{2}$. And similarly for the denominator we have $z=\sqrt{xz}$. So $z=x$. So the integral simplifies to



$$\int_2^{12}\frac{1\pm\sqrt{1+4x}}{2x}\,\mathrm{d}x\,.$$




Now my problems are




  1. I don't know what to with the $\pm$.


  2. I tried to solve the integral by separating it as a sum of two fractions. But I can't solve $$\int_2^{12}\frac{\sqrt{1+4x}}{2x}\,\mathrm{d}x\,.$$



Answer



As you did, let $y=\sqrt{x+\sqrt{x+\sqrt{x+...}}}=\sqrt{x+y}$. Clearly $y\ge0$ and $y$ satisfies
$$ y^2-y-x=0 $$
from which you have

$$ y=\frac{1\pm\sqrt{4x+1}}{2}. $$
Since $x\in[2,12]$ and $y\ge0$, you must choose "$+$"; if you chose "$-$", then
$$ y=\frac{1-\sqrt{4x+1}}{2}<0. $$
Now, under $t=\sqrt{1+4x}$
\begin{eqnarray}
&&\int_2^{12}\frac{\sqrt{x+\sqrt{x+\sqrt{x+...}}}}{\sqrt{x\sqrt{x\sqrt{x}...}}}dx\\
&=&\int_2^{12}\frac{1+\sqrt{1+4x}}{2x}dx\\
&=&\int_2^{12}\frac{1}{2x}dx+\int_2^{12}\frac{\sqrt{1+4x}}{2x}dx\\
&=&\frac12\ln x\bigg|_2^{12}+\int_3^7\frac{t^2}{t^2-1}dt\\
&=&\frac12\ln6+\bigg(t+\frac12\ln\frac{t-1}{t+1}\bigg)\bigg|_3^7\\

&=&\frac12\ln6+4+\frac12\ln\frac32\\
&=&\ln3+4.
\end{eqnarray}
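The closed form can be sanity-checked numerically (a Python sketch; `simpson` is an ad-hoc composite Simpson rule written for this check):

```python
import math

def integrand(x):
    # closed form of the nested radicals: (1 + sqrt(1+4x)) / (2x)
    return (1 + math.sqrt(1 + 4 * x)) / (2 * x)

def simpson(f, a, b, m=20000):
    # ad-hoc composite Simpson rule
    h = (b - a) / m
    s = f(a) + f(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

value = simpson(integrand, 2, 12)
print(value, math.log(3) + 4)
```

The two printed values should agree to roughly eight decimal places.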


abstract algebra - $[L_1 L_2 : k] = [L_1 : k][L_2 : k]$ for two finite field extensions $L_1/k$ and $L_2/k$ with $L_1 \cap L_2 = k$




I want to prove or disprove the following:




Let $L_1$ and $L_2$ be two finite extensions of field $k$ inside of an
extension $L/k$. Moreover, $L_1 \cap L_2 = k$. Then the degree of the
composite field $L_1 L_2$ is $[L_1 L_2 : k] = [L_1 : k] [L_2 : k]$.




I want to solve this problem with basic field theory (I haven't studied Galois theory). Thanks in advance.







Below is what I've done so far.



Let $[L_1 : k] = m$, $[L_2 : k] = n$, and write $L_1$, $L_2$ as $L_1 = k(\alpha_1, \dots, \alpha_m)$, $L_2 = k(\beta_1, \dots, \beta_n)$, where $\alpha_1, \dots, \alpha_m$ is a $k$-basis for $L_1$, and $\beta_1, \dots, \beta_n$ is a $k$-basis for $L_2$. Then we can easily show that



$$[L_1 L_2 : k] \le [L_1 : k] [L_2 : k],$$



where the equality is achieved iff $\beta_1, \dots, \beta_n$ are linearly independent over $L_1$.




I tried to use $L_1 \cap L_2 = k$ to prove the linear independence but failed.


Answer



The answer turns out to be negative.



Consider $k = \mathbb{Q}$, $L_1 = \mathbb{Q}(\alpha)$, and $L_2 = \mathbb{Q}(\alpha \zeta_3)$, where $\alpha$ is the real root of $x^3 - 2$, and $\zeta_3$ is a primitive third root of unity. Then $[L_1 : k] = [L_2 : k] = 3$, and it is easy to show that $L_1 \cap L_2 = k$ (one way is to consider the imaginary parts). However, $L_1 L_2 = \mathbb{Q}(\alpha, \zeta_3)$ is the splitting field of $x^3 - 2$, which has degree $6$ over $\mathbb{Q}$.



It turns out that if one of $L_1/k$ and $L_2/k$ is Galois, then the argument would be true.


complex numbers - How to solve $\sqrt{35 - 5i}$

Need some hints on how to Solve $\sqrt {35 - 5i}$



Attempt.




I factorized 5 out and it became $\sqrt {5(7-i)}$



I just want to know if it can be solved further.



Thanks.

calculus - Limit of an expression



$$\lim\limits_{n\to\infty}\frac{1}{e^n\sqrt{n}}\sum\limits_{k=0}^{\infty}\frac{n^k}{k!}|n-k|=\sqrt{2/\pi}$$
Is this limit true? I need to show that it is. We are allowed to use computer programs to find this limit.
Thanks for your help!



Answer



This question has a nice probabilistic interpretation. Given that $X$ is a Poisson distribution with parameter $\lambda=n$, we are essentially computing the expected value of the absolute difference between $X$ and its mean $n$. The central limit theorem gives that $Y\sim N(n,n)$ (a normal distribution with mean and variance equal to $n$) is an excellent approximation of our distribution for large values of $n$, hence:
$$\begin{eqnarray*}\frac{1}{e^n \sqrt{n}}\sum_{k=0}^{+\infty}\frac{n^k}{k!}|n-k|&\approx&\frac{1}{\sqrt{n}}\cdot\frac{1}{\sqrt{2\pi n}}\int_{-\infty}^{+\infty}|x-n|\exp\left(-\frac{(x-n)^2}{2n}\right)\,dx\\&=&\frac{2}{n\sqrt{2\pi}}\int_{0}^{+\infty}x\exp\left(-\frac{x^2}{2n}\right)\,dx\\&=&\color{red}{\sqrt{\frac{2}{\pi}}},\end{eqnarray*}$$
so the limit is not zero.


Sunday, June 17, 2018

radicals - Reducing square root fractions without calculator




I am trying to figure out how to reduce simple square root fractions without a calculator. In my lecturer's notes, for instance, he reduces $1/\sqrt2$ by multiplying with $4/\sqrt2$. Following is his example and another example of him doing this:



Reducing square root fraction



How does he know how to do this?
I am allowed to bring notes for my exam, is there some practical table for the most common square root fractions I could bring?
Or is there some rule for reducing I can use?



Really hope you can help me out here, thanks!


Answer




That is generally called "rationalizing the denominator". Since $\sqrt{a}\times\sqrt{a}= a$, multiplying both numerator and denominator by the square root appearing in the denominator moves the square root to the numerator: $\frac{1}{\sqrt{2}}\cdot\frac{\sqrt{2}}{\sqrt{2}}= \frac{\sqrt{2}}{2}$.



More generally, if you have something of the form $a+ b\sqrt{c}$ in the denominator, multiply both numerator and denominator by its "conjugate" $a- b\sqrt{c}$. Since $(a+ b)(a- b)= a^2- b^2$, that will also get rid of the square root in the denominator: $\frac{1}{2+ 3\sqrt{2}}\cdot\frac{2- 3\sqrt{2}}{2- 3\sqrt{2}}= \frac{2- 3\sqrt{2}}{4- 9(2)}= -\frac{2- 3\sqrt{2}}{14}$.
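Both rationalizations are easy to confirm numerically (a quick Python sketch):

```python
import math

sqrt2 = math.sqrt(2)

# 1/sqrt(2) == sqrt(2)/2
assert math.isclose(1 / sqrt2, sqrt2 / 2)

# 1/(2 + 3*sqrt(2)) == -(2 - 3*sqrt(2))/14, via the conjugate trick
assert math.isclose(1 / (2 + 3 * sqrt2), -(2 - 3 * sqrt2) / 14)

print("rationalizations check out")
```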


calculus - About $\lim_{x\rightarrow 0}\frac{\sin x}{x} = 1$

I do not understand how $$\lim_{x \to 0} \frac{\sin x}{x} = 1,$$ since at $$ x = 0, \quad \frac{\sin (0)}{0} = \frac {0}{0}. $$ So if someone could explain this I would appreciate it! Thanks!

Saturday, June 16, 2018

calculus - Why isn't it mathematically rigorous to treat dx's and dy's as variables?





If I do something like:



$$\frac{dy}{dx} = D$$



$$dy = D \times dx$$



People would often say that it is not rigorous to do so. But if we start from the definition of the derivative:



$$\lim_{h \to 0}{\frac{f(x + h) - f(x)}{h}} = D$$




And by using the properties of limits we can say:



$$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$



And then finally:



$$\lim_{h \to 0}(f(x + h) - f(x)) = D \times (\lim_{h \to 0} h)$$



Isn't this the same? Or am I missing something?


Answer




I can spot the following mistake in your attempt:





  1. And by using the properties of limits we can say:



    $$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$






You cannot actually do that, as smcc has said. You must note that $$\lim_{x\to 0} \frac{f(x)}{g(x)}=\frac{\lim\limits_{x\to 0} f(x)}{\lim\limits_{x\to 0} g(x)}\qquad\text{only if}\qquad\lim\limits_{x\to 0} g(x) \neq 0$$
(and both limits exist). So what you have said is not right.






Now coming to the actual question, if $dx$ and $dy$ can be treated as variables,
most of the mathematicians treat $\frac{d}{dx}$ as a mathematical operator (like $+,-,*,/$) which acts on the variable $y$. So that way, you can clearly understand what is the variable and what is not.



However, if you are strict enough to observe from the "limit" viewpoint, then observe that $\frac{dy}{dx}$ is nothing but $\lim_\limits{\Delta x\to 0}\frac{\Delta y}{\Delta x}$. Now $\frac{\Delta y}{\Delta x}$ is a fraction with $\Delta y$ and $\Delta x$ in the numerator and denominator. So you can view them as variables now.




Looks a bit weird, I know, but it entirely depends on how you want to support your argument and from which point of view you want to make your claim.


probability - Coupon Collector Problem with multiple copies and X amount of coupons already collected

I have a variant of the coupon collector problem where there are $n$ different coupons, drawn with equal probability and with replacement, but $2$ copies of each coupon are needed. Now say I have collected a total of $X$ relevant coupons out of the $2n$ needed for a full set, and I want to know how many extra, unneeded coupons I should expect to have drawn. What would be an equation for that? I have found on Wikipedia an equation for the number of draws needed to get a full set of one of each coupon, and for the number of draws needed to get a full set of multiple copies of each coupon.


$E(T) = n\log{n} + \gamma n + \frac{1}{2} + O\left(\frac{1}{n}\right)$, where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.


$E(T_m) = n\log{n} + (m-1)n\log\log{n} + O(n)$, as $n\to\infty$,


where $m$ is the number of copies of each coupon to be collected, so $2$ in this case, and $T_m$ is the first time $m$ copies of each coupon are collected.


I also found this from a previous question. Coupon Collector's Problem with X amount of coupons already collected.




The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:


$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$


In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:


$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$



So to find the total number of drawn coupons upon collecting the $X^{th}$ unique coupon, the equation would be $$n \sum_{k=1}^{n}{\frac{1}{k}}-n \sum_{k=1}^{n-X} \frac{1}{k} $$


The total number of just unneeded duplicate coupons would be $$n \sum_{k=1}^{n}{\frac{1}{k}}- n \sum_{k=1}^{n-X} \frac{1}{k} - X $$
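For the single-copy quantities quoted above (not the $m=2$ variant ultimately being asked about), the formula is straightforward to evaluate; a Python sketch with a hypothetical helper name:

```python
from math import fsum

def expected_draws_for_X_unique(n, X):
    # expected draws by the time the X-th distinct coupon is seen:
    # n * (H_n - H_{n-X}), i.e. the difference of the two sums above
    return n * fsum(1 / k for k in range(n - X + 1, n + 1))

n, X = 100, 50
total = expected_draws_for_X_unique(n, X)
duplicates = total - X   # draws that were not new coupons
print(total, duplicates)
```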


I'm not sure how to combine these two equations, $E(T_m) = n\log{n} + (m-1)n\log\log{n} + O(n)$ as $n\to\infty$, and $n \sum_{k=1}^{n}{\frac{1}{k}}-n \sum_{k=1}^{n-X} \frac{1}{k}$, to get the total number of coupons drawn upon having $X$ relevant coupons collected towards a full collection of $2$ of each coupon. Then with that equation I would just subtract $X$ (the number of relevant coupons collected) from the total to get the total unneeded coupons. I will admit I'm not in a math profession nor have I done any higher-level math lately, but if I understand correctly the first equation is more of an approximation of the value, while the second equation is exact. So I'm not sure how to combine them or if they can be easily combined.


Also, while I somewhat understand $O(n)$ and its use, I'm not sure how to input it with the rest of the equation into Wolfram Alpha or even Excel, the end goal of where I want to use this equation. The maximum number of coupons would be about 100, if that helps. If it would be easier, the total number of coupons that have only 1 copy and the number of coupons that have 2 copies collected could be used as an input instead of the total number of relevant coupons collected.

measure theory - Do limits of sequences of sets come from a topology?



In measure theory we frequently see the following definitions:




$$\limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty}\left(\bigcup_{j=n}^{\infty} A_j\right)$$



$$\liminf_{n\to\infty} A_n = \bigcup_{n=1}^{\infty}\left(\bigcap_{j=n}^{\infty} A_j\right)$$



where $(A_n)_n$ is a sequence of measureable sets i.e. $\forall n: A_n\in\mathcal{M}$, where $\mathcal{M}$ is a $\sigma$-algebra on $X$, for example $\mathcal{M} = 2^X$. Therefore it makes sense to also define:



$$\lim_{n\to\infty}A_n = \limsup_{n\to\infty} A_n = \liminf_{n\to\infty} A_n$$



when the last two agree. If $\mu$ is a finite (positive, to keep things simple) measure, it is easy to see that under such definition we have $\mu(\lim_{n\to\infty}A_n) = \lim_{n\to\infty}\mu(A_n)$, whenever $\lim_{n\to\infty}A_n$ exists, which looks like some kind of continuity.
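For an eventually periodic sequence of sets, the tail unions and intersections stabilize, so $\limsup$ and $\liminf$ can be read off one repeating block. A small illustrative Python sketch (the helper names are mine, and the periodicity assumption is what makes the finite computation valid):

```python
def limsup_cycle(cycle):
    # elements occurring in infinitely many A_n = union over the repeating block
    return set().union(*cycle)

def liminf_cycle(cycle):
    # elements eventually in every A_n = intersection over the repeating block
    out = set(cycle[0])
    for s in cycle[1:]:
        out &= set(s)
    return out

# A_n alternates between {1,2} and {2,3}: limsup and liminf differ,
# so lim A_n does not exist for this sequence
print(limsup_cycle([{1, 2}, {2, 3}]))
print(liminf_cycle([{1, 2}, {2, 3}]))
```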





Does this kind of convergence of sequences of measurable sets arise from a (preferably Hausdorff, so that limits are unique) topology on $\mathcal{M}$? If such a topology exists, is $\mu:\mathcal{M}\to[0,\infty)$ in fact a continuous function?




(A related question that may be of interest would be: what happens if we allow arbitrary sets? Can we make the Von Neumann universe $V$ into a topological space in such a way?)


Answer



There is such a topology. Simply give $\mathcal{M}$ the subspace topology induced by the product topology on $2^X$.



It may help to think of $2^X$ as the set of functions from $X$ to $\{0,1\}$, by identifying a set with its indicator function. Then we have $1_{\limsup A_n} = \limsup 1_{A_n}$ and so on. Since the product topology is just the topology of pointwise convergence, this behaves as desired.




However, the map $A \mapsto \mu(A)$ is not in general continuous with respect to this topology. For instance, the finite sets are dense in $\mathcal{M}$ with this topology, and so any nontrivial non-atomic measure gives a discontinuous map.


Friday, June 15, 2018

integration - Find the value of $\lim\limits_{n \to \infty}n\left(\left(\int_0^1\frac{1}{1+x^n}\,\mathrm{d}x\right)^n-\frac{1}{2}\right)$




Find the value of the limit $$\lim_{n \to \infty}n\left(\left(\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x\right)^n-\frac{1}{2}\right)$$





I can't solve the integral $\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x$. But maybe the question doesn't require solving the integral.
Apparently $\lim_{n \to \infty}\left(\int_0^1 \frac{1}{1+x^n}\,\mathrm{d}x\right)^n$ should be $\frac{1}{2}$ for the question to make sense. That's all I know.


Answer



Let $I(n)$ be given by the integral



$$\begin{align}
I(n)&=\int_0^1 \frac{1}{1+x^n}\,dx \tag 1\\\\
\end{align}$$




Then, expanding the integrand of the integral on the right-hand side of $(1)$ in the Taylor series $ \frac{1}{1+x^n}=\sum_{k=0}^\infty (-1)^kx^{nk}$ reveals



$$\begin{align}
I(n)&=\sum_{k=0}^\infty \frac{(-1)^k}{nk+1}\\\\
&=1+\frac1n \sum_{k=1}^\infty\frac{(-1)^k}{k+1/n} \tag 2
\end{align}$$






Next, using the fact that $\log(2)= \sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}$ and that $\frac{\pi^2}{12}=-\sum_{k=1}^\infty \frac{(-1)^k}{k^2}$, we can write the series in $(2)$ as




$$\begin{align}
\sum_{k=1}^\infty\frac{(-1)^k}{k+1/n} &=-\log(2)+\sum_{k=1}^\infty (-1)^k\left(\frac{1}{k+1/n}-\frac1k\right)\\\\
&=-\log(2)-\frac1n \sum_{k=1}^\infty \frac{(-1)^k}{k(k+1/n)}\\\\
&=-\log(2)-\frac1n \sum_{k=1}^\infty\frac{(-1)^k}{k^2}-\frac1n\sum_{k=1}^\infty (-1)^k\left(\frac{1}{k(k+1/n)}-\frac{1}{k^2}\right)\\\\
&=-\log(2)+\frac{\pi^2}{12n}+O\left(\frac1{n^2}\right) \tag 3
\end{align}$$







Using $(1)-(3)$ yields



$$I(n)=1-\frac{\log(2)}{n}+\frac{\pi^2}{12n^2}+O\left(\frac1{n^3}\right) \tag 4$$






Next, using $(4)$, we can write



$$\begin{align}
\left(I(n)\right)^n&=e^{n\log\left(1-\frac{\log(2)}{n}+\frac{\pi^2}{12n^2}+O\left(\frac1{n^3}\right)\right)}\\\\

&=e^{-\log(2)+\frac{\pi^2}{12n}-\frac{\log^2(2)}{2n}+O\left(\frac{1}{n^2}\right)}\\\\
&=\frac12 \left(1+\frac{\pi^2}{12n}-\frac{\log^2(2)}{2n}+O\left(\frac{1}{n^2}\right)\right)
\end{align}$$



Finally, we have




$$\begin{align}
\lim_{n\to \infty}\left(n\left(\left(I(n)\right)^n-\frac12\right)\right)&=\lim_{n\to \infty}\left(\frac{\pi^2}{24}-\frac{\log^2(2)}{4}+O\left(\frac1n\right)\right)\\\\
&=\bbox[5px,border:2px solid #C0A000]{\frac{\pi^2}{24}-\frac{\log^2(2)}{4}}

\end{align}$$




And we are done!
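The expansion $(4)$ can be checked numerically (a Python sketch using an ad-hoc Simpson rule; $n=100$ keeps the integrand's sharp transition near $x=1$ easy to resolve):

```python
import math

def I(n, m=200000):
    # composite Simpson approximation of I(n) = integral of 1/(1+x^n) on [0,1]
    f = lambda x: 1.0 / (1.0 + x ** n)
    h = 1.0 / m
    s = f(0.0) + f(1.0)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

n = 100
In = I(n)
asymptotic = 1 - math.log(2) / n + math.pi ** 2 / (12 * n * n)
print(In, asymptotic)
```

The two values should agree up to the $O(1/n^3)$ error of the expansion.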


calculus - graphical meaning of integration by parts of this function



I have seen this and this. After looking at the first link I was pretty satisfied with the answer. Now I understand why



$\int vdu=uv-\int udv$




works graphically.



I have even worked out $\int (\sin^{-1}x)dx$ using integration by parts and understood it graphically.



But I am not able to visualize this $\int (x\sin x)dx$ in the graphical manner as I'm not able to find an inverse which is important for the graphical method to work. However, I could solve it mathematically.



Now my questions are



i) Is the graphical intuition only limited to functions which can be integrated this way?




$\int f^{-1}(y)dy=xy-\int f(x)dx$



ii)Please help me with a graphical intuition behind solving functions like $\int (x\sin x)dx$ using integration by parts (if any). I will try to do the remaining math myself.



Any suggestion is of great help to me. Thank you.


Answer



I am not sure if this helps, but I'll show you a proof for integration by parts then use it to evaluate $\int x\sin x\,dx$. I don't know how to really think of integration by parts graphically, most of the visualization for me comes from seeing the algebra.



Let $u(x)$ and $v(x)$ be differentiable, continuous functions. Then we know that
$$\frac{d}{dx}\left[u(x)v(x)\right]=u'(x)v(x)+u(x)v'(x)$$

And if we integrate both sides, we have that
$$\int \frac{d}{dx}\left[u(x)v(x)\right] dx=\int u'(x)v(x)dx+\int u(x)v'(x)dx$$
$$u(x)v(x)=\int u'(x)v(x)dx+\int u(x)v'(x)dx$$
And after some simple re-arrangement,
$$\int u(x)v'(x)dx=u(x)v(x)-\int u'(x)v(x)dx$$
Which is usually just abbreviated as
$$\int udv=uv-\int vdu$$
Which completes our proof.







Next, we evaluate some examples.



Ex 1: $$I=\int x\sin(x)dx$$
For this, I do not recommend trying to visualize the integration, because that could just make things more complicated. Instead integrate by parts with
$$u=x\Rightarrow du=dx\\ dv=\sin(x)dx\Rightarrow v=-\cos(x)$$
Hence,
$$I=-x\cos(x)+\int\cos(x)dx$$
$$I=-x\cos(x)+\sin(x)+C$$
Which is rather easy: no visualization required.




Ex. 2:
$$I=\int x\tan^{-1}(x)dx$$
I chose this example because it is rather hard to visualize, but can be computed with integration by parts as follows:
$$u=\tan^{-1}(x)\Rightarrow du=\frac{dx}{1+x^2}\\ dv=xdx\Rightarrow v=\frac{x^2}2$$
Hence
$$I=\frac{x^2}2\tan^{-1}(x)-\frac12\int\frac{x^2}{1+x^2}dx$$
This final integral is easier than it looks, and it can be computed by noting that
$$\begin{align}
\int\frac{x^2}{x^2+1}dx&=\int\frac{x^2+1-1}{x^2+1}dx\\

&=\int\frac{x^2+1}{x^2+1}dx-\int\frac{dx}{x^2+1}\\
&=\int dx-\int\frac{dx}{x^2+1}\\
&=x-\tan^{-1}(x)\\
\end{align}$$

So, without having to visualize anything,
$$I=\frac{x^2+1}2\tan^{-1}(x)-\frac{x}2+C$$
Tell me if there's anything I can do to improve my answer if it is not what you needed.
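Both antiderivatives can be verified numerically by differentiating them with a central finite difference (a Python sketch, independent of the derivations above):

```python
import math

def deriv(F, x, h=1e-6):
    # central finite-difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

F1 = lambda x: -x * math.cos(x) + math.sin(x)          # candidate for ∫ x sin x dx
F2 = lambda x: (x * x + 1) / 2 * math.atan(x) - x / 2  # candidate for ∫ x arctan x dx

for x in (0.5, 1.3, 2.9):
    assert abs(deriv(F1, x) - x * math.sin(x)) < 1e-6
    assert abs(deriv(F2, x) - x * math.atan(x)) < 1e-6

print("antiderivatives verified at sample points")
```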


elementary number theory - Factorization and modular inverses


In this post in the last method the factorials were factorized. But I don't quite understand how that works.


Lets say we have



$$ (-24)^{-1}+(6)^{-1} +(-2)^{-1}$$


modulo a prime $p$, for instance $7$. Then $(-24)^{-1} = 2$, $(6)^{-1} = 6$ and $(-2)^{-1} = 3$ (correct me if I'm wrong).
The sum is congruent to $11 \equiv 4$ modulo $7$ which is correct.


However, the factorized method multiplies $(-24)^{-1}$ by $8$ modulo $7$. That is $(-24)^{-1}$ (because $8 \equiv 1 \pmod 7$), which equals $2$... and that is wrong.


Am I doing something wrong here? Is $7$ an exception because $8$ is congruent to $1$?


Answer



I think the problem is a mistake that was pointed out in the comments. Note that


$$-24(-24)^{-1}\equiv1\pmod p$$ $$6[-4(-24)^{-1}]\equiv1\pmod p$$.


So we have $6^{-1}\equiv-4(-24)^{-1}$. Similarly, we have $(-2)^{-1}\equiv12(-24)^{-1}$. Therefore, we have


$$(-24)^{-1}+6^{-1}+(-2)^{-1}\equiv(-24)^{-1}(1-4+12)\equiv9(-24)^{-1}$$
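A quick check with Python's built-in modular inverse (`pow(a, -1, p)`, available since Python 3.8): the common factor is $(-24)^{-1}$, so the sum is $9(-24)^{-1}\equiv 4\pmod 7$, matching the term-by-term computation:

```python
p = 7

# pow(a, -1, p) computes the inverse of a modulo p (a reduced mod p first)
inv = lambda a: pow(a, -1, p)

direct = (inv(-24) + inv(6) + inv(-2)) % p
factored = (9 * inv(-24)) % p   # (-24)^{-1} * (1 - 4 + 12)
print(direct, factored)
```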


trigonometry - Prove $\sin(\alpha-\beta)+\sin(\alpha-\gamma)+\sin(\beta-\gamma)=4\cos\frac{\alpha-\beta}{2}\sin\frac{\alpha-\gamma}{2}\cos\frac{\beta-\gamma}{2}$




Here is a problem from Gelfand's Trigonometry:




Let $\alpha, \beta, \gamma$ be any angle, show that $$\sin(\alpha -\beta)+\sin(\alpha-\gamma)+\sin(\beta-\gamma)=4\cos\left(\frac{\alpha-\beta}{2}\right)\sin\left(\frac{\alpha-\gamma}{2}\right)\cos\left(\frac{\beta-\gamma}{2}\right).$$




I have tried to work through this problem but cannot complete it. If I let $A= \alpha -\beta$, $B=\alpha-\gamma$ and $C= \beta-\gamma$, and assume $A+B+C=\pi$ (so that $A$, $B$ and $C$ are angles of a triangle), then I could prove the equality. But without this condition, I am stuck.



Could you show me how to complete this exercise?


Answer




$$
\begin{align}
\color{#C00}{\sin(x)+\sin(y)}+\color{#090}{\sin(x+y)}
&=\color{#C00}{2\sin\left(\frac{x+y}2\right)\cos\left(\frac{x-y}2\right)}+\color{#090}{2\sin\left(\frac{x+y}2\right)\cos\left(\frac{x+y}2\right)}\\
&=2\sin\left(\frac{x+y}2\right)\left[\cos\left(\frac{x-y}2\right)+\cos\left(\frac{x+y}2\right)\right]\\
%&=2\sin\left(\frac{x+y}2\right)\,\color{#00F}{2\cos\left(\frac x2\right)\cos\left(\frac y2\right)}\\
%&=4\sin\left(\frac{x+y}2\right)\cos\left(\frac x2\right)\cos\left(\frac y2\right)
\end{align}
$$
Finish off by using the formula for the cosine of a sum/difference, then set $x=\alpha-\beta$ and $y=\beta-\gamma$.
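A numerical spot-check of the identity at random angles (an addition, not part of the answer); note that with $x=\alpha-\beta$ and $y=\beta-\gamma$ we have $x+y=\alpha-\gamma$, so the right-hand side matches the claimed form.

```python
import math
import random

random.seed(0)
for _ in range(100):
    a, b, g = (random.uniform(-10, 10) for _ in range(3))
    lhs = math.sin(a - b) + math.sin(a - g) + math.sin(b - g)
    rhs = 4 * math.cos((a - b) / 2) * math.sin((a - g) / 2) * math.cos((b - g) / 2)
    assert abs(lhs - rhs) < 1e-9
print("identity verified at 100 random angles")
```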



Thursday, June 14, 2018

fractions - $\frac{6}{4 \times 2} + \frac{7}{5 \times 2} + \dots + \frac{21}{19 \times 2}$




I got this exercise from school and I have no idea what notion to use; it reduces to the harmonic series, and I can't find a general answer. Do you have any idea?



$\frac{6}{4 \times 2} + \frac{7}{5 \times 2} + ... + \frac{21}{19 \times 2}$



Which is the generic answer for such a sum?


Answer



Consider the most general case $$A=\sum_{n=a}^b\frac n{2(n-2)}=\frac 12\sum_{n=a}^b\frac n{n-2}=\frac 12\sum_{n=a}^b\frac {n-2+2}{n-2}$$ $$A=\frac 12\sum_{n=a}^b1+\sum_{n=a}^b\frac {1}{n-2}=\frac{b-a+1}2+\sum_{n=a}^b\frac {1}{n-2}$$ Here $a=6$ and $b=21$, so you are just left with the sum of fractions $\frac 14+\frac 15+\frac 16+\cdots+\frac 1{19}$
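The splitting can be confirmed exactly with rational arithmetic for the concrete sum $a=6$, $b=21$ (an addition to the answer):

```python
from fractions import Fraction

a, b = 6, 21
# the sum as given in the question
direct = sum(Fraction(n, 2 * (n - 2)) for n in range(a, b + 1))
# the split form: (b-a+1)/2 plus the harmonic-type tail 1/4 + ... + 1/19
split = Fraction(b - a + 1, 2) + sum(Fraction(1, n - 2) for n in range(a, b + 1))
assert direct == split
print(direct)
```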


calculus - Showing that $\int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi}$





The primitive of $f(x) = \exp(-x^2)$ has no analytical expression; even so, it is possible to evaluate $\int f(x)\,\mathrm{d}x$ along the whole real line with a few tricks. How can one show that $$ \int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi} \space ? $$


Answer



Such an integral is called a Gaussian integral.


This link should help you out.
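As a quick numerical sketch (an addition, not part of the linked answer): a midpoint Riemann sum on $[-10, 10]$ already matches $\sqrt{\pi}$ to high accuracy, since the tails of $e^{-x^2}$ beyond that interval are negligibly small.

```python
import math

N = 200_000
a, b = -10.0, 10.0
h = (b - a) / N
# composite midpoint rule for the integral of exp(-x^2) over [a, b]
approx = h * sum(math.exp(-(a + (k + 0.5) * h) ** 2) for k in range(N))
assert abs(approx - math.sqrt(math.pi)) < 1e-9
print(approx)
```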


Wednesday, June 13, 2018

limits - Find $\displaystyle\lim_{x\to\infty}x - \sqrt{x+1}\sqrt{x+2}$ using squeeze theorem


Find $$\lim_{x\to\infty}x - \sqrt{x+1}\sqrt{x+2}$$ using squeeze theorem



Tried using binomial expansion, but have no idea on how to continue.
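Not part of the question, but a numerical sanity check suggests the limit is $-\tfrac32$: for large $x$, $\sqrt{x+1}\,\sqrt{x+2}=\sqrt{x^2+3x+2}\approx x+\tfrac32$.

```python
import math

def f(x):
    return x - math.sqrt(x + 1) * math.sqrt(x + 2)

for x in [1e3, 1e4, 1e5]:
    # f(x) approaches -3/2, with an error of roughly 1/(8x)
    assert abs(f(x) + 1.5) < 1e-2
print(f(1e5))
```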

elementary number theory - Help with congruence and divisibility exercise



I'm starting to solve some problems of congruence and integer division, so the exercise is quite simple but I'm not sure I'm on the right track. I need to prove that the following is true for all $n \in \Bbb N$:
$$9\ |\ 7 \cdot 5^{2n}+ 2^{4n+1}$$



This is what I have so far:




$$7 \cdot 5^{2n} + 2^{4n+1} \equiv 0 \ (9)$$



So I try to see what each term of the sum is congruent to: $7 \equiv -2 \ (9)$ and $5^{2n} \equiv 4^{2n}\ (9)$ (since $5 \equiv -4\ (9)$ and the exponent is even), hence $7 \cdot 5^{2n} \equiv -2 \cdot 4^{2n} \ (9)$; moreover $4^{2n} = 16^n \equiv 7^n\ (9)$, which leaves me with:



$$7 \cdot 5^{2n} \equiv 7^n \cdot -2 \ (9)$$



As for the other side:



$$2^{4n+1} = 2 \cdot 16^n \equiv 7^{n} \cdot 2\ (9)$$




Finally combining them:



$$7 \cdot 5^{2n} + 2^{4n+1} \equiv 7^n \cdot (-2) + 7^{n} \cdot 2\ (9)$$



Am I right so far? Any hint on how to continue?
Thanks!


Answer



Alternative proof by induction.







First, show that this is true for $n=0$:



$7\cdot5^{2\cdot0}+2^{4\cdot0+1}=9$



Second, assume that this is true for $n$:



$7\cdot5^{2n}+2^{4n+1}=9k$




Third, prove that this is true for $n+1$:



$7\cdot5^{2(n+1)}+2^{4(n+1)+1}=$



$16\cdot(\color\red{7\cdot5^{2n}+2^{4n+1}})+63\cdot5^{2n}=$



$16\cdot\color\red{9k}+63\cdot5^{2n}=$



$9\cdot(16k+7\cdot5^{2n})$







Please note that the assumption is used only in the part marked red.
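A direct brute-force verification of the claim for the first several values of $n$ (an addition to the answer):

```python
# check that 9 divides 7*5^(2n) + 2^(4n+1) for n = 0..49
for n in range(50):
    assert (7 * 5 ** (2 * n) + 2 ** (4 * n + 1)) % 9 == 0
print("divisible by 9 for n = 0..49")
```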


abstract algebra - Degree of a finite field extension. How to find?




What is the degree of the extension $[\mathbb Q(\sqrt2 + \sqrt3 + \sqrt5) : \mathbb Q ]$?
Can you explain, what I must to do in this example?


Answer



$[\mathbb Q(\sqrt2 + \sqrt3 + \sqrt5) : \mathbb Q ]=[\mathbb Q(\sqrt2 , \sqrt3 , \sqrt5) : \mathbb Q ]=[\mathbb Q(\sqrt2 ,\sqrt3 , \sqrt5) : \mathbb Q (\sqrt2 , \sqrt3)]\cdot[\mathbb Q(\sqrt2 ,\sqrt3 ) : \mathbb Q (\sqrt2 )]\cdot[\mathbb Q(\sqrt2 ) : \mathbb Q ]=2\cdot2\cdot2=8$




$\mathbb Q(\sqrt2 + \sqrt3 + \sqrt5) $ is a degree-$8$ extension of $\mathbb Q$, and a basis is $1,\sqrt2,\sqrt3,\sqrt5,\sqrt6,\sqrt{10},\sqrt{15},\sqrt{30}$.
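A numerical illustration (an addition, not part of the answer): the $8$ conjugates $\pm\sqrt2\pm\sqrt3\pm\sqrt5$ yield a monic degree-$8$ polynomial whose coefficients come out as integers, consistent with $[\mathbb Q(\sqrt2+\sqrt3+\sqrt5):\mathbb Q]=8$.

```python
import itertools
import math

conjugates = [s2 * math.sqrt(2) + s3 * math.sqrt(3) + s5 * math.sqrt(5)
              for s2, s3, s5 in itertools.product((1, -1), repeat=3)]

# multiply out prod(x - c) over all 8 conjugates;
# coefficients stored highest-degree first
coeffs = [1.0]
for c in conjugates:
    shifted = coeffs + [0.0]                    # p(x) * x
    scaled = [0.0] + [c * a for a in coeffs]    # c * p(x), degree-aligned
    coeffs = [u - v for u, v in zip(shifted, scaled)]

ints = [round(a) for a in coeffs]
assert len(ints) == 9                           # degree 8
assert all(abs(a - r) < 1e-6 for a, r in zip(coeffs, ints))

# the rounded integer polynomial vanishes at alpha = sqrt2 + sqrt3 + sqrt5
alpha = math.sqrt(2) + math.sqrt(3) + math.sqrt(5)
value = sum(a * alpha ** (8 - k) for k, a in enumerate(ints))
assert abs(value) < 1e-6
print(ints)
```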


functional equations - If $f(xy) = f(x) + f(y)$, show that $f(.)$ can only be a logarithmic function.


As the question states, show that the property exhibited can only be satisfied by a logarithmic function, i.e. no other family of functions can satisfy the above property.


Answer



Continuity is necessary.


If $F(x+y)=F(x)+F(y)$, for all $x,y$ and $F$ discontinuous (such $F$ exist due to the Axiom of Choice, and in particular, the fact that $\mathbb R$ over $\mathbb Q$ possesses a Hamel basis) and $f(x)=F(\log x)$, then $f(xy)=f(x)+f(y)$, and $f$ is not logarithmic!


abstract algebra - Other Algebraically Independent Transcendentals



I was thinking about an incomplete answer I gave earlier today to an interesting question asked by the user lurker. The question was about whether or not the sum or the product of two transcendental numbers in $\mathbb{C}/\mathbb{Q}$ could result in an algebraic number in the same set. An infinite number of trivial examples can be produced by manipulating the same transcendental, as shown in the answers and in the comments, but I was intrigued by what I believed to be the question's true aim, the non-trivial results.
After fiddling around with some algebra to try and understand the nature of transcendental extensions, and then reading about them during the day, I reached the known result that it depends on whether or not the two given transcendental numbers are algebraically independent. If they are, then we cannot produce a "non-trivial algebraic" transformation between them (i.e. using only the four basic operations) that yields an algebraic number. Otherwise, it is just a question of exhibiting a polynomial relation with rational coefficients satisfied by the given transcendental numbers.
Now to the real question. From Nesterenko's results [1996] we know the algebraic independence of $\pi$, some values of the Gamma function $\Gamma$, and some powers of $e^\pi$. From the unproven Schanuel conjecture it follows that $\pi$ and $e$ are independent.





  1. Are there examples of more notable (conjectured or known) algebraically independent transcendental numbers?

  2. Are there a pair of transcendental numbers, maybe notable constants, whose algebraical dependence has been proved or conjectured?


Answer



Generally speaking the answer is "no"; however, we have results such as the Lindemann-Weierstrass theorem, Baker's theorem, and the Gelfond-Schneider theorem (some of which are just refinements of others, but you get the idea). The actual algebraic independence results are usually phrased in terms of sets involving numbers versus their exponentials, together with a careful analysis of properties of the exponential function to show that both cannot simultaneously be algebraic.



However, generally speaking, doing this for specific numbers which don't fit immediately into that framework is a difficult order to fill. This is mostly because transcendentals give us very little to work with: ordinarily we prove facts about them by contradiction, because the only fact we have about them is that they satisfy no polynomial with rational coefficients. So one argues by assuming a number is algebraic, showing it would then satisfy some property (usually a bound on some invariant), proving that numbers which do not have that property exist in some set, and concluding that those numbers are transcendental.




To give you an idea of the scope of the difficulty, we still do not know if the logarithms of the rational primes are algebraically independent, one of the oldest problems in Diophantine analysis and transcendence.


general topology - Order type between two sets and bijection?

I want to show that $$ \{1,2\}\times Z_+\ \text{and} \ Z_+ \times\{1,2\}\ \text{have different order type}$$



If we define $$f(i,j)=(j,i)\ \text{for}\ i\ \text{in }\{1,2\}\ \text{and} \ j\ \text{in}\ Z_+$$



It seems that this is a bijective map between the two sets.




However, to show that they are not order isomorphic, how shall I start to show that bijection does not preserve ordering?



It seems that the way I defined the bijection is not the only way.



I am wondering: if there exists a bijection between two sets and that bijection does not preserve order, can I conclude that they have different order types?

summation - How to find the sum of the series?



$$\frac{1}{1\cdot3} + \frac{7}{1\cdot3\cdot5} + \frac{17}{1\cdot3\cdot5\cdot7} + \frac{31}{1\cdot3\cdot5\cdot7\cdot9} +\dotsb\ \text{up to } 10 \text{ terms}$$



I found the $n$th term of the series but don't know how to proceed from there.



$$ T_n = \frac{2n^2 -1 }{1\cdot3\cdot5\cdot7\cdots(2n-1)(2n+1)}$$


Answer




In double factorial notation, this is
$$\sum^{10}_{n=1}\frac{2n^2-1}{(2n+1)!!}.$$ But $$2n^2-1=\frac{(2n-1)(2n+1)-1}2,$$
and that means $$\frac{2n^2-1}{(2n+1)!!}=\frac12\left(\frac1{(2n-3)!!}-\frac1{(2n+1)!!}\right),$$ i.e. we have a telescoping series, so
\begin{align}\sum^{10}_{n=1}\frac{2n^2-1}{(2n+1)!!}&=\frac12\left(\frac1{(-1)!!}+\frac1{1!!}-\frac1{19!!}-\frac1{21!!}\right)=\frac12\left(2-\frac1{654729075}-\frac1{13749310575
}\right)\\&=1-\frac1{1249937325}.\end{align}
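The closed form can be confirmed exactly with rational arithmetic (an addition to the answer):

```python
from fractions import Fraction

def dfact(m):
    # double factorial m!! for odd m >= -1, with (-1)!! = 1!! = 1
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

total = sum(Fraction(2 * n * n - 1, dfact(2 * n + 1)) for n in range(1, 11))
assert total == 1 - Fraction(1, 1249937325)
print(total)
```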


soft question - Looking for Proofs Of Basic Properties Of Real Numbers



I have just begun my study of complex numbers, and I learned where imaginary numbers came from and their importance. However, there's one thing I need to clarify: the properties of real numbers and their proofs.





  1. Closure Laws
    For all $a,b \in \mathbb{R}$, $a+b$, $a-b$, $ab$, $a/b$ are real numbers. Thus $\mathbb{R}$ is closed under four fundamental operations.


  2. Commutative Laws
    For all $a,b \in \mathbb{R}$ $a+b = b+a$ and $ab = ba$.


  3. Associative Laws
    For all $a,b,c \in \mathbb{R}$ $a+(b+c) = (a+b)+c$ and $a(bc) = (ab)c$.


  4. Additive Identity
    For all $a \in \mathbb{R}$ there exists $0\in \mathbb{R}$ such that $a+0 = 0+a = a$.


  5. Additive inverse
    For all $a \in \mathbb{R}$ there exists a $b \in \mathbb{R}$ such that $a+b = b+a = 0$, the additive identity $b = -a$ is called the additive inverse or the negative of $a$.




and similarly Multiplicative Identity, Multiplicative inverse, Distributive Law, Trichotomy Law, Transitivity of order, Monotone Law of Addition, Monotone law of multiplication.




I understand that the above laws hold good throughout mathematics. Should these laws be accepted as true "on faith", or are there proofs?
If there are, I am curious to know them. As far as I can tell, no textbook discusses proofs of these.


Answer



If some book states them like that, you should NOT take them on faith, NOR believe that they can be proven. The set of all real numbers is NOT closed under the operation of division, contrary to the closure law as stated above, since division by $0$ is not defined on the set of all real numbers.


Tuesday, June 12, 2018

number theory - Proving that $19\mid 5^{2n+1}+3^{n+2} \cdot 2^{n-1}$




How can I prove that $$5^{2n+1}+3^{n+2} \cdot 2^{n-1} $$ can be divided by $19$ for any positive integer $n$? (For $n=0$ the term $2^{n-1}$ is not an integer.) What modulo should I choose?


Answer



You can prove this by induction.






First, show that this is true for $n=1$:



$5^{2\cdot1+1}+3^{1+2}\cdot2^{1-1}=19\cdot8$




Second, assume that this is true for $n$:



$5^{2n+1}+3^{n+2}\cdot2^{n-1}=19k$



Third, prove that this is true for $n+1$:



$5^{2(n+1)+1}+3^{n+1+2}\cdot2^{n+1-1}=$



$5^{2+2n+1}+3^{1+n+2}\cdot2^{1+n-1}=$




$5^{2}\cdot5^{2n+1}+3^{1}\cdot3^{n+2}\cdot2^{1}\cdot2^{n-1}=$



$5^{2}\cdot5^{2n+1}+3^{1}\cdot2^{1}\cdot3^{n+2}\cdot2^{n-1}=$



$25\cdot5^{2n+1}+\color\green{6}\cdot3^{n+2}\cdot2^{n-1}=$



$25\cdot5^{2n+1}+(\color\green{25-19})\cdot3^{n+2}\cdot2^{n-1}=$



$25\cdot5^{2n+1}+25\cdot3^{n+2}\cdot2^{n-1}-19\cdot3^{n+2}\cdot2^{n-1}=$




$25\cdot(\color\red{5^{2n+1}+3^{n+2}\cdot2^{n-1}})-19\cdot3^{n+2}\cdot2^{n-1}=$



$25\cdot\color\red{19k}-19\cdot3^{n+2}\cdot2^{n-1}=$



$19\cdot25k-19\cdot3^{n+2}\cdot2^{n-1}=$



$19\cdot(25k-3^{n+2}\cdot2^{n-1})$







Please note that the assumption is used only in the part marked red.
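A direct brute-force check for the first several values of $n$ (an addition to the answer); note that $2^{n-1}$ is an integer for $n \geq 1$.

```python
# check that 19 divides 5^(2n+1) + 3^(n+2) * 2^(n-1) for n = 1..49
for n in range(1, 50):
    assert (5 ** (2 * n + 1) + 3 ** (n + 2) * 2 ** (n - 1)) % 19 == 0
print("divisible by 19 for n = 1..49")
```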


elementary set theory - Bijective Function between Uncountable Set of Real numbers and a set of all functions




Let $S$ be the set of all real numbers in $(0, 1)$ having a decimal representation which only uses the digits $0$ and $1$. So for example, the number $1/9$ is in $S$ because $1/9 = 0.1111\ldots$, the number $1/100$ is in $S$ because we could write $1/100 = 0.010000\ldots$, but $1/2$ is not in $S$ because $1/2 = 0.5$ which contains the digit $5$ (or $1/2 = 0.49999\ldots$ which contains $4$ and $9$).




(a) Prove that $S$ is uncountable. [Hint: use a proof similar to the proof of Theorem 10.8.]



(b)Let $T =\{1,2\}^{\mathbb{N}}$ be the set of all functions $f :\mathbb{N}\to\{1,2\}$,where $\mathbb{N}$ is the set of all positive integers. Find a bijection between the sets $S$ and $T$, and thus conclude that $T$ is uncountable.




(c) Prove that the set $\mathbb{N}^{\{1,2\}}$ of all functions $f : \{1, 2\} → \mathbb{N}$ is denumerable.






We solved question (a) and know $S$ is uncountable, we are looking to do (b) and if anyone wants to give a hint for (c) that would be great.
I'm having trouble with notation, but we think:



$$T=\{\{(i,a_{i}+1):i \in \mathbb{N}\}: n\text{ corresponds to some real number }c \in S\}$$
$g: S \rightarrow T = \{(c_{m},\{(i,a_{i}+1):i \in \mathbb{N}\}\}$, where $c_{m}$ is an arbitrary element of $S$.




Then we tried to show $g$ is one-to-one and onto and didn't make much progress.



Alternatively, we thought of defining:
$$g = \{(c,f(i))\},$$ where $c \in S$ and $$f(i) = \begin{cases}2\text{ if }a_{i}=0\\ 1\text{ if }a_{i}=1\end{cases}$$


Answer



Thank you everyone for the feedback and suggestions. Andre, your suggestions for question 2(b) were very helpful, but not so much for 2(c) and we did something completely different.



Our solutions for 2(b) and 2(c) were:



2(b)

Prove that: $T=\{1,2\}^{\mathbb{N}}$ is uncountable.



For $c \in S$, we know $c=0.a_{1}a_{2}a_{3}\ldots$, where for the digit $a_{i}$ of $c$, $i \in \mathbb{N}$. Then for $T =\{1,2\}^{\mathbb{N}}$, we map $c$ to a subset $\mathbb{B}=T-\{(1,1),(2,1),(3,1),...\}$ of $T$, to ensure that $0.000\ldots$ does not have an image in $\mathbb{B}$, since $0.000\ldots \notin S$. Define the elements $f \in \mathbb{B}$ as $f=\{(i,b_{i})\mid i \in \mathbb{N}, b_{i} \in \{1,2\}\}$, where $b_{i}=a_{i}+1$. By Result 2, we know $S$ is uncountable, so if we can show that there exists a bijective function $g:S \rightarrow \mathbb{B}$, then $\mathbb{B}$ must be uncountable. We now show this for $g=\{(c,f)\mid c \in S, f \in \mathbb{B}\}$; since $\mathbb{B} \subseteq T$, by Theorem 10.9 $T$ would then be uncountable.

For $g$ to be onto, we take an arbitrary element $f \in \mathbb{B}$, where $f=\{(1,b_{1}),(2,b_{2}),(3,b_{3}),...\}$, which can also be written as $\{b_{1},b_{2},b_{3},...\}$ or $f=\{b_{i}\}_{i=1}^{\infty}$, where $b_{i} \in \{1,2\}$. Then, for $c=0.a_{1}a_{2}a_{3}\ldots$, the $i^{th}$ digit $a_{i}$ is given by $a_{i}=b_{i}-1$. So $c=0.(b_{1}-1)(b_{2}-1)(b_{3}-1)\ldots$; therefore, since all $c \in S$ have unique decimal representations, for any arbitrary $f \in \mathbb{B}$ there exists a $c \in S$, and $g:S \rightarrow \mathbb{B}$ is onto.

For $g$ to be one-to-one, we assume for $c_{1},c_{2} \in S$ that $g(c_{1})=g(c_{2})$, where $c_{1}=0.x_{1}x_{2}x_{3}\ldots$ and $c_{2}=0.y_{1}y_{2}y_{3}\ldots$, with $x_{i},y_{i} \in \{0,1\}$. So $g(0.x_{1}x_{2}x_{3}\ldots)=g(0.y_{1}y_{2}y_{3}\ldots)$; then $\{x_{1}+1,x_{2}+1,x_{3}+1,...\}=\{y_{1}+1,y_{2}+1,y_{3}+1,...\}$. Since for every digit $x_{i}$ of $c_{1}$ and the corresponding digit $y_{i}$ of $c_{2}$ we have $x_{i}+1=y_{i}+1$, then $x_{i}=y_{i}$. So every digit of $c_{1}$ equals the corresponding digit of $c_{2}$, and all $c \in S$ have unique decimal representations, so $c_{1}=c_{2}$. Thus $g$ is one-to-one.

So, since $g: S \rightarrow \mathbb{B}$ is one-to-one and onto, it is bijective, and so $|S|=|\mathbb{B}|$.
Since $\mathbb{B} \subseteq T$ and $\mathbb{B}$ is uncountable, by Theorem 10.9, $T$ is uncountable.
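The digit map $b_i = a_i + 1$ used in 2(b) can be illustrated on truncated expansions (an addition; finite prefixes only, since code cannot hold infinite sequences):

```python
def to_function(digits):
    # digits of c in S (0s and 1s) -> values of f in T (1s and 2s)
    return tuple(d + 1 for d in digits)

def to_digits(values):
    # the inverse map
    return tuple(v - 1 for v in values)

c = (0, 1, 0, 0, 1, 1)            # stands for c = 0.010011... in S
f = to_function(c)
assert f == (1, 2, 1, 1, 2, 2)
assert to_digits(f) == c          # round trip recovers the digits
print(f)
```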



2(c)
There is a table and figure included in our proof, but I'll list out some of what we had:



Let $f$ be an arbitrary function $f \in \mathbb{N}^{\{1,2\}}$, so that $f$ is of the form $f=\{(1,a),(2,b)\}$ with $a,b \in \mathbb{N}$. We list the entries for $a$ and $b$ as their own ordered pair in the following table:
If we traverse this table as shown, we hit the ordered pair that gives the values for $a$ and $b$ for every possible function $f=\{(1,a),(2,b)\}$. So, the set of all functions $f:\{1,2\} \rightarrow \mathbb{N}$ can be listed in a sequence as:
$$

\mathbb{N}^{\{1,2\}} = \{\{(1,1),(2,1)\},\{(1,1),(2,2)\},\{(1,2),(2,1)\},
\{(1,1),(2,3)\},\{(1,2),(2,2)\},\{(1,3),(2,1)\},\{(1,1),(2,4)\},...\}
$$
Therefore, $\mathbb{N}^{\{1,2\}}$ is denumerable.
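The diagonal traversal described above can be made explicit (an addition to the answer): enumerate the pairs $(a,b) \in \mathbb{N}\times\mathbb{N}$ by increasing $a+b$, each pair standing for the function $f=\{(1,a),(2,b)\}$.

```python
def enumerate_pairs(count):
    # list the first `count` pairs (a, b) in diagonal order
    out = []
    s = 2                          # current diagonal a + b, starting from 1 + 1
    while len(out) < count:
        for a in range(1, s):
            out.append((a, s - a))
        s += 1
    return out[:count]

first = enumerate_pairs(7)
# matches the sequence listed above:
# {(1,1),(2,1)}, {(1,1),(2,2)}, {(1,2),(2,1)}, {(1,1),(2,3)}, ...
assert first == [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4)]
print(first)
```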


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...