Thursday, January 31, 2019

discrete mathematics - Sum of Arithmetic progression problem


I have the following progression:



$2n+(2n-1)+\cdots+n$



which is equivalent to:



$n+(n+1)+\cdots+2n$



The answer says:




You match $2n-i$ with $n+i$ for all $i=0,\ldots,n$. Therefore you have $(n+1)$ terms of $3n$, so the sum of the two identical sequences is $3n(n+1)$, and therefore the sum of one sequence is $1.5n(n+1)$.


Or you can just apply the formula for the sum of an arithmetic progression; see the Wikipedia page https://en.wikipedia.org/wiki/Arithmetic_progression



Thus, I tried applying the sum of an arithmetic progression, since I need to arrive at $1.5n(n+1)$. But when I apply the formula, it gives me a different result: $\frac n2(2n+(n-1)\cdot 1) = 1.5n^2-0.5n$.


How can I get "$1.5n(n+1)$"?


Answer



The sum of an arithmetic progression is (first term + last term)*(number of terms)/2.


Here :


  • first term=n,


  • last term=2n,

  • number of terms=n+1

Applying the formula, $Sum=\frac{(n+2n)*(n+1)}{2}=1.5n(n+1)$, which is your desired result. (Your attempt used $n$ as the number of terms, but the progression $n, n+1, \ldots, 2n$ has $n+1$ terms; that is why you got $1.5n^2-0.5n$ instead.)
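As a quick sanity check (a sketch added here, not part of the original answer), the closed form can be compared against the direct sum in a few lines of Python:

```python
# Compare the direct sum n + (n+1) + ... + 2n with 1.5*n*(n+1).
for n in range(1, 10):
    direct = sum(range(n, 2 * n + 1))   # n+1 terms
    closed = 3 * n * (n + 1) // 2       # 1.5*n*(n+1), always an integer
    assert direct == closed, (n, direct, closed)
print("1.5*n*(n+1) matches the direct sum for n = 1..9")
```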


discrete mathematics - How to compute $3^{2003}\pmod {99}$ by hand?





Compute $3^{2003}\pmod {99}$ by hand?





It can be computed easily by evaluating $3^{2003}$, but it sounds stupid. Is there a way to compute it by hand?


Answer



I would calculate separately modulo $9$ and $11$ and put the pieces together at the end.



Modulo $9$ is trivial, we get $0$.



Note that $3^5\equiv 1\pmod{11}$, so $3^{2000}\equiv 1\pmod{11}$, and therefore $3^{2003}\equiv 3^3\equiv 27\pmod{11}$. This is already congruent to $0$ modulo $9$, so we are finished.
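As a cross-check of the hand computation, Python's built-in three-argument pow does modular exponentiation directly (this snippet is an addition, not part of the original answer):

```python
# 3^2003 mod 99, computed by fast modular exponentiation.
print(pow(3, 2003, 99))  # 27, matching the CRT argument above
```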


real analysis - Finding $\lim\limits_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}$ for continuous $f:[0,1]\to[0,\infty)$





Find $$\lim_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}$$if $f:[0,1]\rightarrow(0,\infty)$ is a continuous function.




My attempt:




Say $f(x)$ has a max. value $M$. Then $$\left(\int_0^1(f(x))^ndx\right)^\frac{1}{n}\leq\left(\int_0^1M^ndx\right)^\frac{1}{n} =M$$



I cannot figure out what to do next.


Answer



Your guess that it should be the maximum is a good guess. You have shown that the limit must be $\leq M$. We will now show that the limit must be greater than or equal to $M-\epsilon$ for any $\epsilon$, from which you can conclude that the limit is indeed $M$.



Since $f$ is continuous and attains its maximum $M$ at some point $x_0\in[0,1]$, given $\epsilon > 0$, there exists a $\delta > 0$ such that
$$f(x) > M - \epsilon$$ for all $x \in (x_0 -\delta, x_0 + \delta)$. Hence, we have
$$\int_0^1 f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} (M - \epsilon)^n dx = (M-\epsilon)^n 2\delta$$
Hence for any $\epsilon > 0$,

$$\left(\int_0^1 f(x)^n dx\right)^{1/n} > (M-\epsilon)(2 \delta)^{1/n}$$
Letting $n \to \infty$, we get what we want, i.e.,
$$\lim_{n \to \infty}\left(\int_0^1 f(x)^n dx\right)^{1/n} \geq (M-\epsilon)$$
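To see the result numerically, here is a small sketch (added here; the test function $f(x)=1+x(1-x)$ and the midpoint-rule grid are arbitrary choices): the $n$-th roots creep up toward $\max f = 1.25$.

```python
import math

# Midpoint-rule approximation of (integral of f^n)^(1/n) on [0,1]
# for f(x) = 1 + x*(1-x), whose maximum is 1.25 at x = 0.5.
f = lambda x: 1 + x * (1 - x)
N = 100_000
xs = [(k + 0.5) / N for k in range(N)]
for n in [1, 10, 100, 1000]:
    integral = sum(f(x) ** n for x in xs) / N
    print(n, integral ** (1 / n))   # approaches 1.25 as n grows
```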


elementary set theory - Cardinality of the set of all subsets of $E$ equipotent to $E$



I'm trying to prove the following statement (an exercise in Bourbaki's Set Theory):



If $E$ is an infinite set, the set of subsets of $E$ which are equipotent to $E$ is equipotent to $\mathfrak{P}(E)$.



As a hint, there is a reference to a proposition of the book, which reads:




Every infinite set $X$ has a partition $(X_\iota)_{\iota\in I}$ formed of countably infinite sets, the index set $I$ being equipotent to $X$.



I don't have any idea how that proposition might help.



If $E$ is countable, then a subset of $E$ is equipotent to $E$ iff it is infinite. But the set of all finite subsets of $E$ is equipotent to $E$. So its complement in $\mathfrak{P}(E)$ has to be equipotent to $\mathfrak{P}(E)$ by Cantor's theorem. Hence the statement is true if $E$ is countable. Unfortunately, I don't see a way to generalize this argument to uncountable $E$.



I'd be glad for a small hint to get me going.


Answer



Using the axiom of choice, every infinite set $X$ can be divided into two disjoint sets $X_0\sqcup X_1$, both of which are equinumerous with $X$. (Just well-order $X$, and take every other point in the enumeration.)




Now, consider all sets of the form $X_0\cup A$ for any $A\subset X_1$. There are $2^X$ many such $A$ and hence $2^X$ many such sets, and each is equinumerous with the original set $X$. So we've got $2^X$ many sets as desired, and there cannot be more than this, so this is the precise number.



Incidentally, the stated answer to this question does in fact depend on the axiom of choice, since it is known to be consistent with $ZF+\neg AC$ that there are infinite Dedekind finite sets, and these are not equinumerous with any proper subsets of themselves. So for such an infinite set $X$, there would be only one subset to which it is equinumerous.


Wednesday, January 30, 2019

discrete mathematics - Proving $(0,1)$ and $[0,1]$ have the same cardinality

Prove $(0,1)$ and $[0,1]$ have the same cardinality.



I've seen questions similar to this but I'm still having trouble. I know that for $2$ sets to have the same cardinality there must exist a bijection from one set to the other. I think I can create a bijection from $(0,1)$ to $[0,1]$, but I'm not sure how to do the opposite: I'm having trouble creating a function that maps $[0,1]$ to $(0,1)$. The best I can think of would be something like $\frac x2$.



Help would be great.

calculus - Can function be differentiable but not continuous?

Is there any possible function that is not continuous but differentiable?



For example, take the function defined by $f(x) = \pi x + \pi $ whenever $x<0$, and $f(x) = \arctan( \pi x)$ when $0\leq x$.




I know this function is not continuous at $0$, but when I differentiate the two pieces, the one-sided derivatives agree there. Should I consider it differentiable or not?

Tuesday, January 29, 2019

multivariable calculus - Calculating partial derivatives




Let $f$ and $g$ be functions of one real variable and define $F(x,y)=f[x+g(y)]$. Find formulas for all the partial derivatives of $F$ of first and second order.



For the first order, I think we have:



$\frac{\partial F}{\partial x}=\frac{\partial f}{\partial x}+ \frac{\partial f}{\partial y}$



$\frac{\partial F}{\partial y}=\frac{\partial f}{\partial x}g'(x)+ \frac{\partial f}{\partial y}g'(y)$



Is it correct? What are the second order derivatives?




Thank you


Answer



$f$ is a function of one variable. Therefore the notation $\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}$ is problematic (and I suggest you adopt the prime notation in that case). What you have written is not correct.



The correct formulas are: $$\frac{\partial F}{\partial x}(x,y)=f'(x+g(y)) $$



$$\frac{\partial F}{\partial y}(x,y)=f'(x+g(y))g'(y) $$ $$\frac{\partial^2 F}{\partial x^2}(x,y)=f''(x+g(y)) $$ $$\frac{\partial^2 F}{\partial x \partial y}(x,y)=f''(x+g(y))g'(y)=\frac{\partial^2 F}{\partial y \partial x}(x,y) $$



$$\frac{\partial^2 F}{\partial y^2}(x,y)=f''(x+g(y))\,g'(y)^2+f'(x+g(y))\,g''(y) $$
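The formulas can be double-checked symbolically with SymPy's undefined functions (a sketch added here; SymPy prints the derivatives in its own Subs/Derivative notation):

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')
F = f(x + g(y))

print(sp.diff(F, x))     # represents f'(x + g(y))
print(sp.diff(F, y))     # represents f'(x + g(y)) * g'(y)
print(sp.diff(F, y, y))  # f''(x+g(y))*g'(y)**2 + f'(x+g(y))*g''(y)
print(sp.simplify(sp.diff(F, x, y) - sp.diff(F, y, x)))  # 0: mixed partials agree
```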



elementary number theory - Write 100 as the sum of two positive integers



Write $100$ as the sum of two positive integers, one of them being a multiple of $7$, while the other is a multiple of $11$.



Since $100$ is not a big number, I followed the straightforward reasoning of writing all multiples up to $100$ of either $11$ or $7$, and then finding the complement that is also a multiple of the other. So then $100 = 44 + 56 = 4 \times 11 + 8 \times 7$.


But is it the smart way of doing it? Is it the way I was supposed to solve it? I'm thinking here about a situation with a really large number, which would make my plug-in method unwise.


Answer




From Bezout's Lemma, note that since $\gcd(7,11) = 1$, which divides $100$, there exists $x,y \in \mathbb{Z}$ such that $7x+11y=100$.


A candidate solution is $(x,y) = (8,4)$.


The rest of the solutions are given by $(x,y) = (8+11m,4-7m)$, where $m \in \mathbb{Z}$. Since we are looking for positive integers as solutions, we need $8+11m > 0$ and $4-7m>0$, which gives us $-\frac8{11} < m < \frac47$. The only integer in this range is $m=0$, which gives back $(x,y) = (8,4)$.


If you do not like to guess your candidate solution, a more algorithmic procedure is to use Euclid's algorithm to obtain a solution to $7a+11b=1$, as follows.


We have \begin{align} 11 & = 7 \cdot (1) + 4 \implies 4 = 11 - 7 \cdot (1)\\ 7 & = 4 \cdot (1) + 3 \implies 3 = 7 - 4 \cdot (1) \implies 3 = 7 - (11-7\cdot (1))\cdot (1) = 2\cdot 7 - 11\\ 4 & = 3 \cdot (1) + 1 \implies 1 = 4 - 3 \cdot (1) \implies 1 = (11-7 \cdot(1)) - (2\cdot 7 - 11) \cdot 1 = 11 \cdot 2-7 \cdot 3 \end{align} This means the solution to $7a+11b=1$ using Euclid's algorithm is $(-3,2)$. Hence, a candidate solution to $7x+11y=100$ is $(-300,200)$. Now all possible solutions are given by $(x,y) = (-300+11n,200-7n)$. Since we need $x$ and $y$ to be positive, we need $-300+11n > 0$ and $200-7n > 0$, which gives us $$\dfrac{300}{11} < n < \dfrac{200}7 \implies 27 \dfrac3{11} < n < 28 \dfrac47$$ The only integer in this range is $n=28$, which again gives $(x,y) = (8,4)$.
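The procedure above translates directly into a short recursive extended-Euclid sketch (the code and the helper name ext_gcd are additions here):

```python
def ext_gcd(a, b):
    """Return (g, u, v) with a*u + b*v == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

g, a, b = ext_gcd(7, 11)       # 7*a + 11*b == 1
x, y = 100 * a, 100 * b        # candidate solution of 7x + 11y == 100
print((a, b), (x, y))          # (-3, 2) (-300, 200), as above
n = 28                         # shift by multiples of (11, -7)
print(x + 11 * n, y - 7 * n)   # 8 4
```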


real analysis - Application of L'Hospital's Rule on the definition of a derivative.


I'm currently taking an introduction to Calculus course and I've come across the following identity:



$$f''(a) = \lim_{x\rightarrow a}\frac{f'(x)(x-a)-\bigl(f(x)-f(a)\bigr)}{\frac{1}{2}(x-a)^2}$$

How would one come up with this? My best guess is using L'Hospital's Rule on $$\lim_{x\rightarrow a}{\frac{f(x)-f(a)}{x-a}}$$



but I'm not very sure how, since differentiating both the numerator and denominator merely yields


$$\lim_{x\rightarrow a}{f'(x)} = f'(a)$$


Answer



The result holds under the weaker assumption that $f''(a) $ exists (other answers assume the continuity of $f''$ at $a$ or even more). Also note that under this weaker assumption it is not possible to apply L'Hospital's Rule on the expression under limit in question and hence a slight modification is required.



By definition of derivative we have $$\lim_{x\to a} \frac{f'(x) - f'(a)} {x-a} =f''(a)\tag{1}$$ Adding this to the limit in question it is clear that our job is done if we can establish that $$\lim_{x\to a} \frac{f(x) - f(a) - (x-a) f'(a)} {(x-a)^2}=\frac{f''(a)}{2}\tag{2}$$ And the above limit is easily evaluated by a single application of L'Hospital's Rule. Applying it on the fraction on left side we get a new fraction $$\frac{f'(x) - f'(a)} {2(x-a)}$$ which clearly tends to $f''(a) /2$ (via $(1)$) and hence the fraction on left side of $(2)$ also tends to the same value and the identity $(2)$ is established.


discrete mathematics - Is this a valid proof for $\sqrt5$ being irrational?




'Prove by contradiction that $\sqrt5$ is irrational.'



Proof: Assume that $\sqrt5$ is rational i.e. $\sqrt5 = p/q,$ where $p,q \in \Bbb Z$.



Then $\sqrt5q = p$



Now for $p$ to be an integer, $\sqrt5q$ must be an integer, i.e. $q=\sqrt5$, $2\sqrt5$, $3\sqrt5$, ...



$\implies$ q must be irrational for p to be an integer.




$\implies$ $p,q \notin \Bbb Z$.



Contradiction. Therefore $\sqrt5$ is irrational. #


Answer



Your proof is not correct: to show that $q= k\sqrt{5}$ is irrational for every nonzero integer $k$, you would first have to prove that $\sqrt{5}$ is irrational.



Note that if we multiply a non-zero integer by an irrational number, the product will be irrational. (You can prove this for practice.)



So you assume what you want to prove.




Here it is a valid proof which i hope it will help you:




Assume that $\sqrt{5}=\frac{m}{n}$ where $(m,n)=1$.



We assume that $(m,n)=1$ because if its not then we can cancel a priori every common factor of the numerator and denominator until we remain with a fraction $\frac{s}{l}$ in its lowest terms namely $(s,l)=1$.



Thus $$5n^2=m^2 \Rightarrow 5|m^2 \Rightarrow 5|m$$ because $5$ is a prime number.




Also, because $5|m$, we can write $m=5s$, and so $m^2=25s^2$.



Thus $$5n^2=25s^2 \Rightarrow n^2=5s^2 \Rightarrow 5|n$$



Now we have a contradiction, because $5|n$ and $5|m$ while we assumed that $(n,m)=1$.



So $\sqrt{5}$ is an irrational.



terminology - What is the technical term for factors of a number (via summation)?


I know that 5 and 3 are factors of 15. Here factors are numbers whose multiplication gives 15.


What is the technical term for numbers whose sum (and not multiplication) is a given value?


i.e. technical term for following numbers


$14 + 1$ (sum is $15$)

$11 + 2 + 2$ (sum is $15$)

$1 + 1 + 1 + \cdots + 1$, $15$ times (sum is $15$)


Answer



These are called the partitions of a number.
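For illustration (a sketch added here), a tiny recursive generator lists the partitions of a small number:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as lists of parts in decreasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

print(list(partitions(4)))
# [[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]] -- the 5 partitions of 4
```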


elementary number theory - Reducing products in modular arithmetic

During an effort to show that $2^{20} \equiv 1 \mod{41}$, I have done the following:


$2^{20} = \left(2^5\right)^4 = 32^4$


Since $32 \equiv -9 \mod{41}$, we get $32^4 \equiv (-9)^4 = 81\cdot81 \mod 41$


From here, I know that I can reduce the 81s, such that I get $2^{20} \equiv (-1)(-1) \mod 41$, so I can solve the problem, but I can't connect this reduction to a particular rule of modular arithmetic.


Question


From $2^{20} \equiv 81 \cdot 81 \mod 41$, which rule is it that states that the $81$s can be reduced to their individual congruences, modulo $41$? In other words, why may I reduce them to $(-1)(-1)$?



I'm familiar with some of the rules, like the basic addition/subtraction/multiplication/power ones, but if it's one of these, I don't quite see the connection.
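The rule in play is the multiplicative property of congruences: if $a \equiv b \pmod m$ and $c \equiv d \pmod m$, then $ac \equiv bd \pmod m$. Applying it twice with $81 \equiv -1 \pmod{41}$ justifies the reduction. A quick numeric sanity check (added here):

```python
print(81 % 41, (-1) % 41)   # 40 40, so 81 ≡ -1 (mod 41)
print((81 * 81) % 41)       # 1
print(((-1) * (-1)) % 41)   # 1, the same residue as promised
print(pow(2, 20, 41))       # 1, the original claim
```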

calculus - Prove the following inequality using the Mean Value Theorem

I want to show that



$$\log(2+x) - \log(x) \lt \frac{2}{x}$$
$\forall x \in \mathbb{R^+}$




I know I need to apply the Mean Value Theorem to find an upper bound of the function to the left and show that it is smaller than $\frac{2}{x}$, but I can't find the correct upper bound. I've tried multiple variations of the inequality. My teacher also said that I only needed to check in the interval from 0 to 2, but I'm not sure why.

Monday, January 28, 2019

calculus - Find $\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}$.





Find $$\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}.$$




I don't know how to start. Hints are also appreciated.


Answer



let $$y= \frac{(n!)^{1/n}}{n}.$$

$$\implies y=\left (\frac{n(n-1)(n-2)\cdots3\cdot2\cdot1}{n\cdot n\cdot n\cdots n}\right)^\frac{1}{n} $$
Note that we can distribute the $n$'s in the denominator, giving one $n$ to each factor:
$$\implies \log y= \frac {1}{n}\left(\log\frac{1}{n}+\log\frac{2}{n}+\cdots+\log\frac{n}{n}\right)$$
applying $\lim_{n\to\infty}$ on both sides, we find that R.H.S is of the form



$$ \lim_{n\to\infty} \frac{1}{n} \sum_{r=1}^{n}f \left(\frac{r}{n}\right)$$
which can be evaluated by integration $$=\int_0^1\log(x)dx$$
$$=x\log x-x$$ plugging in the limits (carefully here) we get $-1$.



$$ \log y=-1,\implies y=\frac{1}{e}$$
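A numeric check of the limit (a sketch added here), working in log space via lgamma to avoid overflowing $n!$:

```python
import math

# (n!)^(1/n) / n -> 1/e = 0.3678794...
for n in [10, 100, 1000, 10000]:
    log_val = math.lgamma(n + 1) / n - math.log(n)  # log of (n!)^(1/n)/n
    print(n, math.exp(log_val))
print("1/e =", 1 / math.e)
```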



measure theory - $v(B)=\int_{B} f \,d\mu $


I have a question in integration theory:


If I have $(\Psi,\mathcal{G},\mu)$ a $\sigma$-finite measure space and $f$ a $[0,\infty]$-valued measurable function on $(\Psi,\mathcal{G})$ that is finite a.s.


So my question is if I define for $B\in \mathcal{G}$ $$v(B)=\int_{B} f d\mu $$



Is $(\Psi,\mathcal{G},v)$ a $\sigma$-finite measure space too ?



I think this relationship between $v$ and $\mu$ can help me for computational purposes.


Could someone help me? Thanks for the time and help.


Answer




If $\mu$ is $\sigma$-finite, there exists a countable collection of disjoint sets $X_i$ s.t. $\mu(X_i)<\infty$ and $\bigcup_{i\ge 1}X_i=X$. Consider $F_j=\{j-1\le f<j\}$ for $j\ge 1$. Since $f$ is finite a.s., the countably many sets $X_i\cap F_j$ cover $X$ up to a $\mu$-null set (which is also $v$-null, since the integral of $f$ over a $\mu$-null set is $0$), and $$v(X_i\cap F_j)=\int_{X_i\cap F_j} f\,d\mu\le j\,\mu(X_i)<\infty.$$ So yes, $v$ is $\sigma$-finite as well.

elementary number theory - How to solve this congruence $17x \equiv 1 \pmod{23}$?



Given $17x \equiv 1 \pmod{23}$



How to solve this linear congruence?
All hints are welcome.




edit:
I know the Euclidean Algorithm and know how to solve the equation $17m+23n=1$
but I don't know how to compute x with the use of m or n.


Answer



To do modular division I do this:



$an - bm = c$, where $c$ is the dividend, $b$ is the modulus, and $a$ is the divisor; then $n$ is the quotient.



17n - 23m = 1




Then, using the Euclidean algorithm, reduce to $\gcd(a,b)$ and record each calculation.



As described by http://mathworld.wolfram.com/DiophantineEquation.html



$$\begin{array}{cc|cc}
17 & 23 & 14 & 19 \\
17 & 6 & 14 & 5 \\
11 & 6 & 9 & 5 \\
5 & 6 & 4 & 5 \\
5 & 1 & 4 & 1 \\
1 & 1 & 0 & 1
\end{array}$$

The left pair of columns is the Euclidean algorithm; the right pair is the reverse procedure.



Therefore $ 17\cdot19 - 23\cdot14 = 1$, i.e. $n=19$ and $m=14$.




The result is that $17^{-1} \equiv 19 \pmod{23}$.



This method might not be as quick as the ones in the other posts, but it is what I have implemented in code. The others could be implemented as well, but I thought I would share my method.
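For what it's worth, modern Python produces the same inverse in one call (three-argument pow accepts negative exponents since Python 3.8); this check is an addition here:

```python
print(pow(17, -1, 23))   # 19
print((17 * 19) % 23)    # 1, confirming 17*19 ≡ 1 (mod 23)
```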


Sunday, January 27, 2019

complex analysis - Analytic continuation "playing nice" with function composition


Suppose I have a meromorphic function $f(s)$, and a sequence of functions $g_N(t)$ that diverge to infinity, but for which the analytic continuation exists. A good example would be $g_N(t)= \sum_{n=1}^N \frac{1}{n^{0.5+it}}$, which diverges as $N \to \infty$, but which can be analytically continued to $g_\infty(t)=\zeta(0.5+it)$.


Now consider the function $f(g_N(t))$. As we increase $N$, the argument to $f$ blows up. Naively, there are two ways to do the analytic continuation:



  1. First analytically continue $g_N(t)$ to $g_\infty(t)$, and then take $f(g_\infty(t))$




  2. Analytically continue $f(g_N(t))$ all at once



In other words, for the second approach, we define a new family of functions $(f \circ g)_N(t)$, which has a different limit than the analytic continuation of $f(g_\infty(t))$.



Am I correct that this shows that analytic continuation and function composition do not play nice with one another? Is there a general theory of when the two approaches will agree?


For example, look at $f(t) = \frac{1}{t}$. Then as $N$ blows up, $f(g_N(t)) \to 0$, so the composition is the zero function. On the other hand, the analytic continuation $g_\infty(t)$ need not be strictly positive at all, or could even be zero at points, leading to poles.


Answer



Just to make my comment an official answer:


Assuming you really intend to ask this question



Assume $g_N,g_\infty:\Omega\to\mathbb{C}$ are holomorphic functions s.t. $g_N \xrightarrow{N\to\infty} g_\infty$ pointwise on some open subset $\emptyset\neq\Omega_0\subseteq\Omega$. Given any entire function $f:\mathbb{C}\to\mathbb{C}$, is it true that $f\circ g_N$ converges pointwise to a holomorphic function $h$ on $\Omega_0$ and that $f\circ g_\infty$ is the analytic continuation of $h$ to all of $\Omega$?



In that case, the answer is "yes" for obvious reasons: $f$ is continuous, therefore $f(g_N(z)) \to f(g_\infty(z))$ for all $z\in\Omega_0$. We can therefore define $h:=(f\circ g_\infty)_{|\Omega_0}$ and have found a holomorphic function on $\Omega_0$ such that $f\circ g_N$ converges pointwise to $h$. Furthermore: By construction $f\circ g_\infty$ is a holomorphic extension of $h$ to all of $\Omega$ and by the identity theorem it is the unique such function, i.e. the analytic continuation of $h$.


Note that the two instances of pointwise convergence can be replaced by a lot of other convergence modes. For example one could ask for uniform convergence, locally uniform convergence, and many more.



Find all functions $f: \mathbb{R}\rightarrow \mathbb{R}$ satisfying a functional equation

Find all functions $f: \mathbb{R}\rightarrow \mathbb{R}$ satisfying:
$f\left ( x \right )f\left ( y \right )+ f\left ( xy \right )+ f\left ( x \right )+f\left ( y \right )= f\left ( x+y \right )+ 2\,xy$



I tried the standard way: $x=0, x=y, x=1,...$ but without any success. I spent quite some time trying to solve it but didn't succeed.




I tried to reduce it to Cauchy's 1-4 equations but didn't succeed. In the course of it, I found interesting works of Aczél, Erdős, and even a Putnam problem, but they are not directly related, I guess.



Any idea? I am interested in this problem but I couldn't solve it!

algebra precalculus - Proving an inequality involving $\log$ and $e$



This inequality should be fairly easy to show. I think I'm just having trouble looking at it the right way (It's used in a proof without explanation).




$$(1-\frac{1}{\log ^{2} n})^{(2 \log n) -1}\geq e^{-2/\log n}$$



Any help is much appreciated. Thanks
Edit: Log is base 2


Answer



Assuming $n>2$ are natural numbers and $\log = \log_2$:

$(1 + \frac 1x)^x$ is increasing, with $\lim_{x\to \infty}(1+ \frac 1x)^x = e$, so $(1+\frac 1x)^x < e$ for $x \ge 1$.

Taking reciprocals, $(1-\frac 1x)^{x-1} = \left(1+\frac 1{x-1}\right)^{-(x-1)} > \frac 1e$ for $x > 1$. (The exponent $x-1$ matters here: $(1-\frac 1x)^{x}$ increases to $\frac 1e$ from below, so it is not $> \frac 1e$.)

So, with $x = \log^2 n > 1$,

$$ \left(1 - \frac 1{\log^2 n}\right)^{\log^2 n - 1} > e^{-1}. $$

Raising both sides to the positive power $\frac 2{\log n}$,

$$ \left( 1 - \frac 1{\log^2 n}\right)^{2\log n - \frac 2{\log n}} > e^{\frac{-2}{\log n}}. $$

If $\log n \ge 2$, i.e. $n \ge 4$, then $2\log n - 1 \le 2\log n - \frac 2{\log n}$, and since $0< 1 - \frac 1{\log^2 n} < 1$, lowering the exponent only increases the value:

$$ \left( 1 - \frac 1{\log^2 n}\right)^{2\log n - 1} \ge \left( 1 - \frac 1{\log^2 n}\right)^{2\log n - \frac 2{\log n}} > e^{\frac{-2}{\log n}}. $$

The remaining case $n = 3$ can be checked directly: $(1 - \frac 1{\log^2 3})^{2\log 3 - 1} \approx 0.332$, while $e^{-2/\log 3} \approx 0.283$.




If $n = 2$ then $\log^2 2 = 1$, so

$$ \left( 1 - \frac 1{\log^2 n}\right)^{2\log n - 1} = 0^{1} = 0 < e^{-2/\log 2}, $$

and the inequality actually fails there.



Likewise if $n=1$ we have division by $0$.



Perhaps $\log = \ln =\log_e$?
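A brute-force numeric check of the inequality for small $n$ (an addition here, with $\log = \log_2$ as stated):

```python
import math

for n in range(3, 200):
    t = math.log2(n)
    lhs = (1 - 1 / t**2) ** (2 * t - 1)
    rhs = math.exp(-2 / t)
    assert lhs >= rhs, n
print("inequality holds for n = 3..199")
```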



trigonometry - What is the exact value of sin 1 + sin 3 + sin 5 ... sin 177 + sin179 (in degrees)?

My attempt:



First I converted this expression to sum notation:



$\sin(1^\circ) + \sin(3^\circ) + \sin(5^\circ) + ... + \sin(175^\circ) + \sin(177^\circ) + \sin(179^\circ)$ = $\sum_{n=1}^{90}\sin(2n-1)^\circ$




Next, I attempted to use Euler's formula for the sum, since I needed this huge expression to be simplified in exponential form:



$\sum_{n=1}^{90}\sin(2n-1)^\circ$ = $\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$



$\operatorname{Im}(\sum_{n=1}^{90}cis(2n-1)^\circ)$ = $\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$



$\operatorname{Im}(\sum_{n=1}^{90}e^{i(2n-1)^\circ})$ = $\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$



Next, I used the sum of the finite geometric series formula on this expression:




$\operatorname{Im}(e^{i} + e^{3i} + e^{5i} + ... + e^{175i} + e^{177i} + e^{179i})$ = $\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$



$\operatorname{Im}(\dfrac{e^i(1-e^{180i})}{1-e^{2i}})$ = $\operatorname{Im}(\dfrac{2e^i}{1-e^{2i}})$



Now I'm stuck in here;
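Reading all the exponents as degrees, as in the work above, one can at least confirm numerically that the direct sum and the geometric-series expression agree; both come out to about $57.2987$, i.e. $1/\sin(1^\circ)$, which hints at the closed form. (This check is an addition here.)

```python
import cmath, math

deg = math.pi / 180
direct = sum(math.sin((2 * n - 1) * deg) for n in range(1, 91))
closed = (2 * cmath.exp(1j * deg) / (1 - cmath.exp(2j * deg))).imag
print(direct, closed)     # both ~57.29868...
print(1 / math.sin(deg))  # 1/sin(1 degree), the same value
```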

calculus - A limit problem $\lim\limits_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$



This is a problem from "A Course of Pure Mathematics" by G H Hardy. Find the limit $$\lim_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$$ I had solved it long back (solution presented in my blog here) but I had to use the L'Hospital's Rule (another alternative is Taylor's series). This problem is given in an introductory chapter on limits and the concept of Taylor series or L'Hospital's rule is provided in a later chapter in the same book. So I am damn sure that there is a mechanism to evaluate this limit by simpler methods involving basic algebraic and trigonometric manipulations and use of limit $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ but I have not been able to find such a solution till now. If someone has any ideas in this direction please help me out.



PS: The answer is $1/18$ and can be easily verified by a calculator by putting $x = 0.01$
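The suggested calculator check, automated (a sketch added here):

```python
import math

f = lambda x: (x * math.sin(math.sin(x)) - math.sin(x) ** 2) / x ** 6
for x in [0.1, 0.05, 0.01]:
    print(x, f(x))
print("1/18 =", 1 / 18)   # 0.05555...
```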


Answer



Preliminary Results:




We will use
$$
\begin{align}
\frac{\color{#C00000}{\sin(2x)-2\sin(x)}}{\color{#00A000}{\tan(2x)-2\tan(x)}}
&=\underbrace{\color{#C00000}{2\sin(x)(\cos(x)-1)}\vphantom{\frac{\tan^2(x)}{\tan^2(x)}}}\underbrace{\frac{\color{#00A000}{1-\tan^2(x)}}{\color{#00A000}{2\tan^3(x)}}}\\
&=\hphantom{\sin}\frac{-2\sin^3(x)}{\cos(x)+1}\hphantom{\sin}\frac{\cos(x)\cos(2x)}{2\sin^3(x)}\\
&=-\frac{\cos(x)\cos(2x)}{\cos(x)+1}\tag{1}
\end{align}
$$
Therefore,

$$
\lim_{x\to0}\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}=-\frac12\tag{2}
$$
Thus, given an $\epsilon\gt0$, we can find a $\delta\gt0$ so that if $|x|\le\delta$
$$
\left|\,\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}+\frac12\,\right|\le\epsilon\tag{3}
$$
Because $\,\displaystyle\lim_{x\to0}\frac{\sin(x)}{x}=\lim_{x\to0}\frac{\tan(x)}{x}=1$, we have
$$
\sin(x)-x=\sum_{k=0}^\infty2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\tag{4}
$$
and
$$
\tan(x)-x=\sum_{k=0}^\infty2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1})\tag{5}
$$
By $(3)$ each term of $(4)$ is between $-\frac12-\epsilon$ and $-\frac12+\epsilon$ of the corresponding term of $(5)$. Therefore,
$$
\left|\,\frac{\sin(x)-x}{\tan(x)-x}+\frac12\,\right|\le\epsilon\tag{6}
$$
Thus,

$$
\lim_{x\to0}\,\frac{\sin(x)-x}{\tan(x)-x}=-\frac12\tag{7}
$$
Furthermore,
$$
\begin{align}
\frac{\tan(x)-\sin(x)}{x^3}
&=\tan(x)(1-\cos(x))\frac1{x^3}\\
&=\frac{\sin(x)}{\cos(x)}\frac{\sin^2(x)}{1+\cos(x)}\frac1{x^3}\\
&=\frac1{\cos(x)(1+\cos(x))}\left(\frac{\sin(x)}{x}\right)^3\tag{8}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\tan(x)-\sin(x)}{x^3}=\frac12\tag{9}
$$
Combining $(7)$ and $(9)$ yield
$$
\lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag{10}
$$

Additionally,
$$
\frac{\sin(A)-\sin(B)}{\sin(A-B)}
=\frac{\cos\left(\frac{A+B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}
=1-\frac{2\sin\left(\frac{A}{2}\right)\sin\left(\frac{B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}\tag{11}
$$






Finishing Up:

$$
\begin{align}
&x\sin(\sin(x))-\sin^2(x)\\
&=[\color{#C00000}{(x-\sin(x))+\sin(x)}][\color{#00A000}{(\sin(\sin(x))-\sin(x))+\sin(x)}]-\sin^2(x)\\
&=\color{#C00000}{(x-\sin(x))}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&+\color{#C00000}{(x-\sin(x))}\color{#00A000}{\sin(x)}\\
&+\color{#C00000}{\sin(x)}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&=(x-\sin(x))(\sin(\sin(x))-\sin(x))+\sin(x)(x-2\sin(x)+\sin(\sin(x)))\tag{12}
\end{align}
$$

Using $(10)$, we get that
$$
\begin{align}
&\lim_{x\to0}\frac{(x-\sin(x))(\sin(\sin(x))-\sin(x))}{x^6}\\
&=\lim_{x\to0}\frac{x-\sin(x)}{x^3}\lim_{x\to0}\frac{\sin(\sin(x))-\sin(x)}{\sin^3(x)}\lim_{x\to0}\left(\frac{\sin(x)}{x}\right)^3\\
&=\frac16\cdot\frac{-1}6\cdot1\\
&=-\frac1{36}\tag{13}
\end{align}
$$
and with $(10)$ and $(11)$, we have

$$
\begin{align}
&\lim_{x\to0}\frac{\sin(x)(x-2\sin(x)+\sin(\sin(x)))}{x^6}\\
&=\lim_{x\to0}\frac{\sin(x)}{x}\lim_{x\to0}\frac{x-2\sin(x)+\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-(\sin(x)-\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))\left(1-\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}\right)}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))+\sin(x-\sin(x))\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}}{x^5}\\
&=\lim_{x\to0}\frac{\sin(x-\sin(x))}{x^3}\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{x^2}\\[6pt]
&=\frac16\cdot\frac12\\[6pt]
&=\frac1{12}\tag{14}
\end{align}
$$
Adding $(13)$ and $(14)$ gives
$$
\color{#C00000}{\lim_{x\to0}\frac{x\sin(\sin(x))-\sin^2(x)}{x^6}=\frac1{18}}\tag{15}
$$






Added Explanation for the Derivation of $(6)$




The explanation below works for $x\gt0$ and $x\lt0$. Just reverse the red inequalities.



Assume that $x\color{#C00000}{\gt}0$ and $|x|\lt\pi/2$. Then $\tan(x)-2\tan(x/2)\color{#C00000}{\gt}0$.

$(3)$ is equivalent to
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-2\tan(x/2))\\[4pt]
\color{#C00000}{\le}&\sin(x)-2\sin(x/2)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-2\tan(x/2))\tag{16}
\end{align}
$$
for all $|x|\lt\delta$. Thus, for $k\ge0$,
$$
\begin{align}
&(-1/2-\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\\[4pt]
\color{#C00000}{\le}&2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\tag{17}
\end{align}
$$

Summing $(17)$ from $k=0$ to $\infty$ yields
$$
\begin{align}
&(-1/2-\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\\[4pt]
\color{#C00000}{\le}&\sin(x)-\lim_{k\to\infty}2^k\sin(x/2^k)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\tag{18}
\end{align}
$$
Since $\lim\limits_{k\to\infty}2^k\tan(x/2^k)=\lim\limits_{k\to\infty}2^k\sin(x/2^k)=x$, $(18)$ says
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-x)\\[4pt]
\color{#C00000}{\le}&\sin(x)-x\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-x)\tag{19}
\end{align}
$$
which, since $\epsilon$ is arbitrary, is equivalent to $(6)$.


sequences and series - Bernoulli's representation of Euler's number, i.e. $e=\lim\limits_{x\to \infty} \left(1+\frac{1}{x}\right)^x $






Possible Duplicates:
Finding the limit of $n/\sqrt[n]{n!}$
How come such different methods result in the same number, $e$?






I've seen this formula several thousand times: $$e=\lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x $$



I know that it was discovered by Bernoulli when he was working with compound interest problems, but I haven't seen the proof anywhere. Does anyone know how to rigorously demonstrate this relationship?



EDIT:

Sorry for my lack of knowledge in this, I'll try to state the question more clearly. How do we prove the following?



$$ \lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x = \sum_{k=0}^{\infty}\frac{1}{k!}$$


Answer



From the binomial theorem



$$\left(1+\frac{1}{n}\right)^n = \sum_{k=0}^n {n \choose k} \frac{1}{n^k} = \sum_{k=0}^n \frac{n}{n}\frac{n-1}{n}\frac{n-2}{n}\cdots\frac{n-k+1}{n}\frac{1}{k!}$$



but as $n \to \infty$, each term in the sum increases towards a limit of $\frac{1}{k!}$, and the number of terms to be summed increases so




$$\left(1+\frac{1}{n}\right)^n \to \sum_{k=0}^\infty \frac{1}{k!}.$$
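Numerically (a sketch added here), both expressions approach $e$, the partial sums of $\frac{1}{k!}$ much faster than the binomial sequence:

```python
import math

for n in [2, 5, 10, 20]:
    binom = (1 + 1 / n) ** n
    partial = sum(1 / math.factorial(k) for k in range(n + 1))
    print(n, binom, partial)
print("e =", math.e)
```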


Saturday, January 26, 2019

calculus - $\lim\limits_{n \rightarrow \infty} \sqrt{n^2+n} -n$?






Calculate
$\displaystyle\lim_{n \to \infty}
\left(\,{\sqrt{\,{n^{2} + n}\,} - n}\,\right)$.





$\displaystyle\lim_{n \to \infty}\left(\,{\sqrt{\,{n^{2} + n}\,} - n}\,\right) =
\infty - \infty$: we have an indeterminate form.



So I proceeded to factorize $$\sqrt{n^2+n} -n = \sqrt{ \frac{n^2(n+1)}{n}}-n =n \left[ \sqrt{\frac{n+1}{n}}-1 \right]$$



taking the limit:
$$\lim\limits_{n \rightarrow \infty} n \left[ \sqrt{\frac{n+1}{n}}-1 \right]= \infty \cdot 0$$



indeterminate again




What am I missing? What is the way forward? Much appreciated.


Answer



Hint: use the so-to-speak "multiply and divide by the conjugate" trick — it often helps to rationalize. In this case, since you're given a difference $\sqrt{n^2+n}-n$, multiply and divide by the sum of the same two terms $\sqrt{n^2+n}+n$:



$$\lim_{n\to\infty} \left(\sqrt{n^2+n}-n\right)=\lim_{n\to\infty} \frac{\left(\sqrt{n^2+n}-n\right)\left(\sqrt{n^2+n}+n\right)}{\sqrt{n^2+n}+n}=\cdots$$
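A quick numeric illustration of the trick (an addition here): the rationalized form $\frac{n}{\sqrt{n^2+n}+n}$ makes the limit $\frac12$ visible immediately.

```python
import math

for n in [10, 1000, 10**6]:
    naive = math.sqrt(n * n + n) - n
    rationalized = n / (math.sqrt(n * n + n) + n)
    print(n, naive, rationalized)   # both tend to 0.5
```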


real analysis - Limit of $L^p$ norm



Could someone help me prove that given a finite measure space $(X, \mathcal{M}, \mu)$ and a measurable function $f:X\to\mathbb{R}$ in $L^\infty$ and some $L^q$, $\displaystyle\lim_{p\to\infty}\|f\|_p=\|f\|_\infty$?




I don't know where to start.


Answer



Fix $\delta>0$ and let $S_\delta:=\{x,|f(x)|\geqslant \lVert f\rVert_\infty-\delta\}$ for $\delta<\lVert f\rVert_\infty$. We have
$$\lVert f\rVert_p\geqslant \left(\int_{S_\delta}(\lVert f\rVert_\infty-\delta)^pd\mu\right)^{1/p}=(\lVert f\rVert_\infty-\delta)\mu(S_\delta)^{1/p},$$
since $\mu(S_\delta)$ is finite and positive. As $\mu(S_\delta)^{1/p}\to 1$ when $p\to+\infty$,
this gives
$$\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty-\delta,$$
and letting $\delta\to 0$,
$$\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty.$$
As $|f(x)|\leqslant\lVert f\rVert_\infty$ for almost every $x$, we have for $p>q$, $$
\lVert f\rVert_p\leqslant\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p},$$
giving the reverse inequality.



real analysis - The convergence of $\sqrt {2+\sqrt {2+\sqrt {2+\ldots}}}$

I would like to know if this sequence converges $\sqrt {2+\sqrt {2+\sqrt {2+\ldots}}}$.



I know this sequence is monotonically increasing, but I couldn't prove that it's bounded.




Thanks
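Numerically iterating $y_{n+1}=\sqrt{2+y_n}$ (a sketch added here) suggests the bound to aim for in the induction: the values increase but never exceed $2$.

```python
import math

x = math.sqrt(2)
for _ in range(8):
    print(x)
    x = math.sqrt(2 + x)   # increases toward 2, never exceeding it
```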

Friday, January 25, 2019

calculus - Limits without L'Hopitals Rule

Evaluate the limit without using L'hopital's rule


a)$$\lim_{x \to 0} \frac {(1+2x)^{1/3}-1}{x} $$


I got the answer as $l=\frac 23$... but I used L'hopitals rule for that... How can I do it another way?


b)$$\lim_{x \to 5^-} \frac {e^x}{(x-5)^3}$$


$l=-\infty$


c)$$\lim_{x \to \frac {\pi} 2} \frac{\sin x}{\cos^2x} - \tan^2 x$$


I don't know how to work with this at all


So basically I was able to find most of the limits through L'Hopital's Rule... BUT how do I find the limits without using it?

probability - 6 sided die probabilities

I am currently working on a study guide, and there is one question I am completely stuck on with no idea how to approach it. The question: You are interested in the number of rolls of a fair $6$-sided die until a number $2$ shows up.


Let $X =$ The number of times you roll the die until a number $2$ shows up.


(a) What type of random variable is $X$?


(b) How many rolls do you expect it to take? That is, what is the expected value, or mean, of the random variable $X$?


(c) What is the probability you roll a $2$ for the first time on the fourth roll? i.e. What is $P(X = 4)$?
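A quick simulation (an addition here, not part of the study guide) for comparing against whatever formulas you derive:

```python
import random

trials = 200_000
counts = []
for _ in range(trials):
    rolls = 1
    while random.randint(1, 6) != 2:
        rolls += 1
    counts.append(rolls)

print(sum(counts) / trials)                   # empirical mean of X
print(sum(c == 4 for c in counts) / trials)   # empirical P(X = 4)
```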

Power series solution of $ f(x+y) = f(x)f(y) $ functional equation


Here on StackExchange I read a lot of interesting questions and answers about functional equations, for example a list of properties and links to questions is Overview of basic facts about Cauchy functional equation.


I'm interested in the following problem: if $f:\mathbb{R} \rightarrow \mathbb{R}$ is a continuous function satisfying the functional equation $f(x+y)=f(x)f(y), \ \forall x,y\in \mathbb{R}$, find its not-identically-zero solutions using power series.


My attempt so far using power series:
let $$f(x) = \sum_{n=0}^{\infty} a_{n} \, x^{n}$$ so $$f(y) = \sum_{n=0}^{\infty} a_{n} \, y^{n} $$ and $$f(x+y) = \sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}$$



The functional equation $f(x+y)=f(x)f(y)$ leads to $$\sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}=\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n}$$


Using the binomial theorem $$(x+y)^{n} = \sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k}$$ and the Cauchy product of series $$\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n} = \sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$
it follows $$\sum_{n=0}^{\infty} a_{n} (\sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$ $$\sum_{n=0}^{\infty}(\sum_{k=0}^{n} a_{n} \binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$


Now I need to equate the coefficients: $$\forall n\in\mathbb N, \;\;\;\; \; a_{n} \binom{n}{k} = a_k a_{n-k} \;\; \textrm{for } k= 0,1,...,n $$


The first equation, for $n=0$, is $a_0=a_0a_0$, that is $a_0(a_0-1)=0$ with solutions $a_0=0$ and $a_0=1$. If $a_0=0$ every coefficient would be zero, so we have found the first term of the power series: $a_0=1$.


Now the problem is to determine the remaining coefficients. I tried, but it's too difficult to me.


Answer



From $a_n{n\choose n-1} = a_{n-1}a_1$ we have $a_n = a_{n-1}\dfrac{a_1}{n}$. So $a_n = \dfrac{a_1^n}{n!}$.


We know that the functional equation has as solutions the exponential functions $f(x) = a^x$ for some positive real number $a$. We are interested to know if there is a relation between $a$ and the coefficient $a_1$.


Let us call $f_{a_1}(x)$ the solution of the functional equation where the coefficients are $(a_1)^n/n!$ and let $e$ be the real number defined by $f_1(x)$, i.e. $f_1(x) = e^x$. Then the series expansion tells us that $f_1(a_1x) = f_{a_1}(x)$, i.e. $e^{a_1x} = a^x$. For $x = 1$ we have that $e^{a_1} = a$.


From the series expansion, one sees that $e^x$ is a strictly increasing function and it's continuous by definition. Thus it has a continuous inverse. Let us call $\ln(x) = f^{-1}_{1}(x)$. Then $a_1 = \ln(a)$.



real analysis - Examples of bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$





Could any one give an example of a bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$?



Thank you.


Answer



First, note that it is enough to find a bijection $f:\Bbb R^2\to \Bbb R$, since then $g(x,y,z) = f(f(x,y),z)$ is automatically a bijection from $\Bbb R^3$ to $\Bbb R$.




Next, note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.



Mapping the unit square to the unit interval



There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_2a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$ and we don't even have a function, much less a bijection. But if we arbitrarily choose to the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.



This problem can be fixed.



(In answering this question, I tried many web searches to try to remember the fix, and I was amazed at how many sources I found that ignored the problem, either entirely, or by handwaving. I never did find it; I had to remember it. Sadly, I cannot remember where I saw it first.)




First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.



Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$.



This is well-defined since we are ignoring representations that contain infinite sequences of zeroes.



Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win. A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.



This is enough to answer the question posted, but I will give some alternative approaches.




Continued fractions



According to the paper "Was Cantor Surprised?" by Fernando Q. Gouvêa, Cantor originally tried interleaving the digits himself, but Dedekind pointed out the problem of nonunique decimal representations. Cantor then switched to an argument like the one Robert Israel gave in his answer, based on continued fraction representations of irrational numbers. He first constructed a bijection from $(0,1)$ to its irrational subset (see this question for the mapping Cantor used and other mappings that work), and then from pairs of irrational numbers to a single irrational number by interleaving the terms of the infinite continued fractions. Since Cantor dealt with numbers in $(0,1)$, he could guarantee that every irrational number had an infinite continued fraction representation of the form $$x = x_0 + \dfrac{1}{x_1 + \dfrac{1}{x_2 + \ldots}}$$



where $x_0$ was zero, avoiding the special-case handling for $x_0$ in Robert Israel's solution.



Cantor-Schröder-Bernstein mappings



The Cantor-Schröder-Bernstein theorem takes an injection $f:A\to B$ and an injection $g:B\to A$, and constructs a bijection between $A$ and $B$.




So if we can find an injection $f:[0,1)^2\to[0,1)$ and an injection $g:[0,1)\to[0,1)^2$, we can invoke the CSB theorem and we will be done.



$g$ is quite trivial; $x\mapsto \langle x, 0\rangle$ is one of many obvious injections.



For $f$ we can use the interleaving-digits trick again, and we don't have to be so careful because we need only an injection, not a bijection. We can choose the representation of the input numbers arbitrarily; say we will take the $0.5000\ldots$ representation rather than the $0.4999\ldots$ representation. Then we interleave the digits of the two input numbers. There is no way for the result to end with an infinite sequence of nines, so we are guaranteed an injection.



Then we apply CSB to $f$ and $g$ and we are done.



Appendix





  1. There is a bijection from $(-\infty, \infty)$ to $(0, \infty)$. The map $x\mapsto e^x$ is an example.


  2. There is a bijection from $(0, \infty)$ to $(0, 1)$. The map $x\mapsto \frac2\pi\tan^{-1} x$ is an example, as is $x\mapsto{x\over x+1}$.


  3. There is a bijection from $[0,1]$ to $(0,1]$. Have $0\mapsto \frac12, \frac12\mapsto\frac23,\frac23\mapsto\frac34,$ and so on. That takes care of $\left\{0, \frac12, \frac23, \frac34,\ldots\right\}$. For any other $x$, just map $x\mapsto x$.


  4. Similarly, there is a bijection from $(0,1]$ to $(0,1)$.



Thursday, January 24, 2019

real analysis - Finding the limit of a sequence involving n-th roots.



We're asked to find the limit (as $n$ goes to infinity) of the following sequence: $$a_n = \frac {\sqrt[3n]{4}\ -6 \ \sqrt[3n]{2} \ + \ 9}{\sqrt[2n]{9} \ - \ 4 \ \sqrt[2n]{3}+ 4} $$.



I thought that since the limit of the numerator exists, and since the limit of the denominator also exists and is non-zero, then the limit of the fraction should be $\frac{9}{4}$.



But apparently, the limit of this sequence is $4$ as $n \rightarrow +\infty$.




I don't understand why my approach is incorrect, nor why the limit of the sequence is $4$.


Answer



It's just $$\frac{1-6+9}{1-4+4}=4$$ because for example $4^{\frac{1}{3n}}\rightarrow4^0=1$.
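A numeric check (an addition here):

```python
for n in [10, 100, 1000]:
    num = 4 ** (1 / (3 * n)) - 6 * 2 ** (1 / (3 * n)) + 9
    den = 9 ** (1 / (2 * n)) - 4 * 3 ** (1 / (2 * n)) + 4
    print(n, num / den)   # tends to (1-6+9)/(1-4+4) = 4
```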


calculus - Definition of convergence of a nested radical $\sqrt{a_1 + \sqrt{a_2 + \sqrt{a_3 + \sqrt{a_4+\cdots}}}}$?



In my answer to the recent question Nested Square Roots, @GEdgar correctly raised the issue that the proof is incomplete unless I show that the intermediate expressions do converge to a (finite) limit. One such quantity was the nested radical
$$
\sqrt{1 + \sqrt{1+\sqrt{1 + \sqrt{1 + \cdots}}}} \tag{1}
$$



To assign a value $Y$ to such an expression, I proposed the following definition. Define the sequence $\{ y_n \}$ by:
$$
y_1 = \sqrt{1}, y_{n+1} = \sqrt{1+y_n}.
$$
Then we say that this expression evaluates to $Y$ if the sequence $y_n$ converges to $Y$.



For the expression (1), I could show that the $y_n$ converges to $\phi = (\sqrt{5}+1)/2$. (To give more details, I showed, by induction, that $y_n$ increases monotonically and is bounded by $\phi$, so that it has a limit $Y < \infty$. Furthermore, this limit must satisfy $Y = \sqrt{1+Y}$.) Hence we could safely say (1) evaluates to $\phi$, and all seems to be good.



My trouble. Let us now test my proposed idea with a more general expression of the form
$$\sqrt{a_1 + \sqrt{a_2 + \sqrt{a_3 + \sqrt{a_4+\cdots}}}} \tag{2}$$
(Note that the linked question involves one such expression, with $a_n = 5^{2^n}$.) How do we decide if this expression converges? Mimicking the above definition, we can write:
$$
y_1 = \sqrt{a_1}, y_{n+1} = \sqrt{a_{n+1}+y_n}.
$$
However, unrolling this definition, one get the sequence
$$
\sqrt{a_1}, \sqrt{a_{2}+ \sqrt{a_1}}, \sqrt{a_3 + \sqrt{a_2 + \sqrt{a_1}}}, \sqrt{a_4+\sqrt{a_3 + \sqrt{a_2 + \sqrt{a_1}}}}, \ldots
$$
but this seems to have little to do with the expression (2) that we started with.



I could not come up with any satisfactory ways to resolve the issue. So, my question is:





How do I rigorously define when an expression of the form (2) converges, and also assign a value to it when it does converge?




Thanks.


Answer



I would understand it by analogy with continued fractions and look for a limit of $\sqrt{a_1}$, $\sqrt{a_1+\sqrt{a_2}}$, $\sqrt{a_1+\sqrt{a_2+\sqrt{a_3}}}$, ..., $\sqrt{a_1+\sqrt{a_2 \cdots + \sqrt{a_n}}}$, ...



Each of these is not simply derivable from the previous one, but neither are continued fraction approximants.
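The proposed truncations are easy to evaluate numerically, working from the innermost radical outward (a sketch added here, shown with $a_n = 1$, where the truncations approach the golden ratio):

```python
import math

def truncation(a):
    """Evaluate sqrt(a[0] + sqrt(a[1] + ... + sqrt(a[-1]))) from the inside out."""
    val = 0.0
    for term in reversed(a):
        val = math.sqrt(term + val)
    return val

for n in [1, 2, 5, 10, 20]:
    print(n, truncation([1] * n))   # approaches 1.6180339...
```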


real analysis - $\sigma$-finite measure and semi-finite measure

Let $ (X, \Sigma, \mu) $ be a measure space.



$\mu$ is a $\sigma$-finite measure if there exists a sequence of sets $X_{i} \in \Sigma $ with $\cup_{i=1}^{\infty}X_{i}=X$ and $\mu(X_{i})<\infty$ for all $i$.




$\mu$ is a semi-finite measure if for every $G \in \Sigma $ with $\mu (G)=\infty$ there exists $H \in \Sigma$ with $H \subset G$ and $0<\mu(H)<\infty$.



Show that if $\mu$ is a $\sigma$-finite measure then $\mu$ is a semi-finite measure.

induction - Prove that $\log(x) < x$ for $x > 0$, $x\in \mathbb{N}$.


I'm trying to prove $ \log(x) < x$ for $x > 0$ by induction.


Base case: $x = 1$



$\log (1) < 1$ ---> $0 < 1$ which is certainly true.


Inductive hypothesis: Assume $x = k$ ---> $\log(k) < k$ for $k > 0$


Inductive conclusion: Prove $\log(k+1) < k+1$


I don't know what to do after this. I mean the statement itself is quite obviously true, but how do I continue with the proof?


Answer



I don't know why you'd use induction (unless the domain of each function is $\mathbb{N}\setminus \{0\}$). Here is an alternative approach using calculus. If this is not helpful, I can delete this answer.

Let $g(x)= x- \log(x)$.


$g'(x) = 1 - \frac{1}{x} > 0 $

for all $ x >1$. So $g(x)$ is increasing on $(1,\infty)$.

At $x=1$, $g(x) = 1$; thus $x - \log(x) > 0$ for all $x \ge 1$ (use continuity and the known value at $x = 1$ together with what has just been shown about the monotonicity of $g$).

Now for $x\in (0,1)$, $\log(x) < 0$ and $x>0$ thus $x-\log(x) > 0$.

Thus $x-\log(x) > 0 $ for all $x \in (0,\infty)$. And conclude $x> \log(x) $ for all $x\in (0,\infty)$.

Added

If you want to use induction to show that for each $x\in \mathbb{N}\setminus \{0\}$, $x>\log(x)$, use your inductive hypothesis via: $$ k > \log(k) \longrightarrow \\ k+\log(1+\frac{1}{k})> \log(k)+\log(1+\frac{1}{k}) = \log(k+1) \\ k+\log(1+\frac{1}{k}) \le k + \log(2) \text{ and } \log(2) < 1 \text{ so } \\ k + \log(2) < k + 1 \text{ thus } \\ k+1 > k + \log(2) \ge k + \log(1+\frac{1}{k}) > \log(k+1) $$ Q.E.D.


Wednesday, January 23, 2019

calculus - Determining convergence of improper integrals including $\int_{0}^{1} \frac{\ln\left(1+e^x\right)-x}{x^2}\,\text{d}x$



Will you please help me figure out whether the following improper integrals converge or not?





  1. $$
    \int _ {0} ^ {\infty} \frac{x^2}{2^x}\text{d}x

    $$


  2. $$
    \int_{0} ^ {1} \frac{\ln\left(1+e^x\right)-x}{x^2}\text{d}x
    $$





As for the first one, I have no idea.
As for the second,
I have tried rewriting it as:

$$
\int_{0} ^ {1} \frac{\ln\left(\frac{1+e^x}{e^x}\right)}{x^2}\text{d}x
$$
but I have no idea if it helps me or not.



Thanks in advance.


Answer



Hint You're on the right track with your manipulation in (2). Since $e^x$ is increasing, for all $x \in [0, 1]$ we have $$\frac{1 + e^x}{e^x} = 1 + \frac{1}{e^x} \geq 1 + \frac{1}{e} =: C > 0 .$$ Since $\log$ is increasing, we have
$$\frac{\log\left(\frac{1 + e^x}{e^x}\right)}{x^2} \geq \frac{\log C}{x^2} $$ on $(0, 1]$. What can you say about the integral $$\int_0^1 \frac{\log C}{x^2} \, dx$$ relevant to the (direct) comparison test?




For (1), one can determine convergence again by comparing the integrand to a judiciously chosen function.




Additional hint For (1), show that $x^2 \leq a^x$ for all sufficiently large $x$ for some suitable constant $a$.




Remark One can also determine the convergence of the integral in (1) by evaluating it directly, though this is probably slower: Applying integration by parts twice shows that it has value $\frac{2}{(\log 2)^3}$.
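A crude numeric confirmation of that value (a sketch added here; midpoint rule on a truncated interval, which is harmless since the integrand decays geometrically):

```python
import math

h, total, x = 0.001, 0.0, 0.0005
while x < 100:
    total += x * x / 2 ** x * h
    x += h
print(total, 2 / math.log(2) ** 3)   # both ~6.0055
```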


sequences and series - Show that $\sum_{n = 1}^{+\infty} \frac{n}{2^n} = 2$


Show that $\sum_{n = 1}^{+\infty} \frac{n}{2^n} = 2$. I have no idea how to solve this problem. Could anyone help me?


Answer



Consider the following



$$\frac{1}{1-x} = \sum_{k\geq 0}x^ k $$


The series converges for $|x|<1$


Differentiating both sides we have


$$\frac{1}{(1-x)^2} = \sum_{k\geq 1}k x^{k-1} $$


$$\frac{x}{(1-x)^2} = \sum_{k\geq 1}k x^{k} $$


Now put $x=\frac{1}{2}$


$$2 = \sum_{k\geq 1}\frac{k}{2^k} $$
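The partial sums (a quick check added here) indeed approach $2$:

```python
for N in [10, 20, 50]:
    print(N, sum(n / 2 ** n for n in range(1, N + 1)))
```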


calculus - Generalized Fresnel Integral using Laplace



So I wanted to solve the following integral:




$$\int_0^\infty \sin{(x^2) dx}$$



I did it by using the Laplace transform of the function:



$$I(t) = \int_0^\infty \sin{(tx^2) dx}$$



$$\mathcal{L} [I(t)] = \int_0^\infty \frac {x^2}{s^2+x^4} dx$$



This last integral in $x$ is easily evaluated, and then I took the inverse Laplace transform to get:




$$I(t) = \sqrt{\frac{\pi}{8}} t^{-1/2} $$



But afterwards I googled about it, and I found this thread. The best answer easily got the generalized Fresnel Integral by applying a LT and an ILT, and I have no idea why he did this, but it worked! (I solved the generalized integral using the Mellin transform and I got the same result).



If I try my previous method here, I would have to solve the integral:



$$\int_0^\infty \frac {x^p}{s^2+x^{2p}} dx$$



And this seems much harder than the mysterious method. So can someone please explain me what is going on there ? Thanks in advance.


Answer




There is a theorem that states that
\begin{align}
\int_0^\infty f(x) \, g(x)\,dx=\int_0^{\infty}\mathcal{L}\{f(x)\}(s) \,\mathcal{L}^{-1}\{g(x)\}(s)\,ds \tag1
\end{align}
To compute the generalized integral
$$\int_0^\infty \sin (x^p) \,dx$$
You can first substitute $u=x^p$, and then use $(1)$, choosing $f(u)=\sin u$ and $g(u)=u^{\frac{1}{p}-1}$.



The resulting integral can be calculated by using one of the integral representation of the Beta function.
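Assuming SymPy can evaluate the Fresnel integral symbolically (it handles this one), the original value can be cross-checked as follows (an addition here):

```python
import sympy as sp

x = sp.symbols('x')
val = sp.integrate(sp.sin(x**2), (x, 0, sp.oo))
print(val)                                    # sqrt(2)*sqrt(pi)/4
print(sp.simplify(val - sp.sqrt(sp.pi / 8)))  # 0, i.e. equals sqrt(pi/8)
```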


arithmetic - $1+1$ is $0$?











So:



$$
\begin{align}

1+1 &= 1 + \sqrt{1} \\
&= 1 + \sqrt{1 \times 1} \\
&= 1 + \sqrt{-1 \times -1} \\
&= 1 + \sqrt{-1} \times \sqrt{-1} \\
&= 1 + i \times i \\
&= 1 + (-1) \\
&= 1 - 1\\
&= 0
\end{align}
$$




I can't see anything wrong there, and I can't see anything wrong in $1+1=2$ either. Clearly, $1+1$ is $2$, but I really want to know where the incorrect step is in the above.


Answer



$$\sqrt{ab} = \sqrt{a} \times \sqrt{b}$$ is valid only for non-negative real numbers $a$ and $b$. Hence, the error is in the step $$\sqrt{(-1) \times (-1)} = \sqrt{-1} \times \sqrt{-1}$$
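The two sides of the offending step can be compared directly with Python's complex math (an addition here); the principal square root simply does not split over a product of negative numbers:

```python
import cmath

print(cmath.sqrt((-1) * (-1)))           # (1+0j)
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # (-1+0j), a different value
```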


Tuesday, January 22, 2019

calculus - Limit problem $e^x$ without L'Hôpital's rule




$$\lim_{x \to -\infty} \frac {1-e^{x^2-x}}{1+e^{x^2-x}}$$
I solved this limit problem by applying L'Hôpital's rule and I got $-1$.



Question: how to solve this limit without L'Hopital rule and Taylor series?


Answer



hint:



Divide Numerator and Denominator by $e^{x^2-x}$



$$\lim_{x \to -\infty} \frac {1-e^{x^2-x}}{1+e^{x^2-x}} = \lim_{x \to -\infty} \frac {\dfrac{1}{e^{x^2-x}}-1}{\dfrac{1}{e^{x^2-x}}+1}$$




Now, $\lim_{x\rightarrow -\infty} \dfrac{1}{e^{x^2-x}} = 0$, since $\lim_{x\rightarrow -\infty} x^2 -x = +\infty$ and $\lim_{x\rightarrow \infty} e^x = +\infty$
$$\implies \text{Required limit} = -1$$


linear algebra - find the number of elementary matrices required for a 2x2 matrix

can someone help me with this question?




question:



Given a 2 × 2 invertible matrix, we have seen we can write it as a product of elementary matrices. What is the largest number of elementary matrices required? Give an example of a matrix that requires this number of elementary matrices.

reference request - Is this divisibility test for 4 well-known?



It has just occurred to me that there is a very simple test to check if an integer is divisible by 4: take twice its tens place and add it to its ones place. If that number is divisible by 4, so is the original number.




This result seems like something that anybody with an elementary knowledge of modular arithmetic could realize, but I have noticed that it is conspicuously missing on many lists of divisibility tests (for example, see here, here, here, or here). Is this divisibility test well-known?


Answer



Yes, it is well known; do you know modular arithmetic? Assuming you do, we have a number $abc=a\cdot 10^2+b\cdot 10^1+c\cdot 10^0$. Now $$a\cdot 10^2+b\cdot 10^1+c\cdot 10^0\equiv 2\cdot b+c\pmod{4}.$$ Many people know the multiples of $4$ for numbers less than $100$, so it is commonly just said if the last two digits (as a number) is divisible by $4$, then the number is divisible by $4$.
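A brute-force check of the test (a sketch added here):

```python
for n in range(10000):
    tens, ones = (n // 10) % 10, n % 10
    assert ((2 * tens + ones) % 4 == 0) == (n % 4 == 0), n
print("divisibility test verified for n < 10000")
```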


Monday, January 21, 2019

Sum of a finite complex series




Let
$$C=\cos\theta+\cos(\theta+ \frac{2\pi}{n})+ \cos(\theta+ \frac{4\pi}{n})+...+\cos(\theta+ \frac{(2n-2)\pi}{n})$$
and
$$S=\sin\theta+\sin(\theta+ \frac{2\pi}{n})+ \sin(\theta+ \frac{4\pi}{n})+...+\sin(\theta+ \frac{(2n-2)\pi}{n})$$
Show that $C+iS$ forms a geometric series and hence show that $C=0$ and $S=0$





Then $C+iS$ = $(\cos\theta+\cos(\theta+ \frac{2\pi}{n})+ \cos(\theta+ \frac{4\pi}{n}))+i(\sin\theta+\sin(\theta+ \frac{2\pi}{n})+ \sin(\theta+ \frac{4\pi}{n}))$
$$e^{i\theta}+\exp \left(i\left(\theta+\frac{2\pi}{n}\right)\right)+...+\exp \left(i\left(\theta+\frac{(2n-2)\pi}{n}\right)\right)$$



This is a far as I've got as I'm a little stuck where to go from here. Feel like I'm quite close to the answer though.


Answer



Yes, you are close indeed:
\begin{align}
&\exp(i\theta)+ \exp(i(\theta+2\pi/n)) +\ldots+ \exp(i(\theta+2\pi(n-1)/n))\\
&= \exp(i\theta) \sum_{k=0}^{n-1} \exp(2\pi ik/n) = \exp(i\theta) \sum_{k=0}^{n-1} \exp(2\pi i/n)^k\\

&= \exp(i\theta) \frac{1-\exp(2\pi i/n)^n}{1-\exp(2\pi i/n)} =
\exp(i\theta) \frac{1-\exp(2\pi i)}{1-\exp(2\pi i/n)} = 0,
\end{align}

and hence $C+iS = 0,$ implying that $C=S=0$.



EDIT: By request, we can avoid using the sigma notation as follows. Put $q=\exp(2\pi i/n)$ for clarity. Then
\begin{align}
&\exp(i\theta)+ \exp(i(\theta+2\pi/n)) +\ldots+ \exp(i(\theta+2\pi(n-1)/n))\\
&= \exp(i\theta) \big( 1+\exp(2\pi i/n) + \ldots + \exp(2\pi i(n-1)/n) \big)\\
&= \exp(i\theta) \big( 1+q + \ldots +q^{n-1} \big)

= \exp(i\theta) \frac{1-q^n}{1-q},
\end{align}

etc.
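A numeric spot check (an addition here) that $C$ and $S$ vanish for an arbitrary $\theta$ and $n$:

```python
import math

n, theta = 7, 0.3
C = sum(math.cos(theta + 2 * math.pi * k / n) for k in range(n))
S = sum(math.sin(theta + 2 * math.pi * k / n) for k in range(n))
print(C, S)   # both ~1e-16, i.e. zero up to rounding
```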


reference request - Is this a well-known game?


In the last two days, I got a bit obsessed with the following game, partly because I beat others most of the time I play it. Players are expected to have sound logical reasoning coupled with the ability to analyze data.


Here is the rule of the game:


$(1)$ The game is for two players.


$(2)$ Each player holds a $4$-digit positive number in his mind. (Numerals $0$ to $9$ are allowed. For instance you can hold $0176$ or $5678$ or $2384$, etc.) The numbers are to be kept secret.


$(3)$ Each then guesses the other's $4$-digit number.


$(4)$ Each person gives a mark for the other person's guess. The mark consists of two parts. The first part counts the number of digits the other person rightly guessed. The second part counts the number of digits that were guessed in the right place.


$(5)$ The first person who arrives at the right $4$-digit number the other person held is the winner.


Here is a sample where the players are Mr.A and Mr.B. Assume A has held $3476$ and that B has $7609$. Let A begin.



\begin{vmatrix} \hline \text{Player(A)} & \text{Correct digits} & \text{Correct places} & \text{Player(B)} & \text{Correct digits} & \text{Correct places} \\ \text{guesses} & \text{(B- marks)} & \text{(B-marks)} & \text{guesses} & \text{( A- marks)} & \text{( A- marks)}\\ 4521 & 0 & 0 & 5735 & 2 & 0 \\ 8309 & 2 & 2 & 8762 & 2 & 0 \\ ... & ... & ... & ... & ... & ... \\ 7609 & 4 & 4 & 3467 & 4 & 2 \\ \hline \end{vmatrix}


So A is the winner. There is a lot of mathematical elimination strategy going in there which makes the play a very interesting pastime at least for me. If this is a known game, could you please point me to a reference? Thank you.


Answer



Seems like a variant of Mastermind. I believe it is called Cows and Bulls.
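The marking rule in step $(4)$ translates directly into code (a sketch added here; the helper name mark is hypothetical):

```python
def mark(secret: str, guess: str):
    """Return (correct digits, correct places) per rule (4)."""
    right_digits = sum(min(secret.count(d), guess.count(d)) for d in set(guess))
    right_places = sum(s == g for s, g in zip(secret, guess))
    return right_digits, right_places

print(mark("7609", "8309"))   # (2, 2), matching the sample game
```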


calculus - Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$




I'm supposed to calculate:



$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$



By using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer



Edited. I justified the application of the dominated convergence theorem.



By a simple calculation,




$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$




We remark that




  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.



Then, in view of Stirling's formula, it suffices to show that




$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$



The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$



and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then



$$ \log g_n (u) = n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u = -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$



From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,



$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
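One can also watch this convergence numerically; below is a small Python sketch (the log-sum-exp trick is there to avoid overflow in $n^k/k!$ and underflow in $e^{-n}$):

    import math

    def partial(n):
        """Compute e^{-n} * sum_{k=0}^{n} n^k / k! stably via logarithms."""
        log_terms = [k * math.log(n) - math.lgamma(k + 1) - n for k in range(n + 1)]
        m = max(log_terms)
        return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

    for n in (10, 100, 1000, 10000):
        print(n, partial(n))  # the values creep toward 0.5 as n grows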


Sunday, January 20, 2019

calculus - Antiderivative of unbounded function?

One way to visualize an antiderivative is that the area under the derivative is added to the initial value of the antiderivative to get the final value of the antiderivative over an interval.



The Riemann Series Theorem essentially says that you can basically get any value you want out of a conditionally convergent series by changing the order you add up the terms.



Now consider the function: $$f(x) = 1/x^3$$ The function is unbounded over the interval $(-1,1)$, so it is not integrable over this interval.




If you break $f(x)$ into Riemann rectangles over the interval $(-1,1)$ and express the area as a Riemann sum, you essentially get a conditionally convergent series. And because of the Riemann Series Theorem, you can make the sum add up to anything. In other words, you can make the rectangles add up to whatever area you want by changing the order in which you add them up. This is in fact why a function needs to be bounded to be integrable - otherwise the area has infinite values/is undefined.



So my question is, in cases like this, how does the antiderivative "choose" an area? In this case the antiderivative, $\frac{1}{-2x^2}$, chose to increase by $\frac{1}{-2(1)^2} - \frac{1}{-2(-1)^2} = 0$. In other words, the antiderivative "chose" to assign an area of zero to the region under $1/x^3$ from $-1$ to $1$, even though the Riemann Series Theorem says the area can be assigned any value.



How did the antiderivative "choose" an area of $0$ from the infinite possible values?

real analysis - Solutions to matrix-valued multiplicative Cauchy equation under local boundedness

Let $f : (0, \infty) \to \mathbb{R}^{n \times n}$, $n \in \mathbb{N}$ satisfy the functional equation $f(x + y) = f(x) f(y)$. In general, $f$ need not be measurable (by the usual constructions of non-measurable solutions based on a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$).




Is it true that if $f$ is uniformly bounded on $(0, \infty)$ (or only on some subinterval of $(0, \infty)$), then $f$ is measurable? (It can then also be shown that $f$ is continuous and of the form $f(x) = \exp(A x)$ for some $A \in \mathbb{R}^{n \times n}$.)



Remarks:




  1. For $n=1$ this follows from here. For $n \geq 2$ the above functional equation translates into a system of one-dimensional functional equations $f_{ij}(x+y) = \sum_{k=1}^n f_{ik}(x) f_{kj}(y)$. Are similar arguments applicable?

  2. For "$n=\infty$" the above claim is not true, see e.g. Doob "Topics in the theory of Markoff chains" (1942), p. 2.



Edit: For the case of stochastic solutions ($f(x)$ is a stochastic matrix for each $x > 0$), the above question was solved in the affirmative by W. Doeblin in the 1930s.

algebra precalculus - Proof for formula for sum of sequence $1+2+3+\ldots+n$?



Apparently $1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$.




How? What's the proof? Or maybe it is self apparent just looking at the above?



PS: This problem is known as "The sum of the first $n$ positive integers".


Answer



Let $$S = 1 + 2 + \ldots + (n-1) + n.$$ Write it backwards: $$S = n + (n-1) + \ldots + 2 + 1.$$
Add the two equations, term by term; each term is $n+1,$ so
$$2S = (n+1) + (n+1) + \ldots + (n+1) = n(n+1).$$
Divide by 2: $$S = \frac{n(n+1)}{2}.$$


calculus - What is an intuitive approach to solving $\lim_{n\rightarrow\infty}\biggl(\frac{1}{n^2} + \frac{2}{n^2} + \frac{3}{n^2}+\dots+\frac{n}{n^2}\biggr)$?




$$\lim_{n\rightarrow\infty}\biggl(\frac{1}{n^2} + \frac{2}{n^2} + \frac{3}{n^2}+\dots+\frac{n}{n^2}\biggr)$$





I managed to get an answer of $1$ by the standard methods I learned from teachers, but my intuition says that the denominator of every term grows much faster than the numerator, so the limit must equal zero.



Where is my mistake? Please explain very intuitively.


Answer



Intuition should say:



the denominator grows like $n^2$ and each numerator grows like $n$. However, the number of fractions also grows like $n$, so the total growth of the numerator is about $n^2$.







And that's where intuition stops. From here on, you go with logic and rigor, not intuition.



And it gets you to



$$\frac1{n^2} + \frac2{n^2}+\cdots + \frac{n}{n^2} = \frac{1+2+3+\cdots + n}{n^2} = \frac{\frac{n(n+1)}{2}}{n^2} = \frac{n^2+n}{2n^2}$$



and you find that the limit is $\frac12$ (not $1$!)


integration - Why do we treat differential notation as a fraction in u-substitution method




How did we come to know that treating the differential notation as a fraction would help us in finding the integral? And how do we know it is valid?


How can $\frac{dy}{dx}$ be treated as a fraction?

I want to know how u-substitution came about and why the differential is treated as a fraction in it.


Answer



It doesn't necessarily need to be.



Consider a simple equation $\frac{dy}{dx}=\sin(2x+5)$ and let $u=2x+5$. Then
$$\frac{du}{dx}=2$$
Traditionally, you will complete the working by using $du=2\cdot dx$, but if we were to avoid this, you could instead continue with the integral:
$$\int\frac{dy}{dx}dx=\int\sin(u)dx$$
$$\int\frac{dy}{dx}dx=\int\sin(u)\cdot\frac{du}{dx}\cdot\frac{1}{2}dx$$

$$\int\frac{dy}{dx}dx=\frac{1}{2}\int\sin(u)\cdot\frac{du}{dx}dx$$
$$y=c-\frac{1}{2}\cos(u)$$
$$y=c-\frac{1}{2}\cos(2x+5)$$



But why is this? Can we prove that this separation of the differentials is justified? As Gerry Myerson has mentioned, it's a direct consequence of the chain rule:



$$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}\frac{du}{dx}dx$$
But then if you 'cancel', it becomes
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}du$$

Which is what you desired.
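As a quick independent check of the worked example, a computer algebra system produces the same antiderivative (a minimal sketch, assuming SymPy is available):

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(sp.sin(2*x + 5), x)    # the u-substitution, done symbolically
    print(F)                                # -cos(2*x + 5)/2
    print(sp.diff(F, x) - sp.sin(2*x + 5))  # 0, confirming F is an antiderivative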


calculus - What is the limit of the sequence n!/4^n?


I am trying to find the limit of the sequence by using the root test, and I don't understand why the limit is not zero. (The answer is $\infty$.)


Answer



By the root test:


$$\begin{array}{rcl} \displaystyle \limsup_{n\to\infty} \sqrt[n]{a_n} &=& \displaystyle \limsup_{n\to\infty} \sqrt[n]{\dfrac{n!}{4^n}} \\ &=& \displaystyle \dfrac14 \limsup_{n\to\infty} \sqrt[n]{n!} \\ &=& \displaystyle \dfrac14 \limsup_{n\to\infty} \sqrt[n]{\exp\left(n \ln n - n\right)} \\ &=& \displaystyle \dfrac14 \limsup_{n\to\infty} \exp\left(\ln n - 1\right) \\ &=& \displaystyle \dfrac1{4e} \limsup_{n\to\infty} n \\ &=& \infty \end{array}$$


Here $n! = \exp(n \ln n - n)$ is Stirling's approximation; the neglected factor grows like $\sqrt{2\pi n}$, whose $n$-th root tends to $1$, so the computation is justified. Hence $\sqrt[n]{a_n} \to \infty$, and the sequence diverges to infinity.


proof writing - Prove that 1/2 + 1/4 + 1/8 + ... = 1





I've often heard that instead of adding up to a little less than one, $1/2 + 1/4 + 1/8 + \cdots = 1$. Is there any way to prove this using equations without using Sigma, or is it just an accepted fact? I need it without Sigma so I can explain it to my little sister.



It is not a duplicate because this one does not use Sigma, and the one marked as duplicate does. I want it to use variables and equations.


Answer



For physical intuition, so you can explain it to your little sister, I will use a 1m long ruler.



Take the ruler an divide it into two equal parts:



$$1=\frac{1}{2}+\frac{1}{2}$$




Take one of the parts you now have, and again divide it in half.



$$=\frac{1}{2}+\frac{1}{4}+\frac{1}{4}$$



Take one of the smaller parts you now have, and again divide it in half.



$$=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}$$



Repeat. In general for $n$ a positive integer,




$$=\left(\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2^n} \right)+\frac{1}{2^n}=1$$



So,



$$\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2^n}=1-\frac{1}{2^n}$$



As we let $n$ become a really big (positive) integer, note the sum gets closer and closer to $1$, because $\frac{1}{2^n}$ gets really close to zero (the smallest part of the ruler you have left over gets close to 0 meters in length). We say the sum converges to $1$ in the limit that $n \to \infty$.
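If she likes computers more than rulers, the same halving story can be watched numerically with a toy Python loop (powers of $2$ are exact in floating point, so the leftover really is $1/2^n$):

    total = 0.0
    piece = 1.0
    for n in range(1, 21):
        piece /= 2                  # 1/2, 1/4, 1/8, ...
        total += piece
        print(n, total, 1 - total)  # the leftover 1/2**n shrinks toward 0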


Saturday, January 19, 2019

elementary set theory - Bijection between $\mathbb{R}\times\mathbb{R}$ and $\mathbb{R}$

It must be posted somewhere, but I can't find it. I've been working on it for a while too without getting anywhere. Does there exist a bijection between $\mathbb{R}\times\mathbb{R}$ and $\mathbb{R}$? Is it possible to give an explicit bijection?


NOTE: This question is not a duplicate of the link suggested. Please see comments for further detail.

calculus - Evaluating $\lim_{x\to 0} \frac{\tan x - \sin x}{x^3}$. I get $0$, but book says $1/2$

$$\lim_{x\to\ 0} \frac{\tan x - \sin x}{x^3} $$


I am getting $0$. I even tried to plug in small values for $x$ and got the same answer. But the book's answer is $1/2$.


I don't understand. Am I doing something wrong? Please help.

Gcd and coprime numbers

Assume $c$ divides $ab$ and $\gcd(a,c)=1$. Calculate $\gcd(b,c)$. I think that $\gcd(b,c)=c$, but how can I prove it? Help me, please.

Extended Euclidean Algorithm with negative numbers minimum non-negative solution


I came across a problem in programming which needs the Extended Euclidean Algorithm, with $a \cdot s + b \cdot t = \gcd(|a|,|b|)$ for $b \leq 0$ and $a \geq 0$.


With the help of this post: extended-euclidean-algorithm-with-negative-numbers


I know we can just move the sign over to $t$, use the normal Extended Euclidean Algorithm, and take $(s,-t)$ as the solution.


However in my scenario, there is one more condition: I would like to find the minimum non-negative solution, i.e. $(s,t)$ for $s,t\geq 0$


And my question is how to find such minimum $(s,t)$?


Sorry if it sounds too obvious as I am dumb :(



Thanks!


Answer



Fact 1: One nice property of the Extended Euclidean Algorithm is that it already gives minimal solution pairs, that is, if $a, b \geq 0$, $|s| \lt \frac{b}{\gcd(a,b)}$ and $|t| \lt \frac{a}{\gcd(a,b)}$


Fact 2: If $(s,t)$ is a solution then $(s+k\cdot\frac{b}{\gcd(a,b)},\,t-k\cdot\frac{a}{\gcd(a,b)})$, $k \in \mathbb{Z}$, is also a solution.


Combining the two facts above, for your case in which $a \geq 0$ and $b \leq 0$, compute the pair $(s,t)$ using the extended algorithm on $(|a|,|b|)$, then either:


  1. $s \geq 0, t \leq 0$, in this case $(s,-t)$ is the solution you want.

  2. $s \leq 0, t \geq 0$, in this case $(s+\frac{|b|}{\gcd(|a|,|b|)},\,-t+\frac{|a|}{\gcd(|a|,|b|)})$ is your desired solution.
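Here is a minimal Python sketch of this recipe (the function names are mine; ext_gcd is the textbook recursive algorithm, and branching on the sign of $t$ is equivalent to the two cases above):

    def ext_gcd(a, b):
        """Return (g, s, t) with a*s + b*t = g = gcd(a, b), for a, b >= 0."""
        if b == 0:
            return a, 1, 0
        g, s, t = ext_gcd(b, a % b)
        return g, t, s - (a // b) * t

    def min_nonneg(a, b):
        """Minimal s, t >= 0 with a*s + b*t = gcd(|a|, |b|), for a >= 0 >= b."""
        g, s, t = ext_gcd(a, -b)            # solve |a|*s + |b|*t = g
        if t <= 0:                          # case 1 (then s >= 0 automatically)
            return s, -t
        return s + (-b) // g, -t + a // g   # case 2: shift by (|b|/g, |a|/g)

    print(min_nonneg(240, -46))  # (14, 73): indeed 240*14 - 46*73 = 2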

calculus - Determine $\lim_{x \to 0}{\frac{x-\sin{x}}{x^3}}=\frac{1}{6}$, without L'Hospital or Taylor



How can I prove that $$\lim_{x \to 0}{\frac{x-\sin{x}}{x^3}}=\frac{1}{6}$$



without using L'Hospital or Taylor series?




thanks :)


Answer



Let $L = \lim_{x \to 0} \dfrac{x - \sin(x)}{x^3}$. We then have
\begin{align}
L & = \underbrace{\lim_{y \to 0} \dfrac{3y - \sin(3y)}{27y^3} = \lim_{y \to 0} \dfrac{3y - 3\sin(y) + 4 \sin^3(y)}{27y^3}}_{\sin(3y) = 3 \sin(y) - 4 \sin^3(y)}\\
& = \lim_{y \to 0} \dfrac{3y - 3\sin(y)}{27 y^3} + \dfrac4{27} \lim_{y \to 0} \dfrac{\sin^3(y)}{y^3} = \dfrac{3}{27} L + \dfrac4{27}
\end{align}
Multiplying through by $27$: $27L = 3L + 4$, so $24L = 4 \implies L = \dfrac16$. (Note that this computation presupposes that the limit $L$ exists and is finite; that existence needs a separate justification.)


Friday, January 18, 2019

summation - Proving $\sum_{i=1}^ni^2=\frac{n(n+1)(2n+1)}{6}$ with Induction





I've been watching countless tutorials but still can't quite understand how to prove something like the following:
$$\sum_{i=1}^ni^2=\frac{n(n+1)(2n+1)}{6}$$







The ^2 is throwing me off, I really wish I can show you what I've already tried but I have absolutely no clue where to start. I've also looked around for similar problems but can't find any that start with ^2.



I appreciate all help whatsoever, thank you!


Answer



Tips:



Every proof by induction contains the following steps: a base case, and the inductive step. Almost always, you should start with the base case first. For a base case, try to find the simplest case that works for whatever equality or thing you're trying to show. For example, if you're trying to show



$$\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}$$




I would try the case where $n=1$ first. Just show that it works when you plug in $n=1$ on both sides.



The next step is the trickier one, but it can be very algorithmic if you learn how to do it. First, what you do is assume that whatever you're trying to prove is true for some $n=k$. This assumption is called "the inductive hypothesis." Then you have to do something to show that if your statement is true when $n=k$, then it is also true when $n=k+1$. All of this constitutes "the inductive step." Once you do that you're done; you can simply say "Therefore by PMI, < some theorem > is true for all $n\ge$ < your base case >."



When dealing with equations like the one in your question, I will outline a simpler approach. Once you assume your inductive hypothesis, rewrite your equation with $n=k$, and, depending on the situation, perform some operation to include $k+1$ on both sides of the equation. In this case, we are adding successive squares, so we add $(k+1)^2$ to both sides. Suppose,



$$\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}$$



is true for $n=k$. Then




$$\sum_{i=1}^k i^2=\frac{k(k+1)(2k+1)}{6}$$



Now we can add $(k+1)^2$ to both sides



$$\sum_{i=1}^k i^2 + (k+1)^2=\frac{k(k+1)(2k+1)}{6}+(k+1)^2$$
$$\sum_{i=1}^{k+1} i^2=\frac{k(k+1)(2k+1)}{6}+(k+1)^2$$



Now you need to manipulate the r.h.s. to look like $\frac{(k+1)((k+1)+1)(2(k+1)+1)}{6}$. Once you're done you can write the conclusion. Can you do the rest?
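In case you get stuck on that last manipulation, it is just a matter of factoring out $(k+1)$:

$$\frac{k(k+1)(2k+1)}{6}+(k+1)^2=\frac{(k+1)\bigl(k(2k+1)+6(k+1)\bigr)}{6}=\frac{(k+1)(2k^2+7k+6)}{6}=\frac{(k+1)(k+2)(2k+3)}{6},$$

which is exactly $\frac{(k+1)((k+1)+1)(2(k+1)+1)}{6}$.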


probability - Explain why $E(X) = \int_0^\infty (1-F_X (t)) \, dt$ for every nonnegative random variable $X$




Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show, $$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$ when $X$ has : a) a discrete distribution, b) a continuous distribution.



For the case of a continuous distribution, I noted that since $F_X (t) = \mathbb{P}(X\leq t)$, we have $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. But I really have no idea how useful integrating that would be.


Answer



For every nonnegative random variable $X$, whether discrete or continuous or a mix of these, $$ X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt, $$ hence



$$ \mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt. $$




Likewise, for every $p>0$, $$ X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt, $$ hence




$$ \mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt. $$
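A quick Monte Carlo illustration of the first identity (a sketch with $X$ exponential of rate $1$, where both sides should come out near $1$; the grid step and truncation point are arbitrary choices):

    import random

    random.seed(0)
    xs = [random.expovariate(1.0) for _ in range(10_000)]
    n = len(xs)

    mean_x = sum(xs) / n                    # sample estimate of E(X)

    dt = 0.05                               # Riemann sum for the tail integral
    tail = sum(dt * sum(x > t for x in xs) / n
               for t in (i * dt for i in range(400)))  # t ranges over [0, 20)
    print(mean_x, tail)                     # both are approximately 1.0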



Method of solving extended Euclidean algorithm for three numbers?



I already got idea of solving gcd with three numbers. But I am wondering how to solve the extended Euclidean algorithm with three, such as:



$$47x + 64y + 70z = 1$$


Could anyone give me a hint? Thanks a lot.


Answer




Notice that $\gcd(x,y,z)=\gcd(x,\gcd(y,z))$. First we find $a$, $b$ such that $\gcd(x,\gcd(y,z))=ax+b\gcd(y,z)$, then $c$, $d$ such that $\gcd(y,z)=cy+dz$. Finally we obtain $\gcd(x,y,z)=ax+bcy+bdz$.
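Concretely, for $47x + 64y + 70z = 1$ this recipe gives $(x,y,z) = (1, 276, -253)$; here is a small Python sketch (ext_gcd is the standard extended algorithm, and the variable names follow the answer):

    def ext_gcd(a, b):
        """Return (g, s, t) with a*s + b*t = g = gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, s, t = ext_gcd(b, a % b)
        return g, t, s - (a // b) * t

    g_yz, c, d = ext_gcd(64, 70)        # 64*c + 70*d = gcd(64, 70) = 2
    g, a, b = ext_gcd(47, g_yz)         # 47*a + 2*b  = gcd(47, 2)  = 1
    x, y, z = a, b * c, b * d           # 47*x + 64*y + 70*z = 1
    print(x, y, z, 47*x + 64*y + 70*z)  # 1 276 -253 1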


calculus - Show that for any $r>0$, $\ln x=O(x^r)$ as $x\to \infty$


Show that for any $r>0$, $\ln x=O(x^r)$ as $x\to \infty$


I know that if $x_n=O(\alpha_n)$ then there is a constant $C$ and a natural number $n_0$ such that $|x_n|\leq C|\alpha_n|$ for all $n\geq n_0$. But in this case I do not have sequences; how can I work with these functions? In this case there would be no natural number; would only the constant be demanded? Doesn't one have $\ln x\leq x$ for all $x>0$, and couldn't this solve much of the problem with $C=1$?


Answer




If you can use derivative-based methods: $$ \lim_{x\to\infty}\frac{\ln x}{x^r}= \lim_{x\to\infty}\frac{1/x}{r x^{r-1}}= \lim_{x\to\infty}\frac{1}{r x^{r}}=0 $$ Since the ratio tends to $0$, it is in particular bounded for all large $x$, which is precisely the statement $\ln x = O(x^r)$ as $x \to \infty$.


number theory - Degrees of irreducible factors of a polynomial over finite field



I'm working on an exercise in Janusz's Algebraic Number Fields. (I simplified it.)




Let $\Phi(x)$ be the minimal polynomial of the primitive $p$-th root
of unity. ($p$ is an odd prime.) Let $q\ne p$ be a prime and consider

the reduced polynomial $\bar \Phi(x)$ modulo $q$. Show that the
splitting field of $\bar \Phi(x)$ over $GF(q)$ is the field $GF(q^m)$,
where $m$ is the order of $q$ in the multiplicative group
$\mathbb{F}_p^\times$. Conclude that every prime factor of
$\bar \Phi(x)$ over $GF(q)$ has degree $m$.




I proved that the splitting field is $GF(q^m)$ for such $m$, but I don't know how to conclude the final statement. I expect that $\bar \Phi(x)$ splits as a product of linear polynomials over $GF(q^m)$ and $m$ linear polynomials make a irreducible factor of $\bar \Phi(x)$ over $GF(q)$ in some sense. I also thought that if we can show first that every irreducible factor of $\bar \Phi(x)$ over $GF(q)$ has the same degree then it should be $m$, since the degree of the splitting field over $GF(q)$ is equal to the lcm of all degrees of irreducible factors. But I cannot finish both approaches.


Answer



Note that if you add one single primitive $p$-th root of unity to any field, you add ALL $p$-th roots of unity, since you can obtain them just by taking powers of that one single primitive $p$-th root of unity.




Hence for any irreducible factor $f$ of $\Phi$, the field $\operatorname{GF}(q)[x]/(f)$ is a splitting field of $\Phi$, hence equal to $\operatorname{GF}(q^m)$. Thus $\deg f = m$.
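To see the statement in action, take $p=7$ and $q=2$: the order of $2$ in $\mathbb{F}_7^\times$ is $3$, and indeed $\bar \Phi(x) = x^6+\cdots+x+1$ splits into two irreducible cubics over $\operatorname{GF}(2)$. A quick check, assuming SymPy is available:

    import sympy as sp
    from sympy.ntheory import n_order

    x = sp.symbols('x')
    p, q = 7, 2
    phi = sum(x**k for k in range(p))      # x^6 + x^5 + ... + x + 1
    print(n_order(q, p))                   # 3, the order of q mod p
    print(sp.factor_list(phi, modulus=q))  # two irreducible factors of degree 3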


Thursday, January 17, 2019

derivatives - How is an infinitesimal $dx$ different from $\Delta x$?




When I learned calc, I was always taught
$$\frac{df}{dx}= f'(x) = \lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$
But I have heard $dx$ is called an infinitesimal and I don't know what this means. In particular, I gather the validity of treating a ratio of differentials is a subtle issue and I'm not sure I get it.

Can someone explain the difference between $dx$ and $\Delta x$?



EDIT:



Here is a related thread:



Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?



I read that and this is what I don't understand:





There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things.




Can someone explain what specifically are these two classes of real numbers and how they are different?


Answer



In the early, informal days of calculus (an approach later made rigorous by non-standard analysis), the derivative was defined as the ratio between two infinitesimal values.



The real number system (often denoted by $\mathbb{R}$) can be defined as a field. It is a field with additional properties such as a total order (any two elements are comparable) and the Archimedean property (which states that for any two positive elements, each is within an integer multiple of the other). $\mathbb{R}$ can, however, be extended. One example is to allow the existence of the imaginary unit $\sqrt{-1}$, in which case you obtain the complex numbers (also a field).




Extending $\mathbb{R}$ by introducing infinitesimal elements makes it lose the Archimedean property.



So when Arturo Magidin talked about "two classes of real numbers", he is basically referring to $\mathbb{R}$ and to an ordered field containing all elements of $\mathbb{R}$ together with infinitesimals: "numbers" defined to be greater than $0$ but smaller than every unit fraction $1/n$.


algebra precalculus - Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$

Prove the following statement $S(n)$ for $n\ge1$:


$$\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$$


To prove the basis, I substitute $1$ for $n$ in $S(n)$:


$$\sum_{i=1}^11^3=1=\frac{1^2(2)^2}{4}$$


Great. For the inductive step, I assume $S(n)$ to be true and prove $S(n+1)$:


$$\sum_{i=1}^{n+1}i^3=\frac{(n+1)^2(n+2)^2}{4}$$


Considering the sum on the left side:



$$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^ni^3+(n+1)^3$$


I make use of $S(n)$ by substituting its right side for $\sum_{i=1}^ni^3$:


$$\sum_{i=1}^{n+1}i^3=\frac{n^2(n+1)^2}{4}+(n+1)^3$$


This is where I get a little lost. I think I expand the equation to be


$$=\frac{(n^4+2n^3+n^2)}{4}+(n+1)^3$$


but I'm not totally confident about that. Can anyone provide some guidance?
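For what it's worth, the algebra stays cleaner if you factor out $(n+1)^2$ instead of expanding:

$$\frac{n^2(n+1)^2}{4}+(n+1)^3=\frac{(n+1)^2\bigl(n^2+4(n+1)\bigr)}{4}=\frac{(n+1)^2(n+2)^2}{4},$$

since $n^2+4n+4=(n+2)^2$. This is exactly the right-hand side of $S(n+1)$, completing the inductive step.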

limits - Prove that $\lim_{x\to 0}\frac{\sqrt{x^2+x+1}-1}{\sin(2x)}=\infty$



How do I as precisely as possible prove that the following limit goes to infinity?




$$\lim_{x\to 0}\frac {\sqrt{x^2+x+1}-1}{\sin(2x)}=\infty $$



It seems difficult. I have started the proof by selecting an $M>0$ and attempting to show that the function exceeds $M$ for all $x$ close enough to $0$. My problem seems to be algebraically manipulating the function so that I can extract $|x|$.


Answer



HINT:



The limit is not $\infty$. Note that



$$\begin{align}
\frac{\sqrt{x^2+x+1}-1}{\sin(2x)}&=\left(\frac{\sqrt{x^2+x+1}-1}{\sin(2x)}\right)\left(\frac{\sqrt{x^2+x+1}+1}{\sqrt{x^2+x+1}+1}\right)\\
&=\frac{x(x+1)}{\sin(2x)}\,\frac{1}{\sqrt{x^2+x+1}+1}
\end{align}$$



Now, use $\lim_{\theta \to 0}\frac{\sin(\theta)}{\theta}=1$.
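Following the hint through, the limit actually has the finite value $\frac14$:

$$\lim_{x\to 0}\frac{\sqrt{x^2+x+1}-1}{\sin(2x)}=\lim_{x\to 0}\frac{2x}{\sin(2x)}\cdot\frac{x+1}{2}\cdot\frac{1}{\sqrt{x^2+x+1}+1}=1\cdot\frac12\cdot\frac12=\frac14.$$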


real analysis - Natural derivation of the complex exponential function?




Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log).



Notice that every deduction above follows from a natural question. We never need to guess anything to proceed.



Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows:



Derive the real exponential by some method (inverse function to the natural log, which is the integral of $1/t$ on the interval $[1,x)$, Bourbaki's method, or some other derivation), then show that it is analytic with infinite radius of convergence (where it converges uniformly and absolutely), which means that it is equal to its Taylor series at 0, which means that we can, by a general result of complex analysis, extend it to an entire function on the complex plane.



This derivation doesn't seem natural to me in the same sense as Bourbaki's derivation of the real exponential, since it requires that we notice some analytic properties of the function, instead of relying on its unique algebraic and topological properties.




Does anyone know of a derivation similar to Bourbaki's for the complex exponential?


Answer



I think essentially the same characterization holds. The complex exponential is the unique Lie group homomorphism from $\mathbb{C}$ to $\mathbb{C}^*$ such that the (real) derivative at the identity is the identity matrix.


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...