Wednesday, October 31, 2018

calculus - How to compute the integral $\int \sin(1-x^2)\,dx$


I tried substitution, $u=1-x^2,\, du = -2x\,dx ,\, dx = -\frac{du}{2x}$, yielding:



\begin{align} & \int \sin u \left(-\frac{du} 2 \right) \\[10pt] = {} & -\frac{1}{2}\int \sin(u)\,du \\[10pt] = {} & -\frac{1}{2}\cos u \\[10pt] = {} & -\frac{1}{2}\cos (1-x^2) \end{align}


WolframAlpha is showing something completely different, however. What's wrong with my solution?


Answer



Your substitution is wrong: $dx = -\frac{du}{2x}$ still depends on $x$, so the factor $\frac{1}{2x}$ cannot be pulled out of the integral as the constant $\frac12$.


Set $$1 - x^2 = a ~~~~~~~ dx = \frac{-1}{2 \sqrt{1 - a}} da$$


Hence


$$-\frac{1}{2}\int \frac{\sin(a)\ da}{\sqrt{1 - a}}$$


Elementary knowledge of analysis tells you that this is a Fresnel-type integral, and the solution can be written as


$$-\frac{1}{2}\sqrt{2 \pi }\cos(1) \left( S\left(\frac{\sqrt{2-2 x}}{\sqrt{\pi }}\right)-\cot(1) C\left(\frac{\sqrt{2-2 x}}{\sqrt{\pi }}\right)\right)$$


where $S$ and $C$ stand for the Fresnel sine and cosine integrals,



and $\cot$ stands for the cotangent.


More on Fresnel Integrals:


https://en.wikipedia.org/wiki/Fresnel_integral
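As a concrete cross-check (my addition, not part of the original answer): splitting $\sin(1-x^2)=\sin(1)\cos(x^2)-\cos(1)\sin(x^2)$ expresses the antiderivative through the same Fresnel functions. The sketch below assumes SciPy's convention $S(z)=\int_0^z \sin(\pi t^2/2)\,dt$, $C(z)=\int_0^z \cos(\pi t^2/2)\,dt$ and compares the closed form against numerical quadrature.

```python
# Hedged sanity check: antiderivative of sin(1 - x^2) via Fresnel integrals,
# using sin(1 - x^2) = sin(1)cos(x^2) - cos(1)sin(x^2).
import numpy as np
from scipy.integrate import quad
from scipy.special import fresnel

def F(x):
    # int_0^x sin(1 - u^2) du in closed form (SciPy's S, C convention)
    s, c = fresnel(x * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * (np.sin(1) * c - np.cos(1) * s)

x = 1.7
numeric, _ = quad(lambda u: np.sin(1 - u**2), 0, x)
print(F(x), numeric)  # the two numbers should agree to quad's tolerance
```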


abstract algebra - Computing minimal polynomial in finite field F8


In a finite field $F_q$, I've read that one can get the minimal polynomial $f(z)$ of an element $\beta \in F_q$ using this formula:



$$f(z) = (z-\beta)(z-\beta^2)(z-\beta^4)(z-\beta^8)...$$


I'm trying to prove this to myself in the case of the finite field $F_8 = F_2[x]/(x^3 + x + 1)$


The elements of this field are:


$$0, 1, x, x^2, x^3\equiv (x+1), x^4\equiv (x^2+x), x^5\equiv (x^2+x+1), x^6\equiv (x^2 +1)$$


Let's say I want to find the minimal polynomial of $x$, which should also be the minimal polynomial for its conjugates $x^2$ and $x^4 \equiv x^2+x$. Since $x^8\equiv x$, I can ignore the higher powers.


Since coefficients are in $F_2$, $+1$ and $-1$ are considered equivalent:


$f(z) = (z+x)(z+x^2)(z+x^4)$


$f(z) = z^3 + z^2(x^4 + x^2 + 1) + z(x^6 + x^5 + x^3) + x^7$


Using the conversions I've listed above, I can replace all the powers of $x^3$ or higher with lower-degree polynomials:


$f(z) = z^3 + z^2((x^2+x) + x^2 + 1) + z((x^2+1) + (x^2+x+1) + (x+1)) + 1$



$f(z) = z^3 + z^2(x + 1) + z + 1$


This isn't what I'm expecting... I would have expected the $z^2$ term to go to zero. Am I making an algebra mistake here?


Answer



The $z^2$ term is $x^4 + x^2 + x$, not $x^4+x^2+1$ as you have written.
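For anyone who wants to double-check the algebra mechanically, here is a small SymPy sketch (my addition, assuming SymPy is available). It expands $(z+x)(z+x^2)(z+x^4)$ and reduces each coefficient modulo $2$ and modulo $x^3+x+1$, confirming that the $z^2$ term vanishes and the minimal polynomial is $z^3+z+1$.

```python
# Expand (z + x)(z + x^2)(z + x^4) over F_2[x]/(x^3 + x + 1).
from sympy import Poly, expand, symbols

x, z = symbols('x z')
modulus_poly = Poly(x**3 + x + 1, x, modulus=2)

def reduce_coeff(c):
    # reduce a coefficient in F_2[x] modulo x^3 + x + 1
    return Poly(c, x, modulus=2).rem(modulus_poly).as_expr()

f = expand((z + x) * (z + x**2) * (z + x**4))
coeffs = [reduce_coeff(c) for c in Poly(f, z).all_coeffs()]
print(coeffs)  # [1, 0, 1, 1], i.e. f(z) = z^3 + z + 1
```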


number theory - If $a \mid m$ and $(a + 1) \mid m$, prove $a(a + 1) \mid m$.



Can anyone help me out here? Can't seem to find the right rules of divisibility to show this:




If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.


Answer



It is not surprising that you are finding this difficult, because it goes beyond basic divisibility rules -- it rather requires something which is essentially equivalent to the uniqueness of prime factorization. [Edit: Actually this comment is incorrect -- as Robin Chapman's answer shows, it is possible to prove this using just divisibility rules. In particular it is true in any integral domain.]



I assume $a$ and $m$ are positive integers. The first observation is that $a$ and $a+1$ are relatively prime: i.e., there is no integer $d > 1$ -- or equivalently, no prime number-- which divides both $a$ and $a+1$, for then $d$ would have to divide $(a+1) - a = 1$, so $d = 1$.



Now the key step: since $a$ divides $m$, we may write $m = aM$ for some positive integer $M$. So $a+1$ divides $aM$ and is relatively prime to $a$. I claim that this implies $a+1$ divides $M$. Assuming this, we have $M = (a+1)N$, say, so altogether



$m = aM = a(a+1)N$, so $a(a+1)$ divides $m$.




The claim is a special case of:



(Generalized) Euclid's Lemma: Let $a,b,c$ be positive integers. Suppose $a$ divides $bc$ and $a$ is relatively prime to $b$. Then $a$ divides $c$.



A formal proof of this requires some work! See for instance



http://en.wikipedia.org/wiki/Euclid's_lemma



In particular, proving this is essentially as hard as proving the fundamental theorem of arithmetic.
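For completeness, here is a sketch of the divisibility-only route alluded to above (my paraphrase, not the original text). Since $(-1)\cdot a + 1\cdot(a+1) = 1$, multiplying through by $m$ gives

$$m = -am + (a+1)m.$$

Now $(a+1) \mid m$ makes the first term $-am$ divisible by $a(a+1)$, and $a \mid m$ makes the second term $(a+1)m$ divisible by $a(a+1)$; hence $a(a+1) \mid m$, with no appeal to prime factorization.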


Tuesday, October 30, 2018

pi - This proof is wrong. How? (See image in link below)




Came across this on the internet. I have some ideas regarding this but I wanted to know more such reasonings.
Proof of Pi


Answer



Fundamentally, you have jumped in without a definition of the length of an arc.



The circumference isn't approximated by the sum of lengths of the lines drawn as shown in the image, but by the sum of the hypotenuses of the right-angled triangles formed around the edge of the circle.
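A quick numerical illustration of that distinction (my addition): for a circle of diameter $1$, the chord lengths converge to $\pi$, while the axis-aligned "staircase" length stays at $4$ no matter how fine the subdivision.

```python
# Chords approximate arc length; staircase steps do not.
import numpy as np

for n in [4, 64, 1024]:
    t = np.linspace(0, 2 * np.pi, n + 1)
    x, y = 0.5 * np.cos(t), 0.5 * np.sin(t)          # circle of diameter 1
    chords = np.hypot(np.diff(x), np.diff(y)).sum()   # -> pi
    staircase = (np.abs(np.diff(x)) + np.abs(np.diff(y))).sum()  # -> 4
    print(n, round(chords, 6), round(staircase, 6))
```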


convergence divergence - Show that the sequence $(T_n)_{n\geq 1}$ converges in probability to the constant $2p$



Let $X_n \sim \operatorname{Bernoulli}(p)$. Let $Y_n = X_n + X_{n+1}$.
Let $T_n = \frac{1}{n}\sum_{i=1}^{n} Y_i$.
I want to show that the sequence $(T_n)_{n\geq 1}$ converges in probability to the constant 2p.



I found that $E[T_n] = 2p$ and that $\operatorname{Var}[T_n] = 2p(1-p)\frac{2n-1}{n^2}$.



My definition of convergence in probability is the following:
$$\forall \epsilon > 0 \space\mathbb{P}(\vert T_n - 2p \vert > \epsilon) \to 0$$




I can also use the following criterion:



Convergence in probability iff $$\lim_{n\to\infty} \mathbb{E}\Big[\frac{\vert T_n - 2p\vert}{\vert T_n - 2p\vert + 1}\Big] = 0$$



To me using the criterion here seems smart because I already know that the expected value is $2p$, but I am not sure how to proceed. Any hints?


Answer



Claim. If $\mu_n = \mathbf{E}(T_n) \to \mu$ and $\sigma_n^2 = \mathbf{V}\mathrm{ar}(T_n) \to 0$ then $T_n \to \mu$ in $\mathscr{L}^2$ and, hence, in probability too.



Proof. We have $\mathbf{E}(|T_n - \mu|^2) = \mathbf{E}(|T_n - \mu_n|^2) + 2(\mu_n - \mu) \mathbf{E}(T_n - \mu_n) + (\mu_n - \mu)^2 \to 0,$ since the middle term vanishes ($\mathbf{E}(T_n - \mu_n) = 0$) while the other two terms are $\sigma_n^2 \to 0$ and $(\mu_n - \mu)^2 \to 0$. Q.E.D.
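A quick simulation (my addition) makes the concentration visible: with $p=0.3$ the average $T_n$ should sit near $2p=0.6$ for large $n$.

```python
# Monte Carlo illustration of T_n -> 2p in probability.
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 100_000
x = rng.binomial(1, p, size=n + 1)   # X_1, ..., X_{n+1}
y = x[:-1] + x[1:]                   # Y_i = X_i + X_{i+1}
print(y.mean(), 2 * p)               # T_n close to 0.6
```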



divisibility - Number theory problem divides

In class today we were talking about proving the definition of 'divides' and the teacher never got to finish this proof.





if $a$ divides $b^2$, then $a$ divides $b$.




First line was:
Let $a$ and $b$ be integers, with $a > 0$ since you can't divide by $0$. If $a$ divides $b^2$, then there exists an integer $q$ such that $b^2 = aq$. I would assume the next lines may be trying to show that $b^2$ is a product of some integer and itself, and that integer is always divisible by the same $q$? Not sure here..

elementary number theory - Can you show that $3n+1$ is not divisible by $5$ using congruences?



I'm trying to prove that the difference of two consecutive cubes is never divisible by $5$, and I got to a point where I would have to prove that $3n+1$ is not divisible by $5$, where $n$ is an integer. This problem is from a section that deals with congruences and that's why I'd like to know how to show that $3n+1$ is not divisible by $5$ using congruences. There might be better ways to solve the actual problem, but for now I'm just interested in the $3n+1$ part.


Answer



$n = 3 \Rightarrow 3n+1 = 10 \equiv 0 \mod 5$. The steps you took to get to this point were not valid.




The difference of two consecutive cubes is $(n+1)^3 - n^3 = 3n^2 + 3n + 1$. A systematic way to check if this is ever divisible by $5$ is to check for $n = 0, 1, 2, 3, 4$. If none of these are, then the quantity is not divisible by $5$ for any integer $n$. Why?



Any integer can be written as $5m + n$ where $n = 0, 1, 2, 3, 4$ and $m \in \mathbb{Z}$. Then $$3(5m+n)^2 + 3(5m+n) + 1 = 3(25m^2 + 10mn + n^2) + 3(5m + n) + 1 \equiv 3n^2 + 3n + 1 \mod 5.$$
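The five-case check is short enough to do by machine (my addition):

```python
# 3n^2 + 3n + 1 mod 5 for the five residue classes of n.
print([(3 * n * n + 3 * n + 1) % 5 for n in range(5)])
# -> [1, 2, 4, 2, 1]: never 0, so 5 never divides it
```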


abstract algebra - Tensor product and compositum of linearly disjoint field extensions



Let $k$ be a field and $K/k$ and $L/k$ be two field extensions that are linearly disjoint over $k$. If $K$ and $L$ are finite extensions then it easily follows that the compositum $KL$ and the tensor product $K\otimes_k L$ are isomorphic as fields.



My question, is this still true when $K$ and $L$ are infinite extensions?



Edit: It seems unlikely if neither $K$ nor $L$ is algebraic as pointed out below by Jyrki Lahtonen. So, let us assume that $L$ is algebraic over $k$.


Answer



With the assumption that $L/k$ is algebraic this is indeed correct. An outline of an argument is as follows:




Since $L$ is an algebraic extension, without loss of generality we may assume that it is finite (since an algebraic extension is a union of its finite subextensions). It is clear (at least to me) that $K\otimes_k L$ is a domain (this is true even if $L/k$ is not algebraic).



Let $\{l_1,\ldots,l_n\}$ be a basis of $L/k$; then $1\otimes l_1,\ldots, 1\otimes l_n$ is a basis for $K\otimes_k L$ over $K$. Therefore $K\otimes_k L$ is a finitely generated $K$-module which is also a domain. Then it must indeed be a field.


limits - Evaluate $\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}$.










Evaluate
$$\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}.$$




Can anyone help me with this? I have no idea how to start with. Thank you.


Answer



Let's work it out elementarily by wisely applying Cauchy-d'Alembert criterion:



$$\lim_{n\to\infty} \frac{n!^{\frac{1}{n}}}{n}=\lim_{n\to\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}} = \lim_{n\to\infty} \frac{(n+1)!}{(n+1)^{(n+1)}}\cdot \frac{n^{n}}{n!} = \lim_{n\to\infty} \frac{n^{n}}{(n+1)^{n}} =\lim_{n\to\infty} \frac{1}{\left(1+\frac{1}{n}\right)^{n}}=\frac{1}{e}. $$
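A numerical check of the value (my addition), using $\log\Gamma$ to avoid overflowing $n!$:

```python
# (n!)^(1/n) / n = exp(lgamma(n+1)/n - log(n)) -> 1/e
import math

for n in [10, 100, 10_000]:
    print(n, math.exp(math.lgamma(n + 1) / n - math.log(n)))
print(1 / math.e)  # ~0.367879
```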



Also notice that by applying the Stolz–Cesàro theorem you get the celebrated limit:



$$\lim_{n\to\infty} (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}} = \frac{1}{e}.$$




The sequence $L_{n} = (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}}$ is called Lalescu sequence, after the name of a great Romanian mathematician, Traian Lalescu.



Q.E.D.


Monday, October 29, 2018

real analysis - Showing Bounded Derivative $\implies$ Lipschitz Function (Uniformly Continuous)


Let $I$ be an interval in $\mathbb{R}$ (bounded or unbounded). Let $f: I \rightarrow \mathbb{R}$ be differentiable, with the property that $|f'(x)| \le M, \forall x \in I$.




Show that $|f(x) - f(y)| \le M |x - y|$, for all $x \in I$.




Essentially, I need to show that a function whose derivative is bounded is uniformly continuous.



I've seen the answer posted at Prove that a function whose derivative is bounded is uniformly continuous., but it seems that answer assumes the conclusion I need to arrive at.



I'm trying to understand the solution posted above, but I don't quite see how they assume the conclusion I need to arrive at. Is the case similar for my question, where we can simply use the Mean Value Theorem? If not, how should I approach this problem? A bounded derivative means that the rate of change of the function is capped, which suggests the function itself is controlled (monotonically?). Is my line of thinking correct, or on the right track? I can't seem to put these thoughts into a formal proof. Any help on this problem would be very helpful. Thank you!
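Since the question explicitly mentions it, here is the Mean Value Theorem route in one line (my addition, a sketch rather than a full write-up): for any $x \neq y$ in $I$, the MVT gives a point $c$ between them with

$$f(x)-f(y) = f'(c)\,(x-y) \implies |f(x)-f(y)| = |f'(c)|\,|x-y| \le M\,|x-y|,$$

and uniform continuity follows by taking $\delta = \varepsilon/M$ (any $\delta$ works if $M = 0$, since then $f$ is constant).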

real analysis - Discuss the convergence of the sequence $a_1=1,\ a_{n+1}=\sqrt{2+a_n} \quad \forall n \in \mathbb{N}$


Computing the first few terms $$a_1=1, a_2=\sqrt{3}=1.732....,a_3=1.9318....,a_4=1.9828...$$ I feel that $(a_n)_{n\in \mathbb{N}}$ is bounded above by 2, although I have no logical reasoning for this. Since $(a_n)_{n\in \mathbb{N}}$ is a monotone increasing sequence, it must converge by the monotone convergence theorem, and it should converge to 2.


Can anyone help me to make this more formal? Besides, I would really appreciate if anyone could shed some light on how to find the bound and limit of such sequences (that are not in closed form but in recursion).


Answer



Hints:



$$a_1\le a_2\;\;\text{and}\;\;a_{n+1}:=\sqrt{2+a_n}\stackrel{\text{induction}}\le\sqrt{2+a_{n+1}}=:a_{n+2}$$


$$a_1\le 2\;\;\text{and}\;\; a_{n+1}:=\sqrt{2+a_n}\stackrel{\text{induction}}\le\sqrt{2+2}=2$$


The above shows your sequence is a monotone ascending one and bounded above, so its limit exists, say it is $\;w\;$, and now a little arithmetic of limits:


$$w\leftarrow a_{n+1}=\sqrt{2+a_n}\rightarrow\sqrt{2+w}$$


so $\;w=\ldots\ldots?$


Justifying why 0/0 is indeterminate and 1/0 is undefined



$\dfrac 00=x$
$0x=0$
$x$ can be any value, therefore $\dfrac 00$ can be any value, and is indeterminate.



$\dfrac 10=x$
$0x=1$
There is no such $x$ that satisfies the above, therefore $\dfrac 10$ is undefined.




Is this a reasonable or naive thought process?
It seems too simple to be true.


Answer



In the context of limits, $0/0$ is an indeterminate form (limit could be anything) while $1/0$ is not (limit either doesn't exist or is $\pm\infty$). This is a pretty reasonable way to think about why it is that $0/0$ is indeterminate and $1/0$ is not.



However, as algebraic expressions, neither is defined. Division requires multiplying by a multiplicative inverse, and $0$ doesn't have one.


calculus - A continuous function, with discontinuous derivative, but the limit must exist.



I was reading this question.




The simplest examples of continuous functions, with discontinuous derivatives in some point, are usually of the form:



$$
f(x) = \begin{cases}
x^2 \sin(1/x) &\mbox{if } x \neq 0 \\
0 & \mbox{if } x=0.
\end{cases}
$$
The derivative of $f$ is

$$
f'(x) = \begin{cases}
2 x \sin \left(\frac{1}{x}\right)-\cos \left(\frac{1}{x}\right)&\mbox{if } x \neq 0 \\
0 & \mbox{if } x=0,
\end{cases}
$$



The derivative is discontinuous because the limit of $\cos \left(\frac{1}{x}\right)$ does not exist as $x\rightarrow 0$.



Is there an example where the derivative is still discontinuous but with existing limit?




Thanks


Answer



Suppose $f$ is differentiable in some neighborhood $(x-\delta,x+\delta)$ of $x$, and $\lim\limits_{t\rightarrow x}{f'(t)}$ exists. Define $y:(x-\delta,x+\delta)\rightarrow\mathbb{R}$ such that $y(t)$ is strictly between $x$ and $t$ and
$$f(t)-f(x) = f'(y(t))(t-x)$$
for every $t\in(x-\delta,x+\delta)$. The existence of such a function is guaranteed by the Mean Value Theorem. Since $y(t)$ is between $x$ and $t$ for every $t$, this implies that $\lim\limits_{t\rightarrow x}{y(t)} = x$, and since $y(t)\ne x$ for $t\ne x$ as well, we have $\lim\limits_{t\rightarrow x}{f'(y(t))}=\lim\limits_{s\rightarrow x}{f'(s)}$ by the composition law (think of $s = y(t)$ in this substitution). This implies that
$$f'(x) = \lim\limits_{t\rightarrow x}{\frac{f(t)-f(x)}{t-x}} = \lim\limits_{t\rightarrow x}{f'(y(t))} = \lim\limits_{t\rightarrow x}{f'(t)},$$
i.e. $f'$ is continuous at $x$.







Remark: Typically, the composition law is phrased as follows: if $\lim\limits_{x\rightarrow c}{g(x)} = a$ and $f$ is continuous at $a$, then $\lim\limits_{x\rightarrow c}{f(g(x))} = \lim\limits_{u\rightarrow a}{f(u)}$. In our problem, we obviously cannot assume $f'$ is continuous at $x$, since that is what we are trying to show. However, the above conclusion still holds if we merely require that $g(x)\ne a$ if $x\ne c$ in some neighborhood of $c$. A proof of this can be found here (look for "Hypothesis 2").


Sunday, October 28, 2018

summation - Proof by induction: showing that two sums are equal





usually the tasks look like



$$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$$



or




$$\sum_{i=0}^n i^2 = 0^2 + 1^2 + 2^2 + 3^2+\cdots+n^2$$



But for the following task I have this form:



$$\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3 $$



First I am a little confused by how I approach this. Do I transform them into a complete term like in the first example? Or can I do it by just using the sums themselves? And how should I treat the square of the sum best?



The first step is pretty straight forward creating the base case. But as soon as I get into doing the "Induction Step" I am getting stuck and not sure how to proceed.




I am looking to know best practice for this case.



Edit:
This question is a little different, since here one is expected to prove the identity only by complete induction, using the sum notation.


Answer



Assume that $\displaystyle\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3$ holds for $n.$ We want to show that $\displaystyle\left(\sum_{k=1}^{n+1} k\right)^2 = \sum_{k=1}^{n+1}k^3.$ How to do it? Note that



$$\begin{align}\left(\sum_{k=1}^{n+1} k\right)^2&=\left(\sum_{k=1}^{n} k+n+1\right)^2\\&= \color{blue}{\left(\sum_{k=1}^{n} k\right)^2}+2(n+1)\sum_{k=1}^nk+(n+1)^2\\&\underbrace{=}_{\rm{induction}\:\rm{hypothesis}}\color{blue}{\sum_{k=1}^nk^3}+\color{red}{2(n+1)\sum_{k=1}^nk+(n+1)^2}\\&=\sum_{k=1}^{n+1}k^3\end{align}$$ if and only if $\displaystyle(n+1)^3=2(n+1)\sum_{k=1}^nk+(n+1)^2.$ Show this equality and you are done.
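To finish that last step (my addition): using $\sum_{k=1}^n k = \frac{n(n+1)}{2}$,

$$2(n+1)\sum_{k=1}^{n}k+(n+1)^2 = 2(n+1)\cdot\frac{n(n+1)}{2}+(n+1)^2 = n(n+1)^2+(n+1)^2 = (n+1)^3,$$

which is exactly the required equality.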


convergence divergence - What is the $I_{0}(x)$ function?


While trying to calculate the following infinite sum:


$$ \sum_{k=0}^{\infty} \frac {4^k}{(k!)^{2}}$$


I got the result: $I_{0}(4) = 11.301...$


I've never encountered this function before ($ I_{0}(x) $), can someone please describe it and explain why the above infinite sum converges to an output of this function?


I expected something having to do with the exponential function since $$ \sum_{k=0}^{\infty} \frac {\mu^k}{k!} = e^\mu $$



Answer



The modified Bessel function of the first kind has a power series expansion $$ I_{\alpha}(x)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(k+\alpha+1)}\Big(\frac{x}{2}\Big)^{2k+\alpha} $$


Taking $\alpha=0$ and using $\Gamma(k+1)=k!$, and then setting $x=4$, we get $$ I_0(4)=\sum_{k=0}^{\infty}\frac{4^k}{(k!)^2} $$ which is your sum.
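A direct numerical confirmation (my addition, assuming SciPy is installed):

```python
# Partial sums of 4^k / (k!)^2 against SciPy's I_0(4).
import math
from scipy.special import i0

partial = sum(4**k / math.factorial(k)**2 for k in range(40))
print(partial, i0(4))  # both ~11.30192
```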


algebra precalculus - Find closed formula by changing order of summation: $\sum_{i=1}^n i3^i$



Working on homework for a probability and computing class, but my ability to work with summations is rusty to say the least, so I suspect this is going to turn out pretty straightforward.



Problem asks to find a closed formula for $$\sum_{i=1}^ni3^i$$ by representing it as a double sum and changing the order of summation. I did that by following a hint from the instructor and came up with $$\sum_{k=1}^n\sum_{i=k}^n3^i,$$ but I'm not really sure what that accomplished. What's the next step? What am I looking for here?


Answer



Here is a rather detailed elaboration which might be useful.





We obtain
\begin{align*}
\color{blue}{\sum_{i=1}^ni3^i}&=\sum_{i=1}^n\left(\sum_{k=1}^i 1\right)3^i\tag{1}\\
&=\sum_{i=1}^n\sum_{k=1}^i 3^i
=\sum_{1\leq k\leq i\leq n}3^i
=\sum_{k=1}^n\sum_{i=k}^n3^i\tag{2}\\
&=\sum_{k=1}^n\sum_{i=0}^{n-k}3^{i+k}\tag{3}\\
&=\sum_{k=1}^n3^k\cdot\frac{3^{n-k+1}-1}{3-1}\tag{4}\\
&=\frac{1}{2}\sum_{k=1}^n\left(3^{n+1}-3^k\right)\tag{5}\\
&=\frac{n}{2}3^{n+1}-\frac{1}{2}\sum_{k=1}^n3^k\tag{6}\\
&=\frac{n}{2}3^{n+1}-\frac{1}{2}\cdot\left(\frac{3^{n+1}-1}{3-1}-1\right)\tag{7}\\
&=\frac{n}{2}3^{n+1}-\frac{1}{4}3^{n+1}+\frac{3}{4}\tag{8}\\
&\color{blue}{=\frac{2n-1}{4}\,3^{n+1}+\frac{3}{4}}\tag{9}
\end{align*}




Comment:





  • In (1) we represent the factor $i$ as sum.


  • In (2) we multiply out in the left-hand sum and write the index range somewhat more conveniently in the middle sum. We exchange the sums in the right-hand double-sum.


  • In (3) we shift the index of the inner sum to start from $i=0$.


  • In (4) we apply the finite geometric summation formula.


  • In (5) we do some simplifications.


  • In (6) we multiply out and do some simplifications.


  • In (7) we apply the finite geometric summation formula again.


  • In (8) and (9) we do some more simplifications.
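As a sanity check of the closed form in (9) (my addition):

```python
# Compare ((2n - 1) * 3^(n+1) + 3) / 4 with the direct sum.
for n in [1, 2, 5, 10]:
    direct = sum(i * 3**i for i in range(1, n + 1))
    closed = ((2 * n - 1) * 3**(n + 1) + 3) // 4
    print(n, direct, closed, direct == closed)
```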



Saturday, October 27, 2018

real analysis - I need to find all functions $f:\mathbb R \rightarrow \mathbb R$ which are continuous and satisfy $f(x+y)=f(x)+f(y)$

I need to find all functions $f:\mathbb R \rightarrow \mathbb R$ such that $f(x+y)=f(x)+f(y)$. I know that there are other questions that are asking the same thing, but I'm trying to figure this out by myself as best as possible. Here is how I started out:




Try out some cases:



$x=0:$
$$f(0+y)=f(0)+f(y) \iff f(y)=f(0)+f(y) \iff 0=f(0) $$
The same result is for when $y=0$



$x=-y:$
$$f(-y+y)=f(-y)+f(y) \iff f(0)=f(-y)+f(y) \iff 0=f(-y)+f(y)\iff \quad f(-y)=-f(y)$$
I want to extend the result of setting $x=-y$ to numbers other that $-1$, perhaps all real numbers or all rational numbers. I got a little help from reading other solutions on the next part:




Let $q = 1+1+1+\cdots+1$ ($q$ ones, so an arbitrary positive integer). Then
$$f(qx)=f((1+1+...+1)x)=f(x+x+...+x)=f(x)+f(x)+...+f(x)=qf(x)$$
I understood this part, but I don't understand why this helps me find all the functions that satisfy the requirement that $f(x+y)=f(x)+f(y)$, but here is how I went on:



Thus
$$f(qx)=qf(x)$$ and it should follow that
$$f \bigg (\frac {1}{q} x\bigg)= \frac{1}{q}f(x)$$ where $q\not =0$, then it further follows that
$$f \bigg (\frac {p}{q} x\bigg)= \frac{p}{q}f(x)$$ where $\frac{p}{q}$ is rational, and lastly it further follows that
$$f (ax)= af(x)$$ where $a$ is real. Thus functions of the form $f(ax)$ where $a$ is real satisfies the requirement of $f(x+y)=f(x)+f(y)$.




I don't know how much of what I did is correct\incorrect, and any help would be greatly appreciated. Also is there any way that I can say that functions of the form $f(ax)$ where $a$ is real are the only functions that satisfy the requirement of $f(x+y)=f(x)+f(y)$? Or do other solutions exist?



Again, thanks a lot for any help! (Hints would be appreciated, I'll really try to understand the hints!)
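One hint for the last step (my addition): the passage from rational to real scalars is exactly where continuity enters. For real $a$, pick rationals $q_n \to a$; then

$$f(ax) = f\Big(\lim_{n\to\infty} q_n x\Big) = \lim_{n\to\infty} f(q_n x) = \lim_{n\to\infty} q_n f(x) = a f(x),$$

and taking $x=1$ shows $f(x) = cx$ with $c = f(1)$. Without continuity, other (pathological, non-measurable) solutions exist, so these linear functions are the only continuous ones.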

real analysis - Understanding and Proving Continuity of a Function in a Broad Sense

At the moment I am studying a chapter about Sobolev spaces in a book on partial differential equations authored by Evans. I have a question about continuity of a function in a broad sense, more specifically how to show that a given function is continuous. A function could be continuous in different ways, such as Hölder, Lipschitz, or uniformly continuous. We could also define a function as being continuous in a weak or strong sense.



I am aware of the basic definition of continuity. A small change in the input should lead to a small change in the output. No jumps should occur in a graph of a function. I know a little why it is important. For instance: Picard-Lindelof theorem.




I asked myself whether I could tell that a function is continuous by just plotting it in a plotting tool and inspecting the graph: no jumps in the graph would mean that it is continuous. However, doing this would lead to a few confusions. For instance, how can I see that it is Hölder or Lipschitz continuous? What is the Lipschitz constant? And if this worked so well, why would we need a weak formulation anyway? Why do we not just put the non-weakly-formulated function in a program, plot it, and draw our conclusions?



About what I said about weak formulation. I do know why we use a weak formulation (according to the explanation of Wikipedia and the book that I described earlier). However, I am unable to connect all the dots.



EDIT: I am also aware of the epsilon-delta definition.

calculus - Radius of Convergence of $\sum_{n = 0}^{\infty} \frac{(-1)^n n!\,x^n}{n^n}$




I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide explanations, and I'd really, really appreciate it if someone could explain why the answer is $e$.




which of the following is the radius of the convergence of the series $\large{\sum\limits_{n=0}^\infty\frac{(-1)^n n!x^n}{n^n}}$



$(A) \quad 0,\qquad (B) \quad\frac1e,\qquad (C)\quad1,\qquad (D) \quad e,\qquad (E) \infty$




Thank you all so much.


Answer




Denote $\sum a_n x^n$ the given series then



$$\left\vert\frac{a_{n+1}}{a_n}\right\vert=\left(1+\frac1n\right)^{-n}\xrightarrow{n\to\infty}\frac1e$$
so by the ratio test, the radius of convergence is $e$.


summation - Evaluate the sum of this series


Please help me find the sum of this series:


$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{1}{2^2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{8}{9}\cdot\frac{1}{2^3} + \cdots$$


All I could figure out was to find the $n^{\text{th}}$ term as:


$$a_n = \frac{2 \cdot (2+3) \cdots(2+3(n-1))}{3 \cdot 6 \cdot 9 \cdots 3(n-1)} \cdot\frac{1}{2^{n-1}}$$


What To do ahead of it. I don't know. Please help.



Answer



Let $S$ denote the sum. We write each term (with indices starting at $0$) as


$$ \left( \prod_{k=1}^{n} \frac{3k-1}{3k} \right) \frac{1}{2^n} = \frac{\prod_{k=0}^{n-1} (-\frac{2}{3}-k)}{n!} \left(-\frac{1}{2}\right)^n = \binom{-2/3}{n} \left(-\frac{1}{2}\right)^n. $$


Then we easily recognize $S$ as a binomial series and hence


$$ S = \sum_{n=0}^{\infty}\binom{-2/3}{n} \left(-\frac{1}{2}\right)^n = \left(1 - \frac{1}{2}\right)^{-2/3} = 2^{2/3}.$$
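A quick numeric check of that value (my addition): $2^{2/3} \approx 1.5874$.

```python
# Partial sums of the series approach 2**(2/3).
s, term = 0.0, 1.0
for n in range(60):
    s += term
    term *= (3 * n + 2) / (3 * n + 3) / 2   # next factor (3k-1)/(3k) * 1/2
print(s, 2 ** (2 / 3))
```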


Friday, October 26, 2018

real analysis - Is there a way to show that the sum of distinct square roots of prime numbers is irrational?

Is there a way to show that the sum of distinct square roots of prime numbers is irrational? For example, $$\sqrt2+\sqrt3+\sqrt5 +\sqrt7+\sqrt{11}+\sqrt{13}+\sqrt{17}+\sqrt{19}$$ should be an irrational number.



One approach I used is to let the sum be a solution of an even polynomial $f(x)$ with integer coefficients and to prove by induction that, after adjoining another $\sqrt{p_{k+1}}$, the new polynomial can be written as $$f(x+\sqrt{p_{k+1}})f(x-\sqrt{p_{k+1}})$$



where $$f(x\pm\sqrt{p_{k+1}})=P(x)\pm Q(x)\sqrt{p_{k+1}},$$



where $P(x)$ is an even plynomial and $Q(x)$
is an odd polynomial.




The new polynomial can be written as $$P^{2}(x)- Q^{2}(x)p_{k+1}.$$



Assume it has a rational solution $a$; then we must have $$P(a)=Q(a)=0.$$



My calculation stopped here since I can't find any contradiction result from this. Can anyone continue this proof, or has other better way to solve this? Thanks!

Basic theory about divisibility and modular arithmetic




I am awfully bad with number theory so if one can provide a quick solution of this, it will be very much appreciated!



Prove that if $p$ is a prime with $p \equiv 1 \pmod{4}$, then there is an integer $m$ such that $p$ divides $m^2 +1$.


Answer



I will assume that you know Wilson's Theorem, which says that if $p$ is prime, then $(p-1)!\equiv -1\pmod{p}$.



Let $m=\left(\frac{p-1}{2}\right)!$. We show that if $p\equiv 1\pmod{4}$, then $m^2\equiv -1\pmod{p}$. This implies that $p$ divides $m^2+1$.



The idea is to pair $1$ with $p-1$, $2$ with $p-2$, $3$ with $p-3$, and so on until at the end we pair $\frac{p-1}{2}$ with $\frac{p+1}{2}$. To follow the argument, you may want to work with a specific prime, such as $p=13$. So we pair $1$ with $12$, $2$ with $11$, $3$ with $10$, $4$ with $9$, $5$ with $8$, and finally $6$ with $7$.




Thus for any $a$ from $1$ to $\frac{p-1}{2}$, we pair $a$ with $p-a$. Note that $a(p-a)\equiv -a^2\pmod{p}$.



So the product of all the numbers from $1$ to $p-1$ is congruent modulo $p$ to the product
$$(-1^2)(-2^2)(-3^2)\cdots\left(-\left(\frac{p-1}{2}\right)^2\right).$$
This is congruent to
$$(-1)^{\frac{p-1}{2}}m^2.$$
But $p-1$ is divisible by $4$, so $\frac{p-1}{2}$ is even, and therefore our product is congruent to $m^2$.



But our product is congruent to $(p-1)!$, and therefore, by Wilson's Theorem, it is congruent to $-1$.




We conclude that $m^2\equiv -1\pmod{p}$, which is what we wanted to show.
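A quick spot check of the construction (my addition):

```python
# For primes p = 1 (mod 4), m = ((p-1)/2)! satisfies m^2 = -1 (mod p).
import math

for p in [5, 13, 17, 29, 101]:
    m = math.factorial((p - 1) // 2)
    print(p, pow(m, 2, p) == p - 1)   # True: m^2 = p - 1 = -1 mod p
```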


elementary set theory - Cardinality of a set A is strictly less than the cardinality of the power set of A



I am trying to prove the following statement but have trouble comprehending/going forward with some parts! Here is the statement:




If $A$ is any set, then $|A|$ $<$ $|P(A)|$





Here is what I have so far:



We need to show that there is an injection from $A$ to $P(A)$ but not a surjection.



A natural choice for an injection is the function $ f(x)$ $=$ $\{x \}$, which in plain English, takes any element $x$ (that is in $A$) and sends it to the one-element set $\{x \}$. Thus $f(x)$ is injective!



To show that there is no surjection, for the sake of contradiction, assume there is a surjection. Here is where I start to have trouble. Surjectivity means that every element of the co-domain is the image of some element of the domain, correct? Consequently, in this particular case, every set in $P(A)$ would be "matched" to some element of $A$, right?



If the above is correct, my problem arises here. I am not sure how to prove that $f$ is not surjective. Unfortunately, I am easily confused by notation so please explain in English.
Thank you in advance!! :)



Answer



What you want here is the so-called diagonal argument. The idea is to show that no matter what function $f:A\to\wp(A)$ you look at, you can find a subset $S_f$ of $A$ that is not in the range of $f$. If you can do that, you’ve shown that there is no map of $A$ onto $\wp(A)$ and therefore certainly no bijection from $A$ to $\wp(A)$.



To build the set $S_f$, imagine that you could somehow go through the set $A$ one element at a time. You look at an element $a\in A$, and one of two things must be true: either $a\in f(a)$, or $a\notin f(a)$. (Remember, $f(a)$ is some subset of $A$, so it’s meaningful to ask whether that subset contains $a$.) Since we’re building the set $S_f$ to suit ourselves, we get to decide whether $a\in S_f$ or not, and we’ll decide in exactly the opposite way from the function $f$: if $a\notin f(a)$, we’ll put $a$ into $S_f$, and if $a\in f(a)$, we won’t put $a$ into $S_f$. After we’ve done this for each $a\in A$, our set $S_f$ will contain exactly those $a\in A$ such that $a\notin f(a)$:



$$S_f=\{a\in A:a\notin f(a)\}\;.$$



For each $a\in A$, therefore, the sets $S_f$ and $f(a)$ differ in how they treat $a$: if $a\in f(a)$, then $a\notin S_f$, and if $a\notin f(a)$, then $a\in S_f$.



That’s almost the entire argument: all you have to do to finish it off is explain why this ensures that for $S_f$ is not the set $f(a)$ for any $a\in A$ and why this implies that $S_f$ is not in the range of $f$ and hence that $f$ is not a surjection.
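To see the construction in action on a tiny example (my addition):

```python
# For any f : A -> P(A), S_f = {a : a not in f(a)} avoids every f(a).
A = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {0, 2}}        # an arbitrary choice of f

S_f = {a for a in A if a not in f[a]}
print(S_f)                                   # {1}
print(all(S_f != f[a] for a in A))           # True: S_f misses the range
```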



Thursday, October 25, 2018

analysis - Space of continuous functions linear operator eigenvalues



Let $V$ be the vector space of continuous functions from $\mathbb R$ to $\mathbb R$. Let $T$ be the linear operator on $V$ defined as $$(Tf)(x)=\int_0^x f(t)\,dt$$

Prove that $T$ doesn't have eigenvalues.



I want to show that it doesn't exist any continuous function $f \neq 0$ with $$(1) \space \int_0^x f(t)\,dt=\lambda f(x), \space \text{for some} \space\lambda \space\text{and for all }x \in \mathbb R $$



So suppose it does; then $\lambda f(0)=\int_0^0f(t)\,dt=0$, from which it follows that $f(0)=0$ (when $\lambda \neq 0$). From equation (1) we also arrive at the inequality $$|\lambda||f(x)| \leq |x|\max |f(t)|$$



where $\max |f(t)|$ is the maximum the function attains on the $[0,x]$ or $[x,0]$ respectively.



I would like to arrive to an absurd but I couldn't, any hints or suggestions would be greatly appreciated.


Answer




It follows from your equation (1) that $f$ is differentiable and
$f(x) = \lambda f'(x)$ for all $x \in \mathbb R$. (If $\lambda = 0$, equation (1) already forces $\int_0^x f(t)\,dt = 0$ for all $x$, and differentiating gives $f \equiv 0$.) For $\lambda \neq 0$, the general solution of
this differential equation is $f(x) = C \exp(x/\lambda)$ for some $C \in \mathbb R$.
Since $f(0) = 0$, $f$ is identically zero.


Compute the infinite sum of an arithmetico-geometric series $\sum_{i=0}^{\infty} \frac{i}{2^i}$




I am trying to compute the sum



$\sum_{i=0}^{\infty} \frac{i}{2^i}$



which I know should be equal to $2$, but I cannot prove it.



If I am not mistaken, it should be an arithmetico-geometric series (Wikipedia), hence the title.




Any help greatly appreciated!


Answer



Hint



Consider the series $$\sum_{i=0}^{\infty} {i}{x^i}=x \sum_{i=0}^{\infty} {i}{x^{i-1}}=x \frac {d}{dx}\sum_{i=0}^{\infty} {x^{i}}$$ Now you have a geometric series. Compute its sum, take its derivative, multiply by $x$ and replace $x$ by $\frac {1}{2}$.



I am sure that you can take it from here.
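Carrying the hint through (my addition):

$$\sum_{i=0}^{\infty} x^{i}=\frac{1}{1-x},\qquad x\,\frac{d}{dx}\left(\frac{1}{1-x}\right)=\frac{x}{(1-x)^{2}},\qquad \left.\frac{x}{(1-x)^{2}}\right|_{x=1/2}=\frac{1/2}{1/4}=2,$$

valid for $|x|<1$, so the sum is indeed $2$.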


Power series solution of $ f(x+y) = f(x)f(y) $ functional equation




Here on StackExchange I read a lot of interesting questions and answers about functional equations, for example a list of properties and links to questions is Overview of basic facts about Cauchy functional equation.



I'm interested in the following problem:
if $f:\mathbb{R} \rightarrow \mathbb{R}$ is a continuous function verifying the functional equation $f(x+y)=f(x)f(y), \ \forall x,y\in \mathbb{R}$, find its non identically zero solution using power series.



My attempt so far using power series:
let
$$f(x) = \sum_{n=0}^{\infty} a_{n} \, x^{n}$$
so
$$f(y) = \sum_{n=0}^{\infty} a_{n} \, y^{n} $$
and

$$f(x+y) = \sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}$$



The functional equation $f(x+y)=f(x)f(y)$ leads to
$$\sum_{n=0}^{\infty} a_{n} \, (x+y)^{n}=\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n}$$



Using the binomial theorem
$$(x+y)^{n} = \sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k}$$
and the Cauchy product of series
$$\sum_{n=0}^{\infty} a_{n} \, x^{n}\sum_{n=0}^{\infty} a_{n} \, y^{n} = \sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$
it follows
$$\sum_{n=0}^{\infty} a_{n} (\sum_{k=0}^{n}\binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$

$$\sum_{n=0}^{\infty}(\sum_{k=0}^{n} a_{n} \binom{n}{k}x^ky^{n-k})=\sum_{n=0}^{\infty}(\sum_{k=0}^n a_k a_{n-k}x^k y^{n-k})$$



Now I need to equate the coefficients:
$$\forall n\in\mathbb N, \;\;\;\; \; a_{n} \binom{n}{k} = a_k a_{n-k} \;\; \textrm{for } k= 0,1,...,n $$



The first equation, for $n=0$, is $a_0=a_0a_0$, that is $a_0(a_0-1)=0$ with solutions $a_0=0$ and $a_0=1$. If $a_0=0$ every coefficient would be zero, so we have found the first term of the power series: $a_0=1$.



Now the problem is to determine the remaining coefficients. I tried, but it's too difficult to me.


Answer



From $a_n{n\choose n-1} = a_{n-1}a_1$ we have $a_n = a_{n-1}\dfrac{a_1}{n}$. So $a_n = \dfrac{a_1^n}{n!}$.




We know that the functional equation has as solutions the exponential functions $f(x) = a^x$ for some positive real number $a$. We are interested to know if there is a relation between $a$ and the coefficient $a_1$.



Let us call $f_{a_1}(x)$ the solution of the functional equation where the coefficients are $(a_1)^n/n!$ and let $e$ be the real number defined by $f_1(x)$, i.e. $f_1(x) = e^x$. Then the series expansion tells us that $f_1(a_1x) = f_{a_1}(x)$, i.e. $e^{a_1x} = a^x$. For $x = 1$ we have that $e^{a_1} = a$.



From the series expansion, one sees that $e^x$ is a strictly increasing function and it's continuous by definition. Thus it has a continuous inverse. Let us call $\ln(x) = f^{-1}_{1}(x)$. Then $a_1 = \ln(a)$.


Induction proof for a summation: $\sum_{i=1}^n i^3 = \left[\sum_{i=1}^n i\right]^2$




Prove by induction: $\sum_{i=1}^n i^3 = \left[\sum_{i=1}^n i\right]^2$. Hint: Use $k(k+1)^2 = 2(k+1)\sum i$.



Basis: $n = 1$ $\sum_{i=1}^1 i^3 = \left[\sum_{i=1}^1 i\right]^2 \to 1^3 = 1^2 \to 1 = 1$.



Hypothesis: Assume true for all $n \le k$.




So far I have the following:



$$\sum_{i=1}^{k+1} i^3 = (k+1)^3 + \sum_{i=1}^k i^3$$



$$(k+1)^3 + \left[\sum_{i=1}^k i\right]^2$$


Answer



For $n=k+1$, $$\sum_{i=1}^{k+1}i^3 = \sum_{i=1}^{k}i^3+(k+1)^3=(\sum_{i=1}^{k}i)^2+(k+1)^3=(\sum_{i=1}^{k}i)^2+k(k+1)^2+(k+1)^2$$



Now using the Hint: $k(k+1)^2 = 2(k+1)\sum i$.




$$=(\sum_{i=1}^{k}i)^2+2(k+1)\sum_{i=1}^k i+(k+1)^2=(\sum_{i=1}^{k+1}i)^2$$


Wednesday, October 24, 2018

real analysis - Is this a Darboux function?



Let $f(x)=x$ if $0\leq x\leq 1$ and $f(x)=x-\frac{1}{2}$ if $1 < x \leq 2$.

Questions


  • Am I right? Is $f$ a Darboux function?

  • If yes, then why do authors give complicated examples of Darboux functions like $g(x)=\sin\left(\frac{1}{x}\right)$ if $x>0$ and $g(0)=0$, or $h(x)=\left(x^2\sin\left(\frac{1}{x}\right)\right)'$, which are hard to draw near $x=0$? Even worse when they mention the Conway base 13 function.
Answer



    It doesn't satisfy the intermediate value theorem :



    $$f(\frac{9}{10}) = \frac{9}{10}$$



    $$f(\frac{11}{10}) = \frac{6}{10}$$




    But there is no $x \in [\frac{9}{10}, \frac{11}{10}]$ such that $f(x) = \frac{8}{10}$


    Probability of winning dice roll-off with a re-roll

I am looking to find the probability of winning a basic dice roll-off using a 6-sided die when one of the players can re-roll their die. The main thing that is throwing me off is that player 2 can re-roll the die but doesn't have to, and if the first roll or the re-roll equals player 1's roll then the process restarts. (A simulation sketch follows the examples below.)



    Example 1:
    Player 1 - Rolls a 2
    Player 2 - Rolls a 4 (win)




    Example 2:
    Player 1 - Rolls a 2
    Player 2 - Rolls a 1, re-rolls and gets a 5 (win)



    Example 3:
    Player 1 - Rolls a 5
    Player 2 - Rolls a 5
    At this point player 2 can call it a tie and start fresh, or use his re-roll to attempt and roll a 6, although this doesn't seem to be his best option to win.




    Example 4:
    Player 1 - Rolls a 4
    Player 2 - Rolls a 4
    At this point player 2 decides to call it a tie, and then they both re-roll. Player 2 still has the ability to then re-roll his result in this round.
    Player 1 - Rolls a 3
    Player 2 - Rolls a 2, re-rolls and gets a 1 (loss)

    Tuesday, October 23, 2018

sequences and series - Finding $\lim_{n\to\infty}\left(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + \cdots + \frac{1}{\sqrt{n^2+n}}\right)$


    I'm trying to find $\lim_{n\to\infty}(\frac{1}{\sqrt{n^2+1}} + \frac{1}{\sqrt{n^2+2}} + ... + \frac{1}{\sqrt{n^2+n}})$.


    • I tried to use the squeeze theorem, failed.

    • I tried to use a sequence defined recursively: $a_{n+1} = {a_n} + \frac{1}{\sqrt{(n+1)^2 +n+1}}$. It is a monotone growing sequence, for every $n$, $a_n > 0$. I also defined $f(x) = \frac{1}{\sqrt{(x+1)^2 +x+1}}$. So $a_{n+1} = a_n + f(a_n)$. But I'm stuck.

    How can I calculate it?


    Answer



    It looks squeezable.


\begin{align} \frac{n}{\sqrt{n^2+n}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{n}{\sqrt{n^2+1}} \\ \\ \frac{1}{\sqrt{1+\frac{1}{n}}} \le \sum_{k=1}^n\frac{1}{\sqrt{n^2+k}} \le \frac{1}{\sqrt{1+\frac{1}{n^2}}} \end{align}

Both outer bounds tend to $1$ as $n\to\infty$, so by the squeeze theorem the limit is $1$.


elementary set theory - A bijection between $X \times (Y \times Z)$ and $(X \times Y) \times Z$

    Can someone help me prove that
    $$X \times (Y \times Z) \sim (X \times Y) \times Z$$

I know that there is supposed to be a bijection between these two. The first one will contain elements like $(x, (y, z))$ and the second one $((x, y), z)$. I just need some instructions; I believe I can manage on my own from that.
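The natural candidate (my addition, as a starting point) is

$$\varphi : X \times (Y \times Z) \to (X \times Y) \times Z, \qquad \varphi\bigl(x, (y, z)\bigr) = \bigl((x, y), z\bigr);$$

it remains to check that $\varphi$ is well defined, injective, and surjective (its inverse sends $((x,y),z)$ back to $(x,(y,z))$).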

    Number divisibility n



Let $S= 2^{1} + 22^{11} + 222^{111} + \cdots + 22222222222^{11111111111}$.




    1. What will be the remainder if $S$ is divided by $7$ ?



    2. Prove that $S$ is not divisible by $5$




Attempt:



The first part can be solved by using the Chinese Remainder Theorem and modular arithmetic.



Does anyone have another clue or hint to solve the problem?


    Answer



For the last part: take everything modulo $5$. Each base is $\equiv 2 \pmod 5$, and by Fermat's little theorem we only have to consider the exponents modulo $4$. All exponents beyond the first are $\equiv 11 \pmod{100}$, hence $\equiv 3 \pmod 4$, so the terms contribute $2^1 = 2$ and then $2^{3} \equiv 3$ for each of the other ten. The total is $2 + 10 \times 3 = 32 \equiv 2 \pmod 5$, which is not $0$, so the sum is not divisible by $5$.




    For the first part look at modulo $7$ values. The bases are (mod 7):



    $$2,1,5,3,4,0,2,1,5,3,4$$



    while the exponents (mod 6, because of little Fermat) are:



    $$1,5,3,1,5,3,1,5,3,1,5$$



so consider $$2^1+ 1^5 + 5^3 + 3^1+4^5+ 0^3+2^1+1^5+5^3+3^1+4^5 \pmod{7}$$




    which equals $0$. So the number is divisible by $7$.
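Both claims are easy to confirm with modular exponentiation (my addition), since Python's three-argument pow handles the huge exponents:

```python
# S mod 7 and S mod 5, term by term, without building the huge powers.
S7 = sum(pow(int('2' * k), int('1' * k), 7) for k in range(1, 12)) % 7
S5 = sum(pow(int('2' * k), int('1' * k), 5) for k in range(1, 12)) % 5
print(S7, S5)   # 0 and 2: divisible by 7, remainder 2 mod 5
```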


    Monday, October 22, 2018

calculus - How to find $\lim\limits_{x\to0}\frac{e^x-1-x}{x^2}$ without using l'Hopital's rule nor any series expansion?

    Is it possible to determine the limit


    $$\lim_{x\to0}\frac{e^x-1-x}{x^2}$$


    without using l'Hopital's rule nor any series expansion?


For example, suppose you are a student who has not studied derivatives yet (and so not even Taylor's formula or Taylor series).

    Sunday, October 21, 2018

    limits - "Proving" that $0^0 = 1$




    I know that $0^0$ is one of the seven common indeterminate forms of limits, and I found on wikipedia two very simple examples in which one limit equates to 1, and the other to 0. I also saw here: Prove that $0^0 = 1$ using binomial theorem
    that you can define $0^0$ as 1 if you'd like.



    Even so, I was curious, so I did some work and seemingly demonstrated that $0^0$ always equals 1.



    My Work:




    $$y=\lim_{x\rightarrow0^+}{(x^x)}$$



    $$\ln{y} = \lim_{x\rightarrow0^+}{(x\ln{x})} $$



$$\ln{y}= \lim_{x\rightarrow0^+}{\frac{\ln{x}}{x^{-1}}} = -\frac{\infty}{\infty} $$



    $\implies$ Use L'Hôpital's Rule



    $$\ln{y}=\lim_{x\rightarrow0^+}\frac{x^{-1}}{-x^{-2}} $$
    $$\ln{y}=\lim_{x\rightarrow0^+} -x = 0$$

    $$y = e^{0} = 1$$



    What is wrong with this work? Does it have something to do with using $x^x$ rather than $f(x)^{g(x)}$? Or does it have something to do with using operations inside limits? If not, why is $0^0$ considered indeterminate at all?


    Answer



    Someone said that $0^0=1$ is correct, and got a flood of downvotes and a comment saying it was simply wrong. I think that someone, me for example, should point out that while saying $0^0=1$ is correct is an exaggeration, calling that "simply wrong" isn't quite right either. There are many contexts in which $0^0=1$ is the standard convention.



    Two examples. First, power series. If we say $f(t)=\sum_{n=0}^\infty a_nt^n$ that's supposed to entail that $f(0)=a_0$. But $f(0)=a_0$ depends on the convention that $0^0=1$.



Second, elementary set theory: Say $|A|$ is the cardinality of $A$. The cardinality of the set of all functions from $A$ to $B$ should be $|B|^{|A|}$. Now what if $A=B=\emptyset$? There as well we want to say $0^0=1$; otherwise we could only assert that the cardinality of the set of all maps is $|B|^{|A|}$ with an exception when $A$ and $B$ are both empty.




    (Yes, there is exactly one function $f:\emptyset\to\emptyset$...)



    Edit: Seems to be a popular answer, but I just realized that it really doesn't address what the OP said. For the record, of course the OP is nonetheless wrong in claiming to have proved that $0^0=1$. It's often left undefined, and in any case one does not prove definitions...


    abstract algebra - Algebraic Closure & Algebraic Elements

I learned that the algebraic closure of the rational numbers is the field of algebraic numbers. Is this true in general, namely




    For all field $F$ and its algebraic closure $\bar F$, does $\alpha\in \bar F$ imply $\alpha$ is algebraic over $F$?




    My idea:



The set of algebraic elements in the algebraic closure $\bar F$ forms a field, since for every pair of algebraic elements $\alpha, \beta \in \bar F$, we have $\alpha+\beta, -\alpha, \alpha \beta, \alpha^{-1} \in F(\alpha, \beta)$. But $[F(\alpha, \beta):F]= [F(\alpha)(\beta): F(\alpha)][F(\alpha):F]$ is finite, so all four of them are again algebraic elements.




    I think that what I conjectured is not true. Actually, in the famous proof of the existence of algebraic closure over a field (due to E. Artin), the algebraic closure of $F$ is constructed as follows:




1. Construct a field $F_1$ over $F_0:=F$ by taking the quotient of a huge polynomial ring by a maximal ideal, so that every polynomial $p(x)\in F[x]$ has a root.

    2. Construct $F_2$, $F_3$, ... similarly.

    3. Define $E=\bigcup_{k=0}^{\infty} F_k$. Then $E$ is the algebraic closure of $F$ that we want.



If the field of all algebraic elements over any field were algebraically closed, then we would not have to construct $F_2$, $F_3$, ... sequentially. It seems that if $F$ is perfect, then every algebraic element over $F$ is contained in $F_1$, so $F_1$ is already sufficient. We wouldn't have to bother writing down the rest of the proof.




    Of course, this is not a proof, but I can’t find any counterexample. Any supporting materials can be helpful. Thanks in advance.






As Don Thousand said in the comments, this is true since it is impossible to add transcendental elements in the process I mentioned. However, I cannot come up with a proof that explains why the elements in $F_2$, $F_3$, ... remain algebraic, nor did I find any material that supports that claim. Is that true? Can you please provide a proof?

    calculus - Simplify to terms of generalized power mean



    I have the following expression${}^1$ I'd like to simplify
    $$r \equiv \left[ \frac1{x\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{y\sqrt{y}} \left( 1 - \frac{k}{y} \right) \right]\left[ \frac1{\sqrt{x}} \left( 1 - \frac{k}{x} \right) - \frac1{\sqrt{y}} \left( 1 - \frac{k}{y} \right)\right]^{-1}$$
    where $x > y > k\geq 1$.




    Some attempts led me to think that $r$ can be expressed nicely in terms of generalized means of $x,y$ like the harmonic ($p=-1$), geometric ($p \to 0$), and other powers:${}^2$
    $$\begin{align}
    H &\equiv \left( \frac1x + \frac1y\right)^{-1} & G &\equiv \sqrt{xy} & M \equiv \left( x^p + y^p \right)^{1/p} \quad \text{with perhaps} \quad p=\frac{-1}2
    \end{align}$$



    To be specific, my question is this:




    Is there a way to write it in this form $$r \overset{?}{=} \frac1x + \frac1y + \frac1{\sqrt{xy}} + {}\color{red}{??} =\frac1H + \frac1G + {}\color{red}{??} $$ such that the $\color{red}{??}$ part is "nice" in terms of $k$ and maybe $H,G$ or $M$ defined above?





I also wonder if there are some standard techniques, like examining the asymptotic form, to guess the coefficients of the proposed terms. I tried a bit but didn't get very far.



Directly pulling out $\frac1H$ and $\frac1G$ just changes $r$ into something equally inviting but not more compact.



    Any suggestions will be appreciated. Honestly, one main reason for me to think that "oh there must be some nice and more compact forms" is the glaring symmetry. I will be okay if there turn out to be none. Thanks.







    Footnote 1:
    $\quad$For the record, $r$ is part of the expression of the ratio between $\frac{\partial^2 f(u)}{ \partial u^2}$ and $\frac{\partial f(u) }{ \partial u}$, evaluated at $u=0$, where $$f(u) = \left( \frac1{\sqrt{u+y}} - \frac1{\sqrt{u+x}} \right)\sqrt{u+k}$$



    Footnote 2:
$\quad$This is a slight abuse of notation, as the generalized means should carry the averaging factors $\frac12$ or $\frac1{\sqrt{2}}$. My shorthands can also match the p-norms, which have the benefit that norms of increasing power have correspondingly decreasing magnitudes. However, p-norms don't cover $\sqrt{x y}$ the way the generalized mean does.






    Update (Nov.28th, 2017)




    With all the inverses lying around, it turns out that the best way is to express $r$ in terms of the ... you guessed it ... inverses: $v \equiv 1/x$ and $w \equiv 1/y$. I'll spare the readers of the actual algebra. Suffice to say that this is also the natural course to take to adopt the notions of p-norms; one just have to give up on unifying $1 / \sqrt{xy} = 1/G = \sqrt{vw}$ formally.


    Answer



    We can write
    $$r=\frac 1H+\frac 1G+\color{red}{\frac{1}{M-G+\frac{GM}{k}}}$$
    where
    $$M=(x^{-1/2}+y^{-1/2})^{-2}$$






    If we write

    $$r=\frac{\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}-k\left(\frac{1}{x^2\sqrt x}-\frac{1}{y^2\sqrt y}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}=\frac 1x+\frac 1y+\frac{1}{\sqrt{xy}}+A$$ then we have
    $$A=\frac{k\left(\frac 1x-\frac 1y\right)\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)}{\frac{1}{\sqrt x}-\frac{1}{\sqrt y}-k\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)}$$



    Using
    $$x+y=\frac{G^2}{H},\quad xy=G^2,\quad \frac{1}{M}=\frac 1H+\frac 2G$$
    we have
    $$\left(\frac 1x-\frac 1y\right)^2=\left(\frac 1H+\frac 2G\right)\left(\frac 1H-\frac 2G\right)\implies \frac 1x-\frac 1y=-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}$$



    $$\left(\frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}\right)^2=\frac{1}{G^2}\left(\frac 1H+\frac 2G\right)\implies \frac{1}{x\sqrt y}+\frac{1}{y\sqrt x}=\frac 1G\sqrt{\frac 1M}$$




    $$\left(\frac{1}{\sqrt x}-\frac{1}{\sqrt y}\right)^2=\frac 1H-\frac{2}{G}\implies \frac{1}{\sqrt x}-\frac{1}{\sqrt y}=-\sqrt{\frac 1H-\frac{2}{G}}$$



    $$\small\left(\frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}\right)^2=\left(\frac 1H+\frac 1G\right)^2\left(\frac 1H-\frac 2G\right)\implies \frac{1}{x\sqrt x}-\frac{1}{y\sqrt y}=-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}$$



    So, we get



    $$A=\frac{k\left(-\sqrt{\frac 1M\left(\frac 1H-\frac 2G\right)}\right)\frac 1G\sqrt{\frac 1M}}{-\sqrt{\frac 1H-\frac{2}{G}}-k\left(-\left(\frac 1M-\frac 1G\right)\sqrt{\frac 1H-\frac 2G}\right)}=\frac{1}{M-G+\frac{GM}{k}}$$


arithmetic - Multiplication of the repeating decimal $0.3333\overline{3}$ by $3$


Let's start by considering simple fractions like $\dfrac {1}{2}$ and $\dfrac {1}{3}$.


    If I choose to represent those fraction using decimal representation, I get, respectively, $0.5$ and $0.3333\overline{3}$ (a repeating decimal).


    That is where my question begins.


If I multiply either $\dfrac {1}{2}$ or $0.5$ by $2$, I end up with $1$, just as I do if I multiply $\dfrac {1}{3}$ by $3$.



    Nonetheless, if I decide to multiply $0.3333\overline{3}$ by $3$, I will not get $1$, but instead, $0.9999\overline{9}$


    What am I missing here?


    *Note that my question is different than the question Adding repeating decimals


    Answer



    Hint: compute the difference between $1$ and $0.9\bar9$. How much is that ? What do you conclude ?


limits - $\lim_{x\to0} \frac{x-\sin x}{x-\tan x}$ without using L'Hopital


    $$\lim_{x\to0} \frac{x-\sin x}{x-\tan x}=?$$


    I tried using $\lim_{x\to0} \frac{\sin x}{x}=1$.


    But it doesn't work :/


    Answer



    $$\frac{x - \sin(x)}{x - \tan(x)} = \frac{x - \sin(x)}{x^3} \cdot \frac{x^3}{x - \tan(x)}$$


    Let $x = 3y$ and $x\to 0 \implies y\to 0$ $$\lim_{x\to0} \frac{x - \sin(x)}{x^3} = L $$ $$L = \lim_{y\to0}\frac{3y - \sin(3y)}{(3y)^3} = \lim_{y\to0} \frac 3 {27} \frac{y - \sin(y)}{y^3} + \lim_{y\to0} \frac{4}{27} \frac{\sin^3(y)}{y^3} = \frac{1}{9} L + \frac 4{27} $$


    This gives $$\lim_{x\to0}\frac{x - \sin(x)}{x^3} = \frac 1 6 $$



    \begin{align*} L &= \lim_{y\to0}\frac{ 3y - \tan(3y)}{27 y^3} \\ &= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \frac{3y(1 - 3\tan^2(y )) - 3 \tan(y) + \tan^3(y)}{27y^3}\\ &= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \left( \frac 3 {27} \frac{y - \tan(y)}{y^3} + \frac 1 {27} \frac{\tan^3(y)}{y^3} - \frac 9 {27} \frac{y \tan^2(y)}{y^3 } \right )\\ &= \frac {3L}{27} + \frac 1 {27} - \frac 1 3 \\ \end{align*}


This gives the other limit as $-\frac13$; combining the two factors, $\lim_{x\to0}\frac{x-\sin x}{x-\tan x} = \frac{1/6}{-1/3} = -\frac12$.


sequences and series - Find the sum of a finite nested radical $\sqrt{2012+\sqrt{2012+\sqrt{2012+\cdots}}}$ ($2012$ times)



    $$ \sqrt{2012+\sqrt{2012+\sqrt{2012+\cdots}}} $$




    This is not an infinite series but is limited to the 2012th term. How can this expression be simplified?


    Answer



    $$x_n = \sqrt{2012 + x_{n-1}} \iff x_n^2 = 2012 + x_{n-1} \iff x_n^2 - 2012 = x_{n-1}$$
And setting $$x_{0} = 0 $$ yields the relation $$x_1^2 - 2012 = 0 \therefore (x_2^2 - 2012)^2 - 2012 = 0 \therefore ((x_3^2 - 2012)^2 - 2012)^2 - 2012 = 0$$ which you can keep unravelling until you get a polynomial equation in $x_{2012}$, and maybe a CAS can then express a nice closed form solution for the roots of that polynomial (which is somehow simpler than what you wrote). Otherwise you can solve it approximately as Ross Millikan did.



The way his solution works is that you notice that as $n \rightarrow \infty$, $x_n$ becomes the same as $x_{n-1}$, so you then get the equation $$x_\infty^2 - x_\infty - 2012 = 0$$ which is a quadratic equation you can solve to get $$x_\infty = {1 \over 2}(1 + \sqrt{8049}) $$ and it should be the case that $x_{2012} \approx x_\infty \approx 45$.


    limits - Proof of $L^p $ norm

Can someone show me how to prove this?
$$\sqrt[p]{\sum_i {a_i}^p}\to \max_i a_i \quad\text{ as }p\to\infty.$$
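One standard route (my addition, assuming $a_i \ge 0$ and finitely many terms, say $n$ of them) is a squeeze:

$$\max_i a_i \;\le\; \Bigl(\sum_{i=1}^{n} a_i^{\,p}\Bigr)^{1/p} \;\le\; \bigl(n \max_i a_i^{\,p}\bigr)^{1/p} = n^{1/p} \max_i a_i,$$

and $n^{1/p} \to 1$ as $p \to \infty$, so both sides converge to $\max_i a_i$.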

    Saturday, October 20, 2018

    calculus - Is it possible to find the limit without l'Hopital's Rule or series

    Is it possible to find $$\lim_{x\to0}\frac{\sin(1-\cos(x))}{x^2e^x}$$
    without using L'Hopital's Rule or Series or anything complex but just basic knowledge (definition of a limit, limit laws, and algebraic expansion / cancelling?)

    linear algebra - Characteristic polynomial of a matrix 7x7?




Without going through too many steps, what is the characteristic polynomial of this $7\times 7$ matrix? And why?



    \begin{pmatrix}
    5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\end{pmatrix}


    Answer



As was stated in the comments, the rank of this matrix is $1$, so it will have $6$ null eigenvalues, which means the characteristic polynomial will be of the form:




    $p(\lambda)=\alpha\,\lambda^6(\lambda-\beta) = \gamma_6\,\lambda^6 +\gamma_7\,\lambda^7$



    Using Cayley-Hamilton:



    $p(A)=\gamma_6\,A^6+\gamma_7\,A^7 =0$



    Any power of this matrix will have the same format, a positive value for all elements.



    $B=\begin{bmatrix}1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\end{bmatrix}$




    $A = 5\,B$



    $A^2 = 5^2\,7\,B$



    $...$



    $A^6 = 5^6\,7^5\,B$



    $A^7=5^7\,7^6\,B$




$p(A) = 5^6\,7^5\,(\gamma_6+35\,\gamma_7)\,B=0\Rightarrow\gamma_6=-35\gamma_7$



    So we have: $\alpha=\gamma_7$ and $\beta = 35$



    $p(\lambda)=\alpha\,\lambda^6(\lambda-35)$
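A one-liner confirms the spectrum (my addition):

```python
# Eigenvalues of the all-fives 7x7 matrix: 35 once, 0 six times.
import numpy as np

A = 5 * np.ones((7, 7))
print(np.round(np.linalg.eigvals(A).real, 6))
```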


logarithms - Prove that $\log X < X$ for all $X > 0$

    I'm working through Data Structures and Algorithm Analysis in C++, 2nd Ed, and problem 1.7 asks us to prove that $\log X < X$ for all $X > 0$.



However, unless I'm missing something, this can't actually be proven as stated. The spirit of the problem only holds true if you add several extra qualifiers, because it's relatively easy to provide counterexamples.



    First, it says that $\log_{a} X < X$ for all $X > 0$, in essence.




    But if $a = -1$, then $(-1)^{2} = 1$. Therefore $\log_{-1} 1 = 2$. Thus, we must assume
    $a$ is positive.



    if $a$ is $< 1$, then $a^2 < 1$. Therefore we must assume that $a \geq 1$.



    Now, the book says that unless stated otherwise, it's generally speaking about base 2 for logarithms, which are vital in computer science.



    However, even then - if $a$ is two and $X$ is $\frac{1}{16}$, then $\log_{a} X$ is $-4$. (Similarly for base 10, try taking the log of $\frac{1}{10}$ on your calculator: It's $-1$.) Thus we must assume that $X \geq 1$.




    ...Unless I'm horribly missing something here. The problem seems quite different if we have to prove it for $X \geq 1$.



    But even then, I need some help solving the problem. I've tried manipulating the equation as many ways as I could think of but I'm not cracking it.

real analysis - $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function such that $f\circ f=f$.



Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function such that $f\circ f=f.$ I would like to prove that if $f$ is a non-constant function then $f(x)=x$, i.e. $f$ is the identity function. I think it is sufficient to prove that $f$ is one-one, since in that case if $f(x)\neq x$ for some $x$ then $f(f(x))\neq f(x)$, which gives a contradiction. So how to prove that it is one-one? Please help. Thanks a lot.


    Answer



    This sort of problem is discussed in linear algebra, where a linear map $f$ between vector spaces is called a projection if $f\circ f = f$. Examples in linear algebra of such functions are like $\pi_1:\mathbb{R}^2\to\mathbb{R}^2$ defined by $\pi_1(x,y) = (x,0)$. So, we see that functions that behave as you want move points into an image and once they are there, they stay there no matter how many times you apply $f$. This is formulated in the following way:



    Claim: If $x$ is in the range of $f$, then $f(x) = x$.



    Proof: If $x$ is in the range of $f$, then there is a point $x'$ such that $f(x') = x$. Then we see that $$f(x) = f(f(x')) = (f\circ f)(x') = f(x') = x,$$
    which is what we wanted to show. $\blacksquare$




    So, we see that if $f$ has range $R \subset \mathbb{R}$, then $f$ restricted to $R$ is the identity function. However, outside $R$ $f$ can do anything as long as it maps into $R$. Note that all such functions $f$ have to have this property and that if a function has this property then $f\circ f = f$. So, we get the following, which is formulated above in another response.



    Theorem: Let $f:X\to X$. Then $f$ restricted to its range is the identity function if and only if $f \circ f = f$.


    Friday, October 19, 2018

limits - Proof by epsilon-delta of $\lim \limits_{x \to 1} 3x^2+1=4$



    I have some doubts proving the following:



    $$\lim \limits_{x \to 1} 3x^2+1=4$$



    My try



    $(\forall \varepsilon > 0)(\exists \space \delta > 0): (0<|x-1|< \delta \implies |3x^2+1-4| < \varepsilon)$




    $\implies 0<|x-1|< \delta \iff -\delta \lt x -1 \lt \delta \iff -\delta +2 \lt x+1 \lt \delta + 2 $



    By transitivity:



    $\implies -(\delta + 2) \lt x+1 \lt \delta + 2 \iff 0 \lt |x+1| \lt \delta + 2$



    Now, working with the epsilon part:



$|3x^2+1-4| < \varepsilon \iff 3 |x-1||x+1| \lt \varepsilon$




    Using the fact that $|x+1| \lt \delta + 2 \iff 3|x+1| \lt 3(\delta + 2)$, and $|x-1| \lt \delta$ by transitivity:



    $3|x-1||x+1| \lt 3(\delta + 2)\delta$



Then, $\varepsilon = 3(\delta + 2) \delta$ should be enough, but, if I solve the quadratic for delta:



    $\delta = \frac{-6 \pm \sqrt{36 + 12\varepsilon}}{6}$, a contradiction, because in one of the solutions $\delta \lt 0 \space \forall \varepsilon \gt 0$.



After that I tried using $\delta = 1$:




    $\iff |x-1| \lt 1 \implies |3x^2-3| \lt \varepsilon$



    $|x-1| \lt 1 \iff -1 \lt x-1 \lt 1 \iff 1 \lt x+1 \lt 3$
    $\implies -3 \lt x+1 \lt 3 \iff |x+1| \lt 3$



    With $|x-1| \lt 1$ and $|x+1| \lt 3 \iff 3|x+1| \lt 9: $



    $3 |x-1||x+1| \lt \varepsilon \implies 9 |x-1| \lt \varepsilon \iff |x-1| \lt \frac{\varepsilon}{9}$.



So $\delta = \frac{\varepsilon}{9}$ should work, but I used the fact that $\delta = 1$. How can I proceed here? I saw that I have to use $\delta = \min\{1, \frac{\varepsilon}{9}\}$, but I don't know why. Any hints? Are my steps correct, or did I do something wrong? I'm not familiar with epsilon-delta proofs.



    Answer



    Given $\varepsilon>0$, your formula $\delta:= \min\{1, \frac{\varepsilon}{9}\}$ works! Let's verify it.



    If $|x-1|<\delta$ with $\delta>0$, then
    $$|3x^2+1-4|=|3(x-1)(x-1+2)|\leq 3|x-1|(|x-1|+2)<3\delta(\delta+2).$$
    Now, by the above definition, $\delta\leq 1$ AND $\delta\leq \frac{\varepsilon}{9}$ which implies
$$3\delta(\delta+2)\leq 3\cdot \frac{\varepsilon}{9}\cdot (1+2)=\varepsilon$$
    and we are done.



    P.S. If we take just $\delta:=\frac{\varepsilon}{9}$, then we have that $$|3x^2+1-4|<3\cdot\frac{\varepsilon}{9}\cdot (\frac{\varepsilon}{9}+2).$$

    Is it true that for all $\varepsilon>0$, $$3\cdot\frac{\varepsilon}{9}\cdot (\frac{\varepsilon}{9}+2)<\varepsilon\;?$$
    No, because the LHS is quadratic in $\varepsilon$! Therefore we have to impose a bound on $\delta$.
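As an added sanity check (not part of the original answer), the choice $\delta=\min\{1,\varepsilon/9\}$ can also be tested numerically:

```python
# Numerical spot check (not a proof) of delta = min(1, eps/9):
# for x with 0 < |x - 1| < delta, verify |3x^2 + 1 - 4| < eps.
for eps in [10.0, 1.0, 0.1, 1e-3]:
    delta = min(1.0, eps / 9.0)
    for t in [0.1, 0.5, 0.9, 0.999]:          # points inside the delta-window
        for x in (1.0 - t * delta, 1.0 + t * delta):
            assert abs(3 * x**2 + 1 - 4) < eps, (eps, x)
print("delta = min(1, eps/9) passed all sampled (eps, x) pairs.")
```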


    Thursday, October 18, 2018

    functional equations - A non-zero function satisfying $g(x+y) = g(x)g(y)$ must be positive everywhere

    Let $g: \mathbf R \to \mathbf R$ be a function which is not identically zero and which satisfies the equation
    $$
    g(x+y)=g(x)g(y) \quad\text{for all } x,y \in \mathbf{R}.
    $$
    Show that $g(x)\gt0$ for all $x \in \mathbf{R}$.
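A hint, added here as a sketch (not part of the original post): every value is a square, since
$$g(x)=g\left(\tfrac{x}{2}+\tfrac{x}{2}\right)=g\left(\tfrac{x}{2}\right)^2\ge 0,$$
and no value can be $0$: if $g(x_0)=0$ for some $x_0$, then for every $x$ we would have $g(x)=g\big((x-x_0)+x_0\big)=g(x-x_0)\,g(x_0)=0$, contradicting that $g$ is not identically zero. Hence $g(x)>0$ for all $x$.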

    Alternative way to calculate sum of geometric series


    I have been asked to find $S_8$ of a geometric series in which $U_3$ is 32 and $U_6$ is 4.


I have done this by:


Step 1) Drawing a chart to find the first term and the common ratio.


Step 2) Using $S_n = a\cdot\frac{1-r^n}{1-r}$ to calculate the sum


It is seen that $S_8 = 128\cdot\frac{1-(1/2)^8}{1-(1/2)} = 255$


    However this could have been done by simply adding the values on the chart.


(chart of the terms omitted)


I want to find out if there is another way to do it that only uses formulas, as this seems a bit easy for my university engineering maths exam.


    Answer




As you pointed out, you can just add up the values in the chart to get the value of the series; this works by the very definition of a series as repeated addition. A nice result of developing the framework of series is the geometric sum formula, which is equivalent to adding up the first $n$ terms of your series (as you pointed out in step 2).


In other words, the easier way of adding up the first $n$ terms of a geometric series is the formula you pointed out in step 2, and the long way of going about it is to add up the first $n$ terms individually. I know it may seem like adding up the first 8 terms is the easy method in this case, but if you were asked to add up the first 2000 terms, then the formula is by far superior.


Another nice relationship is that the ratio of any two consecutive terms is $r$, and you could use this fact to find $r$ if you are given that the series is geometric.


In your example, $U_6/U_3 = r^3$.
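As an added aside, the whole computation also fits in a few lines of Python, if a formula-only check is wanted:

```python
# Recover the geometric series from U3 = 32 and U6 = 4 (illustrative check).
U3, U6 = 32, 4
r = (U6 / U3) ** (1 / 3)         # U6/U3 = r^3    =>  r = 1/2
a = U3 / r**2                    # U3 = a * r^2   =>  a = 128
S8 = a * (1 - r**8) / (1 - r)    # geometric sum of the first 8 terms
print(r, a, S8)                  # 0.5 128.0 255.0
```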


limits - Is $\frac{\sin(x)}{x}$ continuous at $x=0$? What's the value at $x=0$?

Is $\dfrac{\sin(x)}{x}$ continuous at $x = 0$?
What's the value at $x=0$?
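An added note (not part of the original post): as written, the function is undefined at $x=0$, so it cannot be continuous there; but since $\frac{\sin x}{x}\to 1$ as $x\to 0$, defining the value at $0$ to be $1$ yields a continuous extension (the singularity is removable). A quick numerical look:

```python
import math

# sin(x)/x is undefined at x = 0, but tends to 1 as x -> 0,
# so assigning the value 1 at x = 0 removes the singularity.
for x in [0.5, 0.1, 0.01, 0.001, 1e-6]:
    print(x, math.sin(x) / x)   # values approach 1.0
```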

    Wednesday, October 17, 2018

real analysis - Prove that $\frac{2^n}{n!}$ converges to $0$.











Prove that $\frac{2^n}{n!}$ converges to $0$.



    I can see why, I just don't get how exactly to do convergence proofs. Right now I have:



    For $n>6$, $|\frac{2^n}{n!}-0|=\frac{2^n}{n!}<\frac{2^n}{3^n}$



    and



    assuming $\frac{2^n}{3^n}<\epsilon$, $n<\frac{\ln\epsilon}{\ln\frac2 3}$




    Not sure if the last step is even right...



    (This was an exam question today)


    Answer



I'm pretty sure that last one needs to be $n > \frac{\ln \varepsilon}{\ln \frac{2}{3}}$. But given that, this works. For every $\varepsilon$ you give an explicit way to find $N\left(= \frac{\ln \varepsilon}{\ln \frac{2}{3}}\right)$ such that for all $n > N$ we have $|x_n - 0| < \varepsilon$. Definitions, ta da!
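As an added spot check of the argument (using the bound $\frac{2^n}{n!}<\left(\frac{2}{3}\right)^n$ for $n>6$ from the question):

```python
import math

# Spot check (not a proof): n > max(6, ln(eps)/ln(2/3)) forces 2^n/n! < eps.
for eps in [0.5, 0.1, 1e-4]:
    N = max(6.0, math.log(eps) / math.log(2 / 3))
    n = int(N) + 1
    assert 2**n / math.factorial(n) < eps
    print(f"eps = {eps}: n = {n}, 2^n/n! = {2**n / math.factorial(n):.3e}")
```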


calculus - Evaluation of $\lim_{x\rightarrow \infty}\frac{\ln (x)}{x}$ using Squeeze theorem

    Evaluation of $\displaystyle \lim_{x\rightarrow \infty}\frac{\ln (x)}{x}$ using sandwich Theorem (Squeeze theorem).



    $\bf{My\ Try:}$ Let $\ln (x) = y\Rightarrow x=e^y$ and when $x\rightarrow \infty, y=\ln(x)\rightarrow \infty$




$\displaystyle \lim_{y\rightarrow \infty}\frac{y}{e^y}$. Now $\displaystyle e^y = 1+\frac{y}{1!}+\frac{y^2}{2!}+\frac{y^3}{3!}+\cdots$



Now I do not understand how to calculate the given limit using the squeeze theorem.



    Help required



    Thanks
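A possible squeeze, added here as a hint (not part of the original post): from the series just written, $e^y \ge \frac{y^2}{2}$ for $y \ge 0$, hence
$$0\le \frac{y}{e^y}\le \frac{2y}{y^2}=\frac{2}{y}\qquad (y>0),$$
and letting $y\to\infty$ squeezes $\frac{y}{e^y}$, and therefore the original limit, to $0$.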

    measure theory - If $x$ is a recurrent state of a discrete Markov chain and the probability to go from $x$ to $y$ is positive, then $y$ is recurrent

    Let




    • $E$ be an at most countable Polish space and $\mathcal E$ be the Borel $\sigma$-algebra on $E$


• $X=(X_n)_{n\in\mathbb N_0}$ be a Markov chain with values in $(E,\mathcal E)$, distributions $(\operatorname P_x)_{x\in E}$ and transition matrix $$p=\left(p(x,y)\right)_{x,y\in E}=\left(\operatorname P_x\left[X_1=y\right]\right)_{x,y\in E}$$

    • $\tau_x^0:=0$ and $$\tau_x^k:=\inf\left\{n>\tau_x^{k-1}:X_n=x\right\}$$ for $x\in E$ and $k\in\mathbb N$ and $$\varrho(x,y):=\operatorname P_x\left[\tau_y^1<\infty\right]\color{blue}{=\operatorname P_x\left[\exists n\in\mathbb N:X_n=y\right]}$$



    Suppose $x\in E$ is a recurrent state, i.e. $$\varrho(x,x)=1\;.$$ Let $y\in E$ with $\varrho(x,y)>0$ .




Clearly, since $\varrho(x,y)>0$, there exists a sequence $x_1,\ldots,x_n\in E$ with $x_n=y$ and $$\operatorname P_x\left[X_i=x_i\text{ for all }i\in\{1,\ldots,n\}\right]>0\;.$$ Moreover,



    \begin{equation}

    \begin{split}
    1-\varrho(x,x)&=\operatorname P_x\left[\tau_x^1=\infty\right]\\
    &\ge\operatorname P_x\left[X_1=x_1,\ldots,X_n=x_n\text{ and }\tau_x^1=\infty\right]\;,
    \end{split}\tag 1
    \end{equation}



    but I don't understand why the last term is equal to $$\operatorname P_x\left[X_1=x_1,\ldots,X_n=x_n\right]\operatorname P_y\left[\tau_x^1=\infty\right]\;.\tag 2$$




Some authors state that $(2)$ holds by "the" (elementary? weak? strong?) Markov property, but honestly, I don't see it.




As another important note: they assume $x_i\ne x$. That might be crucial for $(2)$, but that's the next problem: I don't understand why we can choose $n$ such that $x_i\ne x$. I've seen that authors define $n$ to be the infimum of $$\left\{n\in\mathbb N:\operatorname P_x\left[X_n=y\right]>0\right\}\;,$$ but that doesn't seem to guarantee that $X$ doesn't return a number of times to $x$ before it goes to $y$.



    EDIT:$\;\;\;$The last term in $(1)$ is equal to $$\operatorname P_x\left[\tau_x^1=\infty\mid X_1=x_1,\ldots,X_n=x_n\right]\operatorname P_x\left[X_1=x_1,\ldots,X_n=x_n\right]\;.$$ Intuitively, by the memorylessness of $X$, $\operatorname P_x\left[\tau_x^1=\infty\mid X_1=x_1,\ldots,X_n=x_n\right]$ should be equal to $\operatorname P_x\left[\tau_x^1=\infty\mid X_n=x_n\right]$ and since $x_n=y$, this probability should be equal to $\operatorname P_y\left[\tau_x^1=\infty\right]$.



    How can we verify this formally?
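A sketch of the missing steps, added here as a hint (not part of the original post): if $x_i\ne x$ for all $i\le n$, then on the event $\{X_1=x_1,\ldots,X_n=x_n\}$ one has $\tau_x^1=\infty$ exactly when the chain never visits $x$ after time $n$. The (weak) Markov property at the deterministic time $n$ says that, conditionally on $X_n=y$, the post-$n$ chain is a Markov chain started at $y$, independent of the past; this gives $$\operatorname P_x\left[X_1=x_1,\ldots,X_n=x_n\text{ and }\tau_x^1=\infty\right]=\operatorname P_x\left[X_1=x_1,\ldots,X_n=x_n\right]\operatorname P_y\left[\tau_x^1=\infty\right]\;.$$ And a path avoiding $x$ can always be arranged: take any finite path from $x$ to $y$ with positive probability and keep only the segment after its last visit to $x$; each transition along that segment still has positive probability.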

calculus - Test the series $\sum_{k=0}^{\infty}\frac{(-1)^k 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}$



Use the ratio test for absolute convergence to determine
whether the series converges absolutely or diverges.



    $$\sum_{k=0}^{\infty}\frac{(-1)^k 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}$$




    I don't understand how the general term becomes this $$\lim_{k\to\infty} \frac{a_{k+1}}{a_k} = \lim_{k\to\infty} \frac{2k+3}{3k+4}$$



Obviously this absolutely converges; I just don't get the 2nd step.


    Answer



    Let us write
    $$a_k=\frac{ 1 \cdot3\cdot5 \cdots(2k+1)}{1\cdot4\cdot7\cdots(3k+1)}.$$
    Then, we have
    $$\begin{align}
    \frac{a_{k+1}}{a_k}&=\frac{1\cdot3\cdot5\cdots [2(k+1)+1]}{1\cdot 4\cdot 7\cdots [3(k+1)+1]}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
    &=\frac{1\cdot3\cdot5\cdots (2k+3)}{1\cdot 4\cdot 7\cdots (3k+4)}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\

    &=\frac{1\cdot3\cdot5\cdots(2k+1) (2k+3)}{1\cdot 4\cdot 7\cdots (3k+1)(3k+4)}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
    &=\frac{1\cdot3\cdot5\cdots(2k+1)}{1\cdot 4\cdot 7\cdots (3k+1)}\quad\cdot\quad\frac{2k+3}{3k+4}\quad\cdot\quad\frac{1\cdot 4\cdot 7\cdots (3k+1)}{1\cdot3\cdot5\cdots(2k+1)}\\
    &=\frac{2k+3}{3k+4}.
    \end{align}$$
This proves the 2nd step you asked about.
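As an added numerical check of this ratio (not part of the original answer):

```python
from fractions import Fraction

# Check (not a proof): with a_k = (1*3*5*...*(2k+1)) / (1*4*7*...*(3k+1)),
# the ratio a_{k+1}/a_k equals (2k+3)/(3k+4), which tends to 2/3 < 1.
def a(k):
    num = den = Fraction(1)
    for j in range(k + 1):
        num *= 2 * j + 1
        den *= 3 * j + 1
    return num / den

for k in range(8):
    assert a(k + 1) / a(k) == Fraction(2 * k + 3, 3 * k + 4)
print("ratio at k = 50:", float(a(51) / a(50)))   # close to 2/3
```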


    Tuesday, October 16, 2018

    calculus - Recognizing that a function has no elementary antiderivative

    Is there a method to check whether a function is integrable?



Of course, trying to solve it is one, but some questions in integration may be so tricky that I can't find the correct method to start with. So, is there a method to correctly determine whether a function is integrable?


Clarification: I am asking about indefinite integrals which have no elementary antiderivative.
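An added practical aside (not part of the original question): the decision problem for elementary antiderivatives is addressed by the Risch algorithm, and a computer algebra system can often signal the answer in practice. For instance, SymPy answering with special functions (rather than elementary ones) is a strong hint that no elementary antiderivative exists:

```python
import sympy as sp

x = sp.symbols('x')
# SymPy expresses these antiderivatives via non-elementary special
# functions (erf, Si), hinting that no elementary antiderivative exists.
print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2
print(sp.integrate(sp.sin(x) / x, x))   # Si(x)
```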

    calculus - Some integral with sine





    $$\begin{align}
    & \int_{0}^{+\infty }{\frac{\sin px}{1+{{\text{e}}^{qx}}}}\text{d}x ,\ \ p,\ q>0\\ \\ \\
    & \int_{0}^{+\infty }{{{\left( \frac{\sin x}{x} \right)}^{n}}\text{d}x} \\
    \end{align}$$


    Answer



    I will sketch an outline of the first integral. There are convergence questions, and a sum that evaluates into a special function for which we will need a derivation.




    Write the integral as



    $$\int_0^{\infty} dx \: \frac{\sin{p x} e^{-q x}}{1+e^{-q x}} $$



    Taylor expand the denominator:



    $$\int_0^{\infty} dx \: \sin{p x} \, e^{-q x} \sum_{n=0}^{\infty} (-1)^n e^{-n q x} $$



    Reverse the order of sum and integral. Again, the justification is not trivial (see, e.g., Abel's Theorem):




$$\begin{align} \sum_{n=0}^{\infty} (-1)^n \int_0^{\infty} dx \: \sin{p x} \, e^{-(n+1) q x} &= \Im\left[\sum_{n=0}^{\infty} (-1)^n \int_0^{\infty} dx \: e^{-[(n+1) q -i p]x}\right] \\ &= \Im\left[\sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)q - i p}\right] \\ &= p \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)^2 q^2 + p^2} \\ &= \frac{1}{2 p} \left [ 1 - \frac{p^2}{q^2} \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{n^2 + \frac{p^2}{q^2}} \right ] \end{align}$$



    I will not provide a derivation of the following closed form yet (perhaps in an update):



$$\sum_{n=-\infty}^{\infty} \frac{(-1)^n}{n^2 + a^2} = \frac{\pi}{a} \mathrm{csch}(\pi a) \qquad (a>0)$$



    (I can say that one derivation uses residues in the complex plane.) Using this result, we find that



    $$\int_0^{\infty} dx \: \frac{\sin{p x}}{1+e^{q x}} = \frac{1}{2 p} - \frac{\pi}{2 q} \mathrm{csch}{\pi \frac{p}{q}} $$
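The closed form can be corroborated numerically (an added check, not part of the original answer):

```python
from mpmath import mp, quad, sin, exp, pi, csch, inf

mp.dps = 25
p, q = 2, 3
numeric = quad(lambda x: sin(p * x) / (1 + exp(q * x)), [0, inf])
closed = 1 / (2 * p) - (pi / (2 * q)) * csch(pi * p / q)
print(numeric, closed)   # the two values agree
```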



sequences and series - Does $\sum_{n=1}^\infty \frac{1}{\phi(n)^s}$ have an Euler product?



    Does $$ \sum_{n=1}^\infty \frac{1}{\phi(n)^s}$$



have an Euler product and functional equation? $\phi(n)$ is the Euler phi function.



Since $\phi(n)$ is multiplicative, I think the series could have an Euler product and functional equation.



    $$ \sum_{n=1}^\infty \frac{1}{\phi(n)^s}= \sum_{n=1}^\infty \frac{a(n)}{n^s}$$




    where $a(n)$ is sequence https://oeis.org/A058277 in the OEIS.






    I did some research and found many relations between the zeta function and other special functions such as:



    $$ \sum_{n=1}^\infty \frac{\phi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}. $$


    Answer



    Firstly, $$\phi(p^k)=p^k\left(1-\frac1p\right)\qquad{\text{$k\ge1$, prime $p$}}$$




    Define $f(k)=\phi(k)^{-s}$.



    For the moment, consider the set of integers $S_N=\{k\,|\,k=p_1^{a_1}p_2^{a_2}\cdots p_N^{a_N},a_{(\cdot)}\ge1\}$.



    Then,
    $$\begin{align}
    \sum_{k\in S_N}f(k)
    &=\sum^\infty_{a_N=1}\cdots\sum^\infty_{a_1=1}
    \left[p_1^{a_1}\cdots p_N^{a_N}
\left(1-\frac1{p_1}\right)\cdots\left(1-\frac1{p_N}\right)\right]^{-s} \\

    &=\left(\prod^N_{i=1}\frac1{1-1/p_i}\right)^s\cdot\prod^N_{j=1}\sum^\infty_{a_j=1}p_j^{-a_js} \\
    &=\prod^N_{i=1}\frac{(1-1/p_i)^{-s}}{p_i^s-1}
    \end{align}
    $$



    Define $$f_i:=\frac{(1-1/p_i)^{-s}}{p_i^s-1}$$



    Now we want to find
    $\displaystyle{\sum_{k\in S^*_N}f(k)}$ where $S^*_N=\{k\,|\,k=p_1^{a_1}\cdots p_N^{a_N},a_{(\cdot)}\color{red}{\ge0}\}$




    Summing $f(k)$ over all the elements in $S_N^*$ with $a_\alpha=0$ and other indexes non-zero gives $\displaystyle{\frac1{f_\alpha}\prod^N_{i=1}f_i}$.



    How about having two zero indexes $a_\alpha,a_\beta$? $\displaystyle{\frac1{f_\alpha f_\beta}\prod^N_{i=1}f_i}$.
    Three?
    $\displaystyle{\frac1{f_\alpha f_\beta f_\gamma}\prod^N_{i=1}f_i}$.



    Summing all these and factorizing naturally give
    $$\sum_{k\in S^*_N}f(k)=\left(1+\frac1{f_1}\right)\left(1+\frac1{f_2}\right)\cdots\left(1+\frac1{f_N}\right)\cdot\prod^N_{i=1}f_i=\prod^N_{i=1}(1+f_i)$$



    Taking the limit $N\to\infty$, we find that

    $$\sum^\infty_{n=1}\frac1{\phi(n)^s}=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)$$
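This product can be checked numerically against the series (an added sanity check, not part of the original answer):

```python
from sympy import totient, primerange

# Truncated numerical check of the Euler product for sum 1/phi(n)^s at s = 2.
s = 2
lhs = sum(1.0 / int(totient(n))**s for n in range(1, 100000))
rhs = 1.0
for p in primerange(2, 100000):
    rhs *= 1.0 + (1.0 - 1.0 / p)**(-s) / (p**s - 1.0)
print(lhs, rhs)   # the two truncations agree to several decimal places
```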



    I am still working on the functional equation. It is clear that the function is holomorphic on $\text{Re }s>1$, and is likely to have a pole at $s=1$, as plugging in $s=1$ gives $\prod_p\left(1+\frac1p\right)=\infty$.



    A few more words on analytic continuation:



    Obviously,
    $$\begin{align}
    F(s):=\sum^\infty_{n=1}\frac1{\phi(n)^s}
    &=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)\\

    &=\prod_{p}\frac1{1-p^{-s}}\cdot\prod_p[1-p^{-s}+(p-1)^{-s}] \\
    &=\zeta(s)\cdot \underbrace{\prod_p[1-p^{-s}+(p-1)^{-s}]}_{G(s)}
    \end{align}
    $$




    1. $G(s)$ converges for $\text{Re }s>0$, as $1-p^{-s}+(p-1)^{-s}=1+sp^{-s-1}+O(p^{-s-2})$.

    2. Therefore, $F(s)$ has a simple pole at $s=1$ due to zeta function. The residue there equals $$\operatorname*{Res}_{s=1}F(s)=\prod_p\left(1+\frac1{p(p-1)}\right)=\frac{315\zeta(3)}{2\pi^4}$$
      (See Landau’s totient constant.)

    3. $$\lim_{s\to0^+}G(s)=1$$ This can be seen by plugging in $s=0$ into the above expression.


    4. Let's look at $G'(0)$.
      $$\frac{G'(s)}{G(s)}=\sum_p\frac{p^{-s}\ln p-(p-1)^{-s}\ln(p-1)}{1-p^{-s}+(p-1)^{-s}}$$
      $$G'(0)=-G(0)\sum_p\ln\left(1-\frac1p\right)=\infty$$
      As $G(0)$ is finite and $G'(0)$ is not, this suggests that $0$ is a branch point of $F(s)$. Thus, meromorphic continuation is not possible.






    Further analysis shows that $\text{Re }s=0$ is the natural boundary of $F(s)$.




    Firstly, by means of successive series expansion, we obtain
$$\begin{align}
\ln(1-p^{-s}+(p-1)^{-s})
&=-\sum^\infty_{n=1}\frac{\left[p^{-s}-(p-1)^{-s}\right]^n}{n} \\
&=-\sum^\infty_{n=1}\frac{1}{n}\sum^n_{r=0}\binom nr p^{-s(n-r)}(-1)^r(p-1)^{-sr} \\
&=-\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\left(1-\frac1p\right)^{-sr} \\
&=-\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\sum^\infty_{k=0}\binom{-sr}{k}\frac{(-1)^k}{p^k} \\
&=-\sum^\infty_{n=1}\sum^\infty_{k=0}\alpha_{n,k}(s)\frac1{p^{k+ns}}
\end{align}
$$
(here $\ln(1+u)=-\sum_{n\ge1}\frac{(-u)^n}{n}$ is applied with $u=(p-1)^{-s}-p^{-s}$, which produces the overall minus sign)




    where $$\alpha_{n,k}(s)=\frac{(-1)^k}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}$$



    Furthermore, notice that
    $$\alpha_{n,0}(s)=\frac1n\sum^n_{r=0}(-1)^r\binom nr=0$$



    Therefore,
$$\ln(1-p^{-s}+(p-1)^{-s})=-\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\frac1{p^{k+ns}}$$
$$\begin{align}
\implies \ln G(s)
&=\sum_p \ln(1-p^{-s}+(p-1)^{-s}) \\
&=-\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns) \\
\end{align}
$$

    where $\zeta_{\mathbb P}$ is the prime zeta function.



    It is well known that $\zeta_{\mathbb P}$ has the natural boundary $\text{Re }s=0$, because $\mathcal S$ (the set of singularities of $\zeta_{\mathbb P}$) clusters on the imaginary axis. Hence, obviously $G(s)$ cannot be analytically continued across $\text{Re }s=0$. A functional equation does not exist.



    Meanwhile, we obtained a representation of $F(s)$ in terms of well-known functions:





$$\sum^\infty_{n=1}\frac1{\phi(n)^s}
=\zeta(s)\exp\left[-\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns)\right]
$$

    $$\alpha_{n,k}(s)=\frac{(-1)^k}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}$$



    Monday, October 15, 2018

summation - Why $\sum_{i=0}^\infty i^2p^{i-1} = \sum_{i=0}^\infty i(\frac{d}{dp}p^i)$?



    I'm going through a proof right now and am having trouble figuring out the math behind one line. It says:




    $$\sum_{i=0}^\infty i^2p^{i-1} = \sum_{i=0}^\infty i(\frac{d}{dp}p^i) $$



    I know this question is vague but can anybody explain why this is the case?


    Answer



First of all, the summation notation is unnecessary here, since the claimed equality holds term by term. Remove it and we get
    $$i^2p^{i-1}=i\big(\frac{d}{dp}p^i\big)$$
    Okay. Recall the formula
    $$\frac{d}{dx}x^k=kx^{k-1}, k\ne0$$
    Then we have that

    $$\frac{d}{dp}p^{i}=ip^{i-1}, i\ne0$$
and so, by substitution, we have
$$i\big(\frac{d}{dp}p^i\big)=i(ip^{i-1})=i^2p^{i-1}$$
    Does that answer your question?
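(As an added check, the identity is easy to confirm symbolically, e.g. with SymPy:)

```python
import sympy as sp

i, p = sp.symbols('i p', positive=True)
# Confirm termwise: i * d/dp(p^i) equals i^2 * p^(i-1).
lhs = i * sp.diff(p**i, p)
print(lhs.equals(i**2 * p**(i - 1)))   # True
```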


    linear algebra - The effect of elementary row operations on characteristic polynomial



    I have a $3 \times 3$ matrix $A$ for which I need to determine the characteristic polynomial.



    Suppose I had $\det(A)=f$. If I performed row operations on $A$ I could manipulate $f$ accordingly to find the determinant for the new matrix. So for example if I multiplied row $1$ by $k$ and swapped row $2$ and row $3$ the determinant for the new matrix would be given by $-1 \times k \times f$.




    My question: Could I simplify $A$ by performing row operations on it,
    then use the simplified $A$ in $(\lambda I_{3}-A)$, calculate $\det(\lambda I_{3}-A)$ and finally manipulate $\det(\lambda I_{3} -A)$ according to the row operations that I did on the original $A$. So if I swapped row $1$ and $3$ to get the simplified $A$, I would multiply $\det(\lambda I_{3}-A)$ by $-1$ in the end to get the final determinant.



    Would this change the characteristic polynomial? If yes, why?


    Answer



    No, elementary matrix operations don't preserve the eigenvalues in general. As an example, consider $A = I$, where $I$ is the $3 \times 3$ identity matrix.
    $$
    \det{(\lambda I - A)} = \det{(\lambda I - I)} = (\lambda-1)^3.
    $$

Now, let's construct $B$ by swapping rows 1 and 3 while multiplying row 2 by 2 (as suggested in your comment), i.e.
    $$
    B =
    \begin{pmatrix}
    0 & 0 & 1\\
    0 & 2 & 0\\
    1 & 0 & 0
    \end{pmatrix},
    $$
    and

    $$
\det{(\lambda I - B)} = (\lambda-2)(\lambda^2 - 1) \neq -2 (\lambda - 1)^3 = -2 \det{(\lambda I - A)}.
    $$



    However, we could construct $B_\lambda$ by starting with the matrix $\lambda I - A$, swapping rows 1 and 3 and multiplying row 2 by 2, i.e.
    $$
    B_\lambda =
    \begin{pmatrix}
    0 & 0 & \lambda-1\\
    0 & 2\lambda-2 & 0\\

    \lambda-1 & 0 & 0
    \end{pmatrix}.
    $$
    Then
    $$
    \det{B_\lambda} = -2(\lambda-1)^3 = -2 \det{(\lambda I - A)}
    $$
    as expected.
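These computations can be reproduced symbolically (an added check, not part of the original answer):

```python
import sympy as sp

lam = sp.symbols('lambda')
I3 = sp.eye(3)
B = sp.Matrix([[0, 0, 1], [0, 2, 0], [1, 0, 0]])

# Characteristic polynomial of B: (lambda - 2)(lambda^2 - 1).
print(sp.factor((lam * I3 - B).det()))

# B_lambda: apply the same row operations to lambda*I - A with A = I.
M = lam * I3 - I3
M = M.elementary_row_op('n<->m', row1=0, row2=2)   # swap rows 1 and 3
M = M.elementary_row_op('n->kn', row=1, k=2)       # multiply row 2 by 2
print(sp.factor(M.det()))                          # -2*(lambda - 1)**3
```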


    Sunday, October 14, 2018

elementary number theory - Prove that $a$ divides $b$ and $b$ divides $a$ if and only if $a = \pm b$


    Let $a$ and $b$ be nonzero integers. Prove that $a$ divides $b$ and $b$ divides $a$ if and only if $a = \pm b$.




Since this is an iff statement, I need to prove it both ways:



    $\Rightarrow$ If $a=\pm b$, then $a$ divides $b$ and $b$ divides $a$; and



    $\Leftarrow$ If $a$ divides $b$ and $b$ divides $a$, then $a=\pm b$.




I tried to prove it but I don't know how to manage the $\pm$ sign; can anyone give me a hint or suggestion to start this proof?



    Thanks
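A hint, added here (not part of the original post): write $b=ka$ and $a=lb$ with $k,l\in\mathbb Z$. Then $a=l(ka)=(kl)a$, and since $a\neq 0$ we may cancel to get $kl=1$. The only integer solutions are $k=l=1$ and $k=l=-1$, and these two cases are exactly where the $\pm$ sign comes from.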

real analysis - Limit of the nested radical $\sqrt{7+\sqrt{7+\sqrt{7+\cdots}}}$











    Where does this sequence converge?
    $\sqrt{7},\sqrt{7+\sqrt{7}},\sqrt{7+\sqrt{7+\sqrt{7}}}$,...


    Answer



    For a proof of convergence,



    Define the sequence as



    $\displaystyle x_{0} = 0$



    $\displaystyle x_{n+1} =\sqrt{7 + x_n}$



    Note that $\displaystyle x_n \geq 0 \ \ \forall n$.



    Notice that $\displaystyle x^2 - x - 7 = (x-a)(x-b)$ where $\displaystyle a \lt 0$ and $\displaystyle b \gt 0$.



    We claim the following:




    i) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \lt b$
    ii) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \gt x_n$



    For a proof of i)



    We have that



$\displaystyle x_n \lt b = b^2 - 7$ (using $b^2-b-7=0$), and so $x_n +7 \lt b^2$; thus, by taking square roots, $x_{n+1} \lt b$




    For a proof of ii)



    We have that



    $\displaystyle (x_{n+1})^2 - (x_n)^2 = -(x^2_n - x_n -7) = -(x_n-a)(x_n-b) \gt 0$ if $x_n \lt b$.



    Thus $\displaystyle \{x_{n}\}$ is monotonically increasing and bounded above and so is convergent.



    By setting $L = \sqrt{7+L}$, we can easily see that the limit is $\displaystyle b = \dfrac{1 + \sqrt{29}}{2}$






    In fact, we can show that the convergence is linear.



    $\displaystyle \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{b^2 - (7+x_n)}{(b+\sqrt{7+x_n})(b-x_n)} = \dfrac{1}{b + x_{n+1}}$



    Thus $\displaystyle \lim_{n\to \infty} \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{1}{2b}$.




    We can also show something a bit stronger:



    Let $\displaystyle t_n = b - x_n$.



    The we have shown above that $\displaystyle t_n \gt 0$ and $\displaystyle t_n \lt b^2$



    We have that




    $\displaystyle b - t_{n+1} = \sqrt{7 + b - t_n} = \sqrt{b^2 - t_n}$



    Dividing by $\displaystyle b$ throughout we get



    $\displaystyle 1 - \dfrac{t_{n+1}}{b} = \sqrt{1 - \dfrac{t_n}{b^2}}$



    Using $\displaystyle 1 - \dfrac{x}{2} \gt \sqrt{1-x} \gt 1 - x \ \ 0 \lt x \lt 1$ we have that




    $\displaystyle 1 - \dfrac{t_n}{2b^2} \geq 1 - \dfrac{t_{n+1}}{b} \geq 1 - \dfrac{t_n}{b^2}$



    And so



    $\displaystyle \dfrac{t_n}{2b} \leq t_{n+1} \leq \dfrac{t_n}{b}$



    This gives us that $\displaystyle b - \dfrac{b}{b^n} \leq x_n \leq b - \dfrac{b}{(2b)^n}$
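(An added numerical check of the limit $b=\frac{1+\sqrt{29}}{2}\approx 3.19258$:)

```python
import math

# Iterate x_{n+1} = sqrt(7 + x_n) and compare with b = (1 + sqrt(29))/2.
b = (1 + math.sqrt(29)) / 2
x = 0.0
for _ in range(40):
    x = math.sqrt(7 + x)
print(x, b)   # both approximately 3.1925824...
```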



real analysis - Evaluate $\lim_{n \to \infty} \sum_{k=n}^{2n} \frac{1}{n+\sqrt{k}}$

    Evaluate $$\lim_{n \to \infty} \sum_{k=n}^{2n} \frac{1}{n+\sqrt{k}}$$



    I tried to write the sum above as $$\lim_{n \to \infty} \frac{1}{n+\sqrt{n}} + \frac{1}{n+\sqrt{n+1}}+\dotsb+\frac{1}{n+\sqrt{2n}}$$
Now if I take the limit of each term, the result is $0$, but obviously that's wrong. I think that's because it's an infinite sum of terms that go to zero, but the number of terms grows at the same time, so the result needn't be zero. However, I don't know how to solve it. Any advice?
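A hint plus a numerical experiment, added here (not part of the original post): each of the $n+1$ terms lies between $\frac{1}{n+\sqrt{2n}}$ and $\frac{1}{n+\sqrt{n}}$, so the sum is squeezed between $\frac{n+1}{n+\sqrt{2n}}$ and $\frac{n+1}{n+\sqrt{n}}$, both of which tend to $1$:

```python
import math

# Numerical check (not a proof): the sum tends to 1 as n grows.
for n in [10, 100, 10_000, 1_000_000]:
    s = sum(1 / (n + math.sqrt(k)) for k in range(n, 2 * n + 1))
    print(n, s)
```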

    linear algebra - Writing a matrix as a product of elementary matrices.


So if I have a matrix and I put it into RREF and keep track of the row operations, I can then write it as a product of elementary matrices. An elementary matrix is obtained by taking an identity matrix and performing one row operation on it. So starting from $\begin{bmatrix}1&0\\0&1\end{bmatrix}$, one elementary matrix could look like $\begin{bmatrix}1&0\\-1&1\end{bmatrix}$ for the row operation $r_2 - r_1$, or $\begin{bmatrix}1&0\\0&1/2\end{bmatrix}$ for the row operation $\frac{r_2}{2}$. So if you put a matrix into reduced row echelon form, the row operations that you did form a collection of elementary matrices whose product recovers the original matrix. So if I have a $2\times 2$ matrix, what is the largest number of elementary matrices that can be needed? What would that look like?



    Answer



    Let's assume that nonzero entries in our matrices are invertible.


    If $a \ne 0$, then a $2\times 2$ matrix with $a$ in the upper corner can be written as a product of 4 matrices that are elementary in the sense described:


    $$ \left( \begin{array}{cc} 1 & 0 \\ \frac{c}{a} & 1 \end{array}\right) \left( \begin{array}{cc} a & 0 \\ 0 & 1 \end{array}\right) \left( \begin{array}{cc} 1 & 0 \\ 0 & d-\frac{bc}{a} \end{array}\right) \left( \begin{array}{cc} 1 & \frac{b}{a} \\ 0 & 1 \end{array}\right) = \left( \begin{array}{cc} a & b \\ c & d \end{array}\right) $$


    Notice that when $a=1$, three elementary matrices suffice.


    If $a=0$ but $c\ne 0$, then $$ \left( \begin{array}{cc} 1 & \frac{1}{c} \\ 0 & 1\end{array}\right) \left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)= \left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right) $$ Since $\left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right)$ can be written as a product of 3 elementary matrices, $\left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)$ can again be written as the product of 4. A similar argument holds when $a=0$ but $b \ne 0$.


    I'll leave the case $a=b=c=0$ to the reader.
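(The factorization can be verified symbolically, as an added check:)

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', nonzero=True)
E1 = sp.Matrix([[1, 0], [c / a, 1]])
E2 = sp.Matrix([[a, 0], [0, 1]])
E3 = sp.Matrix([[1, 0], [0, d - b * c / a]])
E4 = sp.Matrix([[1, b / a], [0, 1]])
print(sp.simplify(E1 * E2 * E3 * E4))   # Matrix([[a, b], [c, d]])
```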


    analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...