Friday, November 29, 2019

ELEMENTARY PROOF: Prove that $a^2b^2(a^2-b^2)$ is divisible by $12$ for all integers $a,b$.



My first thought was to see whether $a^2 b^2(a^2-b^2)$ is divisible by $2$ and $3$, since those are the prime factors of $12$. But I cannot seem to get anywhere. Please give me initial hints. We have not learned modular arithmetic, so please try not to use it in the proof.



Thanks


Answer



First, let us see how squares behave modulo 3:



$$ n^2\, \text{mod}\, 3$$




We know $n$ is either $0$, $1$, or $2$ mod $3$. Squaring these gives $0$, $1$, and $4 = 1$ mod $3$. In other words,
$$ n^2\, \text{mod}\, 3 = 0$$



or



$$ n^2\, \text{mod}\, 3 = 1$$



Now, consider the different possible cases (both are 0 mod 3; both are 1 mod 3; one is 0 and the other is 1).




Next, do the same thing but under mod 2. You should notice that if a or b (or both) are even, the result follows easily. The only case left to consider is if a and b are odd... how can we factor the expression $a^2 - b^2$?
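Before hunting for the proof, a brute-force check of the statement over a range of integers can build confidence that it is true; a small Python sketch (an illustration, not a proof):

```python
# Check that a^2 b^2 (a^2 - b^2) % 12 == 0 over a range of integers.
# This is only a sanity check, not a substitute for the proof.
def expr(a, b):
    return a**2 * b**2 * (a**2 - b**2)

assert all(expr(a, b) % 12 == 0
           for a in range(-30, 31)
           for b in range(-30, 31))
print("a^2 b^2 (a^2 - b^2) is divisible by 12 for all |a|, |b| <= 30")
```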


Real Numbers to Irrational Powers



In a related question we discussed raising numbers to powers.



I am interested if anybody knows any results for raising numbers to irrational powers.



For instance, we can easily show that there exists an irrational number raised to an irrational power such that the result is a rational number. Observe ${\sqrt 2 ^ {\sqrt 2}}$. Since we do not know if ${\sqrt 2 ^ {\sqrt 2}}$ is rational or not, there are two cases.




  1. ${\sqrt 2 ^ {\sqrt 2}}$ is rational, and we are finished.



  2. ${\sqrt 2 ^ {\sqrt 2}}$ is irrational, but if we raise it to the power ${\sqrt 2}$ again, we can see that
    $$\left ( \sqrt 2 ^ \sqrt 2 \right ) ^ \sqrt 2 = \sqrt 2 ^ {\sqrt 2 \cdot \sqrt 2} = \sqrt 2 ^ 2 = 2.$$




Either way, we have shown that there exists an irrational number raised to an irrational power such that the result is rational.
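For what it's worth, the computation in case 2 can be illustrated in floating point; this checks the arithmetic $\left(\sqrt 2^{\sqrt 2}\right)^{\sqrt 2}=2$, nothing more:

```python
import math

s = math.sqrt(2)
t = s ** s          # sqrt(2)^sqrt(2); we need not know whether it is rational
assert abs(t ** s - 2) < 1e-12   # (sqrt(2)^sqrt(2))^sqrt(2) = 2
```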



Can more be said about raising irrational numbers to irrational powers?


Answer



Some examples:





  • Euler's identity: $e^{i \pi} + 1 = 0$


  • Gelfond–Schneider constant: $2^{\sqrt{2}} = 2.66514414269022518865\cdots$ is transcendental by the Gelfond–Schneider theorem.


  • $i^i = 0.2078795763507619085469556198\cdots$ is also transcendental.


  • Ramanujan's constant: $e^{\pi \sqrt{163}} = 262537412640768743.999999999999250072597\cdots$.


  • $e^\gamma$ where $\gamma$ is the Euler–Mascheroni constant (admittedly, nobody has proved it is irrational yet, but it surely is).



complex numbers - Did Euler discover Euler's identity?




We know that in $1748$ Euler published the "Introductio in analysin infinitorum", in which he presented Euler's formula:



$$e^{ix} = \cos x+i \sin x$$



But who was the first mathematician to convert this to the form we all know and love, Euler's identity:



$$e^{i\pi}+1=0$$
When was this formula first explicitly written in this way?




Was it Bernoulli, Euler's teacher and mentor, or another more modern mathematician?


Answer



For a reference pointing to Euler, see :





discussing Bernoulli's thesis that $l (-1)=l(+1)=0$ :




the radius [of the circle] is to the fourth part of the circumference as $\sqrt{-1}$ is to $l\sqrt{-1}$. Hence, setting the ratio of the diameter to the circumference $= 1 : \pi$, we have $\frac{1}{2}\pi = \dfrac{l\sqrt{-1}}{\sqrt{-1}}$, and therefore $l\sqrt{-1} = \frac{1}{2}\pi\sqrt{-1}$.





In a nutshell, he derives for the area of the first quadrant of the unit circle the formula:




$\dfrac{\pi}{4} = \dfrac{1}{4\sqrt{-1}}\, l(-1)$,




from which: $l(-1) = \pi\sqrt{-1}$.







See also page 165 onward where, starting from his formula "whose truth is sufficiently proved elsewhere":




$$x = \cos\phi + \sqrt{-1}\,\sin\phi,$$




writing $C$ for the real logarithm of the positive quantity $\sqrt{aa+bb}=c$, he derives the general formula for the logarithm of $a+b\sqrt{-1}$:





$$l(a+b\sqrt{-1}) = C + (\phi + p\pi)\sqrt{-1}.$$




With $c=1$ and $C=0$ he gets:




$$l(\cos\phi + \sqrt{-1}\,\sin\phi) = (\phi + p\pi)\sqrt{-1}.$$





Finally, with $\phi = 0$ [and thus $\cos\phi = 1$ and $\sin\phi = 0$]:




$l(+1) = p\pi\sqrt{-1}$, and thus [for $p=0$]: $l(+1) = 0$,




and, with $\phi = \pi$ :




$l(-1) = \pm\pi\sqrt{-1}$.








Euler in : Introductio in analysin infinitorum, Tomus Secundus (1748), Ch.XXI, page 290, uses $i$ for an imaginary quantity :




For since the logarithms of negative numbers are imaginary (...), $l(-n)$ will be an imaginary quantity, which we let be $= i$.





But he does not say that the symbol $i$ is such that $i^2 = -1$.



In the same Introductio, Tomus Primus, §138, the formula is written as :




$e^{v\sqrt{-1}} = \cos v + \sqrt{-1}\,\sin v$.








In conclusion, Euler "knew" the identity and was the "inventor" of $i$ as a name for an imaginary quantity, but it seems that he never wrote it in the "modern form", not least because he consistently writes $\sqrt{-1}$.









Note



See also Cauchy's Cours d'analyse (1821) for Euler's identity; again, $\sqrt{-1}$ is used.




I have not done extensive research, but given that Cauchy systematically uses $i$ to denote an increment [see: Résumé des leçons sur le calcul infinitésimal (1823)]:




$\Delta x = i$,




my conjecture is that we will hardly find any use by Cauchy of $i$ as the imaginary unit.


linear algebra - Eigenvalue decomposition of $A = I - xx^T$





Let $A = I - xx^T$, where $x \in \mathbb{R}^n$ and $I$ is the identity matrix of $\mathbb{R}^n$



We know that $A$ is a real symmetric matrix, therefore there exists an eigenvalue decomposition of $A$ such that



$$A = Q^T\Lambda Q$$




Is it possible to find $Q$, $\Lambda$?



$I - xx^T = Q^TQ - xQ^TQx^T = Q^TQ - (Q^Tx)^T(x^TQ)^T...$


Answer



Consider
$$
Ax=(I-xx')x=(1-x'x)x
$$
so $x$ itself is an eigenvector with eigenvalue $1-x'x$. In fact, if $v$ is an eigenvector with some eigenvalue $\alpha$, we have

$$
\alpha v=Av=(I-xx')v=v-(x'v)x\implies(1-\alpha)v=(x'v)x.
$$
This means if $\alpha\neq 1$, then $v$ is proportional to $x$ so in fact $v$ is an eigenvector with eigenvalue $1-x'x$. If $\alpha=1$, then $x'v=0$. Conversely, if $v'x=0$, then $v$ is an eigenvector with eigenvalue $1$:
$$
Av=(I-xx')v=v-(x'v)v=v.
$$
Conclusion: $I-xx'$ has eigenvalues $1-x'x$ and $1$ where $1$ has multiplicity $n-1$. The eigenvectors for $1-x'x$ are parallel to $x$ and the eigenvectors of $1$ are any vector in the space orthogonal to the space spanned by $x$. So you can take $Q'=\Big(\frac{x}{|x|}\;r_1\;\cdots\;r_{n-1}\Big)$ where each $r_i$ is $n\times 1$ and $\{r_1,\ldots,r_{n-1}\}$ is some orthonormal basis of $[\text{span}(x)]^\perp$.
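The conclusion is easy to confirm numerically; here is a sketch using NumPy (assumed available). Note that `eigh` returns a factorization $A = V\Lambda V^T$, so in the question's convention $A = Q^T\Lambda Q$ one takes $Q = V^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n)
A = np.eye(n) - np.outer(x, x)

w, V = np.linalg.eigh(A)          # A = V diag(w) V^T, with V orthogonal

# Predicted spectrum: 1 - x'x (once) and 1 (with multiplicity n - 1).
predicted = np.sort(np.append(np.ones(n - 1), 1 - x @ x))
assert np.allclose(np.sort(w), predicted)

# In the question's convention A = Q^T Lambda Q, take Q = V^T.
Q = V.T
assert np.allclose(Q.T @ np.diag(w) @ Q, A)
```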


trigonometry - How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression?


How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression? For example here is the sum of $\cos$ series:


$$\sum_{k=0}^{n-1}\cos (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \cos \biggl( \frac{ 2 a + (n-1)\cdot d}{2}\biggr)$$


There is a slight difference in case of $\sin$, which is: $$\sum_{k=0}^{n-1}\sin (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1)\cdot d}{2}\biggr)$$


How do we prove the above two identities?


Answer




Let $$ S = \sin{(a)} + \sin{(a+d)} + \cdots + \sin{(a+nd)}$$ Now multiply both sides by $\sin\frac{d}{2}$. Then you have $$S \times \sin\Bigl(\frac{d}{2}\Bigr) = \sin{(a)}\sin\Bigl(\frac{d}{2}\Bigr) + \sin{(a+d)}\cdot\sin\Bigl(\frac{d}{2}\Bigr) + \cdots + \sin{(a+nd)}\cdot\sin\Bigl(\frac{d}{2}\Bigr)$$


Now, note that $$\sin(a)\sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a-\frac{d}{2}\Bigr) - \cos\Bigl(a+\frac{d}{2}\Bigr)\biggr]$$ and $$\sin(a+d) \cdot \sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a + d -\frac{d}{2}\Bigr) - \cos\Bigl(a+d+\frac{d}{2}\Bigr) \biggr]$$


Continuing in the same way, successive terms cancel in pairs; you can easily see which terms are going to get cancelled. The sum telescopes, and you should be able to get the formula.


I learned this approach from another post, where it is worked out for the case $d=1$.
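Both closed forms are easy to spot-check numerically; a small sketch (the values of $a$, $d$, $n$ are arbitrary):

```python
import math

def sum_cos(a, d, n):
    return sum(math.cos(a + k * d) for k in range(n))

def sum_sin(a, d, n):
    return sum(math.sin(a + k * d) for k in range(n))

def closed_cos(a, d, n):
    return math.sin(n * d / 2) / math.sin(d / 2) * math.cos((2 * a + (n - 1) * d) / 2)

def closed_sin(a, d, n):
    return math.sin(n * d / 2) / math.sin(d / 2) * math.sin((2 * a + (n - 1) * d) / 2)

a, d, n = 0.3, 0.7, 25
assert abs(sum_cos(a, d, n) - closed_cos(a, d, n)) < 1e-12
assert abs(sum_sin(a, d, n) - closed_sin(a, d, n)) < 1e-12
```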


Thursday, November 28, 2019

real analysis - Is every function with the intermediate value property a derivative?


As is well known, every continuous function has the intermediate value property, but even some discontinuous functions like $$f(x)=\left\{ \begin{array}{cl} \sin\left(\frac{1}{x}\right) & x\neq 0 \\ 0 & x=0 \end{array} \right.$$ have this property.
In fact, we know that a derivative always has this property.



My question is whether for every function $f$ with the intermediate value property there exists a function $F$ such that $F'=f$. And if so, is $F$ unique (up to an additive constant)?


My attempts till now: (Just skip them if you feel so)


The most natural way of finding those functions would be integrating, so I guess the question can be reduced to, if functions with the intermediate value property can be integrated.
This depends heavily on how you define when a function is integrable; we (my analysis class) said that we call a function integrable when it is a regulated function (the one-sided limits $\lim_{x\to x_0^+} f(x)$ and $\lim_{x\to x_0^-} f(x)$ exist).
As my example above shows, not every function with the intermediate value property is a regulated function. But if we instead say a function is integrable when every Riemann sum converges, the above function is integrable, so this seems like a better definition for my problem.


Edit: As Kevin Carlson points out in a comment, being a derivative is different from being Riemann integrable; he even gave an example of a function which is differentiable but whose derivative is not Riemann integrable. So we cannot show that these functions are Riemann integrable, as they are not in general. Now I have no clue how to find an answer.


Answer



If you compose $ \tan^{-1} $ with Conway’s Base-$ 13 $ Function, then you get a bounded real-valued function on the open interval $ (0,1) $ that satisfies the Intermediate Value Property but is discontinuous at every point in $ (0,1) $. Therefore, by Lebesgue’s theorem on the necessary and sufficient conditions for Riemann-integrability, this function is not Riemann-integrable on any non-degenerate closed sub-interval of $ (0,1) $.


Now, it cannot be the derivative of any function either, because by the Baire Category Theorem, if a function defined on an open interval has an antiderivative, then the function must be continuous on a dense set of points. This thread may be of interest to you. :)


algebra precalculus - Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?


I know there must be something unmathematical in the following but I don't know where it is:


\begin{align} \sqrt{-1} &= i \\ \\ \frac1{\sqrt{-1}} &= \frac1i \\ \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\ \sqrt{\frac1{-1}} &= \frac1i \\ \\ \sqrt{\frac{-1}1} &= \frac1i \\ \\ \sqrt{-1} &= \frac1i \\ \\ i &= \frac1i \\ \\ i^2 &= 1 \\ \\ -1 &= 1 \quad !!! \end{align}


Answer



Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.


Edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."



In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).


This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.
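To see the failure concretely, one can compare both sides with principal-branch square roots; a small sketch using Python's `cmath`:

```python
import cmath

a, b = 1, -1
lhs = cmath.sqrt(a) / cmath.sqrt(b)   # 1 / i = -i
rhs = cmath.sqrt(a / b)               # sqrt(-1) = i  (principal branch)

assert abs(lhs - (-1j)) < 1e-12
assert abs(rhs - 1j) < 1e-12
assert lhs != rhs   # sqrt(a)/sqrt(b) = sqrt(a/b) fails once b < 0
```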


abstract algebra - Some field extensions with coprime degrees

Let $L/K$ be a finite field extension with degree $m$. And let $n\in \mathbb N$ such that $m$ and $n$ are coprime. Show the following:



If there is an $a\in K$ such that an $n$-th root of $a$ lies in $L$, then this root already lies in $K$.



My attempt:




The field extension $K(\sqrt[n]{a})/K$ has degree at most $n$. The minimal polynomial of
$\sqrt[n]{a}$, namely $m_{\sqrt[n]{a}}(X)\in K[X]$, divides $X^n-a$. That is, if $k$ is the degree of the minimal polynomial, then $k\mid n$.



But by the degree formula $[L:K]=[L:K(\sqrt[n]{a})]\cdot [K(\sqrt[n]{a}):K]$ we get $k\mid m$; hence $k=1$, and the conclusion follows because $[K(\sqrt[n]{a}):K]=1$ yields $\sqrt[n]{a}\in K$.



Can someone go through it? Thanks.

trigonometry - Euler's formula simplification



I have been trying to simplify this expression:



$$\cos (\theta) - \frac{1}{\cos (\theta) + i \sin(\theta)}$$




into:



$$\ i \sin(\theta).$$



However I can't find what steps to take to get to this simplification.



Euler's formula states:



$$\ e^{i\theta}= \cos (\theta) + i \sin(\theta) $$




It is linked to this formula however I am not sure how to go about this.


Answer



$$\cos(\theta)-\frac{1}{\cos(\theta)+i\sin(\theta)}=\cos(\theta)-\frac{1}{e^{i\theta}}=$$
$$\cos(\theta)-e^{-i\theta}=\frac{1}{2}e^{i\theta}+\frac{1}{2}e^{-i\theta}-e^{-i\theta}=$$
$$=\frac{1}{2}e^{i\theta}-\frac{1}{2}e^{-i\theta}=i\Big(\frac{1}{2i}e^{i\theta}-\frac{1}{2i}e^{-i\theta}\Big)=i\sin(\theta)$$
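A numeric spot check of the simplification (illustration only):

```python
import cmath

# cos(theta) - 1/e^{i theta} should equal i sin(theta) for every real theta.
for theta in (0.3, 1.1, 2.5):
    lhs = cmath.cos(theta) - 1 / cmath.exp(1j * theta)
    assert abs(lhs - 1j * cmath.sin(theta)) < 1e-12
```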


Wednesday, November 27, 2019

linear algebra - Can we prove $BA=E$ from $AB=E$?


I was wondering if $AB=E$ ($E$ is the identity) is enough to claim $A^{-1} = B$, or if we also need $BA=E$. All my textbooks define the inverse $B$ of $A$ such that $AB=BA=E$. But I can't see why $AB=E$ isn't enough. I can't come up with an example for which $AB = E$ holds but $BA\ne E$. I tried some things, but I can only prove that $BA = (BA)^2$.


Edit: For $A,B \in \mathbb{R}^{n \times n}$ and $n \in \mathbb{N}$.


Answer



If $AB = E$, then (the linear map associated to) $A$ has a right inverse, so it is surjective; and as the dimension is finite, surjectivity and injectivity are equivalent, so $A$ is bijective and has an inverse $A^{-1}$. Multiplying $AB=E$ on the left by $A^{-1}$ gives $B=A^{-1}$, so $BA=E$ as well.
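This can be illustrated numerically with NumPy (a sanity check, not a proof): construct a right inverse by solving $AB=E$ column by column, and observe that it is automatically a left inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic (invertible) square matrix
E = np.eye(4)
B = np.linalg.solve(A, E)         # B solves A @ B = E, column by column

assert np.allclose(A @ B, E)
assert np.allclose(B @ A, E)      # ... and is automatically a left inverse
```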


elementary number theory - Proving that $\gcd(ac,bc)=|c|\gcd(a,b)$

Let $a$, $b$ be elements of $\mathbb{Z}$, not both zero, and let $c$ be a nonzero integer. Prove that $$(ca,cb) = |c|(a,b)$$

real analysis - Is it true that $0.999999999\dots=1$?


I'm told by smart people that $$0.999999999\dots=1$$ and I believe them, but is there a proof that explains why this is?


Answer



What does it mean when you refer to $.99999\ldots$? Symbols don't mean anything in particular until you've defined what you mean by them.


In this case the definition is that you are taking the limit of $.9$, $.99$, $.999$, $.9999$, etc. What does it mean to say that limit is $1$? Well, it means that no matter how small a number $x$ you pick, I can show you a point in that sequence such that all further numbers in the sequence are within distance $x$ of $1$. But certainly whatever number you choose your number is bigger than $10^{-k}$ for some $k$. So I can just pick my point to be the $k$th spot in the sequence.


A more intuitive way of explaining the above argument is that the reason $.99999\ldots = 1$ is that their difference is zero. So let's subtract $1.0000\ldots -.99999\ldots = .00000\ldots = 0$. That is,


$1.0 -.9 = .1$



$1.00-.99 = .01$


$1.000-.999=.001$,


$\ldots$


$1.000\ldots -.99999\ldots = .000\ldots = 0$


functional equations - Strange Examples of Involutory Functions?

I am very interested in functions $\gamma:\mathbb R\to\mathbb R$ with the following property:
$$\gamma^2(x)=x$$
One form of a function satisfying this is
$$f(x)=\frac{a-x}{1+bx}$$
which has the property $f^2(x)=x$. Infinitely many more functions with this property can be obtained by finding some other injective function $g$, its inverse $g^{-1}$, and then composing $g, g^{-1},$ and $f$ as follows:
$$g^{-1}\circ f\circ g$$

However, I am not very interested in involutory functions of this form, since they seem to all be ripoffs of the general form that I already stated.



In fact, it seems that all involutory functions can be put in the form
$$g^{-1}\circ f\circ g$$
for some $g$, and for some $a,b$. I can't find any counterexamples, but I don't know how to prove it either. It seems to me that the best way to approach this would be to set up some kind of differential equation like
$$(f'\circ f)(x)=\frac{1}{f'(x)}$$
But I have absolutely no idea how I might show that any involutory function can be put in the aforementioned form.



Any ideas?




NOTE: I'm sure there are some elaborate piecewise-defined answers that can destroy my conjecture. However, I can't expect people to know what I mean when I ask to prove this for all "reasonable" functions - so I will establish some stricter restrictions on $\gamma$. The function must be expressible using some finite composition of these functions and their inverses:
$$\phi_1(x,a)=x+a$$
$$\phi_2(x,a)=ax$$
$$\phi_3(x,a)=x^a$$
$$\phi_4(x,a)=a^x$$
For example, $x^2+x+1$ can be expressed as
$$\phi_1(\phi_3(x,2),\phi_1(x,1))$$
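As a quick sanity check (not part of the question), the stated form can be verified to be an involution with exact rational arithmetic; the values of $a$, $b$, and the test points below are arbitrary choices that avoid the pole at $x=-1/b$:

```python
from fractions import Fraction

def f(x, a, b):
    """f(x) = (a - x) / (1 + b x), the proposed involution."""
    return (a - x) / (1 + b * x)

a, b = Fraction(3), Fraction(2)
for x in (Fraction(1, 2), Fraction(-5, 7), Fraction(4)):
    assert f(f(x, a, b), a, b) == x   # f composed with itself is the identity
```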

real analysis - Infinity norm related to $L_p(X)$ norm on finite measure space $X$.

Let $(X, \mathcal M, \mu)$ be a measure space with $\mu(X)<\infty$. Why does
\begin{align}
\lim_{p\to\infty}\|f\|_p=\|f\|_\infty
\end{align}
hold for all $f\in L_\infty$?

Tuesday, November 26, 2019

calculus - What is the probability that the resulting four line segments are the sides of a quadrilateral?



Question: Divide a given line segment into two other line segments. Then, cut each of these new line segments into two more line segments. What is the probability that the resulting line segments are the sides of a quadrilateral?



I am stuck on this problem. I think I am close, but I am not sure if it is correct. Any help or conformation on this would be helpful.



My thoughts on the problem:




Let us say that the line segment is of length $1$. The only restriction for these four line segments to form a quadrilateral is that no one side is $> 0.5$ (correct me if I am wrong).



With our first cut we have two smaller line segments, one larger than the other. We only need to look at the longer one of these two line segments. Let us call $y$ the smaller line segment and $x$ the larger one; $x$ will be between $0.5$ and $1$.



When we cut each of these new line segments we need to find where it does not work for it to be a quadrilateral. We only have to look at $x$. Let us call $a$ the length that we cut.



If we use the example $x=0.6$ we can see that $a$ cannot be less than 0.1 or greater than 0.5.



We can generalize this for any $x$. $a$ cannot be less than $(x-1/2)$ or greater than $1/2$.




This is where I get stuck.



I believe that the probability, for any $x$ value, that these four line segments will not form a quadrilateral is $$\frac{2(x-1/2)}{x}$$



If this is correct is the total probability that it cannot be a quadrilateral $$\int\limits_{1/2}^1\frac{2(x-1/2)}{x}\mathrm{d}x?$$



Any help is much appreciated. Thank you


Answer



In your very last integral, $\mathrm dx$ should be $2\mathrm dx$ $(*)$. Apart from that, your reasoning looks fine. To compute the last integral, use $\displaystyle\int(2-1/x)\mathrm dx=2x-\log x$, which yields the numerical value $2-2\log2\approx61\%$ for the probability of no quadrilateral.




$(*)$ To understand why, note that the total mass of $\mathrm dx$ on $[1/2,1]$ is $1/2$ and that one wants a total mass of $1$. This indicates that the probability density function of $X$, the longer of the two first lengths, is $2$ on the interval $[1/2,1]$. Once $X=x$ is known, the probability of no quadrilateral is $p(x)=2(x-1/2)/x$, as you aptly showed, hence the total probability of no quadrilateral is
$$
\int_{1/2}^1p(x)\,(2\mathrm dx)=2\int_{1/2}^1(2-1/x)\mathrm dx.
$$
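The closed-form value $2-2\log 2\approx 0.614$ can be compared against a Monte Carlo simulation of the cutting procedure (a sketch; the cut points are taken uniform, matching the model above):

```python
import math
import random

random.seed(0)
trials = 200_000
no_quad = 0
for _ in range(trials):
    c = random.random()                      # first cut of the unit segment
    p, q = min(c, 1 - c), max(c, 1 - c)      # shorter and longer piece
    s1 = random.uniform(0, p)                # cut the shorter piece
    s2 = random.uniform(0, q)                # cut the longer piece
    sides = (s1, p - s1, s2, q - s2)
    if max(sides) > 0.5:                     # one side too long: no quadrilateral
        no_quad += 1

estimate = no_quad / trials
assert abs(estimate - (2 - 2 * math.log(2))) < 0.01
```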


calculus - Find $\lim\limits_{n\to\infty}\sum\limits_{k=1}^n \frac{2k+1}{k^2(k+1)^2}$



I have to find the limit $$\lim_{n\to\infty}\sum_{k=1}^n \frac{2k+1}{k^2(k+1)^2}.$$ I tried to make it into a telescopic series but it doesn't really work out...



$$\lim_{n\to\infty} \sum_{k=1}^n \frac{2k+1}{k^2(k+1)^2}=\sum_{k=1}^n \left(\frac{1-k}{k^2}+\frac1{k+1}-\frac1{(k+1)^2} \right)$$ so that is what I did using telescopic...




I said that:



$$\frac{2k+1}{k^2(k+1)^2}=\frac{Ak+B}{k^2}+\frac C{k+1}+\frac D{(k+1)^2}$$ but now as I look at it.. I guess I should "build up the power" with the ${k^2}$ too, right?


Answer



$$\lim_{n\rightarrow\infty}\sum^{n}_{k=1}\bigg[\frac{1}{k^2}-\frac{1}{(k+1)^2}\bigg]=\lim_{n\rightarrow\infty}\bigg[1-\frac{1}{(n+1)^2}\bigg]=1$$
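The decomposition rests on $2k+1=(k+1)^2-k^2$, so the partial sums telescope to $1-\frac{1}{(n+1)^2}$; this can be confirmed with exact rational arithmetic (a sanity check, not part of the argument):

```python
from fractions import Fraction

def partial_sum(n):
    # sum_{k=1}^{n} (2k+1) / (k^2 (k+1)^2), computed exactly
    return sum(Fraction(2 * k + 1, k**2 * (k + 1)**2) for k in range(1, n + 1))

for n in (1, 5, 50):
    assert partial_sum(n) == 1 - Fraction(1, (n + 1)**2)   # telescoped form
```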


proof writing - Prove $1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2$ using Induction





Prove $1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2$ using Induction



My proof so far:



Let $P(n)$ be $1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2$







Base Case



$P(1):$



LHS = $1^3 = 1$



RHS = $(1)^2 = 1$




Since LHS = RHS, therefore base case holds






Induction Hypothesis



Let $n \in \mathbb{N}$ be arbitrary



Assume $P(n)$ holds







Induction Step



Prove $P(n+1)$ holds:



$$
\begin{align}
& 1^3 + 2^3 + \cdots + \;n^3 + (n+1)^3 \\

= {} & (1 + 2 + \cdots + \; n)^2 + (n+1)^3 \text{ (by Induction Hypothesis)} \\
= {} & (1 + 2 + \cdots + \; n)^2 + (n^3 + 3n^2 + 3n + 1)
\end{align}
$$






This is where I get stuck. I don't know how to prove that my last step is equivalent to:



$$(1 + 2 + \cdots + \;n + (n+1))^2$$



Answer



So basically you want that last expression to turn out to be $\big(1+2+\ldots+n+(n+1)\big)^2$, so you want $(n+1)^3$ to be equal to the difference



$$\big(1+2+\ldots+n+(n+1)\big)^2-(1+2+\ldots+n)^2\;.$$



That’s a difference of two squares, so you can factor it as



$$(n+1)\Big(2(1+2+\ldots+n)+(n+1)\Big)\;.\tag{1}$$



To show that $(1)$ is just a fancy way of writing $(n+1)^3$, you need to show that




$$2(1+2+\ldots+n)+(n+1)=(n+1)^2\;.$$



Do you know a simpler expression for $1+2+\ldots+n$?



Work forward from this...



Okay so going forward.



Basically, Induction works like this, I'll use your question as an example:




Consider the case when $n = 1$. If so, then we will have $1^3 = 1^2$.



Now suppose that $1^3 + 2^3 + 3^3 + \cdots + n^3 = (1 + 2 + 3 + \cdots + n)^2$ for some $n \in \mathbb N$.



Recall first that $\displaystyle (1 + 2 + 3 + \cdots + n) = \frac{n(n+1)}{2}$ so we know $\displaystyle 1^3 + 2^3 + 3^3 + \cdots + n^3 = \bigg(\frac{n(n+1)}{2}\bigg)^2$.



Now consider $\displaystyle 1^3 + 2^3 + 3^3 + \cdots + n^3 + (n + 1)^3 = \bigg(\frac{n(n+1)}{2}\bigg)^2 + (n+1)^3 = \frac{n^2 (n+1)^2 + 4(n+1)^3}{4} = \bigg( \frac{(n+1)(n+2)}{2} \bigg)^2$.



Hence, the statement holds for the $n + 1$ case. Thus by the mathematical induction $1^3 + 2^3 + 3^3 + \cdots + n^3 = (1 + 2 + 3 + \cdots + n)^2$ for each $n \in \mathbb N$.




Q.E.D.
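For completeness, the identity (together with the closed form $\frac{n(n+1)}{2}$ for the inner sum) can be machine-checked for small $n$ (an illustration; the induction above is the proof):

```python
# Verify 1^3 + ... + n^3 == (1 + ... + n)^2 == (n(n+1)/2)^2 for small n.
for n in range(1, 60):
    lhs = sum(k**3 for k in range(1, n + 1))
    rhs = sum(range(1, n + 1)) ** 2
    assert lhs == rhs == (n * (n + 1) // 2) ** 2
```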


real analysis - Seeking clarification on, and explanation of difference between, the proofs given at question "Differentiability implies continuity"



The answer to the question Differentiability implies continuity - A question about the proof states that , with regards to the proof given in the original question, "there is an implicit issue of existence of limits which is being swept under the rug in the presentation you have given".





Concerning the proof given in the original question: Could someone clarify what issue there is with the existence of limits, as well as how the proof in the original question is assuming continuity first?



Concerning both the proof in the original question and in the accepted answer: could someone clarify what the difference(s) is(are) between the proof in the original question, and the proof given in the answer?




To make answering the question easier, I will copy the proofs below






Proof given in the original question:




Differentiability Definition



When we say a function is differentiable at $x_0$, we mean that the limit:



$$f'(x_0) = \lim_{x\to x_0} \frac{f(x) - f(x_0)}{x-x_0}$$ exists.



Continuity Definition



When we say a function is continuous at $x_0$, we mean that:

$$\lim_{x\to x_0} [f(x) - f(x_0)] = 0$$



Theorem: Differentiability implies Continuity: If $f$ is a differentiable function at $x_0$, then it is continuous at $x_0$.



Proof:



Let us suppose that $f$ is differentiable at $x_0$. Then
$$ \lim_{x\to x_0} \frac{f(x) - f(x_0)}{x-x_0} = f'(x_0) $$



and hence




$$ \lim_{x\to x_0} [f(x) - f(x_0)] = \lim_{x\to x_0} \left[ \frac{f(x) - f(x_0)}{x-x_0} \right] \cdot \lim_{x\to x_0} (x-x_0) = 0$$



We have therefore shown that, using the definition of continuous, if the function is differentiable at $x_0$, it must also be continuous.






Proof in the accepted answer



The assumption of differentiability at $x_0$ says that the limit




$$\lim_{x \to x_0} \frac{f(x) - f(x_0)}{x-x_0}$$



exists as a finite number. The limit $\lim_{x \to x_0} x-x_0$ exists and is zero regardless of our assumptions. Then the product rule for limits tells us both that $\lim_{x \to x_0} f(x)-f(x_0)$ exists, and that it is the product of the two limits above, which means it must be zero. Because the product rule also tells us that the limit exists, we do not have to assume continuity first.


Answer



In the original question, $\lim_{x \to x_0} [f(x)-f(x_0)]$ is written down directly, which seems to assume that we already know for sure it exists.



In the answer,



First, we know that $\lim_{x \to x_0}\frac{f(x)-f(x_0)}{x-x_0}$ exists and it is equal to a finite number, $a$.




Also, we know $\lim_{x \to x_0} (x-x_0)$ exists and it is equal to zero.



and then the product rule is used, and now we know $\lim_{x \to x_0}\frac{f(x)-f(x_0)}{x-x_0}\cdot \lim_{x \to x_0} (x-x_0)$ exists and is equal to zero.



Now we know $\lim_{x \to x_0} (f(x)-f(x_0))$ exists and it is equal to zero.


Monday, November 25, 2019

real analysis - Continuous Function and a Dense Set



Let $S$ be an open set in $\mathbb{R}$. Let $f$ be a continuous function such that $f(S)$ is dense in $\mathbb{R}$. Let $\alpha$ be an arbitrary element of $\mathbb{R}$. Then as $f(S)$ is dense in $\mathbb{R}$, there exists $\{z_{i}\} \subset S$ such that $\lim f(z_{i}) = \alpha$. Since $f$ is continuous does this imply that $\alpha = f(z)$ for some $z \in S$?


Answer



No, it does not. Let $S = \mathbb{R}\setminus \{0\}$, and let $f(x)=x$; then $f(S)$ is dense in $\mathbb{R}$, but there is no $z\in S$ such that $f(z)=0$.


linear algebra - How to find the correct order to multiply elimination matrices?



Let's say I have the following matrix $A \in \mathbb{R}^{4\times4}$



$$A = \begin{bmatrix}
a & a & a & a \\
a & b & b & b \\
a & b & c & c \\

a & b & c & d
\end{bmatrix}$$



And I perform the following Row Operations (which are really just matrix multiplications) to reduce $A$ into an upper triangular matrix $U$




  1. $R_2 - (l_{21} = 1)R_1$ ($\text{corresponds to }E_{21})$

  2. $R_3 - (l_{31} = 1)R_2$ ($\text{corresponds to }E_{31})$

  3. $R_4 - (l_{41} = 1)R_3$ ($\text{corresponds to }E_{41})$

  4. $R_3 - (l_{32} = 1)R_2$ ($\text{corresponds to }E_{32})$


  5. $R_4 - (l_{42} = 1)R_3$ ($\text{corresponds to }E_{42})$

  6. $R_4 - (l_{43} = \frac{c-a}{c-b})R_3$ ($\text{corresponds to }E_{43})$



After performing all these row operations we arrive at
$$U = \begin{bmatrix}
a & a & a & a \\
0 & b-a & b-a & b-a \\
0 & 0 & c-b & c-b \\
0 & 0 & 0 & d-c

\end{bmatrix}$$



But now if I want to find the elimination matrix $E$ that does all of these row operations in one step, how do I go about finding it? To re-iterate, I'm trying to find an $E$ such that $EA = U$.



I understand $E$ would be the product of the elimination matrices used in the row operations, but in what order would the matrices be multiplied? The order being of importance as matrix multiplication isn't commutative.



As it turns out, $E \neq E_{21}E_{31}E_{41}E_{32}E_{42}E_{43}$



Is there a theorem/general rule that can be used to find the correct order to multiply the matrices used in the row operations, so that a single elimination matrix can be found without trying every possible permutation?







A Second Example, $B \in \mathbb{R}^{3\times3}$



$$B = \begin{bmatrix}
1 & 4 & 0 \\
4 & 12 & 4 \\
0 & 4 & 0 \\
\end{bmatrix}$$




The following row operations are performed to transform $B$ into an upper triangular $U$




  1. $R_2 - (l_{21} = 4)R_1$ $\text{(corresponds to } E_{21})$

  2. $R_3 - (l_{32} = -1)R_2$ $\text{(corresponds to } E_{32})$



We arrive at



$$U = \begin{bmatrix}

1 & 4 & 0 \\
0 & 1 & -1 \\
0 & 0 & 1 \\
\end{bmatrix}$$



In this case, $E_{32}E_{21}B \neq U$, however $E_{21}E_{32}B = U$, going against the convention in example 1


Answer



Perhaps I am misunderstanding something, but here is the answer I think you're looking for. Suppose that we first conduct the row operation ${\bf R}_1$ on the matrix ${\bf A}$, then the resulting matrix is ${\bf A}_1 = {\bf R}_1 {\bf A}$. The next row operation ${\bf R}_2$ is then applied to ${\bf A}_1$, resulting in ${\bf A}_2 = {\bf R}_2 {\bf A}_1 = {\bf R}_2 {\bf R}_1 {\bf A}$. Continuing this way, you can see that if you apply $n$ row operations ${\bf R}_1, \ldots ,{\bf R}_n$, the result will be ${\bf A}_n = {\bf R}_n {\bf R}_{n-1} \cdots {\bf R}_1 {\bf A}$, showing you the order in which to multiply the operations ${\bf R}_i$.
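This ordering rule can be checked on the second example with NumPy (assumed available; the helper `elim` is an ad hoc construction). Note that $E_{32}E_{21}B$ comes out upper triangular with entries $[[1,4,0],[0,-4,4],[0,0,4]]$, which differs from the $U$ quoted in the question; that $U$ appears to have its rows rescaled. The operation applied first sits rightmost in the product:

```python
import numpy as np

def elim(n, i, j, l):
    """Elementary matrix for the row operation R_i <- R_i - l * R_j (1-based)."""
    M = np.eye(n)
    M[i - 1, j - 1] = -l
    return M

B = np.array([[1., 4., 0.],
              [4., 12., 4.],
              [0., 4., 0.]])
E21 = elim(3, 2, 1, 4.)     # R2 <- R2 - 4 R1
E32 = elim(3, 3, 2, -1.)    # R3 <- R3 + R2

U = E32 @ E21 @ B           # first op applied = rightmost factor
assert np.allclose(U, np.triu(U))                             # upper triangular
wrong = E21 @ E32 @ B
assert not np.allclose(wrong, np.triu(wrong))                 # order matters
```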


Calculate limit with summation index in formula











I want to calculate the following:



$$ \lim_{n \rightarrow \infty} \left( e^{-n} \sum_{i = 0}^{n} \frac{n^i}{i!} \right) $$



Numerical calculations show it has a value close to $0.5$. But I am not able to derive this analytically. My problem is that I lack a methodology for handling the $n$ both as a summation limit and as a variable in the expression.


Answer



I don't want to put this down as my own solution, since I have already seen it solved on MSE.



One way is to write the sum in terms of i.i.d. Poisson random variables with parameter $1$: with $S_n=\sum_{k=1}^{n}X_k$ we have $S_n \sim \mathrm{Poisson}(n)$, the expression equals $P(S_n \le n)$, and applying the Central Limit Theorem yields $\Phi(0)=\frac{1}{2}$.




The other solution is purely analytic and is detailed in the paper by Laszlo and Voros (1999) called 'On the Limit of a Sequence'.


real analysis - Use of asymptotically equivalent equations in limits

I was wondering about the steps to show that the following limit does not exist.
$$\lim_{x\rightarrow\infty}[\log(x^2-3)-\log(x+2)]$$
I know that by using L'Hopital's Rule and the continuity of logarithms I can reduce the limit to:
$$\lim_{x\rightarrow\infty}[\log(2x)], \text{ which does not exist}$$



But since that
$$x^2-3 \sim x^2,\text{as } x \rightarrow \infty $$

$$x+2 \sim x,\text{as } x \rightarrow \infty $$
would it be possible to show that the original limit does not exist by substituting the two asymptotically equivalent functions into the logs (due to the continuity of logarithms)? This would result in:
\begin{align}
\lim_{x\rightarrow\infty}[\log(x^2-3)-\log(x+2)] & = \lim_{x\rightarrow\infty}[\log(x^2)-\log(x)]\\
& = \lim_{x\rightarrow\infty} [2\log(x)-\log(x)]\\
& = \lim_{x\rightarrow\infty} [\log(x)]\\
\end{align}

combinatorics - Give the combinatorial proof of the identity $sumlimits_{i=0}^{n} binom{k-1+i}{k-1} = binom{n+k}{k}$



Given the identity



$$\sum_{i=0}^{n} \binom{k-1+i}{k-1} = \binom{n+k}{k}$$



Need to give a combinatorial proof



a) in terms of subsets




b) by interpreting the parts in terms of compositions of integers



I should not use induction or any other ways...



Please help.


Answer



HINTS:




  1. Consider a $k$-element subset of $[n+k]=\{1,\dots,n+k\}$; it has a maximum element, which can be anything from $k$ through $n+k$. How many such subsets are there with maximum element $k+i$ for $i=0,\dots,n$?



  2. There are $\binom{k-1+i}{k-1}$ compositions of $k+i$ with $k$ terms. There are $\binom{n+k}k$ compositions of $n+k+1$ with $k+1$ terms.
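Both hints concern the identity itself, which is easy to sanity-check by brute force; a small sketch with Python's `math.comb`:

```python
from math import comb

def lhs(n, k):
    # sum of C(k-1+i, k-1) over i = 0..n
    return sum(comb(k - 1 + i, k - 1) for i in range(n + 1))

def rhs(n, k):
    return comb(n + k, k)
```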



calculus - How to find $\lim_{x\to\infty}{\frac{e^x}{x^a}}$?




$$\lim_{x\to\infty}{\frac{e^x}{x^a}}$$ For $a\in \Bbb R,$ find this limit.



I would say for $a\ge 0$ lim is equal to $\lim_{x\to\infty}{\frac{e^x}{a!x^0}=\infty}$ (from L'Hopital).



For $a<0$, the limit is of the form $\frac{\infty}{0}$, so the limit doesn't exist. Is this correct?


Answer



Because $e^x > x$ for all $x$, $$\lim_{x \to \infty}\frac{e^x}{x}=\lim_{x \to \infty}\frac{1}{2}\left(\frac{e^{x/2}}{x/2}\right)e^{x/2} = \infty.$$
since $e^{x/2}/(x/2) > 1,$ and $e^{x/2} \to \infty$ as $x \to \infty$.




Then, it follows that $$\lim_{x \to \infty}\frac{e^x}{x^a}=\lim_{x \to \infty}\frac{1}{a^a}\cdot \left( \frac{e^{x/a}}{x/a}\right)^a = \infty$$



since we just showed that what is in parentheses approaches $\infty$ as $x \to \infty$, so the whole limit has to go to $\infty.$


Sunday, November 24, 2019

real analysis - Using the definition of a limit, prove that $\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$




Using the definition of a limit, prove that $$\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$$





I know how i should start: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$



$$\left |\frac{n^2+3n}{n^3-3} - 0 \right | < \epsilon$$



but from here how do I proceed? I feel like I have to get rid of the $3n$ and $-3$ terms, but clearly $$\left |\frac{n^2+3n}{n^3-3} \right | <\frac{n^2}{n^3-3}$$ is not true.


Answer



This is not so much of an answer as a general technique.



What we do in this case, is to divide top and bottom by $n^3$:

$$
\dfrac{\frac{1}{n} + \frac{3}{n^2}}{1-\frac{3}{n^3}}
$$
Suppose we want this to be less than a given $\epsilon>0$. We know that $\frac{1}{n}$ can be made as small as we like. First, we split this into two parts:
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
$$



The first thing we know is that for large enough $n$, say $n>N$, $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n}$. We will use this fact.




Let $\delta >0$ be so small that $\frac{\delta}{1-\delta} < \frac{\epsilon}{2}$. Now, let $n$ be so large that $\frac{1}{n} < \delta$, and $n>N$.



Now, note that $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n} < \delta$. Furthermore, $1- \frac{3}{n^3} > 1 - \frac{3}{n^2} > 1-\delta$.



Thus,
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
< \frac{\delta}{1-\delta} + \frac{\delta}{1-\delta} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon
$$




For large enough $n$. Hence, the limit is zero.



I could have had a shorter answer, but you see that using this technique we have reduced powers of $n$ to this one $\delta$ term, and just bounded that $\delta$ term by itself, bounding all powers of $n$ at once.


Formula for series $\frac{\sqrt{a}}{b}+\frac{\sqrt{a+\sqrt{a}}}{b}+\cdots+\frac{\sqrt{a+\sqrt{a+\sqrt{\cdots+\sqrt{a}}}}}{b}$



All variables are positive integers.




For:



$$a_1\qquad\frac{\sqrt{x}}{y}$$
$$a_2\qquad\frac{\sqrt{x\!+\!\sqrt{x}}}{y}$$
$$\cdots$$
$$a_n\qquad\frac{\sqrt{x\!+\!\sqrt{\!x+\!\sqrt{\!\cdots\!+\sqrt{x}}}}}{y}$$



Is there a formula of an unconditional form to describe series $a_n$?







I thought of something along the lines of:



$$\sum _{k=1}^{n } \left(\sum _{j=1}^k \frac{\sqrt{x}}{y}\right)$$



but, I quickly realized that it was very incorrect; Then I thought of:



$$\sum _{k=1}^{n} \frac{\sum _{j=1}^k \sqrt{x}}{y}$$




which I also concluded as very incorrect...



I'm blank, but I would like to see an example of something along the lines of:



$$\sum _{k=1}^{n } \frac{\sqrt{x+\sqrt{x+\sqrt{\cdot\cdot\cdot+\sqrt{x}}}}}{y}$$



where each $\sqrt{x+\sqrt{\cdots}}$ addition, repeats $k$ times. (i.e $k=3 \Rightarrow \sqrt{x+\sqrt{x+\sqrt{x}}}$);
If it is possible...



Cheers!



Answer



If all you are looking for is a compact representation, let
$$
s_{k}=\begin{cases}
0 & \text{if }k=0\\
\sqrt{a+s_{k-1}} & \text{if }k>0
\end{cases}.
$$
Then
\begin{align*}

S_n & =\left(\frac{\sqrt{a}}{b}\right)+\left(\frac{\sqrt{a+\sqrt{a}}}{b}\right)+\ldots+\left(\frac{\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}}{b}\right)\\
& =\frac{1}{b}\left[\sqrt{a}+\sqrt{a+\sqrt{a}}+\ldots+\sqrt{a+\sqrt{a+\ldots+\sqrt{a}}}\right]\\
& =\frac{1}{b}\sum_{k=1}^{n}s_{k}.
\end{align*}



Assume $a\in\mathbb{R}$ (you don't have to do this). We can show that the recurrence is
stable everywhere (weakly stable at $a=-\frac{1}{4}$). Particularly,
the fixed point is given by
$$
s^{2}-s-a=0,

$$
which has roots
$$
\frac{1\pm\sqrt{1+4a}}{2}.
$$
Particularly, the locally stable fixed point is the solution in which $\pm$ is taken to be $+$.
So, for large enough $k$,
$$
s_k\approx\frac{1+\sqrt{1+4a}}{2}.
$$




This is as good an answer as you can hope for, save for error bounds on the above expression.


number theory - Greatest common divisor of $(2^{21}-1,2^{27}-1)$




Find $\text{gcd}(2^{21}-1,2^{27}-1).$




My proof: We know that $2^{21}-1=(2^3)^7-1=8^7-1=(8-1)(8^6+\dots+8+1)=7(8^6+\dots+8+1)=7N_1$ and $2^{27}-1=(2^3)^9-1=8^9-1=(8-1)(8^8+\dots+8+1)=7(8^8+\dots+8+1)=7N_2.$ Also $N_2-N_1=8^8+8^7$. Hence $$\text{gcd}(2^{21}-1,2^{27}-1)=\text{gcd}(7N_1,7N_2)=7\text{gcd}(N_1,N_2).$$



I guess that $\text{gcd}(N_1,N_2)=1$ but I can't prove rigorously. Can anyone show a full proof?


Answer



We know that $$\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$$
So now $$\gcd(2^{21}-1,2^{27}-1)=2^{\gcd(21,27)}-1=7$$



We'll now prove the theorem I used here.

We can do the following. Assume $n>m$, then: \begin{align}

\gcd(a^n-1,a^m-1)&=\gcd((a^n-1)-(a^m-1),a^m-1)\\
&=\gcd(a^m(a^{n-m}-1),a^m-1)\\
\end{align}
since $\gcd(a^m,a^m-1)=1$, we now know
$$\gcd(a^n-1,a^m-1)=\gcd(a^{n-m}-1,a^m-1)$$
Now we can do Euclid's algorithm in the exponents! We obtain the final equality $$\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$$
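The theorem $\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$ can be spot-checked directly; a quick sketch:

```python
from math import gcd

def gcd_pow_minus_one(a, n, m):
    # left-hand side of the theorem, computed directly
    return gcd(a**n - 1, a**m - 1)
```

In particular `gcd_pow_minus_one(2, 21, 27)` is $2^3-1=7$, matching the answer.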


elementary number theory - Finding modular inverse (wrong approach)




I'm trying to find the modular inverse of $$30 \pmod{7} $$ I have tried using the Euclidean algorithm and it gave me the right answer, which is $x \equiv 6 \pmod{7} $. However, I tried using another approach that I thought would be simpler, but it resulted in a wrong answer. These were my steps:



Suppose x is the modular inverse of 30 mod 7. $$30x \equiv 1 \pmod{7} $$
$$(7*4 + 2)x \equiv 1 \pmod{7} $$
$$2x \equiv 1 \pmod{7}$$
(I have a feeling it's the previous line of simplification that's causing the problem.) So the inverse of 2 mod 7 is 4. Thus the resulting answer is $x \equiv 4 \pmod{7} $, which is wrong. Could anyone point out what is the problem here?


Answer



First, write $\;30\equiv2\pmod 7\;$ , and now use the Euclidean algorithm with this, which is way easier.




By the way, the answer indeed is $\;4\;$ , since $\;30\cdot4=120=1+17\cdot7\;$ , or simpler: $\;2\cdot4=1+7\;$
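As a check on both routes (the Euclidean algorithm and the simplified $30\equiv 2 \pmod 7$ computation), Python 3.8+ can compute modular inverses directly via three-argument `pow` with exponent $-1$:

```python
# 30 ≡ 2 (mod 7) and 2 * 4 = 8 ≡ 1 (mod 7), so the inverse is 4
inv = pow(30, -1, 7)
```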


elementary number theory - Modular Arithmetic & Congruences

Show that if $p$ is an odd prime and $a \in \mathbb Z$ such that $p$ doesn't divide $a$ then $x^2\equiv a \pmod p$ has no solutions or exactly 2 incongruent solutions.



The only theorem that I thought could help was:



Let $a, b$ and $m$ be integers with $m > 0$ and $\gcd(a,m)=d$. If $d$ doesn't divide $b$ then $ax\equiv b \pmod m$ has no solutions. If $d$ divides $b$ then $ax\equiv b \pmod m$ has exactly $d$ incongruent solutions modulo $m$.




But I feel that this would be an invalid theorem for this proof since $ax$ and $x^2$ are of different degrees.



Thoughts? Other methods to approach this?

calculus - Showing that $\int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi}$





The primitive of $f(x) = \exp(-x^2)$ has no analytical expression, even so, it is possible to evaluate $\int f(x)$ along the whole real line with a few tricks. How can one show that $$ \int_{-\infty}^{\infty} \exp(-x^2) \,\mathrm{d}x = \sqrt{\pi} \space ? $$


Answer



Such an integral is called a Gaussian Integral


This link should help you out.
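As a numerical cross-check (not a proof), a midpoint-rule sketch over a truncated interval; the tail beyond $|x|=10$ is under $e^{-100}$ and negligible:

```python
import math

def gaussian_integral(n=200_000, a=-10.0, b=10.0):
    # midpoint rule for the integral of exp(-x^2) over [a, b]
    h = (b - a) / n
    return h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n))
```

The result agrees with $\sqrt{\pi}\approx 1.7725$ to several digits.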


calculus - How can I find the infinite sum of this non-conventional geometric series?



There's something about a geometric series that makes it easily verifiable. Series like $\sum\frac{10^n}{9^n}$ or $\sum\frac{1}{2^n}$ aren't too bad; the variables $n$ are simple and easily reachable, and the fractions are not complex. But I'm having trouble with a series that looks somewhat different:



$$\sum\frac{2^n}{9^{2n+1}}$$



Its sequence converges, so I know I can apply the learned methods. The first thing I did was extract a constant from the sequence. So I go from the original sequence, which is:




$$a_n = \{\frac{2}{729}, \frac{4}{59049}, \frac{8}{4782969}, \frac{16}{387420489}\}$$



to



$$a_n = \frac{2}{9}(\frac{1}{81}, \frac{2}{6561}, \frac{4}{531441}, \frac{8}{43046721})$$



I figured out the new sequence as: $\frac{2^n}{9^{2n}}$, and after the simplifying the constants, I was able to recreate the series in an almost geometric form of $ar^{n-1}$, with $\frac{1}{9}$ as $a$ and $\frac{2^n}{9^{2n}}$ as kind of my $r$. Right now, I have this:



$$\sum\frac{1}{9}(\frac{2^n}{9^{2n}})$$




This is sort of my dilemma. Having the $2n$ in the denominator is a serious issue; it prevents me from creating an $ar^{n-1}$ formula, and I need an $ar^{n-1}$ formula if I want to test the convergence of this series, at least with the methods I've learned so far. So I'm quite stuck.



Did something go wrong in my calculations? How can I turn this into the proper formula so I can test the series' convergence? Any help is appreciated.



Much Thanks,



-Zolani


Answer



Hint:




$$9^{2n}=(9^2)^n=81^n.$$



:-)
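Following the hint, $\sum_{n\ge1}\frac{2^n}{9^{2n+1}}=\frac19\sum_{n\ge1}\left(\frac{2}{81}\right)^n=\frac19\cdot\frac{2/81}{1-2/81}=\frac{2}{711}$, assuming the sum starts at $n=1$ as the listed terms suggest. A sketch checking this with exact arithmetic:

```python
from fractions import Fraction

partial = sum(Fraction(2**n, 9**(2 * n + 1)) for n in range(1, 40))  # first 39 terms
closed_form = Fraction(2, 711)
```

The partial sums approach $\frac{2}{711}$ from below.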


calculus - Proving $\int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \,\mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$


I would like to prove that:


$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$$


I tried to use the integral $$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt$$


$$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt \;{\underset{\small n\to\infty}{\longrightarrow}}\; \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt$$ (dominated convergence theorem)


Using the substitution $t\to\frac{t}{n}$, I get:



$$ \int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt=\sqrt{n}\left(\ln(n)\int_{0}^{1} \frac{(1-t)^n}{\sqrt{t}} \mathrm dt+\int_{0}^{1} \frac{\ln(t)(1-t)^n}{\sqrt{t}} \mathrm dt\right) $$


However I don't know if I am on the right track for these new integrals look quite tricky.


Answer



Consider the integral representation of the Euler $\Gamma$-function:
$$ \Gamma(s) = \int_0^\infty t^{s-1} \mathrm{e}^{-t} \, \mathrm{d} t. $$
Differentiate with respect to $s$:
$$ \Gamma(s) \psi(s) = \int_0^\infty t^{s-1} \ln(t) \, \mathrm{e}^{-t} \, \mathrm{d} t, $$
where $\psi(s)$ is the digamma function. Now substitute $s=\frac{1}{2}$:
$$ \int_0^\infty \frac{ \ln(t)}{\sqrt{t}} \mathrm{e}^{-t} \, \mathrm{d} t = \Gamma\left( \frac{1}{2} \right) \psi\left( \frac{1}{2} \right). $$
Now use the duplication formula:
$$ \Gamma(2s) = \Gamma(s) \Gamma(s+1/2) \frac{2^{2s-1}}{\sqrt{\pi}}. $$
Substituting $s=1/2$ into it gives $\Gamma(1/2) = \sqrt{\pi}$, and differentiating it logarithmically with respect to $s$ gives the duplication formula for $\psi$:
$$ \psi(2s) = \frac{1}{2}\psi(s) + \frac{1}{2} \psi(s+1/2) + \log(2). $$
Substitute $s=\frac{1}{2}$ and use $\psi(1) = -\gamma$ to arrive at the result.


Saturday, November 23, 2019

elementary set theory - Cardinality of the Cartesian Product of Two Equinumerous Infinite Sets


Is the cardinality of the Cartesian product of two equinumerous infinite sets the same as the cardinality of any one of the sets? I couldn't find this explicitly stated in any handout or text.


This certainly seems to be true from the examples I have seen:
- The Cartesian product of two infinitely countable sets is again infinitely countable.
- The Cartesian product of two sets with cardinality of continuum again has cardinality of continuum.


I found a question here, but it is with regard to finite sets only.


Answer



This depends on whether we assume the axiom of choice.


In the presence of choice, then yes, $\vert X^2\vert=\vert X\vert$ for all infinite $X$. This was proved by Zermelo.


If choice fails, however, this may no longer be the case: e.g. it is consistent with ZF that there is a set $X$ which is infinite but cannot be partitioned into two infinite sets. Since (exercise) if $X$ is infinite then $X^2$ can be partitioned into two infinite sets, this means that such an $X$ (called amorphous) is a counterexample to the rule.


In fact, this will happen whenever choice fails: the principle "$\vert X^2\vert=\vert X\vert$ for all infinite $X$" is exactly equivalent to the axiom of choice! See For every infinite $S$, $|S|=|S\times S|$ implies the Axiom of choice.


probability - How do we find this expected value?




I'm just a little confused on this. I'm pretty sure that I need to use indicators for this but I'm not sure how I find the probability. The question goes like this:





A company puts five different types of prizes into their cereal boxes, one in each box and in equal proportions. If a customer decides to collect all five prizes, what is the expected number of boxes of cereals that he or she should buy?




I have seen something like this before and I feel that I'm close, I'm just stuck on the probability. So far I have said that $$X_i=\begin{cases}1 & \text{if the $i^{th}$ box contains a new prize}\\ 0 & \text{if no new prize is obtained} \end{cases}$$ I know that the probability of a new prize after the first box is $\frac45$ (because obviously the person would get a new prize with the first box) and then the probability of a new prize after the second prize is obtained is $\frac35$, and so on and so forth until the fifth prize is obtained. What am I doing wrong?! Or "what am I missing?!" would be the more appropriate question.


Answer



As the expected number of tries to obtain a success with probability $p$ is $\frac{1}{p}$, you get the expect number :
$$1+\frac{5}{4}+\frac{5}{3}+\frac{5}{2}+5=\frac{12+15+20+30+60}{12}=\frac{137}{12}\approx 11.42$$
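The sum in the answer is the coupon-collector expectation $n\sum_{k=1}^{n}1/k$ specialized to $n=5$; a sketch with exact rational arithmetic:

```python
from fractions import Fraction

def expected_boxes(n):
    # coupon collector: the stage with c prizes collected succeeds
    # with probability (n - c)/n, so it takes n/(n - c) boxes in expectation
    return sum(Fraction(n, n - c) for c in range(n))
```

`expected_boxes(5)` is exactly $\frac{137}{12}$.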


integration - Integral from $0$ to Infinity of $e^{-3x^2}$?

How do you calculate the integral from $0$ to Infinity of $e^{-3x^2}$? I am supposed to use a double integral. Can someone please explain? Thanks in advance.

integration - Absolutely continuous and differentiable almost everywhere

I've read the following claim and I wonder if someone can direct me to or provide me with a proof of it:





"A strongly absolutely continuous function which is differentiable
almost everywhere is the indefinite integral of strongly integrable
derivative"




It was in the context of Bochner integrable functions so I'm assuming that "strongly" means with respect to the norm.



Thanks!

elementary number theory - Finding the remainder of a factorial using modular arithmetic

How do I find the remainder of $5^{2001} + (27)!$ when it is divided by $8$? Can someone please show me the appropriate steps? I'm having a hard time with modular arithmetic.



So far this is how far I've got:



$$5^2=25 \equiv 1\pmod{8} $$



So \begin{align}5^{2001}&=5^{2000}\cdot 5\\

&=(5^2)^{1000}\cdot 5\\
&=25^{1000}\cdot 5\\
&\equiv 1^{1000}\cdot 5\\
&\equiv 5 \pmod{8}\end{align}



How do I go about something similar for $27!$? Also, could someone direct me to a video or some notes where I can learn this?
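A sketch of the missing half, as a hint rather than a full derivation: $27!$ contains the factors $2$ and $4$, so it is divisible by $8$, and the sum reduces to $5 + 0 \pmod 8$:

```python
import math

# 5^2 = 25 ≡ 1 (mod 8), so 5^2001 ≡ 5; and 27! contains 2*4 = 8, so 27! ≡ 0
remainder = (pow(5, 2001, 8) + math.factorial(27)) % 8
```

So the remainder is $5$.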

elementary number theory - Reference for a Post about Gauss formula for primes written as sum of two squares?

In a post Efficiently finding two squares which sum to a prime



I read



"In 1825 Gauss gave the following construction for writing a prime congruent to $1 \pmod{4}$ as a sum of two squares: Let $p=4k+1$ be a prime number. Determine $x$ (this is uniquely possible...) so that




$$ x = \frac{(2k)!}{2(k!)^2} \pmod{p}, \quad |x| < \frac{p}{2}$$



Now determine $y$ so that



$$ y = x \cdot (2k)! \pmod{p}, \quad |y| < \frac{p}{2}$$



Gauss showed that $x^2+y^2=p$."



I checked Stark's book and I did not see a direct reference to Gauss's original paper or book, or another reference explaining how the method was found and how to arrive at this formula. If you know the reference or you can explain its procedure, I will be grateful.
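Pending a proper reference, the construction itself can be verified computationally; a sketch (the division mod $p$ is done with a modular inverse, and residues are mapped into $(-p/2, p/2)$):

```python
from math import factorial

def gauss_two_squares(p):
    # assumes p is an odd prime with p % 4 == 1
    k = (p - 1) // 4

    def centered(a):
        # representative of a mod p in the interval (-p/2, p/2)
        a %= p
        return a - p if a > p // 2 else a

    x = centered(factorial(2 * k) * pow(2 * factorial(k) ** 2, -1, p))
    y = centered(x * factorial(2 * k))
    return x, y
```

For $p=13$ this gives $(x,y)=(-3,-2)$, and indeed $9+4=13$.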

Calculation of modular multiplicative inverse of A mod B when A > B



I'm trying to understand a Montgomery reduction algorithm, for which I need to calculate a multiplicative inverse.
However, the Euclidean algorithm only helps if A < B.




An example is 11 mod 3.
The multiplicative inverse of 11 is 2, but ext_gcd gives you Bézout numbers such as -1 and 4.



https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm



Wikipedia says so:
The extended Euclidean algorithm is particularly useful when a and b are coprime, since x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a.



But as far as I see this can't be true: either X is the multiplicative inverse of A modulo B or Y is the multiplicative inverse of B modulo A, but not both at the same time, because one of them (A or B) is going to be bigger than the other. We have X=4, Y=-1 for A=3, B=11, and X=4 is a valid inverse, while -1 is indeed not.




A lot of online calculators that I tried also said that a has to be bigger than b, but they (some of them) are still able to calculate the inverse of 11 mod 3.



The only workaround I found so far is to perform A = A mod B first, so A is now the remainder of division and therefore less than the modulus; we can then perform ext_gcd(2, 3) and get our 2 as the answer.



Probably I'm missing something, this thing is pretty new for me.



Thanks.


Answer



It is inevitable that a Bézout identity equation will give you modular multiplicative inverses, since given:




$$am+bn = 1$$



we can take $\bmod m$ for



$$ bn\equiv 1 \bmod m$$



or $\bmod n $ for



$$ am \equiv 1 \bmod n$$




To get $a$ and $b$ in your preferred range, you can simply add or subtract a suitable multiple of the modulus.



So in this case
$$-1\cdot 11 + 4\cdot 3 = 1$$



and thus
$$-1\cdot 11\equiv 1 \bmod 3$$



($-11$ being one more than $-12$), so $-1$ is a valid inverse of $11$ modulo $3$. Then of course $$-1\equiv 2 \bmod 3$$




so this is consistent with your observation that $2$ is the inverse of $11 \bmod 3$ also.
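The workflow described above can be sketched in Python (a standard recursive extended Euclid; the helper names are mine):

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # normalize the Bezout coefficient into the range 0..m-1
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("not invertible")
    return x % m
```

`mod_inverse(11, 3)` returns `2`, matching the observation above; no special handling of A > B is needed, since `ext_gcd(11, 3)` immediately recurses on `ext_gcd(3, 2)`.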


modular arithmetic - How to calculate $5^{2003}$ mod $13$


How to calculate $5^{2003}$ mod $13$




Using Fermat's little theorem:



$5^{13-1} \equiv 1 \pmod{13}$




$(5^{12})^{166} + 11 \pmod{13}$



$(a+b) \bmod n = (a \bmod n + b \bmod n) \bmod n$



$(1 + 11 \bmod 13) \bmod 13$



$12 \bmod 13 = 12$



why is the answer 8?




how do we calculate this?



thanks
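The slip is in treating $2003 = 12\cdot166 + 11$ as if it split $5^{2003}$ into a sum: exponents add when factors multiply, so $5^{2003} = (5^{12})^{166}\cdot 5^{11} \equiv 1^{166}\cdot 5^{11} \pmod{13}$. A quick sketch:

```python
# 2003 = 12*166 + 11, so 5^2003 = (5^12)^166 * 5^11 ≡ 1^166 * 5^11 (mod 13)
direct = pow(5, 2003, 13)
reduced = pow(5, 11, 13)
```

Both come out to $8$: $5^2\equiv -1$, so $5^{11}=(5^2)^5\cdot 5\equiv -5\equiv 8 \pmod{13}$.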

Friday, November 22, 2019

real analysis - What can we conclude from $f(x+y)+f(x-y)=f(xy)$?


Let $f : \mathbb{R}\rightarrow\mathbb{R}$ be a function such that $f(x + y) + f(x − y) = f(xy)$ for all $x, y \in\mathbb{R}$. Then $f$ is:



A. Strictly increasing.




B. Strictly decreasing.



C. Identically zero.



D. Constant but not necessarily zero.




I have no idea how to do this. Thanks for any hint.

Thursday, November 21, 2019

calculus - Is it possible for a continuous function to have a nowhere-continuous derivative?

This is motivated by a question I saw elsewhere that asks whether there is a real-valued function on an interval that contains no monotone subintervals.


Edit: Note that I am asking for a function whose derivative exists but is not continuous anywhere.

Simple Complex Number Problem: $1 = -1$








I'm trying to understand the exact point of failure in the following reasoning:



\begin{equation*}
1 = \sqrt{1} = \sqrt{(-1)(-1)} = \sqrt{\sqrt{-1}^2\sqrt{-1}^2} = \sqrt{(\sqrt{-1}\sqrt{-1})^2} = \sqrt{-1}\sqrt{-1} = \sqrt{-1}^2 = -1.
\end{equation*}




I've been previously told that the problem is due to square root not being a function in C; which I found totally unhelpful. Could someone please explain the problem here in simpler terms.



Edit:



Thank you all for your comments in trying to help me understand this. I finally do. Following is the explanation on my problems in understanding this, in case it'll be of any help to anyone else.



My problem was really due to using an incorrect definition of i: $i = \sqrt{-1}$. While the correct definition would be: $i^2 = -1$.



My incorrect definition led me reasoning, such as (which superficially seemed to give expected results. I see now that this is incorrect, too):




\begin{equation*}
\sqrt{-9} = \sqrt{9 * (-1)} = \sqrt{\sqrt{9}^2 \sqrt{-1}^2} = \sqrt{(\sqrt{9} \sqrt{-1})^2} = \sqrt{9} \sqrt{-1} = 3i.
\end{equation*}



Instead, had I used the correct definition of i:



\begin{equation*}
{(xi)}^2 = x^2i^2 = -x^2 = -9, \\
x^2 = 9, \\
x = \pm 3.
\end{equation*}



Now, analyzing the equations in the original problem, I can see at least the following two errors:



1) In the third =, I'm relying on $-1 = {\sqrt{-1}}^2$, while I should be relying on: $-1 = (\pm\sqrt{-1})^2$ which would of course give two different branches. Hmm.. on the second reading, this isn't really a problem, as even with the two separate branches, both of them will lead to the result in the next step.



2) In the fifth =, I'm relying on $\sqrt{i^4} = i^2$, which would be correct if $i$ were a non-negative number in R. But as $i$ is the imaginary unit, $i^4 = 1$, so in C we have $\sqrt{i^4} = \sqrt{1} = 1 \neq i^2$.

linear algebra - How to find a characteristic polynomial of 5x5 matrix



From an example that I am looking at, the characteristic polynomial of



$\begin{bmatrix}
-1&4&0&0&0 \\

0&3&0&0&0 \\
0&-4&-1&0&0 \\
3&-8&-4&2&1 \\
1&5&4&1&4
\end{bmatrix}$



is $(x-3)^3(x+1)^2$. I understand how to find the characteristic polynomial of 2x2 and 3x3 matrices but anything nxn beyond that I'm not sure what to do. Could someone walk me through the steps of the calculation to find the characteristic polynomial for this matrix?


Answer



The characteristic polynomial is (up to sign) the determinant of $A - \lambda I$, in this case




$\begin{bmatrix}
-\lambda-1&4&0&0&0 \\
0&-\lambda+3&0&0&0 \\
0&-4&-\lambda-1&0&0 \\
3&-8&-4&-\lambda+2&1 \\
1&5&4&1&-\lambda+4
\end{bmatrix}$



The most common definition of determinant is based on an expansion to minors (see https://en.wikipedia.org/wiki/Determinant#Definition), but using that directly for the calculation is horribly inefficient. In general, finding the determinant of a matrix if based on a form of Gaussian Elimination using the following rules:





  1. The determinant of a triangular matrix is the product of entries on the diagonal.

  2. Multiplying a single row by a scalar multiplies the determinant by the same scalar.

  3. Switching two rows negates the determinant.

  4. Adding a scalar multiple of a row to another row leaves the determinant unchanged.



However, in this particular case the calculation is much easier because $A$ is block triangular. That is, it can be written in the form



$A = \begin{bmatrix}

B&0&0\\C&D&0\\E&F&G
\end{bmatrix}$



Where $B=\begin{bmatrix}-1&4\\0&3\end{bmatrix}$, $D = \begin{bmatrix}-1\end{bmatrix}$ and $G = \begin{bmatrix}2&1\\1&4\end{bmatrix}$.



In such a case, the determinant of $A$ is the product of the determinants of $B$,$D$ and $G$, and the characteristic polynomial of $A$ is the product of the characteristic polynomials of $B$,$D$ and $G$. Since each of these is up to $2\times2$, you should find the result easily. The result is $(\lambda-3)(\lambda+1)(\lambda+1)(\lambda^2-6\lambda+7)$ (and not as you wrote).
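A brute-force cross-check sketch: evaluate $\det(tI-A)$ at several integer points with a naive cofactor expansion and compare against the factored polynomial (two monic degree-5 polynomials agreeing at 11 points must be equal):

```python
def det(m):
    # naive cofactor expansion along the first row (fine for a 5x5 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[-1, 4, 0, 0, 0],
     [0, 3, 0, 0, 0],
     [0, -4, -1, 0, 0],
     [3, -8, -4, 2, 1],
     [1, 5, 4, 1, 4]]

def charpoly_at(t):
    # det(t*I - A): the monic characteristic polynomial evaluated at t
    n = len(A)
    return det([[t * (i == j) - A[i][j] for j in range(n)] for i in range(n)])
```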


abstract algebra - Addition in finite fields



For a question, I must write an explicit multiplication and addition chart for a finite field of order 8. I understand that I construct the field by taking an irreducible polynomial in $F_2[x]$ of degree 3, and creating the splitting field for that polynomial.



The polynomial I chose for the field of order 8 is $x^3 + x + 1$. Since every finite field's multiplicative group is cyclic, it's my understanding that I can write the 6 elements of this field that are not 0 and 1 as $a, a^2, ..., a^6$, where $a^7 = 1$. And if my thinking is correct about that, I know how multiplication works. But I'm lost on how to develop addition in this field. I know in this case that since 1 + 1 = 0, every element should be its own additive inverse, but beyond that I'm lost as to how, for example, I should come up with a proper answer for what $1 + a$ is.



That is, assuming I'm right about the first parts.



An answer that could help me understand how to do this in general would be very helpful, as I need to do this for a field of another order as well.


Answer




Hint: $a$ being a root of your polynomial gives you the relation $a^3+a+1=0$
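Building on the hint, here is a sketch of GF(8) arithmetic with bit masks (elements are 3-bit patterns $c_2a^2+c_1a+c_0$; addition is XOR, and the relation $a^3=a+1$ reduces products):

```python
MOD = 0b1011  # x^3 + x + 1 over GF(2)

def gf8_add(u, v):
    # addition is coefficient-wise mod 2: XOR of the bit patterns
    return u ^ v

def gf8_mul(u, v):
    # schoolbook polynomial multiplication, reducing by x^3 + x + 1
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & 0b1000:
            u ^= MOD
    return r
```

Powers of $a = 0b010$ cycle through all seven nonzero elements, and $1 + a = 0b011 = a^3$, matching the relation $a^3 = a + 1$.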


algebra precalculus - train is traveling from point A to point B, the distance between these two points is $329$ miles. The total time it takes for the train to travel...


A train is traveling from point A to point B; the distance between these two points is $329$ miles. The total time it takes for the train to travel between points A and B is $7$ hours. If for the first $74$ miles the train travels at a speed $14$ mph slower than its speed during the last $255$ miles, what is the train's speed during the last $255$ miles?


I know $speed=\frac{distance}{time}$; if we represent the train's speed during the last $255$ miles by $x$, then $x-14$ represents the speed during the first $74$ miles, and the average speed throughout the entire trip is $\frac{329}{7}=47$ mph.
Now I don't know how to find the speed during the last $255$ miles.


I don't want the answer I just need help with figuring it out.


Answer



You can solve $$\dfrac{74}{(x-14)}+\dfrac{255}x=7$$
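One way to check a candidate answer without spoiling the algebra: clearing denominators in the answer's equation gives $7x^2-427x+3570=0$, and the quadratic formula plus the constraint $x-14>0$ picks the admissible root. A sketch:

```python
import math

# 74/(x-14) + 255/x = 7  =>  7x^2 - 427x + 3570 = 0 after clearing denominators
a, b, c = 7, -427, 3570
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
speed = next(x for x in roots if x > 14)  # the first-leg speed x - 14 must be positive
```

The assertion of validity is then just plugging `speed` back into the original equation.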


discrete mathematics - Understanding this pattern behind the Fibonacci sequence



To be honest, I'm pretty awful at mathematics however, when up till 6AM I do like to do random things throughout the night to keep me occupied. Tonight, I began playing with the Fibonacci sequence in the Python programming language. I understand that the Fibonacci sequence is just adding the previous two numbers together to produce your next value so I defined a function to spit out the sequence up to the 200th number like so,




def fib(n):
    a, b = 0, 1
    i = 1
    while i < n:
        print("ITERATION: " + str(i))
        a, b = b, a + b
        print(a)
        i += 1

fib(200)



What I found interesting is a pattern I came across when adding up the total amount of numbers before the sequence added the next digit. (see picture A.)



PICTURE A: [image — tally of the number "sets", i.e. how many Fibonacci numbers share each digit count]

from there, I added up the number "sets" and the pattern emerged.(see picture B.)



PICTURE B: [image — the pattern that emerges from the summed number "sets"]




This pattern continued; I went up to the 22nd "set" of numbers, and the whole pattern was like so:




1
2
1
3

1
4
1
5
1
2
1
4
1
4

1
3
1
4
1
3
1
4





I found it interesting that the numbers added a digit sequentially by either 4 or mainly 5 integers and how the overall pattern that emerged out of the "sets" appeared to become less stable after the 8th set which was ironically 5;




1
2
1
3
1
4
1

5




forgive me if this seems obvious or silly, but like I said, I'm pretty bad at math. Can anyone explain why this pattern emerges and a little bit more in depth on what the fibonacci sequence can be used for?


Answer



The ratio between Fibonacci numbers soon settles down to a number close to $1.618$. This number is called the Golden Ratio.
You get an extra digit every time the Fibonacci numbers have increased by a factor 10.
$1.618^4=6.854$ and $1.618^5=11.09$
Once the ratio settles down, you get at least one extra digit every five numbers. Sometimes the extra digit arrives sooner, and you only get four numbers with so many digits.
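The "extra digit every four or five numbers" observation is easy to confirm by counting digit lengths directly; a sketch:

```python
def digit_run_lengths(n_terms):
    # count how many consecutive Fibonacci numbers share each digit length
    counts = {}
    a, b = 1, 1
    for _ in range(n_terms):
        counts[len(str(a))] = counts.get(len(str(a)), 0) + 1
        a, b = b, a + b
    return counts
```

Over the first 200 terms, every digit length after the first is hit by exactly 4 or 5 numbers (the 1-digit run is longer because the ratio hasn't settled yet).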


Wednesday, November 20, 2019

real analysis - Simple proof that this sequence converges [verification]



This is a relatively simple problem. I'm just making sure I have the right idea here. I'd like to prove that the sequence $\displaystyle a_n = 1 + \frac{1}{n^{1/3}}$ converges. My proof is:




We conjecture that $a_n$ converges to 1. Thus, we must show that, for all $\epsilon > 0$, there exists an $N(\epsilon) \in \mathbb{N}$ such that $|a_n - 1| < \epsilon$ for all $n > N(\epsilon)$.



From that inequality, I do some algebra and find that: $~~\displaystyle n > \frac{1}{\epsilon^3}$, so, if we choose $\displaystyle N(\epsilon) > \frac{1}{\epsilon^3}$, we've shown that the sequence converges to 1.



Is this correct?


Answer



Just so this won't go unanswered:
Yes, you're correct. If $n>1/\epsilon^{3}$ then $n^{-1/3}<\epsilon$, for all $\epsilon>0$.


sequences and series - Arithmetic and geometric progression question (too long for title)



Good evening,
I've been struggling with this one for a while now.



The sum of an increasing geometric progression is equal to 65. If we subtract 1 from the smallest number there, and 19 from the biggest, the three numbers are going to form an arithmetic progression. Find these numbers.



I've gone through many methods and none of them worked, so it would be nice if you could explain this step by step. Not to look like someone just hunting for answers, I'll provide what I know:
The geometric progression consists of members $b_1, b_2, b_3$.
Its sum is equal to 65. $b_1q = b_2$; $b_1q^2 = b_3$.




The arithmetic progression consists of $a_1, a_2, a_3$.
$a_1 = b_1 - 1$; $a_2 = b_2$; $a_3 = b_3 - 19$.
$a_1 + d = a_2$, $a_1 + 2d = a_3$.
I'm sort of new to this site, so sorry if I didn't provide something, like tags, because these types of things are called progressions here and there are no such things in the tag list. I'm in 12th grade, which is equal to senior year in the USA as far as I know, if that matters. Thank you!


Answer



You can develop a system of two equations in two unknowns as follows. Represent the terms of the geometric progression as $c$, $rc$, and $r^2c$. Since you are told that they sum to $65$, you know
$$c + rc + r^2c = 65\tag{1}$$
Now the terms of the arithmetic progression must be $c-1$, $rc$, and $r^2c-19$. Since this is an arithmetic progression, the difference between any two successive terms is the same; thus
$$(rc)-(c-1) = (r^2c - 19)-(rc)\tag{2}$$
Equations (1) and (2) provide your system of equations. Cleaning them up a little bit, you have the system

$$\left\{\begin{array}{l}r^2c + rc + c = 65 \\ r^2c - 2rc + c = 20
\end{array}\right.$$
to solve for $r$ and $c$, which will then give you the terms of the progressions.



Subtracting the equations in this system gives you immediately that $3rc=45$. Since necessarily $r\neq 0$ (why?), you can substitute $c=15/r$ in (say) the first equation and clean it up to get
$$3r^2 - 10r + 3 = 0$$
This factors as $$(r-3)(3r-1)=0$$
so the solutions are $r=3$ (which gives $c=5$) and $r=1/3$ (which gives $c=45$). Since the geometric progression is known to be increasing, we eliminate the solution $r=1/3$ (it produces a decreasing geometric progression). This means the geometric progression is
$$\boxed{5,15,45}$$ and the arithmetic progression is $$\boxed{4,15,26}.$$
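A quick sketch verifying that the boxed progressions satisfy all the stated conditions:

```python
geo = [5, 15, 45]                                 # boxed geometric progression
ari = [g - d for g, d in zip(geo, (1, 0, 19))]    # subtract 1 and 19 as stated
```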


elementary number theory - How to prove or disprove that $gcd(ab, c) = gcd(a, b) times gcd(b, c)$?



I'm new to proofs, and am trying to solve this problem from William J. Gilbert's "An Introduction To Mathematical Thinking: Algebra and Number Systems". Specifically, this is from Problem Set 2 Question 74. It asks:




How to prove or disprove that $\gcd(ab, c) = \gcd(a, b) \times \gcd(b, c)$?




What I've tried is to use the proposition that $\gcd(a, b) = ax + by$ to rewrite the whole equality, but I can't manage to equate the two statements.




Any help would be appreciated.


Answer



Notice if $a = b = c = 3$, then



$$ \gcd(ab,c) = \gcd(9,3) = 3 $$



while



$$ \gcd(a,b) \times \gcd(b,c) = \gcd(3,3) \times \gcd(3,3) = 3 \times 3 = 9 $$




$$ \therefore \gcd(ab,c) \neq \gcd(a,b)\times \gcd(b,c) $$


sequences and series - Find the limit of $\frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt{n}}}$.



Find $$\lim_{n\to \infty}\frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}$$



First I tried by taking $\ln y_n=\ln \frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt{n}}}=\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln(n),$




which does not seem to take me anywhere. Then I tried to use the squeeze theorem, since $\frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt{n}}}\geq 1$, but now I need an upper bound. I have been trying to come up with one for a while but am stuck. Can you help me, please?


Answer



Notice that for $f(x)=\sqrt x \ln x$ you have
$$f'(x)=\frac{\ln x}{2\sqrt x}+\frac1{\sqrt x}.$$
Now by the mean value theorem
$$f(n+1)-f(n) = f'(\theta_n)$$
for some $\theta_n$ such that $n<\theta_n<n+1$.

If we notice that $\lim\limits_{x\to\infty} f'(x) = 0$ we get that

$$\lim\limits_{n\to\infty} (\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln n) = \lim\limits_{n\to\infty} f'(\theta_n) = 0,$$

so the original limit is $e^0 = 1$.
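A numerical check (my addition, not from the answer) confirms the ratio drifts down to $1$. Since $n^{\sqrt n}$ overflows quickly, the ratio is computed as the exponential of the log difference:

```python
import math

def ratio(n):
    # (n+1)^sqrt(n+1) / n^sqrt(n), via exp of the difference of logs
    return math.exp(math.sqrt(n + 1) * math.log(n + 1)
                    - math.sqrt(n) * math.log(n))

for n in (10, 10**3, 10**6):
    print(n, ratio(n))  # the values decrease toward 1 as n grows
```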


algebra precalculus - Repeating Square Root Simplification




Alright, so I have a question on a little open-book challenge-test thingy that deals with repeating square roots, in a form as follows...



$\sqrt{6+\sqrt{6+\sqrt{6+\cdots}}}$



Repeated 2012 times (2012 total square roots)



Over and over and over again. I am newish to TeX, so I am not exactly sure how to model it the way it shows up on paper. It looks sorta like:



$$s_n = \sqrt{n+\sqrt{n+\sqrt{n+\cdots}}}$$




How is something like this simplified?



Working it out logically (I am a highschool freshman, mind you), I get something like this for my example:
$3-\frac{1}{6^{2011}}$



Is this correct? It seems like I could use some sort of limit to prove this, but I have not officially gone through anything beyond Geometry. Now, I do own bits and pieces of knowledge when it comes to calculus and such, but not enough to count on with this sorta thing ;)



EDIT|IMPORTANT: This is what I need to prove: $3 > \sqrt{6+\sqrt{6+\sqrt{6+\cdots}}} > 3-\frac{1}{5^{2011}}$




Where the \cdots means however many more square roots are needed to make a total of 2012


Answer



If you start with $$x = \sqrt{6+\sqrt{6+\sqrt{6+\cdots}}}$$ with an infinite number of terms then $$x=\sqrt{6+x}$$ which you can solve. Depending on how you do it, perhaps by squaring both sides, this could give two potential solutions and you need to satisfy yourself about which, if any, is correct.



With $2012$ sixes I would have thought you could only get an empirical result, perhaps something close to $$3-\frac{3.36566}{6^{2012}}$$ as suggested with seven to thirteen sixes.



With $f(n) = \sqrt{6 +f(n-1)}$ starting with $f(0)=0$, you might try to prove something like $$3-6^{-(n-1)} \lt f(n) \lt 3$$ for $n \ge 1$ by induction: it may not be easy, as I think $\sqrt{6+3-6^{-(n-1)}} \lt 3 - 6^{-n}$.
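A quick numerical experiment (my own sketch; `f` below is the recursion from the answer) supports the bound $3-6^{-(n-1)} < f(n) < 3$ for small $n$, and also shows $6^n\bigl(3-f(n)\bigr)$ settling near the constant $3.36566$ quoted above:

```python
import math

def f(n):
    # f(0) = 0, f(n) = sqrt(6 + f(n-1))
    v = 0.0
    for _ in range(n):
        v = math.sqrt(6 + v)
    return v

# spot-check the conjectured bound for small n
for n in range(1, 13):
    assert 3 - 6 ** (-(n - 1)) < f(n) < 3

print(6 ** 10 * (3 - f(10)))  # close to 3.36566
```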


calculus - Compute $\lim\limits_{n\to\infty}\sqrt[n]{\log\left|1+\left(\frac{1}{n\cdot\log n}\right)^k\right|}$.



Compute $$ \lim\limits_{n\to\infty}\left(\sqrt[n]{\log\left|1+\left(\dfrac{1}{n\cdot\log\left(n\right)}\right)^k\right|}\right). $$
What I have: $$ \forall\ x\geq 0\ :\ x- \frac{x^2}{2}\leq \log(1+x)\leq x. $$

Apply to get that the limit equals $1$ for any real number $k$.



Is this correct? Are there any other proofs?


Answer



Yes, it works; here's another proof using a slightly more sophisticated tool (unnecessary in this case, but sometimes more useful).



By Stolz-Cesaro if $ (x_n) $ is a positive sequence and
$$ \lim_n \dfrac{x_{n+1}}{x_n} = l $$
then
$$ \lim_n \sqrt[n]{x_n} = l.$$




Taking as $(x_n)$ the sequence you defined, an easy calculation shows that $$ \dfrac{x_{n+1}}{x_n} \rightarrow 1,$$
and the claim follows.
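Numerically (a sketch of my own, taking $k=2$ as an example), both the ratio $x_{n+1}/x_n$ and the $n$-th root $\sqrt[n]{x_n}$ are indeed close to $1$ for large $n$:

```python
import math

def x(n, k=2):
    # the sequence from the question, with k = 2 as a sample exponent
    return math.log(1 + (1 / (n * math.log(n))) ** k)

n = 10 ** 5
print(x(n + 1) / x(n))   # close to 1
print(x(n) ** (1 / n))   # close to 1
```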


Tuesday, November 19, 2019

sequences and series - $A=\frac{6}{\frac{6}{\frac{6}{6-\cdots}-5}-5}-5: A=\text{?}$

$$A=\cfrac 6 {\cfrac 6 {\cfrac 6 {6-\cdots}-5}-5}-5 :A=\text{?}$$
my try :




(image: my worked attempt)



But I want to see the recurrence-sequence method!



$$a_{n+1}=\frac{6}{a_n}-5: a_1=\text{?}$$



$$a_2=\frac 6 {a_1}-5,$$



$$a_3=\frac{6}{a_2}-5$$




???



thank you very much !
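One way to explore the recurrence numerically (my own sketch, not an answer from the thread): the fixed points of $a \mapsto 6/a - 5$ solve $a^2 + 5a - 6 = 0$, i.e. $a = 1$ or $a = -6$, and iterating from a generic starting value shows which fixed point attracts the iteration:

```python
a = 2.0  # arbitrary starting value a_1 (chosen so no iterate hits 0)
for _ in range(60):
    a = 6 / a - 5
print(a)  # the iteration settles near the fixed point -6
```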

abstract algebra - Galois Group of $(x^2-p_1)\cdots(x^2-p_n)$


For distinct prime numbers $p_1,...,p_n$, what is the Galois group of $(x^2-p_1)\cdots(x^2-p_n)$ over $\mathbb{Q}$?




This problem appears to be quite common, however my understanding of Galois theory is quite poor, and I have no idea how to do this problem.

induction - Proving that two summations are equivalent: $\sum_{i=1}^n i^3 = (\sum_{i=1}^n i)^2$





Give a constructive proof to show that for all $n \geq 1$ ,



$\sum\limits_{i=1}^n i^3 = (\sum\limits_{i=1}^n i)^2$



Observe that $(n+1)^4 - n^4 = 4n^3 + 6n^2 + 4n + 1$ .







Now, the two following equalities are obvious:



$\sum\limits_{i=1}^n i^3 = 1^3 + 2^3 + 3^3 + ... + n^3$



$(\sum\limits_{i=1}^n i)^2 = (1 + 2 + 3 + ... + n)^2$



And they appear to agree on the first few test cases:




$\sum\limits_{i=1}^n i^3 = A(n)$




  • $A(1) = 1^3 = 1$

  • $A(2) = 1^3 + 2^3 = 1 + 8 = 9$

  • $A(3) = 1^3 + 2^3 + 3^3 = 9 + 27 = 36$



$(\sum\limits_{i=1}^n i)^2 = B(n)$





  • $B(1) = (1)^2 = 1$

  • $B(2) = (1 + 2)^2 =9 $

  • $B(3) = (1 + 2 + 3)^2 = 36$



Now, I am thinking of finding the closed-forms for both functions in the hopes that they are indeed the same. Then I would prove those closed forms to work by induction. But:




  1. I don't know if that would be a sound way to do it.


  2. I don't know if this would even qualify as constructive, as the question requests.



As you may tell, I am no math major. I am a Computer Science major, though. This is a computing fundamentals class. I took discrete 1.5 years ago, so my knowledge is about as fresh as a litter box. I've been in quite a rut for a few hours over this.


Answer



Your goal is to prove the statement $S(n)$ for all $n\geq 1$ where
$$
S(n) : 1^3 + 2^3 +3^3 +\cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Using $\Sigma$-notation, we may rewrite $S(n)$ as follows:

$$
S(n) : \sum_{r=1}^n r^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Base step: The statement $S(1)$ says that $(1)^3 = (1)^2$ which is true because $1=1$.



Inductive step [$S(k)\to S(k+1)$]: Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$
S(k) : \sum_{r=1}^k r^3 = \left[\frac{k(k+1)}{2}\right]^2
$$
holds. To be proved is that

$$
S(k+1) : \sum_{r=1}^{k+1} r^3 = \left[\frac{(k+1)((k+1)+1)}{2}\right]^2
$$
follows. Beginning with the left side of $S(k+1)$,
\begin{align}
\sum_{r=1}^{k+1}r^3 &= \sum_{r=1}^k r^3 + (k+1)^3\tag{split off the $r=k+1$ term}\\[1em]
&= \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3\tag{by $S(k)$}\\[1em]
&= \frac{(k+1)^2}{4}[k^2+4(k+1)]\tag{factor out $\frac{(k+1)^2}{4}$}\\[1em]
&= \frac{(k+1)^2}{4}[(k+2)(k+2)]\tag{factor quadratic}\\[1em]
&= \frac{(k+1)^2(k+2)^2}{4}\tag{multiply and rearrange}\\[1em]

&= \left[\frac{(k+1)(k+2)}{2}\right]^2\tag{rearrange}\\[1em]
&= \left[\frac{(k+1)((k+1)+1)}{2}\right]^2,\tag{rearrange}
\end{align}
one arrives at the right side of $S(k+1)$, thereby showing that $S(k+1)$ is also true, completing the inductive step.



By mathematical induction, it is proved that for all $n\geq 1$, where $n\in\mathbb{N}$, that the statement $S(n)$ is true.



Note: The step where $\dfrac{(k+1)^2}{4}$ is factored out is an important one. If we do not factor this out and, instead, choose to expand $(k+1)^3$, the problem becomes much more messy than it needs to be.
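As a final machine check (my addition, not part of the proof), both sums and the closed form used in the induction agree:

```python
# Verify sum of cubes = (sum)^2 = [n(n+1)/2]^2 for a range of n.
for n in range(1, 201):
    lhs = sum(i ** 3 for i in range(1, n + 1))
    rhs = sum(range(1, n + 1)) ** 2
    closed = (n * (n + 1) // 2) ** 2
    assert lhs == rhs == closed
print("verified for n = 1..200")
```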


algebra precalculus - Is $\lfloor x\rfloor$ defined in terms of $|x|$ and branch selection?


Is $\lfloor x\rfloor$ defined in terms of repeated square root branch selection:


$$\lfloor x\rfloor = \frac{1}{2}\left(\sum_{n=-\infty}^\infty\frac{+\sqrt{(x-n)^2}}{x-n} \right) - \frac1{2}$$


Or the arctangent's logarithmic branch selection:


$$\lfloor x\rfloor = x - \frac1{2}-\frac1{\pi}\tan^{-1}\left(\tan\left(\pi(x - \frac1{2})\right)\right)$$


Or both, or something else?



Answer



First of all, I emphasize that neither is a definition of the floor function found in any standard text or paper. This obfuscation seems like an attempt to create an "algebraic" formula to express $\lfloor x \rfloor$. Let me play along anyway.


Since Yuval already explained the second formula, I will restrict to the first. However, I will modify it slightly to make the expression even meaningful. First, let us replace $$ \frac{\sqrt{x^2}}{x} = \frac{|x|}{x}, $$ by $$ \mathrm{sgn} \ x = \begin{cases} 1 & x \geq 0, \\ -1 & x < 0. \end{cases} $$ Notice that $\mathrm{sgn}\ 0$ is arbitrarily defined as $1$. Further, we will modify the right hand by grouping the positive and negative terms together. That is, we want to prove that the expression: $$ \frac{1}{2}\sum_{n=1}^{\infty} (\mathrm{sgn} (x-n) + \mathrm{sgn} (x+n)) + \frac{1}{2}\mathrm{sgn} (x) - \frac{1}{2}. \tag{1} $$ sums to $\lfloor x \rfloor$ for all $x \in \mathbb R$. (Convince yourself that I have included all the terms in the above formula.) The reason for pairing up the terms like this is that the original series had obvious convergence issues; in contrast, for any real $x$ all but finitely many terms of the modified series vanish.


First of all, suppose $x \geq 0$, and define $r := \lfloor x \rfloor \geq 0$. Since for any integer $k$, we have $k \leq x$ iff $k \leq \lfloor x \rfloor = r$, the $n$th summand of the series $\mathrm{sgn}(x-n) + \mathrm{sgn}(x+n)$ vanishes for $n \geq r+1$. On the other hand, for $1 \leq n \leq r$, the two terms double up to $2$. Plugging in these values, $(1)$ simplifies to: $$ \frac{1}{2} \sum_{n=1}^{r} 2 + \frac{1}{2} - \frac{1}{2} = r = \lfloor x \rfloor, $$ and so we are done. (Note that $\mathrm{sgn} x = 1$.)


Now consider the case $x < 0$. The trick in this case is to define $r := \lceil -x \rceil \geq 0$. I leave it as an exercise to verify that $\lfloor x \rfloor = -r$, in all cases whether or not $x$ is an integer. In this case, the $n$th summand cancels for $n \geq r$, since $x+n \geq x + r = x + \lceil -x \rceil \geq x -x = 0$, whereas $x-n$ is clearly negative. On the other hand, for $1 \leq n \leq r-1$, we have $x + n < 0$ (following a similar argument as above). So in this range the two terms double up to $-2$. Substituting these values, $(1)$ simplifies to $$ \frac{1}{2}\sum_{n=1}^{r-1} (-2) + \frac{-1}{2} - \frac{1}{2} = -(r-1) -1 = -r, $$ which is what we want to show. (Note that $\mathrm{sgn} \ x$ is $-1$.)
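Since all but finitely many paired terms of $(1)$ vanish, the regrouped series can be truncated and checked numerically. A sketch of my own (truncating at $N = 100$, so valid for $|x| < 100$; `sgn` follows the convention above with $\mathrm{sgn}\,0 = 1$):

```python
import math

def sgn(t):
    # sign function with sgn(0) defined as 1, as in the answer
    return 1 if t >= 0 else -1

def floor_via_series(x, N=100):
    # formula (1), truncated: the terms with n >= N cancel when |x| < N
    s = sum(sgn(x - n) + sgn(x + n) for n in range(1, N))
    return 0.5 * s + 0.5 * sgn(x) - 0.5

for x in (-7.3, -2.0, -0.5, 0.0, 0.5, 2.0, 7.3):
    assert floor_via_series(x) == math.floor(x)
```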


Monday, November 18, 2019

complex analysis - Breaking a contour integral into 3 separate contours?


We can try to integrate the following function around a counter-clockwise circular contour:


$$\frac{x^3}{(x-1)(x-2)(x-3)}$$


Can someone show how to use the Cauchy–Goursat theorem to break this apart into 3 separate contours?


In other words, I'd like to take


$$\int_c{\frac{x^3}{(x-1)(x-2)(x-3)}}$$


and get



$$\int_{C_1}{f_1(x)} + \int_{C_2}{f_2(x)} + \int_{C_3}{f_3(x)}$$


I'm hoping for a pretty thorough explanation with at least one of the contours. I just want to be certain I have the idea perfected.


Answer



If you're looking to break apart the contour into three parts so that each part contains exactly one pole, take a look at the Mercedes-Benz symbol. Imagine the circle is the original contour. Your function $f(z)$ has three poles, so we subdivide the circle into three parts so that each part corresponds to one of the contours, each of which contains exactly one pole.


So now we have our $C_1, C_2, C_3$ contours. So then we can break up our original integral into $$ \int_{C_1} f(z) \, dz + \int_{C_2} f(z) \, dz + \int_{C_3} f(z) \, dz$$ So the integrand isn't actually different. However, because each contour contains only one pole, we can take advantage of the integral formula as follows: suppose we are looking at the $C_1$ contour; then $z = 1$ is the only pole, so then we should write $$ f(z) = \frac{1}{z - 1} \left( \frac{z^3}{(z - 2)(z - 3)} \right) = \frac{f_1(z)}{z - 1}$$ then $$ \int_{C_1} f(z) \, dz = \int_{C_1} \frac{f_1(z)}{z - 1} \, dz = 2\pi i f_1(1) $$ according to the integral formula. We repeat this same process similarly for the other contours.
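A numeric cross-check (my addition): writing $f_2, f_3$ analogously, the three pieces contribute $2\pi i\,\bigl(f_1(1)+f_2(2)+f_3(3)\bigr) = 2\pi i\,(\tfrac12 - 8 + \tfrac{27}{2}) = 12\pi i$, and a trapezoid quadrature of the original integral over a circle of radius $5$ (enclosing all three poles) agrees:

```python
import cmath, math

def f(z):
    return z ** 3 / ((z - 1) * (z - 2) * (z - 3))

# parametrize the contour as z = 5 e^{it}, dz = 5i e^{it} dt, t in [0, 2*pi)
M = 2000
total = 0j
for j in range(M):
    t = 2 * math.pi * j / M
    z = 5 * cmath.exp(1j * t)
    total += f(z) * (5j * cmath.exp(1j * t)) * (2 * math.pi / M)

expected = 2j * math.pi * (0.5 - 8 + 13.5)  # sum of the three contributions
print(abs(total - expected))  # tiny: the periodic trapezoid rule converges fast
```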


combinatorics - Proof that $\sum_{k=0}^n \binom{k}{p} = \binom{n+1}{p+1}$




I am currently doing a little self study on difference sequences and they use the following identity $$\sum_{k=0}^n \binom{k}{p}= \binom{n+1}{p+1}$$
I have never seen this identity without proof as if it is obvious, am I missing something is this an obvious result that perhaps is a consequence of some other theorem, if not could someone provide some intuition as to why this identity is true or even a proof. Any help is appreciated, thanks in advance.


Answer



It is more or less famously known as the hockey-stick identity. If you recall Pascal's triangle, notice that each number is the sum of the two above. Now take the orange number $84$ below. It is the sum of the two above it: $84=56+28$. Similarly, $28$ is the sum of the two numbers above it, and then the sum above it etc. so that we end up with



$$\sum_{k=2}^{8}\binom k2=\binom93=84$$



which more or less uses the definition of the binomial coefficient through Pascal's triangle.




(image: Pascal's triangle with the hockey stick ending at $84$ highlighted)






If this is not so obvious, an induction proof isn't too hard:



$$\sum_{k=0}^{n+1}\binom kp=\binom{n+1}p+\sum_{k=0}^n\binom kp=\binom{n+1}p+\binom{n+1}{p+1}=\binom{n+2}{p+1}$$



The last step was the sum of two binomials equalling the next below it.




It is then clear that this holds for $n=1$ regardless of $p$.
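Both the identity and the concrete hockey stick through $84$ can be spot-checked with `math.comb` (Python 3.8+); this is just a verification sketch of mine:

```python
from math import comb

# hockey-stick identity: sum_{k=0}^{n} C(k, p) = C(n+1, p+1)
# (comb(k, p) is 0 for k < p, so starting the sum at k = 0 is harmless)
for n in range(30):
    for p in range(10):
        assert sum(comb(k, p) for k in range(n + 1)) == comb(n + 1, p + 1)

# the concrete example from Pascal's triangle
assert sum(comb(k, 2) for k in range(9)) == comb(9, 3) == 84
```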


sequences and series - Proof of $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$




Prove that $$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$$



I have actually got the HINT for this Proof from a very nice book:




The HINT i started with is:



$$\frac{1}{\sin^2 x}=\frac{1}{4\sin^2\left(\frac{x}{2}\right)\cos^2\left(\frac{x}{2}\right)}=\frac{1}{4}\left(\frac{1}{\sin^2\left(\frac{x}{2}\right)}+\frac{1}{\sin^2\left(\frac{x+\pi}{2}\right)}\right) \tag{1}$$



Now we have:



$$1=\frac{1}{\sin^2\left(\frac{\pi}{2}\right)}=\frac{1}{4}\left(\frac{1}{\sin^2\left(\frac{\pi}{4}\right)}+\frac{1}{\sin^2\left(\frac{3\pi}{4}\right)}\right)$$



Applying $(1)$ Repeatedly we get




$$1=\frac{1}{16}\left (\frac{1}{\sin^2\left(\frac{\pi}{8}\right)}+\frac{1}{\sin^2\left(\frac{3\pi}{8}\right)}+\frac{1}{\sin^2\left(\frac{5\pi}{8}\right)}+\frac{1}{\sin^2\left(\frac{7\pi}{8}\right)}\right)$$



So Generalizing we get:



$$1=\frac{1}{4^n}\sum_{k=0}^{2^n-1} \frac{1}{\sin^2\left(\frac{(2k+1)\pi}{2^{n+1}}\right)}$$



EDIT:



Ok let me explain the next step the author of the book has followed:

$$1=\frac{1}{4^n}\sum_{k=0}^{2^n-1} \frac{1}{\sin^2\left(\frac{(2k+1)\pi}{2^{n+1}}\right)}=\frac{2}{4^n}\sum_{k=0}^{2^{n-1}-1} \frac{1}{\sin^2\left(\frac{(2k+1)\pi}{2^{n+1}}\right)}$$



My doubt is how a factor of $2$ came outside and why the upper limit has changed?


Answer



This answer is limited to the question asked in the OP's edit. The puzzling step taken there uses the fact that $\sin(\pi-x)=\sin x$ and may best be explained with the example



$$\begin{align}
{1\over16}\left({1\over\sin^2\left(\pi\over8\right)}+{1\over\sin^2\left(3\pi\over8\right)}+{1\over\sin^2\left(5\pi\over8\right)}+{1\over\sin^2\left(7\pi\over8\right)} \right)
&={1\over16}\left({1\over\sin^2\left(\pi\over8\right)}+{1\over\sin^2\left(3\pi\over8\right)}+{1\over\sin^2\left(3\pi\over8\right)}+{1\over\sin^2\left(\pi\over8\right)} \right)\\
&={2\over16}\left({1\over\sin^2\left(\pi\over8\right)}+{1\over\sin^2\left(3\pi\over8\right)} \right)

\end{align}$$
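Both the full sum and its halved form can be confirmed numerically (my verification sketch, not part of the proof):

```python
import math

def full_sum(n):
    # (1/4^n) * sum over k = 0 .. 2^n - 1
    return sum(1 / math.sin((2 * k + 1) * math.pi / 2 ** (n + 1)) ** 2
               for k in range(2 ** n)) / 4 ** n

def half_sum(n):
    # (2/4^n) * sum over k = 0 .. 2^(n-1) - 1, using sin(pi - x) = sin(x)
    return 2 * sum(1 / math.sin((2 * k + 1) * math.pi / 2 ** (n + 1)) ** 2
                   for k in range(2 ** (n - 1))) / 4 ** n

for n in range(1, 10):
    assert math.isclose(full_sum(n), 1)
    assert math.isclose(half_sum(n), 1)
```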


Sunday, November 17, 2019

calculus - Why Euler's formula is true?





I need to know why Euler's formula is true. I mean, why is the following true: $$ e^{ix} = \cos(x) + i\sin(x) $$


Answer



Hint: Notice $$\cos (x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$ and $$i\sin (x) = ix - i\frac{x^3}{3!} + i\frac{x^5}{5!} - i\frac{x^7}{7!} + \cdots $$ Now add them and use the fact that $i^2 = -1$, $i^3 = -i$, $i^4 = 1$. You should obtain $e^{ix}$. Also notice: $$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$
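The hint can be tested directly: summing the power series for $e^{ix}$ term by term and comparing it with $\cos x + i\sin x$ (a numerical sketch of mine):

```python
import cmath, math

def exp_series(z, terms=40):
    # partial sum of e^z = sum z^k / k!
    total, term = 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)
    return total

for x in (0.0, 1.0, math.pi, -2.5):
    lhs = exp_series(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

print(exp_series(1j * math.pi))  # close to -1, i.e. e^{i*pi} = -1
```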


convergence divergence - Prove that $x^n/n!$ converges to $0$ for all $x$

Prove that $a_n=x^n/n! \to 0$ for all $x$



Here is what I tried, but it seems to lead to nowhere.



Choose $\epsilon > 0$. We need to show that there exists $N\in \mathbb{N}$ such that for all $n>N$ we have $|a_n| < \epsilon$




So, $|(x^n/n!)| < \epsilon \implies |x^n| < n!\cdot \epsilon$ (since $n!$ is positive we ignore the absolute signs). So $|x|/(\epsilon^{1/n}) < [n!^{(1/n)}]$.
Now I am stuck in solving this for $n$, and hence finding $N$ ...
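One standard route (not from the thread, just a hint made concrete): once $n \ge 2|x|$, each new factor $|x|/(n+1) \le 1/2$, so the terms shrink at least geometrically. A numerical illustration:

```python
x = 10.0
term = 1.0
for n in range(1, 61):
    term *= x / n  # term = x^n / n! after this step
print(term)  # on the order of 1e-22: the terms collapse once n exceeds |x|
```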

algebra precalculus - Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$


Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$


My attempt: Using the formula $\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}$


$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots \binom{n+k-1}{k-1}+\binom{n+k}{k}$



=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k-1}{k-1}+(\binom{n+k-1}{k}+\binom{n+k-1}{k-1})$


=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +2\binom{n+k-1}{k-1}+\binom{n+k-1}{k}$


I can again use the same formula for the term $2\binom{n+k-1}{k-1}$, and in the next to step to the term $3\binom{n+k-2}{k-2}$.


But I don't think this way the expression will get simplified.


Any help is appreciated.


Answer



First show that $\large{n\choose r}={{n-1}\choose {r-1}}+{{n-1}\choose r}$,


from which we get $\large{{n+r+1}\choose r}={{n+r}\choose r}+{{n+r}\choose {r-1}}$


By the same rule, $\large{{n+r}\choose {r-1}}={{n+r-1}\choose {r-1}}+{{n+r-1}\choose {r-2}}$


$\large{{n+r-1}\choose {r-2}}={{n+r-2}\choose {r-2}}+{{n+r-2}\choose {r-3}}$



$\qquad\vdots$


$\large{{n+3}\choose {2}}={{n+2}\choose {2}}+{{n+2}\choose {1}}$


$\large{{n+2}\choose {1}}={{n+1}\choose {1}}+{{n+1}\choose {0}}$


Adding, we get $\large{n\choose 0}+{{n+1}\choose 1}+...+{{n+r-1}\choose {r-1}}+{{n+r}\choose r}={{n+r+1}\choose r}$



Alternatively, we can fix any $r$ of the $(n+r+1)$ objects given. Label them $A_1,A_2,...A_r$. Now our choices of $r$ objects from the $(n+r+1)$ objects may or may not contain any or all of the set $\{A_1,A_2,...,A_r\}$.


Case I: It does not contain $A_1$


This will happen in $\large{{n+r}\choose r}$ ways as the $r$ things have to be chosen from the remaining $(n+r)$ things.


Case II: It contains $A_1$ but does not contain $A_2$.



This will happen in $\large{{n+r-1}\choose {r-1}}$ ways, because having chosen $A_1$ and rejected $A_2$, we have only $(n+r-1)$ things to choose from and we need only $(r-1)$.


$...$


Case r: It contains $A_1,A_2,...,A_{r-1}$ but does not contain $A_r$.


This will happen in $\large {{n+1}\choose 1}$ ways.


Case $(r+1)$: It contains $A_1,A_2,...,A_r$.


This will happen in $\large{n\choose 0}=1$ way.


Hence, $\displaystyle\sum_{k=0}^{r}{{n+k}\choose k}={{n+r+1}\choose r}$


real analysis - Show that if $|s_{mn}-S|<\varepsilon$ for all $n>m\ge N$, then $|(r_1+r_2+\cdots+r_m)-S|\le\varepsilon$



This is the exercise 2.8.5 of the book Understanding analysis of Abbott. To put you in context I have that




  1. The sequence of partial sums $(s_{mn})$ is absolutely convergent, and the definition is $s_{mn}=\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}$. All the next statements are derived from here.


  2. $(s_{nn})\to S$


  3. $|s_{mn}-S|<\varepsilon,\forall n>m\ge N$


  4. $\lim_{n\to\infty} s_{mn}=r_1+r_2+\cdots+r_m$





Now it is supposed that I can show that



$$|s_{mn}-S|<\varepsilon, \forall n>m\ge N\implies |(r_1+r_2+\cdots+r_m)-S|\le\varepsilon$$



Maybe this is a very dumb question but I cant see where the "$\le$" comes. So my thought is that the "$\le$" symbol must appear as a consequence of some order relation between limits but I cant see how, can you enlighten me please? Thank you in advance.



EDIT: to be specific, I cant see why the case is not just




$$|s_{mn}-S|<\varepsilon, \forall n>m\ge N\implies |(r_1+r_2+\cdots+r_m)-S|<\varepsilon$$



instead of the stated one. To me if $|s_{mn}-S|<\varepsilon$ holds for any $n>m\ge N$ I cant see why in the limit when $n\to\infty$ this maybe different.


Answer



If for some sequence $(x_n)$ we have $x_n < \epsilon$ for all $n$ and $x_n \to x$ then we can conclude $x \leqslant \epsilon$. This is easily proved by contradiction assuming $x > \epsilon$. Then we arrive at a contradiction where infinitely many $x_n$ are in a neighborhood $(x- \delta,x+\delta)$ with $x_n > x - \delta > \epsilon$.



The inequality need not be strict to allow for the possibility that $\epsilon$ is actually the limit. For example, $x_n = 1 - 1/n < 1$ for all $n$ but $\lim_n (1 - 1/n) = 1.$



In your case, the sequence in question is $x_n = |s_{mn}-S|$ and $\lim_n x_n = |(r_1 + \ldots + r_m) - S|.$


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...