Wednesday, February 28, 2018

Complex power series and radius of convergence



Let $c$ be a non-zero complex number, and consider the power series
\begin{equation}
S(z)=\frac{z-c}{c}-\frac{(z-c)^2}{2c^2}+\frac{(z-c)^3}{3c^3}-\ldots.
\end{equation}
By using the Ratio Test, or otherwise, show that the series has radius of convergence $|c|$. By differentiating term by term, show that $S'(z)= \frac{1}{z}$.



I've never done power series in complex analysis, so this is what I've attempted so far:



\begin{equation}
S(z)=\frac{z-c}{c}-\frac{(z-c)^2}{2c^2}+\frac{(z-c)^3}{3c^3}-\ldots\\
=\sum_{n=0}^{\infty} \frac{(-1)^n (z-c)^n}{nc}.
\end{equation}
Let $x_{n}=\frac{(-1)^n}{nc}$. Using the Ratio test, we have:

\begin{equation}
\lim_{n \rightarrow \infty} \left\lvert \frac{x_{n+1}(z-c)^{n+1}}{x_{n}(z-c)^n} \right\rvert = \lvert z-c \rvert \lim_{n \rightarrow \infty} \left\lvert - \frac{n}{n+1} \right\rvert = -\lvert z-c \rvert <1.
\end{equation}



I'm pretty sure this is wrong somewhere, but I have no idea how to continue to show that the radius of convergence is $|c|$.


Answer



Notice (hint):



First of all:





  • When $n$ 'starts' at $0$ then we've got a problem, because we get (dividing by $0$):



$$\frac{(-1)^0(z-c)^0}{0c}$$




  • Use the ratio test to prove that the series converges when $|z-c|<|c|$; that is, the radius of convergence is $|c|$.

  • So:




$$\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(z-c)^n}{nc^n}=\ln\!\left(1+\frac{z-c}{c}\right)=\ln\frac{z}{c}\qquad\text{when}\ |z-c|<|c|,$$ and differentiating term by term gives $S'(z)=\frac{1}{z}$.
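As a numerical sanity check (a sketch, not part of the proof), the partial sums can be compared against $\ln(z/c)$ at a point inside the disc of convergence; the particular $c$ and $z$ below are arbitrary choices:

    import cmath

    c = 1 + 1j            # any non-zero complex number (arbitrary)
    z = c + 0.4j          # a point with |z - c| < |c|

    # partial sum of sum_{n>=1} (-1)^(n-1) (z-c)^n / (n c^n)
    s = sum((-1) ** (n - 1) * (z - c) ** n / (n * c**n) for n in range(1, 2000))
    print(s, cmath.log(z / c))   # the two values agree to many digits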


Tuesday, February 27, 2018

Find the limit without using L'Hopital's rule: $\lim_{x \to 0} \frac{x-\tan x}{x \tan x}$.


I solved it with L'Hopital's rule but I want to find out how can I solve it without using L'Hopital's rule.


Answer



$$\lim_{x\to 0}\dfrac{x-\tan x}{x\tan x}$$ $$=\lim_{x\to 0}\bigg(\dfrac{x}{x\tan x}-\dfrac{\tan x}{x\tan x}\bigg)$$ $$=\lim_{x\to 0}\bigg(\dfrac{1}{\tan x}-\dfrac{1}{x}\bigg)$$ $$=\lim_{x\to 0}\bigg(\dfrac{\cos x}{\sin x}- \dfrac{1}{x}\bigg)$$ $$=\lim_{x\to 0}\bigg(\dfrac{\cos x}{\frac{\sin x}{x}}\cdot\dfrac{1}{x}- \dfrac{1}{x}\bigg)$$ Since $\lim_{x\to 0}\cos x=1$ and $\lim_{x\to 0}\dfrac{\sin x}{x}=1$, this behaves like $$\lim_{x\to 0}\bigg(\dfrac{1}{x}-\dfrac{1}{x}\bigg)=0.$$
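A quick numeric check of the limit (illustration only; the sample points are arbitrary):

    import math

    for x in [0.1, 0.01, 0.001]:
        print(x, (x - math.tan(x)) / (x * math.tan(x)))
    # the values shrink roughly like -x/3, consistent with the limit 0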


sequences and series - Why does the sum of inverse squares equal $\pi^2/6$?




I've heard that $1+1/4+1/9+1/16+1/25+...$ converges to $\pi^2/6$. This was very surprising to me and I was wondering if there was a reason that it converges to this number?




I am also confused why $\pi$ would be involved. If someone could provide a proof of this or an intuitive reason/derivation, I would appreciate it. I am only able to understand high-school maths, however (year 12).


Answer



Some of the proofs in the given link are somewhat technical, so I'll try to give an accessible version of a variant of one of them.



Consider the function $$f(x):=\frac{\sin\pi\sqrt x}{\pi\sqrt x}.$$



This function has roots for every perfect square $x=n^2$, and it can be shown to equal the infinite product of the binomials for the corresponding roots



$$p(x):=\left(1-\frac x{1^2}\right)\left(1-\frac x{2^2}\right)\left(1-\frac x{3^2}\right)\left(1-\frac x{4^2}\right)\cdots$$




(obviously, $p(0)=f(0)=1$ and $p(n^2)=f(n^2)=0$.)



If we expand this product to the first degree, we get



$$1-\left(\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\frac1{4^2}+\cdots\right)\,x+\cdots$$



On the other hand, the Taylor expansion to first order is



$$f(0)+f'(0)\,x+\cdots=1-\frac{\pi^2}6x+\cdots$$ hence the claim by identification.
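As a purely numeric illustration (not part of the argument), the partial sums of $\sum 1/n^2$ do approach $\pi^2/6$:

    import math

    partial = sum(1 / n**2 for n in range(1, 100001))
    print(partial, math.pi**2 / 6)   # 1.64492... vs 1.64493...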




The plot shows the function $f$ in blue, the linear approximation in red, and the products of the first $4$, $5$ and $6$ binomials, showing that they agree better and better.





probability - How do we find this expected value?




I'm just a little confused on this. I'm pretty sure that I need to use indicators for this but I'm not sure how I find the probability. The question goes like this:





A company puts five different types of prizes into their cereal boxes, one in each box and in equal proportions. If a customer decides to collect all five prizes, what is the expected number of boxes of cereals that he or she should buy?




I have seen something like this before and I feel that I'm close, I'm just stuck on the probability. So far I have said that $$X_i=\begin{cases}1 & \text{if the $i^{th}$ box contains a new prize}\\ 0 & \text{if no new prize is obtained} \end{cases}$$ I know that the probability of a new prize after the first box is $\frac45$ (because obviously the person would get a new prize with the first box) and then the probability of a new prize after the second prize is obtained is $\frac35$, and so on and so forth until the fifth prize is obtained. What am I doing wrong?! Or "what am I missing?!" would be the more appropriate question.


Answer



As the expected number of tries to obtain a success with probability $p$ is $\frac{1}{p}$, you get the expected number:
$$1+\frac{5}{4}+\frac{5}{3}+\frac{5}{2}+5=\frac{12+15+20+30+60}{12}=\frac{137}{12}\approx 11.42$$
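A quick Monte Carlo sketch (an illustration, not a proof) agrees with $137/12 \approx 11.42$:

    import random

    def boxes_needed(k=5):
        # buy boxes until all k distinct prizes have been seen
        seen, count = set(), 0
        while len(seen) < k:
            seen.add(random.randrange(k))
            count += 1
        return count

    trials = 100_000
    print(sum(boxes_needed() for _ in range(trials)) / trials)   # ~11.42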


calculus - Is there a general formula for $I(m,n)$?





Consider the integral



$$I(m,n):=\int_0^{\infty} \frac{x^m}{x^n+1}\,\mathrm dx$$



For $m=0$, a general formula is $$I(0,n)=\frac{\frac{\pi}{n}}{\sin\left(\frac{\pi}{n}\right)}$$




Some other values are $$I(1,3)=\frac{2\pi}{3\sqrt{3}}$$ $$I(1,4)=\frac{\pi}{4}$$ $$I(2,4)=\frac{\pi}{2\sqrt{2}}$$



For natural $m,n$ the integral exists if and only if $n\ge m+2$.




Is there a general formula for $I(m,n)$ with integers $m,n$ and $0\le m\le n-2$ ?



Answer



We can use contour integration to arrive at the general result. Note that




$$\begin{align}
\oint_C \frac{z^m}{z^n+1}\,dz&=2\pi i \text{Res}\left(\frac{z^m}{z^n+1}, z=e^{i\pi/n}\right)\\\\
&=-2\pi i \frac{e^{i\pi(m+1)/n}}{n}\tag 1
\end{align}$$



where $C$ is the "pie slice" contour comprised of (i) the real-line segment from $0$ to $R$, where $R>1$, (ii) the circular arc of radius $R$ that begins at $R$ and ends at $Re^{i2\pi/n}$, and (iii) the straight line segment from $Re^{i2\pi/n}$ to $0$.



Then, we can write




$$\oint_C \frac{z^m}{z^n+1}\,dz=\int_0^R \frac{x^m}{x^n+1}\,dx+\int_0^{2\pi/n}\frac{R^me^{im\phi}}{R^ne^{in\phi}+1}\,iRe^{i\phi}\,d\phi-\int_0^R \frac{x^me^{i2\pi m/n}}{x^n+1}e^{i2\pi/n}\,dx \tag 2$$



If $n>m+1$, then as $R\to \infty$, the second integral on the right-hand side of $(2)$ vanishes and we find that



$$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^m}{x^n+1}\,dx=2\pi i\frac{e^{i\pi(m+1)/n}}{n(e^{i2\pi(m+1)/n}-1)}=\frac{\pi/n}{\sin(\pi(m+1)/n)}}$$
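A numerical spot check of the boxed formula (a sketch using scipy; the values of $m$ and $n$ are arbitrary):

    import math
    from scipy.integrate import quad

    m, n = 2, 7
    numeric, _ = quad(lambda x: x**m / (x**n + 1), 0, math.inf)
    closed = (math.pi / n) / math.sin(math.pi * (m + 1) / n)
    print(numeric, closed)   # both ~0.46034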


trigonometry - Understanding imaginary exponents



Greetings!



I am trying to understand what it means to have an imaginary number in an exponent. What does $x^{i}$ where $x$ is real mean?




I've read a few pages on this issue, and they all seem to boil down to the same thing:




  1. Any real number $x$ can be written as $e^{\ln{x}}$ (seems obvious enough.)

  2. Mumble mumble mumble

  3. This is equivalent to $\cos(\ln x) + i\sin(\ln x)$



Clearly I'm missing something in step 2. I understand (at least I think I do) how the complex number $\cos{x} + i\sin{x}$ maps to a point on the unit circle in a complex plane.




What I am missing, I suppose, is how this point is related to the natural log of $x$. Moreover, I don't understand what complex exponentiation is. I can understand integer exponentiation as simple repeated multiplication, and I can understand other things (like fractional or negative exponents) by analogy with the operations that undo them. But what does it mean to repeat something $i$ times?


Answer



Consider a real number $A$, and take it to the power $i$. If our system of complex numbers is to be consistent, then $A^i$ must be a complex number; in other words, there must be two real numbers $x$ and $y$, which depend on $A$, such that:



$A^i=x+iy$



Furthermore, we can write $A^{-i}=x-iy$ for the same $x$ and $y$. Hence:



$x^2+y^2=(x+iy)(x-iy)=A^iA^{-i}=A^{i-i}=A^0=1$




We have shown that for any real number $A$, $|A^i|=1$, and therefore $A^i$ corresponds to a complex number which lies some angle $\theta$ along the unit circle.



Now consider the sine and cosine functions for extremely small angles $\epsilon$. A tiny angle $\epsilon$ cuts out a slice of the unit circle, and the curvature of the circumference over this small angle is negligible. We can therefore think of this slice as a right triangle with angle $\epsilon$, and the hypotenuse and adjacent sides are both length one since they correspond to the radius of the unit circle.



Using the formula for the arc length of a circle, it's easy to determine that in the right triangle formed by the small angle approximation, the length of the side opposite to the angle $\epsilon$ is equal to $\epsilon$. We can read off the $(x,y)$ coordinates from this diagram (which are $(\cos(\epsilon),\sin(\epsilon))$), and therefore we conclude that for very small angles $\epsilon$:



$\sin(\epsilon) \approx \epsilon \hspace{10mm} \cos(\epsilon) \approx 1$



therefore $\cos(\epsilon) + i\sin(\epsilon) \approx 1+i\epsilon$, and hence for real numbers $A$ which are extremely close to one (so that $\ln A$ is small), the complex number $A^i$ lies approximately at an angle $\ln A$ along the unit circle, since $A^i=e^{i\ln A}\approx 1+i\ln A$.
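This is easy to see numerically in Python, where ** accepts complex exponents (a sketch; the base is an arbitrary choice):

    import cmath, math

    A = 5.0
    w = A ** 1j                          # Python evaluates exp(i ln A)
    print(abs(w))                        # 1.0: w lies on the unit circle
    print(cmath.phase(w), math.log(A))   # the angle equals ln A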



graphing functions - Wrong result in Wolfram Alpha on graph

I'm learning to code in Python, so I tried to find the roots and draw the graph of the function
$$-(16x^2-24x+5)e^{-x}\tag{1}$$



The result I got in Python using the matplotlib library is this: [plot omitted]



The problem is that Wolfram Alpha is giving me the same graph, but it's incorrect.
If you plug $x=1$ into $(1)$, then $y = 1.104$, not $0$ as Wolfram Alpha and Python are showing. I'm confused why Wolfram Alpha and even Python are plotting wrong graphs.
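A direct evaluation (a minimal sketch) confirms the value at $x=1$:

    import math

    f = lambda x: -(16 * x**2 - 24 * x + 5) * math.exp(-x)
    print(f(1))   # 1.1036..., i.e. y = 1.104, not 0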

Monday, February 26, 2018

calculus - Find the limit of $\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$ without L'Hospital's rule



I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$

and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$.
Any ideas?


Answer



Simply differentiate $f(x)=\ln(e^x +1)$ at the point of abscissa $x=0$ and you'll get the answer. In fact, this is the definition of the derivative of $f$!
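Spelled out, the limit is exactly the difference quotient of $f$ at $0$:

$$\lim_{x\to0}\frac{\ln(1+e^x)-\ln 2}{x}=\lim_{x\to0}\frac{f(x)-f(0)}{x-0}=f'(0)=\left.\frac{e^x}{1+e^x}\right|_{x=0}=\frac{1}{2}.$$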


indeterminate forms - What is undefined times zero?

Einstein's energy equation (after substituting the equation of relativistic momentum) takes this form:

$$E = \frac{1}{\sqrt{1 - v^2/c^2}}\,m_0 c^2$$
Now if you apply this form to a photon (I know this is controversial, in fact I would not do it, but I just want to understand the consequences), you get the following:
$$E = \frac{1}{0}\cdot 0\cdot c^2$$
On another note, I understand that after dividing by zero:




  • If the numerator is any number other than zero, you get an "undefined" = no solution, because you are breaching mathematical rules.

  • If the numerator is zero, you get an "indeterminate" number = any value.




Here it seems we would have an "indeterminate" (if $(1/0)\cdot 0$ equals $0/0$), although I would prefer to have an "undefined", because I think that applying this form to a photon breaches physical/logical rules, so I would like the outcome to breach mathematical rules as well... To support this I have read that if a subexpression is undefined (which would be the case here with $\gamma = 1/0$), the whole expression becomes undefined. Is this right, and if so, does it apply here?



So what is the answer in strict mathematical terms: undefined or indeterminate?

Sunday, February 25, 2018

real analysis - Can a function from $(0,1)$ to $(0,1]$ be one-to-one and onto?


Does there exist a function from $(0,1)$ to $(0,1]$ both one-to-one and onto, not necessarily continuous?





I couldn't think of any. Any help would be appreciated!



Thanks,

Saturday, February 24, 2018

number theory - division of powers: $(n^r -1)$ divides $(n^m -1)$ if and only if $r$ divides $m$

Let $n > 1$ and $m$ and $r$ be positive integers. Prove that $(n^r −1)$ divides $(n^m −1)$ if and only if $r$ divides $m$.

calculus - Prove $\frac{x-1}{x}\lt \ln(x)\lt x-1$ with the Mean Value Theorem

I just started learning the Mean Value Theorem but have no idea how to apply it to prove this inequality.



$$\dfrac{x-1}{x}\lt \ln(x)\lt x-1$$




Can someone help?

Friday, February 23, 2018

calculus - Without using L'Hospital's rule or series expansion, find $\lim_{x\to0} \frac{x-x\cos x}{x-\sin x}$.




Is it possible to find $\displaystyle{\lim_{x\to 0} \frac{x-x\cos x}{x-\sin x}}$ without using L'Hopital's Rule or Series expansion.






I can't find it. If it is duplicated, sorry :)


Answer



$$\dfrac{x(1-\cos x)}{x-\sin x}=\dfrac{x^3}{x-\sin x}\cdot\dfrac1{1+\cos x}\left(\dfrac{\sin x}x\right)^2$$



For $\lim_{x\to0}\dfrac{x^3}{x-\sin x}$ use Are all limits solvable without L'Hôpital Rule or Series Expansion
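Since $\lim_{x\to0}\frac{x^3}{x-\sin x}=6$, the factored product gives $6\cdot\frac12\cdot 1=3$; a numeric check (illustration only) agrees:

    import math

    for x in [0.5, 0.1, 0.01]:
        print(x, (x - x * math.cos(x)) / (x - math.sin(x)))
    # the values approach 3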



general topology - True or false: sets, subsets, and topologies in $\mathbb R$

I am pondering the following statements about sets, subsets and topologies in $\mathbb R$.


  1. The empty set is a closed subset of $\mathbb R$ regardless of the topology on $\mathbb R$.

  2. Any open interval is an open subset of $\mathbb R$ regardless of the topology on $\mathbb R$.

  3. Any closed interval is a closed subset of $\mathbb R$ regardless of the topology on $\mathbb R$.

  4. A half-open interval of the form $[a, b)$ is neither an open set nor a closed set regardless of the topology on $\mathbb R$.

I think the last statement is false, but I am unsure about the first three. I am in an introduction-to-proofs class and we are touching on topology. I know these are important distinctions to make because my professor keeps commenting how there is still a lot of confusion about these statements.

Thursday, February 22, 2018

algebra precalculus - Why does $2+2=5$ in this example?


I stumbled across the following computation proving $2+2=5$


[image of a computation "proving" $2+2=5$ omitted]


Clearly it doesn't, but where is the mistake? I expect that it's a simple one, but I'm even simpler and don't really understand the application of the binomial form to this...


Answer



The error is in the step where the derivation goes from $$\left(4-\frac{9}{2}\right)^2 = \left(5-\frac{9}{2}\right)^2$$ to $$\left(4-\frac{9}{2}\right) = \left(5-\frac{9}{2}\right)$$



In general, if $a^2=b^2$ it is not necessarily true that $a=b$; all you can conclude is that either $a=b$ or $a=-b$. In this case, the latter is true, because $\left(4-\frac{9}{2}\right) = -\frac{1}{2}$ and $\left(5-\frac{9}{2}\right) = \frac{1}{2}$. Once you have written down the (false) equation $-\frac{1}{2} = \frac{1}{2}$ it is easy to derive any false conclusion you want.
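A two-line numeric restatement of where things break (trivial illustration):

    print((4 - 9/2) ** 2 == (5 - 9/2) ** 2)   # True: the squares agree
    print(4 - 9/2, 5 - 9/2)                   # -0.5 vs 0.5: the numbers do not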


calculus - Why is $1^{\infty}$ considered to be an indeterminate form



From Wikipedia: In calculus and other branches of mathematical analysis, an indeterminate form is an algebraic expression obtained in the context of limits. Limits involving algebraic operations are often performed by replacing subexpressions by their limits; if the expression obtained after this substitution does not give enough information to determine the original limit, it is known as an indeterminate form.




  • The indeterminate forms include $0^{0},\frac{0}{0},(\infty - \infty),1^{\infty}, \ \text{etc}\cdots$




My question is: can anyone give me a nice explanation of why $1^{\infty}$ is considered to be an indeterminate form? I don't see any justification of this fact, and I am still perplexed.


Answer



Forms are indeterminate because, depending on the specific expressions involved, they can evaluate to different quantities. For example, all of the following limits are of the form $1^{\infty}$, yet they all evaluate to different numbers.



$$\lim_{n \to \infty} \left(1 + \frac{1}{n^2}\right)^n = 1$$



$$\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e$$



$$\lim_{n \to \infty} \left(1 + \frac{1}{\ln n}\right)^n = \infty$$



To expand on this some (and this thought process can be applied to other indeterminate forms, too), one way to think about it is that there's a race going on between the expression that's trying to go to 1 and the expression that's trying to go to $\infty$. If the expression that's going to 1 is in some sense faster, then the limit will evaluate to 1. If the expression that's going to $\infty$ is in some sense faster, then the limit will evaluate to $\infty$. If the two expressions are headed toward their respective values at essentially the same rate, then the two effects sort of cancel each other out and you get something strictly between 1 and $\infty$.



There are some other cases, too, like
$$\lim_{n \to \infty} \left(1 - \frac{1}{\ln n}\right)^n = 0,$$
but this still has the expression going to $\infty$ "winning." Since $1 - \frac{1}{\ln n}$ is less than 1 (once $n > 1$), the exponentiation forces the limit to 0 rather than $\infty$.
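Here is a numeric sketch of all four limits; the logarithm of each expression is printed to avoid floating-point overflow (the value of $n$ is an arbitrary large choice):

    import math

    n = 10**6
    for base in (1 + 1/n**2, 1 + 1/n, 1 + 1/math.log(n), 1 - 1/math.log(n)):
        print(n * math.log(base))   # this is log(base**n)
    # ~1e-6 (power -> 1), ~1 (-> e), ~ +7e4 (-> infinity), ~ -7.5e4 (-> 0)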



real analysis - Properties of $>$ for rational numbers

This question comes from an introductory undergraduate course in Analysis. We have just started from defining the set of rational numbers and then we will construct the set of real numbers. We define the order relation "$> $" on the set of rationals as follows: $$(\frac{a}{b})>(\frac{c}{d})\Leftrightarrow (ad-bc)\in \mathbb{N}$$. I am trying to prove that the set of rationals with this order relation is an ordered set. I want to prove the transitive property, by doing the following: $$(\frac{a}{b})>(\frac{c}{d})\Leftrightarrow (ad-bc)\in \mathbb{N}$$



$$(\frac{c}{d})>(\frac{e}{f})\Leftrightarrow (cf-de)\in \mathbb{N}$$. I need to show that:
$$(\frac{a}{b})>(\frac{e}{f}), i.e\ (af-be)\in \mathbb{N}$$
I tried to prove the last statement, by saying that $(ad-bc)(cf-de)=(acdf-adde-bccf+bcde)\in \mathbb{N}$. Then, I didn't really what do next. Any help is highly appreciated?

trigonometry - How Can One Prove $\cos(\pi/7) + \cos(3\pi/7) + \cos(5\pi/7) = 1/2$




Reference: http://xkcd.com/1047/



We tried various different trigonometric identities. Still no luck.




Geometric interpretation would be also welcome.



EDIT: Very good answers, I'm clearly impressed. I followed all the answers and they work! I can only accept one answer, the others got my upvote.


Answer



Hint: start with $e^{i\frac{\pi}{7}} = \cos(\pi/7) + i\sin(\pi/7)$ and the fact that the lhs is a 7th root of -1.



Let $u = e^{i\frac{\pi}{7}}$, then we want to find $\Re(u + u^3 + u^5)$.



Then we have $u^7 = -1$ so $u^6 - u^5 + u^4 - u^3 + u^2 -u + 1 = 0$.




Re-arranging this we get: $u^6 + u^4 + u^2 + 1 = u^5 + u^3 + u$.



If $a = u + u^3 + u^5$ then this becomes $u a + 1 = a$, and rearranging this gives $a(1 - u) = 1$, or $a = \dfrac{1}{1 - u}$.



So all we have to do is find $\Re\left(\dfrac{1}{1 - u}\right)$.



$\dfrac{1}{1 - u} = \dfrac{1}{1 - \cos(\pi/7) - i \sin(\pi/7)} = \dfrac{1 - \cos(\pi/7) + i \sin(\pi/7)}{2 - 2 \cos(\pi/7)}$



so




$\Re\left(\dfrac{1}{1 - u}\right) = \dfrac{1 - \cos(\pi/7)}{2 - 2\cos(\pi/7)} = \dfrac{1}{2} $
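And a one-line numerical confirmation (illustration only):

    import math

    print(sum(math.cos(k * math.pi / 7) for k in (1, 3, 5)))   # 0.5000000...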


calculus - Why isn't it mathematically rigorous to treat dx's and dy's as variables?





If I do something like:



$$\frac{dy}{dx} = D$$



$$dy = D \times dx$$




People would often say that it is not rigorous to do so. But if we start from the definition of the derivative:



$$\lim_{h \to 0}{\frac{f(x + h) - f(x)}{h}} = D$$



And by using the properties of limits we can say:



$$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$



And then finally:




$$\lim_{h \to 0}(f(x + h) - f(x)) = D \times (\lim_{h \to 0} h)$$



Isn't this the same? Or am I missing something?


Answer



I can spot the following mistake in your attempt:





  1. And by using the properties of limits we can say:




    $$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$





You cannot actually do that, as smcc has said. You must note that $$\lim_{x\to 0} \frac{f(x)}{g(x)}=\frac{\lim\limits_{x\to 0} f(x)}{\lim\limits_{x\to 0} g(x)}\qquad\text{only if}\ \lim_{x\to 0} g(x) \neq 0.$$
So what you have said is not right.
So what you have said is not right.






Now coming to the actual question of whether $dx$ and $dy$ can be treated as variables: most mathematicians treat $\frac{d}{dx}$ as a mathematical operator (like $+,-,\times,\div$) which acts on the variable $y$. That way, you can clearly understand what is a variable and what is not.



However, if you are strict enough to observe from the "limit" viewpoint, then observe that $\frac{dy}{dx}$ is nothing but $\lim_\limits{\Delta x\to 0}\frac{\Delta y}{\Delta x}$. Now $\frac{\Delta y}{\Delta x}$ is a fraction with $\Delta y$ and $\Delta x$ in the numerator and denominator. So you can view them as variables now.



Looks a bit weird, I know, but it entirely depends on how you want to support your argument and from which point of view you want to make your claim.


linear algebra - Prove that elementary matrices perform row operations

How can one prove that elementary matrices actually perform their intended row operations: multiplying a row by a constant, adding a multiple of one row to another, and switching two rows?




I've seen examples of their use, but I haven't seen a proof for an $n$ by $n$ matrix.
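Not a proof, but a small numpy experiment (a sketch; the matrices are illustrative) showing the three kinds of elementary matrices in action:

    import numpy as np

    A = np.arange(9.0).reshape(3, 3)
    E_scale = np.diag([1.0, 4.0, 1.0])      # multiplies row 1 by 4
    E_add = np.eye(3); E_add[2, 0] = 2.0    # adds 2 * (row 0) to row 2
    E_swap = np.eye(3)[[1, 0, 2]]           # swaps rows 0 and 1
    for E in (E_scale, E_add, E_swap):
        print(E @ A)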

calculus - Let $f:\mathbb{R} \to \mathbb{R}$ be continuous satisfying: $f(x+y) = f(x) + f(y)$. Show $f$ is linear




Let $f:\mathbb{R} \to \mathbb{R}$ be continuous satisfying: $f(x+y) = f(x) + f(y)$. Show $f$ is linear.




What I thought would be simple and still probably is, I'm having trouble with. So to show this function is linear, I have to show it satisfies two properties:



1) for any $x, y \in \mathbb{R}$, $f(x+y) = f(x) + f(y)$



This one is satisfied, just by definition of the function.



2) for any $x \in \mathbb{R}$ and scalar $c \in \mathbb{R}$, $f(cx) = cf(x)$



I'm having trouble proving this. Since there is no explicit function which would make this easier I'm left struggling how to apply continuity. I thought perhaps if I said:




"suppose $f$ is continuous at the point $a$. Then by definition:



$\forall \epsilon > 0$ and $\forall a >0$ there exists $\delta > 0$ such that if $|x-a| < \delta \rightarrow |f(x) - f(a)|< \epsilon$.



So I thought if I adapt this definition and instead say $\forall \epsilon > 0$ and $\forall a >0$ there exists $\delta > 0$ such that if $|cx-ca| = |c||x-a| < \delta \rightarrow |c||f(x) - f(a)|< \epsilon$.



I'm not sure that really says anything at all, but it is what i came up with.



Advice on how to proceed?


Answer




First try to prove that $f(nx) = nf(x)$ for integers $n$, then do the same for rational numbers by similar methods, then you can conclude the result by continuity.
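Slightly expanded (a sketch of those standard steps, under the same assumptions):

$$f(nx)=\underbrace{f(x)+\cdots+f(x)}_{n}=nf(x),\qquad f(x)=f\!\left(m\cdot\tfrac{x}{m}\right)=m\,f\!\left(\tfrac{x}{m}\right),$$

so $f\!\left(\tfrac{n}{m}x\right)=\tfrac{n}{m}f(x)$ for all rationals $\tfrac{n}{m}$. For real $c$, pick rationals $q_k\to c$; continuity then gives $f(cx)=\lim_k f(q_k x)=\lim_k q_k f(x)=cf(x)$.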


limits without lhopital - $\lim_{ x \to0^- }\frac{2^{\frac{1}{x}}+2^{\frac{-1}{x}}}{3^{\frac{1}{x}}+3^{\frac{-1}{x}}}=?$

Find the limit without using L'Hopital's rule:



$$\lim_{ x \to0^- }\frac{2^{\frac{1}{x}}+2^{\frac{-1}{x}}}{3^{\frac{1}{x}}+3^{\frac{-1}{x}}}=?$$



My Try :




Let $h= \frac{1}{x}$; then $h\to - \infty$ as $x\to 0^-$,



so :



$$\lim_{ h\to - \infty }\frac{2^{h}+2^{-h}}{3^{h}+3^{-h}}=\lim_{ h\to - \infty}\frac{2^{-h}\left(2^{2h}+1\right)}{3^{-h}\left(3^{2h}+1\right)}=?$$



now :?
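A numeric look (a sketch, not an answer) suggests what the value should be:

    for x in [-0.5, -0.1, -0.05]:
        num = 2 ** (1 / x) + 2 ** (-1 / x)
        den = 3 ** (1 / x) + 3 ** (-1 / x)
        print(x, num / den)
    # as x -> 0^-, the ratio behaves like (2/3)^(-1/x) and tends to 0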

Wednesday, February 21, 2018

integration - Evaluation of the integral $\int{\frac{x+\sin{x}}{1+\cos{x}}\,\mathrm{d}x}$ by parts

I have to evaluate the following integral by parts: $$\int {\dfrac{x+\sin{x}}{1+\cos{x}}}\mathrm{d}x $$



So I tried to put:



$ u = x + \sin{x}$ $~\qquad\rightarrow \quad$ $\mathrm{d}u=\left(1+\cos{x}\right) \mathrm{d}x$




$\mathrm{d}v = \dfrac{\mathrm{d}x}{1+\cos(x)}$ $\quad \rightarrow \quad$ $v = \int{\dfrac{\mathrm{d}x}{1+\cos{x}}}$



But there is an extra integral to do (the $v$ function). I evaluated it by substitution, and I get $v = \tan{\dfrac{x}{2}}$. Now



$$\int {\dfrac{x+\sin x}{1+\cos x}}\,\mathrm{d}x = (x+\sin{x})\, \tan{\dfrac{x}{2}}-\int \sin x\,\mathrm{d}x = (x+\sin{x})\, \tan{\dfrac{x}{2}}+\cos x +C,$$ which simplifies to $x\tan\dfrac{x}{2}+C$ (note that $\tan\frac{x}{2}\,(1+\cos x)=\sin x$).
My question: is it possible to evaluate this integral entirely by parts (without using any substitution) ?



I appreciate any ideas
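A numeric spot check of the antiderivative via a central difference quotient (illustration only; the test point is arbitrary):

    import math

    F = lambda x: x * math.tan(x / 2)                    # simplified antiderivative
    f = lambda x: (x + math.sin(x)) / (1 + math.cos(x))  # the integrand
    x, h = 0.7, 1e-6
    print((F(x + h) - F(x - h)) / (2 * h), f(x))         # the two values agree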

Tuesday, February 20, 2018

linear algebra - Find the eigenvalues and eigenvectors of the matrix with zeroes on the diagonal and ones everywhere else.

I have been working on this problem for a couple hours and am completely stuck. Any help would be greatly appreciated.


Let $A$ be the $n \times n$ matrix which has zeros on the main diagonal and ones everywhere else. Find the eigenvalues and eigenvectors of $A$.
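Not an answer, but a numeric experiment (a sketch) that reveals the pattern to aim for:

    import numpy as np

    n = 4
    A = np.ones((n, n)) - np.eye(n)
    print(np.linalg.eigvalsh(A))   # [-1. -1. -1.  3.]: -1 repeated n-1 times, and n-1 once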

sequences and series - Intuition behind $\zeta(-1) = \frac{-1}{12}$

When I first watched Numberphile's $1+2+3+\ldots = \frac{-1}{12}$ video I thought the sum actually equalled $\frac{-1}{12}$ without really understanding it.


Recently I read some Wolfram Alpha pages and watched some videos, and now I understand (I think) that $\frac{-1}{12}$ is just a value associated to the sum of all natural numbers when you analytically continue the Riemann zeta function. 3Blue1Brown's video really helped. What I don't really understand is why it gives the value $\frac{-1}{12}$ specifically. The value $\frac{-1}{12}$ seems arbitrary to me and I don't see any connection to the sum of all natural numbers. Is there any intuition behind why you get $\frac{-1}{12}$ when you analytically continue the zeta function to $\zeta(-1)$?


EDIT (just to make my question a little clearer): I'll use an example here. Suppose you somehow didn't know about radians and never associated trig functions like sine with $\pi$, but you knew about Maclaurin expansions. By plugging $x=\pi$ into the series expansion of sine, you would get $\sin(\pi) = 0$. You might have understood the process by which you get the value $0$, the Maclaurin expansion, but you wouldn't really know the intuition behind this connection between $\pi$ and trig functions, namely the unit circle, which is essential in almost every branch of number theory.


Back to this question, I understand the analytic continuation of the zeta function and its continued form for $s < 0$ $$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$ and how when you plug in s = -1, things simplify down to $\frac{-1}{12}$ but I don't see any connection between the fraction and the infinite sum. I'm sure there is a beautiful connection between them, like the one between trig functions and $\pi$, but couldn't find any useful resources on the internet. Hope this clarified things.
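For reference, plugging $s=-1$ into the displayed functional equation:

$$\zeta(-1)=2^{-1}\pi^{-2}\sin\!\left(-\frac{\pi}{2}\right)\Gamma(2)\,\zeta(2)=\frac{1}{2\pi^2}\cdot(-1)\cdot 1\cdot\frac{\pi^2}{6}=-\frac{1}{12}.$$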

Monday, February 19, 2018

calculus - Prove that $sum frac{1}{n^2} = frac{pi^2}{6}$

In this answer two sequences are mentioned.
In particular, I would like to prove that



$$\sum_{n = 1}^{+ \infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$



If I knew that the sequence converges to $\frac{\pi^2}{6}$, I could use the $\epsilon$-$M$ criterion to prove the convergence to that value.



But how to prove that the above sequence converges to that value if I don't know the value itself? Is there a general way to proceed in such cases?

sequences and series - Yet another nested radical

Consider $$F(x) = \sqrt{x -\sqrt{2x - \sqrt{3x - \cdots}}}$$



I believe I can prove (with some handwaving) that





  • $F$ does converge everywhere in $\mathbb{C}$

  • $\Im F = 0$ for sufficiently large real $x$ (actually for $x$ larger than $x_0 \approx 0.5243601\dots$ Does this number ring a bell?)

  • Coincidentally, $F(x_0) = 0$



Weird things happen in the limit to $0$. Obviously, $F(0) = 0$. However, it seems that $$\lim_{x \to +0}F(x) = \overline{\zeta} $$ $$\lim_{x \to -0}F(x) = \zeta $$
where $\zeta = \frac{1 + i\sqrt{3}}{2}$ is a usual cubic root of $-1$. Moreover, $F$ seems to reach one of those as $x$ approaches $0$ at a rational angle. I understand that this may well be a computational artifact (still making no sense to me), but proving or refuting these limits is definitely out of my league.



Any help?
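For anyone who wants to experiment, here is the truncation scheme I would use to evaluate $F$ numerically (a sketch; the depth is an arbitrary cutoff, so treat the output with care):

    import cmath

    def F(x, depth=60):
        t = cmath.sqrt(depth * x)          # innermost approximation
        for k in range(depth - 1, 0, -1):
            t = cmath.sqrt(k * x - t)      # t_k = sqrt(k x - t_{k+1})
        return t

    print(F(2.0))                # essentially real for larger real x
    print(F(0.001), F(-0.001))   # compare against (1 -+ i sqrt(3))/2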

real analysis - Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true?



Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true?



It seems to me like they are equal definitions in a way.



Can you give me a counter-example?




Thanks


Answer



I.



Some of the answers reveal a confusion, so let me start with the definition. If $I$ is an interval, and $f:I\to\mathbb R$, we say that $f$ has the intermediate value property iff whenever $a<b$ are points of $I$ and $c$ is between $f(a)$ and $f(b)$, there is a $d$ between $a$ and $b$ with $f(d)=c$.



If $I=[\alpha,\beta]$, this is significantly stronger than asking that $f$ take all values between $f(\alpha)$ and $f(\beta)$:




  • For example, this implies that if $J\subseteq I$ is an interval, then $f(J)$ is also an interval (perhaps unbounded).


  • It also implies that $f$ cannot have jump discontinuities: For instance, if $\lim_{x\to t^-}f(x)$ exists and is strictly smaller than $f(t)$, then for $x$ sufficiently close to $t$ and smaller than $t$, and for $u$ sufficiently close to $f(t)$ and smaller than $f(t)$, $f$ does not take the value $u$ in $(x,t)$, in spite of the fact that $f(x)<u<f(t)$.
  • In particular, if $f$ is discontinuous at a point $a$, then there are $y$ such that the equation $f(x)=y$ has infinitely many solutions near $a$.



II.



There is a nice survey containing detailed proofs of several examples of functions that both are discontinuous and have the intermediate value property: I. Halperin, Discontinuous functions with the Darboux property, Can. Math. Bull., 2 (2), (May 1959), 111-118. It contains the amusing quote




Until the work of Darboux in 1875 some mathematicians believed that [the intermediate value] property actually implied continuity of $f(x)$.





This claim is repeated in other places. For example, here one reads




In the 19th century some mathematicians believed that [the intermediate value] property is equivalent to continuity.




This is very similar to what we find in A. Bruckner, Differentiation of real functions, AMS, 1994. In page 5 we read





This property was believed, by some 19th century mathematicians, to be equivalent to the property of continuity.




Though I have been unable to find a source expressing this belief, that this was indeed the case is supported by the following two quotes (translated below from the French) from Gaston Darboux's Mémoire sur les fonctions discontinues, Ann. Sci. École Norm. Sup., 4, (1875), 161–248. First, on pp. 58-59 we read:




At the risk of being too long, I have striven above all, perhaps without success, to be rigorous. Many points that one would rightly regard as evident, or that one would grant in applications of the science to the usual functions, must be subjected to a rigorous critique in the exposition of propositions concerning the most general functions. For example, one will see that there exist continuous functions that are neither increasing nor decreasing in any interval, and that there are discontinuous functions that cannot vary from one value to another without passing through all the intermediate values.





The proof that derivatives have the intermediate value property comes later, starting on page 109, where we read:




Starting from the preceding remark, we will show that there exist discontinuous functions that enjoy a property sometimes regarded as the distinctive character of continuous functions, that of being unable to vary from one value to another without passing through all the intermediate values.




III.



Additional natural assumptions on a function with the intermediate value property imply continuity. For example, injectivity or monotonicity.




Derivatives have the intermediate value property (see here), but there are discontinuous derivatives: Let $$f(x)=\left\{\begin{array}{cl}x^2\sin(1/x)&\mbox{ if }x\ne0,\\0&\mbox{ if }x=0.\end{array}\right.$$ (The example goes back to Darboux himself.) This function is differentiable, its derivative at $0$ is $0$, and $f'(x)=2x\sin(1/x)-\cos(1/x)$ if $x\ne0$, so $f'$ is discontinuous at $0$.
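A quick numeric look (illustration) at how this derivative oscillates near $0$:

    import math

    fp = lambda x: 2 * x * math.sin(1 / x) - math.cos(1 / x)
    for x in [1e-2, 1e-3, 1e-4, 1e-5]:
        print(x, fp(x))
    # the values keep swinging through [-1, 1]: f' has no limit at 0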



This example allows us to find functions with the intermediate value property that are not derivatives: Consider first $$g(x)=\left\{\begin{array}{cl}\cos(1/x)&\mbox{ if }x\ne0,\\ 0&\mbox{ if }x=0.\end{array}\right.$$ This function (clearly) has the intermediate value property and indeed it is a derivative, because, with the $f$ from the previous paragraph, if $$h(x)=\left\{\begin{array}{cl}2x\sin(1/x)&\mbox{ if }x\ne 0,\\ 0&\mbox{ if }x=0,\end{array}\right.$$ then $h$ is continuous, and $g(x)=h(x)-f'(x)$ for all $x$. But continuous functions are derivatives, so $g$ is also a derivative. Now take $$j(x)=\left\{\begin{array}{cl}\cos(1/x)&\mbox{ if }x\ne0,\\ 1&\mbox{ if }x=0.\end{array}\right.$$ This function still has the intermediate value property, but $j$ is not a derivative. Otherwise, $j-g$ would also be a derivative, but $j-g$ does not have the intermediate value property (it has a jump discontinuity at $0$). For an extension of this theme, see here.



In fact, a function with the intermediate value property can be extremely chaotic. Katznelson and Stromberg (Everywhere differentiable, nowhere monotone, functions, The American Mathematical Monthly, 81, (1974), 349-353) give an example of a differentiable function $f:\mathbb R\to\mathbb R$ whose derivative satisfies that each of the three sets $\{x\mid f'(x)>0\}$, $\{x\mid f'(x)=0\}$, and $\{x\mid f'(x)<0\}$ is dense (they can even ensure that $\{x\mid f'(x)=0\}\supseteq\mathbb Q$); this implies that $f'$ is highly discontinuous. Even though their function satisfies $|f'(x)|\le 1$ for all $x$, $f'$ is not (Riemann) integrable over any interval.



On the other hand, derivatives must be continuous somewhere (in fact, on a dense set), see this answer.



Conway's base 13 function is even more dramatic: It has the property that $f(I)=\mathbb R$ for all intervals $I$. This implies that this function is discontinuous everywhere. Other examples are discussed in this answer.




Halperin's paper mentioned above includes examples with even stronger discontinuity properties. For instance, there is a function $f:\mathbb R\to\mathbb R$ that not only maps each interval onto $\mathbb R$ but, in fact, takes each value $|\mathbb R|$-many times on each uncountable closed set. To build this example, one needs a bit of set theory: Use transfinite recursion, starting with enumerations $(r_\alpha\mid\alpha<\mathfrak c)$ of $\mathbb R$ and $(P_\alpha\mid\alpha<\mathfrak c)$ of its perfect subsets, ensuring that each perfect set is listed $\mathfrak c$ many times. Now recursively select at stage $\alpha<\mathfrak c$, the first real according to the enumeration that belongs to $P_\alpha$ and has not been selected yet. After doing this, continuum many reals have been chosen from each perfect set $P$. List them in a double array, as $(s_{P,\alpha,\beta}\mid\alpha,\beta<\mathfrak c)$, and set $f(s_{P,\alpha,\beta})=r_\alpha$ (letting $f(x)$ be arbitrary for those $x$ not of the form $s_{P,\alpha,\beta}$).



To search for references: The intermediate value property is sometimes called the Darboux property or, even, one says that a function with this property is Darboux continuous.



An excellent book discussing these matters is A.C.M. van Rooij, and W.H. Schikhof, A second course on real functions, Cambridge University Press, 1982.


algebra precalculus - How to determine linear function from "at a price (p) of 220 the demand is 180"



In my math book I can look up the answers to the exercises, but I have no idea how I would solve the following example. Probably my mind is stuck, as I can't find a new way to think about the issue.




"The supply function of a good is a linear function. At a price (p) of 220 the demand is 180 units. At a price of 160 the demand is 240 units."




  1. Determine the demand function.



"Also the supply function is linear. At a price of 150 the supply is 100 units and at a price of 250 the supply is 300 units".





  2. Determine the supply function.



Could someone explain to me how I would approach to solve these two questions as the book doesn't provide the explanation but only the answers? Thank you.


Answer



You know that the demand function is a linear function of price $p$, say $D(p)=\alpha\cdot p+\beta$ for suitable parameters $\alpha,\beta\in\mathbb R$. From the conditions given in your problem, you know that



$$
D(220)=\boldsymbol{220\alpha+\beta=180}\qquad\text{and}\qquad
D(160)=\boldsymbol{160\alpha+\beta=240}.

$$



From the bold equations (a system of two linear equations in two variables), one simply obtains the coefficients $\alpha=-1$, $\beta=400$, which enable you to write down the demand function as $D(p)=400-p$.



In fact, the same can be done for the supply function.
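The same computation in code (a sketch; numpy solves the two $2\times 2$ systems):

    import numpy as np

    demand = np.linalg.solve([[220.0, 1.0], [160.0, 1.0]], [180.0, 240.0])
    supply = np.linalg.solve([[150.0, 1.0], [250.0, 1.0]], [100.0, 300.0])
    print(demand)   # [-1. 400.]  ->  D(p) = 400 - p
    print(supply)   # [ 2. -200.] ->  S(p) = 2p - 200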


number theory - Fermat's Last Theorem near misses?

I've recently seen a video of Numberphille channel on Youtube about Fermat's Last Theorem. It talks about how there is a given "solution" for the Fermat's Last Theorem for $n>2$ in the animated series The Simpsons.



Thanks to Andrew Wiles, we all know that's impossible. The host tells that the solution that appears in the episode is actually a near miss.



There are two near-miss solutions and they are:



$$3987^{12} + 4365^{12} = 4472^{12}$$




$$1782^{12} + 1841^{12} = 1922^{12}$$



Anyone who knows modular arithmetic can check that those solutions are wrong, even without a calculator. You can check that $87^{12} \equiv 81 \pmod{100}$ and $65^{12} \equiv 25 \pmod {100}$, while $72^{12} \equiv 16 \pmod {100}$. So we have:



$$81 + 25 \equiv 16 \pmod {100}$$
$$106 \equiv 16 \pmod {100} \text{, which is obviously wrong}$$



For the second example it's even easier. The LHS is the sum of an even and an odd number, while the RHS is an even number, which is impossible, because the sum of an even and an odd number is odd.



But what made me most interested in this is the following. Using a calculator I expanded the equations and I got:




$$3987^{12} + 4365^{12} = 4472^{12}$$



$$63976656349698612616236230953154487896987106 \neq 63976656348486725806862358322168575784124416$$



with difference $$1211886809373872630985912112862690 \neq 0$$



And you'll immediately conclude that those numbers aren't even close; their difference is a $34$-digit number. But bearing in mind that we are working with really, really big numbers, it's probably better to take the relative difference. So we really want to find the ratio of the LHS and RHS:



$$\frac{63976656349698612616236230953154487896987106}{63976656348486725806862358322168575784124416} \approx 1.00000000002$$




That is already impressive, but if we take a look at the second example, things get more interesting:



$$1782^{12} + 1841^{12} = 1922^{12}$$
$$2541210258614589176288669958142428526657 \neq 2541210259314801410819278649643651567616$$



As we can see, the first 9 digits are exactly the same, and the absolute difference is $700212234530608691501223040959$. But if we take the relative difference, or ratio, we'll get:



$$\frac{2541210258614589176288669958142428526657}{2541210259314801410819278649643651567616} \approx 0.9999999997244572$$




And this is pretty amazing, because if we made the comparison using smaller numbers, this would be like comparing $10\,000\,000\,000$ and $10\,000\,000\,001$.
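Python's exact integers make such checks painless (a sketch reproducing the ratios above):

    from fractions import Fraction

    for x, y, z in [(3987, 4365, 4472), (1782, 1841, 1922)]:
        lhs, rhs = x**12 + y**12, z**12
        print(lhs == rhs, float(Fraction(lhs, rhs)))
    # False 1.0000000000189...   False 0.9999999997244...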



Probably there are many more, perhaps infinitely many, such "close" examples, but are there any known ones? Is there any list of them?



And as user17762 commented it would be nice to find a bound $\phi(n) = \inf\{\left| x^n + y^n - z^n\right| : x,y,z \in \mathbb{Z}\}$, although I would be more interested in finding the ratio bound, such that the ratio is closer to $1$



Also, as user17762 pointed out, taxicab numbers can be used to provide really close examples for $n=3$; but what about other values of $n$?

Sunday, February 18, 2018

real analysis - Bijection from $\mathbb R$ to $\mathbb{R^N}$



How does one create an explicit bijection from the reals to the set of all sequences of reals? I know how to make a bijection from $\mathbb R$ to $\mathbb {R \times R}$.



I have an idea but I am not sure if it will work. I will post it as my own answer because I don't want to anchor your answers and I want to see what other possible ways of doing this are.


Answer




The nicest trick in the book is to find a bijection between $\mathbb R$ and $\mathbb{N^N}$, in this case we are practically done. Why?



$$\mathbb{(N^N)^N\sim N^{N\times N}\sim N^N}$$



And the bijections above are easy to calculate (I will leave those to you, the first bijection is a simple Currying, and for the second you can use Cantor's pairing function).



So if we can find a nice bijection between the real numbers and the infinite sequences of natural numbers, we are about done. Now, we know that $\mathbb{N^N}$ can be identified with the real numbers; in fact, continued fractions form a bijection between the irrationals and $\mathbb{N^N}$.



We first need to handle the rational numbers, but that much is not very difficult. Take an enumeration of the rationals (e.g. Calkin-Wilf tree) in $(0,1)$, suppose $q_i$ is the $i$-th rational in the enumeration; now we take a sequence of irrationals, e.g. $r_n = \frac1{\sqrt{n^2+1}}$, and we define the following function:




$$h(x)=\begin{cases} r_{2n} & \exists n: x=r_n\\ r_{2n+1} & \exists n: x=q_n \\ x &\text{otherwise}\end{cases}$$



Now we can finally describe a list of bijections which, when composed, give us a bijection between $\mathbb R$ and $\mathbb{R^N}$.




  1. $\mathbb{R^N\to (0,1)^N}$ by any bijection of this sort.

  2. $\mathbb{(0,1)^N\to \left((0,1)\setminus Q\right)^N}$ by the encoding given by $h$.

  3. $\mathbb{\left((0,1)\setminus Q\right)^N\to \left(N^N\right)^N}$ by continued fractions.

  4. $\mathbb{\left(N^N\right)^N\to N^{N\times N}}$ by Currying.

  5. $\mathbb{N^{N\times N}\to N^N}$ by a pairing function.


  6. $\mathbb{N^N\to (0,1)\setminus Q}$ by decoding the continued fractions.

  7. $\mathbb{(0,1)\setminus Q\to (0,1)}$ by the decoding of $h$, i.e. $h^{-1}$.

  8. $\mathbb{(0,1)\to R}$ by any bijection of this sort, e.g. the inverse of the bijection used for the first step.
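For concreteness, here is a minimal implementation of the pairing step used in (5) (a sketch of Cantor's pairing function and its inverse):

    def pair(m, n):
        return (m + n) * (m + n + 1) // 2 + n

    def unpair(p):
        w = int(((8 * p + 1) ** 0.5 - 1) // 2)   # index of the diagonal
        n = p - w * (w + 1) // 2
        return w - n, n

    print(pair(3, 5), unpair(pair(3, 5)))   # 41 (3, 5)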


sequences and series - Why $\sum_{k=0}^{\infty} q^k$ sums to $\frac{1}{1-q}$ when $|q| < 1$




Why is the infinite sum $\sum_{k=0}^{\infty} q^k = \frac{1}{1-q}$ when $|q| < 1$?



I don't understand how the $\frac{1}{1-q}$ got calculated. I am not a math expert so I am looking for an easy to understand explanation.


Answer



By definition you have
$$
\sum_{k=0}^{+\infty}q^k=\lim_{n\to+\infty}\underbrace{\sum_{k=0}^{n}q^k}_{=:S_n}
$$

Notice now that $(1-q)S_n=(1-q)(1+q+q^2+\dots+q^n)=1-q^{n+1}$; so dividing both sides by $1-q$ (in order to do this, you must be careful only to have $1-q\neq0$, i.e. $q\neq1$) we immediately get
$$
S_n=\frac{1-q^{n+1}}{1-q}.
$$
If you now pass to the limit in the above expression, when $|q|<1$, it's clear that
$$
S_n\stackrel{n\to+\infty}{\longrightarrow}\frac1{1-q}\;\;,
$$
as requested. To get this last result, you should be confident with limits, and know that $\lim_{n\to+\infty}q^n=0$ when $|q|<1$.
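A two-line numeric check (illustration; the value of $q$ is arbitrary):

    q, n = 0.9, 200
    print(sum(q**k for k in range(n + 1)), (1 - q**(n + 1)) / (1 - q), 1 / (1 - q))
    # the partial sum matches the closed form and is already close to 10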


matrices - Linear Algebra - Prove $AB=BA$



Let $A$ and $B$ be any $n \times n$ defined over the real numbers.



Assume that $A^2+AB+2I=0$.




  • Prove $AB=BA$




My solution (Not full)



I didn't managed to get so far.



$A(A+B)=-2I$



$-\frac{1}{2}A(A+B)=I$



Therefore $A$ is invertible and $A+B$ is invertible.




I don't know how to get on from this point. What else can I conclude from $A^2+AB+2I=0$?



Any ideas? Thanks.


Answer



From
$$
A(A+B)=A^2+AB=-2I
$$
we have that
$$

A^{-1}=-\frac12(A+B)
$$
then multiplying by $-2A$ on the right and adding $2I$ gives
$$
A^2+BA+2I=0=A^2+AB+2I
$$
Cancelling common terms yields
$$
BA=AB
$$







Another Approach



Using this answer (involving more work than the previous answer), which says that
$$
AB=I\implies BA=I
$$
we get

$$
-\frac12A(A+B)=I\implies-\frac12(A+B)A=I
$$
Cancelling common terms gives $AB=BA$.
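A randomized numeric check (a sketch): build $B$ from a random invertible $A$ so that the hypothesis holds, then confirm the commutation:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably invertible
    B = np.linalg.solve(A, -2 * np.eye(4) - A @ A)    # enforces A^2 + AB + 2I = 0
    print(np.allclose(A @ A + A @ B + 2 * np.eye(4), 0))   # True
    print(np.allclose(A @ B, B @ A))                        # True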


Saturday, February 17, 2018

limits - Prove convergence of the sequence given by $a_{1}=1$ and $a_{n+1}= \frac{1}{a_1+a_2+\ldots+a_n}$



For sequence given by $a_{1}=1$ and $a_{n+1}= \frac{1}{a_1+a_2+\ldots+a_n}$ I have to prove that it converges to some number and find this number.



I tried to show that it's monotonic by calculating
$$
\frac{a_{n+1}}{a_{n}} = \frac{1}{a_{n}(a_1+a_2+\ldots+a_n)}
$$

but I cannot say anything about the denominator. How can I try to find its limit?



Answer



Let $s_n = \sum\limits_{k=1}^n a_k$. We can rewrite the recurrence relation as



$$s_{n+1} - s_n = a_{n+1} = \frac{1}{s_n} \quad\implies s_{n+1} = s_n + \frac{1}{s_n}$$



This implies
$$s_{n+1}^2 = s_n^2 + 2 + \frac{1}{s_n^2} \ge s_n^2 + 2$$



So for all $n > 1$, we have




$$s_n^2 = s_1^2 + \sum_{k=1}^{n-1} (s_{k+1}^2 - s_k^2) \ge 1 + \sum_{k=1}^{n-1} 2 = 2n - 1$$



Since all $a_n$ is clearly positive, we have $\displaystyle\;0 < a_n = \frac{1}{s_{n-1}} \le \frac{1}{\sqrt{2n-3}}$.



By squeezing, $a_n$ converges to $0$ as $n\to\infty$.
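Iterating the recursion numerically (a sketch) shows $a_n\to 0$ at the predicted $\sim 1/\sqrt{2n}$ rate:

    import math

    s, a, N = 1.0, 1.0, 10000    # s tracks a_1 + ... + a_n
    for n in range(2, N + 1):
        a = 1 / s                # a_n
        s += a
    print(a, 1 / math.sqrt(2 * N))   # both ~0.00707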


Friday, February 16, 2018

proof verification - Prove or Disprove Inequality By Induction

Prove or Disprove



$\sum_{i=0}^n(2i)^3 \le (8n)^3$




If true, prove using induction. If false, give the smallest value of n that is a counter example and the values for the left and right hand sides of the equation.



I started out with the Base Case at n = 1:



$\sum_{i=0}^1(2i)^3 = 8,\quad 8^3 = 512$



$8 \le 512$, $\therefore$ true




Induction Hypothesis: Assume $\sum_{i=0}^k(2i)^3 \le (8k)^3$ is true.



Induction: $\sum_{i=0}^{k+1} (2i)^3 \le (8(k+1))^3$



$\sum_{i=0}^{k+1} (2i)^3 = \sum_{i=0}^k(2i)^3 + (2(k+1))^3$




This is where I'm stuck in the problem right now. I'm not sure how to use the hypothesis when it's an inequality.
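Since the problem allows for the claim to be false, a brute-force search (a sketch) for the smallest counterexample is informative:

    for n in range(300):
        lhs = sum((2 * i) ** 3 for i in range(n + 1))
        if lhs > (8 * n) ** 3:
            print(n, lhs, (8 * n) ** 3)
            break
    # prints n = 254: the inequality first fails there, so the induction cannot succeed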

calculus - Determine whether $\sum\limits_{n=1}^{\infty} (-1)^{n-1}\left(\frac{n}{n^2+1}\right)$ is absolutely convergent, conditionally convergent, or divergent.




Determine whether the series is absolutely convergent, conditionally convergent, or divergent.



$$\sum_{n=1}^{\infty} (-1)^{n-1}\left(\frac{n}{n^2+1}\right)$$





Here's my work:



$b_n = (\dfrac{n}{n^2+1})$



$b_{n+1} = (\dfrac{n+1}{(n+1)^2+1})$



$\lim\limits_{n \to \infty}(\dfrac{n}{n^2+1}) = \lim\limits_{n \to \infty}(\dfrac{1}{n+1/n})=0$



Then I simplified $b_n - b_{n+1}$ in hopes of showing that the sum would be greater than or equal to $0$, but I failed (and erased my work so that's why I haven't included it).




I know the limit of $|b_n|$ is also 0, and I can use that for testing conditional convergence there, but I would run into the same problem for the second half of the test.



I'm having trouble wrapping my head around tests involving absolute values, or more specifically when I have to simplify them.


Answer



This definitely converges by the alternating series test. The AST asks that the unsigned terms decrease and have a limit of 0. In your case, the terms $\frac{n}{n^2+1}$ do exactly that, so it converges.



Now, which flavor of convergence?



If you take absolute values, the resulting series $\sum_n \frac{n}{n^2+1}$ diverges. You can probably get this quickest by limit comparison: terms are on the order of $1/n$. Also, the integral test here is pretty fast because you can see the logarithm.




To apply limit comparison, let's compare $\sum_n \frac{n}{n^2+1}$ to $\sum_n \frac{1}{n}$. Dividing a term in the first by a term in the second gives
$$
(\frac{n}{n^2+1})/(\frac{1}{n}) = \frac{n^2}{n^2+1}.
$$
Taking the limit gives $L=1$. Since $L>0$, both series "do the same thing." Since $\sum_n \frac{1}{n}$ diverges, so does $\sum_n \frac{n}{n^2+1}$.



Hence it converges conditionally because it converges, but the series of absolute values does not.
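A numeric illustration (a sketch) of the two behaviors:

    N = 100000
    alt = sum((-1) ** (n - 1) * n / (n**2 + 1) for n in range(1, N + 1))
    ab = sum(n / (n**2 + 1) for n in range(1, N + 1))
    print(alt)   # settles near a finite value (about 0.27)
    print(ab)    # about 11.4 and still growing like ln(N)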


Prime powers that divide a factorial




If we have some prime $p$ and a natural number $k$, is there a formula for the largest natural number $n_k$ such that $p^{n_k} \mid k!$?




This came up while doing an unrelated homework problem, but it is not itself homework. I haven't had any good ideas yet worth putting here.



The motivation came from trying to figure out what power of a prime you can factor out of a binomial coefficient. Like $\binom{p^m}{k}$.


Answer



This follows from a famous result of Kummer:



Theorem. (Kummer, 1854) The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.



Equivalently, the highest power of $p$ that divides $\binom{m}{n}$, with $0\leq n\leq m$ is the number of carries when you add $m-n$ and $n$ in base $p$.




As a corollary, you get



Corollary. For a positive integer $r$ and a prime $p$, let $[r]_p$ denote the exact $p$-divisor of $r$; that is, we say $[r]_p=a$ if $p^a|r$ but $p^{a+1}$ does not divide $r$. If $0\lt k\leq p^n$, then
$$\left[\binom{p^n}{k}\right]_p = n - [k]_p.$$



Proof. To get a proof, assuming Kummer's Theorem: when you add $p^n - k$ and $k$ in base $p$, you get a $1$ followed by $n$ zeros. You start getting a carry as soon as you don't have both numbers in the column equal to $0$, which is as soon as you hit the first nonzero digit of $k$ in base $p$ (counting from right to left). So you really just need to figure out what is the first nonzero digit of $k$ in base $p$, from right to left. This is exactly the $([k]_p+1)$th digit. So the number of carries will be $(n+1)-([k]_p+1) = n-[k]_p$, hence this is the highest power of $p$ that divides $\binom{p^n}{k}$, as claimed. $\Box$
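A short script (a sketch) checking the corollary against direct computation of $p$-adic valuations:

    from math import comb

    def val(p, r):
        # the exact p-divisor [r]_p: largest a with p^a | r
        a = 0
        while r % p == 0:
            r //= p
            a += 1
        return a

    p, n = 3, 4
    assert all(val(p, comb(p**n, k)) == n - val(p, k) for k in range(1, p**n + 1))
    print("corollary holds for p = 3, n = 4")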


radicals - Prove that the square root of 3 is irrational

I'm trying to do this proof by contradiction. I know I have to use a lemma to establish that if $x^2$ is divisible by $3$, then $x$ is divisible by $3$. The lemma is the easy part. Any thoughts? How should I extend the proof for this to the square root of $6$?

Thursday, February 15, 2018

sequences and series - Find $\lim_{n\to \infty} \left(1-\frac{1}{2}+\frac{1}{3}-\cdots-\frac{1}{2n}\right)$



Find : $\lim_{n\to \infty} (1-\frac{1}{2}+\frac{1}{3}-...-\frac{1}{2n})$ .



My Approach:



Let $x_n=(1-\frac{1}{2}+\frac{1}{3}-...-\frac{1}{2n})$ .




I know that $\gamma_{n}=\sum_{k=1}^n \frac{1}{k} -\log(n)$, and $\langle\gamma_n\rangle$ converges to Euler's constant, i.e. $\gamma$, which lies between $0.3$ and $1$.



Using this fact I found ,



$x_n=1+\gamma_{2n-1}+\log(2n-1)-\gamma_{2n}-\log(2n)$



since $\gamma_{2n-1}$ and $\gamma_{2n}$ are subsequences of convergent sequence $<\gamma_n>$ , so they converges to same limit as $n$ goes to $\infty$ .



Hence , $\lim_{n\to \infty} x_n= 1$ .




Question:



I just want to check whether I'm wrong or right; I don't have the answers. Please help!



EDIT:
$(1)$ My approach is wrong.


Answer



To answer the question of what you did wrong in your approach specifically, it's just that your arithmetic is wrong. You write (essentially) $x_n=1+\gamma_{2n-1}+\log(2n-1)-\left(\gamma_{2n}+\log(2n)\right)$; your idea of writing $x_n$ as a difference of harmonic terms is broadly correct, but let's look at what's going on here. I'll write (using your notation) $H_n=\gamma_n+\log(n)=\sum_{i=1}^n\frac1i$ for the harmonic numbers; then what you've written is $1+H_{2n-1}-H_{2n}$. But almost all of the terms cancel out of this sum, because it's not the case that $H_{2n-1}=1+\frac13+\frac15+\ldots+\frac1{2n-1}$ and $H_{2n}=\frac12+\frac14+\ldots+\frac1{2n}$; instead, $H_{2n-1}=1+\frac12+\frac13+\ldots$ and similarly for $H_{2n}$, so what you've written is just $1-\frac1{2n}$.



Instead, to use the approach you're trying, you need to start with $1+\frac12+\frac13+\ldots+\frac1{2n}$ and then subtract the even terms twice: once to 'eliminate' them from the harmonic sum, yielding $1+\frac13+\frac15+\ldots$, and then a second time to 'add' the negative terms and give the sum $1-\frac12+\frac13-\frac14+\ldots$ that you want. This means that your sum is
$$\begin{aligned}
&1+\frac12+\frac13+\ldots+\frac1{2n}-2\left(\frac12+\frac14+\ldots+\frac1{2n}\right)\\
&=1+\frac12+\ldots+\frac1{2n}-\left(\frac22+\frac24+\frac26+\ldots+\frac2{2n}\right)\\
&=1+\frac12+\ldots+\frac1{2n}-\left(1+\frac12+\frac13+\ldots+\frac1n\right)\\
&=H_{2n}-H_n.
\end{aligned}$$
Can you finish from here?


calculus - the value of $\lim\limits_{n\rightarrow\infty}n^2\left(\int_0^1\left(1+x^n\right)^\frac{1}{n} \, dx-1\right)$



This is exercise from my lecturer, for IMC preparation. I haven't found any idea.



Find the value of




$$\lim_{n\rightarrow\infty}n^2\left(\int_0^1 \left(1+x^n\right)^\frac{1}{n} \, dx-1\right)$$



Thank you


Answer



By integration by parts,



\begin{align*}
\int_{0}^{1} (1 + x^{n})^{\frac{1}{n}} \, dx
&= \left[ -(1-x)(1+x^{n})^{\frac{1}{n}} \right]_{0}^{1} + \int_{0}^{1} (1-x)(1 + x^{n})^{\frac{1}{n}-1}x^{n-1} \, dx \\

&= 1 + \int_{0}^{1} (1-x) (1 + x^{n})^{\frac{1}{n}-1} x^{n-1} \, dx
\end{align*}



so that we have



\begin{align*}
n^{2} \left( \int_{0}^{1} (1 + x^{n})^{\frac{1}{n}} \, dx - 1 \right)
&= \int_{0}^{1} n^{2} (1-x) (1 + x^{n})^{\frac{1}{n}-1} x^{n-1} \, dx.
\end{align*}




Let $a_{n}$ denote this quantity. By the substitution $y = x^{n}$, it follows that



\begin{align*}
a_{n}
&= \int_{0}^{1} n \left(1-y^{1/n}\right) (1 + y)^{\frac{1}{n}-1} \, dy
= \int_{0}^{1} \int_{y}^{1} t^{\frac{1}{n}-1} (1 + y)^{\frac{1}{n}-1} \, dtdy
\end{align*}



Since $0 \leq t (1 + y) \leq 2$ and $ \int_{0}^{1} \int_{y}^{1} t^{-1}(1+y)^{-1} \, dtdy < \infty$, an obvious application of the dominated convergence theorem shows that




\begin{align*}
\lim_{n\to\infty} a_{n}
= \int_{0}^{1} \int_{y}^{1} \frac{dtdy}{t(1+y)}
&= - \int_{0}^{1} \frac{\log y}{1+y} \, dy \\
&= \sum_{m=1}^{\infty} (-1)^{m} \int_{0}^{1} y^{m-1} \log y \, dy
= \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m^2}
= \frac{\pi^2}{12}.
\end{align*}


elementary number theory - Divisibility by 7 rule, and Congruence Arithmetic Laws

I have seen other criteria for divisibility by 7. The criterion described below, present in the Handbook of Mathematics by I.N. Bronshtein (p. 323), is interesting, but I could not prove it.
Let $n = (a_ka_{k-1}\ldots a_2a_1a_0)_{10} = \displaystyle{\sum_{j=0}^{k}}a_{k-j}10^{k-j}$. The expression

$$
Q_{3}^{\prime}(n) = (a_2a_1a_0)_{10} - (a_5a_4a_3)_{10} + (a_8a_7a_6)_{10} -\ldots
$$

is called the alternating sum of the digits of third order of $n$. For example,
$$
Q_{3}^{\prime}(123456789) = 789-456+123=456
$$

Proposition: $7 | n \ \Leftrightarrow \ 7 | Q_{3}^{\prime}(n)$.



proof. ??




Thanks for any help.
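Not a proof, but the criterion is easy to stress-test in code (a sketch; brute force over a range of integers):

    def q3(n):
        # alternating sum of three-digit groups, taken from the right
        s, sign = 0, 1
        while n:
            s += sign * (n % 1000)
            n //= 1000
            sign = -sign
        return s

    print(q3(123456789))   # 456, matching the example above
    assert all((n % 7 == 0) == (q3(n) % 7 == 0) for n in range(1, 10**5))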

integration - Evaluating the (complex) integral $\int_\gamma \frac{e^{z+z^{-1}}}{z}\,\mathrm dz$ using residues.



I am trying to evaluate the following integral.





$$\int_\gamma \frac{e^{z+z^{-1}}}{z}\mathrm dz$$
where $\gamma$ is the path $\cos(t)+2i\sin(t)$ for $0\leq t <4\pi$.




So, $\gamma$ is an ellipse running twice counterclockwise around $0$, which is where the function has a singularity. I'm sure I need to use the residue theorem to evaluate this.




  1. (for homework) I'm not good with the Residue theorem yet. Can I get a road map for the canonical solution to this problem? (i.e. the way I'm "probably supposed to" do it.) I can work through the details myself.



  2. (non-homework) Is it possible to solve this problem with the Laurent series approach from this answer using the residues for $e^z/z$ and $e^{-z}$? (or $e^z/\sqrt{z}$ and $e^{-z}/\sqrt{z}$, if that would be better.)







To be clear about where I'm confused for part (1): I see that the hypothesis for the Residue theorem is met: the above function is analytic with an isolated singularity at $0$, we're goin around it twice, so $\int_\gamma f=4\pi i \operatorname{Res}(0,f)$. But from here I don't know how to perform the computations.


Answer



The only singularity for the integrand is at $z=0$, which is within the contour of integration. The integral is nothing but $$2 \pi i \cdot \left(\text{Residue at } z=0 \text{ of }\left(\dfrac{e^{z+1/z}}z \right) \right) \cdot \text{Number of times the closed curve goes about the origin}$$
Let us write the Laurent series about $z=0$. We then get
$$e^{z+1/z} = e^z \cdot e^{1/z} = \sum_{k=0}^{\infty} \dfrac{z^k}{k!} \cdot \sum_{m=0}^{\infty} \dfrac1{z^m \cdot m!}$$

Hence,
$$\dfrac{e^{z+1/z}}z = \dfrac{e^z \cdot e^{1/z}}z = \sum_{k=0}^{\infty} \sum_{m=0}^{\infty} \dfrac{z^{k-m-1}}{k! m!}$$
The term $z^{-1}$ in the series is when $k=m$. Hence, the coefficient of $\dfrac1z$ is $$\sum_{k=0}^{\infty} \dfrac1{(k!)^2}$$
Hence, your answer is $$4 \pi i \sum_{k=0}^{\infty} \dfrac1{(k!)^2} = 4 \pi i\,I_0(2)$$ where $I_{\alpha}(z)$ is the modified Bessel function of the first kind, given by $$I_{\alpha}(z) = \sum_{m=0}^{\infty} \dfrac1{m! \Gamma(m+\alpha+1)} \left(\dfrac{z}2 \right)^{2m+\alpha}$$
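The coefficient and the Bessel value can be compared numerically (a sketch using scipy):

    import math
    from scipy.special import iv

    print(sum(1 / math.factorial(k) ** 2 for k in range(25)), iv(0, 2.0))
    # both print 2.2795853..., the value of I_0(2)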


real analysis - Graph of measurable function has measure 0 in the product measure space




I have the following homework problem:



Let $(X, M , \mu )$ be a $\sigma$-finite measure space. Show that the graph of any measurable function $f: X \rightarrow \mathbb{R}$ has measure 0 in the product measure space $(X×\mathbb{R},M⊗B_{\mathbb{R}}, \mu\times\lambda)$. Where $\lambda$ is the Lebesgue measure on $\mathbb{R}$.



My idea was to cover the graph by countably many generalized rectangles of arbitrarily small total measure or, in other words, to prove that the graph has outer measure $0$, but I'm not sure how to proceed with that idea, since we'll probably need $\mu(X)< \infty$.



Any help will be appreciated.


Answer



Here's an alternate idea:




Use Tonelli's theorem to show that for any measurable function $f \geq 0$: if we define the sets
$$
\Gamma = \{(x,t): 0 \leq t < f(x)\}\\
\Gamma' = \{(x,t): 0 \leq t \leq f(x)\}
$$
Then $(\mu\times \lambda)(\Gamma) = (\mu\times \lambda)(\Gamma') = \int_{X}f\,d\mu$



Extend this to include general measurable $f$. Conclude that your set, $\Gamma' - \Gamma$, has measure zero.
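For completeness, here is how the pieces combine (a sketch in my words, not part of the original answer): once the graph $G_f=\{(x,t): t=f(x)\}$ is known to be product-measurable (it is the zero set of the measurable map $(x,t)\mapsto f(x)-t$), Tonelli, which applies because $\mu$ is $\sigma$-finite, gives directly
$$
(\mu\times\lambda)(G_f)=\int_X \lambda\big(\{f(x)\}\big)\,d\mu(x)=\int_X 0\,d\mu=0,
$$
since every $x$-section of the graph is a single point.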


Parameterize a polynomial with no real roots



An even polynomial with a constant term of 1 will have no real roots if the coefficients of the powers (the c's below) are non-negative. So



$$1 + c_2x^2 + c_4x^4 + c_6x^6$$




has no real roots. Is there a general way to parameterize an nth order polynomial with a constant term of 1 so that it has no real roots? I know that the above conditions (even powers, with non-negative coefficients) are more restrictive than necessary. The application is fitting (x,y) data where y is always positive with a polynomial in x.


Answer



Let $p(x)=1+c_1x+\dots+c_nx^n$. Since $p(0)=1>0$, if $p$ has no real roots, it must be positive everywhere. This implies $c_n>0$ (otherwise there would be a positive root) and $n$ even (otherwise there would be a negative root). Applying Descartes' rule of signs to $p(x)$ and $p(-x)$, we get the following necessary condition: the sequences of coefficients
$$
1,\,c_1,\,c_2,\,c_3,\dots,c_n,\quad\text{and}\quad
1,\,-c_1,\,c_2,\,-c_3,\dots,c_n
$$
must each have an even number of sign changes.
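As a quick illustration of the necessary condition (a numerical sketch, not part of the answer; the helper names are mine):

```python
import numpy as np

def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

c = [1, -1, 1]  # p(x) = 1 - x + x^2, low-to-high degree; no real roots
c_neg = [ci * (-1) ** i for i, ci in enumerate(c)]  # coefficients of p(-x)

print(sign_changes(c), sign_changes(c_neg))  # 2, 0 -- both even
print(np.roots(c[::-1]))  # np.roots wants high-to-low; complex pair, no real root
```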


Wednesday, February 14, 2018

real analysis - Is this proof that if $a_{n+1} = sqrt{2 + sqrt{a_n}}$ and $a_1 = sqrt{2}$, then $sqrt{2} leq a_n leq 2$ correct?



Let $a_1 = \sqrt{2}$ and $a_{n+1} = \sqrt{2 + \sqrt{a_n}}$. Now I want to show by induction that $\sqrt{2} \leq a_n \leq 2$ for all $n$.




The base case is $n=1$ and it is clear $\sqrt{2} \leq a_1 \leq 2$. Then I assume that $\sqrt{2} \leq a_n \leq 2$ holds and I want to show $\sqrt{2} \leq a_{n+1} \leq 2$.



Then I note that $\sqrt{2} \leq a_{n+1} \leq 2 \implies \sqrt{2} \leq \sqrt{2 + \sqrt{a_n}} \leq 2$. By squaring both sides I get $2 \leq 2 + \sqrt{a_n} \leq 4$. Then by subtracting $2$, I get $0 \leq \sqrt{a_n} \leq 2$. This means that $0 \leq a_n \leq 4$. This is ok because I assumed $0 \leq \sqrt{2} \leq a_n \leq 2 \leq 4$.



Edit
How about this: Since I know $\sqrt{2} \leq a_n \leq 2$. Then it is clear that $0 \leq a_n \leq 4$. By taking square roots I get $ 0 \leq \sqrt{a_n} \leq 2$. Now if I add 2, $2 \leq \sqrt{a_n} + 2 \leq 4$. Taking another square root I get $\sqrt{2} \leq \sqrt{\sqrt{a_n} + 2} \leq 2$. So $\sqrt{2} \leq a_{n+1} \leq 2$.


Answer



Note that you want to show the case $n$ implies $n+1$, but you're going the other way.



The good thing about this sequence is the inductive step is indeed easy.




The base case is $\sqrt 2 \leq a_1 \leq 2$



We now assume $\sqrt 2 \leq a_n \leq 2$



Take a square root, then add two, then take a square root.



$$\sqrt{\sqrt[4]{2}+2} \leq \sqrt{2+\sqrt{a_n}}\leq \sqrt{\sqrt2+2}$$



$$\sqrt{\sqrt[4]{2}+2} \leq a_{n+1}\leq \sqrt{\sqrt2+2}$$




Note that $\sqrt 2 + 2 < 4 \Rightarrow \sqrt {\sqrt 2 + 2} < 2$, and similarly $2 < \sqrt[4]{2} + 2 \Rightarrow \sqrt 2 < \sqrt {\sqrt[4]{2} + 2}$.



Add: Your edit makes perfect sense.
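A quick numerical confirmation of the bounds (my addition):

```python
import math

a = math.sqrt(2)
for n in range(1, 20):
    assert math.sqrt(2) <= a <= 2, (n, a)
    a = math.sqrt(2 + math.sqrt(a))
print(a)  # approaches the fixed point of x = sqrt(2 + sqrt(x)), roughly 1.8312
```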


linear algebra - Maximum eigenvalue of a hollow symmetric matrix




Is the maximum eigenvalue (or spectral radius) of a matrix of the following form equal to a row or column sum of the matrix?



$$
A=\left( \begin{array}{cccc}
0 & a & \cdots & a \\
a & 0 & \cdots & a \\
\vdots & \vdots & \ddots & \vdots \\
a & a & \cdots & 0\end{array} \right) $$




The matrix is square with dimension $n \times n$ where $n = 2,3,4,...$, hollow (all elements in the principal diagonal = 0), symmetric and all off diagonal elements have the same value.



Is the spectral radius of such matrices = $(n-1)\times a$? Why?


Answer



Start with the matrix $A$ of all $a$'s, whose eigenvalues are zero except for eigenvalue $na$ having multiplicity one (because rank$(A) = 1$).



Now subtract $aI$ from $A$ to get your matrix. The eigenvalues of $A-aI$ are those of $A$ shifted down by $a$. We get an eigenvalue $(n-1)a$ of multiplicity one and an eigenvalue $-a$ of multiplicity $n-1$.



So the spectral radius (largest absolute value of an eigenvalue) of $A$ is $|na|$, and the spectral radius of $A-aI$ is $\max(|(n-1)a|,|a|)$. The latter is simply $|(n-1)a|$ unless $n=1$.
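This is easy to confirm numerically (my addition, not part of the answer):

```python
import numpy as np

n, a = 5, 2.0
A = a * (np.ones((n, n)) - np.eye(n))  # hollow: a off the diagonal, 0 on it
print(np.linalg.eigvalsh(A))  # [-2. -2. -2. -2.  8.]: -a (n-1 times) and (n-1)a
```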



calculus - For what values of $k$ to both of the following series converge?

I'm taking the AP Calculus BC Exam next week and ran into this problem with no idea how to solve it. Unfortunately, the answer key didn't provide explanations, and I'd really, really appreciate it if someone could explain how to solve this problem, and why the answer is 4. It's a non-calculator question.



For which of the following values of k do both




$$\sum\limits_{n=1}^\infty\left(\frac3k\right)^n\;\;\;\text{ and}\;\;\;\;\; \sum\limits_{n=1}^\infty\frac{(3-k)^n}{\sqrt{n+3}}$$ converge?



Answer choices: None, 2, 3, 4, 5.
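A sketch of the standard reasoning (my addition; the post itself has no answer): the first series is geometric, so
$$
\sum_{n=1}^\infty\left(\frac3k\right)^n \text{ converges} \iff \left|\frac3k\right|<1 \iff |k|>3.
$$
For the second series, $|3-k|<1$ (i.e. $2<k<4$) gives absolute convergence by comparison with a geometric series; at $k=4$ the terms become $\frac{(-1)^n}{\sqrt{n+3}}$, which converge by the alternating series test, while at $k=2$ we get the divergent series $\sum\frac1{\sqrt{n+3}}$. So both series converge exactly when $3<k\le 4$, and among the choices only $k=4$ works.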

probability - What is the chance of picking y in a set of x objects, given x chances to pick at random?



Suppose you have a set of $x$ objects, one of which is an object $y$. Suppose you pick one object from this set uniformly at random, so there is a $1/x$ chance of picking $y$. If you pick $x$ times, removing each picked object from the set, there is a 100% chance of picking $y$ at some point. But suppose instead that after you pick an object, it is not removed. What is the probability of picking $y$ in that case? Would this (Link 1) apply? No, because it presumes you leave the marbles out of the bag, which is not what I mean. Or would this (Link 2) apply? Naively scaling the first logic up would give a 100% chance of picking object $y$, which I know not to be true.



To simplify things, let's use an example.
We have a bag with 10 marbles: 9 of them are white, and one is black. You pick one marble at a time at random, then drop it back in. If you do this 3 times, naive logic would suggest a 3/10 chance of picking the black marble. Now, suppose we scale this up to picking 10 times. At a quick glance, a 100% chance would be the logical answer, but if you think about it longer, you can pick the same marble more than once. So what is the probability, in this case, of picking the black marble?




Link 1: Probability: best chance of picking a desired marble out of 10



Link 2: Picking a uniformly at random element from a random set


Answer



In your example of 9 white marbles and 1 black marble, you say that if you pick 1 marble out of those 10, 3 times, you have a 30% chance of picking the black marble (at least) once. That is actually a very common reasoning mistake, and you basically realized this yourself, for when you pick a marble $10$ times, the chance is of course not going to be $100$%, and when you pick a marble $20$ times, it is certainly not $200$%!



So, what is the mistake? Well, as again you say yourself, we can pick the same marble multiple times (indeed, when we pick $20$ marbles, we have to pick at least one marble multiple times). And in fact that goes for the black marble as well. And that has the following effect:



Let's just look at the case where we twice pick 1 marble (again, with replacement). Let $B1$ be the event of picking the black marble for the first pick, and let $B2$ be the event for picking the black marble the second time. Now, we can all agree that $P(B1)=10$%, and that $P(B2)=10$%. OK! But what is the chance of picking the black marble at least once between these two picks? That is, what is the chance of picking the black marble at the first or the second pick?




By the mistaken logic, it would be $P(B1) +P(B2)=20$%. However, that only works if $B1$ and $B2$ are mutually exclusive, i.e. if they cannot both happen. But that is of course not the case: they can both happen. And it is because they can both happen that the chance of either one of them happening will be less than the sum of the individual chances.



Here is the general formula that is always true:



$$P(A \cup B) = P(A) + P(B) -P(A \cap B)$$



In here, $P(A)$ and $P(B)$ are of course the chances of $A$ and $B$ happening as individual events. $P(A \cup B)$ is the chance of either $A$ or $B$ happening (or both), and $P(A \cap B)$ is the chance of $A$ and $B$ both happening.



Now, if $A$ and $B$ are mutually exclusive, then $P(A \cap B)=0$, and hence $P(A \cup B)=P(A) +P(B)$. But if they can both happen, then $P(A \cap B)>0$, and hence $P(A \cup B) < P(A)+P(B)$.


Going back to the black marble case, $P(B1 \cap B2)=\frac{1}{10}\cdot \frac{1}{10}=\frac{1}{100}=1$%. Hence, $P(B1 \cup B2)=P(B1) +P(B2)-P(B1 \cap B2)=19$% ... and not 20%!



And of course, likewise, picking a marble 3 times gives a less-than-30% chance of picking the black marble at least once, and picking a marble 10 or 20 times will end up well below 100%.



Here is yet another way to think about it. Suppose you number the marbles 1 through 10, with the black one being number 1. Now, between the first two picks, what can the possible outcomes be? Well, you could pick marble 7 for the first pick, and marble 4 for the second. Or: marble 3 for the first, and marble 3 again for the second. We can write these outcomes as $(7,4)$ and $(3,3)$ respectively.



Now, how many possible outcomes can there be for the first two picks? Well, any outcome is $(x,y)$ with both $x$ and $y$ ranging from 1 to 10, so that gives us 100 possible outcomes. OK! And how many outcomes lead to the black marble being picked the first time? That is of course all the outcomes of the form $(1,y)$, of which there are 10 ... which is why $P(B1)=\frac{10}{100}=10$%. And there are also 10 outcomes of the form $(x,1)$, which is why $P(B2)=\frac{10}{100}=10$% as well. OK! But are there 20 possible outcomes that contain a $1$? NO! Because when you add the 10 and 10, you double-count the $(1,1)$ outcome. So, there are really only 19 outcomes out of the 100 possible outcomes where you have a 1.



Oh, and there are only 18 in which you pick the black marble exactly once, so this also tells you that a question like 'what is the chance of picking the black marble when you pick $x$ times?' is ambiguous, because it is not clear whether you mean 'picking the black marble at least once' or 'picking the black marble exactly once'.




So finally, what then is the chance of getting at least one black marble when picking 3 times? Well, the easiest way to calculate that is not to consider the outcomes where you get the black marble on the first, second, or third try, and then try to remove the double- or even triple-counts, but rather to consider the possible outcomes where you do not get any black marble at all among these three picks. And how many possibilities are there for that? Well, out of the $10\cdot 10\cdot 10=1000$ possible outcomes $(x,y,z)$ there are $9\cdot 9\cdot 9=729$ where you don't get any black marble, and so there are $1000-729=271$ possible outcomes where there is at least 1 black marble, making the chance of getting at least one black marble $\frac{271}{1000}=27.1$%.



In terms of probabilities, you can do this as well: $P(B1\cup B2 \cup B3)=1-P((B1 \cup B2 \cup B3)^C)=1-P(B1^C \cap B2^C \cap B3^C)=1-P(B1^C) \cdot P(B2^C) \cdot P(B3^C)= 1- 0.9 \cdot 0.9 \cdot 0.9 =1-0.729=0.271$



OK! And for picking the black marble at least once when picking a marble 10 times? Then it would be $1-(0.9)^{10}\approx 0.65$.



And so what is the general formula for the probability of picking one specific object $y$ at least once out of $x$ objects, when making $n$ picks with replacement?



It will be $1-\left(1-\frac{1}{x}\right)^n = 1-\left(\frac{x-1}{x}\right)^n$.
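A short simulation confirming both the 3-pick and 10-pick answers (my addition):

```python
import random

def hit_probability(x, n, trials=200_000):
    """Estimate P(pick object y at least once in n picks with replacement)."""
    return sum(
        any(random.randrange(x) == 0 for _ in range(n)) for _ in range(trials)
    ) / trials

print(hit_probability(10, 3), 1 - (1 - 1 / 10) ** 3)    # ~0.271 vs 0.271
print(hit_probability(10, 10), 1 - (1 - 1 / 10) ** 10)  # ~0.651 vs 0.6513...
```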



algebra precalculus - How to reach $dfrac{(n-1)n(2n-1)}{6n^3}$

I am trying to refresh my Maths after a lot of years without studying them, and I am finding a lot of difficulties (which is actually nice). So, my question:




I don't understand the following equality. How does one get $\frac{(n-1)n(2n-1)}{6n^3}$ from $$\frac{1^2+ 2^2+...+ (n-1)^2}{n^3}$$?



Thank you very much (and sorry if I make mistakes; my English is also rusty :))
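For reference (my addition; the post has no answer), the step uses the standard identity $\sum_{k=1}^{m}k^2=\frac{m(m+1)(2m+1)}{6}$ with $m=n-1$:
$$
\frac{1^2+2^2+\cdots+(n-1)^2}{n^3}=\frac{(n-1)\,n\,\big(2(n-1)+1\big)}{6n^3}=\frac{(n-1)n(2n-1)}{6n^3}.
$$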

Tuesday, February 13, 2018

abstract algebra - Proving that a field $K$ can be generated by algebraically independent elements and an separable element

Let $k$ be a perfect field (either $k$ has characteristic $0$, or characteristic $p > 0$ and every element has a $p$th root), and let $K$ be a finitely generated extension field.



I have a question about a step in the proof of the following statement. This can be found in Vol. 1 of Shafarevich (Appendix 5) or in "Introduction to Algebraic Geometry and Algebraic Groups" by Geck (exercise 1.8.15).



Let $d$ be the transcendence degree of $K/k$. The claim is that then there exist elements $z_1, \ldots, z_{d+1}$ such that:




  1. $K = k(z_1, \ldots, z_{d+1})$.

  2. $z_1, \ldots, z_d$ are algebraically independent.


  3. $z_{d+1}$ is separable algebraic over $k(z_1, \ldots, z_d)$.



The proof proceeds as follows. Now there exist $a_i$ such that $K = k(a_1, \ldots, a_n)$, with $d \leq n$, and $a_1, \ldots, a_d$ are algebraically independent. The case $n = d$ is easy, so assume $n > d$ and proceed by induction on $n$.



First: $\{a_1, \ldots, a_{d+1}\}$ is not algebraically independent, so there exists a nonzero, nonconstant irreducible polynomial $F \in k[X_1, \ldots, X_{d+1}]$ such that $F(a_1, \ldots, a_{d+1}) = 0$.



Since $k$ is perfect it follows that for some $i$ the partial derivative of $F$ with respect to $X_i$ is nonzero.



So far this makes sense. The next claim is that $a_i$ is separable algebraic over $L = k(a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_{d+1})$, and this is the only thing in the proof I have a problem with. Why is this true? In the proof they claim we can use $F$ because $X_i$ appears in it, but I fail to see how this follows. Couldn't $F(a_1, \ldots, a_{i-1}, X, a_{i+1}, \ldots, a_{d+1})$ be the zero polynomial in $L[X]$?

Functional Equation $f(x+y)=f(x)+f(y)+f(x)f(y)$

I need to find all the continuous functions from $\mathbb R\rightarrow \mathbb R$ such that $f(x+y)=f(x)+f(y)+f(x)f(y)$. I know what I assume to be the general way to attempt these problems, but I got stuck and need a bit of help. Here is what I have so far:



Try out some cases:


Let $y=0$: $$ \begin{align} f(x)&=f(x)+f(0)+f(x)f(0) \\ 0&=f(0)+f(x)f(0) \\0 & = f(0)[1+f(x)] \end{align}$$ Observe that either $f(0)=0$ or $f(x)=-1$. So this gives me one solution, but I am having trouble finding the other solution(s). Somebody suggested to me that $f(x)=0$ is also a solution but I can't find a way to prove what they said is true. Can anyone please, without giving away the answer, give me a teeny hint? I really want to figure this out as much as I can. I've tried the case when $y=-x$ and $x=y$ but I don't feel like those cases help me towards the solution.


Thanks in advance
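A standard hint, added here since the post has no answer: the substitution $g(x)=1+f(x)$ turns the equation into Cauchy's exponential equation,
$$
g(x+y)=\big(1+f(x)\big)\big(1+f(y)\big)=g(x)\,g(y),
$$
whose continuous solutions are $g\equiv 0$ and $g(x)=e^{cx}$ for a constant $c$. Translating back gives $f\equiv -1$ and $f(x)=e^{cx}-1$ (the case $c=0$ is exactly the solution $f\equiv 0$ mentioned in the question).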

real analysis - continuous series

Let $(f_n)$ be a sequence of continuous functions on $(0, \infty)$ satisfying $|f_n(x)| \leq 1$ for all $x > 0$ and all $n \geq 1$. Show that the function $f(x) = \sum_{n=1}^{\infty} \frac{f_n(x)}{2^n}$ defines a continuous function on $(0, \infty)$. If, in addition, the $f_n$ satisfy $\lim_{x \to \infty} f_n(x) = 0 $, show that $\lim_{ x \to \infty} f(x) = 0$, as well.



I have an idea of what to do: my idea is to couple the Weierstrass M-Test with the fact that the uniform limit of a sequence of continuous functions is continuous, for the first part; this should be sufficient for our result. I would think the second part follows easily from the first.



Attempt:



Consider the sequence $(g_N) \subset C((0, \infty))$, where $g_N(x) = \sum_{n=1}^{N} 2^{-n} f_n(x)$ for each $N$. Clearly, the sequence $(g_N)$ converges pointwise to $f$. Since $|f_n(x)| \leq 1$ for all $x > 0$ and all $n \geq 1$, it follows that $||g_N||_{\infty} \leq ||\sum_{n=1}^{N} 2^{-n}||_{\infty}$. Thus, we see that $||f||_{\infty} = ||\lim_{n \to \infty} g_n||_{\infty} \leq \sum_{n=1}^{\infty} ||2^{-n}||_{\infty} = \sum_{n=1}^{\infty} 2^{-n}< \infty, $ since $\sum_{n=1}^{\infty} 2^{-n}$ is geometric. Hence, since $||f||_{\infty} < \infty$, by the Weierstrass M-Test, it must be that $(g_N)$ converges uniformly to $f$ on $(0, \infty)$. In particular, since each $g_N$ is continuous, it must be that $f$ is continuous as the uniform limit of continuous functions.




Does this argument hold water? I think my intuition is right, but the argument seems very sloppy to me. Any help is appreciated!
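A cleaner route (my addition, since the post has no answer): for the first part, apply the M-Test directly with $M_n=2^{-n}$, since $|2^{-n}f_n(x)|\le 2^{-n}$ and $\sum 2^{-n}<\infty$. For the second part, the standard tail estimate: given $\varepsilon>0$, pick $N$ with $\sum_{n>N}2^{-n}<\varepsilon/2$, then choose $x_0$ so large that $|f_n(x)|<\varepsilon/2$ for all $x>x_0$ and all $n\le N$. For $x>x_0$,
$$
|f(x)|\le\sum_{n=1}^{N}2^{-n}|f_n(x)|+\sum_{n>N}2^{-n}<\frac\varepsilon2\sum_{n=1}^{N}2^{-n}+\frac\varepsilon2<\varepsilon.
$$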

Monday, February 12, 2018

calculus - Proof that monotone functions are integrable with the classical definition of the Riemann Integral



Let $f:[a,b]\to \mathbb{R}$ be a monotone function (say strictly increasing).




Then, for every $\epsilon>0$, do there exist two step functions $h,g$ such that $g\le f\le h$ and $0\le h-g\le \epsilon$?



Does there exist some closed form of these functions like the one below?



I encountered the problem when trying to prove the Riemann Integrability of monotone functions via the traditional definition of the Riemann Integral (not the one with Darboux sums). The book I am reading proves that a continuous function is integrable via the process below:



Let $\mathcal{P}=\left\{ a=x_0<x_1<\dots<x_n=b\right\}$ be a partition of $[a,b]$. Define $m_i=\min_{x\in [x_{i-1},x_i]}f(x)$ and $M_i=\max_{x\in [x_{i-1},x_i]}f(x)$. By the Extreme Value Theorem, $M_i$ and $m_i$ are well defined. We approximate $f$ with step functions:
\begin{gather}g=m_1\chi_{[x_0,x_1]}+\sum_{i=2}^nm_i\chi_{(x_{i-1},x_i]}\\
h=M_1\chi_{[x_0,x_1]}+\sum_{i=2}^nM_i\chi_{(x_{i-1},x_i]}\end{gather}


It is easily seen that they satisfy the "step function approximation", and since $0\le \int_a^b (h-g)\le \epsilon$ (by uniform continuity), a previous theorem shows $f$ is integrable.



The book then goes on to generalise by discussing regulated functions. I would like, however, to see a self-contained proof similar to the previous one when $f$ is monotone. This question reduces to the two questions asked at the beginning of the post.


Answer



Let $\varepsilon>0$ and $N_\varepsilon$ the smallest $n \in \mathbb{N}$ such that
$$
\frac{1}{n}(b-a)(f(b)-f(a)) \le \varepsilon
$$
For $n \ge \max\{2,N_\varepsilon\}$ consider the following partition of $[a,b]$:
$$
\mathcal{P}=\{a=x_0<x_1<\dots<x_n=b\}, \qquad x_i=a+\frac{i}{n}(b-a).
$$
Set
$$
A_i=\begin{cases}
[x_0,x_1] & \text{ for } i=0\\
(x_i,x_{i+1}]& \text{ for } 1 \le i \le n-1
\end{cases}
$$
Since $f$ is strictly increasing for each $i \in \{0,\ldots,n-1\}$ we have

$$
f(x_i) \le f(x) \le f(x_{i+1}) \quad \forall\ x_i \le x \le x_{i+1}.
$$
Setting
$$
h=\sum_{i=0}^{n-1}f(x_{i+1})\chi_{A_i},\ g=\sum_{i=0}^{n-1}f(x_i)\chi_{A_i}
$$
we have
$$
g(x)\le f(x) \le h(x) \quad \forall\ x \in [a,b],
$$
and
$$
h(x)-g(x)=\sum_{i=0}^{n-1}\Big(f(x_{i+1})-f(x_i)\Big)\chi_{A_i}(x)>0 \quad \forall\ x \in [a,b].
$$
In addition
\begin{eqnarray}
\int_a^b(h-g)&=&\sum_{i=0}^{n-1}(f(x_{i+1})-f(x_i))\int_a^b\chi_{A_i}=\sum_{i=0}^{n-1}(f(x_{i+1})-f(x_i))(x_{i+1}-x_i)\\
&=&\frac{b-a}{n}\sum_{i=0}^{n-1}(f(x_{i+1})-f(x_i))=\frac{1}{n}(b-a)(f(b)-f(a))\\
&\le&\frac{1}{N_\varepsilon}(b-a)(f(b)-f(a))\le \varepsilon.
\end{eqnarray}
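The telescoping in the last display is easy to see numerically (my addition, with $f=\exp$ as an arbitrary strictly increasing example):

```python
import numpy as np

f = np.exp
a, b, n = 0.0, 1.0, 1000
x = np.linspace(a, b, n + 1)  # uniform partition, as in the proof

gap = np.sum((f(x[1:]) - f(x[:-1])) * (b - a) / n)  # integral of h - g
print(gap, (b - a) * (f(b) - f(a)) / n)  # both equal (b-a)(f(b)-f(a))/n
```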


calculus - Is there an example of using L'Hospital's Rule on a product where it doesn't work?

I was reading that, when trying to solve something like:




$$\lim_{x\to\infty} f(x)g(x)$$



I can rewrite it as:



$$\lim_{x\to\infty} \frac{f(x)}{\frac{1}{g(x)}}$$



and use L'Hospital's Rule to solve. And, if this doesn't work, I can try using the other function as the denominator:



$$\lim_{x\to\infty} \frac{g(x)}{\frac{1}{f(x)}}$$




So I wondered: are there well-known quotients of functions that don't work in either case and, if so, how do I then solve them?



An example that doesn't submit to this process is:



$$\lim_{x\to\infty} x\cdot x$$



But obviously L'Hospital's Rule would not be necessary in this case.
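A classic illustration of why the choice of denominator matters (my addition; the post has no answer) is $\lim_{x\to 0^+}x\ln x$. Writing it as $\frac{\ln x}{1/x}$, one application of L'Hospital gives
$$
\lim_{x\to 0^+}\frac{\ln x}{1/x}=\lim_{x\to 0^+}\frac{1/x}{-1/x^2}=\lim_{x\to 0^+}(-x)=0,
$$
while the other arrangement $\frac{x}{1/\ln x}$ leads to $\lim_{x\to 0^+}\big(-x(\ln x)^2\big)$, a limit of the same indeterminate type but messier. When both arrangements cycle or get worse, the usual way out is to avoid L'Hospital altogether: take logarithms, substitute, or compare growth rates directly.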

algebra precalculus - Exponential Form of Complex Numbers - Why e?




Answer



If you remember series, notice that


$$ e^{i x } = \sum_{n \geq 0} \frac{ i^n x^n }{n!} $$


Now, notice that $i^2 = -1$, $i^{3} = -i$, $i^4 = 1$, $i^5 = i$, and so on, and since


$$ \sin x = \sum_{n \geq 0} \frac{ (-1)^n x^{2n+1 } }{(2n+1)!} \; \; \text{and} \; \;\cos x = \sum_{n \geq 0} \frac{ (-1)^n x^{2n } }{(2n)!} $$


after splitting the first summation into even and odd $n$ and seeing how the powers of $i$ alternate, one obtains the result.
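Explicitly (my addition), splitting into even $n=2m$ and odd $n=2m+1$ and using $i^{2m}=(-1)^m$ and $i^{2m+1}=(-1)^m i$:
$$
e^{ix}=\sum_{m\ge 0}\frac{(-1)^m x^{2m}}{(2m)!}+i\sum_{m\ge 0}\frac{(-1)^m x^{2m+1}}{(2m+1)!}=\cos x+i\sin x.
$$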



calculus - Convergence of $sum_{n=1}^inftyfrac {n^{n}}{e^nn!}$


Check the convergence of: $\displaystyle\sum_{n=1}^\infty\frac {n^{n}}{e^nn!}$




Using the root test I get: $\displaystyle\lim_{n \to\infty} \dfrac {n}{e\sqrt[n]{n!}}$ now I'm left with showing that $n > \sqrt[n]{n!} \ \ \forall n$, can I just raise it to the power of $n$ like so: $\ n^n>n!$ ?



Alternatively, using limit arithmetic: $\displaystyle\lim_{n \to\infty} \dfrac {n}{e\sqrt[n]{n!}}=\displaystyle\lim_{n \to\infty} \dfrac {1}{\large\frac e n \sqrt[n]{\frac {n!}{n^n}}}>1$ (that's not very persuasive I know) so it diverges.



Edit: Root test won't work.




Note: Stirling, Taylor or integration are not allowed.
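One elementary route that respects these restrictions (my addition; the post has no answer): with $a_n=\frac{n^n}{e^n n!}$,
$$
\frac{a_{n+1}}{a_n}=\frac{(n+1)^{n+1}}{e\,(n+1)\,n^n}=\frac{\left(1+\frac1n\right)^n}{e},
$$
and the classical inequality $\left(1+\frac1n\right)^{n+1}>e$ gives $\frac{a_{n+1}}{a_n}>\frac{n}{n+1}$. By induction from $a_1=\frac1e$ this yields $a_n\ge\frac1{en}$, so the series diverges by comparison with the harmonic series.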

discrete mathematics - Using the Principle of Mathematical Induction to Prove propositions



I have three questions regarding using the Principle of Mathematical Induction:




  1. Let $P(n)$ be the following proposition:




    $f(n) = f(n-1) + 1$ for all $n ≥ 1$, where $f(n)$ is the number of subsets of a set with $n$ elements.


  2. Let $P(n)$ be the following proposition:



    $n^3 + 3n^2 + 2n$ is divisible by 3 for all $n ≥ 1$. Determine whether $P(n)$ holds.


  3. Use the Principle of Mathematical Induction to prove that $1 \cdot 1! + 2 \cdot 2! + 3 \cdot 3! + ... + n \cdot n! = (n+1)! -1$ for all $n ≥ 1$.




Here is the work I have so far:




For #1, I am able to prove the basis step, 1, is true, as well as integers up to 5, so I am pretty sure this is correct. However, I am not able to come up with a formal proof.



For #2, for the basis step, I have $1^3 + 3(1)^2 + 2(1) = 6$, which is divisible by 3. For the inductive step, I need to prove that $P(k) \rightarrow P(k+1)$, so I have $P(k+1) = (k+1)^3 + 3(k+1)^2 + 2(k+1)$. However, I'm not sure how to take the inductive step and plug in the inductive hypothesis to make this formal proof true.



For #3, I think that the inductive hypothesis would be $\sum_{i=1} ^ {k+1} i \cdot i! = (k+2)! -1$. When I do this, I am getting $\sum_{i=1} ^ {k+1} i \cdot i! = \sum_{i=1} ^ k (k+1)! + \sum_{i=1} ^ 1 1! - 1$, but I don't think this will work for plugging in the inductive hypothesis. I think I should be using $1 \cdot 1! + 2 \cdot 2! + 3 \cdot 3! + ... + n \cdot n! + (n+1) \cdot (n+1)! = (n+2)! -1$ instead for the proof. I'm getting nowhere with this one.



Any help would be appreciated.


Answer



The number of subsets of a set with $n$ elements is $2^n$. Consequently, $2^n \neq 2^{n-1}+1$ in general (it already fails for $n=2$: $4 \neq 3$), so the proposition is false.




To prove this fact, you can actually use induction!



More specifically, think about the number of $k$-element subsets, which is $\dbinom{n}{k}$ [how many ways can you pick a group of $k$ people out of an $n$-person group]. The total number of subsets is then the sum over $k$ of the number of $k$-element subsets. In particular:
$$\dbinom{n}{0}+\dbinom{n}{1}+...+\dbinom{n}{n}=2^n$$



For practice, try to prove this with induction.



For (2), the base case is clear. We assume that $3 \mid n^3+3n^2+2n$. This means that there exists some integer $k$ so that $3k=n^3+3n^2+2n$



Then:

$$\begin{align}(n+1)^3+3(n+1)^2+2(n+1)&=(n^3+3n^2+3n+1)+3(n^2+2n+1)+2n+2 \\
&=(n^3+3n^2+2n)+3n+3+3(n^2+2n+1)\\ \end{align}$$
by substitution from the hypothesis, we obtain:
$$\begin{align}&=3k+3(n^2+3n+2) \\
&=3(n^2+3n+2+k)\end{align}$$



Hence, the result follows readily.



I leave part 3 to you. You got the induction step correct, at least in the set-up. Try some algebraic manipulation to start with; keep in mind what your assumption is, since you just want it to show up somewhere in your inductive step. Induction is an easy enough idea, but the problem is that it doesn't show much in the way of intuition. As in, it generally doesn't tell you why something works. We just get used to the fact that it solves problems for us. Kind of like L'Hospital's rule in Calculus.
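For part 3, a quick check of the identity before attempting the proof never hurts (my addition):

```python
from math import factorial

for n in range(1, 10):
    assert sum(i * factorial(i) for i in range(1, n + 1)) == factorial(n + 1) - 1
print("identity holds for n = 1..9")
```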


linear algebra - A possible converse to the Cayley-Hamilton theorem?

Happy new year MSE! During my holiday vacation I had an interesting idea! The Cayley-Hamilton theorem states that if $f:\mathbb C^n\to\mathbb C^n$ is a linear function, then it is a root of its own characteristic polynomial $\chi_f(f) = 0$.



So I wondered: what if we had a function $f:\mathbb C^n \to \mathbb C^n$ satisfying a polynomial functional equation (PFE), i.e. there is some polynomial $p=\sum a_k x^k$ such that $p(f)=0$, where we interpret




$$ f^k = \underbrace{f\circ\ldots\circ f}_{k \text{ -times}}$$



and $f^0 = \text{id}$. (replace product of variables with composition of functions) E.g. if $p=x^2+ax+b$ then $p(f)=0$ iff $f(f(x)) + af(x) +bx=0$ for all $x$. This is motivated by the fact that after all matrix multiplication is nothing but the composition of linear functions.




Question: Under which conditions would we be able to conclude that $f$ must be a linear function?




Here a few things are important to keep in mind





  • If $f$ solves the PFE $p(f)=0$, then so does $\phi^{-1} \circ f\circ \phi$ for any bijective function $\phi$.


  • The way we write down $p$ matters: e.g. although $x(x-1) = x^2 -x$, the resulting PFEs $f(f(x)-x) = 0$ and $f(f(x)) - f(x)= 0$ are different. (This raises an interesting side question about the conditions under which their solutions must coincide.)


  • General solutions to functional equations can be messy if no additional regularity assumptions are made (cf. Cauchy's equation).




With these caveats in mind I would question the validity of the following





Conjecture: Let $f\colon\mathbb C^n\to\mathbb C^n$ be an entire function satisfying a PFE $p(f) = 0$. Then $f$ is conjugate linear, i.e. there exists a holomorphic bijection $\phi\in\text{Aut}(\mathbb C^n)$ such that $\phi^{-1}\circ f\circ\phi$ is linear.




I did some digging in the literature and found this wonderful paper by Ahern and Rudin. They consider holomorphic $f$ that are functional roots of unity $f^m =\text{id}$ (also known as Babbage's equation), which is equivalent to the PFE given by $p=x^m-1$. Among other things they prove:




  • If $f^m = \text{id}$ and $f$ is affine, i.e. $f(z)= Lz+c$, then $f$ is conjugate linear.


  • If $f^m = \text{id}$ and $f$ has a fixed point, then $f$ is conjugate linear locally around it.


  • If $f^m = \text{id}$ and $f$ is $\mathbb C^2\to\mathbb C^2$ and a finite composition of overshears, then $f$ is conjugate linear.





Here an overshear is a map of the form



$$\begin{pmatrix}x\\ y\end{pmatrix}
\longrightarrow
\begin{pmatrix}g(y)x+h(y)\\ y\end{pmatrix}$$



with $g,h$ entire and $g(y)\neq 0$ for all $y$; or more generally $f(x_i) = x_i$ for $i\neq j$ and $f(x_j) = g(x) x_j + h(x)$ where $g,h$ are entire, independent of $x_j$ and $g(x)\neq 0$ for all $x$. It is known that the set of all finite compositions of overshears forms a dense subgroup of $\text{Aut}(\mathbb C^n)$.



There are also some known negative results of non-linearizable holomorphic automorphisms (e.g. Derksen 1997) but I don't understand enough of the advanced algebra to really fathom this paper and its possible implications on the question at hand.




There are some simpler sub-problems that might be easier to track:




Problem 1: Let $f(z)=Lz+c$ be affine and satisfy a PFE $p(f)=0$. Does $f$ admit a fixed point?




In this case $f$ is conjugate linear by choosing $\phi$ to be the translation onto the fixed point. If false, this might be the easiest route towards a counter example. If true the next logical step should be to try





Problem 2: If $f:\mathbb C^n \to \mathbb C^n$ is entire and solves the PFE $p(f)=0$, then $f$ admits a fixed point.




Finally, a neat little observation I made is the following: if $f$ solves the PFE $p(f)=0$, and there exists a non-zero vector $v$ and entire function $g$ such that $f(\lambda v) = g(\lambda) v$ for all $\lambda \in \mathbb C$, then $f^k(\lambda v) = g^k(\lambda) v$, hence $g$ is a scalar function solution to the PFE $p(g)=0$. Maybe this indicates that some sort of Eigenvalue theory is possible?



Anyway, it seems like some of this stuff is still untapped terrain so it might be worth some further investigation. Thanks for reading!

analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...